Important
This documentation covers IPython versions 6.0 and higher. Beginning with version 6.0, IPython stopped supporting Python versions lower than 3.3, including all versions of Python 2.7.
If you are looking for an IPython version compatible with Python 2.7, please use the IPython 5.x LTS release and refer to its documentation (LTS is the long term support release).
Module: lib.lexers¶
Defines a variety of Pygments lexers for highlighting IPython code.
This includes:
- IPythonLexer, IPython3Lexer
Lexers for pure IPython (python + magic/shell commands)
- IPythonPartialTracebackLexer, IPythonTracebackLexer
Supports 2.x and 3.x via the keyword python3. The partial traceback lexer reads everything but the Python code appearing in a traceback. The full lexer combines the partial lexer with an IPython lexer.
- IPythonConsoleLexer
A lexer for IPython console sessions, with support for tracebacks.
- IPyLexer
A friendly lexer which examines the first line of text and, based on it, decides whether to use an IPython lexer or an IPython console lexer. This is probably the only lexer that needs to be explicitly added to Pygments.
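As a minimal sketch of using IPyLexer directly with Pygments (assuming both IPython and Pygments are installed; NullFormatter simply re-emits the token text without any markup):

```python
from pygments import highlight
from pygments.formatters import NullFormatter
from IPython.lib.lexers import IPyLexer

# The first line starts with "In [1]:", so IPyLexer hands the whole
# text to its IPython console lexer; otherwise it would fall back to
# the plain IPython lexer.
session = "In [1]: a = 'foo'\nOut[1]: 'foo'\n"
highlighted = highlight(session, IPyLexer(), NullFormatter())
print(highlighted)
```

Swapping NullFormatter for, e.g., HtmlFormatter yields highlighted markup instead of plain text.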
4 Classes¶
- class IPython.lib.lexers.IPythonPartialTracebackLexer(*args, **kwds)¶
Bases:
RegexLexer
Partial lexer for IPython tracebacks.
Handles all the non-python output.
- name = 'IPython Partial Traceback'¶
Full name of the lexer, in human-readable form
- tokens = {'root': [('^(\\^C)?(-+\\n)', <function bygroups.<locals>.callback>), ('^( File)(.*)(, line )(\\d+\\n)', <function bygroups.<locals>.callback>), ('(?u)(^[^\\d\\W]\\w*)(\\s*)(Traceback.*?\\n)', <function bygroups.<locals>.callback>), ('(.*)( in )(.*)(\\(.*\\)\\n)', <function bygroups.<locals>.callback>), ('(\\s*?)(\\d+)(.*?\\n)', <function bygroups.<locals>.callback>), ('(-*>?\\s?)(\\d+)(.*?\\n)', <function bygroups.<locals>.callback>), ('(?u)(^[^\\d\\W]\\w*)(:.*?\\n)', <function bygroups.<locals>.callback>), ('.*\\n', ('Other',))]}¶
At all times there is a stack of states. Initially, the stack contains a single state, ‘root’. The top of the stack is called “the current state”.
Dict of {'state': [(regex, tokentype, new_state), ...], ...}

new_state can be omitted to signify no state transition. If new_state is a string, it is pushed on the stack. This ensures the new current state is new_state. If new_state is a tuple of strings, all of those strings are pushed on the stack and the current state will be the last element of the list. new_state can also be combined('state1', 'state2', ...) to signify a new, anonymous state combined from the rules of two or more existing ones. Furthermore, it can be ‘#pop’ to signify going back one step in the state stack, or ‘#push’ to push the current state on the stack again. Note that if you push while in a combined state, the combined state itself is pushed, and not only the state in which the rule is defined.

The tuple can also be replaced with include('state'), in which case the rules from the state named by the string are included in the current one.
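The state-stack behavior described above can be sketched with a small, hypothetical RegexLexer of our own, in which matching '#' pushes a 'comment' state and reaching the end of the line pops back to 'root':

```python
from pygments.lexer import RegexLexer
from pygments.token import Comment, Name, Whitespace

class TinyLexer(RegexLexer):
    """Hypothetical two-state lexer illustrating new_state transitions."""
    name = 'Tiny'
    tokens = {
        'root': [
            (r'#', Comment, 'comment'),      # push 'comment' onto the stack
            (r'\w+', Name),                  # no new_state: stay in 'root'
            (r'\s+', Whitespace),
        ],
        'comment': [
            (r'[^\n]*\n', Comment, '#pop'),  # consume to end of line, pop back
        ],
    }

toks = list(TinyLexer().get_tokens('abc # note\n'))
print(toks)
```

Replacing '#pop' with a state name would push rather than pop, per the rules above.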
- class IPython.lib.lexers.IPythonTracebackLexer(**options)¶
Bases:
DelegatingLexer
IPython traceback lexer.
For doctests, the tracebacks can be snipped as much as desired with the exception of the lines that designate a traceback. For non-syntax error tracebacks, this is the line of hyphens. For syntax error tracebacks, this is the line which lists the File and line number.
- __init__(**options)¶
This constructor takes arbitrary options as keyword arguments. Every subclass must first process its own options and then call the Lexer constructor, since it processes the basic options like stripnl.
An example looks like this:

    def __init__(self, **options):
        self.compress = options.get('compress', '')
        Lexer.__init__(self, **options)
As these options must all be specifiable as strings (due to the command line usage), there are various utility functions available to help with that, see `Utilities`_.
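Those utility functions live in pygments.util; for example, get_bool_opt accepts a real boolean or a string such as 'yes'. A hypothetical subclass for illustration:

```python
from pygments.lexer import Lexer
from pygments.util import get_bool_opt

class CompressingLexer(Lexer):
    """Hypothetical lexer whose 'compress' option may arrive as a string
    (e.g., from the command line) or as a real bool."""
    name = 'Compressing'

    def __init__(self, **options):
        # Accepts True/False as well as strings like 'yes'/'no', 'on'/'off'
        self.compress = get_bool_opt(options, 'compress', False)
        Lexer.__init__(self, **options)

lexer = CompressingLexer(compress='yes')
print(lexer.compress)
```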
- aliases = ['ipythontb']¶
A list of short, unique identifiers that can be used to look up the lexer from a list, e.g., using get_lexer_by_name().
- name = 'IPython Traceback'¶
Full name of the lexer, in human-readable form
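A minimal sketch of lexing a snipped traceback (the hyphen-line requirement comes from the class docstring above; assumes IPython is installed):

```python
from IPython.lib.lexers import IPythonTracebackLexer

# A heavily snipped traceback: only the line of hyphens that designates
# a (non-syntax-error) traceback must be kept.
tb = (
    "---------------------------------------------------------------------------\n"
    "ZeroDivisionError                         Traceback (most recent call last)\n"
)
toks = list(IPythonTracebackLexer().get_tokens(tb))
print(toks)
```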
- class IPython.lib.lexers.IPythonConsoleLexer(**options)¶
Bases:
Lexer
An IPython console lexer for IPython code-blocks and doctests, such as:
.. code-block:: ipythonconsole

    In [1]: a = 'foo'

    In [2]: a
    Out[2]: 'foo'

    In [3]: print a
    foo

    In [4]: 1 / 0
Support is also provided for IPython exceptions:
.. code-block:: ipythonconsole

    In [1]: raise Exception
    ---------------------------------------------------------------------------
    Exception                                 Traceback (most recent call last)
    <ipython-input-1-fca2ab0ca76b> in <module>
    ----> 1 raise Exception

    Exception:
- __init__(**options)¶
Initialize the IPython console lexer.
- Parameters:
- python3 (bool) – If True, then the console inputs are parsed using a Python 3 lexer. Otherwise, they are parsed using a Python 2 lexer.
- in1_regex (RegexObject) – The compiled regular expression used to detect the start of inputs. Although the IPython configuration setting may have a trailing whitespace, do not include it in the regex. If None, then the default input prompt is assumed.
- in2_regex (RegexObject) – The compiled regular expression used to detect the continuation of inputs. Although the IPython configuration setting may have a trailing whitespace, do not include it in the regex. If None, then the default input prompt is assumed.
- out_regex (RegexObject) – The compiled regular expression used to detect outputs. If None, then the default output prompt is assumed.
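With all three regexes left as None, the default prompts are used; a minimal sketch (assuming IPython is installed):

```python
from IPython.lib.lexers import IPythonConsoleLexer

# A small console session using the default In/Out prompts.
session = (
    "In [1]: x = 1 + 1\n"
    "\n"
    "In [2]: x\n"
    "Out[2]: 2\n"
)
toks = list(IPythonConsoleLexer().get_tokens(session))
joined = ''.join(v for _, v in toks)
print(joined)
```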
- aliases = ['ipythonconsole']¶
A list of short, unique identifiers that can be used to look up the lexer from a list, e.g., using get_lexer_by_name().
- buffered_tokens()¶
Generator of unprocessed tokens after doing insertions and before changing to a new state.
- get_mci(line)¶
Parses the line and returns a 3-tuple: (mode, code, insertion).
mode is the next mode (or state) of the lexer, and is always equal to ‘input’, ‘output’, or ‘tb’.
code is a portion of the line that should be added to the buffer corresponding to the next mode and eventually lexed by another lexer. For example, code could be Python code if mode were ‘input’.
insertion is a 3-tuple (index, token, text) representing an unprocessed “token” that will be inserted into the stream of tokens that are created from the buffer once we change modes. This is usually the input or output prompt.
In general, the next mode depends on the current mode and on the contents of line.
- get_tokens_unprocessed(text)¶
This method should process the text and return an iterable of (index, tokentype, value) tuples, where index is the starting position of the token within the input text.
It must be overridden by subclasses. It is recommended to implement it as a generator to maximize effectiveness.
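A sketch of consuming this lower-level API via IPyLexer (assuming IPython is installed); each index is the token's character offset into the input text:

```python
from IPython.lib.lexers import IPyLexer

text = "In [1]: x = 1\n"
toks = list(IPyLexer().get_tokens_unprocessed(text))
for index, tokentype, value in toks:
    print(index, tokentype, repr(value))
```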
- ipytb_start = re.compile('^(\\^C)?(-+\\n)|^( File)(.*)(, line )(\\d+\\n)')¶
The regex to determine when a traceback starts.
- mimetypes = ['text/x-ipython-console']¶
A list of MIME types for content that can be lexed with this lexer.
- name = 'IPython console session'¶
Full name of the lexer, in human-readable form
- class IPython.lib.lexers.IPyLexer(**options)¶
Bases:
Lexer
Primary lexer for all IPython-like code.
This is a simple helper lexer. If the first line of the text begins with “In [[0-9]+]:”, then the entire text is parsed with an IPython console lexer. If not, then the entire text is parsed with an IPython lexer.
The goal is to reduce the number of lexers that are registered with Pygments.
- __init__(**options)¶
This constructor takes arbitrary options as keyword arguments. Every subclass must first process its own options and then call the Lexer constructor, since it processes the basic options like stripnl.
An example looks like this:

    def __init__(self, **options):
        self.compress = options.get('compress', '')
        Lexer.__init__(self, **options)
As these options must all be specifiable as strings (due to the command line usage), there are various utility functions available to help with that, see `Utilities`_.
- aliases = ['ipy']¶
A list of short, unique identifiers that can be used to look up the lexer from a list, e.g., using get_lexer_by_name().
- get_tokens_unprocessed(text)¶
This method should process the text and return an iterable of (index, tokentype, value) tuples, where index is the starting position of the token within the input text.
It must be overridden by subclasses. It is recommended to implement it as a generator to maximize effectiveness.
- name = 'IPy session'¶
Full name of the lexer, in human-readable form
1 Function¶
- IPython.lib.lexers.build_ipy_lexer(python3)¶
Builds IPython lexers depending on the value of python3.
The lexer inherits from an appropriate Python lexer and then adds information about IPython specific keywords (i.e. magic commands, shell commands, etc.)
- Parameters:
python3 (bool) – If
True
, then build an IPython lexer from a Python 3 lexer.