Sindbad~EG File Manager

Current Path : /usr/local/lib/python3.12/site-packages/pip/_vendor/pygments/__pycache__/
Current File : //usr/local/lib/python3.12/site-packages/pip/_vendor/pygments/__pycache__/lexer.cpython-312.pyc

[Raw CPython 3.12 bytecode of the vendored Pygments module `pip._vendor.pygments.lexer`; the binary content is not text and only the embedded docstrings are recoverable. The module's header docstring reads:]

    pygments.lexer
    ~~~~~~~~~~~~~~

    Base lexer classes.

    :copyright: Copyright 2006-2024 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.

[Per its `__all__`, the module defines: Lexer, RegexLexer, ExtendedRegexLexer, DelegatingLexer, LexerContext, include, inherit, bygroups, using, this, default, words. The embedded `Lexer` docstring documents the base constructor options `stripnl`, `stripall`, `ensurenl`, `tabsize`, `encoding`, and `inencoding`, and describes `get_tokens()` (yielding `(tokentype, value)` pairs), `get_tokens_unprocessed()` (yielding `(index, tokentype, value)` tuples), and the `analyse_text()` lexer-guessing hook.]
