Sindbad~EG File Manager

Current Path : /usr/local/lib/python3.12/site-packages/pip/_internal/index/__pycache__/
Current File : /usr/local/lib/python3.12/site-packages/pip/_internal/index/__pycache__/collector.cpython-312.pyc

[Binary file. The bytes here are a CPython 3.12 .pyc: a 16-byte header (magic
number, flags, source mtime, source size) followed by a marshalled code
object, so most of the file is not printable. What follows is the text that
survives in the string-constant pool, which identifies the file as the
compiled form of pip/_internal/index/collector.py.]

Recovered top-level imports: collections, email.message, functools, itertools,
json, logging, os, urllib.parse, urllib.request; dataclasses.dataclass,
html.parser.HTMLParser, optparse.Values and several typing helpers; from pip:
pip._vendor.requests (with Response, RetryError, SSLError),
pip._internal.exceptions.NetworkConnectionError,
pip._internal.models.link.Link, pip._internal.models.search_scope.SearchScope,
pip._internal.network.session.PipSession,
pip._internal.network.utils.raise_for_status,
pip._internal.utils.filetypes.is_archive_file,
pip._internal.utils.misc.redact_auth_from_url, pip._internal.vcs.vcs, and the
sibling module .sources (CandidatesFromPage, LinkSource, build_source).
Module-level names: logger = logging.getLogger(__name__) and the type alias
ResponseHeaders = MutableMapping[str, str].

Recovered module docstring:
The main purpose of this module is to expose LinkCollector.collect_sources().
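Before the listing continues, a minimal sketch of how the readable strings in
a file like this can be recovered (an assumption of this sketch: it runs under
the same CPython 3.12 that wrote the .pyc, since marshal is version-specific;
iter_strings is a name invented here):

import marshal
from types import CodeType

def iter_strings(code):
    # Yield every string constant reachable from a code object, including
    # the constants of nested function and class bodies.
    for const in code.co_consts:
        if isinstance(const, str):
            yield const
        elif isinstance(const, CodeType):
            yield from iter_strings(const)

with open("collector.cpython-312.pyc", "rb") as f:
    f.seek(16)                      # skip the 16-byte pyc header
    module_code = marshal.loads(f.read())

for s in iter_strings(module_code):
    print(s)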

_match_vcs_scheme(url: str) -> Optional[str]
    Look for VCS schemes in the URL.

    Returns the matched VCS scheme, or None if there's no match.
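The recovered constants (the "+:" suffix string and the vcs.schemes iteration)
pin down the logic. A sketch with a stand-in scheme list (the real function
walks pip._internal.vcs.vcs.schemes):

from typing import Optional

VCS_SCHEMES = ["git", "hg", "svn", "bzr"]   # stand-in for vcs.schemes

def match_vcs_scheme(url: str) -> Optional[str]:
    for scheme in VCS_SCHEMES:
        # "git+https://..." and "git:..." both count as VCS URLs
        if url.lower().startswith(scheme) and url[len(scheme)] in "+:":
            return scheme
    return None

assert match_vcs_scheme("git+https://example.com/repo.git") == "git"
assert match_vcs_scheme("https://example.com/pkg.tar.gz") is None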

class _NotAPIContent(Exception)
    __init__(self, content_type: str, request_desc: str)

_ensure_api_header(response: Response) -> None
    Check the Content-Type header to ensure the response contains a Simple
    API Response.

    Raises `_NotAPIContent` if the content type is not a valid content-type.
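The accepted content types are recoverable verbatim from the constant pool. A
sketch of the check (ensure_api_header is a stand-in name; the real helper
raises _NotAPIContent with the request method as request_desc):

ACCEPTED = (
    "text/html",
    "application/vnd.pypi.simple.v1+html",
    "application/vnd.pypi.simple.v1+json",
)

def ensure_api_header(headers: dict, request_desc: str) -> None:
    content_type = headers.get("Content-Type", "Unknown")
    if content_type.lower().startswith(ACCEPTED):
        return
    # the real code raises _NotAPIContent(content_type, request_desc)
    raise ValueError(f"{request_desc} got unsupported Content-Type: {content_type}")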

class _NotHTTP(Exception)

_ensure_api_response(url: str, session: PipSession) -> None
    Send a HEAD request to the URL, and ensure the response contains a simple
    API Response.

    Raises `_NotHTTP` if the URL is not available for a HEAD request, or
    `_NotAPIContent` if the content type is not a valid content type.
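A sketch of the recovered flow, written against requests directly (the real
code goes through PipSession and pip's raise_for_status and _ensure_api_header
helpers):

import urllib.parse
import requests

def ensure_api_response(url: str, session: requests.Session) -> None:
    if urllib.parse.urlsplit(url).scheme not in {"http", "https"}:
        raise RuntimeError("not an HTTP(S) URL")   # the real code raises _NotHTTP
    resp = session.head(url, allow_redirects=True)
    resp.raise_for_status()
    # ...followed by the Content-Type check shown above, applied to resp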

_get_simple_response(url: str, session: PipSession) -> Response
    Access a Simple API response with GET, and return the response.

    This consists of three parts:

    1. If the URL looks suspiciously like an archive, send a HEAD first to
       check the Content-Type is HTML or Simple API, to avoid downloading a
       large file. Raise `_NotHTTP` if the content type cannot be determined, or
       `_NotAPIContent` if it is not HTML or a Simple API.
    2. Actually perform the request. Raise HTTP exceptions on network failures.
    3. Check the Content-Type header to make sure we got a Simple API response,
       and raise `_NotAPIContent` otherwise.
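The Accept values and their q-weights survive verbatim in the constant pool,
as do the log strings "Getting page %s" and "Fetched page %s as %s". A sketch
of step 2, using requests' own raise_for_status in place of pip's helper:

import requests

ACCEPT = ", ".join([
    "application/vnd.pypi.simple.v1+json",
    "application/vnd.pypi.simple.v1+html; q=0.1",
    "text/html; q=0.01",
])

def get_simple_response(url: str, session: requests.Session) -> requests.Response:
    # Prefer the JSON Simple API, fall back to HTML; force revalidation.
    headers = {"Accept": ACCEPT, "Cache-Control": "max-age=0"}
    resp = session.get(url, headers=headers)
    resp.raise_for_status()
    return resp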

_get_encoding_from_headers(headers: ResponseHeaders) -> Optional[str]
    Determine if we have any encoding information in our headers.
    (Recovered body: feed the Content-Type header to an
    email.message.Message and return its "charset" parameter, if any.)

class CacheablePageContent
    Wrapper whose __eq__ and __hash__ delegate to page.url, used as the
    cache key for parsed pages.

class ParseLinks(Protocol)
    __call__(self, page: "IndexContent") -> Iterable[Link]

with_cached_index_content(fn: ParseLinks) -> ParseLinks
    Given a function that parses an Iterable[Link] from an IndexContent, cache the
    function's result (keyed by CacheablePageContent), unless the IndexContent
    `page` has `page.cache_link_parsing == False`.
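A sketch of that caching scheme (CacheKey and with_cached_parse are stand-in
names; the real code pairs functools.lru_cache(maxsize=None) with the
CacheablePageContent wrapper above):

import functools

class CacheKey:
    # Hash and compare by page URL only, mirroring CacheablePageContent.
    def __init__(self, page):
        self.page = page
    def __eq__(self, other):
        return isinstance(other, CacheKey) and self.page.url == other.page.url
    def __hash__(self):
        return hash(self.page.url)

def with_cached_parse(fn):
    @functools.lru_cache(maxsize=None)
    def cached(key):
        return list(fn(key.page))

    @functools.wraps(fn)
    def wrapper(page):
        if page.cache_link_parsing:        # cacheable pages share one parse
            return cached(CacheKey(page))
        return list(fn(page))              # others are re-parsed every call
    return wrapper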

@with_cached_index_content
parse_links(page: "IndexContent") -> Iterable[Link]
    Parse a Simple API's Index Content, and yield its anchor elements as Link objects.
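The constant pool shows both branches: JSON responses are json.loads-ed and
their "files" entries turned into Links; HTML responses are decoded (utf-8
fallback) and fed to the HTMLLinkParser described below. A sketch of the JSON
branch (parse_links_sketch is a stand-in name yielding raw URLs where pip
builds Link.from_json / Link.from_element objects):

import json

def parse_links_sketch(page):   # page is an IndexContent-like object
    if page.content_type.lower().startswith("application/vnd.pypi.simple.v1+json"):
        data = json.loads(page.content)
        for file_entry in data.get("files", []):
            yield file_entry["url"]
    else:
        # HTML branch: page.content.decode(page.encoding or "utf-8") is fed
        # to the base-href-aware parser, and each anchor yields one link.
        ...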

@dataclass(frozen=True)
class IndexContent
    Represents one response (or page), along with its URL.
    :param encoding: the encoding to decode the given content.
    :param url: the URL from which the HTML was downloaded.
    :param cache_link_parsing: whether links parsed from this page's url
                               should be cached. PyPI index urls should
                               have this set to False, for example.
    r�r/r�r"Trmr#c�,�t|j�Sr2)rr"rws r*�__str__zIndexContent.__str__
s��#�D�H�H�-�-r,N)r7r8r9�__doc__�bytes�__annotations__r:rrmr{r�rJr,r*ryry�s:����N����s�m��	�H�#���#�.��.r,c���eZdZdZdeddf�fd�Zdedeeeeefddfd�Z	deeeeefdeefd	�Z
�xZS)
r�zf
    HTMLParser that keeps the first base HREF and a list of all anchor
    elements' attributes.
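The class reconstructs cleanly from the recovered method names and constants
(LinkParserSketch is a stand-in name for this sketch):

from html.parser import HTMLParser
from typing import Dict, List, Optional

class LinkParserSketch(HTMLParser):
    def __init__(self, url: str) -> None:
        super().__init__(convert_charrefs=True)
        self.url = url
        self.base_url: Optional[str] = None
        self.anchors: List[Dict[str, Optional[str]]] = []

    def handle_starttag(self, tag, attrs):
        if tag == "base" and self.base_url is None:
            href = self._get_href(attrs)
            if href is not None:
                self.base_url = href     # keep only the first <base href>
        elif tag == "a":
            self.anchors.append(dict(attrs))

    @staticmethod
    def _get_href(attrs):
        for name, value in attrs:
            if name == "href":
                return value
        return None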

    Recovered methods: __init__ calls super().__init__(convert_charrefs=True);
    handle_starttag(tag, attrs) stores the first <base href> and appends each
    <a> tag's attribute dict to self.anchors; get_href(attrs) returns the
    href value, if present.

_handle_get_simple_fail(link: Link, reason: Union[str, Exception],
                        meth: Optional[Callable[..., None]] = None) -> None
    Logs "Could not fetch URL %s: %s - skipping" (logger.debug by default).

_make_index_content(response: Response,
                    cache_link_parsing: bool = True) -> IndexContent
    Builds an IndexContent from a response, taking the encoding from
    _get_encoding_from_headers.

_get_index_content(link: Link, *, session: PipSession) -> Optional[IndexContent]
    Recovered behaviour and log strings:
    - VCS URLs are rejected: "Cannot look at %s URL %s because it does not
      support lookup as web pages."
    - A file: URL naming a directory is retried as its index.html:
      "file: URL is directory, getting %s".
    - _NotHTTP: "Skipping page %s because it looks like an archive, and
      cannot be checked by a HTTP HEAD request."
    - _NotAPIContent: "Skipping page %s because the %s request got
      Content-Type: %s. The only supported Content-Types are
      application/vnd.pypi.simple.v1+json,
      application/vnd.pypi.simple.v1+html, and text/html".
    - SSLError ("There was a problem confirming the ssl certificate: ..."),
      NetworkConnectionError, RetryError, requests.ConnectionError
      ("connection error: ...") and requests.Timeout ("timed out") are all
      logged via _handle_get_simple_fail and skipped.

class CollectedSources(NamedTuple)
    find_links: Sequence[Optional[LinkSource]]
    index_urls: Sequence[Optional[LinkSource]]

class LinkCollector
    Responsible for collecting Link objects from all configured locations,
    making network requests as needed.

    The class's main method is its collect_sources() method.
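A sketch of how these pieces fit together, mirroring the recovered signatures
(pip._internal is not a stable API, so treat this as illustration only; the
index URL is an example value):

from pip._internal.index.collector import LinkCollector
from pip._internal.models.search_scope import SearchScope
from pip._internal.network.session import PipSession

session = PipSession()
search_scope = SearchScope.create(
    find_links=[], index_urls=["https://pypi.org/simple"], no_index=False
)
collector = LinkCollector(session=session, search_scope=search_scope)
# collect_sources() then pairs every find-links path and index URL with a
# LinkSource whose pages are fetched via fetch_response() on demand.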

    Recovered methods:
    - __init__(self, session: PipSession, search_scope: SearchScope) -> None
    - create(cls, session: PipSession, options: Values,
      suppress_no_index: bool = False) [classmethod]: builds the SearchScope
      from --index-url plus --extra-index-url and options.find_links, and
      logs "Ignoring indexes: %s" when --no-index is in effect.
    - find_links [property]
    - fetch_response(self, location: Link) -> Optional[IndexContent]
          Fetch an HTML page containing package links.
    - collect_sources(self, project_name: str,
      candidates_from_page: CandidatesFromPage) -> CollectedSources: builds a
      LinkSource via build_source() for every find-links path and every index
      URL; at DEBUG level logs "<n> location(s) to search for versions of
      <project>:" followed by one "* <location>" line per source.

The rest of the file is the non-printable remainder of the marshalled module
(bytecode, line-number tables and back-references); no further text survives.

Sindbad File Manager Version 1.0, Coded By Sindbad EG ~ The Terrorists