Sindbad~EG File Manager

Current Path : /usr/local/lib/python3.12/site-packages/pandas/core/__pycache__/
Current File : /usr/local/lib/python3.12/site-packages/pandas/core/__pycache__/base.cpython-312.pyc

"""
Base and utility classes for pandas objects.
"""
from __future__ import annotations

import textwrap
from typing import (
    TYPE_CHECKING,
    Any,
    Generic,
    Literal,
    cast,
    final,
    overload,
)
import warnings

import numpy as np

from pandas._config import using_copy_on_write
from pandas._libs import lib
from pandas._typing import (
    AxisInt,
    DtypeObj,
    IndexLabel,
    NDFrameT,
    Self,
    Shape,
    npt,
)
from pandas.compat import PYPY
from pandas.compat.numpy import function as nv
from pandas.errors import AbstractMethodError
from pandas.util._decorators import (
    cache_readonly,
    doc,
)
from pandas.util._exceptions import find_stack_level

from pandas.core.dtypes.cast import can_hold_element
from pandas.core.dtypes.common import (
    is_object_dtype,
    is_scalar,
)
from pandas.core.dtypes.dtypes import ExtensionDtype
from pandas.core.dtypes.generic import (
    ABCDataFrame,
    ABCIndex,
    ABCSeries,
)
from pandas.core.dtypes.missing import (
    isna,
    remove_na_arraylike,
)

from pandas.core import (
    algorithms,
    nanops,
    ops,
)
from pandas.core.accessor import DirNamesMixin
from pandas.core.arraylike import OpsMixin
from pandas.core.arrays import ExtensionArray
from pandas.core.construction import (
    ensure_wrapped_if_datetimelike,
    extract_array,
)

if TYPE_CHECKING:
    from collections.abc import (
        Hashable,
        Iterator,
    )

    from pandas._typing import (
        DropKeep,
        NumpySorter,
        NumpyValueArrayLike,
        ScalarLike_co,
    )

    from pandas import (
        DataFrame,
        Index,
        Series,
    )


_shared_docs: dict[str, str] = {}

_indexops_doc_kwargs = {
    "klass": "IndexOpsMixin",
    "inplace": "",
    "unique": "IndexOpsMixin",
    "duplicated": "IndexOpsMixin",
}


class PandasObject(DirNamesMixin):
    """
    Baseclass for various pandas objects.
    """

    # results from calls to methods decorated with cache_readonly get added to _cache
    _cache: dict[str, Any]

    @property
    def _constructor(self):
        """
        Class constructor (for this class it's just `__class__`).
        """
        return type(self)

    def __repr__(self) -> str:
        """
        Return a string representation for a particular object.
        """
        # Should be overwritten by base classes
        return object.__repr__(self)

    def _reset_cache(self, key: str | None = None) -> None:
        """
        Reset cached properties. If ``key`` is passed, only clears that key.
        """
        if not hasattr(self, "_cache"):
            return
        if key is None:
            self._cache.clear()
        else:
            self._cache.pop(key, None)

    def __sizeof__(self) -> int:
        """
        Generates the total memory usage for an object that returns
        either a value or Series of values
        """
        memory_usage = getattr(self, "memory_usage", None)
        if memory_usage:
            mem = memory_usage(deep=True)
            return int(mem if is_scalar(mem) else mem.sum())

        # no memory_usage attribute, so fall back to object's 'sizeof'
        return super().__sizeof__()


class NoNewAttributesMixin:
    """
    Mixin which prevents adding new attributes.

    Prevents additional attributes via xxx.attribute = "something" after a
    call to `self.__freeze()`. Mainly used to prevent the user from using
    wrong attributes on an accessor (`Series.cat/.str/.dt`).

    If you really want to add a new attribute at a later time, you need to use
    `object.__setattr__(self, key, value)`.
    """

    def _freeze(self) -> None:
        """
        Prevents setting additional attributes.
        """
        object.__setattr__(self, "__frozen", True)

    # prevent adding any attribute via s.xxx.new_attribute = ...
    def __setattr__(self, key: str, value) -> None:
        # _cache is used by a decorator
        # We need to check both 1.) cls.__dict__ and 2.) getattr(self, key)
        # because
        # 1.) getattr is false for attributes that raise errors
        # 2.) cls.__dict__ doesn't traverse into base classes
        if getattr(self, "__frozen", False) and not (
            key == "_cache"
            or key in type(self).__dict__
            or getattr(self, key, None) is not None
        ):
            raise AttributeError(f"You cannot add any new attribute '{key}'")
        object.__setattr__(self, key, value)


class SelectionMixin(Generic[NDFrameT]):
    """
    mixin implementing the selection & aggregation interface on a group-like
    object sub-classes need to define: obj, exclusions
    """

    obj: NDFrameT
    _selection: IndexLabel | None = None
    exclusions: frozenset[Hashable]
    _internal_names = ["_cache", "__setstate__"]
    _internal_names_set = set(_internal_names)

    @final
    @property
    def _selection_list(self):
        if not isinstance(
            self._selection, (list, tuple, ABCSeries, ABCIndex, np.ndarray)
        ):
            return [self._selection]
        return self._selection

    @cache_readonly
    def _selected_obj(self):
        if self._selection is None or isinstance(self.obj, ABCSeries):
            return self.obj
        else:
            return self.obj[self._selection]

    @final
    @cache_readonly
    def ndim(self) -> int:
        return self._selected_obj.ndim

    @final
    @cache_readonly
    def _obj_with_exclusions(self):
        if isinstance(self.obj, ABCSeries):
            return self.obj

        if self._selection is not None:
            return self.obj._getitem_nocopy(self._selection_list)

        if len(self.exclusions) > 0:
            # equivalent to `self.obj.drop(self.exclusions, axis=1)
            #  but this avoids consolidating and making a copy
            return self.obj._drop_axis(self.exclusions, axis=1, only_slice=True)
        else:
            return self.obj

    def __getitem__(self, key):
        if self._selection is not None:
            raise IndexError(f"Column(s) {self._selection} already selected")

        if isinstance(key, (list, tuple, ABCSeries, ABCIndex, np.ndarray)):
            if len(self.obj.columns.intersection(key)) != len(set(key)):
                bad_keys = list(set(key).difference(self.obj.columns))
                raise KeyError(f"Columns not found: {str(bad_keys)[1:-1]}")
            return self._gotitem(list(key), ndim=2)

        else:
            if key not in self.obj:
                raise KeyError(f"Column not found: {key}")
            ndim = self.obj[key].ndim
            return self._gotitem(key, ndim=ndim)

    def _gotitem(self, key, ndim: int, subset=None):
        """
        sub-classes to define
        return a sliced object

        Parameters
        ----------
        key : str / list of selections
        ndim : {1, 2}
            requested ndim of result
        subset : object, default None
            subset to act on
        """
        raise AbstractMethodError(self)

    @final
    def _infer_selection(self, key, subset: Series | DataFrame):
        """
        Infer the `selection` to pass to our constructor in _gotitem.
        """
        selection = None
        if subset.ndim == 2 and (
            (lib.is_scalar(key) and key in subset) or lib.is_list_like(key)
        ):
            selection = key
        elif subset.ndim == 1 and lib.is_scalar(key) and key == subset.name:
            selection = key
        return selection

    def aggregate(self, func, *args, **kwargs):
        raise AbstractMethodError(self)

    agg = aggregate


class IndexOpsMixin(OpsMixin):
    """
    Common ops mixin to support a unified interface / docs for Series / Index
    """

    # ndarray compatibility
    __array_priority__ = 1000
    _hidden_attrs: frozenset[str] = frozenset(["tolist"])

    @property
    def dtype(self) -> DtypeObj:
        # must be defined here as a property for mypy
        raise AbstractMethodError(self)

    @property
    def _values(self) -> ExtensionArray | np.ndarray:
        # must be defined here as a property for mypy
        raise AbstractMethodError(self)

    @final
    def transpose(self, *args, **kwargs) -> Self:
        """
        Return the transpose, which is by definition self.

        Returns
        -------
        %(klass)s
        """
        nv.validate_transpose(args, kwargs)
        return self

    T = property(
        transpose,
        doc="""
        Return the transpose, which is by definition self.
        """,
    )

    @property
    def shape(self) -> Shape:
        """
        Return a tuple of the shape of the underlying data.
        """
        return self._values.shape

    def __len__(self) -> int:
        # We need this defined here for mypy
        raise AbstractMethodError(self)

    @property
    def ndim(self) -> Literal[1]:
        """
        Number of dimensions of the underlying data, by definition 1.
        """
        return 1

    @final
    def item(self):
        """
        Return the first element of the underlying data as a Python scalar.

        Returns
        -------
        scalar
            The first element of Series or Index.

        Raises
        ------
        ValueError
            If the data is not length = 1.
        """
        if len(self) == 1:
            return next(iter(self))
        raise ValueError("can only convert an array of size 1 to a Python scalar")

    @property
    def nbytes(self) -> int:
        """
        Return the number of bytes in the underlying data.
        """
        return self._values.nbytes

    @property
    def size(self) -> int:
        """
        Return the number of elements in the underlying data.
        """
        return len(self._values)

    @property
    def array(self) -> ExtensionArray:
        """
        The ExtensionArray of the data backing this Series or Index.

        Returns
        -------
        ExtensionArray
            An ExtensionArray of the values stored within. For extension
            types, this is the actual array. For NumPy native types, this
            is a thin (no copy) wrapper around :class:`numpy.ndarray`.

            ``.array`` differs from ``.values``, which may require converting
            the data to a different form.
        """
        raise AbstractMethodError(self)

    @final
    def to_numpy(
        self,
        dtype: npt.DTypeLike | None = None,
        copy: bool = False,
        na_value: object = lib.no_default,
        **kwargs,
    ) -> np.ndarray:
        """
        A NumPy ndarray representing the values in this Series or Index.

        Parameters
        ----------
        dtype : str or numpy.dtype, optional
            The dtype to pass to :meth:`numpy.asarray`.
        copy : bool, default False
            Whether to ensure that the returned value is not a view on
            another array. Note that ``copy=False`` does not *ensure* that
            ``to_numpy()`` is no-copy. Rather, ``copy=True`` ensure that
            a copy is made, even if not strictly necessary.
        na_value : Any, optional
            The value to use for missing values. The default value depends
            on `dtype` and the type of the array.
        **kwargs
            Additional keywords passed through to the ``to_numpy`` method
            of the underlying array (for extension arrays).

        Returns
        -------
        numpy.ndarray
        """
        if isinstance(self.dtype, ExtensionDtype):
            return self.array.to_numpy(dtype, copy=copy, na_value=na_value, **kwargs)
        elif kwargs:
            bad_keys = next(iter(kwargs.keys()))
            raise TypeError(
                f"to_numpy() got an unexpected keyword argument '{bad_keys}'"
            )

        fillna = (
            na_value is not lib.no_default
            # no need to fillna with np.nan if we already have a float dtype
            and not (na_value is np.nan and np.issubdtype(self.dtype, np.floating))
        )

        values = self._values
        if fillna:
            if not can_hold_element(values, na_value):
                # if we can't hold the na_value asarray either makes a copy or
                # we error before modifying values. The asarray later on thus
                # won't make another copy
                values = np.asarray(values, dtype=dtype)
            else:
                values = values.copy()

            values[np.asanyarray(isna(self))] = na_value

        result = np.asarray(values, dtype=dtype)

        if (copy and not fillna) or (not copy and using_copy_on_write()):
            if np.shares_memory(self._values[:2], result[:2]):
                # Take slices to improve performance of check
                if using_copy_on_write() and not copy:
                    result = result.view()
                    result.flags.writeable = False
                else:
                    result = result.copy()

        return result

    @final
    @property
    def empty(self) -> bool:
        return not self.size

    @doc(op="max", oppose="min", value="largest")
    def argmax(
        self, axis: AxisInt | None = None, skipna: bool = True, *args, **kwargs
    ) -> int:
        """
        Return int position of the {value} value in the Series.

        If the {op}imum is achieved in multiple locations,
        the first row position is returned.

        Parameters
        ----------
        axis : {{None}}
            Unused. Parameter needed for compatibility with DataFrame.
        skipna : bool, default True
            Exclude NA/null values when showing the result.
        *args, **kwargs
            Additional arguments and keywords for compatibility with NumPy.

        Returns
        -------
        int
            Row position of the {op}imum value.
        """
        delegate = self._values
        nv.validate_minmax_axis(axis)
        skipna = nv.validate_argmax_with_skipna(skipna, args, kwargs)

        if isinstance(delegate, ExtensionArray):
            if not skipna and delegate.isna().any():
                warnings.warn(
                    f"The behavior of {type(self).__name__}.argmax/argmin "
                    "with skipna=False and NAs, or with all-NAs is deprecated. "
                    "In a future version this will raise ValueError.",
                    FutureWarning,
                    stacklevel=find_stack_level(),
                )
                return -1
            else:
                return delegate.argmax()
        else:
            result = nanops.nanargmax(delegate, skipna=skipna)
            if result == -1:
                warnings.warn(
                    f"The behavior of {type(self).__name__}.argmax/argmin "
                    "with skipna=False and NAs, or with all-NAs is deprecated. "
                    "In a future version this will raise ValueError.",
                    FutureWarning,
                    stacklevel=find_stack_level(),
                )
            return result

    @doc(argmax, op="min", oppose="max", value="smallest")
    def argmin(
        self, axis: AxisInt | None = None, skipna: bool = True, *args, **kwargs
    ) -> int:
        delegate = self._values
        nv.validate_minmax_axis(axis)
        skipna = nv.validate_argmin_with_skipna(skipna, args, kwargs)

        if isinstance(delegate, ExtensionArray):
            if not skipna and delegate.isna().any():
                warnings.warn(
                    f"The behavior of {type(self).__name__}.argmax/argmin "
                    "with skipna=False and NAs, or with all-NAs is deprecated. "
                    "In a future version this will raise ValueError.",
                    FutureWarning,
                    stacklevel=find_stack_level(),
                )
                return -1
            else:
                return delegate.argmin()
        else:
            result = nanops.nanargmin(delegate, skipna=skipna)
            if result == -1:
                warnings.warn(
                    f"The behavior of {type(self).__name__}.argmax/argmin "
                    "with skipna=False and NAs, or with all-NAs is deprecated. "
                    "In a future version this will raise ValueError.",
                    FutureWarning,
                    stacklevel=find_stack_level(),
                )
            return result

    def tolist(self) -> list:
        """
        Return a list of the values.

        These are each a scalar type, which is a Python scalar
        (for str, int, float) or a pandas scalar
        (for Timestamp/Timedelta/Interval/Period)

        Returns
        -------
        list
        """
        return self._values.tolist()

    to_list = tolist

    def __iter__(self) -> Iterator:
        """
        Return an iterator of the values.

        These are each a scalar type, which is a Python scalar
        (for str, int, float) or a pandas scalar
        (for Timestamp/Timedelta/Interval/Period)

        Returns
        -------
        iterator
        """
        # We are explicitly making element iterators.
        if not isinstance(self._values, np.ndarray):
            # Check type instead of dtype to catch DTA/TDA
            return iter(self._values)
        else:
            return map(self._values.item, range(self._values.size))

    @cache_readonly
    def hasnans(self) -> bool:
        """
        Return True if there are any NaNs.

        Enables various performance speedups.

        Returns
        -------
        bool
        """
        return bool(isna(self).any())

    @final
    def _map_values(self, mapper, na_action=None, convert: bool = True):
        """
        An internal function that maps values using the input
        correspondence (which can be a dict, Series, or function).

        Parameters
        ----------
        mapper : function, dict, or Series
            The input correspondence object
        na_action : {None, 'ignore'}
            If 'ignore', propagate NA values, without passing them to the
            mapping function
        convert : bool, default True
            Try to find better dtype for elementwise function results. If
            False, leave as dtype=object. Note that the dtype is always
            preserved for some extension array dtypes, such as Categorical.

        Returns
        -------
        Union[Index, MultiIndex], inferred
            The output of the mapping function applied to the index.
            If the function returns a tuple with more than one element
            a MultiIndex will be returned.
        """
        arr = self._values

        if isinstance(arr, ExtensionArray):
            return arr.map(mapper, na_action=na_action)

        return algorithms.map_array(arr, mapper, na_action=na_action, convert=convert)

    def value_counts(
        self,
        normalize: bool = False,
        sort: bool = True,
        ascending: bool = False,
        bins=None,
        dropna: bool = True,
    ) -> Series:
        """
        Return a Series containing counts of unique values.

        The resulting object will be in descending order so that the
        first element is the most frequently-occurring element.
        Excludes NA values by default.

        Parameters
        ----------
        normalize : bool, default False
            If True then the object returned will contain the relative
            frequencies of the unique values.
        sort : bool, default True
            Sort by frequencies when True. Preserve the order of the data when False.
        ascending : bool, default False
            Sort in ascending order.
        bins : int, optional
            Rather than count values, group them into half-open bins,
            a convenience for ``pd.cut``, only works with numeric data.
        dropna : bool, default True
            Don't include counts of NaN.

        Returns
        -------
        Series
        """
        return algorithms.value_counts_internal(
            self,
            sort=sort,
            ascending=ascending,
            normalize=normalize,
            bins=bins,
            dropna=dropna,
        )

    def unique(self):
        values = self._values
        if not isinstance(values, np.ndarray):
            # i.e. ExtensionArray
            result = values.unique()
        else:
            result = algorithms.unique1d(values)
        return result

    @final
    def nunique(self, dropna: bool = True) -> int:
        """
        Return number of unique elements in the object.

        Excludes NA values by default.

        Parameters
        ----------
        dropna : bool, default True
            Don't include NaN in the count.

        Returns
        -------
        int
        """
        uniqs = self.unique()
        if dropna:
            uniqs = remove_na_arraylike(uniqs)
        return len(uniqs)

    @property
    def is_unique(self) -> bool:
        """
        Return boolean if values in the object are unique.

        Returns
        -------
        bool
        """
        return self.nunique(dropna=False) == len(self)

    @property
    def is_monotonic_increasing(self) -> bool:
        """
        Return boolean if values in the object are monotonically increasing.

        Returns
        -------
        bool
        """
        from pandas import Index

        return Index(self).is_monotonic_increasing

    @property
    def is_monotonic_decreasing(self) -> bool:
        """
        Return boolean if values in the object are monotonically decreasing.

        Returns
        -------
        bool
        """
        from pandas import Index

        return Index(self).is_monotonic_decreasing

    @final
    def _memory_usage(self, deep: bool = False) -> int:
        """
        Memory usage of the values.

        Parameters
        ----------
        deep : bool, default False
            Introspect the data deeply, interrogate
            `object` dtypes for system-level memory consumption.

        Returns
        -------
        bytes used

        Notes
        -----
        Memory usage does not include memory consumed by elements that
        are not components of the array if deep=False or if used on PyPy
        """
        if hasattr(self.array, "memory_usage"):
            return self.array.memory_usage(deep=deep)

        v = self.array.nbytes
        if deep and is_object_dtype(self.dtype) and not PYPY:
            values = cast(np.ndarray, self._values)
            v += lib.memory_usage_of_objects(values)
        return v

    @doc(
        algorithms.factorize,
        values="",
        order="",
        size_hint="",
        sort=textwrap.dedent(
            """\
            sort : bool, default False
                Sort `uniques` and shuffle `codes` to maintain the
                relationship.
            """
        ),
    )
    def factorize(
        self,
        sort: bool = False,
        use_na_sentinel: bool = True,
    ) -> tuple[npt.NDArray[np.intp], Index]:
        codes, uniques = algorithms.factorize(
            self._values, sort=sort, use_na_sentinel=use_na_sentinel
        )
        if uniques.dtype == np.float16:
            uniques = uniques.astype(np.float32)

        if isinstance(self, ABCIndex):
            # preserve e.g. MultiIndex
            uniques = self._constructor(uniques)
        else:
            from pandas import Index

            uniques = Index(uniques)
        return codes, uniques

    _shared_docs[
        "searchsorted"
    ] = """
        Find indices where elements should be inserted to maintain order.

        Find the indices into a sorted {klass} `self` such that, if the
        corresponding elements in `value` were inserted before the indices,
        the order of `self` would be preserved.

        .. note::

            The {klass} *must* be monotonically sorted, otherwise
            wrong locations will likely be returned. Pandas does *not*
            check this for you.

        Parameters
        ----------
        value : array-like or scalar
            Values to insert into `self`.
        side : {{'left', 'right'}}, optional
            If 'left', the index of the first suitable location found is given.
            If 'right', return the last such index.  If there is no suitable
            index, return either 0 or N (where N is the length of `self`).
        sorter : 1-D array-like, optional
            Optional array of integer indices that sort `self` into ascending
            order. They are typically the result of ``np.argsort``.

        Returns
        -------
        int or array of int
            A scalar or array of insertion points with the
            same shape as `value`.

        Notes
        -----
        Binary search is used to find the required insertion points.
        """

    @overload
    def searchsorted(
        self,
        value: ScalarLike_co,
        side: Literal["left", "right"] = ...,
        sorter: NumpySorter = ...,
    ) -> np.intp:
        ...

    @overload
    def searchsorted(
        self,
        value: npt.ArrayLike | ExtensionArray,
        side: Literal["left", "right"] = ...,
        sorter: NumpySorter = ...,
    ) -> npt.NDArray[np.intp]:
        ...

    @doc(_shared_docs["searchsorted"], klass="Index")
    def searchsorted(
        self,
        value: NumpyValueArrayLike | ExtensionArray,
        side: Literal["left", "right"] = "left",
        sorter: NumpySorter | None = None,
    ) -> npt.NDArray[np.intp] | np.intp:
        if isinstance(value, ABCDataFrame):
            msg = (
                "Value must be 1-D array-like or scalar, "
                f"{type(value).__name__} is not supported"
            )
            raise ValueError(msg)

        values = self._values
        if not isinstance(values, np.ndarray):
            # Going through EA.searchsorted directly improves performance
            return values.searchsorted(value, side=side, sorter=sorter)

        return algorithms.searchsorted(
            values,
            value,
            side=side,
            sorter=sorter,
        )

    def drop_duplicates(self, *, keep: DropKeep = "first"):
        duplicated = self._duplicated(keep=keep)
        return self[~duplicated]

    @final
    def _duplicated(self, keep: DropKeep = "first") -> npt.NDArray[np.bool_]:
        arr = self._values
        if isinstance(arr, ExtensionArray):
            return arr.duplicated(keep=keep)
        return algorithms.duplicated(arr, keep=keep)

    def _arith_method(self, other, op):
        res_name = ops.get_op_result_name(self, other)

        lvalues = self._values
        rvalues = extract_array(other, extract_numpy=True, extract_range=True)
        rvalues = ops.maybe_prepare_scalar_for_op(rvalues, lvalues.shape)
        rvalues = ensure_wrapped_if_datetimelike(rvalues)
        if isinstance(rvalues, range):
            rvalues = np.arange(rvalues.start, rvalues.stop, rvalues.step)

        with np.errstate(all="ignore"):
            result = ops.arithmetic_op(lvalues, rvalues, op)

        return self._construct_result(result, name=res_name)

    def _construct_result(self, result, name):
        """
        Construct an appropriately-wrapped result from the ArrayLike result
        of an arithmetic-like operation.
        """
        raise AbstractMethodError(self)
