# Byte-compiled form of pandas/core/groupby/generic.py (Python 3.12).
# Only the module structure, signatures and docstrings are recoverable from the
# dump; compiled bytecode and the marshalled import table are elided below.
"""
Define the SeriesGroupBy and DataFrameGroupBy
classes that hold the groupby interfaces (and some implementations).

These are user facing as the result of the ``df.groupby(...)`` operations,
which here returns a DataFrameGroupBy object.
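
For example (a minimal sketch, assuming the usual ``import pandas as pd``):

>>> df = pd.DataFrame({"key": [1, 1, 2], "a": [1, 2, 3]})
>>> type(df.groupby("key"))
<class 'pandas.core.groupby.generic.DataFrameGroupBy'>
>>> type(df.groupby("key")["a"])
<class 'pandas.core.groupby.generic.SeriesGroupBy'>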
"""
from __future__ import annotations

# The import block survives only as a marshalled constant pool.  The readable
# fragments name the expected dependencies: typing helpers (TYPE_CHECKING, Any,
# Callable, Literal, NamedTuple, TypeVar, Union, cast), numpy, pandas._libs.lib,
# SpecificationError, Appender/Substitution/doc, the dtype and inference helpers
# (is_dict_like, is_scalar, ...), the groupby machinery (GroupBy, GroupByPlot,
# base, ops, GroupByApply, maybe_mangle_lambdas, reconstruct_func,
# validate_func_kwargs), DataFrame/Series/Index/MultiIndex, and the typing
# aliases used in the annotations below.

AggScalar = Union[str, Callable[..., Any]]
ScalarResult = TypeVar("ScalarResult")


class NamedAgg(NamedTuple):
    """
    Helper for column specific aggregation with control over output column names.

    Subclass of typing.NamedTuple.

    Parameters
    ----------
    column : Hashable
        Column label in the DataFrame to apply aggfunc.
    aggfunc : function or str
        Function to apply to the provided column. If string, the name of a built-in
        pandas function.

    Examples
    --------
    >>> df = pd.DataFrame({"key": [1, 1, 2], "a": [-1, 0, 1], 1: [10, 11, 12]})
    >>> agg_a = pd.NamedAgg(column="a", aggfunc="min")
    >>> agg_1 = pd.NamedAgg(column=1, aggfunc=lambda x: np.mean(x))
    >>> df.groupby("key").agg(result_a=agg_a, result_1=agg_1)
         result_a  result_1
    key
    1          -1      10.5
    2           1      12.0
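
    The same result can be requested with plain ``(column, aggfunc)`` tuples,
    which are interpreted identically; ``NamedAgg`` mainly makes the intent
    explicit (a sketch reusing ``df`` from above):

    >>> df.groupby("key").agg(result_a=("a", "min"), result_1=(1, lambda x: np.mean(x)))
         result_a  result_1
    key
    1          -1      10.5
    2           1      12.0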
    """

    column: Hashable
    aggfunc: AggScalar


class SeriesGroupBy(GroupBy[Series]):
    # Method bodies in this class are byte-compiled in the dump; only names,
    # signatures recoverable from the constant pool, and docstrings are kept.

    def _wrap_agged_manager(self, mgr) -> Series: ...

    def _get_data_to_aggregate(
        self, *, numeric_only: bool = False, name: str | None = None
    ): ...

    _agg_examples_doc = dedent(
        """
    Examples
    --------
    >>> s = pd.Series([1, 2, 3, 4])

    >>> s
    0    1
    1    2
    2    3
    3    4
    dtype: int64

    >>> s.groupby([1, 1, 2, 2]).min()
    1    1
    2    3
    dtype: int64

    >>> s.groupby([1, 1, 2, 2]).agg('min')
    1    1
    2    3
    dtype: int64

    >>> s.groupby([1, 1, 2, 2]).agg(['min', 'max'])
       min  max
    1    1    2
    2    3    4

    The output column names can be controlled by passing
    the desired column names and aggregations as keyword arguments.

    >>> s.groupby([1, 1, 2, 2]).agg(
    ...     minimum='min',
    ...     maximum='max',
    ... )
       minimum  maximum
    1        1        2
    2        3        4

    .. versionchanged:: 1.3.0

        The resulting dtype will reflect the return value of the aggregating function.

    >>> s.groupby([1, 1, 2, 2]).agg(lambda x: x.astype(float).min())
    1    1.0
    2    3.0
    dtype: float64
    """
    )
    @Appender(
        _apply_docs["template"].format(
            input="series", examples=_apply_docs["series_examples"]
        )
    )
    def apply(self, func, *args, **kwargs) -> Series:
        return super().apply(func, *args, **kwargs)

    @doc(_agg_template_series, examples=_agg_examples_doc, klass="Series")
    def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs):
        ...  # byte-compiled body not recoverable from the dump

    agg = aggregate

    def _python_agg_general(self, func, *args, **kwargs): ...

    def _aggregate_multiple_funcs(self, arg, *args, **kwargs) -> DataFrame: ...

    def _wrap_applied_output(
        self,
        data: Series,
        values: list[Any],
        not_indexed_same: bool = False,
        is_transform: bool = False,
    ) -> DataFrame | Series:
        """
        Wrap the output of SeriesGroupBy.apply into the expected result.

        Parameters
        ----------
        data : Series
            Input data for groupby operation.
        values : List[Any]
            Applied output for each group.
        not_indexed_same : bool, default False
            Whether the applied outputs are not indexed the same as the group axes.

        Returns
        -------
        DataFrame or Series
        rr�r�T)�future_stack��not_indexed_same�is_transform)�datar�r_)�lenr�r�r�r]r�r_rkr�r�r��_reindex_output�stackr6r)�_concat_objectsr��_insert_inaxis_grouperr5)
        """
        ...

    def _aggregate_named(self, func, *args, **kwargs): ...

    __examples_series_doc = dedent(
        """
    >>> ser = pd.Series([390.0, 350.0, 30.0, 20.0],
    ...                 index=["Falcon", "Falcon", "Parrot", "Parrot"],
    ...                 name="Max Speed")
    >>> grouped = ser.groupby([1, 1, 2, 2])
    >>> grouped.transform(lambda x: (x - x.mean()) / x.std())
        Falcon    0.707107
        Falcon   -0.707107
        Parrot    0.707107
        Parrot   -0.707107
        Name: Max Speed, dtype: float64

    Broadcast result of the transformation

    >>> grouped.transform(lambda x: x.max() - x.min())
    Falcon    40.0
    Falcon    40.0
    Parrot    10.0
    Parrot    10.0
    Name: Max Speed, dtype: float64

    >>> grouped.transform("mean")
    Falcon    370.0
    Falcon    370.0
    Parrot     25.0
    Parrot     25.0
    Name: Max Speed, dtype: float64

    .. versionchanged:: 1.3.0

    The resulting dtype will reflect the return value of the passed ``func``,
    for example:

    >>> grouped.transform(lambda x: x.astype(int).max())
    Falcon    390
    Falcon    390
    Parrot     30
    Parrot     30
    Name: Max Speed, dtype: int64
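
    A string alias that already produces one value per row, such as
    ``"cumsum"``, is applied group-wise and returned as-is (a sketch reusing
    ``grouped`` from above):

    >>> grouped.transform("cumsum")
    Falcon    390.0
    Falcon    740.0
    Parrot     30.0
    Parrot     50.0
    Name: Max Speed, dtype: float64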
    """
    )
    @Substitution(klass="Series", example=__examples_series_doc)
    @Appender(_transform_template)
    def transform(self, func, *args, engine=None, engine_kwargs=None, **kwargs):
        return self._transform(
            func, *args, engine=engine, engine_kwargs=engine_kwargs, **kwargs
        )

    def _cython_transform(
        self, how: str, numeric_only: bool = False, axis: AxisInt = 0, **kwargs
    ): ...

    def _transform_general(
        self, func, engine, engine_kwargs, *args, **kwargs
    ) -> Series: ...

    def filter(self, func, dropna: bool = True, *args, **kwargs):
        """
        Filter elements from groups that don't satisfy a criterion.

        Elements from groups are filtered if they do not satisfy the
        boolean criterion specified by func.

        Parameters
        ----------
        func : function
            Criterion to apply to each group. Should return True or False.
        dropna : bool
            Drop groups that do not pass the filter. True by default; if False,
            groups that evaluate False are filled with NaNs.

        Returns
        -------
        Series

        Notes
        -----
        Functions that mutate the passed object can produce unexpected
        behavior or errors and are not supported. See :ref:`gotchas.udf-mutation`
        for more details.

        Examples
        --------
        >>> df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
        ...                           'foo', 'bar'],
        ...                    'B' : [1, 2, 3, 4, 5, 6],
        ...                    'C' : [2.0, 5., 8., 1., 2., 9.]})
        >>> grouped = df.groupby('A')
        >>> df.groupby('A').B.filter(lambda x: x.mean() > 3.)
        1    2
        3    4
        5    6
        Name: B, dtype: int64
        c�(��t|���i���Srx�r�r�s ���rVr�z&SeriesGroupBy.filter.<locals>.<lambda>ds��� 0���4� 0�$� A�&� ArUc����|g���i���SrxrTr�s ���rVr�z&SeriesGroupBy.filter.<locals>.<lambda>fs����Q� 8�� 8�� 8rUc�2���|�}t|�xr|Srx)r")r��b�wrappers  �rV�true_and_notnaz,SeriesGroupBy.filter.<locals>.true_and_notnais�����
�A���8�>��!rUr�z'the filter must return a boolean resultN)�return�bool)
r�r�r�r�rir��
_get_index�
ValueErrorrl�
_apply_filter)rar{�dropnar|r}rr_r��indicesr�filteredrs ` ``      @rV�filterzSeriesGroupBy.filter=s����L�d�C� �A�G�8�G�	"�		P�$(�=�=�#=�#=��-�-�D�I�I�$>�$��$�K�D�%�"�%�(�	����%�$�
���%�%�g�v�6��������I�&�	P��E�F�C�O��	P�s)�4B�#B�B�B�B?�.B:�:B?c���|jj\}}}|jj}t	j
||d��\}}|jjr|dk\}||}||}t||g|t|�fd|��}	|r |	dk\}|j�r
||}|	|}	t|	d�}tj|||��}
t|
�}
|jj}|jj|
||jj ��}|j"s*|j%|�}t't|��|_|j+|d��S)	a�
        Return number of unique elements in the group.

        Returns
        -------
        Series
            Number of unique values within each group.

        Examples
        --------
        For SeriesGroupBy:

        >>> lst = ['a', 'a', 'b', 'b']
        >>> ser = pd.Series([1, 2, 3, 3], index=lst)
        >>> ser
        a    1
        a    2
        b    3
        b    3
        dtype: int64
        >>> ser.groupby(level=0).nunique()
        a    2
        b    1
        dtype: int64

        For Resampler:

        >>> ser = pd.Series([1, 2, 3, 3], index=pd.DatetimeIndex(
        ...                 ['2023-01-01', '2023-01-15', '2023-02-01', '2023-02-15']))
        >>> ser
        2023-01-01    1
        2023-01-15    2
        2023-02-01    3
        2023-02-15    3
        dtype: int64
        >>> ser.resample('MS').nunique()
        2023-01-01    2
        2023-02-01    1
        Freq: MS, dtype: int64
        F)�use_na_sentinel�sortr)�labels�shaper!�xnull�first)�	minlengthr�)�
fill_value)r��
group_infor]r�r#�	factorize�has_dropped_nar7r�r�rr
�bincountrr�r�r_r�r�r5r�r�)
rar�ids�_r�r��codes�uniques�mask�group_indexr��rir�s
             rV�nuniquezSeriesGroupBy.nunique{s\��R�-�-�2�2���Q���h�h����#�-�-�c�6�PU�V���w��=�=�'�'��!�8�D��d�)�C��$�K�E�%���<��C��L�)���	
����!�#�D���{�{�}��$�i��)�$�/���+�w�/���k�k�#�t�e�*��8���3���
�]�]�
'�
'��%)�X�X�%:�%:��r����
�
�&;�&
���}�}��0�0��8�F�(��V��5�F�L��#�#�F�q�#�9�9rUc�(��t�|�|||��S)N)�percentiles�include�exclude)ry�describe)rar5r6r7r~s    �rVr8zSeriesGroupBy.describe�s!����w��#�W�g� �
�	
rUc���+�,�|rdnd}|�|j||||��}||_|Sddlm}ddlm}	|jj\}
}}|jj}|jj|jjgz}
t|jt�s|�Stj|�s>|j!t"j$||||��}||_|
|j&_
|S|
dk7}|
|||}}
|�t)j*|d	�
�\}}d�}nc|	t#|d�
�|d	��}t-d|j�}|j.}|j1|j2d	|j4��}d�}t|jt6�r=t-t8|�}tj:|j<|j>|
f�}ntj:||
f�}|
|||}}
dtj@|
dd|
ddk7�dz}tjBd|f}tE|
�s|}||tGdd��||tGdd��k7}tjBd	|f}tE|�s|}d	||<tjHtj@tjB|d	f�d�}tKtjLtjNjQ||���}|jjR}|D�cgc]
}||���c}|||�gz}|jjTD�cgc]}|jV��c}|gz} |r2|ddk7}|jY�rd}n|||D�cgc]}||��	}}}|r�|j[d�}tjHtjB|tE|
�f�}!|r5|
|dk(}"tjNj]|!|"d�||!�|}#n||!�}#||#z}|r>|�<|r|
||n|
|}$tj:|r|n||$f�}|||d|c}|d<|��gtj^tE|�d���+|ddD]#}�+tjBd	|dd|ddk7fz�+�%�+ja�tE| d�c}%�,tjLtjb|%��,�tjdtjb�,�|%�g}&�+jg�dz
|dg}'||&|'dd��\}}|�tjh|dk7||d�}|r0tj:|r|n||&df�}|||&d|c}|&d<d�+�,fd�}(|ddD�)cgc]
})|(|)���}})|jk|&d�tm| ||
d��}*to|j�rtq|�}|jjs||*|��}|jts|jw�}|Scc}wcc}wcc}wcc})w)N�
proportion�count)�	normalizer!�	ascendingrr)�get_join_indexers)�cut)r<r!r=�bins���T)r!c��||SrxrT��lab�incs  rVr�z,SeriesGroupBy.value_counts.<locals>.<lambda>�s��C��HrUF��copy)�include_lowestrG)�
allow_fillr'c�:�||jjdS)NrA)�_multiindexr.rCs  rVr�z,SeriesGroupBy.value_counts.<locals>.<lambda>	s��C��H�$8�$8�$>�$>�r�$BrUr�)�repeats�floatrr�left)r!rc�6��tj|���Srx)r
�repeat)�	lev_codes�diff�nbins ��rV�build_codesz/SeriesGroupBy.value_counts.<locals>.build_codes]s����y�y��4��$�7�7rU)�levelsr.�names�verify_integrityr�)rQ�
np.ndarrayrrX)<�
_value_countsr_�pandas.core.reshape.merger>�pandas.core.reshape.tiler?r�r(r]r�rVr�rkrr
�iterablerzr6�value_countsr�r#r)r�
categories�taker.�	_na_valuerr�lexsortrN�right�nonzero�r_r��slicerRrrP�add�reduceat�reconstructed_codes�	groupings�_group_index�all�astype�at�zeros�sum�arange�tile�cumsum�whererr3rrr�r�r�)-rar<r!r=r@rr_r�r>r?r,r-r��index_namesrnr0rD�lev�llab�cat_ser�cat_obj�lab_interval�sorter�	idchangesr��lchangesrErc�repr.�level_codes�pingrU�d�m�acc�cat�ncatrNrbrTrQ�mirRrSs-                                           @@rVr]zSeriesGroupBy.value_counts�s	��� )�|�g���<��'�'�#�$�)�F�(��F��F�K��M�?�0��M�M�,�,�	��Q���h�h�����m�m�)�)�T�X�X�]�]�O�;���c�i�i�!1�2���R�[�[��%6�
�*�*��#�#�#��#����C��C�H�)�C�I�I�O��J��b�y���t�9�c�$�i�S���<�!�+�+�C�d�;�H�C��,�D��&��5�1�4��M�G��=�'�/�/�:�G��$�$�C��(�(��
�
���=�=���C�
C�D��c�i�i��/���#�.�L��Z�Z��!2�!2�L�4F�4F�� L�M�F��Z�Z��c�
�+�F��v�;��F��S����
�
�3�q�r�7�c�#�2�h�#6�7��:�:�	��e�e�A�y�L�!���3�x��C���U�1�d�^�,��S�%��b�/�0J�J���e�e�D�(�N�#���3�x��C���C���g�g�b�j�j����s�D�y�!1�2�1�5�6���b�i�i�������c�)B�C���
�
�1�1��5:�;�U�k��[�!�U�;�t�C��~�>N�N��04�
�
�0G�0G�H�0G��$�#�#�0G�H�C�5�P�����9��?�D��x�x�z��� ��Y�e�(T�e�{��T�):�e�U�(T����*�*�W�%�C�������c�3�s�8�m�,�-�A����r�	�N�����	�	�!�Q��#��!�f�T�l���!�f���3�J�C��D�L�$*�#�c�(�4�.��C��C��Z�Z�	���t�S� A�B�F� ��[�%��)�F�*;�N�C��r�����8�8�C��H�F�3�D�$�S�b�z������d�K���O�{�3�B�7G�$G�G�H�H�� *�����S����_�J�D�$��I�I�b�i�i��o�t�4�b�g�g�b�i�i��o�t�6T�U�D��[�[�]�Q�&��b�	�2�E�
'��e�%�V��F�A�s����h�h�s�b�y�#�c�(�A�6������I�S�C�4��a��$I�J�� #�F��T�"�X�f�-=�
��T�"�X�
8�>C�3�B�Z�H�Z�	�[��+�Z�E�H��L�L��b��"�
���k�E�
���C�I�I�&��s�#�C����&�&�s�"�4�&�@���}�}��'�'�)�F��
��I<��H��)U��`Is�1Y�&Y�%Y�Y c	���tjt|�j�dt|j�j�d�t
t
���|jd||||||��}|S)a�
        Fill NA/NaN values using the specified method within groups.

        .. deprecated:: 2.2.0
            This method is deprecated and will be removed in a future version.
            Use the :meth:`.SeriesGroupBy.ffill` or :meth:`.SeriesGroupBy.bfill`
            for forward or backward filling instead. If you want to fill with a
            single value, use :meth:`Series.fillna` instead.

        Parameters
        ----------
        value : scalar, dict, Series, or DataFrame
            Value to use to fill holes (e.g. 0), alternately a
            dict/Series/DataFrame of values specifying which value to use for
            each index (for a Series) or column (for a DataFrame).  Values not
            in the dict/Series/DataFrame will not be filled. This value cannot
            be a list. Users wanting to use the ``value`` argument and not ``method``
            should prefer :meth:`.Series.fillna` as this
            will produce the same result and be more performant.
        method : {{'bfill', 'ffill', None}}, default None
            Method to use for filling holes. ``'ffill'`` will propagate
            the last valid observation forward within a group.
            ``'bfill'`` will use next valid observation to fill the gap.
        axis : {0 or 'index', 1 or 'columns'}
            Unused, only for compatibility with :meth:`DataFrameGroupBy.fillna`.
        inplace : bool, default False
            Broken. Do not set to True.
        limit : int, default None
            If method is specified, this is the maximum number of consecutive
            NaN values to forward/backward fill within a group. In other words,
            if there is a gap with more than this number of consecutive NaNs,
            it will only be partially filled. If method is not specified, this is the
            maximum number of entries along the entire axis where NaNs will be
            filled. Must be greater than 0 if not None.
        downcast : dict, default is None
            A dict of item->dtype of what to downcast if possible,
            or the string 'infer' which will try to downcast to an appropriate
            equal type (e.g. float64 to int64 if possible).

        Returns
        -------
        Series
            Object with missing values filled within groups.

        See Also
        --------
        ffill : Forward fill values within a group.
        bfill : Backward fill values within a group.

        Examples
        --------
        For SeriesGroupBy:

        >>> lst = ['cat', 'cat', 'cat', 'mouse', 'mouse']
        >>> ser = pd.Series([1, None, None, 2, None], index=lst)
        >>> ser
        cat    1.0
        cat    NaN
        cat    NaN
        mouse  2.0
        mouse  NaN
        dtype: float64
        >>> ser.groupby(level=0).fillna(0, limit=1)
        cat    1.0
        cat    0.0
        cat    NaN
        mouse  2.0
        mouse  0.0
        dtype: float64
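
        For plain forward or backward filling within each group, the
        non-deprecated spelling is :meth:`.SeriesGroupBy.ffill` /
        :meth:`.SeriesGroupBy.bfill` (a sketch reusing ``ser`` from above):

        >>> ser.groupby(level=0).ffill(limit=1)
        cat    1.0
        cat    1.0
        cat    NaN
        mouse  2.0
        mouse  2.0
        dtype: float64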
        ��.fillna is deprecated and will be removed in a future version. Use obj.ffill() or obj.bfill() for forward or backward filling instead. If you want to fill with a single value, use �.fillna insteadr��fillna��value�methodr��inplace�limit�downcast�r�r�rmrOr]r�r�
        """
        # Deprecated: emits a FutureWarning and dispatches to
        # self._op_via_apply("fillna", ...).
        ...

    def take(self, indices, axis=lib.no_default, **kwargs) -> Series:
        """
        Return the elements in the given *positional* indices in each group.

        This means that we are not indexing according to actual values in
        the index attribute of the object. We are indexing according to the
        actual position of the element in the object.

        If a requested index does not exist for some group, this method will raise.
        To get similar behavior that ignores indices that don't exist, see
        :meth:`.SeriesGroupBy.nth`.

        Parameters
        ----------
        indices : array-like
            An array of ints indicating which positions to take in each group.
        axis : {0 or 'index', 1 or 'columns', None}, default 0
            The axis on which to select elements. ``0`` means that we are
            selecting rows, ``1`` means that we are selecting columns.
            For `SeriesGroupBy` this parameter is unused and defaults to 0.

            .. deprecated:: 2.1.0
                For axis=1, operate on the underlying object instead. Otherwise
                the axis keyword is not necessary.

        **kwargs
            For compatibility with :meth:`numpy.take`. Has no effect on the
            output.

        Returns
        -------
        Series
            A Series containing the elements taken from each group.

        See Also
        --------
        Series.take : Take elements from a Series along an axis.
        Series.loc : Select a subset of a DataFrame by labels.
        Series.iloc : Select a subset of a DataFrame by positions.
        numpy.take : Take elements from an array along an axis.
        SeriesGroupBy.nth : Similar to take, won't raise if indices don't exist.

        Examples
        --------
        >>> df = pd.DataFrame([('falcon', 'bird', 389.0),
        ...                    ('parrot', 'bird', 24.0),
        ...                    ('lion', 'mammal', 80.5),
        ...                    ('monkey', 'mammal', np.nan),
        ...                    ('rabbit', 'mammal', 15.0)],
        ...                   columns=['name', 'class', 'max_speed'],
        ...                   index=[4, 3, 2, 1, 0])
        >>> df
             name   class  max_speed
        4  falcon    bird      389.0
        3  parrot    bird       24.0
        2    lion  mammal       80.5
        1  monkey  mammal        NaN
        0  rabbit  mammal       15.0
        >>> gb = df["name"].groupby([1, 1, 2, 2, 2])

        Take elements at positions 0 and 1 along the axis 0 in each group (default).

        >>> gb.take([0, 1])
        1  4    falcon
           3    parrot
        2  2      lion
           1    monkey
        Name: name, dtype: object

        We may take elements using negative integers for positive indices,
        starting from the end of the object, just like with Python lists.

        >>> gb.take([-1, -2])
        1  3    parrot
           4    falcon
        2  0    rabbit
           1    monkey
        Name: name, dtype: object
        """
        ...

    def skew(
        self,
        axis: Axis | lib.NoDefault = lib.no_default,
        skipna: bool = True,
        numeric_only: bool = False,
        **kwargs,
    ) -> Series:
        """
        Return unbiased skew within groups.

        Normalized by N-1.

        Parameters
        ----------
        axis : {0 or 'index', 1 or 'columns', None}, default 0
            Axis for the function to be applied on.
            This parameter is only for compatibility with DataFrame and is unused.

            .. deprecated:: 2.1.0
                For axis=1, operate on the underlying object instead. Otherwise
                the axis keyword is not necessary.

        skipna : bool, default True
            Exclude NA/null values when computing the result.

        numeric_only : bool, default False
            Include only float, int, boolean columns. Not implemented for Series.

        **kwargs
            Additional keyword arguments to be passed to the function.

        Returns
        -------
        Series

        See Also
        --------
        Series.skew : Return unbiased skew over requested axis.

        Examples
        --------
        >>> ser = pd.Series([390., 350., 357., np.nan, 22., 20., 30.],
        ...                 index=['Falcon', 'Falcon', 'Falcon', 'Falcon',
        ...                        'Parrot', 'Parrot', 'Parrot'],
        ...                 name="Max Speed")
        >>> ser
        Falcon    390.0
        Falcon    350.0
        Falcon    357.0
        Falcon      NaN
        Parrot     22.0
        Parrot     20.0
        Parrot     30.0
        Name: Max Speed, dtype: float64
        >>> ser.groupby(level=0).skew()
        Falcon    1.525174
        Parrot    1.457863
        Name: Max Speed, dtype: float64
        >>> ser.groupby(level=0).skew(skipna=False)
        Falcon         NaN
        Parrot    1.457863
        Name: Max Speed, dtype: float64
        r�r��skipnarfc�2�td|j�����Nz"'skew' is not supported for dtype=�rlrk�r]s rV�altzSeriesGroupBy.skew.<locals>.alts����@�����L�M�MrU�r�r�rf��skew�r�
no_defaultr��_cython_agg_general�rar�r�rfr}r�r�s       rVr�zSeriesGroupBy.skew's���~�3�>�>�!��D��1�9�'�T�'�'�����)�	�
��F��M�	N�
(�t�'�'��
��F��
�IO�
�	
rUc��t|�}|Srx�r-�rar�s  rV�plotzSeriesGroupBy.plot|����T�"���
rUc�~�ttj||��}|j}|j	||d��}|S�N)�n�keepT)r�)rr6�nlargestri�_python_apply_general�rar�r�r�r�r�s      rVr�zSeriesGroupBy.nlargest�s?��
�F�O�O�q�t�4���(�(���+�+�A�t�d�+�K���
rUc�~�ttj||��}|j}|j	||d��}|Sr�)rr6�	nsmallestrir�r�s      rVr�zSeriesGroupBy.nsmallest�sA��
�F�$�$���5���(�(���+�+�A�t�d�+�K���
rUc�*�|jd||��S)N�idxmin�r�r���_idxmax_idxmin�rar�r�s   rVr�zSeriesGroupBy.idxmin�����"�"�8�$�v�"�F�FrUc�*�|jd||��S)N�idxmaxr�r�r�s   rVr�zSeriesGroupBy.idxmax�r�rUc�0�|jd|||��}|S)N�corr)�otherr��min_periodsr�)rar�r�r�r�s     rVr�zSeriesGroupBy.corr�s)���#�#��%��K�$�
���
rUc�0�|jd|||��}|S)N�cov)r�r��ddofr�)rar�r�r�r�s     rVr�zSeriesGroupBy.cov�s)���#�#���K�d�$�
���
rUc�&�|jd��S)ax
        Return whether each group's values are monotonically increasing.

        Returns
        -------
        Series

        Examples
        --------
        >>> s = pd.Series([2, 1, 3, 4], index=['Falcon', 'Falcon', 'Parrot', 'Parrot'])
        >>> s.groupby(level=0).is_monotonic_increasing
        Falcon    False
        Parrot     True
        dtype: bool
        c��|jSrx)�is_monotonic_increasing�rns rVr�z7SeriesGroupBy.is_monotonic_increasing.<locals>.<lambda>��
��c�&A�&ArU�rz�ras rVr�z%SeriesGroupBy.is_monotonic_increasing����"�z�z�A�B�BrUc�&�|jd��S)ax
        Return whether each group's values are monotonically decreasing.

        Returns
        -------
        Series

        Examples
        --------
        >>> s = pd.Series([2, 1, 3, 4], index=['Falcon', 'Falcon', 'Parrot', 'Parrot'])
        >>> s.groupby(level=0).is_monotonic_decreasing
        Falcon     True
        Parrot    False
        dtype: bool
        c��|jSrx)�is_monotonic_decreasingr�s rVr�z7SeriesGroupBy.is_monotonic_decreasing.<locals>.<lambda>�r�rUr�r�s rVr�z%SeriesGroupBy.is_monotonic_decreasing�r�rUc�D�|j	d|||||||||	|
|d�|��}
|
S)N)�by�ax�grid�
xlabelsize�xrot�
ylabelsize�yrot�figsizer@�backend�legend��histr�)rar�r�r�r�r�r�r�r�r@r�r�r}r�s              rVr�zSeriesGroupBy.hist�sP�� $��#�#��
����!��!������
��
���
rUc�&�|jd��S)Nc��|jSrxrr�s rVr�z%SeriesGroupBy.dtype.<locals>.<lambda>s��c�i�irUr�r�s rVrkzSeriesGroupBy.dtypes���z�z�/�0�0rUc�(�|jd�}|S)ac
        Return unique values for each group.

        It returns unique values for each of the grouped values. Returned in
        order of appearance. Hash table-based unique, therefore does NOT sort.

        Returns
        -------
        Series
            Unique values for each of the grouped values.

        See Also
        --------
        Series.unique : Return unique values of Series object.

        Examples
        --------
        >>> df = pd.DataFrame([('Chihuahua', 'dog', 6.1),
        ...                    ('Beagle', 'dog', 15.2),
        ...                    ('Chihuahua', 'dog', 6.9),
        ...                    ('Persian', 'cat', 9.2),
        ...                    ('Chihuahua', 'dog', 7),
        ...                    ('Persian', 'cat', 8.8)],
        ...                   columns=['breed', 'animal', 'height_in'])
        >>> df
               breed     animal   height_in
        0  Chihuahua        dog         6.1
        1     Beagle        dog        15.2
        2  Chihuahua        dog         6.9
        3    Persian        cat         9.2
        4  Chihuahua        dog         7.0
        5    Persian        cat         8.8
        >>> ser = df.groupby('animal')['breed'].unique()
        >>> ser
        animal
        cat              [Persian]
        dog    [Chihuahua, Beagle]
        Name: breed, dtype: object
        �uniquer�r�s  rVr�zSeriesGroupBy.uniques��P�#�#�H�-���
rU)rbrCrr6)rfrr_�
str | NonerrE�rr6rx�rr)�FF)
r�r6r�z	list[Any]r�rr�rr�DataFrame | Series�Fr)rr�rfrr�r?)r{r	rr6�T�rr)rrr�Series | DataFrame)NNN)FTFNT)
r<rr!rr=rrrrr�)r�zobject | ArrayLike | Noner��FillnaOptions | Noner��Axis | None | lib.NoDefaultr�rr��
int | Noner�zdict | None | lib.NoDefaultrz
Series | None)rrFr��Axis | lib.NoDefaultrr6)r�r�r�rrfrrr6�rr-)�r%)r��intr�zLiteral['first', 'last', 'all']rr6)r�r�r�rrr6)�pearsonN)r�r6r�r@r�r�rr6�Nr�)r�r6r�r�r�r�rr6)NNTNNNNN�
NF)r�rr�r�r��float | Noner�r�r�r�r��tuple[int, int] | Noner@�int | Sequence[int]r�r�r�r)2rOrPrQrdrqr�_agg_examples_docrr0�formatrzrr/r��aggr�r�r�r��#_SeriesGroupBy__examples_series_docrr1r�rr
rr3r6r8r]rr�r�r_r��propertyr�rRr�r�r�r�r�r�r�r�r�rkr��
__classcell__�r~s@rVrXrX�s�����',���#��3=��	���.	�0��d��J��&�&��[�1B�%C�	'�	
��
4��
4�	�	�(9��J�Q��T�Q�K�Q�f�C�1�-�f"'�"�E0��E0��E0��	E0�
�E0�
�
E0�N�,#�'	�)��V��*?�@�
�!�"�,0��
�#�A�
�EF�M��M�&*�M�:A�M�"!��!�	�!�F<�|J:�X	�����
��
� ���
��
_��_��_��	_��
_�
�_�F,0�'+�,/�N�N�� �03���`�(�`�%�`�*�	`�
�`��
`�.�`�
�`�J&)�^�^�U��U�#�U�

�U�r&)�^�^��"�	S
�"�S
��S
��	S
�
�
S
�j�����	�	������	����	 �	 �!�BI���� ?��	��"��	��	�	�	!�	!�"�BI���� ?��	��#��	����	�	��+.�>�>�$�G�(�G�CG�G�	�G� �G�
	����	�	��+.�>�>�$�G�(�G�CG�G�	�G� �G�
	����	�	��%.�"&�		��	�"�	� �		�

�	��	�	����	�	��PQ����*4��CM��	�����C��C�$�C��C�$	����	�	�����!%�!�!%�!�*.�$&�"����	�
���
�����(��"��������@�����	�	��1���1�)rUrXc���eZdZed�Zeeed��d)ddd�d��ZeZd�Z	d*d�Z
		d+							d,d
�Z										d-d�Z		d.							d/d�Z
d
�Zed�Zede��ee�ddd�d���Zd�Zd0d�Zd1d2d�Zd3�fd�Zd)d4d�Zd	dd�					d5d�Zd6d�Zd*d�Zd1d7d�Zej<dd	f							d8d�Zej<dd	f							d8d�Z e!Z"					d9											d:d�Z#ddej<d	dej<f											d;d�Z$ej<f					d<d �Z%ej<dd	f							d8d!�Z&e'ee(jRjT�d=d"���Z)ee(jVjT�			d>							d?d$��Z+ee(jXjT�			d@							dAd%��Z,ee(jZjT�															dB																									dCd&��Z-e'ee(j\jT�dDd'���Z.ee(j^jT�ej<d	d#d	f											dEd(��Z/�xZ0S)F�DataFrameGroupBya�
    Examples
    --------
    >>> data = {"A": [1, 1, 2, 2],
    ...         "B": [1, 2, 3, 4],
    ...         "C": [0.362838, 0.227877, 1.267767, -0.562860]}
    >>> df = pd.DataFrame(data)
    >>> df
       A  B         C
    0  1  1  0.362838
    1  1  2  0.227877
    2  2  3  1.267767
    3  2  4 -0.562860

    The aggregation is for each column.

    >>> df.groupby('A').agg('min')
       B         C
    A
    1  1  0.227877
    2  3 -0.562860

    Multiple aggregations

    >>> df.groupby('A').agg(['min', 'max'])
        B             C
      min max       min       max
    A
    1   1   2  0.227877  0.362838
    2   3   4 -0.562860  1.267767

    Select a column for aggregation

    >>> df.groupby('A').B.agg(['min', 'max'])
       min  max
    A
    1    1    2
    2    3    4

    User-defined function for aggregation

    >>> df.groupby('A').agg(lambda x: sum(x) + 2)
        B	       C
    A
    1	5	2.590715
    2	9	2.704907

    Different aggregations per column

    >>> df.groupby('A').agg({'B': ['min', 'max'], 'C': 'sum'})
        B             C
      min max       sum
    A
    1   1   2  0.590715
    2   3   4  0.704907

    To control the output names with different aggregations per column,
    pandas supports "named aggregation"

    >>> df.groupby("A").agg(
    ...     b_min=pd.NamedAgg(column="B", aggfunc="min"),
    ...     c_sum=pd.NamedAgg(column="C", aggfunc="sum")
    ... )
       b_min     c_sum
    A
    1      1  0.590715
    2      3  0.704907

    - The keywords are the *output* column names
    - The values are tuples whose first element is the column to select
      and the second element is the aggregation to apply to that column.
      Pandas provides the ``pandas.NamedAgg`` namedtuple with the fields
      ``['column', 'aggfunc']`` to make it clearer what the arguments are.
      As usual, the aggregation can be a callable or a string alias.

    See :ref:`groupby.aggregate.named` for more.

    .. versionchanged:: 1.3.0

        The resulting dtype will reflect the return value of the aggregating function.

    >>> df.groupby("A")[["B"]].agg(lambda x: x.astype(float).min())
          B
    A
    1   1.0
    2   3.0
    r)rNr�c�J�t|fi|��\}}}}t|�}t|�r
||d<||d<t||||��}	|	j	�}
t|�s+|
�)|jst|�r|
j�S|
S|r:tt|
�}
|
jdd�|f}
tt|
�}
||
_|
��d|vr|d=|d=t|�r|j|g|��d|i|��S|jjdkDr|j |g|��i|��S|s|r|j"|g|��i|��}
n||j$dk(r|j#|�}
|
St||gdi��}	|j	�}
tt|
�}
|j&jj)�|
_|js*|j/|
�}
t1t3|
��|
_|
S#t*$r)}dt-|�vr�|j#|�}
Yd}~�ed}~wwxYw)Nr�r�)r|r}r�rTzNo objects to concatenate)r&r%r8r$r�rr�rr�rr)�ilocr�r�r�r�r��_aggregate_framer�rirGrr�r�r5r�r�)
rar{r�r�r|r}r�r��order�opr��gbars
             rVr�zDataFrameGroupBy.aggregate�sA��+;�D�+K�F�+K�(�
�D�'�5�#�D�)���6�"� &�F�8��&3�F�?�#�
�$��4��
?��������D�!�f�&8��=�=�\�$�%7��)�)�+�+��
�
��)�V�,�F��[�[��E��*�F��)�V�,�F�
%�F�N��>��6�!��8�$��?�+��v�&�1�t�1�1�����/<��@F����}�}�"�"�Q�&�/�t�/�/��F�t�F�v�F�F���/��.�.�t�E�d�E�f�E�����a���.�.�t�4���
�#�4�$��b��D��N� �W�W�Y�F� "�)�V�4�F�%)�%>�%>�%F�%F�%K�%K�%M�F�N��}�}��0�0��8�F�(��V��5�F�L��
��+"�	9�2�#�c�(�B��"�2�2�4�8�F��	9�s�/G0�0	H"�9H�H"c������}tj���|�k7r tj�}t|||����fd�}|jdk(r|j||jd��S|j}|jdk(r|j}t|j�s|j||j�Si}t|j��D])\}	\}
}|jj||�}|||	<�+|j j#|�}
|jj%d��|
_|j'|
�S)Nc����|g���i���SrxrTr�s ���rVr�z6DataFrameGroupBy._python_agg_general.<locals>.<lambda>�r�rUrT��is_aggr�F)�deep)r�r�r�r(r�r��
_selected_objrir��Tr�r�r�r�r�r�r]r�rGr�)rar{r|r}r�r�r�r]r�r�r_rnr�r�s ```          rVr�z$DataFrameGroupBy._python_agg_general�s3����	��"�"�4�(������,�,�T�2�E�"�4��E�:�.���<�<�1���-�-�a��1C�1C�D�-�Q�Q��'�'���9�9��>��%�%�C��3�;�;���-�-�a��1C�1C�D�D�')�� )�#�)�)�+� 6��C��$���]�]�-�-�c�1�5�F� �F�3�K�!7��h�h�#�#�F�+���k�k�&�&�E�&�2����+�+�C�0�0rUc��|jjdk7rtd��|j}i}|jj	||j
�D]\}}||g|��i|��}|||<�|jj}	|jd|j
z
}
|jj||
|	��}|j
dk(r|j}|S)Nr�zNumber of keys must be 1�r�r�r)r�r��AssertionErrorrir�r�r�r\r]r�r
)rar{r|r}r]r�r_�grp_df�fresr��other_axrcs            rVrz!DataFrameGroupBy._aggregate_frames����=�=���!�#� �!;�<�<��'�'��79�� �M�M�6�6�s�D�I�I�F�L�D�&���0��0��0�D��F�4�L�G��}�}�1�1���8�8�A��	�	�M�*���h�h�#�#�F�(�L�#�Q���9�9��>��%�%�C��
rUFc���t|�dk(rk|r
|j}n|jj}|jj||j��}|j|jd��}|Sttj|�d�}|�|jj�St|t�r|j|||��S|jr|jjnd}t|t j"t$f�rUt'|j(�st+|j(�}	n|j(}	|jj-|||	��St|t.�sd|jr|jj-||��S|jj||j(g��}|j1|�}|S|j3|||||�S)	NrrFrFr�r�r�)r�)r�r�r�r�r]r�r�rl�dtypes�nextr��not_noner�r)r�r�r
�ndarrayr2r �
_selectionr��_constructor_slicedr6r��_wrap_applied_output_series)
rar�r�r�r�r�r��first_not_none�	key_indexr_s
          rVr�z%DataFrameGroupBy._wrap_applied_outputs����v�;�!��� �J�J�	� �M�M�6�6�	��X�X�*�*��D�L�L�*�Q�F��]�]�4�;�;�U�]�;�F��M��c�l�l�F�3�T�:���!��8�8�(�(�*�*�
��	�
2��'�'��!1�)�(��
�37�-�-�D�M�M�.�.�T�	��n�r�z�z�5�&9�:��t���/��T�_�_�-��
�����8�8�/�/��i�d�/�S�S��N�F�3�
�}�}��x�x�3�3�F�)�3�L�L����.�.�v����?P�.�Q���4�4�V�<���
��3�3�� �����
rUc�:�|j�}tdi|��}|D�cgc]}|�|n|��
}}td�|D��}	|	s|j|d|��St	j
|D�
cgc]}
t	j|
���c}
�}|jdk(ri|}|jj�}
|
j�[|D�
chc]}
|
j��}}
t|�dk(r4tt|��|
_
n|j}|}
|j}|jt k(r|j#�}|j$j'|||
��}|j(s|j+|�}|j-|�Scc}wcc}
wcc}
w)Nc3�4K�|]}|j���y�wrxr�r�s  rVr�z?DataFrameGroupBy._wrap_applied_output_series.<locals>.<genexpr>hs����+D�V��A�G�G�V�r�Tr�rr�rrT)�_construct_axes_dictr6r4r�r
�vstack�asarrayr�r�rGr_r�r�iterr
rkr��tolistr]r�r�r�r�)rar�r�rrr�r}�backupr��all_indexed_same�v�stacked_valuesr�r�rVr�s                rVrz,DataFrameGroupBy._wrap_applied_output_series\s��� �4�4�6���!�&�!��<B�C�F�q��
�!�F�2�F��C�+�+D�V�+D�D����'�'��!%�)�(��
����6�#B�6�a�B�J�J�q�M�6�#B�C���9�9��>��E�$�*�*�/�/�1�G��|�|�#�)/�0��A������0��u�:��?�#'��U��#4�G�L�"�(�(�E��G�+�-�-�N����6�)�+�2�2�4�N����&�&�~�U�G�&�T���}�}��0�0��8�F��#�#�F�+�+��KD��$C��1s�
F�*F�Fc� ����|dk(sJ��j|���}d���fd�}|j|�}|jd|jd��jj||j��}�j
|�}|S)Nrrec�D���jjd|�dfi���S)Nr�r�)r�r�)�bvaluesrr}ras ���rV�arr_funcz4DataFrameGroupBy._cython_transform.<locals>.arr_func�s-���2�4�=�=�2�2��W�c�1��06��
rUr�r[)r+r=rr=)rq�grouped_reduce�set_axisr\r]r^�_maybe_transpose_result)	rarrfr�r}rbr,�res_mgrr�s	``  `    rVrz"DataFrameGroupBy._cython_transform�s�����q�y��y��4�4�%�C�5�
��	��$�$�X�.������C�H�H�Q�K�(����/�/��g�l�l�/�K���-�-�f�5���
rUc���t|�r|j|g|��d|i|��Sddlm}g}|j}|j
j
||j��}	|j|g|��i|��\}
}	t|	�\}}
tj|
d|�	|j|
||
�\}}|
jdkDr)t|j |
|�}|j#|�	|	D]\\}}
|
jdk(r�tj|
d|�|
�}t|j |
|�}|j#|��^|jdk(r|j&n|j(}|jdk(rdnd}|||jd�	�}|j+||d�
�}|j-|�S#t$r}d}t|�|�d}~wwxYw#t$$rY��wxYw)Nr�rr�r�r_z3transform must return a scalar value for each groupr�F)r�rW)r�rG)r8rrr�rir�r�r��
_define_pathsrr�r��_choose_pathr�size�_wrap_transform_general_framer]r�
StopIterationr�r��reindexr	)rar{r�r�r|r}r��appliedr]�gen�	fast_path�	slow_pathr_r��pathr�rr��concat_index�
other_axisrs                     rVr
z#DataFrameGroupBy._transform_general�s����6�"�-�4�-�-�����+8��<B��
�	6����'�'���m�m�(�(��4�9�9�(�=��1�t�1�1�$�H��H��H��	�9�
	$��s�)�K�D�%�

���u�f�d�3�
/� �-�-�i��E�J�	��c�
�z�z�A�~�3�D�H�H�e�S�I�����s�#��K�D�%��z�z�Q������u�f�d�3��u�+�C�/����%��E�C��N�N�3���'+�i�i�1�n�s�{�{�#�)�)���)�)�q�.�Q�a�
��g�D�I�I��N��#�+�+�L�z�PU�+�V���-�-�l�;�;��/�
/�K�� ��o�3�.��
/���	��	�s*�9G!�G�	G�G�G�!	G-�,G-a
    >>> df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
    ...                           'foo', 'bar'],
    ...                    'B' : ['one', 'one', 'two', 'three',
    ...                           'two', 'two'],
    ...                    'C' : [1, 5, 5, 2, 5, 5],
    ...                    'D' : [2.0, 5., 8., 1., 2., 9.]})
    >>> grouped = df.groupby('A')[['C', 'D']]
    >>> grouped.transform(lambda x: (x - x.mean()) / x.std())
            C         D
    0 -1.154701 -0.577350
    1  0.577350  0.000000
    2  0.577350  1.154701
    3 -1.154701 -1.000000
    4  0.577350 -0.577350
    5  0.577350  1.000000

    Broadcast result of the transformation

    >>> grouped.transform(lambda x: x.max() - x.min())
        C    D
    0  4.0  6.0
    1  3.0  8.0
    2  4.0  6.0
    3  3.0  8.0
    4  4.0  6.0
    5  3.0  8.0

    >>> grouped.transform("mean")
        C    D
    0  3.666667  4.0
    1  4.000000  5.0
    2  3.666667  4.0
    3  4.000000  5.0
    4  3.666667  4.0
    5  4.000000  5.0

    .. versionchanged:: 1.3.0

    The resulting dtype will reflect the return value of the passed ``func``,
    for example:

    >>> grouped.transform(lambda x: x.astype(int).max())
    C  D
    0  5  8
    1  5  9
    2  5  8
    3  5  9
    4  5  8
    5  5  9
    r�c�4�|j|g|��||d�|��Sr�r�r�s      rVr�zDataFrameGroupBy.transformr�rUc�v�����t�t�r���fd�}����fd�}||fS���fd�}����fd�}||fS)Nc�(��t|���i���Srxr�r�r|r{r}s ���rVr�z0DataFrameGroupBy._define_paths.<locals>.<lambda>s���&:�g�e�T�&:�D�&K�F�&KrUc�H��|j���fd��j��S)Nc�(��t|���i���Srxrr�s ���rVr�zBDataFrameGroupBy._define_paths.<locals>.<lambda>.<locals>.<lambda>s���*�'�!�T�*�D�;�F�;rUr��rzr��r�r|r{r}ras ����rVr�z0DataFrameGroupBy._define_paths.<locals>.<lambda>s���e�k�k�;�$�)�)�'2�'rUc����|g���i���SrxrTrBs ���rVr�z0DataFrameGroupBy._define_paths.<locals>.<lambda>"s���d�5�&B�4�&B�6�&BrUc�H��|j���fd��j��S)Nc����|g���i���SrxrTr�s ���rVr�zBDataFrameGroupBy._define_paths.<locals>.<lambda>.<locals>.<lambda>$s���$�q�2�4�2�6�2rUr�rErFs ����rVr�z0DataFrameGroupBy._define_paths.<locals>.<lambda>#s���e�k�k�2����'2�'rU)r�r�)rar{r|r}r:r;s````  rVr2zDataFrameGroupBy._define_pathssB����d�C� �K�I��I��)�#�#�	C�I��I��)�#�#rUc��|}||�}|jdk(r||fS	||�}t|t�r)|j
j
|j
�sA||fSt|t�r)|jj
|j
�s||fS||fS|j
|�r|}||fS#t$r�t$r||fcYSwxYwr�)	r�r�	Exceptionr�r)r��equalsr6r�)rar:r;r�r<r��res_fasts       rVr3zDataFrameGroupBy._choose_path(s����������<�<�1����9��	� ��'�H��h�	�*��#�#�*�*�5�=�=�9��S�y� �
��&�
)��>�>�(�(����7��S�y� ���9���?�?�3���D��S�y���-�	���	���9��	�s�B4�4C
�C
Tc��g}|j}|jj||j��}|D]�\}}	tj|	d|�||	g|��i|��}
	|
j
�}
t|
�st|
�r;t|
�r0t|
�s�g|
s�j|j|j|����tdt|
�j �d���|j#||�S#t$rY��wxYw)a7
        Filter elements from groups that don't satisfy a criterion.

        Elements from groups are filtered if they do not satisfy the
        boolean criterion specified by func.

        Parameters
        ----------
        func : function
            Criterion to apply to each group. Should return True or False.
        dropna : bool
            Drop groups that do not pass the filter. True by default; if False,
            groups that evaluate False are filled with NaNs.

        Returns
        -------
        DataFrame

        Notes
        -----
        Each subframe is endowed the attribute 'name' in case you need to know
        which group you are working on.

        Functions that mutate the passed object can produce unexpected
        behavior or errors and are not supported. See :ref:`gotchas.udf-mutation`
        for more details.

        Examples
        --------
        >>> df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
        ...                           'foo', 'bar'],
        ...                    'B' : [1, 2, 3, 4, 5, 6],
        ...                    'C' : [2.0, 5., 8., 1., 2., 9.]})
        >>> grouped = df.groupby('A')
        >>> grouped.filter(lambda x: x['B'].mean() > 3.)
             A  B    C
        1  bar  2  5.0
        3  bar  4  1.0
        5  bar  6  9.0
        r�r_zfilter function returned a z, but expected a scalar bool)rr�r�r�r�r��squeeze�AttributeErrorrrr!r"rrrlrmrOr)rar{rr|r}rr]r9r_r�r�s           rVrzDataFrameGroupBy.filterLs��R��� � ���m�m�(�(��4�9�9�(�=���K�D�%�
���u�f�d�3��u�.�t�.�v�.�C�
��k�k�m��
�s�|�	�#��4��9���:�#��N�N�4�?�?�4�#8�9� �1�$�s�)�2D�2D�1E�F1�1���%�.�!�!�'�6�2�2��"�
��
�s�"C8�8	D�Dc���|jdk(rtd��t|t�rt	|�dkDrtd��t
�|�|�S)Nr�z'Cannot subset columns when using axis=1zRCannot subset columns with a tuple with more than one element. Use a list instead.)r�rr�r�r�ry�__getitem__)rar�r~s  �rVrRzDataFrameGroupBy.__getitem__�sT����9�9��>��F�G�G��c�5�!�c�#�h��l��&��
��w�"�3�'�'rUc�@�|dk(r�|�|j}t||j|j|j|j
|j||j|j|j|j|j��S|dk(r�|�|j|}t||j|j|j
|j||j|j|j|j|j��Std��)a
        sub-classes to define
        return a sliced object

        Parameters
        ----------
        key : string / list of selections
        ndim : {1, 2}
            requested ndim of result
        subset : object, default None
            subset to act on
        �)
r��level�grouper�
exclusions�	selectionr�r!�
group_keys�observedrr�)	rUrVrWrXr�r!rYrZrzinvalid ndim for _gotitem)r]rr�r�rUr�rWr�r!rYrZrrXr)rar��ndim�subsets    rV�_gotitemzDataFrameGroupBy._gotitem�s����1�9��~�����#���	�	��Y�Y��j�j��
�
��?�?������Y�Y��?�?�����{�{�
�

��Q�Y��~����#��� ���	�	��j�j��
�
��?�?������Y�Y��?�?�����{�{��
��8�9�9rUrec��|j}|jdk(r|jj}n|j}|r|j	�}|Sr�)rir�r
rj�get_numeric_data)rarfr_r]rbs     rVrqz'DataFrameGroupBy._get_data_to_aggregate�sF���'�'���9�9��>��%�%�*�*�C��(�(�C���&�&�(�C��
rUc�P�|jj||j��SrZ)r]r^r\)rarbs  rVrdz$DataFrameGroupBy._wrap_agged_manager�s ���x�x�-�-�c����-�A�ArUc�(�ddlm}|j}|j}t	|j�D��cgc]D\}}t|jdd�|f||j|j|j����F}}}|D�cgc]
}||���}	}t|	�s#tg||jj��}
n||	|d��}
|js*tt|
��|
_|j!|
�}
|
Scc}}wcc}w)Nrr�)rXrVrWrZ�r�r�r�)r�r�)rr�rir�r�rXrr�rWrZr�r)r�r�r5r�r�)rar{r�r]r��i�colname�sgbs�sgbr�r�s           rV�_apply_to_column_groupbysz*DataFrameGroupBy._apply_to_column_groupbys�s���5��'�'���+�+��(����4�	
�5�
��7�
�����A���!��
�
��?�?����
�5�	
�	
�)-�-���4��9���-��7�|��r�7�$�-�-�:T�:T�U�F��G�'��:�F��}�}�(��V��5�F�L��0�0��8�F��
��+	
��.s�A	D	�Dc���|jdk7r!|j�fd�|jd��S|j�fd��S)a�
        Return DataFrame with counts of unique elements in each position.

        Parameters
        ----------
        dropna : bool, default True
            Don't include NaN in the counts.

        Returns
        -------
        nunique: DataFrame

        Examples
        --------
        >>> df = pd.DataFrame({'id': ['spam', 'egg', 'egg', 'spam',
        ...                           'ham', 'ham'],
        ...                    'value1': [1, 5, 5, 2, 5, 5],
        ...                    'value2': list('abbaxy')})
        >>> df
             id  value1 value2
        0  spam       1      a
        1   egg       5      b
        2   egg       5      b
        3  spam       2      a
        4   ham       5      x
        5   ham       5      y

        >>> df.groupby('id').nunique()
              value1  value2
        id
        egg        1       1
        ham        1       2
        spam       2       1

        Check for rows with the same id but conflicting values:

        >>> df.groupby('id').filter(lambda g: (g.nunique() > 1).any())
             id  value1 value2
        0  spam       1      a
        3  spam       2      a
        4   ham       5      x
        5   ham       5      y
        rc�&��|j��Srx�r3�rfrs �rVr�z*DataFrameGroupBy.nunique.<locals>.<lambda>.s���C�K�K��/rUTr	c�&��|j��Srxrjrks �rVr�z*DataFrameGroupBy.nunique.<locals>.<lambda>1s���#�+�+�f�:MrU)r�r�rirg)rars `rVr3zDataFrameGroupBy.nunique�sK���Z�9�9��>��-�-�/��1J�1J�SW�.��
��-�-�.M�N�NrUc�,�|jd|||��S)a�
        Return index of first occurrence of maximum over requested axis.

        NA/null values are excluded.

        Parameters
        ----------
        axis : {{0 or 'index', 1 or 'columns'}}, default None
            The axis to use. 0 or 'index' for row-wise, 1 or 'columns' for column-wise.
            If axis is not provided, grouper's axis is used.

            .. versionchanged:: 2.0.0

            .. deprecated:: 2.1.0
                For axis=1, operate on the underlying object instead. Otherwise
                the axis keyword is not necessary.

        skipna : bool, default True
            Exclude NA/null values. If an entire row/column is NA, the result
            will be NA.
        numeric_only : bool, default False
            Include only `float`, `int` or `boolean` data.

            .. versionadded:: 1.5.0

        Returns
        -------
        Series
            Indexes of maxima along the specified axis.

        Raises
        ------
        ValueError
            * If the row/column is empty

        See Also
        --------
        Series.idxmax : Return index of the maximum element.

        Notes
        -----
        This method is the DataFrame version of ``ndarray.argmax``.

        Examples
        --------
        Consider a dataset containing food consumption in Argentina.

        >>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],
        ...                    'co2_emissions': [37.2, 19.66, 1712]},
        ...                   index=['Pork', 'Wheat Products', 'Beef'])

        >>> df
                        consumption  co2_emissions
        Pork                  10.51         37.20
        Wheat Products       103.11         19.66
        Beef                  55.48       1712.00

        By default, it returns the index for the maximum value in each column.

        >>> df.idxmax()
        consumption     Wheat Products
        co2_emissions             Beef
        dtype: object

        To return the index for the maximum value in each row, use ``axis="columns"``.

        >>> df.idxmax(axis="columns")
        Pork              co2_emissions
        Wheat Products     consumption
        Beef              co2_emissions
        dtype: object
        r��r�rfr�r��rar�r�rfs    rVr�zDataFrameGroupBy.idxmax3�&��\�"�"��4�l�6�#�
�	
rUc�,�|jd|||��S)a�
        Return index of first occurrence of minimum over requested axis.

        NA/null values are excluded.

        Parameters
        ----------
        axis : {{0 or 'index', 1 or 'columns'}}, default None
            The axis to use. 0 or 'index' for row-wise, 1 or 'columns' for column-wise.
            If axis is not provided, grouper's axis is used.

            .. versionchanged:: 2.0.0

            .. deprecated:: 2.1.0
                For axis=1, operate on the underlying object instead. Otherwise
                the axis keyword is not necessary.

        skipna : bool, default True
            Exclude NA/null values. If an entire row/column is NA, the result
            will be NA.
        numeric_only : bool, default False
            Include only `float`, `int` or `boolean` data.

            .. versionadded:: 1.5.0

        Returns
        -------
        Series
            Indexes of minima along the specified axis.

        Raises
        ------
        ValueError
            * If the row/column is empty

        See Also
        --------
        Series.idxmin : Return index of the minimum element.

        Notes
        -----
        This method is the DataFrame version of ``ndarray.argmin``.

        Examples
        --------
        Consider a dataset containing food consumption in Argentina.

        >>> df = pd.DataFrame({'consumption': [10.51, 103.11, 55.48],
        ...                    'co2_emissions': [37.2, 19.66, 1712]},
        ...                   index=['Pork', 'Wheat Products', 'Beef'])

        >>> df
                        consumption  co2_emissions
        Pork                  10.51         37.20
        Wheat Products       103.11         19.66
        Beef                  55.48       1712.00

        By default, it returns the index for the minimum value in each column.

        >>> df.idxmin()
        consumption                Pork
        co2_emissions    Wheat Products
        dtype: object

        To return the index for the minimum value in each row, use ``axis="columns"``.

        >>> df.idxmin(axis="columns")
        Pork                consumption
        Wheat Products    co2_emissions
        Beef                consumption
        dtype: object
        r�rnr�ros    rVr�zDataFrameGroupBy.idxmin�rprUc�,�|j|||||�S)a(
        Return a Series or DataFrame containing counts of unique rows.

        .. versionadded:: 1.4.0

        Parameters
        ----------
        subset : list-like, optional
            Columns to use when counting unique combinations.
        normalize : bool, default False
            Return proportions rather than frequencies.
        sort : bool, default True
            Sort by frequencies.
        ascending : bool, default False
            Sort in ascending order.
        dropna : bool, default True
            Don't include counts of rows that contain NA values.

        Returns
        -------
        Series or DataFrame
            Series if the groupby as_index is True, otherwise DataFrame.

        See Also
        --------
        Series.value_counts: Equivalent method on Series.
        DataFrame.value_counts: Equivalent method on DataFrame.
        SeriesGroupBy.value_counts: Equivalent method on SeriesGroupBy.

        Notes
        -----
        - If the groupby as_index is True then the returned Series will have a
          MultiIndex with one level per input column.
        - If the groupby as_index is False then the returned DataFrame will have an
          additional column with the value_counts. The column is labelled 'count' or
          'proportion', depending on the ``normalize`` parameter.

        By default, rows that contain any NA values are omitted from
        the result.

        By default, the result will be in descending order so that the
        first element of each group is the most frequently-occurring row.

        Examples
        --------
        >>> df = pd.DataFrame({
        ...     'gender': ['male', 'male', 'female', 'male', 'female', 'male'],
        ...     'education': ['low', 'medium', 'high', 'low', 'high', 'low'],
        ...     'country': ['US', 'FR', 'US', 'FR', 'FR', 'FR']
        ... })

        >>> df
                gender  education   country
        0       male    low         US
        1       male    medium      FR
        2       female  high        US
        3       male    low         FR
        4       female  high        FR
        5       male    low         FR

        >>> df.groupby('gender').value_counts()
        gender  education  country
        female  high       FR         1
                           US         1
        male    low        FR         2
                           US         1
                medium     FR         1
        Name: count, dtype: int64

        >>> df.groupby('gender').value_counts(ascending=True)
        gender  education  country
        female  high       FR         1
                           US         1
        male    low        US         1
                medium     FR         1
                low        FR         2
        Name: count, dtype: int64

        >>> df.groupby('gender').value_counts(normalize=True)
        gender  education  country
        female  high       FR         0.50
                           US         0.50
        male    low        FR         0.50
                           US         0.25
                medium     FR         0.25
        Name: proportion, dtype: float64

        >>> df.groupby('gender', as_index=False).value_counts()
           gender education country  count
        0  female      high      FR      1
        1  female      high      US      1
        2    male       low      FR      2
        3    male       low      US      1
        4    male    medium      FR      1

        >>> df.groupby('gender', as_index=False).value_counts(normalize=True)
           gender education country  proportion
        0  female      high      FR        0.50
        1  female      high      US        0.50
        2    male       low      FR        0.50
        3    male       low      US        0.25
        4    male    medium      FR        0.25
        )rY)rar\r<r!r=rs      rVr]zDataFrameGroupBy.value_counts�s��^�!�!�&�)�T�9�f�M�MrUc	���tjt|�j�dt|j�j�d�t
t
���|jd||||||��}|S)a)
        Fill NA/NaN values using the specified method within groups.

        .. deprecated:: 2.2.0
            This method is deprecated and will be removed in a future version.
            Use the :meth:`.DataFrameGroupBy.ffill` or :meth:`.DataFrameGroupBy.bfill`
            for forward or backward filling instead. If you want to fill with a
            single value, use :meth:`DataFrame.fillna` instead.

        Parameters
        ----------
        value : scalar, dict, Series, or DataFrame
            Value to use to fill holes (e.g. 0), alternately a
            dict/Series/DataFrame of values specifying which value to use for
            each index (for a Series) or column (for a DataFrame).  Values not
            in the dict/Series/DataFrame will not be filled. This value cannot
            be a list. Users wanting to use the ``value`` argument and not ``method``
            should prefer :meth:`.DataFrame.fillna` as this
            will produce the same result and be more performant.
        method : {'bfill', 'ffill', None}, default None
            Method to use for filling holes. ``'ffill'`` will propagate
            the last valid observation forward within a group.
            ``'bfill'`` will use next valid observation to fill the gap.
        axis : {0 or 'index', 1 or 'columns'}
            Axis along which to fill missing values. When the :class:`DataFrameGroupBy`
            ``axis`` argument is ``0``, using ``axis=1`` here will produce
            the same results as :meth:`.DataFrame.fillna`. When the
            :class:`DataFrameGroupBy` ``axis`` argument is ``1``, using ``axis=0``
            or ``axis=1`` here will produce the same results.
        inplace : bool, default False
            Broken. Do not set to True.
        limit : int, default None
            If method is specified, this is the maximum number of consecutive
            NaN values to forward/backward fill within a group. In other words,
            if there is a gap with more than this number of consecutive NaNs,
            it will only be partially filled. If method is not specified, this is the
            maximum number of entries along the entire axis where NaNs will be
            filled. Must be greater than 0 if not None.
        downcast : dict, default is None
            A dict of item->dtype of what to downcast if possible,
            or the string 'infer' which will try to downcast to an appropriate
            equal type (e.g. float64 to int64 if possible).

        Returns
        -------
        DataFrame
            Object with missing values filled.

        See Also
        --------
        ffill : Forward fill values within a group.
        bfill : Backward fill values within a group.

        Examples
        --------
        >>> df = pd.DataFrame(
        ...     {
        ...         "key": [0, 0, 1, 1, 1],
        ...         "A": [np.nan, 2, np.nan, 3, np.nan],
        ...         "B": [2, 3, np.nan, np.nan, np.nan],
        ...         "C": [np.nan, np.nan, 2, np.nan, np.nan],
        ...     }
        ... )
        >>> df
           key    A    B   C
        0    0  NaN  2.0 NaN
        1    0  2.0  3.0 NaN
        2    1  NaN  NaN 2.0
        3    1  3.0  NaN NaN
        4    1  NaN  NaN NaN

        Propagate non-null values forward or backward within each group along columns.

        >>> df.groupby("key").fillna(method="ffill")
             A    B   C
        0  NaN  2.0 NaN
        1  2.0  3.0 NaN
        2  NaN  NaN 2.0
        3  3.0  NaN 2.0
        4  3.0  NaN 2.0

        >>> df.groupby("key").fillna(method="bfill")
             A    B   C
        0  2.0  2.0 NaN
        1  2.0  3.0 NaN
        2  3.0  NaN 2.0
        3  3.0  NaN NaN
        4  NaN  NaN NaN

        Propagate non-null values forward or backward within each group along rows.

        >>> df.T.groupby(np.array([0, 0, 1, 1])).fillna(method="ffill").T
           key    A    B    C
        0  0.0  0.0  2.0  2.0
        1  0.0  2.0  3.0  3.0
        2  1.0  1.0  NaN  2.0
        3  1.0  3.0  NaN  NaN
        4  1.0  1.0  NaN  NaN

        >>> df.T.groupby(np.array([0, 0, 1, 1])).fillna(method="bfill").T
           key    A    B    C
        0  0.0  NaN  2.0  NaN
        1  0.0  2.0  3.0  NaN
        2  1.0  NaN  2.0  2.0
        3  1.0  3.0  NaN  NaN
        4  1.0  NaN  NaN  NaN

        Only replace the first NaN element within a group along columns.

        >>> df.groupby("key").fillna(method="ffill", limit=1)
             A    B    C
        0  NaN  2.0  NaN
        1  2.0  3.0  NaN
        2  NaN  NaN  2.0
        3  3.0  NaN  2.0
        4  3.0  NaN  NaN
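
        Since ``fillna(method=...)`` is deprecated here, the following sketch (an
        editorial addition, not part of the pandas docstring) shows the
        recommended replacements for the examples above:

        import numpy as np
        import pandas as pd

        df = pd.DataFrame({
            "key": [0, 0, 1, 1, 1],
            "A": [np.nan, 2, np.nan, 3, np.nan],
            "B": [2, 3, np.nan, np.nan, np.nan],
            "C": [np.nan, np.nan, 2, np.nan, np.nan],
        })

        ffilled = df.groupby("key").ffill()          # replaces fillna(method="ffill")
        bfilled = df.groupby("key").bfill()          # replaces fillna(method="bfill")
        limited = df.groupby("key").ffill(limit=1)   # replaces fillna(method="ffill", limit=1)

        # Filling with a single value does not need the groupby at all.
        zero_filled = df.fillna(0)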

        [unreadable compiled bytecode: the body of DataFrameGroupBy.fillna (its
        deprecation warning and the delegated group-wise fill) and the wrapper of
        DataFrameGroupBy.take. The take docstring follows.]

        Return the elements in the given *positional* indices in each group.

        This means that we are not indexing according to actual values in
        the index attribute of the object. We are indexing according to the
        actual position of the element in the object.

        If a requested index does not exist for some group, this method will raise.
        To get similar behavior that ignores indices that don't exist, see
        :meth:`.DataFrameGroupBy.nth`.

        Parameters
        ----------
        indices : array-like
            An array of ints indicating which positions to take.
        axis : {0 or 'index', 1 or 'columns', None}, default 0
            The axis on which to select elements. ``0`` means that we are
            selecting rows, ``1`` means that we are selecting columns.

            .. deprecated:: 2.1.0
                For axis=1, operate on the underlying object instead. Otherwise
                the axis keyword is not necessary.

        **kwargs
            For compatibility with :meth:`numpy.take`. Has no effect on the
            output.

        Returns
        -------
        DataFrame
            A DataFrame containing the elements taken from each group.

        See Also
        --------
        DataFrame.take : Take elements from a DataFrame along an axis.
        DataFrame.loc : Select a subset of a DataFrame by labels.
        DataFrame.iloc : Select a subset of a DataFrame by positions.
        numpy.take : Take elements from an array along an axis.

        Examples
        --------
        >>> df = pd.DataFrame([('falcon', 'bird', 389.0),
        ...                    ('parrot', 'bird', 24.0),
        ...                    ('lion', 'mammal', 80.5),
        ...                    ('monkey', 'mammal', np.nan),
        ...                    ('rabbit', 'mammal', 15.0)],
        ...                   columns=['name', 'class', 'max_speed'],
        ...                   index=[4, 3, 2, 1, 0])
        >>> df
             name   class  max_speed
        4  falcon    bird      389.0
        3  parrot    bird       24.0
        2    lion  mammal       80.5
        1  monkey  mammal        NaN
        0  rabbit  mammal       15.0
        >>> gb = df.groupby([1, 1, 2, 2, 2])

        Take elements at positions 0 and 1 along the axis 0 (default).

        Note how the indices selected in the result do not correspond to
        our input indices 0 and 1. That's because we are selecting the 0th
        and 1st rows, not rows whose indices equal 0 and 1.

        >>> gb.take([0, 1])
               name   class  max_speed
        1 4  falcon    bird      389.0
          3  parrot    bird       24.0
        2 2    lion  mammal       80.5
          1  monkey  mammal        NaN

        The order of the specified indices influences the order in the result.
        Here, the order is swapped from the previous example.

        >>> gb.take([1, 0])
               name   class  max_speed
        1 3  parrot    bird       24.0
          4  falcon    bird      389.0
        2 1  monkey  mammal        NaN
          2    lion  mammal       80.5

        We may take elements using negative integers for positive indices,
        starting from the end of the object, just like with Python lists.

        >>> gb.take([-1, -2])
               name   class  max_speed
        1 3  parrot    bird       24.0
          4  falcon    bird      389.0
        2 0  rabbit  mammal       15.0
          1  monkey  mammal        NaN
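
        A minimal sketch (an editorial addition, not part of the pandas
        docstring) contrasting ``take`` with the ``nth`` method referenced
        above: ``take`` raises when a position is missing from a group, while
        ``nth`` simply skips it.

        import numpy as np
        import pandas as pd

        df = pd.DataFrame(
            [("falcon", "bird", 389.0),
             ("parrot", "bird", 24.0),
             ("lion", "mammal", 80.5),
             ("monkey", "mammal", np.nan),
             ("rabbit", "mammal", 15.0)],
            columns=["name", "class", "max_speed"],
            index=[4, 3, 2, 1, 0],
        )
        gb = df.groupby([1, 1, 2, 2, 2])

        print(gb.nth([0, 2]))   # group 1 has no position 2, so only its position 0 appears

        try:
            gb.take([0, 2])     # group 1 has only two rows, so position 2 is out of bounds
        except IndexError as exc:
            print(type(exc).__name__)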

        [unreadable compiled bytecode: the tail of DataFrameGroupBy.take and the
        wrapper of DataFrameGroupBy.skew. The skew docstring follows.]

        Return unbiased skew within groups.

        Normalized by N-1.

        Parameters
        ----------
        axis : {0 or 'index', 1 or 'columns', None}, default 0
            Axis for the function to be applied on.

            Specifying ``axis=None`` will apply the aggregation across both axes.

            .. versionadded:: 2.0.0

            .. deprecated:: 2.1.0
                For axis=1, operate on the underlying object instead. Otherwise
                the axis keyword is not necessary.

        skipna : bool, default True
            Exclude NA/null values when computing the result.

        numeric_only : bool, default False
            Include only float, int, boolean columns.

        **kwargs
            Additional keyword arguments to be passed to the function.

        Returns
        -------
        DataFrame

        See Also
        --------
        DataFrame.skew : Return unbiased skew over requested axis.

        Examples
        --------
        >>> arrays = [['falcon', 'parrot', 'cockatoo', 'kiwi',
        ...            'lion', 'monkey', 'rabbit'],
        ...           ['bird', 'bird', 'bird', 'bird',
        ...            'mammal', 'mammal', 'mammal']]
        >>> index = pd.MultiIndex.from_arrays(arrays, names=('name', 'class'))
        >>> df = pd.DataFrame({'max_speed': [389.0, 24.0, 70.0, np.nan,
        ...                                  80.5, 21.5, 15.0]},
        ...                   index=index)
        >>> df
                        max_speed
        name     class
        falcon   bird        389.0
        parrot   bird         24.0
        cockatoo bird         70.0
        kiwi     bird          NaN
        lion     mammal       80.5
        monkey   mammal       21.5
        rabbit   mammal       15.0
        >>> gb = df.groupby(["class"])
        >>> gb.skew()
                max_speed
        class
        bird     1.628296
        mammal   1.669046
        >>> gb.skew(skipna=False)
                max_speed
        class
        bird          NaN
        mammal   1.669046
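
        A minimal cross-check (an editorial addition, not part of the pandas
        docstring): the values above are the bias-corrected sample skewness,
        which scipy exposes as ``skew(..., bias=False)``.

        import numpy as np
        from scipy import stats

        bird = np.array([389.0, 24.0, 70.0])     # kiwi's NaN dropped (skipna=True)
        mammal = np.array([80.5, 21.5, 15.0])

        print(stats.skew(bird, bias=False))      # ~1.628296, matching gb.skew() above
        print(stats.skew(mammal, bias=False))    # ~1.669046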

        [unreadable compiled bytecode: the remainder of DataFrameGroupBy.skew
        (including its local `alt` helper), followed by the compiled bodies of
        DataFrameGroupBy.plot, corr, cov, hist, the deprecated dtypes property
        (whose warning reads "...dtypes is deprecated and will be removed in a
        future version. Check the dtypes on the base object instead"), corrwith,
        the class's signature and annotation metadata, what appears to be a
        module-level helper that wraps group-wise transform results back into a
        DataFrame, and the module's name and import tables.]

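A minimal sketch (an editorial addition) of the alternative suggested by the
deprecation message quoted above: check the dtypes on the base object rather
than on the groupby.

import pandas as pd

df = pd.DataFrame({"key": [0, 0, 1], "A": [1.0, 2.0, 3.0], "B": ["x", "y", "z"]})

df.groupby("key").dtypes   # deprecated: emits a FutureWarning
df.dtypes                  # recommended: dtypes of the base (ungrouped) DataFrame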