I am looping through the rows of a pandas DataFrame with loop index i.
I am able to assign several columns using the ix indexer, with the loop index as the first parameter and the column name as the second.
However, when I try to retrieve/print using this method,
print(df.ix[i,"Run"])
I get the following TypeError: 'str' object cannot be interpreted as an integer,
apparently related to KeyError: 'Run'.
Not quite sure why this is occurring, as Run is indeed a column in the dataframe.
Any suggestions?
Traceback (most recent call last):
File "C:\WPy-3670\python-3.6.7.amd64\lib\site-packages\pandas\core\indexes\base.py", line 3124, in get_value
return libindex.get_value_box(s, key)
File "pandas\_libs\index.pyx", line 55, in pandas._libs.index.get_value_box
File "pandas\_libs\index.pyx", line 63, in pandas._libs.index.get_value_box
TypeError: 'str' object cannot be interpreted as an integer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\...", line 365, in <module>
print(df.ix[i,"Run"])
File "C:\WPy-3670\python-3.6.7.amd64\lib\site-packages\pandas\core\indexing.py", line 116, in __getitem__
return self._getitem_tuple(key)
File "C:\WPy-3670\python-3.6.7.amd64\lib\site-packages\pandas\core\indexing.py", line 870, in _getitem_tuple
return self._getitem_lowerdim(tup)
File "C:\WPy-3670\python-3.6.7.amd64\lib\site-packages\pandas\core\indexing.py", line 1027, in _getitem_lowerdim
return getattr(section, self.name)[new_key]
File "C:\WPy-3670\python-3.6.7.amd64\lib\site-packages\pandas\core\indexing.py", line 122, in __getitem__
return self._getitem_axis(key, axis=axis)
File "C:\WPy-3670\python-3.6.7.amd64\lib\site-packages\pandas\core\indexing.py", line 1116, in _getitem_axis
return self._get_label(key, axis=axis)
File "C:\WPy-3670\python-3.6.7.amd64\lib\site-packages\pandas\core\indexing.py", line 136, in _get_label
return self.obj[label]
File "C:\WPy-3670\python-3.6.7.amd64\lib\site-packages\pandas\core\series.py", line 767, in __getitem__
result = self.index.get_value(self, key)
File "C:\WPy-3670\python-3.6.7.amd64\lib\site-packages\pandas\core\indexes\base.py", line 3132, in get_value
raise e1
File "C:\WPy-3670\python-3.6.7.amd64\lib\site-packages\pandas\core\indexes\base.py", line 3118, in get_value
tz=getattr(series.dtype, 'tz', None))
File "pandas\_libs\index.pyx", line 106, in pandas._libs.index.IndexEngine.get_value
File "pandas\_libs\index.pyx", line 114, in pandas._libs.index.IndexEngine.get_value
File "pandas\_libs\index.pyx", line 162, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 1492, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 1500, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'Run'
When I change the column I print to any other column, it does work correctly. Earlier in the code, I "compressed" the rows (there were multiple rows per unique string in the 'Run' column) using the following:
df=df.groupby('Run').max()
Did this last line somehow remove the column/column name from the table?
ix has been deprecated. ix has always been ambiguous: does ix[10] refer to the row with the label 10, or the row at position 10?
Use loc or iloc instead:
df.loc[i, "Run"] = ...                       # by label
df.iloc[i, df.columns.get_loc("Run")] = ...  # by position (chained df.iloc[i]["Run"] = ... may silently fail to assign)
As for the groupby removing Run: it moves Run to the index of the data frame. To get it back as a column, call reset_index:
df=df.groupby('Run').max().reset_index()
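A minimal sketch of that round trip, with a made-up frame (the 'Speed' column is an assumption; only 'Run' comes from the question):

```python
import pandas as pd

# Toy frame: several rows per unique value in 'Run' ('Speed' is hypothetical)
df = pd.DataFrame({"Run": ["A", "A", "B"], "Speed": [1, 3, 2]})

grouped = df.groupby("Run").max()
print("Run" in grouped.columns)   # False: 'Run' moved into the index

restored = df.groupby("Run").max().reset_index()
print("Run" in restored.columns)  # True: reset_index makes it a column again
```

Alternatively, passing as_index=False to groupby keeps 'Run' as a column from the start.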
Differences between indexing by label and position:
Suppose you have a series like this:
s = pd.Series(['a', 'b', 'c', 'd', 'e'], index=np.arange(0,9,2))
0 a
2 b
4 c
6 d
8 e
The first column is the labels (aka the index). The second column is the values of the series.
Label based indexing:
s.loc[2] --> b
s.loc[3] --> error. The label doesn't exist
Position based indexing:
s.iloc[2] --> c, since `a` has position 0, `b` has position 1, and so on
s.iloc[3] --> d
According to the documentation, s.ix[3] would have returned d since it first searches for the label 3. When that fails, it falls back to the position 3. On my machine (Pandas 0.24.2), it returns an error, along with a deprecation warning, so I guess the developers changed it to behave like loc.
If you want to use mixed indexing, you have to be explicit about that:
s.loc[3] if 3 in s.index else s.iloc[3]
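The series example above, as one runnable snippet:

```python
import numpy as np
import pandas as pd

# Index labels are the even numbers 0, 2, 4, 6, 8
s = pd.Series(['a', 'b', 'c', 'd', 'e'], index=np.arange(0, 9, 2))

print(s.loc[2])   # 'b': label-based lookup
print(s.iloc[2])  # 'c': position-based lookup

# Mixed indexing, made explicit: label 3 doesn't exist, fall back to position 3
value = s.loc[3] if 3 in s.index else s.iloc[3]
print(value)      # 'd'
```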
Related
I have a dataframe df_params. It contains parameters for the stored procedure.
PurchaseOrderID OrderDate SupplierReference DF_Name
0 1 2013-01-01 B2084020 dataframe1
1 2 2013-01-01 293092 dataframe2
2 3 2013-01-01 08803922 dataframe3
3 4 2013-01-01 BC0280982 dataframe4
4 5 2013-01-01 ML0300202 dataframe5
I simply want to access the elements of the dataframe in a loop:
for i in range(len(df_params)):
    print(df_params[i][0])
But it gives me an error without much of an explanation:
Traceback (most recent call last):
File "C:my\path\site-packages\pandas\core\indexes\base.py", line 2897, in get_loc
return self._engine.get_loc(key)
File "pandas/_libs/index.pyx", line 107, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 131, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 1607, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 1614, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "Test3.py", line 35, in <module>
print(df_params[i][0])
File "C:\Users\my\path\Python37\lib\site-packages\pandas\core\frame.py", line 2995, in __getitem__
indexer = self.columns.get_loc(key)
File "C:\Users\my\path\Python37\lib\site-packages\pandas\core\indexes\base.py", line 2899, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas/_libs/index.pyx", line 107, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 131, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 1607, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 1614, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 0
The goal is to supply value to the stored procedure:
for i in range(len(df_params)):
    query = "EXEC Purchasing.GetPurchaseOrder " + df_params[i][0] + "," + str(df_params[i][1]) + "," + df_params[i][2]
    df = pd.read_sql(query, conn)
desired outcome from print(query):
EXEC Purchasing.GetPurchaseOrder 1, '2013-01-01', 'B2084020'
EXEC Purchasing.GetPurchaseOrder 2, '2013-01-01', '293092'
EXEC Purchasing.GetPurchaseOrder 3, '2013-01-01', '08803922'
EXEC Purchasing.GetPurchaseOrder 4, '2013-01-01', 'BC0280982'
EXEC Purchasing.GetPurchaseOrder 5, '2013-01-01', 'ML0300202'
pandas.DataFrames don't behave exactly like numpy.ndarrays. There are basically three options:
option 1: iterrows-method:
You can iterate over rows of a pandas.dataframe by
for idx, row in df_params.iterrows():
    print(row['PurchaseOrderID'])
This is a particularly readable way, so personally I prefer it.
option 2: indexing:
if you want to index a pandas.DataFrame just like a numpy.ndarray object, go with the method .iat[]:
for i in range(len(df_params)):
    print(df_params.iat[i, 0])
This indexes elements positionally and ignores the index of the dataframe! So even if you have a different index (in the extreme, some strings, or a table with a pandas.DatetimeIndex), this still works... just as if you had done df_params.to_numpy()[i, 0].
Note: There exists a similar function that uses the column name: .at[]
There is a second way to index a pandas.DataFrame object, and it is a little safer with regard to columns: .loc[]. It takes an index label and column name(s):
for idx in df_params.index:
    print(df_params.loc[idx, 'PurchaseOrderID'])
option 3: slicing a pandas.Series object:
Every column in a pandas.DataFrame is a pandas.Series object, which you can index similarly to a numpy.ndarray (you actually index the series, as described above):
col = df_params['PurchaseOrderID']
for idx in col.index:
    print(col[idx])
So what went wrong in your case?
The double indexing is almost the same as the last example, but the first [] does a column lookup under the hood and thus expects a column name and not a number (that would have been the method .iloc[]). So it expects the column first and then the row.
So if you really want, you could go like this:
for i in range(len(df_params)):
    print(df_params['PurchaseOrderID'][i])
but this only works because your pandas.DataFrame has continuous numeric indices starting from 0! So please don't do this; use the actual indices of your table (actually, use one of the options above and not this last one ;) ).
On a data frame there are better ways to access values; you can use apply with a lambda, which gives you access to every row:
df_params.apply(lambda row: print(row['DF_Name']), axis=1)
Now the variable row is each row of the dataframe, and you can access each property of the row.
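Putting row iteration to work on the question's goal, a sketch (the frame is rebuilt from the question's sample data; real code should use parameterized queries rather than string concatenation):

```python
import pandas as pd

# Rebuilt from the question's sample rows
df_params = pd.DataFrame({
    "PurchaseOrderID": [1, 2],
    "OrderDate": ["2013-01-01", "2013-01-01"],
    "SupplierReference": ["B2084020", "293092"],
})

queries = []
for _, row in df_params.iterrows():
    # Cast the integer ID to str; quote the string parameters
    queries.append(
        "EXEC Purchasing.GetPurchaseOrder "
        + str(row["PurchaseOrderID"])
        + ", '" + row["OrderDate"] + "'"
        + ", '" + row["SupplierReference"] + "'"
    )

for q in queries:
    print(q)
```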
data.groupby(by="DAY").agg({"CLOSING_DATE": min})
How come, when I try to group my dataframe to get the oldest date for a sparse column (CLOSING_DATE is mostly empty), I get the following error?
Traceback (most recent call last):
File "<ipython-input-23-37f9fe161304>", line 1, in <module>
data[:10000].groupby(by="DAY").agg({"CLOSING_DATE": min})
File "/home/user/miniconda3/envs/churn/lib/python3.8/site-packages/pandas/core/groupby/generic.py", line 951, in aggregate
result, how = self._aggregate(func, *args, **kwargs)
File "/home/user/miniconda3/envs/py_env/lib/python3.8/site-packages/pandas/core/base.py", line 416, in _aggregate
result = _agg(arg, _agg_1dim)
File "/home/user/miniconda3/envs/py_env/lib/python3.8/site-packages/pandas/core/base.py", line 383, in _agg
result[fname] = func(fname, agg_how)
File "/home/user/miniconda3/envs/py_env/lib/python3.8/site-packages/pandas/core/base.py", line 367, in _agg_1dim
return colg.aggregate(how)
File "/home/user/miniconda3/envs/py_env/lib/python3.8/site-packages/pandas/core/groupby/generic.py", line 252, in aggregate
return getattr(self, cyfunc)()
File "/home/user/miniconda3/envs/py_env/lib/python3.8/site-packages/pandas/core/groupby/groupby.py", line 1553, in min
return self._agg_general(
File "/home/user/miniconda3/envs/py_env/lib/python3.8/site-packages/pandas/core/groupby/groupby.py", line 1000, in _agg_general
result = self._cython_agg_general(
File "/home/user/miniconda3/envs/py_env/lib/python3.8/site-packages/pandas/core/groupby/groupby.py", line 1035, in _cython_agg_general
result, agg_names = self.grouper.aggregate(
File "/home/user/miniconda3/envs/py_env/lib/python3.8/site-packages/pandas/core/groupby/ops.py", line 591, in aggregate
return self._cython_operation(
File "/home/user/miniconda3/envs/py_env/lib/python3.8/site-packages/pandas/core/groupby/ops.py", line 471, in _cython_operation
raise NotImplementedError(f"{values.dtype} dtype not supported")
NotImplementedError: Sparse[float64, nan] dtype not supported
This is a bug in pandas, related to a recent refactor of cython optimized groupbys:
https://github.com/pandas-dev/pandas/issues/38980
You have two choices:
Downgrade the version of pandas you're using to 1.1.4 and wait for the bug to be fixed (maybe ~4-6 weeks)
Convert the sparse column to a dense one before the groupby, with Series.sparse.to_dense()
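A sketch of the second option, assuming CLOSING_DATE is a sparse float column (the data here is made up):

```python
import numpy as np
import pandas as pd

# Toy frame with a sparse column, mimicking the question's setup
data = pd.DataFrame({
    "DAY": ["mon", "mon", "tue"],
    "CLOSING_DATE": pd.arrays.SparseArray([1.0, np.nan, 2.0]),
})

# Densify the sparse column first, then the groupby aggregation works
data["CLOSING_DATE"] = data["CLOSING_DATE"].sparse.to_dense()
result = data.groupby(by="DAY").agg({"CLOSING_DATE": min})
print(result)
```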
I am attempting to slice a pandas dataframe by column labels using .loc. Based on Pandas documentation, https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html, .loc seems like the right indexer for the use case.
Original pandas DataFrame, and confirmation that the columns with those labels exist:
The column labels are dynamically constructed and passed as a list to slice the dataframe.
# Create dictionaries
prop_dict = dict(zip(df_list.id, df_list.Company))
city_dict = dict(zip(df_list.id, df_list.city))
# Lookup keys (property ids) from prop_dict
propKeys = getKeysByValue(prop_dict, landlord)
cityKeys = getKeysByValue(city_dict, market)
prop_list = list(set(propKeys) & set(cityKeys))
print(prop_list)
[19, 27]
# Slice dataframe
df_temp = df_t.loc[:, prop_list]
However, this throws an error KeyError: 'None of [[19, 27]] are in the [columns]'
Full traceback here:
Traceback (most recent call last):
File "/Platform/Deploy/tabs/market.py", line 279, in render_table
result = top_leads(company, market)
File "/Platform/Deploy/return_leads.py", line 86, in top_leads
df_temp = df_matrix.loc[:, prop_list]
File "/anaconda3/lib/python3.7/site-packages/pandas/core/indexing.py", line 1472, in __getitem__
return self._getitem_tuple(key)
File "/anaconda3/lib/python3.7/site-packages/pandas/core/indexing.py", line 890, in _getitem_tuple
retval = getattr(retval, self.name)._getitem_axis(key, axis=i)
File "/anaconda3/lib/python3.7/site-packages/pandas/core/indexing.py", line 1901, in _getitem_axis
return self._getitem_iterable(key, axis=axis)
File "/anaconda3/lib/python3.7/site-packages/pandas/core/indexing.py", line 1143, in _getitem_iterable
self._validate_read_indexer(key, indexer, axis)
File "/anaconda3/lib/python3.7/site-packages/pandas/core/indexing.py", line 1206, in _validate_read_indexer
key=key, axis=self.obj._get_axis_name(axis)))
KeyError: 'None of [[19, 27]] are in the [columns]'
Is it possible that the columns '19' and '27' have string labels rather than integer ones? That would explain why looking them up by the integer values 19 and 27 fails. If you want to pass the labels as a list, there need to be quotes around the column names, meaning it should be ['19', '27'] instead of [19, 27].
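A quick way to check and fix this, with a hypothetical frame whose column labels are the strings '19' and '27':

```python
import pandas as pd

# Hypothetical frame: column labels are strings, not integers
df_t = pd.DataFrame([[10, 20], [30, 40]], columns=['19', '27'])

prop_list = [19, 27]
print([c in df_t.columns for c in prop_list])  # integers: not found

prop_list = [str(c) for c in prop_list]        # cast to the string labels
df_temp = df_t.loc[:, prop_list]
print(list(df_temp.columns))
```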
I want to delete any rows containing a specific string from a dataframe.
I want to delete the rows with abnormal email addresses (those ending in .jpg).
Here's my code; what's wrong with it?
df = pd.DataFrame({'email':['abc#gmail.com', 'cde#gmail.com', 'ghe#ss.jpg', 'sldkslk#sss.com']})
df
email
0 abc#gmail.com
1 cde#gmail.com
2 ghe#ss.jpg
3 sldkslk#sss.com
for i, r in df.iterrows():
    if df.loc[i,'email'][-3:] == 'com':
        df.drop(df.index[i], inplace=True)
Traceback (most recent call last):
File "<ipython-input-84-4f12d22e5e4c>", line 2, in <module>
if df.loc[i,'email'][-3:] == 'com':
File "C:\Anaconda\lib\site-packages\pandas\core\indexing.py", line 1472, in __getitem__
return self._getitem_tuple(key)
File "C:\Anaconda\lib\site-packages\pandas\core\indexing.py", line 870, in _getitem_tuple
return self._getitem_lowerdim(tup)
File "C:\Anaconda\lib\site-packages\pandas\core\indexing.py", line 998, in _getitem_lowerdim
section = self._getitem_axis(key, axis=i)
File "C:\Anaconda\lib\site-packages\pandas\core\indexing.py", line 1911, in _getitem_axis
self._validate_key(key, axis)
File "C:\Anaconda\lib\site-packages\pandas\core\indexing.py", line 1798, in _validate_key
error()
File "C:\Anaconda\lib\site-packages\pandas\core\indexing.py", line 1785, in error
axis=self.obj._get_axis_name(axis)))
KeyError: 'the label [2] is not in the [index]'
IIUC, you can do this rather than iterating through your frame with iterrows:
df = df[df.email.str.endswith('.com')]
which returns:
>>> df
email
0 abc#gmail.com
1 cde#gmail.com
3 sldkslk#sss.com
Or, for larger dataframes, it's sometimes faster not to use the str methods provided by pandas, but to do it in a plain list comprehension with Python's built-in string methods:
df = df[[i.endswith('.com') for i in df.email]]
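Since the stated goal is to drop the .jpg addresses, an inverted mask also works directly (same toy frame as the question):

```python
import pandas as pd

df = pd.DataFrame({'email': ['abc#gmail.com', 'cde#gmail.com',
                             'ghe#ss.jpg', 'sldkslk#sss.com']})

# Keep every row whose email does NOT end in '.jpg'
df = df[~df.email.str.endswith('.jpg')]
print(list(df.email))
```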
I'm having trouble addressing values in a DataFrame, but I don't seem to have any problems with the Series object.
>>> df = pd.DataFrame([0.5, 1.5, 2.5, 3.5, 4.5], index=[['a','a','b','b','b'],[1,2,1,2,3]])
>>> series = pd.Series([0.5, 1.5, 2.5, 3.5, 4.5], index=[['a','a','b','b','b'],[1,2,1,2,3]])
>>> series['a']
1 0.5
2 1.5
dtype: float64
>>> df['a']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Anaconda\lib\site-packages\pandas\core\frame.py", line 2003, in __getitem__
return self._get_item_cache(key)
File "C:\Anaconda\lib\site-packages\pandas\core\generic.py", line 667, in _get_item_cache
values = self._data.get(item)
File "C:\Anaconda\lib\site-packages\pandas\core\internals.py", line 1655, in get
_, block = self._find_block(item)
File "C:\Anaconda\lib\site-packages\pandas\core\internals.py", line 1935, in _find_block
self._check_have(item)
File "C:\Anaconda\lib\site-packages\pandas\core\internals.py", line 1942, in _check_have
raise KeyError('no item named %s' % com.pprint_thing(item))
KeyError: u'no item named a'
I'm definitely misunderstanding something, if someone could help me out it would be very much appreciated!
You are trying to select a column, and there is indeed no column named 'a'. Try df.loc['a'] instead.
I recommend to look at the basic indexing docs: http://pandas.pydata.org/pandas-docs/stable/indexing.html#basics
In summary:
series[label] selects element in series at index label
dataframe[label] selects column with name label
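The distinction, in a runnable form built from the question's data:

```python
import pandas as pd

idx = [['a', 'a', 'b', 'b', 'b'], [1, 2, 1, 2, 3]]
series = pd.Series([0.5, 1.5, 2.5, 3.5, 4.5], index=idx)
df = pd.DataFrame([0.5, 1.5, 2.5, 3.5, 4.5], index=idx)

print(series['a'])   # element lookup by index label: works on a Series
print(df.loc['a'])   # row lookup by index label: works on a DataFrame
# df['a'] would raise a KeyError, because 'a' is not a column name
```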