How to resolve the error 'tuple' object is not callable - python

I am new to Python. I have imported a file into Jupyter as follows:
df = pd.read_csv(r"C:\Users\shalotte1\Documents\EBQS_INTEGRATEDQUOTEDOCUMENT\groceries.csv")
I am using the following code to determine the number of rows and columns in the data:
df.shape()
However, I am getting the following error:
TypeError: 'tuple' object is not callable

You want df.shape (no parentheses): shape is an attribute that returns a tuple of (n_rows, n_cols). By writing df.shape() you are trying to call that tuple as though it were a function, which raises the TypeError.
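For illustration, a minimal sketch with a throwaway DataFrame (the column names here are made up, not from your groceries file):
import pandas as pd

df = pd.DataFrame({"item": ["bread", "milk"], "qty": [2, 1]})

print(df.shape)       # (2, 2) -> (n_rows, n_cols), a plain tuple attribute
rows, cols = df.shape
print(rows, cols)     # 2 2

# df.shape() raises TypeError: 'tuple' object is not callable,
# because the tuple returned by the attribute is not a function.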

As you are new to Python, I would recommend reading the page below. It covers other causes of this error, so you can solve it yourself if it appears again in the future.
https://careerkarma.com/blog/python-typeerror-tuple-object-is-not-callable/

Related

AttributeError: 'list' object has no attribute 'loc'

I have a few DataFrames from an API, set as variables named in the list data. When I try to perform some operations on them, this error shows up:
AttributeError: 'list' object has no attribute 'loc'
data = ['dataA','dataB','dataC','dataD']
for i in data:
    exec('{} = pd.DataFrame()'.format(i))
for i in data:
    ma = 6
    smaString = "SMA" + str(ma)
    data[smaString] = data.iloc[:,3].rolling(window = ma).mean()
    data = data.iloc[ma:]
Any help would be highly appreciated.
Thanks.
To answer your question, the error pops up because data is not a DataFrame but a list, and loc/iloc cannot be used on a list.
Note that the error message you have shown mentions loc whereas your code uses iloc; these are two different indexers:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html
Also, it's unclear what you are trying to achieve with this code.
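That said, if the goal is a rolling mean on each DataFrame, one common pattern is to keep the frames in a dict keyed by name instead of creating variables with exec. A rough sketch (the random data and the column position are placeholders, not your actual data):
import numpy as np
import pandas as pd

frames = {name: pd.DataFrame(np.random.rand(20, 4)) for name in ['dataA', 'dataB', 'dataC', 'dataD']}

ma = 6
for name, frame in frames.items():
    # .iloc works here because frame is a DataFrame, not the list of names
    frame["SMA" + str(ma)] = frame.iloc[:, 3].rolling(window=ma).mean()
    frames[name] = frame.iloc[ma:]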

How can I access an item in a pandas Series?

data=pd.read_csv('E:\\movie_data.csv')
indices=pd.Series(data.index, index=data['movie_title']).drop_duplicates
indices['Avatar']
When I tried to access an element of this series, I received this error:
----> 3 indices['Avatar']
TypeError: 'method' object is not subscriptable
You need to actually call the drop_duplicates method (using parentheses) to get a Series back. Try the following code and it should work:
data=pd.read_csv('E:\\movie_data.csv')
indices=pd.Series(data.index, index=data['movie_title']).drop_duplicates()
indices['Avatar']
For the record, the error message you are getting is because you are using subscription (the bracket notation) on the drop_duplicates method itself, not on the result of calling it.
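A toy Series (unrelated to the movie data) makes the two objects visible:
import pandas as pd

s = pd.Series([1, 1, 2], index=["a", "a", "b"])

print(type(s.drop_duplicates))    # <class 'method'> -- the bound method itself
print(type(s.drop_duplicates()))  # <class 'pandas.core.series.Series'> -- its result

print(s.drop_duplicates()["a"])   # works: subscript the returned Series
# s.drop_duplicates["a"]          # TypeError: 'method' object is not subscriptable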

Getting " AttributeError: 'float' object has no attribute 'items' " with pandas.io.json.json_normalize

When trying to normalize a series within a pandas DataFrame with the json_normalize function, I am getting the error:
"AttributeError: 'float' object has no attribute 'items'"
Each row of the series contains a nested JSON object, though some rows don't contain all of the attributes that are present in others.
There is also a field "timestamp":{"$date":1578411194000} within those nested JSONs, which is also present in another column of that same DataFrame and gave me an error in another attempt to flatten that other series.
I am assuming the AttributeError has something to do either with not all JSONs containing all the fields, or with those timestamps. json_normalize did work for some of the other DataFrame columns.
I hope this is enough info. Thanks a lot in advance!
This can happen if there are NaN fields, which can be solved by using dropna():
pd.json_normalize(df.explode("field")["field"])
=> AttributeError: 'float' object has no attribute 'items'
pd.json_normalize(df.explode("field")["field"].dropna())
=> no error
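A self-contained sketch (with made-up field names) that reproduces the error and the dropna() fix:
import numpy as np
import pandas as pd

# One row of the nested-record column is NaN, e.g. after an explode or merge
df = pd.DataFrame({"field": [{"a": 1, "timestamp": {"$date": 1578411194000}},
                             np.nan,
                             {"a": 2}]})

# pd.json_normalize(df["field"])                # AttributeError: 'float' object has no attribute 'items'
print(pd.json_normalize(df["field"].dropna()))  # works; missing keys simply become NaN columns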

Error when copying a list into a Google spreadsheet using gspread: "TypeError: Object of type int64 is not JSON serializable"

When copying data from a pandas DataFrame into a Google spreadsheet, I encounter a TypeError when using sheet.insert_row on a list.
I have no problem using sheet.update_cell to add the values one by one; I was just wondering why insert_row is failing.
I'm using a Python notebook with Python version 3.7.1
df = pd.DataFrame([('a',1)],columns=columns)
sheet.insert_row(columns,1) # works
row = list(df.iloc[0])
print(row==['a',1]) # returns True
sheet.insert_row(['a',1], 2) # works
sheet.insert_row(row, 3) # Fails with error "TypeError: Object of type int64 is not JSON serializable"
I would expect the last two lines in the code to either both succeed or both fail. Instead, inserting ['a',1] works, while inserting row (even though row == ['a',1]) fails with "TypeError: Object of type int64 is not JSON serializable".
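The difference is most likely in the element types: list(df.iloc[0]) hands gspread a numpy.int64 for the number, while the literal 1 is a plain Python int, and gspread serialises the row to JSON. One common workaround is converting numpy scalars to native Python types before inserting; a rough sketch (the column names below are placeholders, since the original columns variable is not shown):
import numpy as np
import pandas as pd

columns = ["letter", "number"]   # placeholder names
df = pd.DataFrame([("a", 1)], columns=columns)

# Convert numpy scalars (e.g. numpy.int64) to plain Python types
row = [x.item() if isinstance(x, np.generic) else x for x in df.iloc[0]]
print(row, [type(x) for x in row])   # ['a', 1] [<class 'str'>, <class 'int'>]
# sheet.insert_row(row, 3)           # should now serialise cleanly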

Getting 'DataFrameGroupBy' object is not callable in Jupyter

I have this csv file from https://www.data.gov.au/dataset/airport-traffic-data/resource/f0fbdc3d-1a82-4671-956f-7fee3bf9d7f2
I'm trying to aggregate with
airportdata = Airports.groupby(['Year_Ended_December'])('Dom_Pax_in','Dom_Pax_Out')
airportdata.sum()
However, I keep getting 'DataFrameGroupBy' object is not callable, and it won't print the data I want.
How do I fix this?
You need to execute the sum aggregation before extracting the columns:
airportdata_agg = Airports.groupby(['Year_Ended_December']).sum()[['Dom_Pax_in','Dom_Pax_Out']]
Alternatively, if you'd like to ensure you're not aggregating columns you are not going to use:
airportdata_agg = Airports[['Dom_Pax_in','Dom_Pax_Out', 'Year_Ended_December']].groupby(['Year_Ended_December']).sum()
