Simple code that works under Python 3.8 raises an error under Python 3.9.
The pandas Series.mean() call raises the following:
Traceback (most recent call last):
File "/home/odin/Learn/Udemy/python_100/day_25/main.py", line 8, in <module>
print(data["temp"].mean())
File "/usr/local/lib/python3.9/dist-packages/pandas/core/generic.py", line 11118, in mean
return NDFrame.mean(self, axis, skipna, level, numeric_only, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/pandas/core/generic.py", line 10726, in mean
return self._stat_function(
File "/usr/local/lib/python3.9/dist-packages/pandas/core/generic.py", line 10711, in _stat_function
return self._reduce(
File "/usr/local/lib/python3.9/dist-packages/pandas/core/series.py", line 4182, in _reduce
return op(delegate, skipna=skipna, **kwds)
File "/usr/local/lib/python3.9/dist-packages/pandas/core/nanops.py", line 71, in _f
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/pandas/core/nanops.py", line 124, in f
result = bn_func(values, axis=axis, **kwds)
TypeError: 'NoneType' object is not callable
The sample code:
import pandas as pd
data = pd.read_csv("weather_data_n.csv")
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
    print(data)
temp_list = data["temp"].to_list()
print(data["temp"].mean())
Related question:
I want to read a csv file using the read_csv method, and then plot the data using matplotlib. However, I cannot seem to create a simple matplotlib plot (even without using the data from the file), after I use pandas to read in some csv data. It gives the error: 'NoneType' object is not callable.
Any idea what's going on here?
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv("data.csv")
time = [0, 10, 20]
vel = [0, 1, 2]
fig, axes = plt.subplots()
axes.plot(time, vel)
plt.show()
plt.savefig(fname="pic.png")
I have tried using plt.savefig instead of plt.show(), and this produces the same error.
If I move the plotting block of code before the pd.read_csv line, it plots just fine. The error only seems to occur after pandas is used.
This is occurring in Visual Studio. It is not occurring in IDLE.
Traceback:
Traceback (most recent call last):
File "C:\Users\user1\AppData\Roaming\Python\Python310\site-packages\numpy\core\getlimits.py", line 384, in __new__
dtype = numeric.dtype(dtype)
TypeError: 'NoneType' object is not callable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\user1\AppData\Roaming\Python\Python310\site-packages\matplotlib\backends\backend_qt.py", line 455, in _draw_idle
self.draw()
File "C:\Users\user1\AppData\Roaming\Python\Python310\site-packages\matplotlib\backends\backend_agg.py", line 436, in draw
self.figure.draw(self.renderer)
File "C:\Users\user1\AppData\Roaming\Python\Python310\site-packages\matplotlib\artist.py", line 73, in draw_wrapper
result = draw(artist, renderer, *args, **kwargs)
File "C:\Users\user1\AppData\Roaming\Python\Python310\site-packages\matplotlib\artist.py", line 50, in draw_wrapper
return draw(artist, renderer)
File "C:\Users\user1\AppData\Roaming\Python\Python310\site-packages\matplotlib\figure.py", line 2803, in draw
mimage._draw_list_compositing_images(
File "C:\Users\user1\AppData\Roaming\Python\Python310\site-packages\matplotlib\image.py", line 132, in _draw_list_compositing_images
a.draw(renderer)
File "C:\Users\user1\AppData\Roaming\Python\Python310\site-packages\matplotlib\artist.py", line 50, in draw_wrapper
return draw(artist, renderer)
File "C:\Users\user1\AppData\Roaming\Python\Python310\site-packages\matplotlib\axes\_base.py", line 3020, in draw
self._unstale_viewLim()
File "C:\Users\user1\AppData\Roaming\Python\Python310\site-packages\matplotlib\axes\_base.py", line 776, in _unstale_viewLim
self.autoscale_view(**{f"scale{name}": scale
File "C:\Users\user1\AppData\Roaming\Python\Python310\site-packages\matplotlib\axes\_base.py", line 2932, in autoscale_view
handle_single_axis(
File "C:\Users\user1\AppData\Roaming\Python\Python310\site-packages\matplotlib\axes\_base.py", line 2895, in handle_single_axis
x0, x1 = locator.nonsingular(x0, x1)
File "C:\Users\user1\AppData\Roaming\Python\Python310\site-packages\matplotlib\ticker.py", line 1654, in nonsingular
return mtransforms.nonsingular(v0, v1, expander=.05)
File "C:\Users\user1\AppData\Roaming\Python\Python310\site-packages\matplotlib\transforms.py", line 2880, in nonsingular
if maxabsvalue < (1e6 / tiny) * np.finfo(float).tiny:
File "C:\Users\user1\AppData\Roaming\Python\Python310\site-packages\numpy\core\getlimits.py", line 387, in __new__
dtype = numeric.dtype(type(dtype))
TypeError: 'NoneType' object is not callable
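One way to narrow this down is to take the GUI backend out of the picture and record the exact library versions, since the failure surfaces inside numpy.core.getlimits during a Qt redraw. A hedged diagnostic sketch, assuming the data.csv from the question and that the interactive Qt backend is the trigger:

import numpy as np
import pandas as pd
import matplotlib

# Record the versions involved; the numpy/matplotlib pairing matters here.
print(np.__version__, pd.__version__, matplotlib.__version__)

# Force a non-interactive backend before pyplot is imported, so drawing does
# not happen inside the Qt event loop shown in the traceback above.
matplotlib.use("Agg")
import matplotlib.pyplot as plt

df = pd.read_csv("data.csv")

time = [0, 10, 20]
vel = [0, 1, 2]
fig, axes = plt.subplots()
axes.plot(time, vel)
# Save via the figure object; calling plt.savefig() after plt.show() can write
# an empty file because the current figure may already have been closed.
fig.savefig("pic.png")

If the Agg run succeeds, the problem is specific to the interactive backend (or the versions driving it), not to pandas reading the CSV.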
If I run the following code in pdb (i.e. with python -m pdb):
if __name__=='__main__':
    import pandas as pd
    df = pd.DataFrame([[0, 1, 2], [63, 146, 135]])
    df.plot.area()
it fails with a TypeError inside a numpy routine that's called by matplotlib:
> python -m pdb test_dtype.py
> /home/jhaiduce/financial/forecasting/test_dtype.py(1)<module>()
-> if __name__=='__main__':
(Pdb) r
QSocketNotifier: Can only be used with threads started with QThread
--Return--
> /home/jhaiduce/financial/forecasting/test_dtype.py(6)<module>()->None
-> df.plot.area()
(Pdb) c
Traceback (most recent call last):
File "/usr/lib64/python3.10/site-packages/numpy/core/getlimits.py", line 384, in __new__
dtype = numeric.dtype(dtype)
TypeError: 'NoneType' object is not callable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib64/python3.10/pdb.py", line 1726, in main
pdb._runscript(mainpyfile)
File "/usr/lib64/python3.10/pdb.py", line 1586, in _runscript
self.run(statement)
File "/usr/lib64/python3.10/bdb.py", line 597, in run
exec(cmd, globals, locals)
File "<string>", line 1, in <module>
File "/home/jhaiduce/financial/forecasting/test_dtype.py", line 6, in <module>
df.plot.area()
File "/usr/lib64/python3.10/site-packages/pandas/plotting/_core.py", line 1496, in area
return self(kind="area", x=x, y=y, **kwargs)
File "/usr/lib64/python3.10/site-packages/pandas/plotting/_core.py", line 972, in __call__
return plot_backend.plot(data, kind=kind, **kwargs)
File "/usr/lib64/python3.10/site-packages/pandas/plotting/_matplotlib/__init__.py", line 71, in plot
plot_obj.generate()
File "/usr/lib64/python3.10/site-packages/pandas/plotting/_matplotlib/core.py", line 294, in generate
self._post_plot_logic_common(ax, self.data)
File "/usr/lib64/python3.10/site-packages/pandas/plotting/_matplotlib/core.py", line 473, in _post_plot_logic_common
self._apply_axis_properties(ax.xaxis, rot=self.rot, fontsize=self.fontsize)
File "/usr/lib64/python3.10/site-packages/pandas/plotting/_matplotlib/core.py", line 561, in _apply_axis_properties
labels = axis.get_majorticklabels() + axis.get_minorticklabels()
File "/usr/lib64/python3.10/site-packages/matplotlib/axis.py", line 1201, in get_majorticklabels
ticks = self.get_major_ticks()
File "/usr/lib64/python3.10/site-packages/matplotlib/axis.py", line 1371, in get_major_ticks
numticks = len(self.get_majorticklocs())
File "/usr/lib64/python3.10/site-packages/matplotlib/axis.py", line 1277, in get_majorticklocs
return self.major.locator()
File "/usr/lib64/python3.10/site-packages/matplotlib/ticker.py", line 2113, in __call__
vmin, vmax = self.axis.get_view_interval()
File "/usr/lib64/python3.10/site-packages/matplotlib/axis.py", line 1987, in getter
return getattr(getattr(self.axes, lim_name), attr_name)
File "/usr/lib64/python3.10/site-packages/matplotlib/axes/_base.py", line 781, in viewLim
self._unstale_viewLim()
File "/usr/lib64/python3.10/site-packages/matplotlib/axes/_base.py", line 776, in _unstale_viewLim
self.autoscale_view(**{f"scale{name}": scale
File "/usr/lib64/python3.10/site-packages/matplotlib/axes/_base.py", line 2932, in autoscale_view
handle_single_axis(
File "/usr/lib64/python3.10/site-packages/matplotlib/axes/_base.py", line 2895, in handle_single_axis
x0, x1 = locator.nonsingular(x0, x1)
File "/usr/lib64/python3.10/site-packages/matplotlib/ticker.py", line 1654, in nonsingular
return mtransforms.nonsingular(v0, v1, expander=.05)
File "/usr/lib64/python3.10/site-packages/matplotlib/transforms.py", line 2880, in nonsingular
if maxabsvalue < (1e6 / tiny) * np.finfo(float).tiny:
File "/usr/lib64/python3.10/site-packages/numpy/core/getlimits.py", line 387, in __new__
dtype = numeric.dtype(type(dtype))
TypeError: 'NoneType' object is not callable
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> /usr/lib64/python3.10/site-packages/numpy/core/getlimits.py(387)__new__()
-> dtype = numeric.dtype(type(dtype))
(Pdb)
The error occurs only when run in the debugger; the program runs normally when run outside it.
Any idea what could be the cause of this?
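A hedged workaround sketch, assuming the debugger-only crash is tied to the interactive Qt backend (note the QSocketNotifier message in the pdb session above): select a non-interactive backend before plotting, so the figure is rendered without a GUI event loop even while pdb is attached.

if __name__ == '__main__':
    import matplotlib
    matplotlib.use('Agg')  # assumption: avoid the Qt backend while debugging
    import matplotlib.pyplot as plt
    import pandas as pd

    df = pd.DataFrame([[0, 1, 2], [63, 146, 135]])
    df.plot.area()
    plt.savefig('test_dtype.png')  # write to a file instead of opening a window

Running this under python -m pdb should at least tell you whether the backend is the variable, even if it does not explain the underlying numpy error.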
I want to save the data set as a parquet file, called power.parquet, and I use df.to_parquet(<filename>). But it gives me this error: "ValueError: Error converting column "Global_reactive_power" to bytes using encoding UTF8. Original error: bad argument type for built-in operation". I have installed the fastparquet package.
from fastparquet import write, ParquetFile
dat.to_parquet("power.parquet")
df_parquet = ParquetFile("power.parquet").to_pandas()
df_parquet.head() # Test your final value
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.9/site-packages/fastparquet/writer.py", line 259, in convert
out = array_encode_utf8(data)
File "fastparquet/speedups.pyx", line 50, in fastparquet.speedups.array_encode_utf8
TypeError: bad argument type for built-in operation
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/folders/4f/bm2th1p56tz4rq_zffc8g3940000gn/T/ipykernel_85477/3080656655.py", line 1, in <module>
dat.to_parquet("power.parquet", compression="GZIP")
File "/opt/anaconda3/lib/python3.9/site-packages/dask/dataframe/core.py", line 4560, in to_parquet
return to_parquet(self, path, *args, **kwargs)
File "/opt/anaconda3/lib/python3.9/site-packages/dask/dataframe/io/parquet/core.py", line 732, in to_parquet
return compute_as_if_collection(
File "/opt/anaconda3/lib/python3.9/site-packages/dask/base.py", line 315, in compute_as_if_collection
return schedule(dsk2, keys, **kwargs)
File "/opt/anaconda3/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/opt/anaconda3/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/opt/anaconda3/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/opt/anaconda3/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/opt/anaconda3/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/opt/anaconda3/lib/python3.9/site-packages/dask/utils.py", line 35, in apply
return func(*args, **kwargs)
File "/opt/anaconda3/lib/python3.9/site-packages/dask/dataframe/io/parquet/fastparquet.py", line 1167, in write_partition
rg = make_part_file(
File "/opt/anaconda3/lib/python3.9/site-packages/fastparquet/writer.py", line 716, in make_part_file
rg = make_row_group(f, data, schema, compression=compression,
File "/opt/anaconda3/lib/python3.9/site-packages/fastparquet/writer.py", line 701, in make_row_group
chunk = write_column(f, coldata, column,
File "/opt/anaconda3/lib/python3.9/site-packages/fastparquet/writer.py", line 554, in write_column
repetition_data, definition_data, encode[encoding](data, selement), 8 * b'\x00'
File "/opt/anaconda3/lib/python3.9/site-packages/fastparquet/writer.py", line 354, in encode_plain
out = convert(data, se)
File "/opt/anaconda3/lib/python3.9/site-packages/fastparquet/writer.py", line 284, in convert
raise ValueError('Error converting column "%s" to bytes using '
ValueError: Error converting column "Global_reactive_power" to bytes using encoding UTF8. Original error: bad argument type for built-in operation
I tried adding object_coding = "bytes". I want to solve this problem.
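One possible explanation, offered as an assumption rather than a confirmed diagnosis: Global_reactive_power was read in as an object column containing a mix of strings and non-strings, so fastparquet's UTF-8 encoder receives values it cannot encode. A small self-contained sketch of that failure mode and a numeric coercion that avoids it (the sample values are made up):

import pandas as pd

# Made-up stand-in for the real data: an object column mixing strings, a float
# and a '?' placeholder, the kind of column that can trip fastparquet's UTF-8
# encoding of object data.
dat = pd.DataFrame({"Global_reactive_power": ["0.418", "0.436", 0.498, "?"]})
print(dat.dtypes)  # object

# Coerce to a numeric dtype before writing; unparseable values such as '?'
# become NaN instead of reaching the bytes encoder.
dat["Global_reactive_power"] = pd.to_numeric(dat["Global_reactive_power"], errors="coerce")
dat.to_parquet("power.parquet", engine="fastparquet")

Note that the traceback goes through dask's to_parquet, so dat may actually be a dask DataFrame; the same coercion idea applies there, performed before writing.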
I have a unit test for an API that uses Marshmallow with Flask 1.1.2 and Python 3.8, as below.
schema.py
from marshmallow import Schema, fields

class ListSchema(Schema):
    id = fields.Integer()
    name = fields.String()
    gone_date = fields.Function(lambda data: data.gone_date.timestamp())
main_service.py
class MainService:
    @classmethod
    def get_list(cls):
        list_schema = ListSchema()
        list_data, total = Class.function_get_list_data()
        list_data = list_schema.dump(list_data, many=True)
        return list_data, total
When I run pytest for the get_list function:
mock_return = [(1, 'Name_1', datetime.datetime(2020, 8, 20, 0, 0))], 10
mock_data = mock.Mock(return_value=mock_return)
with mock.patch.object(Class, 'function_get_list_data', mock_data):
    response = MainService.get_list()
I always get an error from the lambda function in the schema:
AttributeError: 'tuple' object has no attribute 'gone_date'
How can I pass or test the lambda function in this case? I have tried catching the error and raising unittest.skip, but that is not what I want. My function works normally; the error only appears when running the unit test. Thanks for the help.
Edit:
This is the traceback:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/unittest/case.py", line 60, in testPartExecutor
yield
File "/usr/local/lib/python3.8/unittest/case.py", line 676, in run
self._callTestMethod(testMethod)
File "/usr/local/lib/python3.8/unittest/case.py", line 633, in _callTestMethod
method()
File "/opt/project/app/test/service/test_main_service.py", line 79, in test_process_list
response = MainService.get_list()
File "/opt/project/app/main/service/main_service.py", line 18, in process_main_list
list_data = list_schema.dump(list_data, many=True)
File "/usr/local/lib/python3.8/site-packages/marshmallow/schema.py", line 557, in dump
result = self._serialize(processed_obj, many=many)
File "/usr/local/lib/python3.8/site-packages/marshmallow/schema.py", line 515, in _serialize
return [
File "/usr/local/lib/python3.8/site-packages/marshmallow/schema.py", line 516, in <listcomp>
self._serialize(d, many=False)
File "/usr/local/lib/python3.8/site-packages/marshmallow/schema.py", line 521, in _serialize
value = field_obj.serialize(attr_name, obj, accessor=self.get_attribute)
File "/usr/local/lib/python3.8/site-packages/marshmallow/fields.py", line 312, in serialize
return self._serialize(value, attr, obj, **kwargs)
File "/usr/local/lib/python3.8/site-packages/marshmallow/fields.py", line 1722, in _serialize
return self._call_or_raise(self.serialize_func, obj, attr)
File "/usr/local/lib/python3.8/site-packages/marshmallow/fields.py", line 1736, in _call_or_raise
return func(value)
File "/opt/project/app/main/schemas/schema.py", line 13, in <lambda>
gone_date = fields.Function(lambda data: data.gone_date.timestamp())
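A sketch of one way to make the mock compatible with the schema, assuming Class and MainService from the question are importable in the test module: return rows that expose id, name and gone_date as attributes (a namedtuple keeps them tuple-like), so the lambda's data.gone_date lookup succeeds.

import datetime
from collections import namedtuple
from unittest import mock

# Attribute-style rows: the Function field's lambda calls data.gone_date,
# which a plain tuple does not provide.
Row = namedtuple("Row", ["id", "name", "gone_date"])
mock_return = [Row(1, "Name_1", datetime.datetime(2020, 8, 20, 0, 0))], 10
mock_data = mock.Mock(return_value=mock_return)

with mock.patch.object(Class, "function_get_list_data", mock_data):
    response = MainService.get_list()

Alternatively, if the real function_get_list_data returns ORM objects, a small stub class (or mock.Mock(gone_date=...)) with the same attributes works just as well.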
Is there a way to force pandas to write an empty DataFrame to an HDF file?
import pandas as pd
df = pd.DataFrame(columns=['x','y'])
df.to_hdf('temp.h5', 'xxx')
df2 = pd.read_hdf('temp.h5', 'xxx')
Output:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../Python-3.6.3/lib/python3.6/site-packages/pandas/io/pytables.py", line 389, in read_hdf
return store.select(key, auto_close=auto_close, **kwargs)
File ".../Python-3.6.3/lib/python3.6/site-packages/pandas/io/pytables.py", line 740, in select
return it.get_result()
File ".../Python-3.6.3/lib/python3.6/site-packages/pandas/io/pytables.py", line 1518, in get_result
results = self.func(self.start, self.stop, where)
File ".../Python-3.6.3/lib/python3.6/site-packages/pandas/io/pytables.py", line 733, in func
columns=columns)
File ".../Python-3.6.3/lib/python3.6/site-packages/pandas/io/pytables.py", line 2986, in read
idx=i), start=_start, stop=_stop)
File ".../Python-3.6.3/lib/python3.6/site-packages/pandas/io/pytables.py", line 2575, in read_index
_, index = self.read_index_node(getattr(self.group, key), **kwargs)
File ".../Python-3.6.3/lib/python3.6/site-packages/pandas/io/pytables.py", line 2676, in read_index_node
data = node[start:stop]
File ".../Python-3.6.3/lib/python3.6/site-packages/tables/vlarray.py", line 675, in __getitem__
return self.read(start, stop, step)
File ".../Python-3.6.3/lib/python3.6/site-packages/tables/vlarray.py", line 811, in read
listarr = self._read_array(start, stop, step)
File "tables/hdf5extension.pyx", line 2106, in tables.hdf5extension.VLArray._read_array (tables/hdf5extension.c:24649)
ValueError: cannot set WRITEABLE flag to True of this array
Writing with format='table':
import pandas as pd
df = pd.DataFrame(columns=['x','y'])
df.to_hdf('temp.h5', 'xxx', format='table')
df2 = pd.read_hdf('temp.h5', 'xxx')
Output:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../Python-3.6.3/lib/python3.6/site-packages/pandas/io/pytables.py", line 389, in read_hdf
return store.select(key, auto_close=auto_close, **kwargs)
File ".../Python-3.6.3/lib/python3.6/site-packages/pandas/io/pytables.py", line 722, in select
raise KeyError('No object named {key} in the file'.format(key=key))
KeyError: 'No object named xxx in the file'
Pandas version: 0.24.2
Thank you for your help!
Putting an empty DataFrame into an HDFStore in fixed format should work (you may need to check the versions of other packages, e.g. tables):
# Versions
import pandas as pd
import tables

pd.__version__
tables.__version__

# DF
df = pd.DataFrame(columns=['x', 'y'])
df

# Dump in fixed format
with pd.HDFStore('temp.h5') as store:
    store.put('df', df, format='f')
    print('Read:')
    store.select('df')
>>> '0.24.2'
>>> '3.5.1'
>>> x y
>>>
>>> Read:
>>> x y
PyTables really forbids doing this (at least it used to), but for the fixed format pandas has its own workaround.
As discussed in the same GitHub issue, some efforts have been made to fix this behavior for the table format as well, but it looks like the solution is still up in the air (at least it was at the end of March).
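Until the table-format behavior is settled, a hedged sketch of a defensive read for that case: if the empty frame was silently not written, fall back to an empty frame with the expected columns instead of letting the KeyError propagate.

import pandas as pd

df = pd.DataFrame(columns=['x', 'y'])
df.to_hdf('temp.h5', key='xxx', format='table')  # may store nothing for an empty frame

with pd.HDFStore('temp.h5') as store:
    if 'xxx' in store:
        df2 = store.select('xxx')
    else:
        # Assumed fallback: rebuild the empty frame with the known columns.
        df2 = pd.DataFrame(columns=['x', 'y'])

print(df2)

This does not make the table format round-trip an empty DataFrame; it only keeps downstream code working until pandas/PyTables resolve the issue.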