I have a concerning problem with this API. I use it to perform six different queries, one after the other, and between the queries I save the resulting pandas DataFrames to CSV files.
After this phase, I read the CSV files back and perform some operations (I replace the NaN values in the column called '_value' with 0). The problem is that in some executions of the code the CSV files seem to be missing some columns, or are not filled at all. The code then raises the following error:
Traceback (most recent call last):
File "Path\code.py", line 136, in <module>
fd['_value'] = fd["_value"].fillna(0)
File "Path\Anaconda3\envs\tf\lib\site-packages\pandas\core\frame.py", line 3505, in __getitem__
indexer = self.columns.get_loc(key)
File "Path\Anaconda3\envs\tf\lib\site-packages\pandas\core\indexes\base.py", line 3631, in get_loc
raise KeyError(key) from err
KeyError: '_value'
Exception ignored in: <function InfluxDBClient.__del__ at 0x000002723CB52170>
Traceback (most recent call last):
File "Path\Anaconda3\envs\tf\lib\site-packages\influxdb_client\client\influxdb_client.py", line 284, in __del__
File "Path\Anaconda3\envs\tf\lib\site-packages\influxdb_client\_sync\api_client.py", line 84, in __del__
File "Path\Anaconda3\envs\tf\lib\site-packages\influxdb_client\_sync\api_client.py", line 661, in _signout
TypeError: 'NoneType' object is not callable
Exception ignored in: <function ApiClient.__del__ at 0x000002723CB53BE0>
Traceback (most recent call last):
File "Path\Anaconda3\envs\tf\lib\site-packages\influxdb_client\_sync\api_client.py", line 84, in __del__
File "Path\Anaconda3\envs\tf\lib\site-packages\influxdb_client\_sync\api_client.py", line 661, in _signout
TypeError: 'NoneType' object is not callable
I don't know how to solve this problem, or why it occurs only in some runs. Below is the code that performs the queries.
from influxdb_client import InfluxDBClient
client = InfluxDBClient(
url=url,
token=token,
org=org
)
query_api = client.query_api()
# run each Flux query and save the resulting DataFrame to CSV
df = query_api.query_data_frame(query_1)
df.to_csv('path_to_file/name1.csv')
df = query_api.query_data_frame(query_2)
df.to_csv('path_to_file/name2.csv')
df = query_api.query_data_frame(query_3)
df.to_csv('path_to_file/name3.csv')
df = query_api.query_data_frame(query_4)
df.to_csv('path_to_file/name4.csv')
df = query_api.query_data_frame(query_5)
df.to_csv('path_to_file/name5.csv')
df = query_api.query_data_frame(query_6)
df.to_csv('path_to_file/name6.csv')
client.close()
To write this code I followed the examples in the influxdb-client-python repository on GitHub ("Queries with pandas DataFrame").
Should I do something like: open the client -> query -> save file -> close the client -> repeat?
EDIT: open the client -> query -> save file -> close the client -> repeat did not solve anything.
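For what it's worth, query_data_frame can return a single DataFrame, a list of DataFrames (one per result table), or an empty frame when the query matches nothing, so the '_value' column is not guaranteed to exist. A defensive sketch (the save_query helper and the warning message are illustrations, not part of the client API):
import pandas as pd

def save_query(query_api, query, path):
    # query_data_frame returns a list when the query yields several tables
    df = query_api.query_data_frame(query)
    if isinstance(df, list):
        df = pd.concat(df, ignore_index=True)
    if df.empty or '_value' not in df.columns:
        print('warning: no _value data for', path)
    df.to_csv(path, index=False)
and on the read side, guard the fillna the same way:
fd = pd.read_csv('path_to_file/name1.csv')
if '_value' in fd.columns:
    fd['_value'] = fd['_value'].fillna(0)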
I'm trying to create an xlsm file using xlwings, or openpyxl if it's not possible with xlwings.
I'm on Mac, so I can't use PyWin32.
I'm running the following Python code:
import xlwings as xw
b = xw.Book()
b.save('test_book2.xlsm')
When I run that code, Excel shows a warning dialog. When I click 'yes' in that dialog, I receive the following error:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/aeosa/appscript/reference.py", line 482, in __call__
return self.AS_appdata.target().event(self._code, params, atts, codecs=self.AS_appdata).send(timeout, sendflags)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/aeosa/aem/aemsend.py", line 92, in send
raise EventError(errornum, errormsg, eventresult)
aem.aemsend.EventError: Command failed: Parameter error. (-50)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/matthewbell/Desktop/test.py", line 4, in <module>
b.save('test_book2.xlsm')
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/xlwings/main.py", line 704, in save
return self.impl.save(path)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/xlwings/_xlmac.py", line 244, in save
self.xl.save_workbook_as(filename=hfs_path, overwrite=True)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/aeosa/appscript/reference.py", line 518, in __call__
raise CommandError(self, (args, kargs), e, self.AS_appdata) from e
appscript.reference.CommandError: Command failed:
OSERROR: -50
MESSAGE: Parameter error.
COMMAND: app(pid=6198).workbooks['Book1'].save_workbook_as(filename='Macintosh HD:Users:matthewbell:Desktop:test_book2.xlsm', overwrite=True)
What can I do to create an xlsm file?
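If xlwings keeps failing, one possible openpyxl route is to round-trip an existing macro-enabled workbook, since openpyxl can preserve VBA but not create it from scratch. A sketch, assuming you have a template .xlsm to start from (template.xlsm is a placeholder name):
from openpyxl import load_workbook

# keep_vba=True keeps the existing VBA project intact while openpyxl
# rewrites the rest of the workbook
wb = load_workbook('template.xlsm', keep_vba=True)
ws = wb.active
ws['A1'] = 'hello'            # modify the workbook as needed
wb.save('test_book2.xlsm')    # must keep the .xlsm extension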
I have a Pandas dataframe and want to compute bigrams with the following code:
from nltk import bigrams
df['tweet_bigrams'] = df['tweet_tokenized'].apply(lambda x: list(bigrams(x)))
It was working fine in Jupyter. However, when I run it from a Linux terminal, I keep receiving the following error:
Traceback (most recent call last):
File "/usr/licensed/anaconda3/5.3.1/lib/python3.7/site-packages/nltk/util.py", line 468, in ngrams
history.append(next(sequence))
StopIteration
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "url_tweet_feature_extraction.py", line 143, in <module>
df['tweet_bigrams'] = df['tweet_tokenized'].apply(lambda x: list(bigrams(x)))
File "/usr/licensed/anaconda3/5.3.1/lib/python3.7/site-packages/pandas/core/series.py", line 3194, in apply
mapped = lib.map_infer(values, f, convert=convert_dtype)
File "pandas/_libs/src/inference.pyx", line 1472, in pandas._libs.lib.map_infer
File "url_tweet_feature_extraction.py", line 143, in <lambda>
df['tweet_bigrams'] = df['tweet_tokenized'].apply(lambda x: list(bigrams(x)))
File "/usr/licensed/anaconda3/5.3.1/lib/python3.7/site-packages/nltk/util.py", line 491, in bigrams
for item in ngrams(sequence, 2, **kwargs):
RuntimeError: generator raised StopIteration
Any idea on how to resolve this?
Update your NLTK. You need version 3.4 (or higher, for future readers). Older versions relied on StopIteration escaping from generators to end iteration, a behaviour PEP 479 turned into a RuntimeError in Python 3.7.
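For context, here is a minimal reproduction of the Python 3.7 behaviour change (PEP 479) behind the error; old nltk.util.ngrams used a pattern much like this:
# From Python 3.7 on, a StopIteration that escapes a generator body is
# re-raised as RuntimeError instead of silently ending the iteration.
def old_style_pairs(sequence):
    it = iter(sequence)
    first = next(it)  # on an empty sequence: StopIteration -> RuntimeError
    for second in it:
        yield first, second
        first = second

list(old_style_pairs([]))  # RuntimeError: generator raised StopIteration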
I wrote a program that takes data and plots it via matplotlib. It works without a problem on one computer, but on the other I get the error:
TypeError: Couldn't find foreign struct converter for 'cairo.Context'
Traceback (most recent call last):
File "/usr/lib/python3.5/idlelib/run.py", line 125, in main
seq, request = rpc.request_queue.get(block=True, timeout=0.05)
File "/usr/lib/python3.5/queue.py", line 172, in get raise Empty queue.Empty
What can I do to get a cairo.Context converter?
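Not an answer to the missing converter itself, but a possible workaround while the cairo GObject bindings get sorted out: select a matplotlib backend that does not go through cairo, before pyplot is imported (whether TkAgg is available depends on the machine):
import matplotlib
matplotlib.use('TkAgg')  # or 'Agg' for file output without a display
import matplotlib.pyplot as plt  # import pyplot only after selecting the backend

plt.plot([1, 2, 3])
plt.savefig('plot.png')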
Is there any way to fetch data from a database chunkwise?
I have around 30 million rows in my database, so reading them all at once causes big memory usage.
I tried this with pandas version 0.17.1:
for sd in psql.read_sql(sql, myconn, chunksize=100):
    print sd
but it throws:
/usr/bin/python2.7 /home/subin/PythonIDE/workspace/python/pygram.py
Traceback (most recent call last):
File "/home/subin/PythonIDE/workspace/python/pygram.py", line 20, in <module>
for sd in psql.read_sql(sql,myconn,chunksize=100):
File "/usr/lib/python2.7/dist-packages/pandas/io/sql.py", line 1565, in _query_iterator
parse_dates=parse_dates)
File "/usr/lib/python2.7/dist-packages/pandas/io/sql.py", line 137, in _wrap_result
coerce_float=coerce_float)
File "/usr/lib/python2.7/dist-packages/pandas/core/frame.py", line 969, in from_records
coerce_float=coerce_float)
File "/usr/lib/python2.7/dist-packages/pandas/core/frame.py", line 5279, in _to_arrays
dtype=dtype)
File "/usr/lib/python2.7/dist-packages/pandas/core/frame.py", line 5357, in _list_to_arrays
content = list(lib.to_object_array_tuples(data).T)
TypeError: Argument 'rows' has incorrect type (expected list, got tuple)
Please help me.
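The traceback points at pandas' legacy DBAPI code path while it assembles a chunk, so one commonly suggested workaround is to hand read_sql a SQLAlchemy engine instead of a raw connection. A sketch with a placeholder connection URL (adjust the dialect, credentials, and database name to your setup; sql is the query string from the question):
from sqlalchemy import create_engine
import pandas.io.sql as psql

# placeholder URL -- substitute your own driver, credentials and database
engine = create_engine('mysql+pymysql://user:password@localhost/mydb')

for sd in psql.read_sql(sql, engine, chunksize=100):
    print(sd)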
I could successfully connect to reddit's servers with OAuth2 some time ago, but when running my script just now, I get a KeyError followed by a NoSectionError. The code is below, followed by the exceptions (the code has been reduced to its essentials).
import praw
# Configuration
APP_UA = 'useragent'
...
...
...
r = praw.Reddit(APP_UA)
Error message:
Traceback (most recent call last):
File "D:\Directory\Python\lib\configparser.py", line 843, in items
d.update(self._sections[section])
KeyError: 'useragent'
A NoSectionError occurred when handling the above exception:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Directory\Python\Projects\myprj for Reddit, globaloffensive\oddshotcrawler.py", line 19, in <module>
r = praw.Reddit(APP_UA)
File "D:\Directory\Python\lib\site-packages\praw\reddit.py", line 84, in __init__
**config_settings)
File "D:\Directory\Python\lib\site-packages\praw\config.py", line 47, in __init__
raw = dict(Config.CONFIG.items(site_name), **settings)
File "D:\Directory\Python\lib\configparser.py", line 846, in items
raise NoSectionError(section)
configparser.NoSectionError: No section: 'useragent'
[Finished in 0.2s]
Try giving it a user_agent keyword argument. A bare positional argument to praw.Reddit is interpreted as a praw.ini site name, which is why configparser goes looking for a section called 'useragent'.
r = praw.Reddit(user_agent=APP_UA)
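In PRAW 4+, the remaining OAuth2 credentials are passed as keyword arguments in the same way; a minimal sketch with placeholder values:
import praw

r = praw.Reddit(
    client_id='YOUR_CLIENT_ID',          # placeholder
    client_secret='YOUR_CLIENT_SECRET',  # placeholder
    user_agent='myapp by /u/yourname',   # descriptive user agent string
)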