I'm using the following code:
import connectorx as cx
cx_con = f"sqlite:C:\\Users\\X\\PycharmProjects\\Bot\\Trading Data\\{get_trading_database()}.db"
df = cx.read_sql(conn=cx_con, query="SELECT * FROM daily_quotes;", partition_on="quoteTimeInLong")
and getting the following error when I add the "partition_on" argument:
Traceback (most recent call last):
  File "C:\Users\X\PycharmProjects\Bot\main.py", line 1065, in
    cx_df_dict = read_symbol_data_from_db(query_type='cx').values()
  File "C:\Users\X\PycharmProjects\Bot\main.py", line 499, in read_symbol_data_from_db
    df = cx.read_sql(conn=cx_con,
  File "C:\Users\X\PycharmProjects\venv\lib\site-packages\connectorx\__init__.py", line 224, in read_sql
    result = _read_sql(
TypeError: argument 'partition_query': Unable to convert key: num
The query works just fine if I omit partition_on, but from my understanding I can speed things up further if I add this argument.
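For reference, connectorx's read_sql also takes a partition_num argument alongside partition_on (and optionally partition_range), and the "Unable to convert key: num" message looks consistent with partition_num being left out. A minimal sketch with that added (the path and the value 4 are placeholders, and treating the missing partition_num as the cause is an assumption):

import connectorx as cx

# Placeholder connection string; swap in the real database path.
cx_con = "sqlite:C:\\path\\to\\trading_data.db"

df = cx.read_sql(
    conn=cx_con,
    query="SELECT * FROM daily_quotes;",
    partition_on="quoteTimeInLong",  # numeric column used to split the query
    partition_num=4,                 # example value: number of parallel partitions
)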
I have defined a new derived dimension with
[molar_energy] = [energy] / [substance]
However, if I do the following it complains:
>>> UR.get_compatible_units('[molar_energy]')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/cedric/.local/share/virtualenvs/MatDB--uGOYMXa/lib/python3.9/site-packages/pint/registry.py", line 881, in get_compatible_units
equiv = self._get_compatible_units(input_units, group_or_system)
File "/Users/cedric/.local/share/virtualenvs/MatDB--uGOYMXa/lib/python3.9/site-packages/pint/registry.py", line 2082, in _get_compatible_units
ret = super()._get_compatible_units(input_units, group_or_system)
File "/Users/cedric/.local/share/virtualenvs/MatDB--uGOYMXa/lib/python3.9/site-packages/pint/registry.py", line 1835, in _get_compatible_units
ret = super()._get_compatible_units(input_units, group_or_system)
File "/Users/cedric/.local/share/virtualenvs/MatDB--uGOYMXa/lib/python3.9/site-packages/pint/registry.py", line 891, in _get_compatible_units
return self._cache.dimensional_equivalents[src_dim]
KeyError: <UnitsContainer({'[length]': 2, '[mass]': 1, '[substance]': -1, '[time]': -2})>
I saw that there is a conversion included in a context, but I don't use it. What am I doing wrong?
Thanks for your help
PS: logged issue https://github.com/hgrecco/pint/issues/1418
Just leaving the solution here for anyone who faces this issue as well.
I just added a made-up unit and it worked
# Molar Energy
[molar_energy] = [energy] / [substance]
mol_en = J / mol
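A minimal sketch of how this fits together, assuming the two definition lines above are saved in a file named molar_energy_defs.txt (a hypothetical name):

import pint

ureg = pint.UnitRegistry()

# Load the derived dimension plus the made-up unit that carries it.
ureg.load_definitions("molar_energy_defs.txt")

# Once a concrete unit with that dimensionality exists, the lookup succeeds
# instead of raising the KeyError.
print(ureg.get_compatible_units("[molar_energy]"))
# e.g. frozenset({<Unit('mol_en')>})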
I am trying to fetch a value from a cell using the code below:
import openpyxl
wb=openpyxl.load_workbook(r'C:\Users\xyz\Desktop\xxx.xlsx')
sheet=wb.active
v = sheet.cell(9,9).value
print(v)
When I execute the code, I get the following error:
Warning (from warnings module):
File "C:\Users\XYZ\AppData\Local\Programs\Python\Python36-32\lib\site-packages\openpyxl\worksheet\worksheet.py", line 303
warn("Using a coordinate with ws.cell is deprecated. Use ws[coordinate] instead")
UserWarning: Using a coordinate with ws.cell is deprecated. Use ws[coordinate] instead
Traceback (most recent call last):
File "C:/Users/XYZ/Desktop/test.py", line 14, in <module>
v = sheet.cell(9,9).value
File "C:\Users\XYZ\AppData\Local\Programs\Python\Python36-32\lib\site-packages\openpyxl\worksheet\worksheet.py", line 304, in cell
row, column = coordinate_to_tuple(coordinate)
File "C:\Users\XYZ\AppData\Local\Programs\Python\Python36-32\lib\site-packages\openpyxl\utils\cell.py", line 185, in coordinate_to_tuple
col, row = coordinate_from_string(coordinate)
File "C:\Users\XYZ\AppData\Local\Programs\Python\Python36-32\lib\site-packages\openpyxl\utils\cell.py", line 45, in coordinate_from_string
match = COORD_RE.match(coord_string.upper())
AttributeError: 'int' object has no attribute 'upper'
Can anyone tell me what is wrong with my program?
You need to use keyword arguments:
v = sheet.cell(row=9, column=9).value
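In the openpyxl version shown in the traceback, the first positional parameter of cell() is the coordinate string, so the bare 9 ends up in coordinate_from_string, which then calls .upper() on an int. With keyword arguments the integers land on row and column; coordinate-style access works as well. A short sketch (column 9 corresponds to column "I"):

import openpyxl

wb = openpyxl.load_workbook(r'C:\Users\xyz\Desktop\xxx.xlsx')
sheet = wb.active

# Keyword arguments: row and column as integers.
v = sheet.cell(row=9, column=9).value

# Equivalent coordinate access, as the deprecation warning suggests.
v = sheet['I9'].value
print(v)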
How should I fix this?
import pandas as pd
csv_file = 'sample.csv'
count = 1
my_filtered_csv = pd.read_csv(csv_file, usecols=['subDirectory_filePath', 'expression'])
emotion_map = { '0':'6', '1':'3', '2':'4', '3':'5', '4':'2', '5':'1', '6':'0'}
my_filtered_csv['expression'] = my_filtered_csv['expression'].replace(emotion_map)
print(my_filtered_csv)
Error is:
Traceback (most recent call last):
File "/Users/mona/CS585/project/affnet/emotion_map.py", line 11, in <module>
my_filtered_csv['expression'] = my_filtered_csv['expression'].replace(emotion_map)
File "/Users/mona/anaconda/lib/python3.6/site-packages/pandas/core/generic.py", line 3836, in replace
limit=limit, regex=regex)
File "/Users/mona/anaconda/lib/python3.6/site-packages/pandas/core/generic.py", line 3885, in replace
regex=regex)
File "/Users/mona/anaconda/lib/python3.6/site-packages/pandas/core/internals.py", line 3259, in replace_list
masks = [comp(s) for i, s in enumerate(src_list)]
File "/Users/mona/anaconda/lib/python3.6/site-packages/pandas/core/internals.py", line 3259, in <listcomp>
masks = [comp(s) for i, s in enumerate(src_list)]
File "/Users/mona/anaconda/lib/python3.6/site-packages/pandas/core/internals.py", line 3247, in comp
return _maybe_compare(values, getattr(s, 'asm8', s), operator.eq)
File "/Users/mona/anaconda/lib/python3.6/site-packages/pandas/core/internals.py", line 4619, in _maybe_compare
raise TypeError("Cannot compare types %r and %r" % tuple(type_names))
TypeError: Cannot compare types 'ndarray(dtype=int64)' and 'str'
Process finished with exit code 1
A few lines of the CSV file look like:
,subDirectory_filePath,expression
0,689/737db2483489148d783ef278f43f486c0a97e140fc4b6b61b84363ca.jpg,1
1,392/c4db2f9b7e4b422d14b6e038f0cdc3ecee239b55326e9181ee4520f9.jpg,0
2,468/21772b68dc8c2a11678c8739eca33adb6ccc658600e4da2224080603.jpg,0
3,944/06e9ae8d3b240eb68fa60534783eacafce2def60a86042f9b7d59544.jpg,1
4,993/02e06ee5521958b4042dd73abb444220609d96f57b1689abbe87c024.jpg,8
5,979/f675c6a88cdef99a6d8b0261741217a0319387fcf1571a174f99ac81.jpg,6
6,637/94b769d8e880cbbea8eaa1350cb8c094a03d27f9fef44e1f4c0fb2ae.jpg,9
7,997/b81f843f08ce3bb0c48b270dc58d2ab8bf5bea3e2262e50bbcadbec2.jpg,6
8,358/21a32dd1c1ecd57d3e8964621c911df1c0b3348a4ae5203b4a243230.JPG,9
Changing the emotion_map to the following fixed the problem:
emotion_map = { 0:6, 1:3, 2:4, 3:5, 4:2, 5:1, 6:0}
Another possibility that can produce this error: you have already run this code once, so the data has already been replaced.
To solve this, go back and reload the data set before running the replacement again.
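If you would rather keep the string-keyed map, another option (a sketch, not part of the original answer) is to force the column to be read as strings so the dtypes match:

import pandas as pd

# Read 'expression' as strings so the string keys in the map line up with the column dtype.
my_filtered_csv = pd.read_csv(
    'sample.csv',
    usecols=['subDirectory_filePath', 'expression'],
    dtype={'expression': str},
)

emotion_map = {'0': '6', '1': '3', '2': '4', '3': '5', '4': '2', '5': '1', '6': '0'}
my_filtered_csv['expression'] = my_filtered_csv['expression'].replace(emotion_map)
print(my_filtered_csv)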
I am trying to train some data for a classification tool I am building. I have done some simple examples and they work fine.
I am now trying to use some data from work (which is what it will be used on), and I am getting a TypeError: 'float' object is not iterable error. The traceback is here:
Traceback (most recent call last):
File "C:/Users/nicholas/Desktop/machineTraining/classLearning.py", line 13, in <module>
cl = NaiveBayesClassifier(train)
File "C:\Users\nicholas\AppData\Local\Programs\Python\Python36-32\lib\site-packages\textblob\classifiers.py", line 205, in __init__
super(NLTKClassifier, self).__init__(train_set, feature_extractor, format, **kwargs)
File "C:\Users\nicholas\AppData\Local\Programs\Python\Python36-32\lib\site-packages\textblob\classifiers.py", line 139, in __init__
self._word_set = _get_words_from_dataset(self.train_set) #Keep a hidden set of unique words.
File "C:\Users\nicholas\AppData\Local\Programs\Python\Python36-32\lib\site-packages\textblob\classifiers.py", line 63, in _get_words_from_dataset
return set(all_words)
This is my code:
import pandas as pd
from textblob.classifiers import NaiveBayesClassifier

df = pd.read_csv("C:/Users/nicholas/Desktop/trainData.csv", encoding='latin-1')
df['train'] = df[['Summary', 'Primary Classification']].apply(tuple, axis=1)
aTrain = df['train'].values.tolist()
train = aTrain
cl = NaiveBayesClassifier(train)
Any ideas on what is going wrong?
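A common cause of 'float' object is not iterable at that point in textblob is NaN in the text column, since pandas fills empty cells with float NaN. A sketch of filtering those rows out before building the training list, assuming that is what is happening with this data:

import pandas as pd
from textblob.classifiers import NaiveBayesClassifier

df = pd.read_csv("C:/Users/nicholas/Desktop/trainData.csv", encoding='latin-1')

# Empty cells come back as float NaN, which textblob cannot tokenize,
# so drop rows where either column is missing before building the tuples.
df = df.dropna(subset=['Summary', 'Primary Classification'])

df['train'] = df[['Summary', 'Primary Classification']].apply(tuple, axis=1)
train = df['train'].values.tolist()
cl = NaiveBayesClassifier(train)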
I have a document:
class Hamburger(Document):
size = IntField(default=0, required=True)
which I can use fine
h = Hamburger()
h.size = 5
h.save()
until I try an update_one, for example:
Hamburger.objects().update_one(set__size=5)
which throws this exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/mongoengine/queryset/base.py", line 467, in update_one
upsert=upsert, multi=False, write_concern=write_concern, **update)
File "/usr/local/lib/python2.7/site-packages/mongoengine/queryset/base.py", line 430, in update
update = transform.update(queryset._document, **update)
File "/usr/local/lib/python2.7/site-packages/mongoengine/queryset/transform.py", line 207, in update
field = cleaned_fields[-1]
IndexError: list index out of range
Is it possible to have a Document with a field called size? Is there any way to achieve this?
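One workaround I'm aware of (not from the original post, so treat it as a sketch) is to bypass mongoengine's field-name transform, which trips over size because it matches a query operator name, by issuing the update as a raw MongoDB document:

# Sketch: pass the update as a raw document so 'size' is treated as a plain
# field name rather than being parsed by mongoengine's transform step.
Hamburger.objects().update_one(__raw__={'$set': {'size': 5}})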