I would like to use the in_ operator in SQLAlchemy with two values, one of them being NULL (MySQL NULL), but I don't know how to pass it via Python.
I have a Python CGI script that takes a bunch of parameters, formats them, and finally stores them inside a dict, queryValues (the key being the column name and the value being a value sent by the user via a FieldStorage).
for attr, value in queryValues.items():  # queryValues is a dict of parameters
    valueWithNone = value.append(None)  # I want to include NULL
    and_args_of_cr = [(and_(getattr(TableCR.c, attr).in_(valueWithNone)))]
I tried None and sqlalchemy.sql.null(), and also tried passing in_(value, None) directly, but value has the form ['Yes'], so I don't know how to do this.
But it's not working, how can I do this please?
The line value.append(None) modifies the list in place and returns None, so valueWithNone will be None. This is probably what you're after:
for attr, value in queryValues.items():
    queryvalues = value[:]  # create a copy of the list
    queryvalues.append(None)
    and_args_of_cr = [(and_(getattr(TableCR.c, attr).in_(queryvalues)))]
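The copy-then-append pattern matters because list.append returns None; a minimal standalone sketch of both the pitfall and the fix:

```python
value = ['Yes']

# list.append mutates in place and returns None, so assigning
# its result loses the list entirely
broken = value.append(None)
print(broken)  # None
value.pop()    # undo the mutation for the rest of the demo

# copy first, then append: the original list stays untouched
value_with_none = value[:]
value_with_none.append(None)
print(value_with_none)  # ['Yes', None]
print(value)            # ['Yes']
```

One caveat worth knowing: in SQL, `col IN (..., NULL)` never matches NULL rows, so if matching NULLs is the real goal you will likely also need an `IS NULL` clause (e.g. combining `in_` with `col == None` via `or_`).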
I'm adding a search feature to my application (created using PyQt5) that will allow the user to search an archive table in the database. I've provided applicable fields for the user to choose to match rows with. I'm having some trouble making the query filter use only the fields the user actually provided, given that the other fields will be empty strings.
Here's what I have so far:
def search_for_order(pierre):
    fields = {'archive.pat.firstname': pierre.search_firstname.text(),
              'archive.pat.lastname': pierre.search_lastname.text(),
              'archive.pat.address': pierre.search_address.text(),
              'archive.pat.phone': pierre.search_phone.text(),
              'archive.compound.compname': pierre.search_compname.text(),
              'archive.compound.compstrength': pierre.search_compstrength.text(),
              'archive.compound.compform': pierre.search_compform.currentText(),
              'archive.doc.lastname': pierre.search_doctor.text(),
              'archive.clinic.clinicname': pierre.search_clinic.text()
              }
    filters = {}
    for field, value in fields.items():
        if value is not '':
            filters[field] = value
    query = session.query(Archive).join(Patient, Prescribers, Clinic, Compound)\
        .filter(and_(field == value for field, value in filters.items())).all()
The fields dictionary collects the values of all the fields in the search form. Some of them will be blank, resulting in empty strings. filters is intended to be a dictionary of the object names and the value to match that.
The problem lies in the definition of the expressions within your and_ conjunction. As it stands, you're comparing each field name (a plain string) with the corresponding value, which of course evaluates to False for every comparison.
To properly populate the and_ conjunction you have to create a list of what SQLAlchemy calls BinaryExpression objects.
In order to do so I'd change your code like this:
1) First use actual references to your table classes in your definition of fields:
fields = {
    (Patient, 'firstname'): pierre.search_firstname.text(),
    (Patient, 'lastname'): pierre.search_lastname.text(),
    (Patient, 'address'): pierre.search_address.text(),
    (Patient, 'phone'): pierre.search_phone.text(),
    (Compound, 'compname'): pierre.search_compname.text(),
    (Compound, 'compstrength'): pierre.search_compstrength.text(),
    (Compound, 'compform'): pierre.search_compform.currentText(),
    (Prescribers, 'lastname'): pierre.search_doctor.text(),
    (Clinic, 'clinicname'): pierre.search_clinic.text()
}
2) Define filters as a list instead of a dictionary:
filters = list()
3) To populate the filters list, unpack the (table, fieldname) tuple used as the key in the fields dictionary and combine it with the value to create new tuples, now with three elements. Append each of the newly created tuples to the list of filters:
for table_field, value in fields.items():
    table, field = table_field
    if value:
        filters.append((table, field, value))
4) Now transform the created list of filter definitions to a list of BinaryExpression objects usable by sqlalchemy:
binary_expressions = [getattr(table, attribute) == value for table, attribute, value in filters]
5) Finally apply the binary expressions to your query, make sure it's presented to the and_ conjunction in a consumable form:
query = session.query(Archive).join(Patient, Prescribers, Clinic, Compound)\
    .filter(and_(*binary_expressions)).all()
I'm not able to test that solution within your configuration, but a similar test using my environment was successful.
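Steps 2) to 4) can be exercised without a database at all; in this sketch Patient is a plain hypothetical stand-in for the mapped class (real SQLAlchemy models turn comparisons into BinaryExpression objects, which a plain class does not), just to show the tuple unpacking and the getattr call:

```python
class Patient:
    # hypothetical stand-in for the SQLAlchemy model class
    firstname = 'patient.firstname'
    lastname = 'patient.lastname'

fields = {
    (Patient, 'firstname'): 'John',
    (Patient, 'lastname'): '',  # empty user input: must be skipped
}

# step 2/3: keep only the fields the user filled in
filters = []
for table_field, value in fields.items():
    table, field = table_field
    if value:
        filters.append((table, field, value))

# step 4: with real models this comprehension builds BinaryExpressions;
# here it merely resolves the attribute names
resolved = [(getattr(table, attribute), value)
            for table, attribute, value in filters]
print(resolved)  # [('patient.firstname', 'John')]
```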
Once you get a query object bound to a table in SQLAlchemy - that is, what is returned by session.query(Archive) in the code above - calling certain methods on that object returns a new, modified query with that filter already applied.
So, my preferred way of combining several and filters is to start from the bare query, iterate over the filters to be used, and for each, add a new .filter call and reassign the query:
query = session.query(Archive).join(Patient, Prescribers, Clinic, Compound)
for field, value in filters.items():
    query = query.filter(field == value)
results = query.all()
Using and_ or or_ as you intend can also work - in the case of your example, the only thing missing was an *. Without an * preceding the generator expression, it is passed as the first (and sole) parameter to and_. With a prefixed *, all elements in the iterator are unpacked in place, each one passed as a separate argument:
...
.filter(and_(*(field == value for field, value in filters.items()))).all()
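The difference the * makes can be seen with a tiny stand-in for and_ (not the real SQLAlchemy function, just something that counts its arguments):

```python
def fake_and(*clauses):
    # stand-in for sqlalchemy.and_: reports how many clauses it received
    return len(clauses)

filters = {'a': 'a', 'b': 'c'}

# without *, the whole generator is one single argument
print(fake_and(k == v for k, v in filters.items()))     # 1

# with *, each comparison becomes its own argument
print(fake_and(*(k == v for k, v in filters.items())))  # 2
```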
I'm using an API that returns different dicts depending on the query. So because I can't be certain I'll have the desired keys, I'm using dict.get() to avoid raising a KeyError. But I am currently inserting these results into a database, and would like to avoid filling rows with None.
What would be the preferred way to deal with this?
Pass 'NULL' as the default value to dict.get(). If the key is not present in your dict object, it will return 'NULL' instead of None. NULL in (most) databases is the equivalent of None in Python. For example:
>>> my_dict = {}
# v Returns `NULL` if key not found in `my_dict`
>>> my_dict.get('key', 'NULL')
'NULL'
In case, you have column as NOT NULL, set them as empty string. For example:
>>> my_dict.get('key', '')
''
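Applied to row building, dict.get with a default can fill every missing column in one pass; api_result and columns here are hypothetical, just to illustrate the shape:

```python
# hypothetical API response missing some of the expected keys
api_result = {'name': 'Alice'}

# hypothetical column order for the INSERT
columns = ['name', 'email', 'phone']

# every missing key falls back to the chosen default (here: empty string)
row = tuple(api_result.get(col, '') for col in columns)
print(row)  # ('Alice', '', '')
```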
I have a CLOB data point that I must call .read() on to add it to a list; however, sometimes this column is null, so I need a check before calling .read().
I've isolated the portion that is relevant. If I just print data, the null fields print as None. `is not null` seems to be the wrong syntax, but I'm not sure what to use.
for currentrow in data:
    if currentrow[8] is not null:
        Product = currentrow[8].read()
    else:
        Product = currentrow[8]
    data = tuple([currentrow[0], currentrow[1], currentrow[2], currentrow[3],
                  currentrow[4], currentrow[5], currentrow[6], currentrow[7], Product])
    print data
From the docs:
The sole value of types.NoneType. None is frequently used to represent the absence of a value, as when default arguments are not passed to a function.
So you may try this:
for currentrow in data:
    if currentrow[8] is not None:  # changed null to None
        Product = currentrow[8].read()
    else:
        Product = currentrow[8]
    data = tuple([currentrow[0], currentrow[1], currentrow[2], currentrow[3],
                  currentrow[4], currentrow[5], currentrow[6], currentrow[7], Product])
    print data
Python uses the None singleton value as a null; NULLs from the database are translated to that object:
if currentrow[8] is not None:
You could collapse that whole block into just two lines:
for currentrow in data:
    product = currentrow[8] and currentrow[8].read()
    data = currentrow[:8] + (product,)
as Python's and operator short-circuits and None is false in a boolean context. Unless you set a row factory, cx_Oracle cursors produce tuples for each row, which you can slice to select just the first 8 elements, then append the 9th to create a new tuple from the two.
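The short-circuit pattern can be exercised without cx_Oracle by faking the LOB object; FakeLob here is purely illustrative, standing in for whatever .read()-able object the driver returns:

```python
class FakeLob:
    # hypothetical stand-in for a cx_Oracle LOB
    def read(self):
        return 'clob text'

rows = [
    (1, 'a', FakeLob()),  # row with a CLOB value
    (2, 'b', None),       # row where the column is NULL -> None
]

records = []
for currentrow in rows:
    # `and` short-circuits: None stays None, a LOB gets read
    product = currentrow[2] and currentrow[2].read()
    records.append(currentrow[:2] + (product,))

print(records)  # [(1, 'a', 'clob text'), (2, 'b', None)]
```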
I have code along these lines:
classinstance.col1 = queryresult.col1
classinstance.col2 = queryresult.col2
classinstance.col3 = queryresult.col3
classinstance.col4 = queryresult.col4
This adds variables to classinstance, assigning each the value of the queryresult column with the same name as the variable.
I am hoping to make my code a little more flexible, and not need to identify the columns by name. To this end, I was wondering if there was some way to loop over all the columns, rather than handle each one individually. Something like this (this is pseudocode rather than actual code, since I'm not sure what it should actually look like):
for each var in vars(queryresult):
    classinstance.(var.name) = var.value
Is this possible? What does it require? Is there some fundamental misunderstanding on my part?
I'm assuming there's only one row in the result for the following example (built with help from comments here). The key component here is zip(row.cursor_description, row), used to get column names from a pyodbc Row object.
# convert row to a dict, assuming the row variable contains the query result
rowdict = {key[0]: value for (key, value) in zip(row.cursor_description, row)}

# loop through keys (equivalent to column names) and set class instance values;
# assumes an existing instance of the class in variable classinstance
for column in rowdict.keys():
    setattr(classinstance, column, rowdict[column])
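Note that setting attributes by a name held in a string requires setattr (plain instances don't support item assignment like classinstance[column]); a self-contained sketch with a hypothetical Record class and column names:

```python
class Record:
    # hypothetical empty class standing in for the real one
    pass

# what zip(row.cursor_description, row) might yield, reduced to a dict
rowdict = {'col1': 1, 'col2': 'two'}

classinstance = Record()
for column in rowdict:
    # setattr creates the attribute dynamically from the column name
    setattr(classinstance, column, rowdict[column])

print(classinstance.col1, classinstance.col2)  # 1 two
```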
I'm trying to do something relatively simple: print out the column names and their respective column values, and possibly filter out some columns so they aren't shown.
This is what I attempted (after the initial connection, of course):
metadata = MetaData(engine)
users_table = Table('fusion_users', metadata, autoload=True)
s = users_table.select(users_table.c.user_name == username)
results = s.execute()

if results.rowcount != 1:
    return 'Sorry, user not found.'
else:
    for result in results:
        for x, y in result.items():
            print x, y
I looked at the SQLAlchemy API docs (v0.5) but was rather confused. My 'result' in 'results' is a RowProxy, yet I don't think it's returning the right object for the .items() invocation.
Let's say my table structure is so:
user_id  user_name  user_password  user_country
0        john       a9fu93f39uf    usa
I want to filter and specify the column names to show (I don't want to show user_password, obviously) - how can I accomplish this?
A SQLAlchemy RowProxy object has dict-like methods: .items() to get all name/value pairs, .keys() to get just the names (e.g. to display them as a header line), .values() for the corresponding values, and you can also use each key to index into the RowProxy object itself. So it being a "smart object" rather than a plain dict shouldn't inconvenience you unduly.
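That dict-like interface also makes it easy to hide columns such as user_password; a sketch with plain tuples standing in for what a RowProxy's .items() would return:

```python
# hypothetical name/value pairs as RowProxy.items() might return them
row_items = [('user_id', 0), ('user_name', 'john'),
             ('user_password', 'a9fu93f39uf'), ('user_country', 'usa')]

hidden = {'user_password'}  # columns to keep out of the output
visible = [(k, v) for k, v in row_items if k not in hidden]
for k, v in visible:
    print(k, v)
```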
You can use results directly as an iterator.
results = s.execute()
for row in results:
    print row
Selecting specific columns is done the following way:
from sqlalchemy.sql import select

s = select([users_table.c.user_name, users_table.c.user_country],
           users_table.c.user_name == username)
for user_name, user_country in s.execute():
    print user_name, user_country
To print the column names in addition to the values, the way you have done it in your question should be best, because a RowProxy is really nothing more than an ordered dictionary.
IMO the API documentation for SQLAlchemy is not really helpful for learning how to use it. I would suggest you read the SQL Expression Language Tutorial. It contains the most vital information about basic querying with SQLAlchemy.