how to unpack a tuple that MAY contain a list - python

I'm having an issue when trying to pass a sqlite query to another function.
The issue is that the sqlite query MAY contain a list, and therefore I cannot use *args, as it unpacks the tuple but then ignores the list. Example query I'm attempting to pass to the function:
'SELECT postname FROM history WHERE postname = ? COLLATE NOCASE', [u'Test']
So in this case I could use args as opposed to *args in the destination function; however, I may have a sqlite query that doesn't contain a list, and therefore I can't always do this, e.g.
'SELECT * FROM history'
So I guess my question, in a nutshell, is: how can I successfully pass a sqlite query to another function, whether it contains a list or not, using args?

Can you just try,except it?
try:
    func(*args)
except TypeError:
    func(args)
Of course, this will catch TypeErrors inside your function as well. As such, you may want to create another function which actually deals with the unpacking and makes sure to give you an unpackable object in return. This also doesn't work for strings since they'll unpack too (see comments).
Here's a function which will make sure an object can be unpacked.
def unpackable(obj):
    if hasattr(obj, '__iter__'):
        return obj
    else:
        return (obj,)

func(*unpackable(args))
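A quick usage sketch against the question's two query shapes (run_query here is a hypothetical destination function, not from the original post). Note that on Python 3 strings also define __iter__, so you would likely add a not isinstance(obj, str) check to the helper above:
def run_query(sql, params=()):
    # hypothetical destination function; replace with your real one
    print(sql, params)

run_query(*unpackable(('SELECT postname FROM history WHERE postname = ? COLLATE NOCASE', [u'Test'])))
run_query(*unpackable('SELECT * FROM history'))  # on Python 2 the bare string is wrapped in a 1-tuple; on Python 3 add the str check first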

I would argue the best answer here is to try and ensure you are always putting in an iterable, rather than trying to handle the odd case of having a single item.
Where you have ('SELECT postname FROM history WHERE postname = ? COLLATE NOCASE', [u'Test']) in one place, it makes more sense to pass in a tuple of length one - ('SELECT * FROM history',) - in the other place, as opposed to the bare string.
You haven't said where the strings are coming from, so it's possible you simply can't change the way the data is, but if you can, the tuple is the much better option to remove the edge case from your code.
If you truly can't do that, then what you want is to unpack any non-string iterable; checking for that can be done as shown in this question.
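For instance, a minimal sketch of that convention, where the caller always supplies a (sql, params) tuple so the receiving function never has to guess (execute_query and the in-memory schema below are illustrative, not from the original post):
import sqlite3

conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute('CREATE TABLE history (postname TEXT)')  # illustrative schema

def execute_query(query):
    # query is always a (sql, params) tuple, even when there are no parameters
    sql, params = query
    cursor.execute(sql, params)
    return cursor.fetchall()

# Both call sites now have the same shape:
execute_query(('SELECT postname FROM history WHERE postname = ? COLLATE NOCASE', [u'Test']))
execute_query(('SELECT * FROM history', ()))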

Related

Postgres/psycopg2 "execute_values": Which argument was not converted during string formatting?

I am using execute_values to insert a list of lists of values into a postgres database using psycopg2. Sometimes I get "not all arguments converted during string formatting", indicating that one of the values in one of the lists is not the expected data type (and also not NoneType). When it is a long list, it can be a pain to figure out which value in which list was causing the problem.
Is there a way to get postgres/psycopg2 to tell me the specific 'argument which could not be converted'?
If not, what is the most efficient way to look through the list of lists and find any incongruent data types per place in the list, excluding NoneTypes (which obviously are not equal to a value but also are not the cause of the error)?
Please note that I am not asking for help with the specific set of values I am executing, but trying to find a general method to more quickly inspect the problem query so I can debug it.
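One possible general-purpose check, sketched here as an illustration rather than an answer from the original thread: remember the first non-None type seen in each column and report any later value whose type differs:
def find_type_mismatches(rows):
    # Yields (row_index, col_index, value) wherever a value's type differs
    # from the first non-None type seen in that column.
    expected = {}  # column index -> first non-None type seen
    for i, row in enumerate(rows):
        for j, value in enumerate(row):
            if value is None:
                continue
            if j not in expected:
                expected[j] = type(value)
            elif not isinstance(value, expected[j]):
                yield (i, j, value)

# Example: the string '3' in the second row is reported as (1, 0, '3')
for mismatch in find_type_mismatches([[1, 'a'], ['3', 'b']]):
    print(mismatch)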

generate array element if not exists in python

What is the right way to check if an element does not exist in Python?
The element is expected to be present most of the time, and if it is empty it is not an "error" and needs to be processed normally:
def checkElement(self, x, y):
    if not (self.map[x][y]):
        self.map[x][y] = 'element {}:{}'.format(x, y)
    return self.map[x][y]
tldr
Your own code together with triplee's answer covers the common cases. I want to point out ambiguity in your question. How you check for "empty" very much depends on what your definition of empty is.
This is a tricky question because the semantics of "empty" are not exactly clear. Assuming that the data structure is a nested dict as could be inferred from your example, then it could be the case that empty means the inner/outer key is not contained in the dictionary. In that case you'd want to go with what triplee suggests. Similarly if the container is a nested list, but instead of KeyError you'd catch IndexError.
Alternatively, it could also be the case that "empty" means both the inner and outer keys are in the dictionary (or list) but the value at that position is some signifier for "empty". In this case the most natural "empty" in Python would be None, so you'd want to check if the value under those keys is None. None evaluates to False in boolean expressions so your code would work just fine.
However, depending on how your application defines empty these are not the only alternatives. If you're loading json data and the producer of said json has been prudent, empty values are null in json and map to None when loaded into Python. More often than not the producer of the json has not been prudent and empty values are actually just empty strings {firstName:''}, this happens more often than one would like. It turns out that if not self.map[x][y] works in this case as well because an empty string also evaluates to False, same applies to an empty list, an empty set and an empty dict.
We can generalise the meaning of "empty" further and say it is any value that is not recognised as actionable or valid content by the application - but you can already see how this is completely dependent on what the application is. Would {firstName: ' '}, a string that only contains whitespace, be empty? Is a partially filled-in email address empty?
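If the definition of "empty" really is application-specific, one option (a sketch, not part of the original answer) is to centralise it in a small predicate and use that instead of a bare truthiness check:
def is_empty(value):
    # Application-defined notion of "empty": None, or a string that is blank
    # once whitespace is stripped - adjust to your own rules.
    if value is None:
        return True
    if isinstance(value, str) and value.strip() == '':
        return True
    return False

# In the question's method, the truthiness test would then become:
#     if is_empty(self.map[x][y]):
#         self.map[x][y] = 'element {}:{}'.format(x, y)
print(is_empty(' '))    # True
print(is_empty('Bob'))  # False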
The best way to check whether an entry in any container (lists, dicts, etc.) exists or not is to wrap the access in a try...except block. Your checkElement function could be rewritten thus:
def checkElement(self, x, y):
    try:
        self.map[x][y]
    except (KeyError, IndexError):
        # handle the case where self.map[x][y] isn't set
        self.map[x][y] = 'element {}:{}'.format(x, y)
    return self.map[x][y]
The answer to what you seem to be asking is simply
try:
    result = self.map[x][y]
except KeyError:
    result = 'element {}:{}'.format(x, y)
    self.map[x][y] = result
return result
Of course, if self.map[x] might also not exist, you have to apply something similar to that; or perhaps redefine it to be a defaultdict() instead, or perhaps something else entirely, depending on what sort of structure this is.
KeyError makes sense for a dict; if self.map[x] is a list, probably trap IndexError instead.
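As a rough sketch of the defaultdict route mentioned above (assuming self.map is a dict of dicts; the Grid class here is illustrative, not from the answer):
from collections import defaultdict

class Grid:
    def __init__(self):
        # Missing outer keys automatically get an empty inner dict
        self.map = defaultdict(dict)

    def checkElement(self, x, y):
        if y not in self.map[x]:
            self.map[x][y] = 'element {}:{}'.format(x, y)
        return self.map[x][y]

grid = Grid()
print(grid.checkElement(2, 3))  # 'element 2:3'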

Python MySQL DB executemany does not work on one value

I am trying to do a batch insert with a single value per row, using the executemany function, but it will not work; it returns TypeError: not all arguments converted during string formatting. However, when I add an extra value it does work.
So...
It will return an error here:
entries_list=[("/helloworld"),
("/dfadfadsfdas")]
cursor.executemany('''INSERT INTO fb_pages(fb_page)
VALUES (%s)''', entries_list)
But not here:
entries_list=[("/helloworld", 'test'),
("/dfadfadsfdas", 'rers')]
cursor.executemany('''INSERT INTO fb_pages(fb_page, page_name)
VALUES (%s, %s)''', entries_list)
Writing ("/helloworld") is the same as writing "/helloworld". To create a tuple you need ("/helloworld", ).
What you're doing is actually running this:
cursor.executemany('''INSERT INTO fb_pages(fb_page)
VALUES (%s)''', ["/helloworld","/dfadfadsfdas"])
And now the error you're receiving makes perfect sense - you're supplying two arguments with only one placeholder. Defining entries_list as follows would solve the problem:
entries_list=[("/helloworld",),
("/dfadfadsfdas",)]
In addition to making a tuple with a trailing comma, as pointed out by Karem:
entries_list=[
("/helloworld",),
("/dfadfadsfdas",)
]
You could also just pass in lists, which works just as well when you are using a parameterized query.
entries_list=[
["/helloworld"],
["/dfadfadsfdas"]
]
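Either way, the corrected call has one parameter per placeholder in each row (a sketch assuming cursor and conn come from an existing MySQLdb/pymysql connection and that the fb_pages table exists):
entries_list = [("/helloworld",),
                ("/dfadfadsfdas",)]

cursor.executemany('''INSERT INTO fb_pages(fb_page)
                      VALUES (%s)''', entries_list)
conn.commit()  # persist the batch insert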

Copy cursor object in Python

I am working on a Trac-Plugin...
To retrieve my data I create a cursor object and get the result table like this:
db = self.env.get_db_cnx()
cursor = db.cursor()
cursor.execute("SELECT...")
Now the result is being used in 3 different functions. My problem is that the cursor is exhausted after looping through it the first time (as described here: http://packages.python.org/psycopg2/cursor.html).
I then tried to copy the cursor object, but this failed too. The copy(cursor) function seems to have problems with a big dataset, and deepcopy(cursor) fails anyway (according to this bug: http://bugs.python.org/issue1515).
How can I solve this issue?
Storing the values from any finite iterable is simple:
results = list(cursor)
Iterate over the iterable and store the results in a list. This list can be iterated over as many times as necessary.
You don't need a copy of the cursor, just a copy of the results of the query.
For this specific case, you should do what 9000 suggests in his comment -- use the cursor's built-in functionality to get the results as a list, which should be as fast as or faster than manually calling list.
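A minimal sketch of that approach (fetchall() is the standard DB-API call; the three consuming functions are placeholders for the ones mentioned in the question):
db = self.env.get_db_cnx()
cursor = db.cursor()
cursor.execute("SELECT...")

rows = cursor.fetchall()  # materialise the results once

# The list can now be passed around and iterated as many times as needed
first_function(rows)
second_function(rows)
third_function(rows)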
If you want to avoid looping through the data an extra time you could try wrapping it in a generator:
def lazy_execute(sql, cursor=cursor):
    results = []
    cursor.execute(sql)
    def fetch():
        if results:
            # Results are already cached from a previous pass
            for r in results:
                yield r
        else:
            # First pass: pull rows from the cursor and cache them as we go
            for r in cursor:
                results.append(r)
                yield r
    return fetch
This essentially creates a list as you need it, but lets you call the same function everywhere, safely. You would then use this like so:
results = lazy_execute(my_sql)
for r in results():
    "do something with r"
This is almost certainly an over-engineered premature optimization, though it does have the advantage that the same name means the same thing in every case, as opposed to generating a new list and then having the same data under two different names.
If I were going to argue for using this, I would use that same-names argument - unless the data set was pretty huge, but if it's huge enough to matter then there's a good chance you don't want to store it all in memory anyway.
Also it's completely untested.

Reducing menu.add_command() clutter/repeat lines

I would like to do the following (just an example, the real code has more menu's and more add_command's):
editmenu.add_command(label="Cut",state="disabled")
editmenu.add_command(label="Copy",state="disabled")
editmenu.add_command(label="Paste",state="disabled")
editmenu.add_command(label="Delete",state="disabled")
But on fewer lines - in fact, just one line if possible. I have menus that are taking up a considerable amount of space in my program and would like to reduce the clutter. Plus, the programmer in me sees a bunch of similar lines and feels there must be a way to reduce them.
I tried the following code to no avail; I obviously got a NameError because label and state aren't defined...
for labeldic in [{label:"Cut"},{label:"Copy"},{label:"Paste"},{label:"Delete"}]: editmenu.add_command(labeldic+{state:"disabled"})
Thanks in advance for any suggestions!
Here's a translation of what you wanted to do:
for labeldic in [{"label":"Cut"},{"label":"Copy"},{"label":"Paste"},{"label":"Delete"}]:
    labeldic.update({"state": "disabled"})
    editmenu.add_command(**labeldic)
There were three problems I fixed.
The first is that dictionary keys need to be quoted if they are strings. If you want a dict mapping the string 'label' to the string 'cut', you can do it using the dict literal {'label': 'cut'}, or else possibly with the dict() constructor, which expands keyword arguments that way: dict(label='cut'). As you discovered, {label: 'cut'} wouldn't work, because it tries to use a variable's value for the key, but there is no such variable.
The second is that you can't merge dictionaries using the + operator. It doesn't work, unfortunately. There is, however, an update method that mutates the dict it's called on. Since it doesn't return a merged dict, it can't be used inline the way you used +.
The third problem is that passing a dict is not the same as passing in keyword arguments. foo(bar='baz') is not the same as foo({'bar':'baz'}), but it is the same as foo(**{'bar':'baz'}). The ** syntax in function calling "unpacks" a dictionary into keyword arguments.
Regardless it's sort of weird style. Here's what I would do instead:
for label in ['Cut', 'Copy', 'Paste', 'Delete']:
    editmenu.add_command(label=label, state='disabled')
