Getting one or zero elements of QuerySet in Django? - python

In Django 3.1, suppose I have some model M, and I have a QuerySet over M that I expect to have either one or zero elements. How do I branch on whether it has one or zero elements (throwing an exception if it has two or more), and in the branch with one element, how do I get the one M object:
try:
    my_query_set = M.objects.filter(some_filter_expr)
    if ???:
        m = ???  # the one M object
    else:
        on_zero_objects()
except ???:
    more_than_one_object()

Use .first() (or .last()):
one_or_none = M.objects.filter(some_filter_expr).first()
Thus, the variable one_or_none will hold either an instance of model M or None.
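A sketch of how this might be used with the names from the question; note that .first() does not raise anything if the queryset contains more than one row, it simply returns the first one, so it cannot by itself detect the two-or-more case:
my_query_set = M.objects.filter(some_filter_expr)
m = my_query_set.first()
if m is not None:
    ...  # work with the single object m
else:
    on_zero_objects()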
You can handle the more-than-one-element condition and the other cases separately in the following way:
my_query_set = M.objects.filter(some_filter_expr)
try:
    my_query_set[1]
    handle_more_than_one_element()
except IndexError:
    try:
        my_query_set[0]
        handle_only_one_element()
    except IndexError:
        handle_no_element()

You can use .exists() to check whether there is any data in my_query_set; if so, use .get() to fetch the one object. If there are multiple objects, .get() will raise a MultipleObjectsReturned error, which you can handle in the except block with more_than_one_object().
from django.core.exceptions import MultipleObjectsReturned

my_query_set = M.objects.filter(some_filter_expr)
try:
    if my_query_set.exists():
        m = my_query_set.get()
    else:
        on_zero_objects()
except MultipleObjectsReturned:
    more_than_one_object()

You can use .get() to fetch a single element, and handle the exceptions for the DoesNotExist and MultipleObjectsReturned cases.
from django.core.exceptions import MultipleObjectsReturned
...
try:
    model_object = M.objects.get(some_filter_expr)
except M.DoesNotExist:
    on_zero_objects()
except MultipleObjectsReturned:
    more_than_one_object()
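As a side note, Django also attaches the same exception to the model class itself, so the import can be skipped; a minimal variant using the names from the question:
try:
    model_object = M.objects.get(some_filter_expr)
except M.DoesNotExist:
    on_zero_objects()
except M.MultipleObjectsReturned:
    more_than_one_object()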

Related

How to differentiate between cases of ValueError

Since so many Python operations raise ValueError, how can we differentiate between them?
Example: I expect an iterable to have a single element, and I want to get it
a, = [1, 2]: ValueError: too many values to unpack
a, = []: ValueError: not enough values to unpack
How can I differentiate between those two cases? e.g.
try:
    a, = lst
except ValueError as e:
    if e.too_many_values:
        do_this()
    else:
        do_that()
I realise that in this particular case I could find a workaround using length/indexing, but the point is that similar cases come up often, and I want to know if there's a general approach. I also realise I could check the error message with something like if 'not enough' in message, but that seems a bit crude.
try:
    raise ValueError('my error')
except ValueError as e:
    # use str(), not repr(), see
    # https://stackoverflow.com/a/45532289/7919597
    x = getattr(e, 'message', str(e))
    if 'my error' in x:
        print('got my error')
(see also How to get exception message in Python properly)
But this might not be a clean solution after all.
The best thing would be to narrow the scope of your try block so that only one of the errors is possible. Or don't depend on exceptions to detect those error cases.
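For example, with the a, = lst case from the question, one way to avoid relying on the ValueError at all (a minimal sketch, assuming lst is an ordinary list) is to test the length up front:
if len(lst) > 1:
    do_this()   # too many values
elif not lst:
    do_that()   # no values
else:
    a, = lst    # exactly one value, cannot raise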
This isn't really an answer, because it only applies if you have some control over how the exceptions are raised. Since exceptions are just objects, you can just tack on other objects / flags to them. Not saying that this is a great thing to do or a great way of doing it:
from enum import Enum

class ValueErrorType(Enum):
    HelloType = 0
    FooType = 1

def some_func(string):
    if "Hello" in string:
        error = ValueError("\"Hello\" is not allowed in my strings!!!!")
        error.error_type = ValueErrorType.HelloType
        raise error
    elif "Foo" in string:
        error = ValueError("\"Foo\" is also not allowed!!!!!!")
        error.error_type = ValueErrorType.FooType
        raise error

try:
    some_func("Hello World!")
except ValueError as error:
    error_type_map = {
        ValueErrorType.HelloType: lambda: print("It was a HelloType"),
        ValueErrorType.FooType: lambda: print("It was a FooType"),
    }
    error_type_map[error.error_type]()
I'd be curious to know if there is some way you can achieve this with exceptions where you have no control over how they're raised.

Try-clause containing multiple statements

Let's say I have the following function/method, which calculates a bunch of stuff and then sets a lot of variables/attributes: calc_and_set(obj).
Now what I would like to do is to call the function several times with different objects, and if one or more fails then nothing should be set at all.
I thought I could do it like this:
try:
    calc_and_set(obj1)
    calc_and_set(obj2)
    calc_and_set(obj3)
except:
    pass
But this obviously doesn't work. If for instance the error happens in the third call to the function, then the first and second call will already have set the variables.
Can anyone think of a "clean" way of doing what I want? The only solutions I can think of are rather ugly workarounds.
I see a few options here.
A. Have a "reverse function", which is robust. So if
def calc_and_set(obj):
    obj.A = 'a'

def unset(obj):
    if hasattr(obj, 'A'):
        del obj.A
and
try:
    calc_and_set(obj1)
    calc_and_set(obj2)
except:
    unset(obj1)
    unset(obj2)
Notice, that in this case, unset doesn't care if calc_and_set completed successfully or not.
B. Separate calc_and_set into try_calc_and_set, which only tests whether it would work, and calc_and_set, which won't throw errors and is called only if none of the try_calc_and_set calls failed.
try:
    try_calc_and_set(obj1)
    try_calc_and_set(obj2)
    calc_and_set(obj1)
    calc_and_set(obj2)
except:
    pass
C. (my favorite) - have calc_and_set return a new variable, and not operate in place. If successful, replace the original reference with the new one. This could easily be done by adding copy as the first statement in calc_and_set, and then returning the variable.
try:
    obj1_t = calc_and_set(obj1)
    obj2_t = calc_and_set(obj2)
    obj1 = obj1_t
    obj2 = obj2_t
except:
    pass
The mirror of that one is of course to save your objects before:
from copy import deepcopy

obj1_c = deepcopy(obj1)
obj2_c = deepcopy(obj2)
try:
    calc_and_set(obj1)
    calc_and_set(obj2)
except:
    obj1 = obj1_c
    obj2 = obj2_c
And as a general comment (if this is just a sample code, forgive me) - don't have excepts without specifying exception type.
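For instance, a small sketch of catching a named exception instead of using a bare except (AttributeError here is only an illustrative guess at what calc_and_set might raise):
try:
    calc_and_set(obj1)
    calc_and_set(obj2)
except AttributeError:
    unset(obj1)
    unset(obj2)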
You can also try caching the actions you want to take and then doing them all in one go if everything passes:
from functools import partial

def do_something(obj):
    # magic here
    pass

def validate(obj):
    if obj.is_what_you_want():
        return partial(do_something, obj)
    else:
        raise ValueError("unable to process %s" % obj)

instructions = [validate(item) for item in your_list_of_objects]
for each_partial in instructions:
    each_partial()
The operations will only get fired if the list comprehension completes without any exceptions. You could wrap that for exception safety:
try:
    instructions = [validate(item) for item in your_list_of_objects]
    for each_partial in instructions:
        each_partial()
    print("succeeded")
except ValueError:
    print("failed")
If there is no "built-in" way of doing this, I think after all the "cleanest" solution is to divide the function into two parts. Something like this:
try:
    res1 = calc(obj1)
    res2 = calc(obj2)
    res3 = calc(obj3)
except:
    pass
else:
    set(obj1, res1)
    set(obj2, res2)
    set(obj3, res3)

Python's way to store a default value if expression failed

What is the python analog of perl's // operator?
In perl, one can do something like :
$pos = $some_list[0] // 1
How do you accomplish the same in python?
In Python there is no undefined; instead, you'd get an exception if you tried to access a non-existent index in a list. As such, you can use exception handling instead:
try:
    pos = some_list[0]
except IndexError:
    pos = 1
For the first element of a sequence, you could explicitly test the sequence as a boolean (a python container is 'falsey' when empty):
pos = some_list[0] if some_list else 1
How about using exceptions?
try:
    pos = some_list[0]
except (NameError, IndexError):
    pos = 1
An alternative to the try/except answers above, for dictionaries, is the default argument on .get():
param_value = my_dictionary.get(param_key, default_value)
The best practice for this in Python is to handle exceptions explicitly with a try/except clause. One example is presented here to help you visualize it:
my_list = []
try:
    item = my_list[1]
except IndexError:
    item = 1
Here the code executes and an exception is raised because the index "1" is out of bounds. We then go on to handle that exception and set item=1 allowing the program to continue running. The reason for this explicit handling of exceptions is so we as programmers see exactly what is causing our problems. Take this for example:
my_list = [0]
try:
    item = 1/my_list[0]
except IndexError:
    item = 1
This will raise a zero division error (halting execution) and let us know that we need to handle some other exception explicitly beyond the original exception we expected, the IndexError. We might then do something like this to deal with that situation:
my_list = [0]
try:
    item = 1/my_list[0]
except IndexError:
    item = 1
except ZeroDivisionError:
    item = 99999
try-except blocks also have a few other notable features we can exploit:
try:
    # code which might raise an error
    pass
except IndexError as err:
    # handling an index error and storing the traceback in err
    pass
except ZeroDivisionError:
    # handling some other error
    pass
else:
    # code we would like to execute if the try block succeeds without any errors
    pass
finally:
    # code we will execute regardless of what occurs in the entire
    # try/except/else block listed above (i.e. we can ensure a file is closed)
    pass

KeyError: 'pop from an empty set' python

How do you get around this error while using .pop()? I get that an error is raised when it tries to return an element but there isn't one, but how do you get around it so the program keeps running?
def remove_element(self, integer):
    self.integer = integer
    self.members.pop()
Just check if self.members is not empty:
if self.members:
    self.members.pop()
or, catch KeyError via try/except:
try:
    self.members.pop()
except KeyError:
    # do smth
    pass
You can use try/except to catch the KeyError raised by an_empty_set.pop(), or check the set first to make sure it's not empty:
if s:
    value = s.pop()
else:
    # whatever you want to do if the set is empty
    pass
Single line solution
res = self.members.pop() if self.members else None

Can I catch errors in a list comprehension to be sure to loop over all the list items?

I've got a list comprehension which filters a list:
l = [obj for obj in objlist if not obj.mycond()]
but the object method mycond() can raise an Exception I must intercept. I need to collect all the errors at the end of the loop to show which objects created problems, and at the same time I want to be sure to loop over all the list elements.
My solution was:
errors = []
copy = objlist[:]
for obj in copy:
    try:
        if obj.mycond():
            # avoiding touching the list in the loop directly
            objlist.remove(obj)
    except MyException as err:
        errors.append(err)
if errors:
    # do something
    pass
return objlist
In this post (How to delete list elements while cycling the list itself without duplicate it) I asked whether there is a better way to iterate while avoiding duplicating the list.
The community's answer was to avoid in-place list modification and use a list comprehension, which is applicable only if I ignore the Exception problem.
Is there an alternative solution in your point of view? Can I manage exceptions in that manner using list comprehensions? In this kind of situation, and with big lists (what should I consider big?), must I find another alternative?
I would use a little auxiliary function:
def f(obj, errs):
    try:
        return not obj.mycond()
    except MyException as err:
        errs.append((obj, err))

errs = []
l = [obj for obj in objlist if f(obj, errs)]
if errs:
    emiterrorinfo(errs)
Note that this way you have in errs all the errant objects and the specific exception corresponding to each of them, so the diagnosis can be precise and complete; as well as the l you require, and your objlist still intact for possible further use. No list copy was needed, nor any changes to obj's class, and the code's overall structure is very simple.
A couple of comments:
First of all, the list comprehension syntax [expression for var in iterable] DOES create a copy. If you do not want to create a copy of the list, then use a generator expression (expression for var in iterable).
How do generators work? Essentially by calling next(obj) on the object repeatedly until a StopIteration exception is raised.
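A minimal sketch of that protocol (the squaring generator here is just an invented example):
gen = (x * x for x in [1, 2, 3])
print(next(gen))  # 1
print(next(gen))  # 4
print(next(gen))  # 9
next(gen)         # raises StopIteration; a for loop handles this for you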
Based on your original code, it seems that you are still needing the filtered list as output.
So you can emulate that with little performance loss:
l = []
for obj in objlist:
    try:
        if not obj.mycond():
            l.append(obj)
    except Exception:
        pass
However, you could re-engineer that all with a generator function:
def FilterObj(objlist):
    for obj in objlist:
        try:
            if not obj.mycond():
                yield obj
        except Exception:
            pass
In that way, you can safely iterate over it without caching a list in the meantime:
for obj in FilterObj(objlist):
    obj.whatever()
You could define a method of obj that calls obj.mycond() but also catches the exception:
class obj:
    def __init__(self):
        self.errors = []

    def mycond(self):
        # whatever you have here
        ...

    def errorcatcher(self):
        try:
            return self.mycond()
        except MyException as err:
            self.errors.append(err)
            return False  # or True, depending upon what you want

l = [obj for obj in objlist if not obj.errorcatcher()]
errors = [obj.errors for obj in objlist if obj.errors]
if errors:
    # do something
    pass
Instead of copying the list and removing elements, start with a blank list and add members as necessary. Something like this:
errors = []
newlist = []
for obj in objlist:
    try:
        if not obj.mycond():
            newlist.append(obj)
    except MyException as err:
        errors.append(err)
if errors:
    # do something
    pass
return newlist
The syntax isn't as pretty, but it'll do more or less the same thing that the list comprehension does without any unnecessary removals.
Adding or removing elements anywhere other than the end of a list will be slow, because when you remove something the list has to shift every item that comes after it down by one position, and when you insert something it has to shift them all up by one.
