Try-clause containing multiple statements - Python

Let's say I have the following function/method, which calculates a bunch of stuff and then sets a lot of variables/attributes: calc_and_set(obj).
Now what I would like to do is to call the function several times with different objects, and if one or more fails then nothing should be set at all.
I thought I could do it like this:
try:
    calc_and_set(obj1)
    calc_and_set(obj2)
    calc_and_set(obj3)
except:
    pass
But this obviously doesn't work. If, for instance, the error happens in the third call to the function, then the first and second calls will already have set their variables.
Can anyone think of a "clean" way of doing what I want? The only solutions I can think of are rather ugly workarounds.

I see a few options here.
A. Have a "reverse function", which is robust. So if
def calc_and_set(obj):
    obj.A = 'a'

def unset(obj):
    if hasattr(obj, 'A'):
        del obj.A
and
try:
    calc_and_set(obj1)
    calc_and_set(obj2)
except:
    unset(obj1)
    unset(obj2)
Notice that in this case unset doesn't care whether calc_and_set completed successfully or not.
B. Separate calc_and_set into try_calc_and_set, which tests whether the call would work, and the actual setting step, which won't throw errors and is performed only if none of the try_calc_and_set calls failed.
try:
    try_calc_and_set(obj1)
    try_calc_and_set(obj2)
    calc_and_set(obj1)
    calc_and_set(obj2)
except:
    pass
C. (my favorite) - have calc_and_set return a new object instead of operating in place. If everything succeeds, replace the original references with the new ones. This could easily be done by making a copy the first statement in calc_and_set and returning that copy at the end (a sketch of such a calc_and_set follows below).
try:
    obj1_t = calc_and_set(obj1)
    obj2_t = calc_and_set(obj2)
    obj1 = obj1_t
    obj2 = obj2_t
except:
    pass
The mirror of that one is of course to save copies of your objects beforehand:
from copy import deepcopy

obj1_c = deepcopy(obj1)
obj2_c = deepcopy(obj2)
try:
    calc_and_set(obj1)
    calc_and_set(obj2)
except:
    obj1 = obj1_c
    obj2 = obj2_c
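Going back to option C, a minimal sketch of a copy-returning calc_and_set might look like this (expensive_calculation and the attribute names are invented purely for illustration):

import copy

def calc_and_set(obj):
    # Work on a copy so the original stays untouched if anything below raises.
    new_obj = copy.deepcopy(obj)
    new_obj.result = expensive_calculation(new_obj)  # hypothetical helper
    new_obj.ready = True
    return new_obj

Only the assignments back to obj1 and obj2 in the caller publish the results, so a failure in any call leaves the original objects unchanged.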
And as a general comment (if this is just sample code, forgive me) - don't write bare except clauses; always specify the exception type you expect.
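For instance (ValueError is just a stand-in here for whatever calc_and_set can actually raise, and handle_failure is a hypothetical recovery path):

try:
    calc_and_set(obj1)
except ValueError:        # catch only the failure you expect
    handle_failure(obj1)  # hypothetical recovery path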

You can also cache the actions you want to take and then perform them all in one go if every object passes:
from functools import partial

def do_something(obj):
    # magic here
    ...

def validate(obj):
    if obj.is_what_you_want():
        return partial(do_something, obj)
    else:
        raise ValueError("unable to process %s" % obj)

instructions = [validate(item) for item in your_list_of_objects]
for each_partial in instructions:
    each_partial()
The operations will only get fired if the list comprehension completes without any exceptions. You could wrap that for exception safety:
try:
    instructions = [validate(item) for item in your_list_of_objects]
    for each_partial in instructions:
        each_partial()
    print "succeeded"
except ValueError:
    print "failed"

If there is no "built-in" way of doing this, I think after all the "cleanest" solution is to divide the function into two parts. Something like this:
try:
    res1 = calc(obj1)
    res2 = calc(obj2)
    res3 = calc(obj3)
except:
    pass
else:
    set(obj1, res1)
    set(obj2, res2)
    set(obj3, res3)
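A rough sketch of what the two halves could look like, assuming calc does all the work that can fail and returns plain values while set only assigns them (the attribute names and obj.values are invented for illustration):

def calc(obj):
    # Everything that can raise happens here; nothing is assigned yet.
    total = sum(obj.values)
    mean = total / len(obj.values)
    return {'total': total, 'mean': mean}

def set(obj, res):  # note: this name shadows the builtin set()
    # Pure assignment; no exceptions are expected here.
    obj.total = res['total']
    obj.mean = res['mean']

In real code it is worth giving the second function a name that doesn't shadow the builtin set().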

How to try several methods with flat style?

If I want to try many ways to avoid some error, I may write:
try:
    try:
        trial_1()
    except some_error:
        try:
            trial_2()
        except some_error:
            try:
                trial_3()
                ...
    print "finally pass"
except some_error:
    print "still fail"
But there are too many trials and so too many levels of nesting; how can I write this in a flat style?
If it's the same exception each time, you could do
for task in (trial_1, trial_2, trial_3, ...):
    try:
        task()
        break
    except some_error:
        continue
If knowing whether it succeeded is important, the clearest way to add that is probably
successful = False
for task in (trial_1, trial_2, trial_3, ...):
    try:
        task()
        successful = True
        break
    except some_error:
        continue
if successful:
    ...
else:
    ...
You could do this:
def trial1(): 42 / 0
def trial2(): [][42]
def trial3(): 'yoohoo!'
def trial4(): 'here be dragons'

for t in [trial1, trial2, trial3, trial4]:
    print('Trying {}.'.format(t.__name__))
    try:
        t()
        print('Success')
        break
    except Exception as ex:
        print('Failed due to {}'.format(ex))
else:
    print('Epic fail!')
Output is:
Trying trial1.
Failed due to division by zero
Trying trial2.
Failed due to list index out of range
Trying trial3.
Success
Assuming that a) each trial is different, and b) they all throw the same error (since that's what your code illustrates), and c) you know the names of all the trial functions:
for t in (trial_1, trial_2, trial_3, trial_4):
    try:
        t()
        # if we succeed, the trial is over
        break
    except some_error:
        continue
This will loop over every trial, continuing in the case of the expected error, stopping if a trial succeeds, and letting any other exception propagate. I think that's the same behavior as your example code.
If you need to do this more than once, you can wrap up the answers Hyperboreus and others gave as a function:
def first_success(*callables):
    for f in callables:
        try:
            return f()
        except Exception as x:
            print('{} failed due to {}'.format(f.__name__, x))
    raise RuntimeError("still fail")
Then, all you need is:
first_success(trial_1, trial_2, trial_3, trial_4)
If you want to logging.info the exceptions instead of print them, or ignore them entirely, or keep track of them and attach the list of exceptions to the return value and/or exception as an attribute, etc., it should be pretty obvious how to modify this.
If you want to pass arguments to the functions, that's not quite as obvious, but still pretty easy. You just need to decide what the interface should be. Maybe take a sequence of callables as the first argument, and then all the callables' arguments after that:
first_success((trial_1, trial_2, trial_3, trial_4), 42, spam='spam')
That's easy:
def first_success(callables, *args, **kwargs):
    for f in callables:
        try:
            return f(*args, **kwargs)
        except Exception as x:
            print('{} failed due to {}'.format(f.__name__, x))
    else:
        raise RuntimeError("still fail")
If you don't need exactly this pattern all the time, but you need a ton of not-quite-the-same things, you may want to instead write a function that just wraps any function in a try. I've actually built this half a dozen times, and then realized there was a more pythonic way to write my code that made this function unnecessary, so the only use I've ever gotten out of it was in arguments with Haskell snobs, but you may find a better use for it:
def tried(callable, *args, **kwargs):
    try:
        return (callable(*args, **kwargs), None)
    except Exception as x:
        return (None, x)
Now you can use higher-order functions like map, any, etc. For example, map(tried, (trial_1, trial_2, trial_3, trial_4)) gives you a sequence of four (result, exception) pairs without letting any exception escape, and you can (f(x[0]) if x[1] is None else x for x in tried_sequence) to work through a Haskell monad tutorial in Python, which is a good way to make both Python programmers and Haskell programmers hate you.
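As a minimal usage sketch (trial_1 through trial_3 stand in for whatever callables you have), you can run every trial without letting exceptions escape and then inspect the resulting pairs:

results = [tried(t) for t in (trial_1, trial_2, trial_3)]

# First successful value, or None if every trial failed.
first_ok = next((value for value, error in results if error is None), None)

# The exceptions are still available for logging or diagnostics.
failures = [error for value, error in results if error is not None]

Note that, unlike first_success, this runs every trial even after one has succeeded.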

Handle generator exceptions in its consumer

This is a follow-up to Handle an exception thrown in a generator and discusses a more general problem.
I have a function that reads data in different formats. All formats are line- or record-oriented and for each format there's a dedicated parsing function, implemented as a generator. So the main reading function gets an input and a generator, which reads its respective format from the input and delivers records back to the main function:
def read(stream, parsefunc):
    for record in parsefunc(stream):
        do_stuff(record)
where parsefunc is something like:
def parsefunc(stream):
    while not eof(stream):
        rec = read_record(stream)
        # do some stuff
        yield rec
The problem I'm facing is that while parsefunc can throw an exception (e.g. when reading from a stream), it has no idea how to handle it. The function responsible for handling exceptions is the main read function. Note that exceptions occur on a per-record basis, so even if one record fails, the generator should continue its work and yield records back until the whole stream is exhausted.
In the previous question I tried to put next(parsefunc) in a try block, but as it turned out, this is not going to work. So I have to add try-except to the parsefunc itself and then somehow deliver exceptions to the consumer:
def parsefunc(stream):
    while not eof(stream):
        try:
            rec = read_record()
            yield rec
        except Exception as e:
            ?????
I'm rather reluctant to do this because
- it makes no sense to use try in a function that isn't intended to handle any exceptions;
- it's unclear to me how to pass exceptions to the consuming function;
- there are going to be many formats and many parsefuncs, and I don't want to clutter them with too much helper code.
Has anyone suggestions for a better architecture?
A note for googlers: in addition to the top answer, pay attention to senderle's and Jon's posts - very smart and insightful stuff.
You can return a tuple of record and exception in the parsefunc and let the consumer function decide what to do with the exception:
import random

def get_record(line):
    num = random.randint(0, 3)
    if num == 3:
        raise Exception("3 means danger")
    return line

def parsefunc(stream):
    for line in stream:
        try:
            rec = get_record(line)
        except Exception as e:
            yield (None, e)
        else:
            yield (rec, None)

if __name__ == '__main__':
    with open('temp.txt') as f:
        for rec, e in parsefunc(f):
            if e:
                print "Got an exception %s" % e
            else:
                print "Got a record %s" % rec
Thinking deeper about what would happen in a more complex case kind of vindicates the Python choice of avoiding bubbling exceptions out of a generator.
If I got an I/O error from a stream object the odds of simply being able to recover and continue reading, without the structures local to the generator being reset in some way, would be low. I would somehow have to reconcile myself with the reading process in order to continue: skip garbage, push back partial data, reset some incomplete internal tracking structure, etc.
Only the generator has enough context to do that properly. Even if you could keep the generator context, having the outer block handle the exceptions would totally flout the Law of Demeter. All the important information that the surrounding block needs to reset and move on is in local variables of the generator function! And getting or passing that information, though possible, is disgusting.
The resulting exception would almost always be thrown after cleaning up, in which case the reader-generator will already have an internal exception block. Trying very hard to maintain this cleanliness in the brain-dead-simple case only to have it break down in almost every realistic context would be silly. So just have the try in the generator, you are going to need the body of the except block anyway, in any complex case.
It would be nice if exceptional conditions could look like exceptions, though, and not like return values. So I would add an intermediate adapter to allow for this: the generator would yield either data or exceptions, and the adapter would re-raise the exception if applicable. The adapter should be called first-thing inside the for loop, so that we have the option of catching it within the loop and cleaning up to continue, or breaking out of the loop to catch it and abandon the process. And we should put some kind of lame wrapper around the setup to indicate that tricks are afoot, and to force the adapter to get called if the function is adapting.
That way each layer is presented errors that it has the context to handle, at the expense of the adapter being a tiny bit intrusive (and perhaps also easy to forget).
So we would have:
def read(stream, parsefunc):
    try:
        for source in frozen(parsefunc(stream)):
            try:
                record = source.thaw()
                do_stuff(record)
            except Exception, e:
                log_error(e)
                if not is_recoverable(e):
                    raise
                recover()
    except Exception, e:
        properly_give_up()
    wrap_up()
(Where the two try blocks are optional.)
The adapter looks like:
class Frozen(object):
    def __init__(self, item):
        self.value = item

    def thaw(self):
        if isinstance(self.value, Exception):
            raise self.value
        return self.value

def frozen(generator):
    for item in generator:
        yield Frozen(item)
And parsefunc looks like:
def parsefunc(stream):
    while not eof(stream):
        try:
            rec = read_record(stream)
            do_some_stuff()
            yield rec
        except Exception, e:
            properly_skip_record_or_prepare_retry()
            yield e
To make it harder to forget the adapter, we could also change frozen from a function to a decorator on parsefunc.
def frozen_results(func):
    def freezer(__func=func, *args, **kw):
        for item in __func(*args, **kw):
            yield Frozen(item)
    return freezer
In which case we would declare:
@frozen_results
def parsefunc(stream):
    ...
And we would obviously not bother to declare frozen, or wrap it around the call to parsefunc.
Without knowing more about the system, I think it's difficult to tell what approach will work best. However, one option that no one has suggested yet would be to use a callback. Given that only read knows how to deal with exceptions, might something like this work?
def read(stream, parsefunc):
    some_closure_data = {}

    def error_callback_1(e):
        manipulate(some_closure_data, e)

    def error_callback_2(e):
        transform(some_closure_data, e)

    for record in parsefunc(stream, error_callback_1):
        do_stuff(record)
Then, in parsefunc:
def parsefunc(stream, error_callback):
    while not eof(stream):
        try:
            rec = read_record()
            yield rec
        except Exception as e:
            error_callback(e)
I used a closure over a mutable local here; you could also define a class. Note also that you can access the traceback info via sys.exc_info() inside the callback.
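For example, a callback that also records the traceback might look roughly like this (failures is a made-up name; the callback runs while the generator's except block is still active, so sys.exc_info() is populated):

import sys
import traceback

failures = []

def error_callback_1(e):
    # Grab the full exception info while the exception is still being handled.
    exc_type, exc_value, exc_tb = sys.exc_info()
    failures.append((exc_value, ''.join(traceback.format_tb(exc_tb))))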
Another interesting approach might be to use send. This would work a little differently; basically, instead of defining a callback, read could check the result of yield, do a lot of complex logic, and send a substitute value, which the generator would then re-yield (or do something else with). This is a bit more exotic, but I thought I'd mention it in case it's useful:
>>> def parsefunc(it):
...     default = None
...     for x in it:
...         try:
...             rec = float(x)
...         except ValueError as e:
...             default = yield e
...             yield default
...         else:
...             yield rec
...
>>> parsed_values = parsefunc(['4', '6', '5', '5h', '22', '7'])
>>> for x in parsed_values:
...     if isinstance(x, ValueError):
...         x = parsed_values.send(0.0)
...     print x
...
4.0
6.0
5.0
0.0
22.0
7.0
On its own this is a bit useless ("Why not just print the default directly from read?" you might ask), but you could do more complex things with default inside the generator, resetting values, going back a step, and so on. You could even wait to send a callback at this point based on the error you receive. But note that sys.exc_info() is cleared as soon as the generator yields, so you'll have to send everything from sys.exc_info() if you need access to the traceback.
Here's an example of how you might combine the two options:
import string

digits = set(string.digits)

def digits_only(v):
    return ''.join(c for c in v if c in digits)

def parsefunc(it):
    default = None
    for x in it:
        try:
            rec = float(x)
        except ValueError as e:
            callback = yield e
            yield float(callback(x))
        else:
            yield rec

parsed_values = parsefunc(['4', '6', '5', '5h', '22', '7'])
for x in parsed_values:
    if isinstance(x, ValueError):
        x = parsed_values.send(digits_only)
    print x
An example of a possible design:
from StringIO import StringIO
import csv

blah = StringIO('this,is,1\nthis,is\n')

def parse_csv(stream):
    for row in csv.reader(stream):
        try:
            yield int(row[2])
        except (IndexError, ValueError) as e:
            pass  # don't yield but might need something
            # All others have to go up a level - so it wasn't parsable
            # So if it's an IOError you know why, but this needs to catch
            # exceptions potentially, just let the major ones propagate

for record in parse_csv(blah):
    print record
I like the given answer with the Frozen stuff. Based on that idea I came up with this, solving two aspects I did not yet like. The first was the pattern needed to write it down. The second was the loss of the stack trace when yielding an exception. I tried my best to solve the first by using decorators as well as possible. I tried keeping the stack trace by using sys.exc_info() instead of the exception alone.
My generator normally (i.e. without my stuff applied) would look like this:
def generator():
    def f(i):
        return float(i) / (3 - i)
    for i in range(5):
        yield f(i)
If I can transform it into using an inner function to determine the value to yield, I can apply my method:
def generator():
    def f(i):
        return float(i) / (3 - i)
    for i in range(5):
        def generate():
            return f(i)
        yield generate()
This doesn't yet change anything and calling it like this would raise an error with a proper stack trace:
for e in generator():
    print e
Now, applying my decorators, the code would look like this:
@excepterGenerator
def generator():
    def f(i):
        return float(i) / (3 - i)
    for i in range(5):
        @excepterBlock
        def generate():
            return f(i)
        yield generate()
Not much change optically. And you still can use it the way you used the version before:
for e in generator():
    print e
And you still get a proper stack trace when calling. (Just one more frame is in there now.)
But now you also can use it like this:
it = generator()
while it:
    try:
        for e in it:
            print e
    except Exception as problem:
        print 'exc', problem
This way you can handle in the consumer any exception raised in the generator without too much syntactic hassle and without losing stack traces.
The decorators are spelled out like this:
import sys

def excepterBlock(code):
    def wrapper(*args, **kwargs):
        try:
            return (code(*args, **kwargs), None)
        except Exception:
            return (None, sys.exc_info())
    return wrapper

class Excepter(object):
    def __init__(self, generator):
        self.generator = generator
        self.running = True

    def next(self):
        try:
            v, e = self.generator.next()
        except StopIteration:
            self.running = False
            raise
        if e:
            raise e[0], e[1], e[2]
        else:
            return v

    def __iter__(self):
        return self

    def __nonzero__(self):
        return self.running

def excepterGenerator(generator):
    return lambda *args, **kwargs: Excepter(generator(*args, **kwargs))
(I answered the other question linked in the OP but my answer applies to this situation as well)
I have needed to solve this problem a couple of times and came upon this question after a search for what other people have done.
One option - which will probably require refactoring things a little bit - would be to simply create an error handling generator, and throw the exception into it (from inside the parsing generator) rather than raising it.
Here is what the error handling generator function might look like:
def err_handler():
    # a generator for processing errors
    while True:
        try:
            # errors are thrown to this point in function
            yield
        except Exception1:
            handle_exc1()
        except Exception2:
            handle_exc2()
        except Exception3:
            handle_exc3()
        except Exception:
            raise
An additional handler argument is provided to the parsefunc function so it has a place to put the errors:
def parsefunc(stream, handler):
    # the handler argument fixes errors/problems separately
    while not eof(stream):
        try:
            rec = read_record(stream)
            # do some stuff
            yield rec
        except Exception as e:
            handler.throw(e)
    handler.close()
Now just use almost the original read function, but now with an error handler:
def read(stream, parsefunc):
    handler = err_handler()
    next(handler)  # prime the handler so it is paused at its yield
    for record in parsefunc(stream, handler):
        do_stuff(record)
This isn't always going to be the best solution, but it's certainly an option, and relatively easy to understand.
About your point of propagating the exception from the generator to the consuming function: you can use an error code (or a set of error codes) to indicate the error. Though not elegant, that is one approach you can think of. For example, in the code below, yielding a value like -1 where you were expecting a set of positive integers would signal to the calling function that there was an error.
In [1]: def f():
   ...:     yield 1
   ...:     try:
   ...:         2/0
   ...:     except ZeroDivisionError, e:
   ...:         yield -1
   ...:     yield 3
   ...:
In [2]: g = f()
In [3]: next(g)
Out[3]: 1
In [4]: next(g)
Out[4]: -1
In [5]: next(g)
Out[5]: 3
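On the consumer side, the calling function would then check for the sentinel before using the value (handle_parse_error and process are made-up names for illustration):

for value in f():
    if value == -1:
        handle_parse_error()  # react to the sentinel instead of treating it as data
    else:
        process(value)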
Actually, generators are quite limited in several aspects. You found one: the raising of exceptions is not part of their API.
You could have a look at the Stackless Python stuff like greenlets or coroutines which offer a lot more flexibility; but diving into that is a bit out of scope here.

Can I catch errors in a list comprehension to be sure to loop over all the list items?

I've got a list comprehension which filters a list:
l = [obj for obj in objlist if not obj.mycond()]
but the object method mycond() can raise an exception I must intercept. I need to collect all the errors at the end of the loop to show which objects created problems, and at the same time I want to be sure to loop over all the list elements.
My solution was:
errors = []
copy = objlist[:]
for obj in copy:
    try:
        if obj.mycond():
            # avoid touching the list being looped over directly
            objlist.remove(obj)
    except MyException as err:
        errors.append(err)
if errors:
    ...  # do something
return objlist
In this post (How to delete list elements while cycling the list itself without duplicate it) I asked whether there is a better way to loop that avoids duplicating the list.
The community answered that I should avoid in-place list modification and use a list comprehension, which is applicable only if I ignore the exception problem.
Is there an alternative solution in your point of view? Can I manage exceptions in that manner using list comprehensions? In this kind of situation, and with big lists (what should I consider big?), must I find another alternative?
I would use a little auxiliary function:
def f(obj, errs):
    try: return not obj.mycond()
    except MyException as err: errs.append((obj, err))

errs = []
l = [obj for obj in objlist if f(obj, errs)]
if errs:
    emiterrorinfo(errs)
Note that this way you have in errs all the errant objects and the specific exception corresponding to each of them, so the diagnosis can be precise and complete; as well as the l you require, and your objlist still intact for possible further use. No list copy was needed, nor any changes to obj's class, and the code's overall structure is very simple.
A couple of comments:
First of all, the list comprehension syntax [expression for var in iterable] DOES create a copy. If you do not want to create a copy of the list, then use the generator expression (expression for var in iterable).
How do generators work? Essentially by calling next(obj) on the object repeatedly until a StopIteration exception is raised.
Based on your original code, it seems that you still need the filtered list as output.
So you can emulate that with little performance loss:
l = []
for obj in objlist:
    try:
        if not obj.mycond():
            l.append(obj)
    except Exception:
        pass
However, you could re-engineer that all with a generator function:
def FilterObj(objlist):
    for obj in objlist:
        try:
            if not obj.mycond():
                yield obj
        except Exception:
            pass
In that way, you can safely iterate over it without caching a list in the meantime:
for obj in FilterObj(objlist):
    obj.whatever()
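Since the question also asks for the failing objects to be reported, a variant of this generator could collect them as it goes (errs, MyException and emiterrorinfo are the names used elsewhere in this thread):

def FilterObj(objlist, errs):
    # Yield the objects that pass; record (obj, error) pairs for the ones that raise.
    for obj in objlist:
        try:
            if not obj.mycond():
                yield obj
        except MyException as err:
            errs.append((obj, err))

errs = []
for obj in FilterObj(objlist, errs):
    obj.whatever()
if errs:
    emiterrorinfo(errs)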
You could define a method of obj that calls obj.mycond() but also catches the exception:
class obj:
    def __init__(self):
        self.errors = []

    def mycond(self):
        # whatever you have here
        ...

    def errorcatcher(self):
        try:
            return self.mycond()
        except MyException as err:
            self.errors.append(err)
            return False  # or True, depending upon what you want
l = [obj for obj in objlist if not obj.errorcatcher()]
errors = [obj.errors for obj in objlist if obj.errors]
if errors:
    ...  # do something
Instead of copying the list and removing elements, start with a blank list and add members as necessary. Something like this:
errors = []
newlist = []
for obj in objlist:
    try:
        if not obj.mycond():
            newlist.append(obj)
    except MyException as err:
        errors.append(err)
if errors:
    ...  # do something
return newlist
The syntax isn't as pretty, but it'll do more or less the same thing that the list comprehension does without any unnecessary removals.
Adding or removing elements anywhere other than at the end of a list will be slow, because the list has to shift every element that comes after the insertion or removal point.

How do I check if a variable exists?

I want to check if a variable exists. Now I'm doing something like this:
try:
    myVar
except NameError:
    ...  # Do something.
Are there other ways without exceptions?
To check the existence of a local variable:
if 'myVar' in locals():
    ...  # myVar exists.
To check the existence of a global variable:
if 'myVar' in globals():
    ...  # myVar exists.
To check if an object has an attribute:
if hasattr(obj, 'attr_name'):
    ...  # obj.attr_name exists.
The use of variables that have yet to be defined or set (implicitly or explicitly) is often a bad thing in any language, since it tends to indicate that the logic of the program hasn't been thought through properly, and is likely to result in unpredictable behaviour.
If you need to do it in Python, the following trick, which is similar to yours, will ensure that a variable has some value before use:
try:
    myVar
except NameError:
    myVar = None  # or some other default value.

# Now you're free to use myVar without Python complaining.
However, I'm still not convinced that's a good idea - in my opinion, you should try to refactor your code so that this situation does not occur.
By way of an example, the following code was given below in a comment, to allow line drawing from a previous point to the current point:
if last:
    draw(last, current);
last = current
In the case where last has not been bound to a value, that won't help in Python at all since even the checking of last will raise an exception. A better idea would be to ensure last does have a value, one that can be used to decide whether or not it is valid. That would be something like:
last = None

# some time passes ...

if last is not None:
    draw(last, current);
last = current
That ensures the variable exists and that you only use it if it's valid for what you need it for. This is what I assume the if last was meant to do in the comment code (but didn't), and you can still add the code to force this if you have no control over the initial setting of the variable, using the exception method above:
# Variable 'last' may or may not be bound to a value at this point.
try:
    last
except NameError:
    last = None
# It will always now be bound to a value at this point.

if last is not None:
    draw(last, current);
last = current
A simple way is to initialize it first with myVar = None.
Then later on:
if myVar is not None:
    ...  # Do something
Using try/except is the best way to test for a variable's existence. But there's almost certainly a better way of doing whatever it is you're doing than setting/testing global variables.
For example, if you want to initialize a module-level variable the first time you call some function, you're better off with code something like this:
my_variable = None

def InitMyVariable():
    global my_variable
    if my_variable is None:
        my_variable = ...
For objects/modules, you can also use
'var' in dir(obj)
For example,
>>> class Something(object):
...     pass
...
>>> c = Something()
>>> c.a = 1
>>> 'a' in dir(c)
True
>>> 'b' in dir(c)
False
I will assume that the test is going to be used in a function, similar to user97370's answer. I don't like that answer because it pollutes the global namespace. One way to fix it is to use a class instead:
class InitMyVariable(object):
    my_variable = None

    def __call__(self):
        if self.my_variable is None:
            self.my_variable = ...
I don't like this, because it complicates the code and opens up questions such as: should this conform to the Singleton programming pattern? Fortunately, Python has allowed functions to have attributes for a while, which gives us this simple solution:
def InitMyVariable():
    if InitMyVariable.my_variable is None:
        InitMyVariable.my_variable = ...

InitMyVariable.my_variable = None
catch is called except in Python. Other than that, it's fine for such simple cases. There's also AttributeError, which can be used to check if an object has an attribute.
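For example, a minimal sketch of that attribute check (attr_name is a placeholder):

try:
    value = obj.attr_name
except AttributeError:
    value = None  # the attribute does not exist; fall back to a default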
A way that often works well for handling this kind of situation is to not explicitly check if the variable exists but just go ahead and wrap the first usage of the possibly non-existing variable in a try/except NameError:
# Search for entry.
for x in y:
    if x == 3:
        found = x

# Work with found entry.
try:
    print('Found: {0}'.format(found))
except NameError:
    print('Not found')
else:
    # Handle rest of Found case here
    ...
I created a custom function.
def exists(var):
    return var in globals()
Then call the function as follows, replacing variable_name with the name of the variable you want to check:
exists("variable_name")
It will return True or False.
Like so:
def no(var):
    "give var as a string (quote it like 'var')"
    assert(var not in vars())
    assert(var not in globals())
    assert(var not in vars(__builtins__))
    import keyword
    assert(var not in keyword.kwlist)
Then later:
no('foo')
foo = ....
If your new variable foo is not safe to use, you'll get an AssertionError exception which will point to the line that failed, and then you will know better.
Here is the obvious contrived self-reference:
no('no')
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-88-d14ecc6b025a> in <module>
----> 1 no('no')

<ipython-input-86-888a9df72be0> in no(var)
      2     "give var as a string (quote it)"
      3     assert(var not in vars())
----> 4     assert(var not in globals())
      5     assert(var not in vars(__builtins__))
      6     import keyword

AssertionError:
It may not be performant, but you can generalise the solution into a function that checks both local and global variables:
import inspect

def exists_var(var_name):
    frame = inspect.currentframe()
    try:
        return var_name in frame.f_back.f_locals or var_name in globals()
    finally:
        del frame
Then you can use it like this:
exists_var('myVar')
Short variant:
my_var = some_value if 'my_var' not in globals() else my_var
This was my scenario:
for i in generate_numbers():
    do_something(i)
# Use the last i.
I can’t easily determine the length of the iterable, and that means that i may or may not exist depending on whether the iterable produces an empty sequence.
If I want to use the last i of the iterable (an i that doesn’t exist for an empty sequence) I can do one of two things:
i = None  # Declare the variable.
for i in generate_numbers():
    do_something(i)
use_last(i)
or
for i in generate_numbers():
    do_something(i)
try:
    use_last(i)
except UnboundLocalError:
    pass  # i didn’t exist because sequence was empty.
The first solution may be problematic because I can’t tell (depending on the sequence values) whether i was the last element. The second solution is more accurate in that respect.
Also a possibility for objects, use __dict__.
class A(object):
    def __init__(self):
        self.m = 1
a = A()
assert "m" in a.__dict__
assert "k" not in a.__dict__
