I use Salt, and it raises the following exception when I run exec_state('update_salt') (the code is below):
File "/usr/lib/python2.6/site-packages/salt/client/__init__.py", line 1582, in __init__
caller = salt.client.Caller()
File "/usr/lib/python2.6/site-packages/salt/minion.py", line 283, in __init__
for key, val in data.items():
File "/usr/lib/python2.6/site-packages/salt/minion.py", line 300, in gen_modules
File "/usr/lib/python2.6/site-packages/salt/loader.py", line 286, in render
opts,
salt.exceptions.LoaderError: The renderer yaml_jinja is unavailable, this error is often because the needed software is unavailable
I tried to handle it with a try/except block:
try:
    result = exec_state('update_salt')
    if not result:
        return False
except:
    print "got it.."
    result = exec_state('update_salt_light')
    if not result:
        return False
But it still fails on the first attempt and never reaches the except block ("got it.." is not printed). Why?
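(For reference, a diagnostic variant of the block above; exec_state is my own helper and isn't shown here. An except clause only fires for exceptions raised while the try body itself is executing, so this should at least show whether the error ever reaches this code:)
import salt.exceptions

try:
    result = exec_state('update_salt')  # exec_state is my helper, not shown here
except salt.exceptions.LoaderError as err:
    # the class named in the traceback
    print "caught LoaderError: {0}".format(err)
except BaseException as err:
    # anything else, including SystemExit, in case salt exits on its own
    print "caught {0}: {1}".format(type(err).__name__, err)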
I have the following function:
def get_prev_match_elos(player_id, prev_matches):
    try:
        last_match = prev_matches[-1]
        return last_match, player_id
    except IndexError:
        return
Sometimes prev_matches can be an empty list, so I've added the try/except block to catch an IndexError. However, I'm still getting an explicit IndexError on last_match = prev_matches[-1] when I pass an empty list, instead of the except block kicking in.
I've tried replicating this function in another file and it works fine! Any ideas?
Full error:
Exception has occurred: IndexError
list index out of range
File "C:\Users\Philip\OneDrive\Betting\Capra\Tennis\polgara\elo.py", line 145, in get_prev_match_elos
last_match = prev_matches[-1]
File "C:\Users\Philip\OneDrive\Betting\Capra\Tennis\polgara\elo.py", line 24, in engineer_elos
get_prev_match_elos(player_id, prev_matches_all_surface)
File "C:\Users\Philip\OneDrive\Betting\Capra\Tennis\polgara\updater.py", line 499, in engineer_variables
engineer_elos(dal, p1_id, date, surface, params)
File "C:\Users\Philip\OneDrive\Betting\Capra\Tennis\polgara\updater.py", line 99, in run_updater
engineer_variables(dal, matches_for_engineering, params)
File "C:\Users\Philip\OneDrive\Betting\Capra\Tennis\polgara\decorators.py", line 12, in wrapper_timer
value = func(*args, **kwargs)
File "C:\Users\Philip\OneDrive\Betting\Capra\Tennis\polgara\updater.py", line 72, in main
run_updater(dal, scraper)
File "C:\Users\Philip\OneDrive\Betting\Capra\Tennis\polgara\updater.py", line 645, in <module>
main()
I also can't replicate the error, but an easy fix is to not use exceptions this way. Exception handling should be reserved for capturing failures you can't reasonably check for up front, not used as ordinary control flow. Try checking whether the list is empty instead:
def get_prev_match_elos(player_id, prev_matches):
    if not prev_matches:
        return
    last_match = prev_matches[-1]
    return last_match, player_id
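A quick check with made-up inputs shows both branches:
print(get_prev_match_elos(42, []))            # None: empty list, early return
print(get_prev_match_elos(42, ['m1', 'm2']))  # ('m2', 42)
The caller then just has to handle the None case instead of an exception.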
I want to write a Try/Except block that catches the specific error causing this stack trace:
File "/home/me/anaconda2/envs/deepnn/lib/python2.7/site-packages/tensorflow/python/debug/wrappers/local_cli_wrapper.py", line 292, in _prep_cli_for_run_start
self._run_cli = ui_factory.get_ui(self._ui_type)
File "/home/me/anaconda2/envs/deepnn/lib/python2.7/site-packages/tensorflow/python/debug/cli/ui_factory.py", line 61, in get_ui
return curses_ui.CursesUI(on_ui_exit=on_ui_exit, config=config)
File "/home/me/anaconda2/envs/deepnn/lib/python2.7/site-packages/tensorflow/python/debug/cli/curses_ui.py", line 289, in __init__
self._screen_init()
File "/home/me/anaconda2/envs/deepnn/lib/python2.7/site-packages/tensorflow/python/debug/cli/curses_ui.py", line 404, in _screen_init
self._screen_color_init()
File "/home/me/anaconda2/envs/deepnn/lib/python2.7/site-packages/tensorflow/python/debug/cli/curses_ui.py", line 409, in _screen_color_init
curses.use_default_colors()
_curses.error: use_default_colors() returned ERR
But I can't figure out what the correct exception class is.
I've written the following try/except to get more info:
try:
    ... call to procedure that generates error ...
except Exception, e:
    print("type is:", e.__class__.__name__)
    import sys
    print(sys.exc_info())
And the result I've gotten is:
type is: error
(<class '_curses.error'>, error('use_default_colors() returned ERR',), <traceback object at 0x7fdec55abdd0>)
> /home/me/Projects/kerasECOC/net_manager.py(164)init_model_architecture()
But when I try:
except error, e:
I get the following error message:
File "/home/me/Projects/kerasECOC/net_manager.py", line 157, in init_model_architecture
except error,e:
NameError: global name 'error' is not defined
So, how can I figure out which specific exception to catch?
As the traceback indicates, you should use curses.error:
import curses

try:
    ...
except curses.error as err:
    print(err)
You can check curses.error.mro() for base classes which you could except as well:
>>> curses.error.mro()
[<class '_curses.error'>, <class 'Exception'>, <class 'BaseException'>, <class 'object'>]
It does not, however, inherit from any of the more specific built-in exceptions.
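Applied to the call your traceback ends in, a minimal sketch looks like this (in your case the call happens deep inside TensorFlow's debug CLI, so you would wrap whatever top-level call of yours triggers it):
import curses

try:
    curses.use_default_colors()
except curses.error as err:
    # e.g. fall back to the terminal's default colour behaviour
    print(err)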
I'm building a simple pipeline with Apache Beam in Python (on GCP Dataflow) that reads from Pub/Sub and writes to BigQuery, but I can't handle exceptions in the pipeline to create alternative flows.
On a simple WriteToBigQuery example:
output = json_output | 'Write to BigQuery' >> beam.io.WriteToBigQuery('some-project:dataset.table_name')
I tried to put this inside a try/except block, but it doesn't work: when it fails, the exception seems to be thrown in a Java layer outside my Python execution:
INFO:root:2019-01-29T15:49:46.516Z: JOB_MESSAGE_ERROR: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error received from SDK harness for instruction -87: Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 135, in _execute
response = task()
File "/usr/local/lib/python2.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 170, in <lambda>
self._execute(lambda: worker.do_instruction(work), work)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 221, in do_instruction
request.instruction_id)
...
...
...
self.signature.finish_bundle_method.method_value())
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/gcp/bigquery.py", line 1368, in finish_bundle
self._flush_batch()
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/gcp/bigquery.py", line 1380, in _flush_batch
self.table_id, errors))
RuntimeError: Could not successfully insert rows to BigQuery table [<myproject:datasetname.tablename>]. Errors: [<InsertErrorsValueListEntry
errors: [<ErrorProto
debugInfo: u''
location: u''
message: u'Missing required field: object.teste.'
reason: u'invalid'>]
index: 0>] [while running 'generatedPtransform-63']
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
org.apache.beam.sdk.util.MoreFutures.get(MoreFutures.java:57)
org.apache.beam.runners.dataflow.worker.fn.control.RegisterAndProcessBundleOperation.finish(RegisterAndProcessBundleOperation.java:276)
org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:84)
org.apache.beam.runners.dataflow.worker.fn.control.BeamFnMapTaskExecutor.execute(BeamFnMapTaskExecutor.java:119)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1228)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:143)
org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:967)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Error received from SDK harness for instruction -87: Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 135, in _execute
response = task()
File "/usr/local/lib/python2.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 170, in <lambda>
self._execute(lambda: worker.do_instruction(work), work)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 221, in do_instruction
request.instruction_id)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 237, in process_bundle
bundle_processor.process_bundle(instruction_id)
...
...
...
self.signature.finish_bundle_method.method_value())
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/gcp/bigquery.py", line 1368, in finish_bundle
self._flush_batch()
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/gcp/bigquery.py", line 1380, in _flush_batch
self.table_id, errors))
Even trying to handle this:
RuntimeError: Could not successfully insert rows to BigQuery table [<myproject:datasetname.tablename>]. Errors: [<InsertErrorsValueListEntry
errors: [<ErrorProto
debugInfo: u''
location: u''
message: u'Missing required field: object.teste.'
reason: u'invalid'>]
index: 0>] [while running 'generatedPtransform-63']
Using:
try:
    ...
except RuntimeException as e:
    ...
Neither that nor a generic Exception works.
I could find plenty of examples of error handling in Apache Beam using Java, but none in Python.
Does anyone know how to handle this?
I've only been able to catch exceptions at the DoFn level, so something like this:
import apache_beam as beam
from apache_beam import pvalue

class MyPipelineStep(beam.DoFn):
    def process(self, element, *args, **kwargs):
        try:
            # do stuff...
            yield pvalue.TaggedOutput('main_output', output_element)
        except Exception as e:
            yield pvalue.TaggedOutput('exception', str(e))
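To consume both streams, you can then split the ParDo result by tag, roughly like this (a sketch; pcoll stands in for your input PCollection and the tag names are the ones used above):
results = (pcoll
           | 'MyStep' >> beam.ParDo(MyPipelineStep()).with_outputs(
               'main_output', 'exception'))

good_records = results['main_output']   # elements from the happy path
failed_records = results['exception']   # stringified exceptions, e.g. for a dead-letter sink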
However, WriteToBigQuery is a PTransform that wraps the DoFn BigQueryWriteFn,
so you may need to do something like this:
from apache_beam.io.gcp.bigquery import BigQueryWriteFn, WriteToBigQuery

class MyBigQueryWriteFn(BigQueryWriteFn):
    def process(self, *args, **kwargs):
        try:
            # note: super() must name the subclass, not BigQueryWriteFn itself
            return super(MyBigQueryWriteFn, self).process(*args, **kwargs)
        except Exception as e:
            # Do something here
            pass

class MyWriteToBigQuery(WriteToBigQuery):
    # Copy the source code of `WriteToBigQuery` here,
    # but replace `BigQueryWriteFn` with `MyBigQueryWriteFn`
    pass
https://beam.apache.org/releases/pydoc/2.9.0/_modules/apache_beam/io/gcp/bigquery.html#WriteToBigQuery
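With that in place, the write step from your pipeline would just swap in the subclass (a sketch only, since MyWriteToBigQuery above is left as an exercise):
output = json_output | 'Write to BigQuery' >> MyWriteToBigQuery('some-project:dataset.table_name')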
You can also use the generator flavor of FlatMap:
This is similar to the other answer, in that you can use a DoFn in place of something else, e.g. a CombineFn, to produce no outputs when there is an exception or some other kind of failed precondition.
import logging
from typing import Generator, List

import apache_beam as beam

def sum_values(values: List[int]) -> Generator[int, None, None]:
    if not values or len(values) < 10:
        logging.error(f'received invalid inputs: {...}')
        return
    yield sum(values)

# Now, instead of using CombinePerKey:
(inputs
 | 'WithKey' >> beam.Map(lambda x: (x.key, x))
 | 'GroupByKey' >> beam.GroupByKey()
 | 'Values' >> beam.Values()
 | 'MaybeSum' >> beam.FlatMap(sum_values))
I have a for loop over an Avro data reader object:
for i in reader:
    print i
Then I got a UnicodeDecodeError in the for statement, and I wanted to ignore that particular record, so I did this:
try:
    for i in reader:
        print i
except:
    pass
but it does not continue past the bad record. How can I overcome this problem?
Edit: Error trace added
Traceback (most recent call last):
File "modify.py", line 22, in <module>
for record in reader:
File "/usr/lib/python2.6/site-packages/avro-1.7.7-py2.6.egg/avro/datafile.py", line 362, in next
datum = self.datum_reader.read(self.datum_decoder)
File "/usr/lib/python2.6/site-packages/avro-1.7.7-py2.6.egg/avro/io.py", line 445, in read
return self.read_data(self.writers_schema, self.readers_schema, decoder)
File "/usr/lib/python2.6/site-packages/avro-1.7.7-py2.6.egg/avro/io.py", line 490, in read_data
return self.read_record(writers_schema, readers_schema, decoder)
File "/usr/lib/python2.6/site-packages/avro-1.7.7-py2.6.egg/avro/io.py", line 690, in read_record
field_val = self.read_data(field.type, readers_field.type, decoder)
File "/usr/lib/python2.6/site-packages/avro-1.7.7-py2.6.egg/avro/io.py", line 468, in read_data
return decoder.read_utf8()
File "/usr/lib/python2.6/site-packages/avro-1.7.7-py2.6.egg/avro/io.py", line 233, in read_utf8
return unicode(self.read_bytes(), "utf-8")
UnicodeDecodeError: 'utf8' codec can't decode byte 0xb4 in position 14: invalid start byte
Could it be because the file is corrupted?
Edit 2:
As suggested in the answers, I modified the code to step through iterobject manually and got this error:
Traceback (most recent call last):
File "modify.py", line 28, in <module>
print next(iterobject)["filepath"]
File "/usr/lib/python2.6/site-packages/avro-1.7.7-py2.6.egg/avro/datafile.py", line 362, in next
datum = self.datum_reader.read(self.datum_decoder)
File "/usr/lib/python2.6/site-packages/avro-1.7.7-py2.6.egg/avro/io.py", line 445, in read
return self.read_data(self.writers_schema, self.readers_schema, decoder)
File "/usr/lib/python2.6/site-packages/avro-1.7.7-py2.6.egg/avro/io.py", line 490, in read_data
return self.read_record(writers_schema, readers_schema, decoder)
File "/usr/lib/python2.6/site-packages/avro-1.7.7-py2.6.egg/avro/io.py", line 690, in read_record
field_val = self.read_data(field.type, readers_field.type, decoder)
File "/usr/lib/python2.6/site-packages/avro-1.7.7-py2.6.egg/avro/io.py", line 468, in read_data
return decoder.read_utf8()
File "/usr/lib/python2.6/site-packages/avro-1.7.7-py2.6.egg/avro/io.py", line 233, in read_utf8
return unicode(self.read_bytes(), "utf-8")
File "/usr/lib/python2.6/site-packages/avro-1.7.7-py2.6.egg/avro/io.py", line 226, in read_bytes
return self.read(self.read_long())
File "/usr/lib/python2.6/site-packages/avro-1.7.7-py2.6.egg/avro/io.py", line 184, in read_long
b = ord(self.read(1))
TypeError: ord() expected a character, but string of length 0 found
If your error is raised by the for statement itself, then try this; it will skip the element in the iterator when a UnicodeDecodeError occurs:
iterobject = iter(reader)
while True:
    try:
        print(next(iterobject))
    except StopIteration:
        break
    except UnicodeDecodeError:
        pass
You need the try/except inside the loop:
for i in reader:
    try:
        print i
    except UnicodeEncodeError:
        pass
By the way, it's good practice to specify the specific type of error you're trying to catch (like I did with except UnicodeEncodeError:), since otherwise you risk making your code very hard to debug!
You can catch the specific error and avoid letting unknown errors pass unnoticed.
Python 3.x:
try:
    for i in reader:
        print(i)
except UnicodeDecodeError as ue:
    print(str(ue))
Python 2.x:
try:
    for i in reader:
        print i
except UnicodeDecodeError, ue:
    print(str(ue))
Printing the error makes it possible to know what happened. When you use a bare except, you catch anything (and that can include an obscure RuntimeError), and you'll never know what actually happened. That can occasionally be useful, but it's dangerous and generally bad practice.
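As a small illustration (process_record is a made-up stand-in):
def process_record(raw):
    return raw.decode("utf-8")

try:
    print(process_record(b"\xb4 bad bytes"))
except UnicodeDecodeError as ue:
    # the one failure we expect and know how to handle
    print("skipping bad record: {0}".format(ue))
# a bare `except:` would also swallow a typo like `process_recod(...)`
# (a NameError), making the real bug much harder to find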
I am having trouble with the following code:
import praw
import argparse

# argument handling was here

def main():
    r = praw.Reddit(user_agent='Python Reddit Image Grabber v0.1')
    for i in range(len(args.subreddits)):
        try:
            r.get_subreddit(args.subreddits[i])  # test to see if the subreddit is valid
        except:
            print "Invalid subreddit"
        else:
            submissions = r.get_subreddit(args.subreddits[i]).get_hot(limit=100)
            print [str(x) for x in submissions]

if __name__ == '__main__':
    main()
Subreddit names are taken as arguments to the program.
When an invalid name from args.subreddits is passed to get_subreddit, it should throw an exception, which the code above should catch.
When a valid subreddit name is given as an argument, the program runs fine.
But when an invalid name is given, the exception is not caught; instead, the following uncaught exception is output:
Traceback (most recent call last):
File "./pyrig.py", line 33, in <module>
main()
File "./pyrig.py", line 30, in main
print [str(x) for x in submissions]
File "/usr/local/lib/python2.7/dist-packages/praw/__init__.py", line 434, in get_content
page_data = self.request_json(url, params=params)
File "/usr/local/lib/python2.7/dist-packages/praw/decorators.py", line 95, in wrapped
return_value = function(reddit_session, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/praw/__init__.py", line 469, in request_json
response = self._request(url, params, data)
File "/usr/local/lib/python2.7/dist-packages/praw/__init__.py", line 342, in _request
response = handle_redirect()
File "/usr/local/lib/python2.7/dist-packages/praw/__init__.py", line 316, in handle_redirect
url = _raise_redirect_exceptions(response)
File "/usr/local/lib/python2.7/dist-packages/praw/internal.py", line 165, in _raise_redirect_exceptions
.format(subreddit))
praw.errors.InvalidSubreddit: `soccersdsd` is not a valid subreddit
I can't tell what I'm doing wrong. I have also tried rewriting the except clause as
except praw.errors.InvalidSubreddit:
which also does not work.
EDIT: exception info for Praw can be found here
File "./pyrig.py", line 30, in main
print [str(x) for x in submissions]
The problem, as your traceback indicates, is that the exception doesn't occur when you call get_subreddit. In fact, it also doesn't occur when you call get_hot. The first is a lazy invocation that just creates a dummy Subreddit object but doesn't do anything with it. The second is a generator that doesn't make any requests until you actually try to iterate over it.
Thus you need to move the exception-handling code around your print statement (line 30), which is where the request that results in the exception is actually made.
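A sketch of that change, using the exception class from your traceback:
for i in range(len(args.subreddits)):
    submissions = r.get_subreddit(args.subreddits[i]).get_hot(limit=100)
    try:
        # iterating over `submissions` is what actually makes the request
        print [str(x) for x in submissions]
    except praw.errors.InvalidSubreddit:
        print "Invalid subreddit"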