I wrote the script below to connect to a remote server and pull some data from an XML status file. I added some error handling so the script could skip problem devices. For some reason, whenever the script gets a 401 response back, it breaks the whole loop and I get the message "Could not properly read the csv file". I tried other ways of handling the exception and it failed at other points. Any info on how to properly deal with this?
#!/usr/bin/python
import sys, re, csv, xmltodict
import requests, logging
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

def version(ip, username, password):
    baseUrl = "https://" + ip
    session = requests.Session()
    session.verify = False
    session.timeout = 45
    print "Connecting to " + ip
    try:
        r = session.get(baseUrl + '/getxml?location=/Status', auth=(username, password))
        r.raise_for_status()
    except Exception as error:
        print error
    doc = xmltodict.parse(r.text)
    version = str(doc['Status']['#version'])

def main():
    try:
        with open('list.csv', 'r') as file:
            reader = csv.DictReader(file)
            for row in reader:
                version(row['ip'], row['Username'], row['Password'])
    except Exception as error:
        print ValueError("Could not properly read the csv file \r")
        sys.exit(0)

if __name__ == "__main__":
    main()
The doc and version variables in version() are outside the try/except, so when the request fails, r is either unassigned or holds an error page, and the next two operations raise an uncaught exception, which then surfaces in main(). Try moving the doc and version lines inside the try/except and see if that works.
A related suggestion: catch specific exceptions, as this helps you understand why your code crashed. For example, Response.raise_for_status() raises requests.exceptions.HTTPError; catch that and let all other exceptions propagate. xmltodict may raise something else, so catch that separately instead of catching everything.
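For example, here is a minimal sketch of version() restructured that way. The parsing happens inside the try block, specific exceptions are caught (xmltodict relies on expat, which raises ExpatError on malformed XML), and the function returns None on failure so main()'s loop simply moves on to the next device. Note also that session.timeout = 45 has no effect in requests; a timeout has to be passed per request:
from xml.parsers.expat import ExpatError

def version(ip, username, password):
    baseUrl = "https://" + ip
    session = requests.Session()
    session.verify = False
    print("Connecting to " + ip)
    try:
        # Pass the timeout per request; a Session object has no timeout attribute
        r = session.get(baseUrl + '/getxml?location=/Status',
                        auth=(username, password), timeout=45)
        r.raise_for_status()
        # Parse inside the try block so a 401 body can't escape the handler
        doc = xmltodict.parse(r.text)
        return str(doc['Status']['#version'])
    except requests.exceptions.HTTPError as error:
        print("HTTP error from " + ip + ": " + str(error))
    except ExpatError:
        print("Could not parse the XML returned by " + ip)
    return None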
Related
I have an array that contains URL addresses of remote files.
At first I tried to download all the files using this naive approach:
for a in ARRAY:
    wget.download(url=a, out=path_folder)
It fails for various reasons: the host server returns a timeout, some URLs are broken, etc.
How can I handle this process more professionally? I could not apply what I found to my case.
If you still want to use wget, you can wrap the download in a try..except block that just prints any exception and moves on to the next file:
for f in files:
    try:
        wget.download(url=f, out=path_folder)
    except Exception as e:
        print("Could not download file {}".format(f))
        print(e)
Here is a way to define a timeout; it reads the filename from the URL and retrieves big files as a stream, so your memory won't get overfilled:
import os
import requests
from urllib.parse import urlparse

timeout = 30  # seconds

for url in urls:
    try:
        # Make the actual request; set the timeout for no data to X seconds
        # and enable streaming so we don't keep large files in memory
        response = requests.get(url, timeout=timeout, stream=True)
        # Get the filename from the URL
        name = os.path.basename(urlparse(url).path)
        # Open the output file and make sure we write in binary mode
        with open(name, 'wb') as fh:
            # Walk through the response in chunks of 1024 * 1024 bytes, i.e. 1 MiB
            for chunk in response.iter_content(1024 * 1024):
                # Write the chunk to the file
                fh.write(chunk)
    except Exception as e:
        print("Something went wrong:", e)
You can use urllib:
import urllib.request

urllib.request.urlretrieve('http://www.example.com/files/file.ext', 'folder/file.ext')
You can wrap the retrieval in a try/except to catch any errors:
try:
    urllib.request.urlretrieve('http://www.example.com/files/file.ext', 'folder/file.ext')
except urllib.error.HTTPError as e:
    print("The server couldn't fulfill the request.")
    print('Error code:', e.code)
except urllib.error.URLError as e:
    print('Failed to reach the server:', e.reason)
Adding it as another answer: if you want to solve the timeout issue, you can use the requests library.
import requests

try:
    requests.get('http://url/to/file')
except requests.exceptions.RequestException as e:
    print('Request failed:', e)
If you don't specify a timeout, the request can wait indefinitely.
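For completeness, here is a sketch with an explicit timeout (the 30-second value and the URL are placeholders); requests raises requests.exceptions.Timeout if the server sends no data within that window:
import requests

try:
    r = requests.get('http://url/to/file', timeout=30)
    r.raise_for_status()
except requests.exceptions.Timeout:
    print('Request timed out after 30 seconds')
except requests.exceptions.RequestException as e:
    print('Request failed:', e)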
I have a simple long poll thing using python3 and the requests package. It currently looks something like:
def longpoll():
    session = requests.Session()
    while True:
        try:
            fetched = session.get(MyURL)
            data = base64.b64decode(fetched.content)
            output = process(data)
            session.put(MyURL, data=base64.b64encode(output))
        except Exception as e:
            print(e)
            time.sleep(10)
There is a case where, instead of processing the input and putting the result, I'd like to raise an HTTP error. Is there a simple way to do this from the high-level Session interface? Or do I have to drill down to the lower-level objects?
Since you have control over the server, you may want to reverse the second call.
Here is an example using bottle to receive the second poll:
def longpoll():
    session = requests.Session()
    while True:  # I'm guessing the server does not mind being polled many times...
        try:
            session.post(MyURL, {"ip_address": my_ip_address})  # request work, or "I'm alive"
            #input = base64.b64decode(fetched.content)
            #output = process(data)
            #session.put(MyURL, data=base64.b64encode(response))
        except Exception as e:
            print(e)
            time.sleep(10)

@bottle.post("/process")
def process_new_work():
    data = bottle.request.json  # a property, not a method: the parsed JSON body
    output = process(data)  # if an error is thrown, the framework returns an HTTP error
    return output
This way the server will get the output or a bad HTTP status.
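Note that the route above only takes effect once the bottle app is actually started somewhere, e.g. with bottle.run(host='0.0.0.0', port=8080) (host and port being whatever fits your setup); the decorated function alone does not serve anything.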
At the risk of being a little off-topic, I'll give it a try:
I have a small script written in Python which is supposed to POST some data to a specific URL. The problem is:
I am integrating this script into Zenoss.
Here is the path in the Zenoss GUI:
`Events(http://ip:port/zport/dmd/Events/evconsole)->click on some event class->settings->transform(here I copy-paste my script)`
When the events are triggered, the script is called, but no data is POSTed to the other IP (a different server), even though the script runs correctly - I don't get any error messages in the logs or in the GUI. So I guess there are some restrictions at the network level, and somehow the POSTed data isn't sent by Zenoss.
Could you guys please tell me what I should modify (perhaps a .conf file or the hosts file), or what I should do to be able to POST data to another server?
The code is here (if relevant):
import urllib2 as urllib
from urllib import urlencode
from os.path import join as joinPath
from traceback import print_tb
from os.path import isfile
from sys import exc_info

gh_url = 'http://ip/txsms.cgi'
APPLICATION_PATH = '/srv/sms_alert/'
ALERT_POINT_PATH = joinPath(APPLICATION_PATH, 'alert_contact')

try:
    evt.data = 'Some text'
    isInProduction = False
    if evt.prodState == 1000:
        isInProduction = True
    if isInProduction and isfile(ALERT_POINT_PATH):
        alertContactContent = None
        with open(ALERT_POINT_PATH, 'r') as alertContactFile:
            alertContactContent = alertContactFile.read()
        alertContactContent = alertContactContent.splitlines()
        if alertContactContent:
            evt.summary = '#[ SMS ALERT ]# {}'.format(evt.summary)
            for alertContactContentLine in alertContactContent:
                data = {'phonenr': alertContactContentLine, 'smstxt': evt.summary, 'Submit': 'SendSms'}
                data = urlencode(data)
                req = urllib.Request(gh_url, data)
                password_manager = urllib.HTTPPasswordMgrWithDefaultRealm()
                password_manager.add_password(None, gh_url, 'admin', 'password')
                auth_manager = urllib.HTTPBasicAuthHandler(password_manager)
                opener = urllib.build_opener(auth_manager)
                urllib.install_opener(opener)
                handler = urllib.urlopen(req)
    else:
        evt.summary = '#[ ERROR: SMS ALERT NO CONTACT ]# {}'.format(evt.summary)
except Exception as e:
    ex_type, ex, tb = exc_info()
    print('\n #[ERROR]# TRANSFORM: exception: {ex}\n'.format(ex=e))
    print('\n #[ERROR]# TRANSFORM: exception traceback: {trace}\n'.format(trace=print_tb(tb)))
So, to be more specific: I am trying to authenticate and then POST some data to http://ip/txsms.cgi. If I run the script directly from my CentOS machine (with the evt references removed, as they only exist inside Zenoss), it works and accomplishes what I want. But when I copy-paste the code into Zenoss, the data isn't POSTed.
Any ideas on how I can get this done?
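(A quick way to check whether the POST is failing silently - the transform's print output may never reach a log you can see - is to append any traceback to a file you can inspect afterwards. A debugging sketch around the urlopen call, with /tmp/sms_alert_debug.log as a hypothetical path:)
import traceback

try:
    handler = urllib.urlopen(req)
    with open('/tmp/sms_alert_debug.log', 'a') as log:
        log.write('POST to {} returned status {}\n'.format(gh_url, handler.getcode()))
except Exception:
    with open('/tmp/sms_alert_debug.log', 'a') as log:
        traceback.print_exc(file=log)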
I would like to send an email containing any raised exceptions that appear in the console of a Python script. For example, if a JSON web service I am consuming is down and logs an error message to my console, I would like to email that, in addition to any other errors logged to the console.
At the lowest level, here is an example of what I would encounter. I would like to email all errors associated with the script if it fails or returns exit code 1:
import json
import jsonpickle
import requests
import time
import datetime

f2 = open(r'C:\Users\Administrator\Desktop\DetailView.json', 'r')
data2 = jsonpickle.encode(jsonpickle.decode(f2.read()))
url2 = "HTTPERROR or ValueError prone URL that raises an exception"
headers2 = {'Content-type': 'text/plain', 'Accept': '/'}
r2 = requests.post(url2, data=data2, headers=headers2)
decoded2 = json.loads(r2.text)
Start = datetime.datetime.now()
ValueError: No JSON object could be decoded
You want to use the SMTPHandler offered in the logging.handlers module. Put a try/except block around the code you're concerned about, and every time an exception is caught the logger will send an email.
For example:
try:
    {my code}
except Exception as e:
    logger.error("My code threw an error", exc_info=True)
    {handle exception}
And you would have to add the SMTP handler to your logging config:
[handler_smtpHandler]
class=logging.handlers.SMTPHandler
level=WARN
formatter=simpleFormatter
args=('localhost', 'ALERT@myemail.com', ['recipient@email.com'], "ERROR SUBJECT")
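If you prefer configuring the handler in code rather than in a logging config file, here is a minimal equivalent sketch (mail host, addresses, and subject are placeholders):
import logging
import logging.handlers

logger = logging.getLogger(__name__)
smtp_handler = logging.handlers.SMTPHandler(
    mailhost='localhost',
    fromaddr='ALERT@myemail.com',
    toaddrs=['recipient@email.com'],
    subject='ERROR SUBJECT')
smtp_handler.setLevel(logging.WARN)
logger.addHandler(smtp_handler)

# Any logger.error(...) call that reaches this handler now sends an email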
I made a simple script for amusement that takes the latest comment from http://www.reddit.com/r/random/comments.json?limit=1 and speaks it through espeak. I ran into a problem, however. If Reddit fails to give me the JSON data, which it commonly does, the script stops and gives a traceback. This is a problem, as it stops the script. Is there any sort of way to retry getting the JSON if it fails to load? I am using requests, if that matters.
If you need it, here is the part of the code that gets the json data
url = 'http://www.reddit.com/r/random/comments.json?limit=1'
r = requests.get(url)
quote = r.text
body = json.loads(quote)['data']['children'][0]['data']['body']
subreddit = json.loads(quote)['data']['children'][0]['data']['subreddit']
For the vocabulary: the actual error you're having is an exception that has been thrown at some point in the program because of a detected runtime error, and the traceback is the report of the call stack that tells you where the exception was thrown.
Basically, what you want is an exception handler:
try:
    url = 'http://www.reddit.com/r/random/comments.json?limit=1'
    r = requests.get(url)
    quote = r.text
    body = json.loads(quote)['data']['children'][0]['data']['body']
    subreddit = json.loads(quote)['data']['children'][0]['data']['subreddit']
except Exception as err:
    print(err)
so that you jump over the part that depends on the request that failed. Have a look at this doc as well: HandlingExceptions - Python Wiki
As pss suggests, if you want to retry after the url failed to load:
done = False
while not done:
    try:
        url = 'http://www.reddit.com/r/random/comments.json?limit=1'
        r = requests.get(url)
        done = True
    except Exception as err:
        print(err)

quote = r.text
body = json.loads(quote)['data']['children'][0]['data']['body']
subreddit = json.loads(quote)['data']['children'][0]['data']['subreddit']
N.B.: That solution may not be optimal, since if you're offline or the URL always fails, it will loop forever. If you retry too fast and too often, Reddit may also ban you.
N.B. 2: I'm using the except ... as ... syntax for exception handling, which may not work with Python older than 2.6.
N.B. 3: You may also want to choose a class other than Exception for the exception handling, to be able to select what kind of error you want to handle. It mostly depends on your app design; given what you say, you might want to handle requests.exceptions.ConnectionError, but have a look at requests' docs to choose the right one.
Here's what you may want, but please think this through and adapt it to your use case:
import sys
import requests
import time
import json

def get_reddit_comments():
    retries = 5
    while retries != 0:
        try:
            url = 'http://www.reddit.com/r/random/comments.json?limit=1'
            r = requests.get(url)
            break  # if the request succeeded, we get out of the loop
        except requests.exceptions.ConnectionError as err:
            print("Warning: couldn't get the URL: {}".format(err))
            time.sleep(1)  # wait 1 second between two requests
            retries -= 1
    if retries == 0:  # if we've done 5 attempts, we fail loudly
        return None
    return r.text

def use_data(quote):
    if not quote:
        print("could not get URL, despite multiple attempts!")
        return False
    data = json.loads(quote)
    if 'error' in data.keys():
        print("could not get data from reddit: error code #{}".format(data['error']))
        return False
    body = data['data']['children'][0]['data']['body']
    subreddit = data['data']['children'][0]['data']['subreddit']
    # … do stuff with your data here
    return True

if __name__ == "__main__":
    quote = get_reddit_comments()
    if not use_data(quote):
        print("Fatal error: Couldn't handle data receipt from reddit.")
        sys.exit(1)
I hope this snippet helps you design your program correctly. And now that you've discovered exceptions, please remember that exceptions are for things that should stay exceptional. Whenever you handle one, ask yourself whether it covers something genuinely unexpected (like a webpage failing to load) or an expected condition (like a page loading fine but returning output you didn't expect - the 'error' key check above is the second kind, and a plain if handles it better than an exception).