Storing a variable from a try statement - python

I have a program that calls into an API every 60 seconds and stores the data. The program runs on a cellular modem that uses Python 2.6. What I'm trying to do is have the variables StartTimeConv and EndTimeConv from the try statement stored so that, if the try statement fails, the except statement can reference them. I've declared them outside the try statement, but that generated a "referenced before assignment" error. What I'm ultimately trying to accomplish is that, if there's a cell-signal issue or the API service isn't reachable, the start and stop times can still be referenced and the digital I/O triggers can still function.
def Client():
    threading.Timer(60, Client).start()
    # Request Session ID
    request = urllib2.Request(url)
    b64auth = base64.standard_b64encode("%s:%s" % (username, password))
    request.add_header("Authorization", "Basic %s" % b64auth)
    result = urllib2.urlopen(request)
    # Parse and store Session ID
    tree = ET.parse(result)
    xml_data = tree.getroot()
    sessionid = xml_data[1].text
    # Dispatch Event Request
    url1 = "SiteURL".format(sessionid)
    request1 = urllib2.Request(url1)
    result1 = urllib2.urlopen(request1)
    # Read and store sys time
    sys_time = time.localtime()
    # Convert sys time to datetime object
    dt = datetime.fromtimestamp(mktime(sys_time))
    # Parse and store Dispatch Event, start and stop time
    try:
        tree1 = ET.parse(result1)
        xml_data1 = tree1.getroot()
        dispatchEvent = xml_data1[0][0][2].text
        EventStartTime = xml_data1[0][0][14].text
        EventEndTime = xml_data1[0][0][1].text
        # Convert string time to datetime object
        StartTimeConv = datetime.strptime(xml_data1[0][0][14].text, "%a %B %d, %Y %H:%M")
        EndTimeConv = datetime.strptime(xml_data1[0][0][1].text, "%a %B %d, %Y %H:%M")
        print(dispatchEvent)
        print(StartTimeConv)
        print(EndTimeConv)
        print(dt)
    except:
        print("No Event")
        pass
    else:
        if dispatchEvent is not None and dt >= StartTimeConv:
            set_digital_io('D0', 'on')
        elif dispatchEvent is not None and dt <= EndTimeConv:
            set_digital_io('D0', 'off')
        else:
            set_digital_io('D0', 'off')
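A common way to get that behaviour (a minimal sketch, not a drop-in replacement for the code above) is to keep the last successful values at module level and declare them global inside the function. Assigning to a name inside a function otherwise makes it local, which is what produces the "referenced before assignment" error. Only overwrite the stored values when the try block succeeds:

from datetime import datetime

# Last known values, initialised once at module level
StartTimeConv = None
EndTimeConv = None

def Client():
    global StartTimeConv, EndTimeConv  # reuse the stored values across calls
    try:
        # ... request and parse the API response as in the code above ...
        StartTimeConv = datetime.strptime(EventStartTime, "%a %B %d, %Y %H:%M")
        EndTimeConv = datetime.strptime(EventEndTime, "%a %B %d, %Y %H:%M")
    except Exception:
        # API unreachable: StartTimeConv/EndTimeConv still hold the values
        # from the last successful poll (or None before the first success)
        print("No Event")
    if StartTimeConv is not None and EndTimeConv is not None:
        pass  # the digital I/O triggers can keep working from the stored times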

Related

startswith first arg must be bytes or a tuple of bytes, not str: 'Python for everybody' Coursera

I am completing the 'Python for everybody' course on Coursera. I am stuck on the 'Mailing List Data - Part I' assignment.
I have the following code below:
import sys
import sqlite3
import time
import ssl
from urllib import request
from urllib.parse import urljoin
from urllib.parse import urlparse
import re
from datetime import datetime, timedelta

# Not all systems have this so conditionally define parser
try:
    import dateutil.parser as parser
except:
    pass

def parsemaildate(md):
    # See if we have dateutil
    try:
        pdate = parser.parse(tdate)
        test_at = pdate.isoformat()
        return test_at
    except:
        pass
    # Non-dateutil version - we try our best
    pieces = md.split()
    notz = " ".join(pieces[:4]).strip()
    # Try a bunch of format variations - strptime() is *lame*
    dnotz = None
    for form in ['%d %b %Y %H:%M:%S', '%d %b %Y %H:%M:%S',
                 '%d %b %Y %H:%M', '%d %b %Y %H:%M', '%d %b %y %H:%M:%S',
                 '%d %b %y %H:%M:%S', '%d %b %y %H:%M', '%d %b %y %H:%M']:
        try:
            dnotz = datetime.strptime(notz, form)
            break
        except:
            continue
    if dnotz is None:
        # print 'Bad Date:',md
        return None
    iso = dnotz.isoformat()
    tz = "+0000"
    try:
        tz = pieces[4]
        ival = int(tz)  # Only want numeric timezone values
        if tz == '-0000': tz = '+0000'
        tzh = tz[:3]
        tzm = tz[3:]
        tz = tzh + ":" + tzm
    except:
        pass
    return iso + tz

conn = sqlite3.connect('emreyavuzher.sqlite')
cur = conn.cursor()
conn.text_factory = str
baseurl = "http://mbox.dr-chuck.net/sakai.devel/"

cur.execute('''CREATE TABLE IF NOT EXISTS Messages
    (id INTEGER UNIQUE, email TEXT, sent_at TEXT,
     subject TEXT, headers TEXT, body TEXT)''')

start = 0
cur.execute('SELECT max(id) FROM Messages')
try:
    row = cur.fetchone()
    if row[0] is not None:
        start = row[0]
except:
    start = 0
    row = None
print(start)

many = 0
# Skip up to five messages
skip = 5
while True:
    if (many < 1):
        sval = input('How many messages:')
        if (len(sval) < 1): break
        many = int(sval)
    start = start + 1
    cur.execute('SELECT id FROM Messages WHERE id=?', (start,))
    try:
        row = cur.fetchone()
        if row is not None: continue
    except:
        row = None
    many = many - 1
    url = baseurl + str(start) + '/' + str(start + 1)
    try:
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        document = request.urlopen(url)
        text = document.read()
        if document.getcode() != 200:
            print("Error code=", document.getcode(), url)
            break
    except KeyboardInterrupt:
        print('')
        print('Program interrupted by user...')
        break
    except:
        print("Unable to retrieve or parse page", url)
        print(sys.exc_info()[0])
        break
    print(url, len(text))
    if not text.startswith('From '):
        if skip < 1:
            print(text)
            print("End of mail stream reached...")
            quit()
        print("Skipping badly formed message")
        skip = skip - 1
        continue
However, the code keeps giving me the error:

Traceback (most recent call last):
  File "", line 128, in
TypeError: startswith first arg must be bytes or a tuple of bytes, not str
Would anybody be able to give me a helping hand?
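In Python 3, urllib's read() returns bytes, so text here is a bytes object and startswith() expects a bytes prefix; comparing it against the str 'From ' raises exactly this TypeError. A minimal fix (a sketch; either option works) is to decode the body once, or compare against a bytes literal:

text = document.read().decode()      # bytes -> str, so str comparisons work
if not text.startswith('From '):
    ...

# or keep text as bytes and use a bytes prefix:
text = document.read()
if not text.startswith(b'From '):
    ...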

Python script stops working when I put it inside a function

Little bit of background: I'm using Python 2.7.12 on a Windows 10 computer.
This is by far one of the oddest problems I have ever encountered with Python.
I have written a script that makes a GET request to an API, with the correct headers, and gets some XML data back. For the record, when I paste the script like this in a python file and run it via CMD, it works perfectly fine.
But..
It stops working as soon as I wrap this inside a function. Nothing else, just wrap it inside a function, and use

if __name__ == '__main__':
    my_new_function()

to run it from CMD, and it won't work anymore. It still runs, but the API says I have the wrong auth credentials, and thus I don't get any data back.
I went over every string in this code, and they are all ASCII encoded. I also checked the timestamps, and they are all correct.
This is my script:
# imports assumed by the snippet (not shown in the original)
import time
import hmac
import hashlib
import base64
import requests

SECRET_KEY = 'YYY'
PUBLIC_KEY = 'XXX'

content_type = 'application/xml'
date = time.strftime('%a, %d %b %Y %H:%M:%S GMT', time.gmtime())
method = 'GET'
uri = '/uri'

msg = """{method}
{content_type}
{date}
x-bol-date:{date}
{uri}""".format(content_type=content_type,
                date=date,
                method=method,
                uri=uri)

h = hmac.new(SECRET_KEY, msg, hashlib.sha256)
b64 = base64.b64encode(h.digest())
signature = PUBLIC_KEY + b':' + b64

headers = {'Content-Type': content_type,
           'X-BOL-Date': date,
           'X-BOL-Authorization': signature}

r = requests.get('example.com/uri', headers=headers)
The same code inside a function:
def get_orders():
    SECRET_KEY = 'XXX'
    PUBLIC_KEY = 'YYY'
    content_type = 'application/xml'
    date = time.strftime('%a, %d %b %Y %H:%M:%S GMT', time.gmtime())
    method = 'GET'
    uri = '/uri'
    msg = """{method}
    {content_type}
    {date}
    x-bol-date:{date}
    {uri}""".format(content_type=content_type,
                    date=date,
                    method=method,
                    uri=uri)
    h = hmac.new(SECRET_KEY, msg, hashlib.sha256)
    b64 = base64.b64encode(h.digest())
    signature = PUBLIC_KEY + b':' + b64
    headers = {'Content-Type': content_type,
               'X-BOL-Date': date,
               'X-BOL-Authorization': signature}
    r = requests.get('example.com/uri', headers=headers)

if __name__ == '__main__':
    get_orders()
I think your multi-line string is getting spaces in it when you indent it in a function. Concatenate it on each line instead and it should work.
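Following that suggestion, a minimal sketch of building msg without picking up the function's indentation (joining explicit lines instead of a triple-quoted string):

# each line is its own string, so no leading spaces sneak into the signature
msg = "\n".join([
    method,
    content_type,
    date,
    "x-bol-date:" + date,
    uri,
])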

Bloomberg Api for Python: Parts of result missing in response

I'm using the Bloomberg API for Python to get option data. First, I got all the symbols of the option chain; then I used them to get the bid and ask prices. Through the getOptionChain function there are more than 400 options, and I checked that result: it was fine. However, when I run the getPX function, I get only 10 results in the end. Could anyone help me look into this? Thanks in advance!
import blpapi
import pandas
import csv

options = blpapi.SessionOptions()
options.setServerHost('localhost')
options.setServerPort(8194)

SECURITY_DATA = blpapi.Name("securityData")
SECURITY = blpapi.Name("security")
FIELD_DATA = blpapi.Name("fieldData")
FIELD_ID = blpapi.Name("fieldId")
OPT_CHAIN = blpapi.Name("OPT_CHAIN")
SECURITY_DES = blpapi.Name("Security Description")

def getOptionChain(sec_list):
    session = blpapi.Session(options)
    session.start()
    session.openService('//blp/refdata')
    refDataService = session.getService("//blp/refdata")
    request = refDataService.createRequest("ReferenceDataRequest")
    for s in sec_list:
        request.append("securities", s)
    request.append("fields", "OPT_CHAIN")
    cid = session.sendRequest(request)
    try:
        # Process received events
        while(True):
            # We provide timeout to give the chance to Ctrl+C handling:
            ev = session.nextEvent(500)
            response = []
            for msg in ev:
                if cid in msg.correlationIds():
                    securityDataArray = msg.getElement(SECURITY_DATA)
                    for securityData in securityDataArray.values():
                        fieldData = securityData.getElement(FIELD_DATA)
                        for field in fieldData.elements():
                            for n in range(field.numValues()):
                                fld = field.getValueAsElement(n)
                                response.append(fld.getElement(SECURITY_DES).getValueAsString())
            # Response completely received, so we could exit
            if ev.eventType() == blpapi.Event.RESPONSE:
                break
    finally:
        # Stop the session
        session.stop()
    return response

def getPX(sec_list, fld_list):
    opt_chain_list = getOptionChain(sec_list)
    session = blpapi.Session(options)
    session.start()
    session.openService('//blp/refdata')
    refDataService = session.getService("//blp/refdata")
    request = refDataService.createRequest("ReferenceDataRequest")
    for s in opt_chain_list:
        request.append("securities", s)
    for f in fld_list:
        request.append("fields", f)
    cid = session.sendRequest(request)
    try:
        # Process received events
        while(True):
            # We provide timeout to give the chance to Ctrl+C handling:
            ev = session.nextEvent(500)
            response = {}
            for msg in ev:
                if cid in msg.correlationIds():
                    securityDataArray = msg.getElement(SECURITY_DATA)
                    for securityData in securityDataArray.values():
                        secName = securityData.getElementAsString(SECURITY)
                        fieldData = securityData.getElement(FIELD_DATA)
                        response[secName] = {}
                        for field in fieldData.elements():
                            response[secName][field.name()] = field.getValueAsFloat()
            # Response completely received, so we could exit
            if ev.eventType() == blpapi.Event.RESPONSE:
                break
    finally:
        # Stop the session
        session.stop()
    tempdict = {}
    for r in response:
        tempdict[r] = pandas.Series(response[r])
    data = pandas.DataFrame(tempdict)
    return data

sec = ["IBM US Equity"]
fld = ["PX_ASK", "PX_BID"]
getPX(sec, fld)
It looks like you've got the response = {} in the wrong place.
Currently you're clearing it at each iteration of your loop, so each incoming event refills it.
If you shift response = {} to just before while(True):, each iteration will append to it rather than clearing and refilling.
The same is true of the first function, but there the bulk data comes back in a single event. If you were using multiple securities you would see the same issue (a single Bloomberg refdata (partial) response contains data for at most 10 securities).
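Applied to getPX, the fix is just moving the initialisation ahead of the event loop (a sketch of the relevant lines only):

response = {}  # initialise once, before the event loop
while True:
    ev = session.nextEvent(500)
    for msg in ev:
        if cid in msg.correlationIds():
            # ... accumulate into response exactly as before ...
            pass
    if ev.eventType() == blpapi.Event.RESPONSE:
        break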

Searching for keywords with pycurl: Python is stuck at the shell, returning nothing

I am trying to get tweets related to the keywords in the code, but at the Python shell there is nothing, just the cursor: no traceback, nothing. The code is here:
import time
import pycurl
import urllib
import json
import oauth2 as oauth

API_ENDPOINT_URL = 'https://stream.twitter.com/1.1/statuses/filter.json'
USER_AGENT = 'TwitterStream 1.0'  # This can be anything really

# You need to replace these with your own values
OAUTH_KEYS = {'consumer_key': 'ABC',
              'consumer_secret': 'ABC',
              'access_token_key': 'ABC',
              'access_token_secret': 'ABC'}

# These values are posted when setting up the connection
POST_PARAMS = {'include_entities': 0,
               'stall_warning': 'true',
               'track': 'iphone,ipad,ipod'}

class TwitterStream:
    def __init__(self, timeout=False):
        self.oauth_token = oauth.Token(key=OAUTH_KEYS['access_token_key'], secret=OAUTH_KEYS['access_token_secret'])
        self.oauth_consumer = oauth.Consumer(key=OAUTH_KEYS['consumer_key'], secret=OAUTH_KEYS['consumer_secret'])
        self.conn = None
        self.buffer = ''
        self.timeout = timeout
        self.setup_connection()

    def setup_connection(self):
        """ Create persistent HTTP connection to Streaming API endpoint using cURL.
        """
        if self.conn:
            self.conn.close()
            self.buffer = ''
        self.conn = pycurl.Curl()
        # Restart connection if less than 1 byte/s is received during "timeout" seconds
        if isinstance(self.timeout, int):
            self.conn.setopt(pycurl.LOW_SPEED_LIMIT, 1)
            self.conn.setopt(pycurl.LOW_SPEED_TIME, self.timeout)
        self.conn.setopt(pycurl.URL, API_ENDPOINT_URL)
        self.conn.setopt(pycurl.USERAGENT, USER_AGENT)
        # Using gzip is optional but saves us bandwidth.
        self.conn.setopt(pycurl.ENCODING, 'deflate, gzip')
        self.conn.setopt(pycurl.POST, 1)
        self.conn.setopt(pycurl.POSTFIELDS, urllib.urlencode(POST_PARAMS))
        self.conn.setopt(pycurl.HTTPHEADER, ['Host: stream.twitter.com',
                                             'Authorization: %s' % self.get_oauth_header()])
        # self.handle_tweet is the method that is called when new tweets arrive
        self.conn.setopt(pycurl.WRITEFUNCTION, self.handle_tweet)

    def get_oauth_header(self):
        """ Create and return OAuth header.
        """
        params = {'oauth_version': '1.0',
                  'oauth_nonce': oauth.generate_nonce(),
                  'oauth_timestamp': int(time.time())}
        req = oauth.Request(method='POST', parameters=params,
                            url='%s?%s' % (API_ENDPOINT_URL, urllib.urlencode(POST_PARAMS)))
        req.sign_request(oauth.SignatureMethod_HMAC_SHA1(), self.oauth_consumer, self.oauth_token)
        return req.to_header()['Authorization'].encode('utf-8')

    def start(self):
        """ Start listening to Streaming endpoint.
        Handle exceptions according to Twitter's recommendations.
        """
        backoff_network_error = 0.25
        backoff_http_error = 5
        backoff_rate_limit = 60
        while True:
            self.setup_connection()
            try:
                self.conn.perform()
            except:
                # Network error, use linear back off up to 16 seconds
                print 'Network error: %s' % self.conn.errstr()
                print 'Waiting %s seconds before trying again' % backoff_network_error
                time.sleep(backoff_network_error)
                backoff_network_error = min(backoff_network_error + 1, 16)
                continue
            # HTTP Error
            sc = self.conn.getinfo(pycurl.HTTP_CODE)
            if sc == 420:
                # Rate limit, use exponential back off starting with 1 minute and double each attempt
                print 'Rate limit, waiting %s seconds' % backoff_rate_limit
                time.sleep(backoff_rate_limit)
                backoff_rate_limit *= 2
            else:
                # HTTP error, use exponential back off up to 320 seconds
                print 'HTTP error %s, %s' % (sc, self.conn.errstr())
                print 'Waiting %s seconds' % backoff_http_error
                time.sleep(backoff_http_error)
                backoff_http_error = min(backoff_http_error * 2, 320)

    def handle_tweet(self, data):
        """ This method is called when data is received through Streaming endpoint.
        """
        self.buffer += data
        if data.endswith('\r\n') and self.buffer.strip():
            # complete message received
            message = json.loads(self.buffer)
            self.buffer = ''
            msg = ''
            if message.get('limit'):
                print 'Rate limiting caused us to miss %s tweets' % (message['limit'].get('track'))
            elif message.get('disconnect'):
                raise Exception('Got disconnect: %s' % message['disconnect'].get('reason'))
            elif message.get('warning'):
                print 'Got warning: %s' % message['warning'].get('message')
            else:
                print 'Got tweet with text: %s' % message.get('text')

if __name__ == '__main__':
    ts = TwitterStream()
    ts.setup_connection()
    ts.start()
Please help me resolve the issue with this code.
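Not an answer by itself, but since nothing at all is printed, a first debugging step (a sketch; pycurl.VERBOSE makes libcurl log the request/response exchange to stderr) is to turn on verbose output in setup_connection and watch whether the connection and the OAuth handshake actually succeed:

# inside setup_connection(), right after self.conn = pycurl.Curl():
self.conn.setopt(pycurl.VERBOSE, 1)  # log connection, headers and HTTP status to stderr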

Python, Catch timeout during stream request

I'm reading XML events with the requests library as shown in the code below. How do I raise a connection-lost error once the request has started? The server is emulating HTTP push / long polling -> http://en.wikipedia.org/wiki/Push_technology#Long_polling and will not end by default.
If there is no new message after 10 minutes, the while loop should be exited.
import requests
from time import time

if __name__ == '__main__':
    #: Set a default content-length
    content_length = 512
    try:
        requests_stream = requests.get('http://agent.mtconnect.org:80/sample?interval=0', stream=True, timeout=2)
        while True:
            start_time = time()
            #: Read three lines to determine the content-length
            for line in requests_stream.iter_lines(3, decode_unicode=None):
                if line.startswith('Content-length'):
                    content_length = int(''.join(x for x in line if x.isdigit()))
                    #: pause the generator
                    break
            #: Continue the generator and read the exact amount of the body.
            for xml in requests_stream.iter_content(content_length):
                print "Received XML document with content length of %s in %s seconds" % (len(xml), time() - start_time)
                break
    except requests.exceptions.RequestException as e:
        print('error: ', e)
The server push can be tested with curl on the command line:
curl http://agent.mtconnect.org:80/sample\?interval\=0
This might not be the best method, but you can use multiprocessing to run the requests in a separate process.
Something like this should work:
import multiprocessing
import requests
import time

class RequestClient(multiprocessing.Process):
    def run(self):
        # Write all your code to process the requests here
        content_length = 512
        try:
            requests_stream = requests.get('http://agent.mtconnect.org:80/sample?interval=0', stream=True, timeout=2)
            start_time = time.time()
            for line in requests_stream.iter_lines(3, decode_unicode=None):
                if line.startswith('Content-length'):
                    content_length = int(''.join(x for x in line if x.isdigit()))
                    break
            for xml in requests_stream.iter_content(content_length):
                print "Received XML document with content length of %s in %s seconds" % (len(xml), time.time() - start_time)
                break
        except requests.exceptions.RequestException as e:
            print('error: ', e)

while True:
    childProcess = RequestClient()
    childProcess.start()
    # Wait for 10 mins
    start_time = time.time()
    while time.time() - start_time <= 600:
        # Check if the process is still active
        if not childProcess.is_alive():
            # Request completed
            break
        time.sleep(5)  # Give the system some breathing time
    # Check if the process is still active after 10 mins.
    if childProcess.is_alive():
        # Shutdown the process
        childProcess.terminate()
        raise RuntimeError("Connection Timed-out")
Not the perfect code for your problem, but you get the idea.
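A lighter-weight alternative (a sketch, relying on the documented behaviour that requests' read timeout applies between bytes received from the server, not to the whole download): pass a 600-second read timeout, and the blocked read raises as soon as the stream has been silent that long.

import requests

try:
    # (connect timeout, read timeout): raises if no bytes arrive for 600 s
    requests_stream = requests.get('http://agent.mtconnect.org:80/sample?interval=0',
                                   stream=True, timeout=(2, 600))
    for line in requests_stream.iter_lines():
        pass  # process the stream as before
except requests.exceptions.RequestException as e:
    # ConnectionError/ReadTimeout both subclass RequestException
    print('connection lost:', e)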
