Call a range of dates from an API using Python

Currently writing a program using an API from MarketStack.com. This is for school, so I am still learning.
I am writing a stock prediction program using Python on PyCharm and I have written the connection between the program and the API without issues. So, I can certainly get the High, Name, Symbols, etc. What I am trying to do now is call a range of dates. The API says I can call up to 30 years of historical data, so I want to call all 30 years for a date that is entered by the user. Then the program will average the high on that date in order to give a trend prediction.
So, the problem I am having is calling more than one date. As I said, I want to call that date across all 30 years, and then I will do the math, etc.
Can someone help me call a range of dates? I tried installing Pandas, but it wasn't being accepted by PyCharm for some reason. Any help is greatly appreciated.
import tkinter as tk
import requests

# callouts for window size
HEIGHT = 650
WIDTH = 600

# function for response
def format_response(selected_stock):
    try:
        name1 = selected_stock['data']['name']
        symbol1 = selected_stock['data']['symbol']
        high1 = selected_stock['data']['eod'][1]['high']
        final_str = 'Name: %s \nSymbol: %s \nEnd of Day ($ USD): %s' % (name1, symbol1, high1)
    except:
        final_str = 'There was a problem retrieving that information'
    return final_str

# function linking to API
def stock_data(entry):
    params = {'access_key': 'xxx'}
    response = requests.get('http://api.marketstack.com/v1/tickers/' + entry + '/' + 'eod', params=params)
    selected_stock = response.json()
    label2['text'] = format_response(selected_stock)

# function for response
def format_response2(stock_hist_data):
    try:
        name = stock_hist_data['data']['name']
        symbol = stock_hist_data['data']['symbol']
        high = stock_hist_data['data']['eod'][1]['high']
        name3 = stock_hist_data['data']['name']
        symbol3 = stock_hist_data['data']['symbol']
        high3 = stock_hist_data['data']['eod'][1]['high']
        final_str2 = 'Name: %s \nSymbol: %s \nEnd of Day ($ USD): %s' % (name, symbol, high)
        final_str3 = '\nName: %s \nSymbol: %s \nEnd of Day ($ USD): %s' % (name3, symbol3, high3)
    except:
        final_str2 = 'There was a problem retrieving that information'
        final_str3 = 'There was a problem retrieving that information'
    return final_str2 + final_str3

# function for response in lower window
def stock_hist_data(entry2):
    params2 = {'access_key': 'xxx'}
    response2 = requests.get('http://api.marketstack.com/v1/tickers/' + entry2 + '/' + 'eod', params=params2)
    hist_data = response2.json()
    label4['text'] = format_response2(hist_data)
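
For reference, a minimal sketch of one way to request a span of dates, assuming the tickers/{symbol}/eod endpoint accepts the date_from and date_to filters described in the MarketStack documentation (the access key, symbol, and dates below are placeholders; verify the parameter names against the docs for your plan):

import requests

def fetch_eod_range(symbol, date_from, date_to, access_key):
    # date_from/date_to are MarketStack's documented date filters.
    params = {
        'access_key': access_key,
        'date_from': date_from,  # e.g. '1994-06-01'
        'date_to': date_to,      # e.g. '2024-06-01'
        'limit': 1000,           # rows per page; paginate with 'offset' if needed
    }
    response = requests.get('http://api.marketstack.com/v1/tickers/' + symbol + '/eod', params=params)
    response.raise_for_status()
    return response.json()['data']['eod']

# Average the 'high' over every returned trading day; to average one calendar
# date across the 30 years, filter each day's 'date' field here first.
eod = fetch_eod_range('AAPL', '1994-06-01', '2024-06-01', 'xxx')
highs = [day['high'] for day in eod]
print('Average high: %.2f' % (sum(highs) / len(highs)))

Once the full range is back, averaging the highs is plain Python, so Pandas is not strictly required for the prediction step.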

Related

How to allow users to enter number as a parameter in an API url call

I just started learning Python today, so please take it easy. I have created a method that allows someone to enter a number, and it will deliver the park name and weather for that location from MLB and Weather.gov. I have them hard-coded for a couple of test cases to make sure it works. I want the user to be able to input the venue number so that I can display the information for the proper location. I have searched around quite a bit, but I can't seem to find exactly what I'm looking for. For example, in the following URL: https://statsapi.mlb.com/api/v1/venues/3289?hydrate=location, I want the user to pick the number that goes where 3289 is. Once I am able to do this, I should be able to take the latitude and longitude from the API call and use that in the weather API call. I'm just stuck right now. I am using the command line, so all I need is for someone to be able to input mlbweather(xxx) and hit return. I tried using params, but that just seems to append to the end of the URL with a ? and equals, so that doesn't work.
def mlbweather(venueNum):
    citi = 3289
    wrigley = 17
    if venueNum == citi:
        mlb_api = requests.get('https://statsapi.mlb.com/api/v1/venues/3289?hydrate=location')
        mlb_data = mlb_api.text
        parse_json = json.loads(mlb_data)
        venueWanted = parse_json['venues'][0]['name']
        print("Venue:" + " " + venueWanted)
        weather_api = requests.get('https://api.weather.gov/gridpoints/OKX/37,37/forecast')
        weather_data = weather_api.text
        parse_json = json.loads(weather_data)
        weatherWanted = parse_json['properties']['periods'][0]['detailedForecast']
        print("Current Weather: \n" + weatherWanted)
    elif venueNum == wrigley:
        mlb_api = requests.get('https://statsapi.mlb.com/api/v1/venues/17?hydrate=location')
        mlb_data = mlb_api.text
        parse_json = json.loads(mlb_data)
        venueWanted = parse_json['venues'][0]['name']
        print("Venue:" + " " + venueWanted)
        weather_api = requests.get('https://api.weather.gov/gridpoints/LOT/74,75/forecast')
        weather_data = weather_api.text
        parse_json = json.loads(weather_data)
        weatherWanted = parse_json['properties']['periods'][0]['detailedForecast']
        print("Current Weather: \n" + weatherWanted)
    else:
        print("Either you typed an invalid venue number or we don't have that info")
You're looking for simple string concatenation:
def mlbweather(venueNum):
    mlb_api = requests.get('https://statsapi.mlb.com/api/v1/venues/' + str(venueNum) + '?hydrate=location')
    mlb_data = mlb_api.text
    parse_json = json.loads(mlb_data)
    venueWanted = parse_json['venues'][0]['name']
    print("Venue:" + " " + venueWanted)
    weather_api = requests.get('https://api.weather.gov/gridpoints/OKX/37,37/forecast')
    weather_data = weather_api.text
    parse_json = json.loads(weather_data)
    weatherWanted = parse_json['properties']['periods'][0]['detailedForecast']
    print("Current Weather: \n" + weatherWanted)
mlbweather(3289)
Venue: Citi Field
Current Weather:
Patchy fog after 4am. Partly cloudy, with a low around 72. South wind 2 to 7 mph.
Alternatively, you can use f-strings:
mlb_api = requests.get(f'https://statsapi.mlb.com/api/v1/venues/{venueNum}?hydrate=location')
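For the follow-up goal in the question (feeding the venue's latitude and longitude into the weather call), a hedged sketch: weather.gov exposes a points endpoint that maps a lat/lon pair to the matching forecast URL, which avoids hard-coding a gridpoint per stadium. The coordinate key names in the MLB payload are an assumption here; print the venue's location object once to confirm what it actually contains.

import requests

def mlbweather(venueNum):
    mlb = requests.get(f'https://statsapi.mlb.com/api/v1/venues/{venueNum}?hydrate=location').json()
    venue = mlb['venues'][0]
    # ASSUMPTION: hydrate=location nests coordinates like this; verify by
    # printing venue['location'] for one venue before trusting the keys.
    coords = venue['location']['defaultCoordinates']
    lat, lon = coords['latitude'], coords['longitude']
    # The points endpoint returns, among other things, the forecast URL
    # for the grid cell containing this lat/lon.
    point = requests.get(f'https://api.weather.gov/points/{lat},{lon}').json()
    forecast = requests.get(point['properties']['forecast']).json()
    print('Venue: ' + venue['name'])
    print('Current Weather: \n' + forecast['properties']['periods'][0]['detailedForecast'])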

InvalidSchema("No connection adapters were found for '%s'" % url) - Have tried a lot

I understand that this topic already exists, but I cannot figure out what the issue is.
This is the code (I believe it is straightforward):
import requests
import config
import time

dataID = "111111"

# If the report is not generated yet, this will return 'None'
def get_data():
    dataChecking = None
    checking_url = 'https://api.example.com/v1.1/reports/{0}'.format(dataID)
    responseCheck = requests.get(checking_url,
                                 params=(('fields', 'generated_date'), ),
                                 auth=('' + config.authUsername + '',
                                       '' + config.authPassword + ''))
    report_url = responseCheck.json()['report']['generated_date']
    dataChecking = requests.get(report_url).content
    return dataChecking

def download_report(dataChecking):
    urlDownload = 'https://api.example.com/v1.1/reports/{0}'.format(dataID)
    responseDownload = requests.get(urlDownload,
                                    params=(('fields', 'download'), ),
                                    auth=('' + config.authUsername + '',
                                          '' + config.authPassword + ''))
    report_url = responseDownload.json()['report']['download']
    dataDownload = requests.get(report_url).content
    with open('' + config.fileDest + '\exportReport.json', 'w') as f:
        f.write(dataDownload)
    pass

# Check whether the report is generated
generatedData = get_data()

# Wait for the report to be generated
while generatedData == None:
    # Check again if report is generated
    print("Report is generating, Please wait")
    generatedData = get_data()
    # Wait 0.25 seconds between each check
    time.sleep(0.25)

# Report generated, now download it
download_report(dataChecking)
The error is:
raise InvalidSchema("No connection adapters were found for '%s'" % url)
InvalidSchema: No connection adapters were found for '2018-06-15T10:37:50'
I have tried to change the URL part, using different tutorials, with no success.
'2018-06-15T10:37:50' is the date when the report was generated. What I currently do is check whether that field is empty, keep checking until it is filled in (with a date, as in the example), and then run the download part.
Your error seems to be in these lines in the get_data function:
report_url = responseCheck.json()['report']['generated_date']
dataChecking = requests.get(report_url).content
The data structure implies that what you're storing in the variable report_url is actually a date, and then you're trying to retrieve that as a url, which throws the error. Figure out where the actual report url is stored and fetch that instead.
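A hedged sketch of that fix, keeping the question's field names: treat generated_date purely as a readiness flag, and leave the actual fetching to download_report, which already reads the URL from the download field.

def get_data():
    # Ask only whether the report exists yet; 'generated_date' is None
    # until the report is built, and it is never a URL to fetch.
    checking_url = 'https://api.example.com/v1.1/reports/{0}'.format(dataID)
    responseCheck = requests.get(checking_url,
                                 params={'fields': 'generated_date'},
                                 auth=(config.authUsername, config.authPassword))
    return responseCheck.json()['report']['generated_date']

With that change the polling loop can stay as it is, and once a non-None date comes back, download_report can run (note it should be passed generatedData, or take no argument at all, since dataChecking is not defined at module level).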

Bug in python code preventing successful recursion?

I have been working on a script to ingest a file (accounts.txt) which contains email addresses, each of which is then checked against an API to see if it appears in a data dump. The script appears to work; however, there is a bug whereby once it finds a positive hit, it disregards any other match...
For example;
If my "accounts.txt" file contains the following entries:
a#a.com
b#b.com
Even though both of those should return results, as soon as the script is run the match on a#a.com is found; however, b#b.com does not return anything.
I cannot seem to figure out why this is happening, ideally I want all of the hits outputted.
FYI, the script is querying 'haveibeenpwned' which is a site that locates email addresses found in credential dumps.
Any help finding my bug would be greatly appreciated. Below is my current script.
#!/usr/bin/env python
import argparse
import json
import requests
import time

breaches_by_date = {}
breaches_by_account = {}
breaches_by_name = {}

class Breach(object):
    def __init__(self, e, n, d):
        self.email = e
        self.name = n
        self.date = d
    def __repr__(self):
        return "%s: %s breached on %s" % (self.email, self.name, self.date)

def accCheck(acc):
    global breaches_by_date, breaches_by_account, breaches_by_name
    r = requests.get('https://haveibeenpwned.com/api/v2/breachedaccount/%s?truncateResponse=false' % acc)
    try:
        data = json.loads(r.text)
    except ValueError:
        print("No breach information for %s" % acc)
        return
    for i in data:
        name, date = (i['Name'], i['BreachDate'])
        breach = Breach(acc, name, date)
        try: breaches_by_account[acc].append(breach)
        except: breaches_by_account[acc] = [breach]
        try: breaches_by_name[name].append(breach)
        except: breaches_by_name[name] = [breach]
        try: breaches_by_date[date].append(breach)
        except: breaches_by_date[date] = [breach]

def readFromFile(fname="accounts.txt"):
    accounts = []
    with open(fname, "r+") as f:
        accounts = [l.strip() for l in f.readlines()]
    return accounts

if __name__ == '__main__':
    accounts = readFromFile()
    for email_addr in accounts:
        accCheck(email_addr)
    print
    print("Breaches by date")
    for date, breaches in breaches_by_date.items():
        for breach in breaches:
            print(breach)
    print
    print("Breaches by account")
    for acc, breaches in breaches_by_account.items():
        print(acc)
        for breach in breaches:
            print("%s breached on %s" % (breach.name, breach.date))
    print
    print("Breaches by name")
    for name, breaches in breaches_by_name.items():
        print("%s breached for the following accounts:" % name)
        for breach in breaches:
            print("%s on %s" % (breach.email, breach.date))
    print
I am not 100% sure where your problem comes from, but I would opt for code along these lines (is_email_blacklisted and do_something are placeholders for your lookup and output logic):
emails_to_check = open("/path/to/yourfile").read().split("\n")
for email in emails_to_check:
    if is_email_blacklisted(email):
        do_something()
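One cause consistent with "the first account works, the second returns nothing" is rate limiting rather than a parsing bug; this is an assumption to verify, not a confirmed diagnosis. The haveibeenpwned v2 API throttles rapid repeat requests, and a throttled response (HTTP 429) carries no JSON body, which this script silently reports as no breach information. A sketch that checks the status code, spaces requests out, and retries once:

import time
import requests

HIBP_URL = 'https://haveibeenpwned.com/api/v2/breachedaccount/%s?truncateResponse=false'

def fetch_breaches(acc):
    r = requests.get(HIBP_URL % acc)
    if r.status_code == 429:
        # Throttled: back off for the period the server suggests, then retry once.
        time.sleep(float(r.headers.get('Retry-After', 2)))
        r = requests.get(HIBP_URL % acc)
    if r.status_code != 200:
        return []  # e.g. 404 when the account appears in no breaches
    return r.json()

for email_addr in ['a@a.com', 'b@b.com']:
    print('%s: %d breaches' % (email_addr, len(fetch_breaches(email_addr))))
    time.sleep(2)  # space out requests to stay under the rate limit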

How to parse a single-column text file into a table using python?

I'm new here to StackOverflow, but I have found a LOT of answers on this site. I'm also a programming newbie, so I figured I'd join and finally become part of this community, starting with a question about a problem that's been plaguing me for hours.
I log in to a website and scrape a big body of text within the b tag, to be converted into a proper table. The layout of the resulting Output.txt looks like this:
BIN STATUS
8FHA9D8H 82HG9F RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
INVENTORY CODE: FPBC *SOUP CANS LENTILS
BIN STATUS
HA8DHW2H HD0138 RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8SHDNADU 00A123 #2956- INVALID STOCK COUPON CODE (MISSING).
93827548 096DBR RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
There are a bunch of pages with the exact same blocks, but I need them combined into an ACTUAL table that looks like this:
BIN INV CODE STATUS
HA8DHW2HHD0138 FPBC-*SOUP CANS LENTILS RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8SHDNADU00A123 FPBC-*SOUP CANS LENTILS #2956- INVALID STOCK COUPON CODE (MISSING).
93827548096DBR FPBC-*SOUP CANS LENTILS RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8FHA9D8H82HG9F SSXR-98-20LM NM CORN CREAM RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
Essentially, all the separate text blocks in this example would become part of this table, with the inv code repeated alongside its BIN values. I would post my attempts at parsing this data (I have tried Pandas/bs/openpyxl/csv writer), but I'll admit they are a little embarrassing, as I cannot find any information on this specific problem. Is there any benevolent soul out there that can help me out? :)
(Also, I am using Python 2.7)
A simple custom parser like the following should do the trick.
from __future__ import print_function

def parse_body(s):
    line_sep = '\n'
    getting_bins = False
    inv_code = ''
    for l in s.split(line_sep):
        if l.startswith('INVENTORY CODE:') and not getting_bins:
            inv_data = l.split()
            inv_code = inv_data[2] + '-' + ' '.join(inv_data[3:])
        elif l.startswith('INVENTORY CODE:') and getting_bins:
            print("unexpected inventory code while reading bins:", l)
        elif l.startswith('BIN') and l.endswith('MESSAGE'):
            getting_bins = True
        elif getting_bins == True and l:
            bin_data = l.split()
            # need to add exception handling here to make sure:
            # 1) we have an inv_code
            # 2) bin_data is at least 3 items big (assuming two for
            #    bin_id and at least one for message)
            # 3) maybe some constraint checking to ensure that we have
            #    a valid instance of an inventory code and bin id
            bin_id = ''.join(bin_data[0:2])
            message = ' '.join(bin_data[2:])
            # we now have a bin, an inv_code, and a message to add to our table
            print(bin_id.ljust(20), inv_code.ljust(30), message, sep='\t')
        elif getting_bins == True and not l:
            # done getting bins for current inventory code
            getting_bins = False
            inv_code = ''
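A quick usage check against the question's sample, with two hedged adjustments: the parser expects each INVENTORY CODE line to come before its bins, and it keys on a header line ending in MESSAGE rather than the STATUS shown above, so adjust the endswith('MESSAGE') test (or your input's header) to whichever the export actually uses.

sample = """INVENTORY CODE: FPBC *SOUP CANS LENTILS
BIN MESSAGE
HA8DHW2H HD0138 RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
8SHDNADU 00A123 #2956- INVALID STOCK COUPON CODE (MISSING).
"""
parse_body(sample)
# HA8DHW2HHD0138    FPBC-*SOUP CANS LENTILS    RECEIVED SUCCESSFULLY AWAITING STOCKING PROCESS
# 8SHDNADU00A123    FPBC-*SOUP CANS LENTILS    #2956- INVALID STOCK COUPON CODE (MISSING).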
A rather complex one, but this might get you started:
import re, pandas as pd
from pandas import DataFrame

rx = re.compile(r'''
    (?:INVENTORY\ CODE:)\s*
    (?P<inv>.+\S)
    [\s\S]+?
    ^BIN.+[\n\r]
    (?P<bin_msg>(?:(?!^\ ).+[\n\r])+)
    ''', re.MULTILINE | re.VERBOSE)

string = your_string_here

# set up the dataframe
df = DataFrame(columns = ['BIN', 'INV', 'MESSAGE'])

for match in rx.finditer(string):
    inv = match.group('inv')
    bin_msg_raw = match.group('bin_msg').split("\n")
    rxbinmsg = re.compile(r'^(?P<bin>(?:(?!\ {2}).)+)\s+(?P<message>.+\S)\s*$', re.MULTILINE)
    for item in bin_msg_raw:
        for m in rxbinmsg.finditer(item):
            # append it to the dataframe
            df.loc[len(df.index)] = [m.group('bin'), inv, m.group('message')]
print(df)
Explanation
The pattern looks for INVENTORY CODE and captures two groups (inv and bin_msg) for further processing (note: it would be easier if you had only one line of bin/msg, since the bin_msg group has to be split afterwards).
The loop then splits each bin/msg line apart and appends every row to the df object.
I had code written for a website-scraping task which may help you.
Basically, what you need to do is right-click on the web page, view the HTML, and find the tag for the table you are looking for; then extract the information using a parsing module (I am using BeautifulSoup). I am creating a JSON document because I need to store it in MongoDB; you can build a table instead.
#! /usr/bin/python
import sys
import requests
import re
from BeautifulSoup import BeautifulSoup
import pymongo

def req_and_parsing():
    url2 = 'http://businfo.dimts.in/businfo/Bus_info/EtaByRoute.aspx?ID='
    list1 = ['534UP','534DOWN']
    for Route in list1:
        final_url = url2 + Route
        #r = requests.get(final_url)
        #parsing_file(r.text,Route)
    outdict = []
    outdict = [parsing_file(requests.get(url2+Route).text, Route) for Route in list1]
    print outdict
    conn = f_connection()
    for i in range(len(outdict)):
        insert_records(conn, outdict[i])

def parsing_file(txt, Route):
    soup = BeautifulSoup(txt)
    table = soup.findAll("table", {"id": "ctl00_ContentPlaceHolder1_GridView2"})
    #trtags = table[0].findAll('tr')
    tdlist = []
    trtddict = {}
    """
    for trtag in trtags:
        print 'print trtag- ', trtag.text
        tdtags = trtag.findAll('td')
        for tdtag in tdtags:
            print tdtag.text
    """
    divtags = soup.findAll("span", {"id": "ctl00_ContentPlaceHolder1_ErrorLabel"})
    for divtag in divtags:
        print "div tag - ", divtag.text
        # a bare `or "some string"` is always truthy, so compare against
        # each full message explicitly
        if divtag.text in ("Currently no bus is running on this route",
                           "This is not a cluster (orange bus) route"):
            print "Page not displayed, errored with the below message for Route-", Route, " , ", divtag.text
            sys.exit()
    trtags = table[0].findAll('tr')
    for trtag in trtags:
        tdtags = trtag.findAll('td')
        if len(tdtags) == 2:
            trtddict[tdtags[0].text] = sub_colon(tdtags[1].text)
    return trtddict

def sub_colon(tag_str):
    return re.sub(';', ',', tag_str)

def f_connection():
    try:
        conn = pymongo.MongoClient()
        print "Connected successfully!!!"
    except pymongo.errors.ConnectionFailure, e:
        print "Could not connect to MongoDB: %s" % e
    return conn

def insert_records(conn, stop_dict):
    db = conn.test
    print db.collection_names()
    mycoll = db.stopsETA
    mycoll.insert(stop_dict)

if __name__ == "__main__":
    req_and_parsing()

<type 'exceptions.IOError'> [Errno 9] Bad file descriptor

The code below is a part of a program which is aimed to capture data from Bloomberg terminal and dump it into SQLite database. It worked pretty well on my 32-bit windows XP. But it keeps giving me
"get_history.histfetch error: [Errno 9] Bad file descriptor" on 64-bit windows 7, although there shouldn't be a problem using 32-bit python under 64-bit OS. Sometimes this problem can be solved by simply exit the program and open it again, but sometimes it just won't work. Right now I'm really confused about what leads to this problem. I looked at the source code and found the problem is generated while calling "histfetch" and I have NO idea which part of the code is failing. Can anyone help me out here...? I really really appreciate it. Thanks in advance.
def run(self):
    try: pythoncom.CoInitializeEx(pythoncom.COINIT_APARTMENTTHREADED)
    except: pass
    while 1:
        if self.trigger:
            try: self.histfetch()
            except Exception,e:
                logging.error('get_history.histfetch error: %s %s' % (str(type(e)),str(e)))
                if self.errornotify != None:
                    self.errornotify('get_history error','%s %s' % ( str(type(e)), str(e) ) )
            self.trigger = 0
        if self.telomere: break
        time.sleep(0.5)

def histfetch(self):
    blpcon = win32com.client.gencache.EnsureDispatch('blpapicom.Session')
    blpcon.Start()
    dbcon = sqlite3.connect(self.dbfile)
    c = dbcon.cursor()
    fieldcodes = {}
    symcodes = {}
    trysleep(c,'select fid,field from fields')
    for fid,field in c.fetchall():
        # these are different types so this will be ok
        fieldcodes[fid] = field
        fieldcodes[field] = fid
    trysleep(c,'select sid,symbol from symbols')
    for sid,symbol in c.fetchall():
        symcodes[sid] = symbol
        symcodes[symbol] = sid
    for instr in self.instructions:
        if instr[1] != 'minute': continue
        sym,rollspec = instr[0],instr[2]
        print 'MINUTE',sym
        limits = []
        sid = getsid(sym,symcodes,dbcon,c)
        trysleep(c,'select min(epoch),max(epoch) from minute where sid=?',(sid,))
        try: mine,maxe = c.fetchone()
        except: mine,maxe = None,None
        print sym,'minute data limits',mine,maxe
        rr = getreqrange(mine,maxe)
        if rr == None: continue
        start,end = rr
        dstart = start.strftime('%Y%m%d')
        dend = end.strftime('%Y%m%d')
        try: # if rollspec is 'noroll', then this will fail and goto except-block
            ndaysbefore = int(rollspec)
            print 'hist fetch for %s, %i days' % (sym,ndaysbefore)
            rolldb.update_roll_db(blpcon,(sym,))
            names = rolldb.get_contract_range(sym,ndaysbefore)
        except: names = {sym:None}
        # sort alphabetically here so oldest always gets done first
        # (at least within the decade)
        sorted_contracts = names.keys()
        sorted_contracts.sort()
        for contract in sorted_contracts:
            print 'partial fetch',contract,names[contract]
            if names[contract] == None:
                _start,_end = start,end
            else:
                da,db = names[contract]
                dc,dd = start,end
                try: _start,_end = get_overlap(da,db,dc,dd)
                except: continue # because get_overlap returning None cannot assign to tuple
            # localstart and end are for printing and logging
            localstart = _start.strftime('%Y/%m/%d %H:%M')
            localend = _end.strftime('%Y/%m/%d %H:%M')
            _start = datetime.utcfromtimestamp(time.mktime(_start.timetuple())).strftime(self.blpfmt)
            _end = datetime.utcfromtimestamp(time.mktime(_end.timetuple())).strftime(self.blpfmt)
            logging.debug('requesting intraday bars for %s (%s): %s to %s' % (sym,contract,localstart,localend))
            print 'start,end:',localstart,localend
            result = get_minute(blpcon,contract,_start,_end)
            if len(result) == 0:
                logging.error('warning: 0-length minute data fetch for %s,%s,%s' % (contract,_start,_end))
                continue
            event_count = len(result.values()[0])
            print event_count,'events returned'
            lap = time.clock()
            # todo: split up writes: no more than 5000 before commit (so other threads get a chance)
            # 100,000 rows is 13 seconds on my machine. 5000 should be 0.5 seconds.
            try:
                for i in range(event_count):
                    epoch = calendar.timegm(datetime.strptime(str(result['time'][i]),'%m/%d/%y %H:%M:%S').timetuple())
                    # this uses sid (from sym), NOT contract
                    row = (sid,epoch,result['open'][i],result['high'][i],result['low'][i],result['close'][i],result['volume'][i],result['numEvents'][i])
                    trysleep(c,'insert or ignore into minute (sid,epoch,open,high,low,close,volume,nevents) values (?,?,?,?,?,?,?,?)',row)
                dbcon.commit()
            except Exception,e:
                print 'ERROR',e,'iterating result object'
                logging.error(datetime.now().strftime() + ' error in get_history.histfetch writing DB')
                # todo: tray notify the error and log it
            lap = time.clock() - lap
            print 'database write of %i rows in %.2f seconds' % (event_count,lap)
            logging.debug(' -- minute bars %i rows (%.2f s)' % (event_count,lap))
    for instr in self.instructions:
        oldestdaily = datetime.now().replace(hour=0,minute=0,second=0,microsecond=0) - timedelta(self.dailyback)
        sym = instr[0]
        if instr[1] != 'daily': continue
        print 'DAILY',sym
        fields = instr[2]
        rollspec = instr[3]
        sid = getsid(sym,symcodes,dbcon,c)
        unionrange = None,None
        for f in fields:
            try: fid = fieldcodes[f]
            except:
                trysleep(c,'insert into fields (field) values (?)',(f,))
                trysleep(c,'select fid from fields where field=?',(f,))
                fid, = c.fetchone()
                dbcon.commit()
                fieldcodes[fid] = f
                fieldcodes[f] = fid
            trysleep(c,'select min(epoch),max(epoch) from daily where sid=? and fid=?',(sid,fid))
            mine,maxe = c.fetchone()
            if mine == None or maxe == None:
                unionrange = None
                break
            if unionrange == (None,None):
                unionrange = mine,maxe
            else:
                unionrange = max(mine,unionrange[0]),min(maxe,unionrange[1])
        print sym,'daily unionrange',unionrange
        yesterday = datetime.now().replace(hour=0,minute=0,second=0,microsecond=0) - timedelta(days=1)
        if unionrange == None:
            reqrange = oldestdaily,yesterday
        else:
            mine = datetime.fromordinal(unionrange[0])
            maxe = datetime.fromordinal(unionrange[1])
            print 'comparing',mine,maxe,oldestdaily,yesterday
            if oldestdaily < datetime.fromordinal(unionrange[0]): a = oldestdaily
            else: a = maxe
            reqrange = a,yesterday
        if reqrange[0] >= reqrange[1]:
            print 'skipping daily',sym,'because we\'re up to date'
            continue
        print 'daily request range',sym,reqrange,reqrange[0] > reqrange[1]
        try:
            ndaysbefore = int(rollspec) # exception if it's 'noroll'
            print 'hist fetch for %s, %i days' % (sym,ndaysbefore)
            rolldb.update_roll_db(blpcon,(sym,))
            names = rolldb.get_contract_range(sym,ndaysbefore,daily=True)
        except: names = {sym:None}
        # sort alphabetically here so oldest always gets done first
        # (at least within the year)
        sorted_contracts = names.keys()
        sorted_contracts.sort()
        start,end = reqrange
        for contract in sorted_contracts:
            print 'partial fetch',contract,names[contract]
            if names[contract] == None:
                _start,_end = start,end
            else:
                da,db = names[contract]
                dc,dd = start,end
                try: _start,_end = get_overlap(da,db,dc,dd)
                except: continue # because get_overlap returning None cannot assign to tuple
            _start = _start.strftime('%Y%m%d')
            _end = _end.strftime('%Y%m%d')
            logging.info('daily bars for %s (%s), %s - %s' % (sym,contract,_start,_end))
            result = get_daily(blpcon,(contract,),fields,_start,_end)
            try: result = result[contract]
            except:
                print 'result doesn\'t contain requested symbol'
                logging.error("ERROR: symbol '%s' not in daily request result" % contract)
                # todo: log and alert error
                continue
            if not 'date' in result:
                print 'result has no date field'
                logging.error('ERROR: daily result has no date field')
                # todo: log and alert error
                continue
            keys = result.keys()
            keys.remove('date')
            logging.info(' -- %i days returned' % len(result['date']))
            for i in range(len(result['date'])):
                ordinal = datetime.fromtimestamp(int(result['date'][i])).toordinal()
                for k in keys:
                    trysleep(c,'insert or ignore into daily (sid,fid,epoch,value) values (?,?,?,?)',(sid,fieldcodes[k],ordinal,result[k][i]))
            dbcon.commit()
Print the full traceback instead of just the exception message. The traceback will show you where the exception was raised and hence what the problem is:
import traceback
...
try: self.histfetch()
except Exception,e:
    logging.error('get_history.histfetch error: %s %s' % (str(type(e)),str(e)))
    logging.error(traceback.format_exc())
    if self.errornotify != None:
        self.errornotify('get_history error','%s %s' % ( str(type(e)), str(e) ) )
Update:
With the above (or similar, the idea being to look at the full traceback), you say:
it said it's with the "print" functions. The program works well after I disable all the "print" functions.
The print calls in your post use syntax that is valid in Python 2.x only. If that is what you are using, perhaps the application that runs your script leaves print undefined and you are supposed to use a log function instead; otherwise I can't see anything wrong with the calls (unless you mean only one of the prints was the issue, in which case I would need to see the exact error to identify it; post it if you want to figure this out). If you are using Python 3.x, then you must use print(a, b, c, ...); see the 3.x docs.
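For completeness, one scenario that produces exactly IOError: [Errno 9] Bad file descriptor from print on Windows is running a script with no console attached (for example under pythonw.exe or a GUI launcher), where stdout is not a usable file. Whether that matches this setup is an assumption, not a diagnosis; if it does apply, a minimal sketch of a workaround is to route print output into the logger at startup:

import sys
import logging

class LogWriter(object):
    # Minimal file-like object that forwards writes to a logging function.
    def __init__(self, log_func):
        self.log_func = log_func
    def write(self, text):
        text = text.strip()
        if text:
            self.log_func(text)
    def flush(self):
        pass

# With no console attached, stdout/stderr writes fail; send them to the log.
sys.stdout = LogWriter(logging.debug)
sys.stderr = LogWriter(logging.error)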
