Pretty new to Python, so go easy on me :). The code below works, but I was wondering if there is a way to change the indcode parameter in a loop so I do not have to repeat the requests.get call.
paraD = dict()
paraD["area"] = "123"
paraD["periodtype"] = "2"
paraD["indcode"] = "722"
paraD["$limit"]=1000
#Open URL and get data for business indcode 722
document_1 = requests.get(dataURL, params=paraD)
bizdata_1 = document_1.json()
#Open URL and get data for business indcode 445
paraD["indcode"] = "445"
document_2 = requests.get(dataURL, params=paraD)
bizdata_2 = document_2.json()
#Open URL and get data for business indcode 311
paraD["indcode"] = "311"
document_3 = requests.get(dataURL, params=paraD)
bizdata_3 = document_3.json()
#Combine the three lists
output = bizdata_1 + bizdata_2 + bizdata_3
Since indcode is the only parameter that changes for each request, we will put that in a list and make the web requests inside a loop.
data_url = ""
post_params = dict()
post_params["area"] = "123"
post_params["periodtype"] = "2"
post_params["$limit"]=1000
# The list of indcode values
ind_codes = ["722", "445", "311"]
output = []
# Loop on indcode values
for code in ind_codes:
# Change indcode parameter value in the loop
post_params["indcode"] = code
try:
response = requests.get(data_url, params=post_params)
data1 = response.json()
output.append(data1)
except:
print("web request failed")
# More error handling / retry if required
print(output)
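For the "retry if required" comment above, one option (a sketch; the retry counts and status codes are illustrative assumptions, reusing data_url and post_params from the loop above) is to let requests retry transient failures at the transport layer:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# retry up to 3 times on transient server errors, backing off between attempts
retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[500, 502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))
session.mount("http://", HTTPAdapter(max_retries=retries))
response = session.get(data_url, params=post_params)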
Assuming you're using Python 3.9+, you can combine dictionaries using the | operator. However, you need to be sure that you understand exactly what this will do. In practice, the code needed to combine the dictionaries correctly will likely be more complex.
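A minimal illustration of what | does when keys collide (the right-hand operand wins):

a = {'x': 1, 'y': 2}
b = {'y': 20, 'z': 30}
print(a | b)   # {'x': 1, 'y': 20, 'z': 30} - b's value for 'y' silently replaces a's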
When using the requests module, it is very important to check the HTTP status code returned by the function (HTTP verb) you're calling.
Here's an approach to the stated problem that may work (depending on how the dictionary merge is effected).
from requests import get as GET
from requests.exceptions import HTTPError, Timeout, TooManyRedirects, RequestException
from sys import stderr

# base parameters
params = {'area': '123', 'periodtype': '2', '$limit': 1000}
# the indcodes
indcodes = ('722', '445', '311')

# gets the JSON response (as a Python dictionary)
def getjson(url, params):
    try:
        # note: raise_for_status() raises requests.exceptions.HTTPError, not urllib's
        (r := GET(url, params, timeout=1.0)).raise_for_status()
        return r.json()  # all good
    # if we get any of these exceptions, report to stderr and return an empty dictionary
    except (HTTPError, ConnectionError, Timeout, TooManyRedirects, RequestException) as e:
        print(e, file=stderr)
        return {}
    # any exception here is not associated with requests. Report and raise
    except Exception as f:
        print(f, file=stderr)
        raise

# an empty dictionary
target = {}
# build the target dictionary
# May not produce desired results depending on how the dictionary merge should be carried out
for indcode in indcodes:
    target |= getjson('https://httpbin.org/json', params | {'indcode': indcode})
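Note that the repeated |= keeps only the last response's value for any key the responses share. If the responses have the same top-level shape, one alternative (an assumption about the result you actually want) is to key each response by its indcode instead:

target = {indcode: getjson('https://httpbin.org/json', params | {'indcode': indcode})
          for indcode in indcodes}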
I'm very new to python, but learning.
I get the error below in my code. I found one possible fix, but I'm not sure it is correct.
UnboundLocalError: local variable 'newdict' referenced before assignment
In my function I want to copy data to newdict if [elt['id'] for elt in response_json['data']] is higher, but I get the UnboundLocalError above. I googled and found that I could add global newdict, but I am not sure if that is the right way of handling variables in Python, and I am not sure how to solve it otherwise.
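(To illustrate why that error occurs, here is a hypothetical minimal example, not from the original code: assigning to a name anywhere inside a function makes that name local to the whole function, so a read before the first assignment fails.)

counter = 0
def bump():
    counter = counter + 1  # UnboundLocalError: the assignment makes 'counter' local,
                           # so the read on the right happens before any local binding exists
bump()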
I want to copy only the 'data' key from the nested dictionary response_json so I can compare the 'id' element; if it is higher than the previous one, I need to do something.
I hope someone can help me.
import json
import pprint, datetime
import numpy as np
import schedule
import time
from http import client

trading_pair_id = 48
offset = 0
limit = 1
newdict = {'id': 0}  # <--- Not sure I have done this correct

def trades():
    global newdict  # <--- I'm not sure this is pythonically correct
    # call the API `GET trades` endpoint
    conn = client.HTTPSConnection("trade.blocktrade.com")
    conn.request("GET", "/api/v1/trades/%d?offset=%d&limit=%d" % (trading_pair_id, offset, limit),
                 headers={'Content-Type': 'application/json'})
    response = conn.getresponse()
    response_raw = response.read()
    try:
        response_json = json.loads(response_raw)
        pprint.pprint(response_json)
    except:
        print(response_raw)
    print([elt['id'] for elt in response_json['data']])
    if [elt['id'] for elt in response_json['data']] > newdict['id']:
        newdict = dict(response_json)
    print(newdict)

schedule.every(5).seconds.do(trades)

while True:
    schedule.run_pending()
    time.sleep(1)
If I understand you correctly, this should be OK:
import json
import pprint, datetime
import numpy as np
import schedule
import time
from http import client

trading_pair_id = 48
offset = 0
limit = 1
newdict = {'id': 0}  # <-- Init variable

def trades(newdict):  # <-- Pass variable to the function
    # call the API `GET trades` endpoint
    conn = client.HTTPSConnection("trade.blocktrade.com")
    conn.request("GET", "/api/v1/trades/%d?offset=%d&limit=%d" % (trading_pair_id, offset, limit),
                 headers={'Content-Type': 'application/json'})
    response = conn.getresponse()
    response_raw = response.read()
    try:
        response_json = json.loads(response_raw)
        pprint.pprint(response_json)
        # <-- replace newdict data with newest id
        for elt in response_json['data']:
            if elt['id'] > newdict['id']:
                newdict.update(elt)  # mutate in place so the change persists across calls
        print(newdict)
    except:
        print(response_raw)

# pass the function and its argument separately; trades(newdict) would call it immediately
schedule.every(5).seconds.do(trades, newdict)
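One detail worth calling out in the sketch above: rebinding the parameter (newdict = elt) would only change the local name, while newdict.update(elt) mutates the dict object the scheduler keeps passing in, so the newest id survives between calls. A tiny illustration with hypothetical names:

def rebind(d):
    d = {'id': 99}        # rebinds the local name; the caller's dict is unchanged

def mutate(d):
    d.update({'id': 99})  # mutates in place; the caller's dict now has id 99

d = {'id': 0}
rebind(d); print(d)  # {'id': 0}
mutate(d); print(d)  # {'id': 99}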
I am attempting to convert a curl request into a get-request to pull some data for work and transfer it to a local folder with a parameterized file name. One issue is that the data is only in text format and will not convert to JSON, even after trying multiple methods. Per the response, the data type is "text/tsv; charset=utf-8."
The next issue is that I cannot load the data into a data frame, partially because I am new to Python and do not understand the various methods for doing so, and partially because the formatting makes it more difficult to find an applicable solution. However, I was able to at least break the text into lists by using the splitlines() method. Unfortunately, though, I still cannot load the lists into a data frame. As of the last run, the error message is: "Error: cannot concatenate object of type '<class '_csv.reader'>'; only Series and DataFrame objs are valid."
import requests
import datetime
import petl
import csv
import pandas as pd
import sys
from requests.auth import HTTPBasicAuth
from curlParameters import *

def calculate_year():
    current_year = datetime.datetime.now().year
    return str(current_year)

def file_name():
    name = "CallDetail"
    year = calculate_year()
    file_type = ".csv"
    return name + year + file_type

try:
    response = requests.get(url, params=parameters, auth=HTTPBasicAuth(username, password))
except Exception as e:
    print("Error:" + str(e))
    sys.exit()

if response.status_code == 200:
    raw_data = response.text
    parsed_data = csv.reader(raw_data.splitlines(), delimiter='\t')
    table = pd.DataFrame(columns=[
        'contact_id',
        'master_contact_id',
        'Contact_Code',
        'media_name',
        'contact_name',
        'ani_dialnum',
        'skill_no',
        'skill_name',
        'campaign_no',
        'campaign_name',
        'agent_no',
        'agent_name',
        'team_no',
        'team_name',
        'disposition_code',
        'sla',
        'start_date',
        'start_time',
        'PreQueue',
        'InQueue',
        'Agent_Time',
        'PostQueue',
        'Total_Time',
        'Abandon_Time',
        'Routing_Time',
        'abandon',
        'callback_time',
        'Logged',
        'Hold_Time'])
    try:
        for row in table:
            table.append(parsed_data)
    except Exception as e:
        print("Error:" + str(e))
        sys.exit()
    petl.tocsv(table=table, source=local_source + file_name(), encoding='utf-8', write_header=True)
So, you're trying to append parsed_data, which is the iterator over your CSV data, to the DataFrame. I would recommend reading all the data from the response first, then loading it into the DataFrame. This requires a slight restructuring of the code, something like this:
parsed_data = [row for row in csv.reader(raw_data.splitlines(), delimiter='\t')]
table = pd.DataFrame(parsed_data, columns=your_long_column_list)
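As an alternative sketch (assuming the first line of the TSV is a header row; pass header=None and names=... if it is not), pandas can parse the text directly:

import io
import pandas as pd

table = pd.read_csv(io.StringIO(raw_data), sep='\t')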
I'm calling a LinkedIn API with the code below and it does what I want.
However, when I use almost identical code inside a loop, it returns a type error:
File "C:\Users\pchmurzynski\OneDrive - Centiq Ltd\Documents\Python\mergedreqs.py", line 54, in <module>
auth_headers = headers(access_token)
TypeError: 'dict' object is not callable
It has a problem with this line (which again, works fine outside of the loop):
headers = headers(access_token)
I tried changing it to
headers = headers.get(access_token)
or
headers = headers[access_token]
EDIT:
I have also tried this, with the same error:
auth_headers = headers(access_token)
But it didn't help. What am I doing wrong? Why does the dictionary work fine outside of the loop but not inside of it, and what should I do to make it work?
What I am hoping to achieve is to get a list, which I can save as json with share statistics called for each ID from the "shids" list. That can be done with individual requests - one link for one ID,
(f'https://api.linkedin.com/v2/organizationalEntityShareStatistics?q=organizationalEntity&organizationalEntity=urn%3Ali%3Aorganization%3A77487&ugcPosts=List(urn%3Ali%3AugcPost%3A{shid})
or a request with a list of IDs.
(f'https://api.linkedin.com/v2/organizationalEntityShareStatistics?q=organizationalEntity&organizationalEntity=urn%3Ali%3Aorganization%3A77487&ugcPosts=List(urn%3Ali%3AugcPost%3A{shid},urn%3Ali%3AugcPost%3A{shid2},...,urn%3Ali%3AugcPost%3A{shidx})
Updated Code thanks to your comments.
shlink = ("https://api.linkedin.com/v2/organizationalEntityShareStatistics?q=organizationalEntity&organizationalEntity=urn%3Ali%3Aorganization%3A77487&shares=List(urn%3Ali%3Ashare%3A{})")

# loop through the list of share ids and make an api request for each of them
shares = []
token = auth(credentials)   # Authenticate the API
headers = fheaders(token)   # Make the headers to attach to the API call.

for shid in shids:
    # create a request link for each share id
    r = shlink.format(shid)
    # call the api
    res = requests.get(r, headers=headers)
    share_stats = res.json()
    # append the response to the shares list
    shares.append(share_stats["elements"])
"works fine outside the loop"
Because in the loop, you re-define the variable. I added print statements to show it:
from liapiauth import auth, headers  # one type

for ...:
    ...
    print(type(headers))
    headers = headers(access_token)  # now set to another type
    print(type(headers))
Lesson learned - don't overwrite your imports.
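A standalone illustration of the same failure mode, with hypothetical names:

from math import sqrt

sqrt = sqrt(4)  # the name 'sqrt' now points at the float 2.0, not the function
sqrt(9)         # TypeError: 'float' object is not callable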
Some refactors: your auth token isn't changing, so don't build the headers inside the loop, and you can use one method for all LinkedIn API queries.
from liapiauth import auth, headers
import requests

API_PREFIX = 'https://api.linkedin.com/v2'
SHARES_ENDPOINT_FMT = '/organizationalEntityShareStatistics?q=organizationalEntity&organizationalEntity=urn%3Ali%3Aorganization%3A77487&shares=List(urn%3Ali%3Ashare%3A{}'

def get_linkedin_response(endpoint, headers):
    return requests.get(API_PREFIX + endpoint, headers=headers)

def main(access_token=None):
    if access_token is None:
        raise ValueError('Access-Token not defined')
    auth_headers = headers(access_token)
    shares = []
    for shid in shids:
        endpoint = SHARES_ENDPOINT_FMT.format(shid)
        resp = get_linkedin_response(endpoint, auth_headers)
        if resp.status_code // 100 == 2:
            share_stats = resp.json()
            shares.append(share_stats["elements"])
            # TODO: extract your data here, e.g.
            # idlist = [el["id"] for el in share_stats["elements"]]

if __name__ == '__main__':
    credentials = 'credentials.json'
    main(auth(credentials))
I have this loop in my app.py. For some reason it extends the load time by over 3 seconds. Are there any solutions?
import dateutil.parser as dp
import requests

# Converts date from ISO-8601 string to formatted string and returns it
def dateConvert(date):
    return dp.parse(date).strftime("%H:%M # %e/%b/%y")

def nameFromID(userID):
    if userID is None:
        return 'Unknown'
    else:
        response = requests.get("https://example2.org/" + str(userID), headers=headers)
        return response.json()['firstName'] + ' ' + response.json()['lastName']

logs = []
response = requests.get("https://example.org", headers=headers)
for response in response.json():
    logs.append([nameFromID(response['member']), dateConvert(response['createdAt'])])
It extends the load time by over 3 seconds because it does a lot of unnecessary work, that's why.
You're not using requests Sessions. Each request will require creating and tearing down an HTTPS connection. That's slow.
You're doing another HTTPS request for each name conversion. (See above.)
You're parsing the JSON you get in that function twice.
Whatever dp.parse() is (dateutil?), it's probably doing a lot of extra work parsing from a free-form string. If you know the input format, use strptime.
Here's a rework that should be significantly faster. Please see the TODO points first, of course.
Also, if you know the member id -> name mapping doesn't change, you can make name_cache a suitably named global variable too (but remember it may be persisted between requests).
import datetime
import requests

INPUT_DATE_FORMAT = "TODO_FILL_ME_IN"  # TODO: FILL ME IN.

def dateConvert(date: str):
    return datetime.datetime.strptime(date, INPUT_DATE_FORMAT).strftime(
        "%H:%M # %e/%b/%y"
    )

def nameFromID(sess: requests.Session, userID):
    if userID is None:
        return "Unknown"
    response = sess.get(f"https://example2.org/{userID}")
    response.raise_for_status()
    data = response.json()
    return "{firstName} {lastName}".format_map(data)

def do_thing():
    headers = {}  # TODO: fill me in
    name_cache = {}
    with requests.Session() as sess:
        sess.headers.update(headers)
        logs = []
        response = sess.get("https://example.org")
        for response in response.json():
            member_id = response["member"]
            name = name_cache.get(member_id)
            if not name:
                name = name_cache[member_id] = nameFromID(sess, member_id)
            logs.append([name, dateConvert(response["createdAt"])])
    return logs
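As an aside (a sketch, not the answer's original approach), functools.lru_cache can stand in for the manual name_cache dict if the helper is defined where the session is in scope:

from functools import lru_cache

def do_thing_cached():
    with requests.Session() as sess:
        @lru_cache(maxsize=None)
        def cached_name(member_id):
            return nameFromID(sess, member_id)
        # ... then call cached_name(member_id) instead of the name_cache lookups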
I have a CSV with keywords in one column and the number of impressions in a second column.
I'd like to provide the keywords in a url (while looping) and for the Google language api to return what type of language was the keyword in.
I have it working manually. If I enter (with the correct api key):
http://ajax.googleapis.com/ajax/services/language/detect?v=1.0&key=myapikey&q=merde
I get:
{"responseData": {"language":"fr","isReliable":false,"confidence":6.213709E-4}, "responseDetails": null, "responseStatus": 200}
which is correct, 'merde' is French.
So far I have this code, but I keep getting "server unreachable" errors:
import time
import csv
from operator import itemgetter
import sys
import fileinput
import urllib2
import json

E_OPERATION_ERROR = 1
E_INVALID_PARAMS = 2

# not working
def parse_result(result):
    """Parse a JSONP result string and return a list of terms"""
    # Deserialize JSON to Python objects
    result_object = json.loads(result)
    # Get the rows in the table, then get the second column's value
    # for each row
    return row in result_object

# not working
def retrieve_terms(seedterm):
    print(seedterm)
    """Retrieves and parses data and returns a list of terms"""
    url_template = 'http://ajax.googleapis.com/ajax/services/language/detect?v=1.0&key=myapikey&q=%(seed)s'
    url = url_template % {"seed": seedterm}
    try:
        with urllib2.urlopen(url) as data:
            data = perform_request(seedterm)
            result = data.read()
    except:
        sys.stderr.write('%s\n' % 'Could not request data from server')
        exit(E_OPERATION_ERROR)
    #terms = parse_result(result)
    #print terms
    print result

def main(argv):
    filename = argv[1]
    csvfile = open(filename, 'r')
    csvreader = csv.DictReader(csvfile)
    rows = []
    for row in csvreader:
        rows.append(row)
    sortedrows = sorted(rows, key=itemgetter('impressions'), reverse=True)
    keys = sortedrows[0].keys()
    for item in sortedrows:
        retrieve_terms(item['keywords'])
    try:
        outputfile = open('Output_%s.csv' % (filename), 'w')
    except IOError:
        print("The file is active in another program - close it first!")
        sys.exit()
    dict_writer = csv.DictWriter(outputfile, keys, lineterminator='\n')
    dict_writer.writer.writerow(keys)
    dict_writer.writerows(sortedrows)
    outputfile.close()
    print("File is Done!! Check your folder")

if __name__ == '__main__':
    start_time = time.clock()
    main(sys.argv)
    print("\n")
    print time.clock() - start_time, "seconds for script time"
Any idea how to finish the code so that it will work? Thank you!
Try to add referrer, userip as described in the docs:
"An area to pay special attention to relates to correctly identifying yourself in your requests. Applications MUST always include a valid and accurate http referer header in their requests. In addition, we ask, but do not require, that each request contains a valid API Key. By providing a key, your application provides us with a secondary identification mechanism that is useful should we need to contact you in order to correct any problems. Read more about the usefulness of having an API key.

Developers are also encouraged to make use of the userip parameter (see below) to supply the IP address of the end-user on whose behalf you are making the API request. Doing so will help distinguish this legitimate server-side traffic from traffic which doesn't come from an end-user."
Here's an example based on the answer to the question "access to google with python":
#!/usr/bin/python
# -*- coding: utf-8 -*-
import json
import urllib, urllib2
from pprint import pprint

api_key, userip = None, None
query = {'q': 'матрёшка'}
referrer = "https://stackoverflow.com/q/4309599/4279"

if userip:
    query.update(userip=userip)
if api_key:
    query.update(key=api_key)

url = 'http://ajax.googleapis.com/ajax/services/language/detect?v=1.0&%s' % (
    urllib.urlencode(query))
request = urllib2.Request(url, headers=dict(Referer=referrer))
json_data = json.load(urllib2.urlopen(request))
pprint(json_data['responseData'])
Output
{u'confidence': 0.070496580000000003, u'isReliable': False, u'language': u'ru'}
Another issue might be that seedterm is not properly quoted:
if isinstance(seedterm, unicode):
    value = seedterm
else:  # bytes
    value = seedterm.decode(put_encoding_here)
url = 'http://...q=%s' % urllib.quote_plus(value.encode('utf-8'))
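(Side note, in case this ever moves to Python 3: str is already Unicode there and the helper lives in urllib.parse, so the equivalent of the above is simply:)

from urllib.parse import quote_plus
url = 'http://...q=%s' % quote_plus(seedterm)  # quote_plus encodes str as UTF-8 by default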