My code:
#Importing the urllib tool to my program
import urllib.request
#Fetch data from URL
response = urllib.request.urlopen('<URL>')
#Store that response into the variable below
taginfo = response.read()
#Tag info result of search for SSI values
taginforesult = taginfo
#print taginfo
print(taginfo)
The result of the above in the Python shell is correct, as follows:
b'LOCATE00016331: tagid="00016331", taggroupid=LOCATE, tagtype=mantis04A, irlocator=null, motion=false, tamper=false, panic=false, lowbattery=false, locationzone="", gpsid="", lastgpsid="", lastgpsts=null, confidencebyrule={}\r\n(CarrierHQ_channel_A: reader=CarrierHQ, channel="A", ssi=-95)\r\n(CarrierHQ_channel_B: reader=CarrierHQ, channel="B", ssi=-99)\r\n\r\n\r\n'
What I want to know is: how do I select only the ssi=-95 and ssi=-99 values from the response above and then assign them to an SSI-A and an SSI-B variable?
Do I use strip(), findall(), search(), ...?
That's a strange format. But you can easily cut it up to get the parts you want.
ssia = (str(taginfo)
        .split("\\r\\n")[1]
        .strip("()")
        .split(",")[-1]
        .strip()
        .split("=")[1])
assert ssia == '-95'
ssib = (str(taginfo)
        .split("\\r\\n")[2]
        .strip("()")
        .split(",")[-1]
        .strip()
        .split("=")[1])
assert ssib == '-99'
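Since you mention findall(): a regular expression can grab both values in one pass, which is less brittle than counting split positions. A minimal sketch, assuming the readings always appear in the response as ssi=<number>:

import re

# Decode the bytes response, then capture every "ssi=<number>" value
ssi_values = re.findall(r'ssi=(-?\d+)', taginfo.decode('utf-8'))
ssi_a, ssi_b = ssi_values[0], ssi_values[1]
assert ssi_a == '-95' and ssi_b == '-99'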
Pretty new to Python, so go easy on me :). The code below works, but I was wondering if there is a way to change the indcode parameter in a loop so I do not have to repeat the requests.get calls.
import requests

paraD = dict()
paraD["area"] = "123"
paraD["periodtype"] = "2"
paraD["indcode"] = "722"
paraD["$limit"] = 1000
#Open URL and get data for business indcode 722
document_1 = requests.get(dataURL, params=paraD)
bizdata_1 = document_1.json()
#Open URL and get data for business indcode 445
paraD["indcode"] = "445"
document_2 = requests.get(dataURL, params=paraD)
bizdata_2 = document_2.json()
#Open URL and get data for business indcode 311
paraD["indcode"] = "311"
document_3 = requests.get(dataURL, params=paraD)
bizdata_3 = document_3.json()
#Combine the three lists
output = bizdata_1 + bizdata_2 + bizdata_3
Since indcode is the only parameter that changes for each request, we will put that in a list and make the web requests inside a loop.
data_url = ""
post_params = dict()
post_params["area"] = "123"
post_params["periodtype"] = "2"
post_params["$limit"]=1000
# The list of indcode values
ind_codes = ["722", "445", "311"]
output = []
# Loop on indcode values
for code in ind_codes:
# Change indcode parameter value in the loop
post_params["indcode"] = code
try:
response = requests.get(data_url, params=post_params)
data1 = response.json()
output.append(data1)
except:
print("web request failed")
# More error handling / retry if required
print(output)
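If you later need to know which response came from which code, a dictionary keyed by indcode may be more convenient than one flat list. A small variation on the loop above (error handling omitted for brevity):

# Collect each JSON response under its indcode key
output_by_code = {}
for code in ind_codes:
    post_params["indcode"] = code
    output_by_code[code] = requests.get(data_url, params=post_params).json()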
Assuming you're using Python 3.9+, you can combine dictionaries using the | operator. However, be sure you understand exactly what it does: when keys collide, the value from the right-hand operand wins. In practice, the code to combine the responses will likely need to be more complex.
When using the requests module, it is very important to check the HTTP status code returned by the call (HTTP verb) you're making.
Here's an approach to the stated problem that may work (depending on how the dictionary merge should be carried out).
from requests import get as GET
from requests.exceptions import HTTPError, Timeout, TooManyRedirects, RequestException
from sys import stderr
# base parameters
params = {'area': '123', 'periodtype': '2', '$limit': 1000}
# the indcodes
indcodes = ('722', '445', '311')
# gets the JSON response (as a Python dictionary)
def getjson(url, params):
    try:
        (r := GET(url, params, timeout=1.0)).raise_for_status()
        return r.json()  # all good
    # if we get any of these exceptions, report to stderr and return an empty dictionary
    except (HTTPError, ConnectionError, Timeout, TooManyRedirects, RequestException) as e:
        print(e, file=stderr)
        return {}
    # any exception here is not raised by requests. Report and re-raise.
    except Exception as f:
        print(f, file=stderr)
        raise
# an empty dictionary
target = {}
# build the target dictionary
# May not produce desired results depending on how the dictionary merge should be carried out
for indcode in indcodes:
    target |= getjson('https://httpbin.org/json', params | {'indcode': indcode})
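If the API actually returns a list of records for each indcode, as the original bizdata_1 + bizdata_2 + bizdata_3 suggests, then accumulating into a list is probably the safer merge. A sketch under that assumption, reusing getjson from above:

# Concatenate list responses instead of merging dictionaries
records = []
for indcode in indcodes:
    records.extend(getjson('https://httpbin.org/json', params | {'indcode': indcode}))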
I am attempting to convert a curl request into a GET request to pull some data for work and transfer it to a local folder with a parameterized file name. One issue is that the data is only in text format and will not convert to JSON, even after trying multiple methods. Per the response headers, the content type is "text/tsv; charset=utf-8".
The next issue is that I cannot load the data into a data frame, partially because I am new to Python and do not understand the various methods for doing so, and partially because the formatting makes it more difficult to find an applicable solution. However, I was able to at least break the text into lists by using the splitlines() method. Unfortunately, though, I still cannot load the lists into a data frame. As of the last run, the error message is: "Error: cannot concatenate object of type '<class '_csv.reader'>'; only Series and DataFrame objs are valid."
import requests
import datetime
import petl
import csv
import pandas as pd
import sys
from requests.auth import HTTPBasicAuth
from curlParameters import *
def calculate_year():
    current_year = datetime.datetime.now().year
    return str(current_year)

def file_name():
    name = "CallDetail"
    year = calculate_year()
    file_type = ".csv"
    return name + year + file_type

try:
    response = requests.get(url, params=parameters, auth=HTTPBasicAuth(username, password))
except Exception as e:
    print("Error:" + str(e))
    sys.exit()

if response.status_code == 200:
    raw_data = response.text
    parsed_data = csv.reader(raw_data.splitlines(), delimiter='\t')
    table = pd.DataFrame(columns=[
        'contact_id',
        'master_contact_id',
        'Contact_Code',
        'media_name',
        'contact_name',
        'ani_dialnum',
        'skill_no',
        'skill_name',
        'campaign_no',
        'campaign_name',
        'agent_no',
        'agent_name',
        'team_no',
        'team_name',
        'disposition_code',
        'sla',
        'start_date',
        'start_time',
        'PreQueue',
        'InQueue',
        'Agent_Time',
        'PostQueue',
        'Total_Time',
        'Abandon_Time',
        'Routing_Time',
        'abandon',
        'callback_time',
        'Logged',
        'Hold_Time'])
    try:
        for row in table:
            table.append(parsed_data)
    except Exception as e:
        print("Error:" + str(e))
        sys.exit()
    petl.tocsv(table=table, source=local_source + file_name(), encoding='utf-8', write_header=True)
So, you're trying to append parsed_data, which is the csv.reader object you'd use to iterate through the CSV data, not the rows themselves. I would recommend reading all the rows out of the response first, then loading them into the DataFrame in one step. This requires a slight restructuring of the code. Something like this:
parsed_data = [row for row in csv.reader(raw_data.splitlines(), delimiter='\t')]
table = pd.DataFrame(parsed_data, columns=your_long_column_list)
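One thing to check (an assumption, since the raw TSV isn't shown): if the first line of the response is a header row, skip it so the column names don't end up as a data row:

# Drop the TSV's own header line, if it has one, before building the DataFrame
table = pd.DataFrame(parsed_data[1:], columns=your_long_column_list)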
I want to read a JSON file online from https://api.myjson.com/bins/y2k4y and print values based on a parameter like 'ipv4' or 'ipv6'. I have written the Python code below, but it raises the error shown. Please advise me on how to solve this problem.
Python code
import requests
import json

# Method to get REST data in JSON format
def getResponse(url, choice):
    response = requests.get(url)
    if response.ok:
        jData = json.loads(response.content)
        if choice == "deviceInfo":
            print("working")
            deviceInformation(jData)
    else:
        print("NOT working")
        response.raise_for_status()

# Parses JSON data to find the switch connected to H4
def deviceInformation(data):
    global switch
    global deviceMAC
    global hostPorts
    switchDPID = ""
    print(data)
    for i in data:
        print("i: ", i['ipv4'])

deviceInfo = "https://api.myjson.com/bins/y2k4y"
getResponse(deviceInfo, "deviceInfo")
Error
File "E:/aaa/e.py", line 27, in deviceInformation
print ("i: ", i['ipv4'])
TypeError: string indices must be integers
json file
https://api.myjson.com/bins/y2k4y
jData is a dict with only one key, 'devices'. 'devices' contains all the other info you require.
Change for i in data: to for i in data['devices']:
for i in data['devices']:
    print("i: ", i['ipv4'])
    print("i: ", i['ipv6'])
Try using for i in data['devices']: instead of for i in data:
I made a small web crawler in one function, upso_final.
If I print(upso_final()), I get 15 lists that include title, address, and phone number. However, I want to print only the titles, so I made the variable title a global string. When I print it, I get only one title, the last one from the run. I want to get all 15 titles.
from __future__ import unicode_literals
import requests
from scrapy.selector import Selector
import scrapy
import pymysql
def upso_final(page=1):
    def upso_from_page(url):
        html = fetch_page(url)
        sel = Selector(text=html)
        global title, address, phone
        title = sel.css('h1::text').extract()
        address = sel.css('address::text').extract()
        phone = sel.css('.mt1::text').extract()
        return {
            'title': title,
            'address': address,
            'phone': phone
        }

    def upso_list_from_listpage(url):
        html = fetch_page(url)
        sel = Selector(text=html)
        upso_list = sel.css('.title_list::attr(href)').extract()
        return upso_list

    def fetch_page(url):
        r = requests.get(url)
        return r.text

    list_url = "http://yp.koreadaily.com/list/list.asp?page={0}&bra_code=LA&cat_code=L020502&strChar=&searchField=&txtAddr=&txtState=&txtZip=&txtSearch=&sort=N".format(page)
    upso_lists = upso_list_from_listpage(list_url)
    upsos = [upso_from_page(url) for url in upso_lists]
    return upsos

upso_final()
print(title, address, phone)
The basic problem is that you're confused about passing values back from a function.
upso_from_page finds each of the 15 records in turn, placing the desired information in the global variables (generally a bad design). However, the only time you print any results is after you've found all 15. Since your logic has each record overwriting the previous one, you print only the last one you found.
It appears that upso_final accumulates the list and returns it, but you ignore that return value. Instead, try this in your main program:
upso_list = upso_final()
for upso in upso_list:
    print(upso)
This should give you a 3-item dictionary for each upso record; from there, you can adjust the referencing and formatting to your taste.
An alternative solution is to print each record as you find it, from within upso_from_page, but your overall design suggests that's not what you want.
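For the narrower goal of printing only the titles, you can pull them out of the returned records instead of relying on globals. A minimal sketch using the dictionary keys that upso_from_page already returns:

upso_list = upso_final()
# Each record is a dict; each 'title' value is itself a list, as returned by extract()
titles = [upso['title'] for upso in upso_list]
for title in titles:
    print(title)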
I have a CSV with keywords in one column and the number of impressions in a second column.
I'd like to provide the keywords in a url (while looping) and for the Google language api to return what type of language was the keyword in.
I have it working manually. If I enter (with the correct api key):
http://ajax.googleapis.com/ajax/services/language/detect?v=1.0&key=myapikey&q=merde
I get:
{"responseData": {"language":"fr","isReliable":false,"confidence":6.213709E-4}, "responseDetails": null, "responseStatus": 200}
which is correct, 'merde' is French.
So far I have this code, but I keep getting server-unreachable errors:
import time
import csv
from operator import itemgetter
import sys
import fileinput
import urllib2
import json
E_OPERATION_ERROR = 1
E_INVALID_PARAMS = 2

#not working
def parse_result(result):
    """Parse a JSONP result string and return a list of terms"""
    # Deserialize JSON to Python objects
    result_object = json.loads(result)
    # Get the rows in the table, then get the second column's value
    # for each row
    return row in result_object

#not working
def retrieve_terms(seedterm):
    print(seedterm)
    """Retrieves and parses data and returns a list of terms"""
    url_template = 'http://ajax.googleapis.com/ajax/services/language/detect?v=1.0&key=myapikey&q=%(seed)s'
    url = url_template % {"seed": seedterm}
    try:
        with urllib2.urlopen(url) as data:
            data = perform_request(seedterm)
            result = data.read()
    except:
        sys.stderr.write('%s\n' % 'Could not request data from server')
        exit(E_OPERATION_ERROR)
    #terms = parse_result(result)
    #print terms
    print result

def main(argv):
    filename = argv[1]
    csvfile = open(filename, 'r')
    csvreader = csv.DictReader(csvfile)
    rows = []
    for row in csvreader:
        rows.append(row)
    sortedrows = sorted(rows, key=itemgetter('impressions'), reverse=True)
    keys = sortedrows[0].keys()
    for item in sortedrows:
        retrieve_terms(item['keywords'])
    try:
        outputfile = open('Output_%s.csv' % (filename), 'w')
    except IOError:
        print("The file is active in another program - close it first!")
        sys.exit()
    dict_writer = csv.DictWriter(outputfile, keys, lineterminator='\n')
    dict_writer.writer.writerow(keys)
    dict_writer.writerows(sortedrows)
    outputfile.close()
    print("File is Done!! Check your folder")

if __name__ == '__main__':
    start_time = time.clock()
    main(sys.argv)
    print("\n")
    print time.clock() - start_time, "seconds for script time"
Any idea how to finish the code so that it will work? Thank you!
Try to add referrer, userip as described in the docs:
An area to pay special attention to relates to correctly identifying yourself in your requests. Applications MUST always include a valid and accurate http referer header in their requests. In addition, we ask, but do not require, that each request contains a valid API Key. By providing a key, your application provides us with a secondary identification mechanism that is useful should we need to contact you in order to correct any problems. Read more about the usefulness of having an API key.

Developers are also encouraged to make use of the userip parameter (see below) to supply the IP address of the end-user on whose behalf you are making the API request. Doing so will help distinguish this legitimate server-side traffic from traffic which doesn't come from an end-user.
Here's an example based on the answer to the question "access to google with python":
#!/usr/bin/python
# -*- coding: utf-8 -*-
import json
import urllib, urllib2
from pprint import pprint
api_key, userip = None, None
query = {'q' : 'матрёшка'}
referrer = "https://stackoverflow.com/q/4309599/4279"
if userip:
    query.update(userip=userip)
if api_key:
    query.update(key=api_key)
url = 'http://ajax.googleapis.com/ajax/services/language/detect?v=1.0&%s' % (
    urllib.urlencode(query))
request = urllib2.Request(url, headers=dict(Referer=referrer))
json_data = json.load(urllib2.urlopen(request))
pprint(json_data['responseData'])
Output
{u'confidence': 0.070496580000000003, u'isReliable': False, u'language': u'ru'}
Another issue might be that seedterm is not properly quoted:
if isinstance(seedterm, unicode):
    value = seedterm
else:  # bytes
    value = seedterm.decode(put_encoding_here)

url = 'http://...q=%s' % urllib.quote_plus(value.encode('utf-8'))
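For example, if the keywords in your CSV are UTF-8 encoded (an assumption; substitute your file's real encoding), the quoting step might look like:

import urllib

keyword = 'merde'  # a byte string read from the CSV
value = keyword.decode('utf-8')  # assumes UTF-8 input
url = ('http://ajax.googleapis.com/ajax/services/language/detect?v=1.0&key=myapikey&q=%s'
       % urllib.quote_plus(value.encode('utf-8')))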