I wrote the following code:
from hashlib import sha256
from base64 import b64encode
import hmac
import urllib
from time import strftime, gmtime
url = 'http://ecs.amazonaws.com/onca/xml'
AWSAccessKeyId = amazon_settings.amazon_access_key_id
AssociateTag = amazon_settings.amazon_associate_tag
Keywords = urllib.quote_plus('Potter')
Operation = 'ItemSearch'
SearchIndex = 'Books'
Service = 'AWSECommerceService'
Timestamp = urllib.quote_plus(strftime("%Y-%m-%dT%H:%M:%S.000Z", gmtime()))
Version = '2011-08-01'
sign_to = 'GET\necs.amazonaws.com\n/onca/xml\nAWSAccessKeyId=%s&AssociateTag=%s&Keywords=%s&Operation=%s&SearchIndex=%s&Service=%s&Timestamp=%s&Version=%s' % (AWSAccessKeyId, AssociateTag, Keywords, Operation, SearchIndex, Service, Timestamp, Version)
Signature = urllib.quote_plus(b64encode(hmac.new(str(amazon_settings.amazon_secret_access_key), str(sign_to), sha256).digest()))
request = '%s?AWSAccessKeyId=%s&AssociateTag=%s&Keywords=%s&Operation=%s&SearchIndex=%s&Service=%s&Timestamp=%s&Version=%s&Signature=%s' % (url, AWSAccessKeyId, AssociateTag, Keywords, Operation, SearchIndex, Service, Timestamp, Version, Signature)
print request
When I use this code, everything works fine.
But if I try to add an ItemPage parameter to the sign_to variable and to the request variable, I get a SignatureDoesNotMatch error.
Help me please.
It's actually not an answer to your question, but I recommend you take a look at an excellent Python wrapper for the Amazon Product Advertising API - python-amazon-product-api.
It's hard to find in the documentation, but you have to make sure that your list of request parameters is in alphabetical order, or else you get a SignatureDoesNotMatch error.
For example, ItemPage must go between AssociateTag and Keywords to be valid (see the sketch after this list):
AWSAccessKeyId
AssociateTag
ItemPage
Keywords
Operation
ResponseGroup
SearchIndex
Service
SignatureVersion
Timestamp
Version
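For illustration, here is a minimal sketch (not the poster's exact code; Python 2 to match the question, with placeholder credentials) of building the canonical string from a dict whose keys are sorted before signing, so an added ItemPage parameter cannot break the ordering:
from hashlib import sha256
from base64 import b64encode
import hmac
import urllib
from time import strftime, gmtime

# Placeholder credentials - substitute your own values
access_key = 'AKIAIOSFODNN7EXAMPLE'
secret_key = '1234567890'
associate_tag = 'mytag-20'

params = {
    'AWSAccessKeyId': access_key,
    'AssociateTag': associate_tag,
    'ItemPage': '2',
    'Keywords': 'Potter',
    'Operation': 'ItemSearch',
    'SearchIndex': 'Books',
    'Service': 'AWSECommerceService',
    'Timestamp': strftime("%Y-%m-%dT%H:%M:%S.000Z", gmtime()),
    'Version': '2011-08-01',
}

# sorted() keeps the query string in the byte order the signature expects,
# no matter which parameters are added later
query = '&'.join('%s=%s' % (k, urllib.quote(str(params[k]), safe='-_.~'))
                 for k in sorted(params))
sign_to = 'GET\necs.amazonaws.com\n/onca/xml\n' + query
signature = b64encode(hmac.new(secret_key, sign_to, sha256).digest())
request = 'http://ecs.amazonaws.com/onca/xml?%s&Signature=%s' % (query, urllib.quote_plus(signature))
print request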
I am currently using Indexing API v3.
When I use this API in a loop, I get this error:
Invalid attribute. 'url' is not in standard URL format
But I am pretty sure that my URL is correct, because it was downloaded from Google Search Console:
Here is the code:
from oauth2client.service_account import ServiceAccountCredentials
import httplib2
import json
import pandas as pd

JSON_KEY_FILE = "key.json"
SCOPES = ["https://www.googleapis.com/auth/indexing"]

credentials = ServiceAccountCredentials.from_json_keyfile_name(JSON_KEY_FILE, scopes=SCOPES)
http = credentials.authorize(httplib2.Http())

def indexURL(url, http):
    ENDPOINT = "https://indexing.googleapis.com/v3/urlNotifications:publish"
    content = {}
    content['url'] = url
    content['type'] = "URL_UPDATED"
    json_ctn = json.dumps(content)
    response, content = http.request(ENDPOINT, method="POST", body=json_ctn)
    result = json.loads(content.decode())
    if("error" in result):
        print("Error({} - {}): {}".format(result["error"]["code"], result["error"]["status"], result["error"]["message"]))
    else:
        print("urlNotificationMetadata.url: {}".format(result["urlNotificationMetadata"]["url"]))
        print("urlNotificationMetadata.latestUpdate.url: {}".format(result["urlNotificationMetadata"]["latestUpdate"]["url"]))
        print("urlNotificationMetadata.latestUpdate.type: {}".format(result["urlNotificationMetadata"]["latestUpdate"]["type"]))
        print("urlNotificationMetadata.latestUpdate.notifyTime: {}".format(result["urlNotificationMetadata"]["latestUpdate"]["notifyTime"]))

# This file contains 2 columns, URL and date
csv = pd.read_csv("my_data.csv")
csv[["URL"]][0:10].apply(lambda x: indexURL(x.to_string(), http), axis=1)
Here is a sample list of URLs:
Can anyone please tell me what's wrong with my code?
Thank you very much in advance for all your help.
It seems that even if I apply .strip() to each row, there is still a \n at the end of each URL.
So instead of passing rows one by one to the lambda, I pass the whole series to the function and use a for loop to handle it.
The whole working example is here:
Google Indexing API v3 Working Example with Python 3
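A minimal sketch of that change (reusing the indexURL function and http object from the question's code above) might look like this:
# Sketch: take the whole URL column, strip stray whitespace/newlines from each
# value, and call indexURL in a plain for loop instead of Series.apply.
def index_urls(urls, http):
    for url in urls:
        indexURL(url.strip(), http)

csv = pd.read_csv("my_data.csv")
index_urls(csv["URL"][0:10], http)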
I'm pretty new.
I wrote this Python script to make an API call to blockr.io to check the balance of multiple bitcoin addresses.
The contents of btcaddy.txt are bitcoin addresses separated by commas. For this example, let it parse this.
import urllib2
import json
btcaddy = open("btcaddy.txt","r")
urlRequest = urllib2.Request("http://btc.blockr.io/api/v1/address/info/" + btcaddy.read())
data = urllib2.urlopen(urlRequest).read()
json_data = json.loads(data)
balance = float(json_data['data''address'])
print balance
raw_input()
However, it gives me an error. What am I doing wrong? For now, how do I get it to print the balance of the addresses?
You've done multiple things wrong in your code. Here's my fix. I recommend a for loop.
import json
import urllib
addresses = open("btcaddy.txt", "r").read()
base_url = "http://btc.blockr.io/api/v1/address/info/"
request = urllib.urlopen(base_url+addresses)
result = json.loads(request.read())['data']
for balance in result:
    print balance['address'], ":", balance['balance'], "BTC"
You don't need the input at the end, either.
Your question is clear, but your attempt is not.
You said you have a file with more than one entry, so you need to read the lines of that file.
with open("btcaddy.txt","r") as a:
    addresses = a.readlines()
Now you can iterate over the entries and make a request to the API. The urllib module is enough for this task.
import json
import urllib.request

base_url = "http://btc.blockr.io/api/v1/address/info/%s"

for address in addresses:
    # strip() removes the trailing newline left by readlines()
    request = urllib.request.urlopen(base_url % address.strip())
    result = json.loads(request.read().decode('utf8'))
    print(result)
HTTP responses are bytes, so you should use decode('utf8') to handle the data.
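Since the question says btcaddy.txt holds the addresses separated by commas rather than one per line, a variation of the same idea could split on commas first. A sketch, assuming the single-address response carries the same data/address/balance fields used in the first answer:
import json
import urllib.request

base_url = "http://btc.blockr.io/api/v1/address/info/%s"

# The file holds comma-separated addresses, so split on commas and strip
# whitespace before building each request.
with open("btcaddy.txt", "r") as f:
    addresses = [a.strip() for a in f.read().split(",") if a.strip()]

for address in addresses:
    response = urllib.request.urlopen(base_url % address)
    result = json.loads(response.read().decode("utf8"))
    print(result["data"]["address"], ":", result["data"]["balance"], "BTC")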
I'm using this function to get the latest commit url using PyGithub:
from github import Github
def getLastCommitURL():
    encrypted = 'mypassword'
    # naiveDecrypt defined elsewhere
    g = Github('myusername', naiveDecrypt(encrypted))
    org = g.get_organization('mycompany')
    code = org.get_repo('therepo')
    commits = code.get_commits()
    last = commits[0]
    return last.html_url
It works but it seems to make Github unhappy with my IP address and give me a slow response for the resulting url. Is there a more efficient way for me to do this?
This wouldn't work if you had no commits in the past 24 hours. But if you do, it seems to return faster and will request fewer commits, according to the Github API documentation:
from datetime import datetime, timedelta
from github import Github

def getLastCommitURL():
    encrypted = 'mypassword'
    # naiveDecrypt defined elsewhere, as in the question
    g = Github('myusername', naiveDecrypt(encrypted))
    org = g.get_organization('mycompany')
    code = org.get_repo('therepo')
    # limit to commits in the past 24 hours
    since = datetime.now() - timedelta(days=1)
    commits = code.get_commits(since=since)
    last = commits[0]
    return last.html_url
You could directly make a request to the API.
from urllib.request import urlopen
import json
def get_latest_commit(owner, repo):
    url = 'https://api.github.com/repos/{owner}/{repo}/commits?per_page=1'.format(owner=owner, repo=repo)
    response = urlopen(url).read()
    data = json.loads(response.decode())
    return data[0]

if __name__ == '__main__':
    commit = get_latest_commit('mycompany', 'therepo')
    print(commit['html_url'])
In this case you would only be making one request to the API instead of 3, and you are only getting the last commit instead of all of them. It should be faster as well.
How could I make the following call in Python? Pseudocode version:
jsonTwitterResponse = twitter.get(up to max of 3
tweets within 3km of longitude: 7, latitude: 5)
print jsonTwitterResponse
It looks like the geocode API is what I need. I have no idea how to actually code this up though. How would I do the above in actual code?
Here is a sample geocode request:
import urllib, json, pprint
params = urllib.urlencode(dict(q='obama', rpp=10, geocode='37.781157,-122.398720,1mi'))
u = urllib.urlopen('http://search.twitter.com/search.json?' + params)
j = json.load(u)
pprint.pprint(j)
The full Twitter REST API is described here: https://dev.twitter.com/docs/api
Also, Twitter has a location search FAQ that may be of interest.
In addition to Raymond Hettinger's answer, I'd like to mention that you can also use a query like "near:Amsterdam within:5km" if you don't want to work with actual coordinates.
Example: http://search.twitter.com/search?q=near:Amsterdam%20within:5km
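For instance, the same kind of request as in the snippet above, but with the textual operators inside the query instead of a geocode parameter (a sketch against the legacy search endpoint shown earlier; rpp=3 matches the "max of 3 tweets" in the question):
import urllib, json, pprint

# near:/within: go inside the q parameter itself
params = urllib.urlencode(dict(q='near:Amsterdam within:5km', rpp=3))
u = urllib.urlopen('http://search.twitter.com/search.json?' + params)
j = json.load(u)
pprint.pprint(j)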
I think this method might've been added more recently:
import urllib, json, pprint
params = urllib.urlencode(dict(lat=37.76893497, long=-122.42284884))
u = urllib.urlopen('https://api.twitter.com/1/geo/reverse_geocode.json?' + params)
j = json.load(u)
pprint.pprint(j)
Documentation: https://dev.twitter.com/docs/api/1/get/geo/reverse_geocode
How can I get the current date, month and year online using Python? By this I mean: rather than getting it from the computer's clock, visit a website and get it from there, so it doesn't rely on the computer's settings.
So, thinking about the "would be so trivial" part mentioned below, I went ahead and just made a Google App Engine web app -- when you visit it, it returns a simple response claiming to be HTML but actually just a string such as 2009-05-26 02:01:12 UTC\n. Any feature requests?-)
Usage example with Python's urllib module:
Python 2.7
>>> from urllib2 import urlopen
>>> res = urlopen('http://just-the-time.appspot.com/')
>>> time_str = res.read().strip()
>>> time_str
'2017-07-28 04:55:48'
Python 3.x+
>>> from urllib.request import urlopen
>>> res = urlopen('http://just-the-time.appspot.com/')
>>> result = res.read().strip()
>>> result
b'2017-07-28 04:53:46'
>>> result_str = result.decode('utf-8')
>>> result_str
'2017-07-28 04:53:46'
If you can't use NTP, but rather want to stick with HTTP, you could fetch http://developer.yahooapis.com/TimeService/V1/getTime (e.g. with urllib.urlopen) and parse the results:
<?xml version="1.0" encoding="UTF-8"?>
<Error xmlns="urn:yahoo:api">
The following errors were detected:
<Message>Appid missing or other error </Message>
</Error>
<!-- p6.ydn.sp1.yahoo.com uncompressed/chunked Mon May 25 18:42:11 PDT 2009 -->
Note that the datetime (in PDT) is in the final comment (the error message is due to lack of APP ID). There probably are more suitable web services to get the current date and time in HTTP (without requiring registration &c), since e.g. making such a service freely available on Google App Engine would be so trivial, but I don't know of one offhand.
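If you did go that route, the timestamp could be pulled out of the trailing comment with a bit of string parsing. A sketch, assuming the response still looks like the sample above (Python 3; use urllib.urlopen on Python 2):
import re
from urllib.request import urlopen

xml = urlopen("http://developer.yahooapis.com/TimeService/V1/getTime").read().decode("utf-8")

# The date/time string sits inside the final <!-- ... --> comment,
# e.g. "Mon May 25 18:42:11 PDT 2009".
match = re.search(r'(\w{3} \w{3}\s+\d+ \d{2}:\d{2}:\d{2} \w+ \d{4})\s*-->', xml)
if match:
    print(match.group(1))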
For this, an NTP server can be used.
import ntplib
import datetime, time

print('Make sure you have an internet connection.')
try:
    client = ntplib.NTPClient()
    response = client.request('pool.ntp.org')
    Internet_date_and_time = datetime.datetime.fromtimestamp(response.tx_time)
    print('\n')
    print('Internet date and time as reported by NTP server: ', Internet_date_and_time)
except OSError:
    print('\n')
    print('Internet date and time could not be reported by the server.')
    print('There is no internet connection.')
In order to utilise an online time string, e.g. one served by an online service (http://just-the-time.appspot.com/), it can be read and converted into a datetime.datetime object using urllib2 and datetime.strptime:
import urllib2
from datetime import datetime
def getOnlineUTCTime():
    webpage = urllib2.urlopen("http://just-the-time.appspot.com/")
    internettime = webpage.read()
    OnlineUTCTime = datetime.strptime(internettime.strip(), '%Y-%m-%d %H:%M:%S')
    return OnlineUTCTime
or, more compactly (though less readable):
OnlineUTCTime=datetime.strptime(urllib2.urlopen("http://just-the-time.appspot.com/").read().strip(),
'%Y-%m-%d %H:%M:%S')
A little exercise: comparing your own UTC time with the online time:
print(datetime.utcnow() - getOnlineUTCTime())
# 0:00:00.118403
#if the difference is negative the result will be something like: -1 day, 23:59:59.033398
(bear in mind that processing time is included also)
Go to timezonedb.com and create an account. You will receive an API key by email; use that API key in the following code:
from urllib import request
from datetime import datetime
import json
def GetTime(zone):
    ApiKey = "YOUR API KEY"
    webpage = request.urlopen("http://api.timezonedb.com/v2/get-time-zone?key=" + ApiKey + "&format=json&by=zone&zone=" + zone)
    internettime = json.loads(webpage.read().decode("UTF-8"))
    OnlineTime = datetime.strptime(internettime["formatted"].strip(), '%Y-%m-%d %H:%M:%S')
    return OnlineTime

print(GetTime("Asia/Kolkata"))  # you can pass any zone region name, for example: America/Chicago
This works really well for me, no account required:
import logging
import requests
from datetime import datetime

logger = logging.getLogger(__name__)

def get_internet_datetime(time_zone: str = "etc/utc") -> datetime:
    """
    Get the current internet time from:
    'https://www.timeapi.io/api/Time/current/zone?timeZone=etc/utc'
    """
    timeapi_url = "https://www.timeapi.io/api/Time/current/zone"
    headers = {
        "Accept": "application/json",
    }
    params = {"timeZone": time_zone}
    dt = None
    try:
        request = requests.get(timeapi_url, headers=headers, params=params)
        r_dict = request.json()
        dt = datetime(
            year=r_dict["year"],
            month=r_dict["month"],
            day=r_dict["day"],
            hour=r_dict["hour"],
            minute=r_dict["minute"],
            second=r_dict["seconds"],
            microsecond=r_dict["milliSeconds"] * 1000,
        )
    except Exception:
        logger.exception("ERROR getting datetime from internet...")
        return None
    return dt
Here is a Python module for hitting NIST online: http://freshmeat.net/projects/mxdatetime.
Perhaps you mean the NTP protocol? This project may help: http://pypi.python.org/pypi/ntplib/0.1.3
Here is the code I made for myself. On Linux I had a problem where the date and time changed each time I switched on my PC, so instead of setting them again and again, I made this script, which the date command can use (through an alias) to set the date and time automatically, since it requires the proper date from the internet.
import requests

resp = requests.get('https://www.timeapi.io/api/Time/current/zone?timeZone=etc/utc')
resp = resp.text

# Pull the value of the "date" field out of the raw JSON text:
# everything between '"date":"' and the next closing quote.
first = resp.find('"date":"') + 8
date_end = resp.find('"', first)
print(resp[first:date_end] + "\n")

# Same for the "time" field.
second = resp.find('"time":"') + 8
time_end = resp.find('"', second)
print(resp[second:time_end] + "\n")