I have a link to a JSON file with some data, and I want to write a Python script that gets the id value (the value that I want).
Can someone help me? I tried the code below, but it keeps responding with { "response": "Too many requests" }.
My code:
import requests

response_API = requests.get('https://api.scratch.mit.edu/users/FlyPhoenix/')
data = response_API.text
print(data)
This solution might work for you:
import json
import requests
response_API = requests.get('https://api.scratch.mit.edu/users/FlyPhoenix/')
data = response_API.text
print(json.loads(data)['id'])
The print() call will display the ID you want.
About { "response": "Too many requests" }:
Obviously, your rate of requests has been too high and the server is not willing to accept this.
You should not try to "dodge" this, or circumvent the server's security settings by spoofing your IP; simply respect the server's answer by not sending too many requests (as explained in How to avoid HTTP error 429 (Too Many Requests) python).
Additional suggestions:
Check that your code is not making the request over and over in a loop.
If you have a dynamic IP, restart your router. If not, wait until you can make requests again.
import requests

def get_id():
    URL = "https://api.scratch.mit.edu/users/FlyPhoenix/"
    resp = requests.get(URL)
    if resp.status_code == 200:
        data_dict = resp.json()
        return data_dict["id"]
    else:
        print("Error: request failed with status code", resp.status_code)

print(get_id())
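If the 429 keeps coming back, one polite option is to retry with a growing delay. A minimal sketch, not part of the original answers (the retry count and delays are illustrative; the Retry-After header, when the server sends one, is the authoritative wait time):

import time
import requests

def get_id_with_backoff(url, max_retries=5):
    """Illustrative: retry politely on 429 instead of hammering the server."""
    delay = 1  # seconds; doubled after each 429
    for _ in range(max_retries):
        resp = requests.get(url)
        if resp.status_code == 200:
            return resp.json()["id"]
        if resp.status_code == 429:
            # Prefer the server's own hint; assumes the seconds form of Retry-After
            wait = int(resp.headers.get("Retry-After", delay))
            time.sleep(wait)
            delay *= 2
        else:
            resp.raise_for_status()
    raise RuntimeError("still rate-limited after {} attempts".format(max_retries))

print(get_id_with_backoff("https://api.scratch.mit.edu/users/FlyPhoenix/"))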
I'm hitting an error on a simple Instagram Graph API request. I am trying to check the status of a video container before posting it.
If I skip this step, sleep for 30 seconds (to make sure the Facebook server has enough time to process the container), and go straight to posting, it works. So we know the container ID and user access token are both accurate.
But if I insert this status-check step, I get a 400 Bad Request response whose content is "Sorry, this content isn't available right now".
Here is the link for the official documentation: https://developers.facebook.com/docs/instagram-api/reference/ig-container
And below is my code. Any ideas?
EDIT:
Figured it out. The documentation says the URL is graph.instagram.com, but it's supposed to be graph.facebook.com. Is the documentation wrong?
url = f"https://graph.instagram.com/{container_id_response.json()['id']}"
container_ready = False
while not container_ready:
time.sleep(3)
# Check status
params = {"fields": "status,id",
"access_token":credentials['FB_USER_ACCESS_TOKEN']}
response = status_response = requests.get(url, params=params)
if response.json()['status_code'] == "FINISHED":
container_ready = True
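Per the EDIT, only the host needed to change. A sketch of the working polling loop under that fix (container_id_response and credentials come from the asker's surrounding script and are assumed here; the status_code field with a FINISHED value is what the linked container reference documents):

import time
import requests

# Assumed context from the surrounding script: container_id_response is the
# response to the POST that created the media container, and credentials
# holds the user access token.
container_id = container_id_response.json()['id']
url = f"https://graph.facebook.com/{container_id}"  # not graph.instagram.com

params = {"fields": "status_code,id",
          "access_token": credentials['FB_USER_ACCESS_TOKEN']}

while True:
    time.sleep(3)  # give the server time to process the container
    status = requests.get(url, params=params).json()
    if status.get('status_code') == "FINISHED":
        break  # container is ready to publish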
I'm trying to create a Python requests script that will add to cart and eventually check out. I did a POST request to
(https://yeezysupply.com/cart/add.js), which is the add-to-cart endpoint I found in the Network tab of Chrome's developer tools. It takes a JSON payload with three fields: id, which is the variant ID of the product; properties, which I don't know the purpose of, so I left it blank; and quantity. I passed the data as a parameter when I did the POST request. I received a 400 error response; nothing was added to my cart, and when I printed the response text I got this.
{
    "status": "bad_request",
    "message": "expected String to be a Hash: properties",
    "description": "expected String to be a Hash: properties"
}
I'm pretty new to requests, so I'm not sure what the error means.
I was able to confirm nothing was added to my cart by doing a GET request to the Shopify cart endpoint (https://yeezysupply.com/cart.json). When I print that response I get this.
{
    "token": "cb67e6c53c63b930b4aca1eb3b5a7510",
    "note": null,
    "attributes": {},
    "original_total_price": 0,
    "total_price": 0,
    "total_discount": 0,
    "total_weight": 0.0,
    "item_count": 0,
    "items": [],
    "requires_shipping": false,
    "currency": "USD",
    "items_subtotal_price": 0,
    "cart_level_discount_applications": []
}
This confirmed nothing was added to my cart. Does anyone know what I'm doing wrong? The product I used for my testing is (https://yeezysupply.com/products/flannel-lined-canvas-jacket-medium-blue?c=%2Fcollections%2Fwomen)
I've tried creating a global requests session to see if I needed cookies. This didn't work either.
import requests
from bs4 import BeautifulSoup as soup

session = requests.Session()

atc_endpoint = 'https://yeezysupply.com/cart/add.js'
atc_info = {
    "id": "1457089478675",
    "properties": "{}",
    "quantity": "1"
}

def add_to_cart():
    pass

atc_post = session.post(atc_endpoint, data=atc_info)
atc_get = session.get('https://yeezysupply.com/cart.json')

print(atc_post.text)
I also tried using headers: headers = {"Content-Type": "application/json"}. I then received the following error:
{
    "error": "822: unexpected token at 'id=1457089478675\u0026properties=%7B%7D\u0026quantity=1'"
}
I'm not sure what token the API is asking for.
I expect the item to appear in my cart and show up in the GET response text.
Try the following things:
Add {"Content-Type": "application/json"} as a header to your request. It would look like this:
headers = {"Content-Type": "application/json"}
atc_post = session.post(atc_endpoint, data=atc_info, headers=headers)
This should do the trick. Your dictionary looks good to me, but if this still gives errors, try json.dumps on your dictionary before sending it.
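The error message "expected String to be a Hash: properties" suggests the endpoint parses a JSON body and wants properties to be an object, not the string "{}". A minimal sketch of that idea using requests' json= parameter, which serializes the dict and sets the Content-Type header in one step (untested against this endpoint):

import requests

session = requests.Session()

atc_info = {
    "id": 1457089478675,
    "properties": {},  # a real hash/object, not the string "{}"
    "quantity": 1,
}

# json= sends a JSON body with Content-Type: application/json, which avoids
# the form-encoded "unexpected token at 'id=...'" error seen when combining
# data= with a JSON Content-Type header.
atc_post = session.post("https://yeezysupply.com/cart/add.js", json=atc_info)
print(atc_post.status_code, atc_post.text)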
Hope this helps. :)
So you're building a bot to check out products (it would seem, anyway). No offence to your talents with Python, but your life would get absolutely better if you just used JavaScript to make your bot do your bidding. Since it is naturally built into browsers anyway, your efforts would be simplified.
If you wanted to run your Bot server-side with Python as your question kind of indicates, and a POST is giving you troubles, just wait till you script checkout! I am not sure you can even do that at this point, so you might want to put the brakes on your plans until you can demonstrably check out without issue. Did you look into that?
I am trying to play with the Hacker News API found here, especially the live data section.
I am currently trying to print the response I get for every new item that I get from the /v0/maxitem API.
Given below is the code that I currently have:
import pyrebase
import requests
from config import config

firebase = pyrebase.initialize_app(config)
firebase_db = firebase.database()

_BASEURL_ = "https://hacker-news.firebaseio.com/v0/item/"

def print_response(id):
    headers = {"Content-Type": "application/json"}
    print(_BASEURL_ + str(id) + ".json")
    response = requests.get(_BASEURL_ + str(id) + ".json", headers=headers)
    print(response.content)

def new_post_handler(message):
    print(message["data"])
    print_response(message["data"])

my_stream = firebase_db.child("/v0/maxitem").stream(new_post_handler,
                                                    stream_id="new_posts")
I am able to get a valid response the first time requests.get runs. But the second time, I always get a NULL value for the content of the response.
The GET URL works in Postman, though: I am able to get a valid response there. The issue seems to be with how the requests module is treating the URL the second time.
Any help greatly appreciated.
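For reference, a minimal sketch that polls the same endpoints with plain requests, which can help rule out the pyrebase streaming layer (the /v0/maxitem.json and /v0/item/<id>.json endpoints come from the linked docs; the polling loop and interval are illustrative):

import time
import requests

BASE = "https://hacker-news.firebaseio.com/v0"

def fetch_item(item_id):
    # The API serves plain JSON over GET; no special headers are required.
    return requests.get(f"{BASE}/item/{item_id}.json").json()

last_seen = None
while True:
    max_id = requests.get(f"{BASE}/maxitem.json").json()
    if max_id != last_seen:
        print(max_id, fetch_item(max_id))
        last_seen = max_id
    time.sleep(5)  # poll politely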
I'm using an API that does HTTP requests and returns JSON. Calling the API, however, requires a start and an end page to be indicated, like this:
import requests

def API_request(URL):
    while True:
        try:
            Response = requests.get(URL)
            Data = Response.json()
            return Data['data']
        except Exception as APIError:
            print(APIError)
            continue

def build_orglist(start_page, end_page):
    APILink = ("http://sc-api.com/?api_source=live&system=organizations&action="
               "all_organizations&source=rsi&start_page={0}&end_page={1}&items_"
               "per_page=500&sort_method=&sort_direction=ascending&expedite=1&f"
               "ormat=json".format(start_page, end_page))
    return API_request(APILink)
The only way to know that you're no longer on an existing page is that the returned JSON is null.
If I wanted to run multiple build_orglist calls covering every single page asynchronously until I reach the end (null JSON), how could I do so?
I went with a mix of #LukasGraf's suggestion of using a single session to unify all of my HTTP connections, and grequests for making the group of HTTP requests in parallel.
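A sketch of how those two pieces can fit together, assuming grequests is installed (the page range and the org_page_url helper below are illustrative, not part of the original code):

import grequests  # uses gevent; import before other network code
import requests

session = requests.Session()

def org_page_url(page):
    # Hypothetical helper: same query as build_orglist, one page at a time.
    return ("http://sc-api.com/?api_source=live&system=organizations&action="
            "all_organizations&source=rsi&start_page={0}&end_page={0}"
            "&items_per_page=500&sort_direction=ascending&expedite=1"
            "&format=json".format(page))

# Build a batch of requests sharing one session, then run them in parallel.
batch = (grequests.get(org_page_url(page), session=session)
         for page in range(1, 21))

for resp in grequests.map(batch):
    if resp is None:
        continue  # that request failed outright
    payload = resp.json()
    if not payload or payload.get('data') is None:
        break  # null JSON means we ran past the last page
    # ... process payload['data'] ...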
In my Python application I have to read many web pages to collect data. To decrease the HTTP calls I would like to fetch only changed pages. My problem is that my code always tells me that the pages have changed (code 200), but in reality they have not.
This is my code:
from models import mytab
import re
import urllib2
from wsgiref.handlers import format_date_time
from datetime import datetime
from time import mktime

def url_change():
    urls = mytab.objects.all()
    # this is some urls:
    # http://www.venere.com/it/pensioni/venezia/pensione-palazzo-guardi/#reviews
    # http://www.zoover.it/italia/sardegna/cala-gonone/san-francisco/hotel
    # http://www.orbitz.com/hotel/Italy/Venice/Palazzo_Guardi.h161844/#reviews
    # http://it.hotels.com/ho292636/casa-del-miele-susegana-italia/
    # http://www.expedia.it/Venezia-Hotel-Palazzo-Guardi.h1040663.Hotel-Information#reviews
    # ...
    for url in urls:
        request = urllib2.Request(url.url)
        if url.last_date == None:
            now = datetime.now()
            stamp = mktime(now.timetuple())
            url.last_date = format_date_time(stamp)
            url.save()
        request.add_header("If-Modified-Since", url.last_date)
        try:
            response = urllib2.urlopen(request)  # Make the request
            # some actions
            now = datetime.now()
            stamp = mktime(now.timetuple())
            url.last_date = format_date_time(stamp)
            url.save()
        except urllib2.HTTPError, err:
            if err.code == 304:
                print "nothing...."
            else:
                print "Error code:", err.code
I do not understand what has gone wrong. Can anyone help me?
Web servers aren't required to respond with 304 when you send an 'If-Modified-Since' header. They're free to send an HTTP 200 and send the entire page again.
Sending an 'If-Modified-Since' or 'If-None-Match' header alerts the server that you'd like a cached response if one is available. It's like sending an 'Accept-Encoding: gzip, deflate' header -- you're just telling the server what you'll accept, not requiring it.
A good way to check whether a site returns 304 is to use Google Chrome's dev tools. For example, open the Network panel on the BLS website: keep refreshing and you will see the server keep returning 304; if you force refresh with Ctrl+F5 (Windows), you will see it return status code 200 instead.
You can use this technique on your example to find out whether the server ever returns 304, or whether you have somehow formatted your request headers incorrectly. Sometimes a webpage imports a resource that does not respect the If- headers, so it returns 200 whatever you do (if any resource on the page does not return 304, the whole page will return 200); but sometimes you only need a specific part of a website, and you can cheat by loading that resource directly and bypassing the whole document.
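Note also that the question's code generates its own timestamp for If-Modified-Since; echoing the validators the server itself sent (Last-Modified and ETag) is the standard approach and usually more reliable. A minimal sketch with the requests library, assuming the server supports conditional GETs (the URL is illustrative):

import requests

url = "http://example.com/page"  # illustrative

# First fetch: record the validators the server gives us.
first = requests.get(url)
last_modified = first.headers.get("Last-Modified")
etag = first.headers.get("ETag")

# Later fetch: send the validators back; 304 means "not changed".
headers = {}
if last_modified:
    headers["If-Modified-Since"] = last_modified
if etag:
    headers["If-None-Match"] = etag

second = requests.get(url, headers=headers)
if second.status_code == 304:
    print("nothing....")  # unchanged, reuse the cached copy
else:
    print("changed, status:", second.status_code)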
You can use this technique on your example to find out if the server does not return 304, or if you have incorrectly formatted your request headers somehow. Sometimes a webpage has a resource imported on to it which does not respect the If- headers and so it returns 200 whatever you do (If any resource on the page does not return 304, the whole page will return 200), but sometimes you are only looking at a specific part of a website and you can cheat by loading the resource directly and bypassing the whole document.