I'm attempting to perform a GET request, but my IDLE instance consistently freezes or crashes when I do so.
import requests
url = 'https://zkillboard.com/api/history/20161217/'
req = requests.get(url)
print(req.status_code)
print(req.text)
Output:
>>>
============= RESTART: D:\Google Drive\eveSoloPredictor\test.py =============
200
*then IDLE freezes/crashes before printing the body
I am able to request this URL and receive a JSON response with no issues both in my browser and in Postman. I can also request other URLs from this script absolutely fine.
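If the hang comes from IDLE trying to print the very large JSON body (this is an assumption, but the status code does arrive first), one workaround is to parse the response and print only a summary, or to write the body to a file instead of the shell. A minimal sketch; the history.json filename is just an example:
import requests

url = 'https://zkillboard.com/api/history/20161217/'
req = requests.get(url)
print(req.status_code)

# Avoid dumping the whole body into IDLE's shell: parse it and
# print only a summary instead.
data = req.json()
print(len(data), 'entries in the history for 20161217')

# Or write the raw text to a file for inspection outside of IDLE.
with open('history.json', 'w') as f:
    f.write(req.text)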
Related
I want to write a function that collects data from the Yahoo Finance site. The request looks like this:
import requests
def yahoo_summary_stats(stock):
    response = requests.get(f"https://finance.yahoo.com/quote/{stock}")
    print(response.reason)
If I call the function with the parameter 'ALB':
yahoo_summary_stats('ALB')
everything works fine and the request is OK. It correctly leads me to:
https://finance.yahoo.com/quote/ALB
The call:
yahoo_summary_stats('AEE')
should, on the other hand, lead me to https://finance.yahoo.com/quote/AEE, which I can open in Firefox without any problems.
For some reason the program gives me a 'Not Found' error. What is the problem with my request to that website?
Try setting a User-Agent in the headers:
def yahoo_summary_stats(stock):
    response = requests.get(f"http://finance.yahoo.com/quote/{stock}",
                            headers={'User-Agent': 'Custom user agent'})
    print(response.status_code)
    print(response.reason)
yahoo_summary_stats('ALB')
yahoo_summary_stats('AEE')
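If a generic custom string is still rejected, a more browser-like User-Agent can be used the same way; the exact value below is only an example copied from a desktop browser, not anything Yahoo documents:
headers = {
    'User-Agent': ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                   'AppleWebKit/537.36 (KHTML, like Gecko) '
                   'Chrome/120.0 Safari/537.36')
}
response = requests.get('https://finance.yahoo.com/quote/AEE', headers=headers)
print(response.status_code, response.reason)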
I'm trying to log in to the Starbucks website (login URL: https://app.starbucks.com/account/signin?ReturnUrl=https%3A%2F%2Fapp.starbucks.com%2Fprofile) with no success.
I used the Firefox inspect tool to find the URL I'm supposed to send a POST request to and what the payload data should look like. The request URL is "https://www.starbucks.com/bff/account/signin" and the payload is something like {"username": "my_username", "password": "my_password"}, so here's my code:
import requests
url = 'https://www.starbucks.com/bff/account/signin'
uname = "my_username"
pwd = "my_password"
payload = {"username":uname, "password":pwd}
with requests.Session() as s:
    p = s.post(url, data=payload)
    print(p.status_code)
The status_code that is printed is always 200, which is strange, because whenever I type invalid credentials manually I see a 400 response code in the network tab of the inspect tool. Also, whenever I do print(p.content) instead of printing the status code, the content is always the same (for both wrong and correct credentials).
Can somebody help me out?
Thanks in advance
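One thing that stands out is that the payload shown in the inspector is JSON, while data= sends it form-encoded. A sketch worth trying, assuming the endpoint really expects a JSON body and no additional anti-bot tokens (which a site like this may well require):
import requests

url = 'https://www.starbucks.com/bff/account/signin'
payload = {'username': 'my_username', 'password': 'my_password'}

with requests.Session() as s:
    # json= serialises the dict and sets Content-Type: application/json,
    # whereas data= sends application/x-www-form-urlencoded.
    p = s.post(url, json=payload, headers={'User-Agent': 'Mozilla/5.0'})
    print(p.status_code)
    print(p.text)
If the status code is still 200 with identical content for wrong credentials, the site is probably expecting extra headers or tokens that the browser sends and this sketch does not.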
I'm trying to log in through the requests post method. I took a screenshot of the parameter values of the POST request in Firefox.
import requests
params = {'ctl00$ContentPlaceHolder1$txtEmail': 'someusr',
          'ctl00$ContentPlaceHolder1$txtPwd': 'somepswrd'}
r = requests.post('https://somewebsite.com/login.aspx', data=params, verify=True)
print(r.text)
Any idea why the code above won't log me in?
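A common reason an .aspx login fails from requests is that ASP.NET WebForms pages expect the hidden __VIEWSTATE and __EVENTVALIDATION fields from the login page to be posted back within the same session. A sketch, assuming the form follows that pattern (the field and button names must be checked against the actual page):
import requests
from bs4 import BeautifulSoup

login_url = 'https://somewebsite.com/login.aspx'

with requests.Session() as s:
    # Fetch the login page first and collect the hidden WebForms fields.
    page = s.get(login_url)
    soup = BeautifulSoup(page.text, 'html.parser')
    params = {inp.get('name'): inp.get('value', '')
              for inp in soup.find_all('input', type='hidden')
              if inp.get('name')}

    params['ctl00$ContentPlaceHolder1$txtEmail'] = 'someusr'
    params['ctl00$ContentPlaceHolder1$txtPwd'] = 'somepswrd'
    # The submit button's name/value may also need to be included;
    # check the form in the browser's network tab.

    r = s.post(login_url, data=params)
    print(r.status_code)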
I am trying to get an http response from a website using the requests module. I get status code 410 in my response:
<Response [410]>
From the documentation, it appears that the resource may have been intentionally made unavailable to clients. Is this indeed the case, or am I missing something? I'm trying to confirm whether the webpage can be scraped at all:
url='http://www.b2i.us/profiles/investor/ResLibraryView.asp?ResLibraryID=81517&GoTopage=3&Category=1836&BzID=1690&G=666'
try:
    response = requests.get(url)
except requests.exceptions.RequestException as e:
    print(e)
Some websites don't respond well to HTTP requests with 'python-requests' as the User-Agent string.
You can get a 200 OK response if you set the User-Agent header to 'Mozilla'.
url='http://www.b2i.us/profiles/investor/ResLibraryView.asp?ResLibraryID=81517&GoTopage=3&Category=1836&BzID=1690&G=666'
headers={'User-Agent':'Mozilla/5'}
response = requests.get(url, headers=headers)
print(response)
<Response [200]>
This works on macOS, but I'm having issues with the same approach on Windows, in a VMware virtual machine I run automated tasks from. Why would the behavior be different? Is there a separate workaround for Windows machines?
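It may help to confirm what the Windows machine is actually sending and to use a fuller, browser-like header set via a Session. The header values below are only examples, and a proxy or security product on the VM could still be rewriting the request:
import requests

url = ('http://www.b2i.us/profiles/investor/ResLibraryView.asp'
       '?ResLibraryID=81517&GoTopage=3&Category=1836&BzID=1690&G=666')

with requests.Session() as s:
    s.headers.update({
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
        'Accept': 'text/html,application/xhtml+xml',
    })
    response = s.get(url)
    # Inspect the headers that actually went out.
    print(response.request.headers)
    print(response.status_code)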
I am testing the Python requests library to see if it is suitable for my work. Here is my sample code for reference:
import requests
url = "http://www.genenetwork.org/webqtl/main.py?cmd=sch&gene=Grin2b&tissue=hip&format=text"
print url
print requests.get(url)
My Output:
http://www.genenetwork.org/webqtl/main.py?cmd=sch&gene=Grin2b&tissue=hip&format=text
<Response [200]>
Output that I get from my browser & my expected result:
What makes the difference? How can I get my expected result? I want to process the data inside the webpage.
Your code is currently printing the Response object itself (its repr only shows the status code), not the page content. You can access the requested content via the text attribute of the Response object returned by the get method.
import requests
r = requests.get("http://www.genenetwork.org/webqtl/main.py?cmd=sch&gene=Grin2b&tissue=hip&format=text")
print(r.text)