This question already has answers here:
How can I use cookies in Python Requests?
(4 answers)
Closed 3 years ago.
I am trying to get some data from a webpage, using Python requests, that requires logging in first. The login HTTP request's response header contains Set-Cookie values, which are needed for the next HTTP request to the webpage. Could anyone tell me how to use those Set-Cookie values for the subsequent GET request?
Try this:
import requests

session = requests.Session()
response = session.get('http://google.com')
# cookies the server set during this session
print(session.cookies.get_dict())
Or, without using a session:
response = requests.get('http://google.com')
print(response.cookies.get_dict())
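If you do need to carry the cookies from one plain response into the next request yourself, you can pass the first response's cookie jar via the cookies= parameter. A minimal sketch, with example.com standing in for the real site and the paths purely hypothetical:

```python
import requests

# First request: any Set-Cookie headers from the server land in login.cookies
login = requests.get('http://example.com/login')

# Second request: send that cookie jar back explicitly
page = requests.get('http://example.com/data', cookies=login.cookies)
```

A Session does the same thing for you automatically, which is why it is usually the better choice.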
This question already has answers here:
Python Requests and persistent sessions
(8 answers)
Closed 4 years ago.
I am logging into a website using the code below:
import requests

payload = {'user': 'apple', 'password': 'banana'}
loginurl = 'http://12.345.678.910:8000/login'
r = requests.post(loginurl, data=payload)
data = r.json()
print(data)
As a response to the code above I get the following output:
{u'message': u'Logged in'}
Now I am trying to get some data from that website using the GET request below:
DataURL = "http://12.345.678.910:8000/api/datasources/proxy/1/query?db=AB_CDE&q=SELECT%20sum(%22count%22)%20FROM%20%22gatling%22%20WHERE%20%22status%22%20%3D%20%27ok%27%20AND%20%22simulation%22%20%3D~%20%2Fabdokd-live*%2F%20AND%20time%20%3E%201544491800000ms%20and%20time%20%3C%201544495400000ms%20GROUP%20BY%20%22script%22&epoch=ms"
Datar = requests.get(url=DataURL)
response = Datar.json()
print(response)
As a response to the code above I get:
{u'message': u'Unauthorized'}
This is not expected, since in the previous step I already logged into the website. Can someone help me correct my code?
You will probably need to look into how the authentication mechanism works over HTTP. Most likely your server is returning either a cookie or some other identifying header. Cookies are easiest because the browser will (to a first approximation) automatically return the cookies it gets from a server when making further requests. Your existing code isn't doing that.
Since you are using the requests library, you should look at the answer to this question, which might shed some light on the problem.
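A minimal sketch of how that looks with a requests.Session, reusing the URLs from the question (assuming the server identifies you via a cookie set by the login response):

```python
import requests

session = requests.Session()  # keeps cookies across requests

payload = {'user': 'apple', 'password': 'banana'}
loginurl = 'http://12.345.678.910:8000/login'
session.post(loginurl, data=payload)

# The session automatically re-sends whatever cookies the login set,
# so this request arrives authenticated.
DataURL = 'http://12.345.678.910:8000/api/datasources/proxy/1/query'  # query string from the question omitted
Datar = session.get(url=DataURL)
print(Datar.json())
```

The only change from your code is that both requests go through the same Session object instead of the module-level requests functions.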
This question already has answers here:
Is there an easy way to request a URL in python and NOT follow redirects?
(7 answers)
Closed 4 years ago.
I know that it is possible to check whether a URL redirects, as mentioned in the following question and its answer:
How to check if the url redirect to another url using Python
using the following code:
import urllib2

req = urllib2.Request(url=url, headers=headers)
resp = urllib2.urlopen(req, timeout=3)
redirected = resp.geturl() != url  # redirected will be a boolean True/False
However, I have a list of millions of URLs, and for each of them it is an open question whether it is a harmful URL or redirects to one.
Is it possible to check for a redirect without opening a connection to the redirect target, so that I never create a connection with a harmful website?
You can do a HEAD request and check the status code. If you are using the third-party requests library, you can do that like this:
import requests
original_url = '...' # your original url here
response = requests.head(original_url)
if response.is_redirect:
print("Redirecting")
else:
print("Not redirecting")
This question already has answers here:
"SSL: certificate_verify_failed" error when scraping https://www.thenewboston.com/
(7 answers)
Closed 4 years ago.
I am trying to download this URL, which is a frame on this page.
I have tried like this:
import urllib.request
url = 'https://tips.danskespil.dk/tips13/?#/tps/poolid/2954'
response = urllib.request.urlopen(url)
html = response.read()
and also this way:
import requests
page = requests.get(url)
but both ways give me the error SSL: CERTIFICATE_VERIFY_FAILED.
Any help would be much appreciated.
If you're not worried about security (which you should be), your best bet is to use verify=False in the request call:
page = requests.get(url, verify=False)
You can also set verify to the path of a CA bundle file (or a directory of certificates) from trusted CAs, like so:
verify = '/path/to/certfile'
You can refer to the documentation here for all the ways to get around it
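A safer alternative to verify=False is to keep verification on but point verify at a CA bundle that includes the certificate chain the server actually uses. As a sketch, using the certifi package (the bundle requests relies on by default):

```python
import certifi
import requests

# certifi.where() is the path of the default CA bundle file; you can copy
# that file, append a missing certificate in PEM form, and pass your copy
# via verify= instead of disabling verification entirely.
print(certifi.where())
page = requests.get('https://tips.danskespil.dk/tips13/', verify=certifi.where())
```

That way you only trust the one extra certificate rather than skipping verification for everything.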
This question already has answers here:
Flask hangs when sending a post request to itself
(2 answers)
Closed 5 years ago.
I was trying to make a GET request and obtain a jsonified response, but the request never completes; the browser just keeps showing a loading status.
from flask import Flask, jsonify
import requests

app = Flask(__name__)

@app.route('/user/json')
def json():
    users = User.query.all()
    list_serialized = [user.serialize() for user in users]
    return jsonify(list_serialized)

@app.route('/recieve')
def recieve():
    headers = {'content-type': 'application/json'}
    users = requests.get('http://127.0.0.1:5000/user/json', headers=headers).json()
    print(users)
    return users
Use app.run(threaded=True) when running the app in your main function.
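In context, the change looks like this. The development server runs single-threaded by default, so while it is busy inside /recieve it can never answer the nested request to /user/json, and the two wait on each other forever; threaded=True lets both requests be served at once (a sketch, with the routes from the question assumed defined above):

```python
from flask import Flask

app = Flask(__name__)

# ... the /user/json and /recieve routes from the question go here ...

if __name__ == '__main__':
    # Handle each request in its own thread, so the nested request
    # to /user/json can be served while /recieve is still waiting.
    app.run(threaded=True)
```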
This question already has an answer here:
Using Python Requests: Sessions, Cookies, and POST
(1 answer)
Closed 6 years ago.
I need to send a GET request to a page that I have to log in to first, using the requests module.
I tried to first send a POST request to log in with my information, and then to send the session cookie (the PHPSESSID) along with the GET request:
cookie = {'PHPSESSID': 'm7v485i9g1rfm3tqcn0aa531rvjf5d26'}
x = requests.get('https://www.example.com/',cookies=cookie)
but it doesn't work!
Any idea how to open the page?
Instead of trying to hijack a session, log in using requests with something like:
session = requests.Session()
session.post(loginUrl, data={'username': username, 'password': password})  # plus anything else the login page posts
response = session.get(anyInternalPage)