How to send headers for form data - Python

I am trying to log in to GameStop (gamestop.ca), but it won't let me, as shown below. I've tried adding various headers, but no luck.
I'm just wondering whether I am adding the data incorrectly when sending my POST request. Can someone confirm?
The content type shown for the response in the browser's Network tab is:
content-type: text/html; charset=utf-8
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36'}
payload = {
    'UserName': '*****',
    'Password': '******',
    'RememberMe': 'false'
}

with requests.Session() as s:
    r = s.post('https://www.gamestop.ca/Account/LogOn', headers=headers, data=payload)
    print(r.status_code)
    print(r.content)
    print(r.text)
403
b'<HTML><HEAD>\n<TITLE>Access Denied</TITLE>\n</HEAD><BODY>\n<H1>Access Denied</H1>\n \nYou don\'t have permission to access "http://www.gamestop.ca/Account/LogOn" on this server.<P>\nReference #18.70fd017.1635453520.211da9c5\n</BODY>\n</HTML>\n'
<HTML><HEAD>
<TITLE>Access Denied</TITLE>
</HEAD><BODY>
<H1>Access Denied</H1>
You don't have permission to access "http://www.gamestop.ca/Account/LogOn" on this server.<P>
Reference #18.70fd017.1635453520.211da9c5
</BODY>
</HTML>
P.S. I can log in with Selenium, so I don't think the error is caused by bot protection.
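For reference, the usual pattern for sending form data with browser-like headers is a Session that first GETs the site (to pick up any cookies it sets) and then POSTs. This is only a sketch: the "Access Denied ... Reference #" body looks like the kind of page served by Akamai-style bot protection, which may block plain requests no matter which headers you send.

import requests

# browser-like headers; the User-Agent alone is often not enough
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.9',
    'Referer': 'https://www.gamestop.ca/',
}

payload = {
    'UserName': '*****',
    'Password': '******',
    'RememberMe': 'false',
}

with requests.Session() as s:
    s.headers.update(headers)
    # GET the site first so the session holds any cookies it sets
    s.get('https://www.gamestop.ca/')
    # data= sends the payload form-encoded, matching a normal login form post
    r = s.post('https://www.gamestop.ca/Account/LogOn', data=payload)
    print(r.status_code)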

Related

url that worked with urllib.urlopen does not with requests.get

I have a script that used to work with urllib and now has to use requests. I have a URL I use to put data in a database:
http://www.example.com/insert.php?network=testnet&id=1245100&c=2800203&lat=7555344
This URL worked through urllib (urlopen), but I get 403 Forbidden when requesting it through requests.get:
import requests

HEADER = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.61 Safari/537.36'}
headers = requests.utils.default_headers()
headers.update = (HEADER,)
payload = {'network': 'testnet', 'id': '1245300', 'c': '2803824', 'lat': '7555457'}
response = requests.get("http://www.example.com/insert.php", headers=headers, params=payload)
print(f"Remote commit: {response.text}")
print(response.url)
The URL works in a browser and returns a simple JSON OK response.
The script produces:
Remote commit: <html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx</center>
</body>
</html>
http://www.example.com/insert.php?network=testnet&id=1245300&c=2803824&lat=7555457
Not sure what I am doing wrong.
Edit: changed https to http.
A 403 Forbidden is sometimes correlated with an SSL/TLS certificate verification failure. Please try requests.get with verify=False set, as follows.
Fixing the SSL certificate issue
requests.get("https://www.example.com/insert.php?network=testnet&id=1245300&c=2803824&lat=7555457", verify=False)
Fixing the TLS certificate issue
Check out my answer on the TLS certificate verification fix.
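Note that verify=False disables certificate verification entirely and should not be left on in production; requests will also emit an InsecureRequestWarning on every such call, which can be silenced if you accept the risk:

import urllib3

# acknowledge that certificate verification is intentionally off
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)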
Somehow I overcomplicated it; when I tried the absolute minimum, it works:
import requests
headers = { 'User-Agent' : 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.61 Safari/537.36' }
response = requests.get("http://www.example.com/insert.php?network=testnet&id=1245200&c=2803824&lat=7555457", headers=headers)
print(response.text)
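That works because a likely culprit in the original snippet is headers.update = (HEADER,): it rebinds the dict's update method to a tuple instead of calling it, so HEADER is never merged and the request goes out with requests' default python-requests User-Agent, which many servers reject. To keep the default headers and merge your own, call the method:

import requests

HEADER = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.61 Safari/537.36'}

# start from requests' defaults and merge the custom User-Agent in
headers = requests.utils.default_headers()
headers.update(HEADER)  # call update(); assigning to it replaces the method with a tuple
print(headers['User-Agent'])  # now the Chrome string, not python-requests/x.y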


Can't find the right compression for this webpage (python requests.get)

I can load this webpage in Google Chrome, but I can't access it via requests. Any idea what the compression problem is?
Code:
import requests
url = r'https://www.huffpost.com/entry/sean-hannity-gutless-tucker-carlson_n_60d5806ae4b0b6b5a164633a'
headers = {'Accept-Encoding':'gzip, deflate, compress, br, identity'}
r = requests.get(url, headers=headers)
Result:
ContentDecodingError: ('Received response with content-encoding: gzip, but failed to decode it.', error('Error -3 while decompressing data: incorrect header check'))
Use a user agent that emulates a browser:
import requests
url = r'https://www.huffpost.com/entry/sean-hannity-gutless-tucker-carlson_n_60d5806ae4b0b6b5a164633a'
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36"}
r = requests.get(url, headers=headers)
You're getting a 403 Forbidden error, which you can see using requests.head. Use RJ's suggestion above to get past HuffPost's robot blocking.
>>> requests.head(url)
<Response [403]>
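As for the original ContentDecodingError: the 403 page apparently arrives with a Content-Encoding header its body doesn't match, so requests fails while decompressing instead of reporting the status. A sketch for surfacing the real failure first (stream=True defers the body, so the status can be checked before any decoding happens):

import requests

url = 'https://www.huffpost.com/entry/sean-hannity-gutless-tucker-carlson_n_60d5806ae4b0b6b5a164633a'

# stream=True defers downloading/decoding the response body
r = requests.get(url, stream=True)
r.raise_for_status()  # raises requests.exceptions.HTTPError on a 403
print(r.status_code, r.headers.get('Content-Encoding'))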

Python login to website with requests returns 403

I am trying to log in to https://www.zalando-lounge.com/#/ using requests. The problem is that the website returns a 403 status code.
This is my code:
import requests

url = 'https://www.zalando-lounge.com/onboarding-api/login'
payload = {
    'email': 'my_email',
    'password': 'my_password',
    'onlyLogin': 'true'
}
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:83.0) Gecko/20100101 Firefox/83.0'
}

with requests.Session() as s:
    r = s.post(url, json=payload, headers=headers)
    print(r.status_code)
    print(r.text)
In response I get a 403 status code and this:
<HTML><HEAD>
<TITLE>Access Denied</TITLE>
</HEAD><BODY>
<H1>Access Denied</H1>
You don't have permission to access "http://failover.www.zalando-lounge.de/waf_deny_lounge.html?" on this server.<P>
Reference #18.379ec817.1608234674.21c18108
</BODY>
</HTML>
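The "Access Denied ... Reference #" body looks like the same Akamai-style bot-protection page as in the GameStop question above, so extra headers alone may not help. Independent of that, it's worth checking that the request body matches what the browser sends: in requests, json= and data= produce different bodies and Content-Type headers. A sketch using prepared requests (no network traffic involved), with the endpoint URL reused from the question:

import requests

payload = {'email': 'my_email', 'password': 'my_password', 'onlyLogin': 'true'}

# json= serializes the dict as a JSON body and sets Content-Type: application/json
r_json = requests.Request('POST', 'https://www.zalando-lounge.com/onboarding-api/login', json=payload).prepare()
print(r_json.headers['Content-Type'])  # application/json
print(r_json.body)

# data= form-encodes the dict and sets Content-Type: application/x-www-form-urlencoded
r_form = requests.Request('POST', 'https://www.zalando-lounge.com/onboarding-api/login', data=payload).prepare()
print(r_form.headers['Content-Type'])  # application/x-www-form-urlencoded
print(r_form.body)  # email=my_email&password=my_password&onlyLogin=true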

Python 3.4 login on aspx

I'm trying to log in to an ASPX page, then get the contents of another page as a logged-in user.
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/Login.aspx"
durl = "https://example.com/Daily.aspx"

user_agent = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36'
language = 'en-US,en;q=0.8'
encoding = 'gzip, deflate'
accept = 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'
connection = 'keep-alive'

headers = {
    "Accept": accept,
    "Accept-Encoding": encoding,
    "Accept-Language": language,
    "Connection": connection,
    "User-Agent": user_agent
}

username = "user"
password = "pass"

s = requests.Session()
s.headers.update(headers)

# fetch the login page and scrape the ASP.NET hidden form fields
r = s.get(URL)
print(r.cookies)
soup = BeautifulSoup(r.content, "html.parser")

LASTFOCUS = soup.find(id="__LASTFOCUS")['value']
EVENTTARGET = soup.find(id="__EVENTTARGET")['value']
EVENTARGUMENT = soup.find(id="__EVENTARGUMENT")['value']
VIEWSTATEFIELDCOUNT = soup.find(id="__VIEWSTATEFIELDCOUNT")['value']
VIEWSTATE = soup.find(id="__VIEWSTATE")['value']
VIEWSTATE1 = soup.find(id="__VIEWSTATE1")['value']
VIEWSTATE2 = soup.find(id="__VIEWSTATE2")['value']
VIEWSTATE3 = soup.find(id="__VIEWSTATE3")['value']
VIEWSTATE4 = soup.find(id="__VIEWSTATE4")['value']
VIEWSTATEGENERATOR = soup.find(id="__VIEWSTATEGENERATOR")['value']

# post the hidden fields back along with the login form values
login_data = {
    "__LASTFOCUS": "",
    "__EVENTTARGET": "",
    "__EVENTARGUMENT": "",
    "__VIEWSTATEFIELDCOUNT": "5",
    "__VIEWSTATE": VIEWSTATE,
    "__VIEWSTATE1": VIEWSTATE1,
    "__VIEWSTATE2": VIEWSTATE2,
    "__VIEWSTATE3": VIEWSTATE3,
    "__VIEWSTATE4": VIEWSTATE4,
    "__VIEWSTATEGENERATOR": VIEWSTATEGENERATOR,
    "__SCROLLPOSITIONX": "0",
    "__SCROLLPOSITIONY": "100",
    "ctl00$NameTextBox": "",
    "ctl00$ContentPlaceHolderNavPane$LeftSection$UserLogin$UserName": username,
    "ctl00$ContentPlaceHolderNavPane$LeftSection$UserLogin$Password": password,
    "ctl00$ContentPlaceHolderNavPane$LeftSection$UserLogin$LoginButton": "Login",
    "ctl00$ContentPlaceHolder1$RetrievePasswordUserNameTextBox": "",
    "hiddenInputToUpdateATBuffer_CommonToolkitScripts": "1"
}

r1 = s.post(URL, data=login_data)
print(r1.cookies)
d = s.get(durl)
print(d.cookies)
dsoup = BeautifulSoup(r1.content, "html.parser")
print(dsoup)
But the cookies are not preserved in the session, and I can't get to the next page as a logged-in user.
Can someone give me some pointers on this?
Thanks.
When you post to the login page:
r1 = s.post(URL, data=login_data)
it is likely issuing a redirect to another page. The response to the POST request returns the cookies, and then the server redirects; what is captured in r1 is the redirect target, which does not carry the cookies.
Try the same command, but without allowing redirects:
r1 = s.post(URL, data=login_data, allow_redirects=False)
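A short sketch of what the non-redirected response lets you inspect, reusing the session and names from the question:

r1 = s.post(URL, data=login_data, allow_redirects=False)
print(r1.status_code)               # e.g. 302 if the login POST redirects
print(r1.cookies)                   # cookies set directly on this response
print(r1.headers.get("Location"))   # where the redirect would have gone

# the session retains the cookies it received, so reuse it for the next page
d = s.get(durl)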
