requests.exceptions.TooManyRedirects: Exceeded 30 redirects - python

What should I do about this error? From the start I was getting a 403 and could not access the site, so I decided to set a User-Agent header.
import requests

headers = {
    'Host': 'mvideo.ru',
    'User-Agent': 'Safari',
    'Accept': '*/*',
    'Accept-Encoding': 'gzip, deflate, br',
    'Connection': 'keep-alive'
}

mvideo_requests = requests.get('https://www.mvideo.ru/smartfony-i-svyaz-10/smartfony-205/f/category=iphone-914', headers=headers)
print(mvideo_requests)

Try using different headers.
headers = {"User-Agent": 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Mobile Safari/537.36'}
mvideo_requests =requests.get('https://www.mvideo.ru/smartfony-i-svyaz-10/smartfony-205/f/category=iphone-914', headers = headers)
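If a different User-Agent alone doesn't fix it, it can help to see where the redirect loop actually goes before requests gives up at 30 hops. A minimal debugging sketch (the URL is the one from the question; the hop limit of 10 is arbitrary):

import requests
from urllib.parse import urljoin

url = 'https://www.mvideo.ru/smartfony-i-svyaz-10/smartfony-205/f/category=iphone-914'
headers = {'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Mobile Safari/537.36'}

# Follow redirects by hand so each hop is printed instead of raising TooManyRedirects.
for _ in range(10):
    response = requests.get(url, headers=headers, allow_redirects=False)
    print(response.status_code, url)
    if response.status_code not in (301, 302, 303, 307, 308):
        break
    url = urljoin(url, response.headers['Location'])

Seeing the Location chain usually makes it obvious whether the site is bouncing between two URLs, for example because of a mismatched Host header or missing cookies.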

Related

Failed to log in to a website using the requests module

I'm trying to log in to a website through a python script that I've created using the requests module. I've issued a post HTTP request with appropriate parameters and headers to the server, but for some reason I get a different response from that site compared to what I see in dev tools. The status is always 200, though. There is also a get request in place within the script that should fetch the credentials once the login is successful. Currently, it throws a JSONDecodeError on the last line.
import requests

link = 'https://propwire.com/login'
check_url = 'https://propwire.com/search'

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36',
    'x-requested-with': 'XMLHttpRequest',
    'referer': 'https://propwire.com/login',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9,bn;q=0.8',
    'origin': 'https://propwire.com',
}
payload = {"email": "some-email", "password": "password", "remember": "true"}

with requests.Session() as s:
    r = s.get(link)
    headers['x-xsrf-token'] = r.cookies['XSRF-TOKEN'].rstrip('%3D')
    s.headers.update(headers)
    s.post(link, json=payload)
    res = s.get(check_url)
    print(res.json()['props']['auth'])
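Not a confirmed fix for this particular site, but two things stand out and are worth checking. The XSRF-TOKEN cookie is normally URL-encoded, so decoding the whole value with urllib.parse.unquote is safer than stripping a literal '%3D', and looking at the Content-Type before calling .json() shows whether the server returned JSON at all or just an HTML page (which is what produces the JSONDecodeError). A sketch along those lines:

from urllib.parse import unquote

import requests

link = 'https://propwire.com/login'
check_url = 'https://propwire.com/search'
payload = {"email": "some-email", "password": "password", "remember": "true"}

with requests.Session() as s:
    r = s.get(link)
    # Decode the full cookie value instead of trimming the trailing '%3D'.
    s.headers['x-xsrf-token'] = unquote(r.cookies['XSRF-TOKEN'])
    s.headers['x-requested-with'] = 'XMLHttpRequest'
    s.headers['referer'] = link
    s.post(link, json=payload)
    res = s.get(check_url)
    # Only parse JSON if the server actually sent JSON.
    if 'application/json' in res.headers.get('Content-Type', ''):
        print(res.json().get('props', {}).get('auth'))
    else:
        print(res.status_code, res.headers.get('Content-Type'))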

Unable to emulate browser POST requests without getting a Response [500]. Why?

import requests

url = 'https://cmoffice.kenes.com/cmsearchableprogrammev15/conferencemanager/CM_W3_SearchableProgram/api/persionid/anonymous/type/normal/getfilteredsessions/conference/igcs19'

headers = {
    'accept': '*/*',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8',
    'content-type': 'application/json; charset=UTF-8',
    'cookie': '_ga=GA1.2.471841928.1549896884; _gid=GA1.2.1479150813.1563120868; __RequestVerificationToken_L2NtU2VhcmNoYWJsZVByb2dyYW1tZVYxNQ2=t57HyXHVNBIm0HZ33v1WyG8hRa4j4RlDEOvFtEfPakPgH5AutBjAN5pSRHnBx_BpBhbMnH6R-tIhSdop_VMtLF-aY7XcXTRFt7vg5X46zgE1; _gat=1',
    'origin': 'https://cmoffice.kenes.com',
    'referer': 'https://cmoffice.kenes.com/cmsearchableprogrammeV15/conferencemanager/programme/personid/anonymous/igcs19/normal/b833d15f547f3cf698a5e922754684fa334885ed',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36',
    'x-requested-with': 'XMLHttpRequest'
}

response = requests.post(url, headers=headers)
print(response)
This gives Response [500], yet the browser is able to get a JSON response with status code 200.
Can anyone shed some light on why this happens and how to solve it?
Something appears to be wrong on the backend: it returns a 500 when you POST to it, which could mean almost anything, for example a missing configuration value or a programming error.
If I open the given URL in a browser I actually get a 405 'Method Not Allowed' error.
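One more thing worth trying (an assumption, since the payload this endpoint expects isn't documented here): the request declares 'content-type: application/json' but sends no body at all, and some endpoints answer 500 when the JSON body is missing. Copying the JSON body the browser sends (visible in the dev tools Network tab) and passing it via the json= argument lets requests set the content type and serialize it for you:

import requests

url = ('https://cmoffice.kenes.com/cmsearchableprogrammev15/conferencemanager/'
       'CM_W3_SearchableProgram/api/persionid/anonymous/type/normal/getfilteredsessions/conference/igcs19')

# Hypothetical placeholder: replace with the real filter object copied from the browser request.
payload = {}

response = requests.post(
    url,
    json=payload,  # sets Content-Type: application/json and serializes the body
    headers={
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36',
        'x-requested-with': 'XMLHttpRequest',
    },
)
print(response.status_code)
print(response.text[:500])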

Why does my Python POST request not work?

import requests

session = requests.Session()
url = 'https://supremenewyork.com/shop/304070/add'

headers = {
    'Accept': '*/*;q=0.5, text/javascript, application/javascript, application/ecmascript, application/x-ecmascript',
    'Origin': 'https://www.supremenewyork.com',
    'X-CSRF-Token': 'cGh34LIXA5O75UEl+ArjyIQA/CS6BGY9mFleXXZ5GnznS4t8y2rGTpUTumG93EHNwSfnkDDtsYLvbEGbmMymRQ==',
    'X-Requested-With': 'XMLHttpRequest',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36',
    'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
}
post_data = {
    'commit': 'add to basket',
    'size': '53133',
    'style': '25229',
    'utf8': '✓'
}

session.post(url=url, headers=headers, data=post_data, timeout=1)
r = session.get('https://supremenewyork.com/shop/cart.json', headers=headers)
print(r.text)
The post data is correct (I took it from Google Chrome), but every time the code returns nothing because the basket is empty. How do I make the POST request correctly?
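A sketch of how I would debug this, not a verified fix for this shop: look at what the add-to-basket POST actually returns instead of discarding it, and fetch a shop page in the same session first so the cookies and CSRF token are fresh. The hard-coded X-CSRF-Token in the question was copied from one browser session and will stop being accepted; the assumption below is that a current token can be read from a <meta name="csrf-token"> tag in the page source, which you should verify yourself:

import re
import requests

session = requests.Session()
session.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36'

# Load a shop page first so the session carries fresh cookies and (assumed) a fresh CSRF token.
shop_page = session.get('https://www.supremenewyork.com/shop/all')
match = re.search(r'name="csrf-token"\s+content="([^"]+)"', shop_page.text)
csrf_token = match.group(1) if match else ''  # assumption: the token sits in a meta tag

post_data = {'commit': 'add to basket', 'size': '53133', 'style': '25229', 'utf8': '✓'}
resp = session.post(
    'https://www.supremenewyork.com/shop/304070/add',
    data=post_data,
    headers={'X-CSRF-Token': csrf_token, 'X-Requested-With': 'XMLHttpRequest'},
)
# Inspect the add-to-basket response before checking the cart; an error message here
# explains an empty basket much faster than guessing.
print(resp.status_code, resp.text)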

Python requests with the same headers as my Chrome browser gets 403 errors

I have written some scraping code that uses requests.get(url, headers=headers) with headers exactly the same as my Chrome browser's, except for the cookie.
Initially it works fine, but later it starts getting a 403 error.
My Chrome browser gets the data without any error, yet my Python requests code doesn't work. What is the problem?
import requests

url = 'http://www.matchesfashion.com/en-kr/products/1171735'

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Whale/0.10.36.11 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Language': 'ko-KR,ko;q=0.8,en-US;q=0.6,en;q=0.4',
    'Host': 'www.matchesfashion.com',
    'Upgrade-Insecure-Requests': '1',
    'Cache-Control': 'max-age=0',
    'Accept-Encoding': 'gzip, deflate'
}

r = requests.get(url, headers=headers)
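One common cause, though it is only an assumption here since the block happens server-side: sites like this often set anti-bot cookies on the first page load and then reject cookie-less requests with 403. Reusing a Session so the cookies from an initial visit are sent with the product request is a cheap thing to try:

import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Whale/0.10.36.11 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Language': 'ko-KR,ko;q=0.8,en-US;q=0.6,en;q=0.4',
}

with requests.Session() as s:
    s.headers.update(headers)
    s.get('http://www.matchesfashion.com/en-kr/')  # pick up whatever cookies the site sets on a first visit
    r = s.get('http://www.matchesfashion.com/en-kr/products/1171735')
    print(r.status_code)

If the 403 persists, the block is probably based on more than headers and cookies (TLS or IP fingerprinting, for example), which plain requests cannot imitate.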

Python Requests post on a site not working

I am trying to scrape property information from https://www.ura.gov.sg/realEstateIIWeb/resiRental/search.action using Python Requests. Using Chrome I have inspected the POST request and emulated it using requests. I use sessions to maintain cookies. When I try my code, the return from the website is "missing parameters in search query" so obviously something is wrong with my requests (though it is not obvious what).
Doing some digging, there was one cookie that I did not get when doing requests.get on the search page, so I added it manually. Still no go. I tried emulating the request headers exactly as well; it still does not return the correct results.
The only time I have gotten it to work is when I manually copy the cookies from my browser to the Python request object.
import requests

url = 'https://www.ura.gov.sg/realEstateIIWeb/resiRental/submitSearch.action;jsessionid={}'

values = {
    'submissionType': 'pn',
    'from_Date_Prj': 'JAN-2014',
    'to_Date_Prj': 'JAN-2016',
    '__multiselect_projectNameList': '',
    'selectedProjects': '10 SHELFORD',
    '__multiselect_selectedProjects': '',
    'propertyType': 'lp',
    'from_Date': 'JAN-2016',
    'to_Date': 'JAN-2016',
    '__multiselect_postalDistrictList': '',
    '__multiselect_selectedPostalDistricts': ''
}
header1 = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, sdch',
    'Accept-Language': 'en-US,en;q=0.8,nb;q=0.6,no;q=0.4',
    'Cache-Control': 'max-age=0',
    'Connection': 'keep-alive',
    'Host': 'www.ura.gov.sg',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36'
}
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate',
    'Accept-Language': 'en-US,en;q=0.8,nb;q=0.6,no;q=0.4',
    'Cache-Control': 'max-age=0',
    'Connection': 'keep-alive',
    'Content-Type': 'application/x-www-form-urlencoded',
    'Host': 'www.ura.gov.sg',
    'Origin': 'https://www.ura.gov.sg',
    'Referer': 'https://www.ura.gov.sg/realEstateIIWeb/resiRental/search.action',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36'
}

with requests.Session() as r:
    page1 = r.get('https://www.ura.gov.sg/realEstateIIWeb/resiRental/search.action', headers=header1)
    requests.utils.add_dict_to_cookiejar(r.cookies, {'BIGipServerpl-prod_iis_web_v4': '3334383808.20480.0000'})
    page2 = r.post(url.format(r.cookies.get_dict()['JSESSIONID']), data=values, headers=headers)
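A debugging sketch rather than a known fix (assumptions: the search form may carry hidden fields or a per-session token that the server treats as required parameters): print the cookies the session actually received so you can compare them with the browser's, and collect the form's hidden inputs from the page instead of hard-coding them. BeautifulSoup is used here purely for illustration:

import requests
from bs4 import BeautifulSoup  # third-party, used only to read the form's hidden fields

search_url = 'https://www.ura.gov.sg/realEstateIIWeb/resiRental/search.action'

with requests.Session() as s:
    page1 = s.get(search_url)

    # 1) Compare these against the cookies your browser sends with the working request.
    print(s.cookies.get_dict())

    # 2) Gather every hidden input the form actually contains; a "missing parameters"
    #    error often means one of these was left out of the hand-built payload.
    soup = BeautifulSoup(page1.text, 'html.parser')
    hidden = {inp.get('name'): inp.get('value', '')
              for inp in soup.select('form input[type=hidden]') if inp.get('name')}
    print(hidden)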
