I used Charles Proxy to inspect the traffic a mobile app sends to its server, and now I am trying to use Python's requests package to recreate the same call and get the information. I have three questions that I need help with.
Which of the headers shown in Charles do I need to include in my request?
I assumed everything except the cookies, since this app does not require any sort of login to access the information. Am I correct in assuming this, or should I add the cookies? If so, which ones?
I did try running a POST request via Python but got an SSL error. How can I overcome this error? I read about setting the option verify=False, but that didn't work either. This is a mobile app, so I am not sure I could use my browser's certificate, and I have no idea how to do that anyway.
headers = {
"content-type": "application/json;charset=UTF-8",
"tlioscnx": "Wi-Fi",
"accept": "application/json",
"x-app-route": "SL-RSB",
"tliosloc": "gn=my_trips:list&ch=",
"tliosid": "15.6.1, iPhone13,3, 5.29.1",
"x-offer-route": "SL-SHOP",
"x-adapter": "mobile",
"x-acf-sensor-data": "2,i,UNUm0[...]",
"accept-encoding": "gzip;q=1.0, compress;q=0.5",
"accept-language": "en-US,en;q=0.9",
"content-length": "209",
"user-agent": "iPhone, iOS 15.6.1, 5.29.1, Phone"
}
payload = {Json text as shown in Charles}
response = requests.post(url=URL, headers=headers, json=payload)
response.raise_for_status()
print(response.json())
Output: requests.exceptions.SSLError: HTTPSConnectionPool(host='api.delta.com', port=443): Max retries exceeded with url: /mwsb/service/shop (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:992)')))'
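Since Charles intercepts TLS with its own self-signed root certificate, one way to keep verification enabled while proxying is to point requests at that certificate. A minimal sketch, assuming you have exported the root certificate from Charles (Help > SSL Proxying > Save Charles Root Certificate...) to a .pem file; the path below is a placeholder:

```python
import requests

# Placeholder path: the root certificate exported from Charles.
CHARLES_CA = "charles-ssl-proxying-certificate.pem"

session = requests.Session()
# Validate against Charles' CA instead of disabling verification entirely.
session.verify = CHARLES_CA
# The session would then be used for the actual call, e.g.:
# response = session.post("https://api.delta.com/mwsb/service/shop",
#                         headers=headers, json=payload)
```

If the error persists when the proxy is off, the problem is on the machine's own trust store instead.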
I am getting the following 403 Forbidden response when trying to access a site from within my Python app. As far as I can tell, I am NOT being challenged by Cloudflare with any CAPTCHA, as is the case in a lot of other people's similar questions; it is asking me to enable cookies. The website returns 200 OK if I try via curl or via any browser, so it is not IP restrictions; it is just my Python request it doesn't like. I have tried various combinations of User-Agent to no avail, tried http, https and nothing at all before the target URL, and I have mimicked exactly what the browser's Network Inspector shows in the request headers from a successful regular browser GET.
Here's the error in the HTTP response (status 403):
Please enable cookies.
Error 1020
Ray ID: 69c89e49895c40d7 • 2021-10-11 14:01:04 UTC
Access denied
What happened?
This website is using a security service to protect itself from online attacks.
Cloudflare Ray ID: 69c89e49895c40d7 • Your IP: x.x.x.x • Performance & security by Cloudflare
Please enable cookies.
Here's my Python:
```
import requests

r = requests.get(
    "https://www.oddschecker.com",
    headers={
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101 Firefox/68.0",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "en-GB,en;q=0.5",
        "Accept-Encoding": "gzip, deflate, br",
        "Connection": "keep-alive",
        "Upgrade-Insecure-Requests": "1",
        "Cache-Control": "max-age=0",
        "Host": "www.oddschecker.com",
        "TE": "Trailers",
    },
)
```
(Note: the ":method" and ":scheme" entries shown in the browser inspector are HTTP/2 pseudo-headers, not real request headers, so they are not included here.)
Questions:
How does Cloudflare know that I need to enable cookies, or that I'm not a regular browser, just from my Python request? I send the request and get an immediate 403 back. The request is exactly the same as the one a browser sends. It's almost as though there is some traffic between my request and the 403 that Network Inspector doesn't show. I used Fiddler too, and that shows the same thing: a GET request and an immediate 403 response.
How DO I enable cookies within Python?
The Python Requests library has support for adding a cookie dictionary:
https://stackoverflow.com/a/7164897/13343799
You can see your cookies' key-value pairs (in Chrome) by pressing F12 to open Developer Tools -> Application tab -> Cookies -> select a cookie and see the cookie value below it. The key is before the =, the value is after.
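As a minimal sketch of that approach (the cookie names and values below are placeholders, not the site's real ones), the pairs copied from DevTools can be passed to requests like this:

```python
import requests

# Placeholder cookie pairs copied from DevTools (Application tab > Cookies).
cookies = {"cf_clearance": "example-token", "session_id": "abc123"}

session = requests.Session()
session.cookies.update(cookies)  # now sent automatically with every request
# r = session.get("https://www.oddschecker.com", headers={...})

# Alternatively, per-request: requests.get(url, cookies=cookies)
```

Using a Session also means any Set-Cookie headers in responses are stored and replayed on later requests, which is closer to what a browser does.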
I'm trying to run a simple web-scraping code in cmd Windows10:
import requests
url = 'https://downdetector.ru/ne-rabotaet/tinkoff-bank/'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0', 'Referer': 'https://downdetector.ru'}
response = requests.get(url, headers=headers)
print(response)
and getting an error:
SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)'))
The weirdest thing is that when my colleague runs this exact code on his macOS machine, it works perfectly fine. So what could be the problem here?
P.S. I've read all the other questions on this topic and couldn't find an answer.
This is probably caused by the certificate used by https://downdetector.ru not being present in your machine's trust store.
Your code can work if you set the verify parameter to False, as follows. Be aware that this is not recommended.
response = requests.get(url, headers=headers, verify=False)
You can find more detail about the verify parameter here: https://docs.python-requests.org/en/master/user/advanced/#ssl-cert-verification
Note that when verify is set to False, requests will accept any TLS certificate presented by the server, and will ignore hostname mismatches and/or expired certificates, which will make your application vulnerable to man-in-the-middle (MitM) attacks. Setting verify to False may be useful during local development or testing.
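A safer first step than verify=False: requests validates certificates against the CA bundle shipped with the certifi package, and a stale bundle on one machine (but not another) is a common cause of "unable to get local issuer certificate" appearing only on Windows. A small sketch of how to see which bundle is in use:

```python
import certifi

# requests validates server certificates against certifi's CA bundle by
# default. If the bundle is outdated, upgrading it
# (pip install -U certifi) may fix the error without disabling verification.
bundle_path = certifi.where()
print(bundle_path)  # path to the .pem bundle requests will use
```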
I'm trying to use requests to connect my python client to HP ALM so I can export Defects and Requirements.
My problem is that when I try to connect to ALM I get this error.
requests.exceptions.SSLError: HTTPSConnectionPool(host='hpalm.xxx.com', port=443): Max retries exceeded with url: /qcbin/authentication-point/authenticate (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))
my function is the following
def connection():
    # Hardcoded login, to be deleted when tested and able to run with login verification
    user = userPass.user()
    pwd = userPass.passwd()
    # Not sure if needed, need to test
    encPwd = base64.standard_b64encode(pwd.encode('utf-8'))
    print(encPwd)
    userToEncode = user + ':' + pwd
    print("user2Encode : {0}", userToEncode)
    # requests lib to ALM connection
    headers = {
        'cache-control': "no-cache",
        'Accept': "application/json",
        'Content-Type': "application/json"
    }
    authurl = almURL + "/authentication-point/authenticate"
    res = requests.post(authurl, auth=HTTPBasicAuth(user, encPwd), headers=headers)
So far I've tried to follow this example: https://github.com/vkosuri/py-hpalm/blob/master/hpalm/hpalm.py
but when I use verify=False I get:
InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
and the following print of the log:
{'Date': 'Fri, 05 Mar 2021 17:36:51 GMT', 'X-Frame-Options': 'SAMEORIGIN', 'Content-Type': 'text/html; charset=ISO-8859-1', 'Cache-Control': 'must-revalidate,no-cache,no-store', 'Content-Length': '5937', 'Connection': 'close'}
any ideas what I'm doing wrong?
Thank you
There are a lot of answers here on Stack Overflow about connecting to QC.
I would recommend you go through:
https://github.com/macroking/ALM-Integration/blob/master/ALM_Integration_Util.py
HP ALM results attachment and status update using python
These will help you understand the login process
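As a rough sketch of that login flow (the URL and credentials below are placeholders): note that HTTPBasicAuth base64-encodes the credentials itself, so the plain password should be passed rather than a pre-encoded one, and using a Session keeps the authentication cookie for later calls.

```python
import requests
from requests.auth import HTTPBasicAuth

almURL = "https://hpalm.example.com/qcbin"  # placeholder host

session = requests.Session()
# Pass the plain password; HTTPBasicAuth does the base64 encoding itself.
session.auth = HTTPBasicAuth("myuser", "mypassword")
# session.verify = "/path/to/corporate-ca.pem"  # preferable to verify=False
# res = session.post(almURL + "/authentication-point/authenticate",
#                    headers={"Accept": "application/json"})
# The LWSSO session cookie set by a successful authenticate call is then
# reused automatically by later calls on the same session.
```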
I am sending a POST request through a proxy but keep running into a proxy error.
I have already tried multiple solutions on Stack Overflow for [WinError 10061] No connection could be made because the target machine actively refused it.
I tried changing system settings and verified that the remote server exists and is running; also, no HTTP_PROXY environment variable is set on the system.
import requests
proxy = {IP_ADDRESS:PORT} #proxy
proxy = {'https': 'https://' + proxy}
#standard header
header={
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.87 Safari/537.36",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
"Referer": "https://tres-bien.com/adidas-yeezy-boost-350-v2-black-fu9006-fw19",
"Accept-Encoding": "gzip, deflate, br",
"Accept-Language": "en-GB,en-US;q=0.9,en;q=0.8"
}
#payload to be posted
payload = {
"form_key":"1UGlG3F69LytBaMF",
"sku":"adi-fw19-003",
# above two values are dynamically populating the field; hardcoded the value here to help you replicate.
"fullname": "myname",
"email": "myemail#gmail.com",
"address": "myaddress",
"zipcode": "areacode",
"city": "mycity" ,
"country": "mycountry",
"phone": "myphonenumber",
"Size_raffle":"US_11"
}
r = requests.post(url, proxies=proxy, headers=header, verify=False, json=payload)
print(r.status_code)
Expected output: 200, alongside an email verification sent to my email address.
Actual output: requests.exceptions.ProxyError: HTTPSConnectionPool(host='tres-bien.com', port=443): Max retries exceeded with url: /adidas-yeezy-boost-350-v2-black-fu9006-fw19 (Caused by ProxyError('Cannot connect to proxy.', NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it',)))
Quite a few things are wrong here... (after looking at the raffle page you're trying to POST to, I suspect it is https://tres-bien.com/adidas-yeezy-boost-350-v2-black-fu9006-fw19, based on the exception you posted).
1) I'm not sure what's going on with your first definition of proxy as a dict instead of a string. That said, it's probably good practice to define both http and https proxies; if your proxy supports https, it should also support http.
proxy = {
'http': 'http://{}:{}'.format(IP_ADDRESS, PORT),
'https': 'https://{}:{}'.format(IP_ADDRESS, PORT)
}
2) The second issue is that the raffle you're trying to submit to takes URL-encoded form data, not JSON. Your request should therefore be structured like:
r = requests.post(
url=url,
headers=headers,
data=payload
)
3) That page has a ReCaptcha present, which is missing from your form payload. This isn't why your request is getting a connection error, but you're not going to successfully submit a form that has a ReCaptcha field without a proper token.
4) Finally, I suspect the root of your ProxyError is you are trying to POST to the wrong url. Looking at Chrome Inspector, you should be submitting this data to
https://tres-bien.com/tbscatalog/manage/rafflepost/ whereas your exception output indicates you are POSTing to https://tres-bien.com/adidas-yeezy-boost-350-v2-black-fu9006-fw19
Good luck with the shoes.
I am working with Python 2.7 and I need to access a certain API (Nuage Networks): https://nuagenetworks.github.io/vsd-api-documentation/usage.html
In the documentation, they say the following:
Getting the API key.
To obtain an API key, the first step is to make a /me API call. This API call returns information about the account being used.
GET /me HTTP/1.1
X-Nuage-Organization: my company
Content-Type: application/json
Authorization: $AUTHORIZATION_STRING
The authorization string for the /me API MUST be formatted like the following:
$AUTHORIZATION_STRING = Basic base64($LOGIN:$PASSWORD)
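Following that spec, the header value can be built like this (credentials are placeholders; the snippet works on both Python 2.7 and 3):

```python
import base64

login, password = "myuser", "mypass"  # placeholders
# base64-encode "login:password" exactly as the docs describe.
token = base64.b64encode("{}:{}".format(login, password).encode("utf-8")).decode("ascii")
auth_header = "Basic " + token
print(auth_header)  # Basic bXl1c2VyOm15cGFzcw==
```

Note that passing auth=(user, passw) to requests builds exactly this header for you, so there is no need to supply both the auth argument and a hand-built Authorization header.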
So I try to build my request in the following way:
import requests
url = 'https://an.ip.add.ress:8443/nuage/api/v4_0/me'
user = 'myuser'
passw = 'mypass'
cps = 'myorganization'
headers = {
"Authorization": "Basic d29jdTpjdXdv",
"Cache-Control": "no-cache",
"Content-Type": "application/json",
"X-Nuage-Organization": "csp",
}
response = requests.get(url, auth=(user, passw), headers=headers)
# Also tried with:
# response = requests.get(url, headers=headers)
However, I'm always getting this error:
requests.exceptions.SSLError: bad handshake: Error([('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')],)
Any idea on how to access this API with Python requests? Or any other way?
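Since the endpoint here is an IP address on port 8443, it likely presents a self-signed or privately-signed certificate that fails validation against the default CA bundle. A sketch of the two usual options (the path is a placeholder):

```python
import requests

session = requests.Session()

# Option 1 (preferred): trust the server's certificate chain explicitly.
# session.verify = "/path/to/vsd-certificate.pem"

# Option 2 (testing only): disable verification entirely.
session.verify = False

# response = session.get("https://an.ip.add.ress:8443/nuage/api/v4_0/me",
#                        auth=("myuser", "mypass"),
#                        headers={"X-Nuage-Organization": "csp",
#                                 "Content-Type": "application/json"})
```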