I am trying to send an HTTPS request using the Python requests library.
My code is:
full_url = ''.join(['https://', get_current_site(request).domain, '/am/reply'])
data = {'agent_type': 'trigger', 'input': platform, 'user': request.user.id}
print "hi"  ### this line is printed
a = requests.get(full_url, params=data, verify=False)  ## execution hangs here, and no error appears
print "hello"  ## this line is never printed
The problem is that execution never continues past the requests call; the whole script hangs at that point.
I tried the same code in the Python shell and it runs perfectly.
Is there any way I can debug the request/response in real time, or can someone suggest a solution?
The whole thing worked fine over HTTP, but after switching to HTTPS it stopped working. I even tried pointing to the certificate file, but with no success.
This is not unusual. Some websites accept only HTTP, some only HTTPS, and some both; HTTP uses port 80 and HTTPS uses port 443. HTTPS is HTTP over TLS, so the client has to complete a TLS handshake (and normally certificate verification) before any data is exchanged. Check the requests documentation on SSL certificate verification:
http://docs.python-requests.org/en/master/user/advanced/#ssl-cert-verification
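As for seeing what requests is doing in real time: one option is to enable the wire-level debug logging that the standard library provides, and to add a timeout so a silent hang turns into a visible exception. A minimal sketch, assuming Python 3 (on Python 2 the http.client module is called httplib) and a placeholder URL:

import logging
import http.client
import requests

# Dump raw request/response lines and let urllib3's debug logs through.
http.client.HTTPConnection.debuglevel = 1
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("urllib3").setLevel(logging.DEBUG)

# timeout turns an indefinite hang into a requests.exceptions.Timeout error.
r = requests.get("https://example.com/am/reply", params={"user": 1}, timeout=10, verify=False)
print(r.status_code)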
Related
I'm trying to make one simple request:
from fake_useragent import UserAgent
import requests
ua = UserAgent()
req = requests.get('https://www.casasbahia.com.br/', headers={'User-Agent': ua.random})
I would understand if I received <Response [403]> or something like that, but instead I receive nothing; the code keeps running with no response.
Using logging I see:
I know I could use a timeout to stop the code from hanging, but I want to understand why I don't get a response.
Thanks in advance.
I had never used this API before, but from what I researched just now, some sites block requests coming from fake user agents.
To reproduce this example on my PC, I installed the fake_useragent and requests modules on Python 3.10 and ran your script. It turns out that with my authentic UserAgent string the request completes; printing req.text on the console shows the entire HTML file received from the request.
But if I try again with a fake user agent, using ua.random, it fails. The site was probably built to detect and reject requests from fake agents (or bots).
Again, this is just a theory; I have no way to access the site's server files to confirm it.
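For what it's worth, here is a hedged sketch of the working variant described above: sending a fixed, browser-like User-Agent header instead of ua.random (the header value is only an illustrative string), plus a timeout so a blocked request fails instead of hanging forever.

import requests

# Illustrative browser-like User-Agent string.
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36'
}

req = requests.get('https://www.casasbahia.com.br/', headers=headers, timeout=15)
print(req.status_code)
print(req.text[:200])  # beginning of the HTML, as described above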
I'm trying to make a wallpaper page using the website "https://www.wallpaperflare.com".
When I run it on localhost it always works and displays the original page of the website.
But when I deploy to Heroku, the page doesn't show the original site; instead it shows "Error Get Request, Code 403", which means the request to that URL fails.
This is my code:
@app.route("/wallpapers", methods=["GET"])
def wallpaper():
    page = requests.get("https://www.wallpaperflare.com")
    if page.status_code == 200:
        return page.text
    else:
        return "Error Get Request, Code {}".format(page.status_code)
Is there a way to solve it?
HTTP error code 403 means Forbidden. You can read more here.
It means wallpaperflare.com is not allowing you to make the request, because websites generally do not want scripts making automated requests to them. Make sure to read a site's robots.txt to see its crawling policy. More on that here.
It works on your local machine because that machine's IP address has not yet been blacklisted by wallpaperflare.com.
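If you want to check a site's crawling policy from code rather than reading robots.txt by hand, the standard library ships a parser for it. A minimal sketch (the user-agent name here is made up):

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url('https://www.wallpaperflare.com/robots.txt')
rp.read()

# Ask whether a given user agent may fetch a given path.
print(rp.can_fetch('MyScraper/1.0', 'https://www.wallpaperflare.com/'))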
Two things are at play here:
the user agent: unless you spoof it, the requests module sends its own User-Agent string, which makes it very obvious that you are a bot;
the IP address: your server's IP address may be denied for various reasons, whereas your home IP address works just fine.
It is also possible that the remote site applies different policies depending on the client: if you look like a bot you might be allowed to crawl a little, but rate-limiting measures could apply, for example.
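A hedged sketch of addressing the first point inside the Heroku app: send a browser-like User-Agent header (the string below is only an example) so the request no longer announces itself as python-requests. If the block is based on the server's IP address, no header change will help.

import requests
from flask import Flask

app = Flask(__name__)

# By default requests identifies itself as "python-requests/<version>",
# which is trivially recognised as a bot; this is an illustrative browser string.
HEADERS = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/120.0 Safari/537.36'
}

@app.route("/wallpapers", methods=["GET"])
def wallpaper():
    page = requests.get("https://www.wallpaperflare.com", headers=HEADERS, timeout=15)
    if page.status_code == 200:
        return page.text
    return "Error Get Request, Code {}".format(page.status_code)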
I'm hacking together an Amazon API, and when using Python requests without a proxy, it prompts for a captcha. When I route the same requests traffic through Fiddler, it seems to pass without a problem. Is it possible that Amazon is fingerprinting Python requests and Fiddler changes the fingerprint since it's a proxy?
I compared the headers sent from Fiddler and from Python requests and they are the same.
There are no extra proxying/Fiddler rules or filters set that would cause a change.
To be clear, all the proxying mentioned is done locally, so it does not change the public IP address.
Thank you!
The reason is that websites fingerprint your requests from the TLS ClientHello. Libraries such as JA3 generate a fingerprint for each connection, and sites use it to intentionally block HTTP clients like requests or urllib. If you use a MITM proxy, the proxy creates a new TLS connection to the server, so the server only sees the proxy's fingerprint and does not block it.
If the server only blocks certain popular HTTP libraries, you can simply change the TLS configuration (for example the TLS version or cipher list); that gives you a fingerprint different from the default one.
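As an illustration of that idea with plain requests (a sketch only; it is not guaranteed to get past any particular site): mount a transport adapter that uses a custom SSL context, which changes the ClientHello and therefore the JA3 fingerprint.

import ssl
import requests
from requests.adapters import HTTPAdapter

class CustomTLSAdapter(HTTPAdapter):
    # Force TLS 1.2 and a reduced cipher list, altering the TLS fingerprint.
    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.create_default_context()
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2
        ctx.maximum_version = ssl.TLSVersion.TLSv1_2
        ctx.set_ciphers('ECDHE+AESGCM')  # a smaller cipher list changes the ClientHello
        kwargs['ssl_context'] = ctx
        return super().init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount('https://', CustomTLSAdapter())
print(session.get('https://tls.browserleaks.com/json').json())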
If the server only allows popular real-world browsers and accepts only those as valid requests, you will need a library that can imitate browser fingerprints, such as curl-impersonate and its Python binding curl_cffi.
pip install curl_cffi
from curl_cffi import requests
# Notice the impersonate parameter
r = requests.get("https://tls.browserleaks.com/json", impersonate="chrome101")
print(r.json())
# output: {'ja3_hash': '53ff64ddf993ca882b70e1c82af5da49'
# the fingerprint should be the same as target browser
I am running a Flask RESTful API behind an NGINX web server on AWS, and I am hitting it with a Python module from my Pi.
Everything worked fine when I was using HTTP to make calls to the API, but I just locked the API down so that only HTTPS is possible. I changed the URL used by my Python module, but it now fails. The code is quite simple; here is an extract:
jsonpkg = {'subscriberID': self.api_login, 'token': self.api_token,
'content': speech_content}
headers = {'Content-Type': 'application/json'}
r = requests.post(self.api_apiurl, data=json.dumps(jsonpkg), headers=headers)
The values are correctly set by the class init section, and I am importing the requests module at the top. Error messages indicate it is using Python 2.7. However, when I monitor the API I can see it is not even hitting the server, while pointing a browser at the API works fine.
Am I to understand that the requests module in Python 2.7 does not support HTTPS?
Are there additional parameters I need to send for HTTPS?
Aha! With a little more digging into the requests module docs I found the answer. If I use the following:
r = requests.post(self.api_apiurl, data=json.dumps(jsonpkg), headers=headers, verify=False)
then it works. So the issue is with verifying the certificate. I am not quite sure why the browser gets by without this; perhaps it does the extra work automatically. So I either need to NOT verify the certificate, or have a local copy(?) that can be verified.
Final update:
I finally worked out how to concatenate my site certificate with the chain certificate (and I now understand why). This site here was a great help. Also, once they are concatenated you will probably get a second error; if you google it you will find it is caused by the need for a carriage return after the first certificate and before the second (edit the resulting concatenated file with Notepad). I was then able to go back to using verify=True in the post, which made the warnings about skipping verification go away.
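For completeness, a sketch of the verified variant (the URL, payload and bundle path below are placeholders): instead of verify=False, point verify at the concatenated site + chain certificate file and requests will use it as the CA bundle.

import json
import requests

jsonpkg = {'subscriberID': 'user', 'token': 'secret', 'content': 'hello'}
headers = {'Content-Type': 'application/json'}

r = requests.post('https://api.example.com/speech',            # placeholder URL
                  data=json.dumps(jsonpkg),
                  headers=headers,
                  verify='/etc/ssl/certs/site-fullchain.pem')   # placeholder bundle path
print(r.status_code)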
I need to intercept an HTTP response packet from the server and replace it with my own response, or at least modify that response, before it arrives at my browser.
I'm already able to sniff this response and print it; the problem is manipulating/replacing it.
Is there a way to do so with the scapy library?
Or do I have to connect my browser through a proxy to manipulate the response?
If you want to work from your ordinary browser, then you need a proxy between the browser and the server in order to manipulate the traffic. See, for example, https://portswigger.net/burp/, a proxy created specifically for penetration testing with easy replacement of responses/requests (and it is scriptable, too).
If you want to script the whole session in scapy, then you can create requests and responses to your liking, but the response does not go to the browser. You can also record an ordinary web session (with tcpdump/wireshark/scapy) into a pcap, then use scapy to read the pcap, modify it, and send similar requests to the server.
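A small sketch of that last pcap workflow, assuming a capture file named session.pcap containing plain-HTTP traffic (the file name and paths are made up): read the capture with scapy, pull HTTP requests out of the TCP payloads, and modify them; replaying them to the server can then be done with an ordinary HTTP client, since as noted the response will not reach the browser anyway.

from scapy.all import rdpcap, TCP, Raw

packets = rdpcap('session.pcap')

for pkt in packets:
    # Plain-HTTP requests sit in the TCP payload (Raw layer) of port-80 traffic.
    if pkt.haslayer(TCP) and pkt.haslayer(Raw) and pkt[TCP].dport == 80:
        payload = pkt[Raw].load
        if payload.startswith(b'GET ') or payload.startswith(b'POST '):
            # Example modification: swap the requested path before replaying.
            modified = payload.replace(b'/original-path', b'/modified-path')
            print(modified.decode(errors='replace'))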