Python requests call fails with HTTPS

I am running a Flask RESTful API behind an NGINX web server on AWS, and I am hitting it with a Python module from my Pi.
Everything worked fine when I was using HTTP to make calls to the API, but I just locked down my API so only HTTPS is possible. I changed the URL used by my Python module, but it now fails. The code is quite simple; here is an extract:
jsonpkg = {'subscriberID': self.api_login, 'token': self.api_token,
           'content': speech_content}
headers = {'Content-Type': 'application/json'}
r = requests.post(self.api_apiurl, data=json.dumps(jsonpkg), headers=headers)
The values are being correctly set by the class's init section, and I am importing the requests module at the top. Error messages indicate it is using Python 2.7. However, when I monitor the API I can see it is not even hitting the server, while pointing a browser at the API works fine.
Am I to understand that the requests module in Python 2.7 does not support HTTPS?
Are there additional parameters I need to send for HTTPS?

Aha! With a little more digging into the requests module docs I found the answer. If I use the following:
r = requests.post(self.api_apiurl, data=json.dumps(jsonpkg), headers=headers, verify=False)
then it works. So the issue is with verifying the certificate. I am not quite sure why the browser gets by without this, but perhaps it handles the extra verification steps automatically. So I either need to NOT verify the cert, or have a local copy of it that can be verified.
Final Update:
I finally worked out how to concatenate my site certificate with the chain certificate (and understand why). This site here was a great help. Also, once they are concatenated you will probably get a second error, which, if you google it, you will find is caused by the need for a line break after the first certificate and before the second (edit the resulting concatenated file in a text editor). I was then able to return the post to using "verify=True", which made the warnings about no verification go away.
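For reference, a minimal sketch of the two options discussed above; the bundle path, URL, and payload names below are placeholders rather than the original code, and on older vendored installs urllib3 may live under requests.packages.urllib3:

import requests
import urllib3

api_url = 'https://example.com/api'    # placeholder URL
payload = '{"key": "value"}'           # placeholder JSON body
headers = {'Content-Type': 'application/json'}

# Option 1 (preferred): verify against the concatenated site + chain bundle
r = requests.post(api_url, data=payload, headers=headers,
                  verify='/path/to/site-plus-chain.pem')  # hypothetical path

# Option 2 (insecure): skip verification and silence the resulting warning
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
r = requests.post(api_url, data=payload, headers=headers, verify=False)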

Related

Why don't I get a response from my request?

I'm trying to make one simple request:
import requests
from fake_useragent import UserAgent  # missing imports in the original snippet

ua = UserAgent()
req = requests.get('https://www.casasbahia.com.br/', headers={'User-Agent': ua.random})
I would understand if I received <Response [403]> or something like that, but instead I receive nothing; the code keeps running with no response.
Using logging, I see:
I know I could use a timeout to avoid keeping the code running, but I just want to understand why I don't get a response.
Thanks in advance.
I had never used this library before, but from what I researched just now, some sites block requests that come from fake user agents.
To reproduce this example on my PC, I installed the fake_useragent and requests modules on Python 3.10 and tried to execute your script. It turns out that with my authentic User-Agent string the request completes: when printed to the console, req.text shows the entire HTML file received from the request.
But if I try again with a fake user agent, using ua.random, it fails. The site was probably built to detect and reject requests from fake agents (or bots).
Then again, this is just a theory; I have no way to access this site's server files to confirm it.
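As a hedged sketch of the two workarounds mentioned above (the User-Agent string below is just an example of a realistic browser value, not taken from the original post, and the timeout turns a silent hang into an exception):

import requests

# Example of a realistic browser User-Agent string (an assumption)
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                         'AppleWebKit/537.36 (KHTML, like Gecko) '
                         'Chrome/120.0 Safari/537.36'}

# timeout=10 raises requests.exceptions.Timeout instead of hanging forever
req = requests.get('https://www.casasbahia.com.br/', headers=headers, timeout=10)
print(req.status_code)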

python requests.post is interpreted as GET

I am interfacing with an external API using Python's requests module. When I submit the request using requests.post(), the API returns:
{"Message":"The requested resource does not support http method 'GET'."}
When I try the exact same request with my browser's RESTED client (also using POST method), it works just fine.
Not really much in the way of sample code to give; it's just a simple
r = requests.post(url, data=data, headers=headers)
It would seem the fault is either with the API or with Python not submitting an entirely standard POST request (since it works with RESTED). Has anyone ever run into this scenario before? Is there a Python alternative to requests.post()?
I'm on Python 3.9.1 on Windows.
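One common cause worth ruling out (an assumption here, since the poster gives no URL): requests follows redirects by default, and when a server answers a POST with a 301 or 302 (for example, redirecting http:// to https://), the followed request is reissued as a GET. The response's redirect history makes this easy to check:

import requests

url = 'http://example.com/endpoint'  # placeholder; use the real API URL
data = {'key': 'value'}              # placeholder payload
headers = {'Content-Type': 'application/x-www-form-urlencoded'}  # placeholder

r = requests.post(url, data=data, headers=headers)
print(r.history)  # non-empty if one or more redirects were followed
print(r.url)      # the final URL that was actually hit

# Disabling redirects surfaces the redirect response itself
r = requests.post(url, data=data, headers=headers, allow_redirects=False)
print(r.status_code, r.headers.get('Location'))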

HTTPS request using python requests library

I am trying to send an HTTPS request using the Python requests library.
My code is:
full_url = ''.join(['https://', get_current_site(request).domain, '/am/reply'])
data = {'agent_type': 'trigger', 'input': platform, 'user': request.user.id}
print "hi"  # this is printed
a = requests.get(full_url, params=data, verify=False)  # execution gets stuck here; no error appears
print "hello"  # this line is never printed
The problem is that nothing executes after the requests call; the whole script is stuck at that point.
I tried to verify my code in the Python shell and it ran perfectly.
Is there any way I can debug the request and response in real time, or can someone suggest a solution?
The whole code was working fine over HTTP, but after switching to HTTPS it stopped working. I even tried supplying the certificate file, but with no success.
Some websites accept only HTTP, some only HTTPS, and some both. HTTP uses port 80 and HTTPS uses port 443. HTTPS is secure HTTP, so it needs extra information in the handshake and headers. Check the requests documentation on SSL certificate verification:
http://docs.python-requests.org/en/master/user/advanced/#ssl-cert-verification
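To watch the request in real time, as the poster asked, one option is wire-level logging. This is a sketch assuming Python 3, where the module is http.client (on Python 2 it is httplib, and on old requests versions the logger is named requests.packages.urllib3); a timeout also converts a silent hang into an exception:

import logging
import http.client

import requests

# Print the raw request/response lines sent by requests (via urllib3)
http.client.HTTPConnection.debuglevel = 1
logging.basicConfig(level=logging.DEBUG)
logging.getLogger('urllib3').setLevel(logging.DEBUG)

full_url = 'https://example.com/am/reply'  # placeholder for the snippet's URL
data = {'agent_type': 'trigger'}           # placeholder for the snippet's params

# timeout=10 raises instead of hanging forever
a = requests.get(full_url, params=data, verify=False, timeout=10)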

Python httplib disable certificate validation

I have the following code to work with:
import httplib

def createCon(host, auth):
    con = httplib.HTTPSConnection(host)
    return con

def _readJson(con, url):
    con.putrequest("GET", url)
    ...
    r = con.getresponse()
It works against one specific server, but on another I get an SSLError. If I open the URL in a browser, I need to accept the certificate and then it works fine. But how can I either accept the certificate or disable its validation? It is a self-signed certificate stored in a Java keystore, so I would prefer to disable verification.
The code is meant to reuse the connection between requests, so I would prefer not to modify it deeply.
How could I do this? I tried to modify the examples but haven't succeeded:
con.putrequest("GET",url, verify=False)
or
con.request.verify=False
I do not know how could I access the session or request objects or modify the default settings.
UPDATE this does not help:
socket.ssl.cert_reqs='CERT_NONE'
Well, the actual error message is weird:
SSLError: '[Errno 1] _ssl.c:492: error:100AE081:elliptic curve routines:EC_GROUP_new_by_curve_name:unknown group'
Regards,
Bence
Your error message points to a bug in the OpenSSL version you use. See https://bugzilla.redhat.com/show_bug.cgi?id=1022468. In short: the client advertises capabilities it does not have, and if the server picks such a capability you get this error message. It needs to be fixed by upgrading your local OpenSSL installation. A workaround on the server side should be possible too, if you have control over the server.
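Separately, on the original question of disabling validation in httplib: from Python 2.7.9 onward (when httplib started verifying certificates by default), HTTPSConnection accepts an SSL context, so a hedged sketch looks like this (the host is a placeholder):

import httplib
import ssl

# Insecure: skips certificate verification (acceptable only for a known
# self-signed certificate on a trusted network)
context = ssl._create_unverified_context()
con = httplib.HTTPSConnection('myserver.example.com', context=context)
con.putrequest("GET", "/some/url")
con.endheaders()
r = con.getresponse()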

Is it possible to fetch a https page via an authenticating proxy with urllib2 in Python 2.5?

I'm trying to add authenticating-proxy support to an existing script. As it stands, the script connects to an https url (with urllib2.Request and urllib2.urlopen), scrapes the page and performs some actions based on what it has found. Initially I had hoped this would be as easy as simply adding a urllib2.ProxyHandler({"http": MY_PROXY}) as an arg to urllib2.build_opener, which in turn is passed to urllib2.install_opener. Unfortunately this doesn't seem to work when attempting a urllib2.Request(ANY_HTTPS_PAGE). Googling around leads me to believe that the proxy support in urllib2 in Python 2.5 does not cover https urls. This surprised me, to say the least.
There appear to be solutions floating around the web; for example, http://bugs.python.org/issue1424152 contains a patch for urllib2 and httplib which purports to solve the issue (when I tried it, I began to get the following error instead: urllib2.URLError: <urlopen error (1, 'error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol')>). There is a cookbook recipe at http://code.activestate.com/recipes/456195 which I am planning to try next. All in all, though, I'm surprised this isn't supported "out of the box", which makes me wonder if I'm simply missing an obvious solution. So, in short: has anyone got a simple method for fetching https pages using an authenticating proxy with urllib2 in Python 2.5? Ideally this would work:
import urllib2
#perhaps the dictionary below needs a corresponding "https" entry?
#That doesn't seem to work out of the box.
proxy_handler = urllib2.ProxyHandler({"http": "http://user:pass@myproxy:port"})
urllib2.install_opener(urllib2.build_opener(urllib2.HTTPHandler,
                                            urllib2.HTTPSHandler,
                                            proxy_handler))
request = urllib2.Request(A_HTTPS_URL)
response = urllib2.urlopen(request)
print response.read()
Many Thanks
You may want to look into httplib2. One of the examples claims support for SOCKS proxies if the socks module is installed.
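A hedged sketch of that approach, assuming httplib2 plus the socks (SocksiPy) module are installed; the proxy host, port, and credentials below are placeholders:

import httplib2
import socks  # SocksiPy; supplies the PROXY_TYPE_* constants httplib2 expects

proxy_info = httplib2.ProxyInfo(
    proxy_type=socks.PROXY_TYPE_HTTP,  # plain HTTP proxying, not SOCKS
    proxy_host='myproxy',
    proxy_port=8080,
    proxy_user='user',
    proxy_pass='pass',
)
h = httplib2.Http(proxy_info=proxy_info)
response, content = h.request('https://example.com/', 'GET')
print response.status  # Python 2 print statement, matching the question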
