Certificate transparency: how to query certificates for a domain - Python

I want to get a list of the SSL certificates used by all FQDNs of a domain name. For example, imagine I search for google.com certificates: I will get the google.com and www.google.com certificates, but I also want to get the checkout.google.com certificate and others.
For this I can use this page, which uses certificate transparency: https://www.google.com/transparencyreport/https/ct/#domain=google.com&incl_exp=false&incl_sub=true
This page points to a GitHub repository: https://github.com/google/certificate-transparency
I cloned it and installed everything needed to use the Python dashboard, but I don't know how to query the database to find all google.com certificates.
There is no public API available and no pre-filled database...
Do you know a way to get all FQDNs for a domain name by using certificate transparency?

This is an old question, but in case others stumble across this...
https://crt.sh exists now, and you can query all the certificate domains it has observed. Facebook also has an email subscription service that notifies you of new certificates for specific domains: https://developers.facebook.com/tools/ct/.
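For illustration, a minimal sketch of querying crt.sh from Python. Its JSON output is not a formally documented API, so the query syntax and the name_value field used here are assumptions based on how crt.sh currently responds:
import requests

domain = "google.com"
resp = requests.get("https://crt.sh/", params={"q": "%." + domain, "output": "json"})
resp.raise_for_status()

# Each entry's name_value may hold several newline-separated FQDNs.
names = set()
for entry in resp.json():
    for name in entry.get("name_value", "").splitlines():
        names.add(name.lower())

for fqdn in sorted(names):
    print(fqdn)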


Python Request: SSL Verify

I am using the Python requests module to hit a REST API. I have to use SSL for security.
I see that I can set
requests.get(url, verify='/path/ca/bundle/')
However, I am confused as to what needs to be passed as the CA_BUNDLE.
I get the server certificate using
cert = ssl.get_server_certificate((server,port))
Can someone let me know how I should use this certificate in my request? Should I convert the cert to an X509/.pem/.der/.crt file?
Solved it. Apparently I needed to get the entire certificate chain and create a CA bundle out of it.
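For illustration, a minimal sketch of that fix, assuming the full chain (server certificate plus intermediates and root) has already been saved to a single PEM file; the URL and file name are placeholders:
import requests

# chain.pem is assumed to contain the whole PEM-encoded chain,
# e.g. collected from the server or from an "openssl s_client -showcerts" dump.
response = requests.get("https://server.example.com/api", verify="chain.pem")
print(response.status_code)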

Getting SSL: CERTIFICATE_VERIFY_FAILED when using Python with the Apache NiFi REST API

I have various Python scripts that use the NiFi REST API to make some calls. The scripts work locally on my machine and on other people's local machines. I am trying to run the scripts through Jenkins, which is running on an AWS EC2 instance. The scripts do not work on the Jenkins EC2 instance; however, they do work on other EC2 instances within the same AWS account and security group. The only way I am able to get the script to work on the Jenkins EC2 instance is by using verify=False for the REST call. However, I need to get it working on Jenkins without verify=False, given that some of the REST calls I need to make won't work with it.
The certs I am using are two PEM files generated from a p12 file we use for NiFi. The certs work everywhere else, so I do not think the issue is with them. I have also tried various Python versions and I still get the same result, so I do not think it is that either. The public and private IP addresses of the Jenkins server are opened up for ports 22, 8443, 18443, 443, 9999, 8080, and 18080, so I don't think it is a port issue either. I don't have much experience with SSL, so I'm lost on what to try next. Given that it works locally and works on the AWS EC2 instance we're running the NiFi dev version on, I am out of ideas.
Python Script (the other scripts have the same issue and similar structure):
import json, requests, sys

with open("jenkinsCerts.dat") as props:
    certData = json.load(props)

cert = (certData["crt"], certData["key"])

def makeRestGetCall(url):
    # calls a RESTful url and returns the response in json format
    if "https" in url:
        response = requests.get(url, cert=cert)
    else:
        response = requests.get(url)
    print response

with open('servers.txt') as nifi_server_list:
    errorCount = 0
    data = json.load(nifi_server_list)
    for server in data:
        try:
            print "trying: " + server["name"] + " (" + server["url"] + ")"
            makeRestGetCall(server["url"], verify=False)
        except:
            print server["name"] + " (" + server["url"] + ") did not respond"
            errorCount = errorCount + 1

try:
    assert errorCount == 0
except AssertionError:
    print errorCount, " servers did not respond"
The script above doesn't raise any error, just output showing the calls failed, yet the same script works on other machines at the same time.
trying: dev-cluster-node-1
dev-cluster-node-1 did not respond
trying: dev-cluster-node-2
dev-cluster-node-2 did not respond
trying: dev-cluster-node-3
dev-cluster-node-3 did not respond
trying: dev-registry
dev-registry did not respond
trying: dev-standalone
dev-standalone did not respond
5 servers did not respond
This is the error I get from Jenkins when I run a different Python script that uses the same authentication as above (the full script is too long to copy and isn't necessary):
*requests.exceptions.SSLError: HTTPSConnectionPool(host='ec2-***-***-***.****-1.compute.amazonaws.com', port=8443): Max retries exceeded with url: /nifi-api/flow/process-groups/3856c256-017-****-***** (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)'),))*
I believe the issue is that your script isn't aware of the expected public certificates of your NiFi servers in order to verify them during the request.
The crt and key values you are providing should contain the public certificate and private key of the Python script in order to authenticate to the NiFi servers. This material identifies the client in this case, which is required for mutual authentication TLS (one of the various authentication mechanisms NiFi supports).
However, with all TLS handshakes, the server must also provide a public certificate identifying itself and with a CN or SAN matching the hostname serving the connection (e.g. if you visit https://stackoverflow.com, the website needs to present a certificate issued for stackoverflow.com, not andys-fake-stackoverflow.com).
Most websites on the public internet have their certificates signed by a Certificate Authority (Let's Encrypt, Comodo, Verisign, etc.). Your browser and many software components come with a collection of these trusted CAs so that TLS connections work out of the box. However, if the certificates used by your NiFi servers are not signed by one of these CAs, the default Python list is not going to contain their signer, so it won't be able to verify these NiFi certs. You'll need to provide the signing public certificate to your Python code to enable this.
The requests module allows you to configure a custom CA bundle path with the public certificate that signed your NiFi certs. You can obtain that in a variety of ways (you have direct access to the NiFi servers, but any connection [via browser, openssl s_client, curl, etc.] will allow you to obtain the public certificate chain). Store the public cert (nifi_ca.pem) in PEM format somewhere in your Python script's folder structure, and reference it like so:
response = requests.get(url, cert=cert, verify="nifi_ca.pem")
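If you have no easier way to produce that file, one option is to save it straight from Python; a sketch, assuming the NiFi certificate is self-signed so the leaf certificate is also its own signer (the host and port are placeholders):
import ssl

# Fetch the server's PEM-encoded certificate and store it for use with verify=
pem = ssl.get_server_certificate(("nifi-host.example.com", 8443))
with open("nifi_ca.pem", "w") as f:
    f.write(pem)
If the certificate is instead signed by an internal CA, you would save that CA's certificate (e.g. taken from an openssl s_client -showcerts dump) rather than the leaf.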

NET::ERR_CERT_COMMON_NAME_INVALID - Error Message

I built a website some time ago with Flask. Now all of a sudden when I try to navigate there I get the following:
NET::ERR_CERT_COMMON_NAME_INVALID
Your connection is not private
Attackers might be trying to steal your information from www.mysite.org (for example, passwords, messages, or credit cards). Learn more
Does anyone know what's going on?
The error means: the host name you use in the web browser does not match one of the names present in the subjectAlternativeName extension of the certificate.
If your server has multiple DNS entries, you need to include all of them in the certificate to be able to use them with HTTPS. If you access the server using its IP address, like https://10.1.2.3, then the IP address also has to be present in the certificate (of course this only makes sense if you have a static IP address that never changes).
The certificate subject alternative name can be a domain name or IP address. If the certificate doesn't have the correct subjectAlternativeName extension, users get a NET::ERR_CERT_COMMON_NAME_INVALID error letting them know that the connection isn't private. If the certificate is missing the subjectAlternativeName extension entirely, users see a warning in the Security panel in Chrome DevTools that lets them know the subject alternative name is missing.
https://support.google.com/chrome/a/answer/7391219?hl=en
For Chrome 58 and later, only the subjectAlternativeName extension, not commonName, is used to match the domain name and site certificate. So, if you are missing the Subject Alternative Name in your certificate then you will experience the NET::ERR_CERT_COMMON_NAME_INVALID error.
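To see which names your certificate actually covers, you can inspect its SAN entries from Python; a sketch, where the host name is a placeholder and the hostname check is skipped only so the mismatched certificate can still be read:
import socket, ssl

host = "www.mysite.org"

ctx = ssl.create_default_context()
ctx.check_hostname = False  # skip the name check so the cert can be inspected;
                            # the chain itself is still verified

with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

# Entries look like ('DNS', 'example.com') or ('IP Address', '10.1.2.3')
print(cert.get("subjectAltName", ()))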
In order to have a Subject Alternative Name (SAN) on an SSL certificate, you must first edit your OpenSSL configuration. On Ubuntu/Debian, that can be found at /etc/ssl/openssl.cnf. Find the section of that file with the heading [ v3_ca ]; you can add the line with your SAN there:
subjectAltName = DNS:www.example.com
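If you prefer not to touch openssl.cnf, here is a sketch that generates a self-signed certificate with a SAN using the third-party cryptography package; the names and validity period are placeholders:
from datetime import datetime, timedelta
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")])
san = x509.SubjectAlternativeName([
    x509.DNSName("www.example.com"),
    x509.DNSName("example.com"),
])

cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(subject)                      # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=365))
    .add_extension(san, critical=False)
    .sign(key, hashes.SHA256())
)

with open("cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open("key.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))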

Is it safe to disable SSL certificate verification in Python's requests lib?

I'm well aware of the fact that, generally speaking, it's not. But in my particular case, I'm writing a simple Python web scraper which will be run as a cron job every hour, and I'd like to be sure that it's not a risk to skip SSL certificate verification by setting verify to False.
P.S.
The reason I'm set on disabling this feature is that when I try to make a request with response = requests.get('url'), it raises an SSLError and I don't see how to handle it.
EDIT:
Okay, with the help of sigmavirus24 and others I've finally managed to resolve the problem. Here's the explanation of how I did it:
I ran a test at https://ssllabs.com/ and, according to the report provided by SSL Labs, the SSL error was raised due to an "incomplete certificate chain" issue (for more details on how certificate verification works, read sigmavirus24's answer).
In my case, one of the intermediaries was missing.
I searched for its fingerprint using Google and downloaded it in .pem format.
Then I used certifi (a Python package providing Mozilla's CA bundle; if you don't have it, you can install it with sudo pip install certifi) to find the root cert (again by its fingerprint). This can be done as follows:
$ ipython
In [1]: import certifi
In [2]: certifi.where()
Out[2]: '/usr/lib/python3.6/site-packages/certifi/cacert.pem'
In [3]: quit
$ emacs -nw /usr/lib/python3.6/site-packages/certifi/cacert.pem
Or in bash you can issue $ emacs -nw $(python -m certifi) to open the cacert.pem file.
Concatenated the two certs together into one file and then provided its path to the verify parameter.
Another (simpler, but not always possible) way to do this is to download the whole chain from SSL Labs: right in front of the "Additional Certificates (if supplied)" section there's the "Download server chain" button. Click it, save the chain in a .pem file, and when calling requests' get method, provide the file path to the verify parameter.
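Putting the concatenation step above into code, a minimal sketch (the file names are placeholders: intermediate.pem is the cert you downloaded, root.pem is the root extracted from certifi's cacert.pem, and the URL is whatever you are scraping):
import requests

# Build a small CA bundle from the downloaded intermediate and the root cert.
with open("my_ca_bundle.pem", "w") as bundle:
    for part in ("intermediate.pem", "root.pem"):
        with open(part) as f:
            bundle.write(f.read())

# Point requests at the bundle instead of disabling verification.
response = requests.get("https://example.com/", verify="my_ca_bundle.pem")
print(response.status_code)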
The correct answer here is "it depends".
You've given us very little information to go on, so I'm going to make some assumptions and list them below (if any of them do not match, then you should reconsider your choice):
You are constantly connecting to the same website in your CRON job
You know the website fairly well and are certain that the certificate-related errors are benign
You are not sending sensitive data to the website in order to scrape it (such as a login and password)
If that is the situation (which I am guessing it is) then it should be generally harmless. That said, whether or not it is "safe" depends on your definition of that word in the context of two computers talking to each other over the internet.
As others have said, Requests does not attempt to render HTML, parse XML, or execute JavaScript. Because it simply retrieves your data, the biggest risk you run is not being able to verify that the data came from the server you thought it was coming from. If, however, you're using Requests in combination with something that does do the above, there is a myriad of potential attacks that a malicious man in the middle could use against you.
There are also options that mean you don't have to forgo verification. For example, if the server uses a self-signed certificate, you could get the certificate in PEM format, save it to a file and provide the path to that file to the verify argument instead. Requests would then be able to validate the certificate for you.
So, as I said, it depends.
Update based on Albert's replies
So what appears to be happening is that the website in question sends only the leaf certificate, which is valid. This website is relying on browser behaviour that currently works like so:
The browser connects to the website and notes that the site does not send its full certificate chain. It then goes and retrieves the intermediaries, validates them, and completes the connection. Requests, however, uses OpenSSL for validation, and OpenSSL does not contain any such behaviour. Since the validation logic is almost entirely in OpenSSL, Requests has no way to emulate a browser in this case.
Further, security tooling (e.g., SSL Labs) has started counting this configuration against a website's security ranking. It is increasingly the opinion that websites should send the entire chain. If you encounter a website that doesn't, contacting them and informing them of that is the best course forward.
If the website refuses to update their certificate chain, then Requests' users can retrieve the PEM encoded intermediary certificates and stick them in a .pem file which they then provide to the verify parameter. Requests presently only includes Root certificates in its trust store (as every browser does). It will never ship intermediary certificates because there are just too many. So including the intermediaries in a bundle with the root certificate(s) will allow you to verify the website's certificate. OpenSSL will have a PEM encoded file that has each link in the chain and will be able to verify up to the root certificate.
This is probably one more appropriate on https://security.stackexchange.com/.
Effectively it makes it only slightly better than using HTTP instead of HTTPS, so almost all of the risks of HTTP apply (the only difference being that, without the server's certificate, an attacker has to actively interpose rather than just passively listen).
Basically, a man-in-the-middle attack would make it possible to see both the sent and received data, as would anyone who had compromised that site and stolen its certificate. If you are storing cookies for that site, those cookies will be revealed (i.e. if the site is facebook.com, a session token could be stolen); if you are logging in with a username and password, that could be stolen too.
What do you do with that data once you retrieve it? Are you downloading any executable code? Are you downloading something (say, images you store on a web server) where a skilled attacker (even by doing something like modifying the DNS settings on your router) could force you to download a file ("news.php") and store it on your web server, where it could become executable (a .php script instead of a web page)?
From the documentation:
Requests can also ignore verifying the SSL certificate if you set verify to False.
requests.get('https://kennethreitz.com', verify=False)
<Response [200]>
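One practical note: with verify=False, recent versions of Requests/urllib3 emit an InsecureRequestWarning on every call; if you have consciously accepted the risk, you can silence it (a sketch):
import urllib3
import requests

# Suppress the InsecureRequestWarning that verify=False triggers.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

requests.get('https://kennethreitz.com', verify=False)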
It is 'safe', if you aren't using sensitive information in your request.
You can't put a virus in the HTML itself (as far as I know); JavaScript can be a vulnerability, so it's a great thing Python doesn't process it.
So, all in all, you should be safe.

How to connect to the GCM server using Python?

I am using Ubuntu 14.04. I tried to send a push notification to a mobile phone by following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-create-a-server-to-send-push-notifications-with-gcm-to-android-devices-using-python and it works from my local PC.
I am trying the same code on a web server, but I am not able to send the push notification. I get an error like "gcm.gcm.GCMAuthenticationException: There was an error authenticating the sender account". My web server is also Ubuntu 14.04. Can anyone help me?
gcm.py
from gcm import *

gcm = GCM("as........k")
data = {"message from": "123", "messageto": "1234", "message": "Hi", "time": "10.00AM", "langid": "1"}
reg_id = 'AP...JBA'
gcm.plaintext_request(registration_id=reg_id, data=data)
I added my server IP to the whitelist but I am still getting the same error.
You need to add your IP to the whitelisted IP list.
The article you linked mentions it:
gcm: add your API KEY from the Google API project; make sure your server's IP address is in the allowed IPs
When you create your access key, you specify which servers can use it, so you will need to edit the allowed server list by adding your server's IP.
Make sure your Authorization key is defined in your request.
Ensure that outbound ports 5228, 5229, and 5230 are open.
For further errors, look at Google's page.
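To narrow down whether the problem is the key or the IP whitelist, you can also hit the legacy GCM HTTP endpoint directly with requests, bypassing the gcm package; a sketch, where the key and registration ID are placeholders:
import requests

API_KEY = "YOUR_SERVER_API_KEY"    # placeholder: the key from the Google API project
REG_ID = "DEVICE_REGISTRATION_ID"  # placeholder: a valid device registration ID

response = requests.post(
    "https://gcm-http.googleapis.com/gcm/send",
    headers={
        "Authorization": "key=" + API_KEY,
        "Content-Type": "application/json",
    },
    json={"to": REG_ID, "data": {"message": "Hi"}},
)

# A 401 response means the sender account could not be authenticated
# (wrong key, or the server IP is not in the allowed list).
print(response.status_code, response.text)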
I had the same problem and solved it by clearing the whitelist, saving it, and re-inserting my server's IP into the whitelist.
That only seemed to work, though; it isn't actually the fix. The behaviour is just random: sometimes it works, sometimes it returns the error mentioned.
