Does Azure Key Vault support storing Client Certificates for mTLS authentication?
Example:
I have an HTTP-triggered Azure Function (Python)
HTTPS: Yes and Client Certificates: Required are enabled in the Function App
When a user sends a request to the endpoint and passes their client certificate, I can read the cert via the X-ARR-ClientCert header
I then want to create a KeyVaultCertificate client that pulls the client cert we have on file for said requestor and validates its properties:
not_valid_before/after
issuer
common_name
ocsp_responder_url
etc.
Problem:
Key Vault seems to only allow the upload of server certificates, not client certs.
It only allows .pfx or .pem file extensions
If I'm not mistaken, a client cert would never be in .pfx format because it contains the private key
I tried to split the .pfx file into both .pem (actual certificate) and .key then upload only the .pem, but Key Vault didn't like the format.
Does Key Vault handle client certs in this manner or should I just save them as KV Secrets and avoid KV Certificates altogether?
If I'm not mistaken, a client cert would never be in .pfx
You are mistaken, and your assumptions are incorrect. Mutual TLS requires two sets of certificate and private key: one set for the server and another for the client. You cannot set up mutual TLS with two certificates and only one private key (as you describe).
Azure Key Vault fully supports any kind of certificate, including certificates for client and server authentication.
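For example, here is a minimal sketch of pulling a stored client cert from Key Vault and checking its properties, assuming the azure-identity, azure-keyvault-certificates, and cryptography packages; the vault URL and certificate name are placeholders:

from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient
from cryptography import x509

# Fetch the certificate we have on file for the requestor
client = CertificateClient(vault_url="https://myvault.vault.azure.net",
                           credential=DefaultAzureCredential())
kv_cert = client.get_certificate("requestor-client-cert")

# kv_cert.cer holds the DER-encoded public certificate; parse and inspect it
cert = x509.load_der_x509_certificate(kv_cert.cer)
print(cert.not_valid_before, cert.not_valid_after)
print(cert.issuer.rfc4514_string())
# the OCSP responder URL, if present, lives in the AuthorityInformationAccess extension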
Related
I have various Python scripts that use the NiFi REST API to make some calls. The scripts work locally on my machine and on other people's local machines. I am trying to run the scripts through Jenkins, which is running on an AWS EC2 instance. The scripts do not work on the Jenkins EC2 instance, although they do work on other EC2 instances within the same AWS account and security group. The only way I am able to get the script to work on the Jenkins EC2 instance is by using verify=False for the REST call. However, I need to get it working on Jenkins without verify=False, given that some of the REST calls I need to make won't work with it.
The certs I am using are two PEM files generated from a p12 file we use for NiFi. The certs work everywhere else, so I do not think it is an issue with them. I have also tried various Python versions and I still get the same result, so I do not think it is that either. I have the public and private IP addresses of the Jenkins server opened up for ports 22, 8443, 18443, 443, 9999, 8080, and 18080, so I don't think it is a port issue either. I don't have much experience with SSL, so I'm lost on what to try next. But given that it works locally and on the AWS EC2 instance we're running the NiFi dev version on, I am out of ideas.
Python Script (the other scripts have the same issue and similar structure):
import json, requests, sys

# Client certificate pair: paths to the public cert and private key PEM files
with open("jenkinsCerts.dat") as props:
    certData = json.load(props)
    cert = (certData["crt"], certData["key"])

def makeRestGetCall(url, verify=True):
    # calls a RESTful url and prints the response
    if "https" in url:
        response = requests.get(url, cert=cert, verify=verify)
    else:
        response = requests.get(url)
    print response

with open('servers.txt') as nifi_server_list:
    errorCount = 0
    data = json.load(nifi_server_list)
    for server in data:
        try:
            print "trying: " + server["name"] + " (" + server["url"] + ")"
            makeRestGetCall(server["url"], verify=False)
        except:
            print server["name"] + " (" + server["url"] + ") did not respond"
            errorCount = errorCount + 1

try:
    assert errorCount == 0
except AssertionError:
    print errorCount, " servers did not respond"
The script above doesn't give any error, just output showing that every call fails, while the same script works on other machines at the same time:
trying: dev-cluster-node-1
dev-cluster-node-1 did not respond
trying: dev-cluster-node-2
dev-cluster-node-2 did not respond
trying: dev-cluster-node-3
dev-cluster-node-3 did not respond
trying: dev-registry
dev-registry did not respond
trying: dev-standalone
dev-standalone did not respond
5 servers did not respond
This is the ERROR I get from Jenkins when I run a different Python script that uses the same authentication as above (the full script is too long to copy and isn't necessary):
*requests.exceptions.SSLError: HTTPSConnectionPool(host='ec2-***-***-***.****-1.compute.amazonaws.com', port=8443): Max retries exceeded with url: /nifi-api/flow/process-groups/3856c256-017-****-***** (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)'),))*
I believe the issue is that your script isn't aware of the expected public certificates of your NiFi servers in order to verify them during the request.
The crt and key values you are providing should contain the public certificate and private key of the Python script in order to authenticate to the NiFi servers. This material identifies the client in this case, which is required for mutual authentication TLS (one of the various authentication mechanisms NiFi supports).
However, in all TLS handshakes, the server must also provide a public certificate identifying itself, with a CN or SAN matching the hostname serving the connection (e.g. if you visit https://stackoverflow.com, the website needs to present a certificate issued for stackoverflow.com, not andys-fake-stackoverflow.com).
Most websites on the public internet have their certificates signed by a Certificate Authority (Let's Encrypt, Comodo, Verisign, etc.). Your browser and many software components come with a collection of these trusted CAs so that TLS connections work out of the box. However, if the certificates used by your NiFi servers are not signed by one of these CAs, the default Python list is not going to contain their signer, so it won't be able to verify these NiFi certs. You'll need to provide the signing public certificate to your Python code to enable this.
The requests module allows you to configure a custom CA bundle path with the public certificate that signed your NiFi certs. You can obtain that in a variety of ways (you have direct access to the NiFi servers, but any connection [via browser, openssl s_client, curl, etc.] will allow you to obtain the public certificate chain). Store the public cert (nifi_ca.pem) in PEM format somewhere in your Python script's folder structure, and reference it like so:
response = requests.get(url, cert=cert, verify="nifi_ca.pem")
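If you'd like to fetch that certificate from Python itself, here is a minimal sketch using the standard library; the host and port are placeholders. Note this retrieves only the leaf certificate, which is enough when the NiFi certs are self-signed; for a CA-signed chain, save the signing CA's certificate instead:

import ssl

# Fetch the server's PEM-encoded certificate (no verification is performed)
pem = ssl.get_server_certificate(("nifi-host.example.com", 8443))
with open("nifi_ca.pem", "w") as f:
    f.write(pem)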
I'm working on coding an application in Python 3 for users to send and retrieve data from other users of the application. The process would be a client inputting an encoded string and using a server as a middleman to send the data to another client. I'm well versed in what would be used for the client application, but this server knowledge is new to me. I have a VPS server up and running, and I researched and found that the pysftp module would be good for transferring data back and forth. However, I'm concerned about the security of the server when using the application. This module requires the authentication details of the server when making a connection, and I don't think having my server's host, username, and password in the application code is very safe. What would be the safe way to go about this?
Thanks,
Gunner
You might want to use pre-generated authentication keys. If you are familiar with the process of using the ssh-keygen tool to create SSH key pairs, it's the same thing. You just generate the key pair, place the private key on the client machine, and put the public key on the target server. Then you can use pysftp like this:
import pysftp

with pysftp.Connection('hostname', username='me', private_key='/path/to/keyfile') as sftp:
    sftp.put('local_file.dat')  # "do some stuff", e.g. upload a file
The authentication is handled using the key pair, and no password is required. This isn't to say that your security issue is solved: the private key is still a sensitive credential that needs to be treated like a password. The advantage is that you don't have a plaintext password stored in a file anywhere, and you are using a well-established and secure process to manage authentication. The private key should be set with permission 0600 to prevent anyone but the owner from accessing it.
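If you'd rather generate the key pair from Python instead of ssh-keygen, here is a minimal sketch using paramiko (the library pysftp is built on); the path and key size are illustrative:

import paramiko

# Generate an RSA key pair (the ssh-keygen equivalent)
key = paramiko.RSAKey.generate(bits=2048)
key.write_private_key_file('/path/to/keyfile')  # written with mode 0600

# This line belongs in ~/.ssh/authorized_keys on the target server
print("ssh-rsa " + key.get_base64())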
Is there a way to add an SSH connection to Apache Airflow from the UI, either via the Connections or Variables tab, that allows connecting with a pem key rather than a username and password?
DISCLAIMER: The following answer is purely speculative.
I think the key_file param of SSHHook is meant for this purpose.
And the idiomatic way to supply it is to pass its name via the extra args in the Airflow Connection entry (web UI).
Of course, when neither key_file nor credentials are provided, SSHHook falls back to identityfile to initialize the paramiko client.
Also have a look at how SFTPHook handles this.
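To make that concrete, here is a hedged sketch (the connection id and key path are placeholders, and the import path differs between Airflow versions): create a Connection of type SSH in the web UI with the host and username, put {"key_file": "/path/to/key.pem"} in its Extra field, and reference it from the hook:

from airflow.providers.ssh.hooks.ssh import SSHHook  # older releases: airflow.contrib.hooks.ssh_hook

hook = SSHHook(ssh_conn_id="my_ssh_conn")  # reads host/user/extra from the UI entry
with hook.get_conn() as client:            # a paramiko.SSHClient
    stdin, stdout, stderr = client.exec_command("uptime")
    print(stdout.read().decode())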
I am developing an iOS app, and I want the data returned by my server to be readable only by my app.
So, I created a self-signed certificate and set up HTTPS in Tornado like this:
http_server = tornado.httpserver.HTTPServer(application, ssl_options={
    "certfile": os.path.join(data_dir, "mydomain.crt"),
    "keyfile": os.path.join(data_dir, "mydomain.key"),
})
http_server.listen(443)
But when I type my server's API endpoint into Chrome/Safari, the browsers warn me, yet the data can still be read.
The browsers don't have my certificate/key pair, so why can they access my server and read the data?
According to public/private key theory:
the browser has to send its public key, which is contained in its certificate
if my server trusts the certificate by some means, my server encrypts the response using the browser's public key
the browser receives the response and decrypts it using its own private key
In step 2, my server should not trust the browser's certificate! Am I right?
Thanks.
According to public/private key theory:
the browser has to send its public key, which is contained in its certificate
if my server trusts the certificate by some means, my server encrypts the response using the browser's public key
the browser receives the response and decrypts it using its own private key
No, that's not how it works.
In SSL/TLS with only server authentication (most HTTPS sites), the server sends its certificate first, the client checks whether it trusts the certificate, the client and server negotiate a shared secret using the server's public key (how this is done depends on the cipher suite), and an encrypted channel is set up, using keys derived from this shared secret.
In SSL/TLS with mutual authentication, an extra step involves the client sending its certificate to the server and signing something at the end of the handshake, to prove to the server that it is indeed the holder of this certificate.
It's only in the second case that the browser has a certificate and a private key, and even then the certificate is never used for any encryption.
The code you're using here only sets up certfile and keyfile, which means you've configured your server for a connection where only the server is authenticated. When you're bypassing the browser warning, you're merely telling it to trust the server certificate (since it's self-signed in your case), so the connection can indeed proceed.
If you want to authenticate the client, you'll need to configure the server to request (and require) a client certificate. You'll also need to set up the client certificate (with its private key) in the client (whether it's the browser or your app). This is independent of the server certificate and its private key.
The Tornado documentation seems to indicate the ssl_options parameter uses the ssl.wrap_socket options, so you should look into those if you want to use client certificate authentication (in particular cert_reqs and ca_certs).
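Building on your existing snippet, here is a minimal sketch of requiring client certificates, assuming ca.crt is the CA that signed the client certs you intend to accept:

import ssl

http_server = tornado.httpserver.HTTPServer(application, ssl_options={
    "certfile": os.path.join(data_dir, "mydomain.crt"),
    "keyfile": os.path.join(data_dir, "mydomain.key"),
    "cert_reqs": ssl.CERT_REQUIRED,  # reject clients that present no valid cert
    "ca_certs": os.path.join(data_dir, "ca.crt"),  # CA used to verify client certs
})
http_server.listen(443)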
Note that, in general, authenticating an app (as opposed to the user of an app) using a client certificate only works as long as no-one is able to decompile the app. The app will contain the private key one way or another, so someone could get hold of it. This problem is of course even worse if you use the same private key for all the copies of your app.
I'm by no means knowledgeable in this field, but the certificate only goes so far as to help ensure that the server is who it says it is.
Anyone can view the page if they trust the server's certificate.
To get the functionality you want, you probably need some form of authentication, even something as basic as a given value in an HTTP header field.
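A minimal sketch of that header check in a Tornado handler (the header name and token value are illustrative):

import tornado.web

API_TOKEN = "replace-with-a-long-random-value"

class SecureHandler(tornado.web.RequestHandler):
    def prepare(self):
        # Reject any request that doesn't carry the app's shared secret
        if self.request.headers.get("X-App-Token") != API_TOKEN:
            raise tornado.web.HTTPError(403)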
Here is a bizarre tip: you can check the User-Agent header so Tornado only allows the string you chose. I don't know if iOS browsers offer this, but in Chrome on a PC you can override your user agent in
Developer Tools -> Settings -> Overrides.
Use:
self.request.headers["User-Agent"]
Because it is a string, you can then allow only requests containing some chosen string to pass:
if personalized_ua not in self.request.headers["User-Agent"]:
    self.redirect("no-way.html")
And if you want to allow access only from iPhones, for example, use the user_agents library.
I've implemented an HTTP server (CherryPy and Python) that receives an encrypted file from a client (Android). I'm using OpenSSL to decrypt the uploaded file. Currently I'm using openssl enc -d -pass file:password.txt -in encryptedfile -out decryptedfile to perform the decryption on the server side. As you can see, the password used by openssl is stored in a plain text file (password.txt).
Is there a more secure way to store this OpenSSL password?
Thanks.
Pass it through a higher FD, and use that FD in the command line. Note that you'll need to use the preexec_fn argument to set up the FD before the process gets run.
subprocess.Popen(['openssl', ..., 'file:/dev/fd/12', ...], ...,
                 preexec_fn=passtofd12(password), ...)
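Here is a fuller, hedged sketch of the same idea using a pipe; on Python 3, pass_fds is a simpler alternative to preexec_fn. The cipher arguments are illustrative, and password is assumed to already hold the secret:

import os
import subprocess

# Write the password into a pipe so openssl reads it via /dev/fd/<n>;
# it never appears on the command line or in a file on disk.
read_fd, write_fd = os.pipe()
os.write(write_fd, password.encode())
os.close(write_fd)

subprocess.run(
    ["openssl", "enc", "-d", "-aes-256-cbc",  # cipher is illustrative
     "-pass", "file:/dev/fd/%d" % read_fd,
     "-in", "encryptedfile", "-out", "decryptedfile"],
    pass_fds=(read_fd,),  # keep the pipe's read end open in the child
    check=True,
)
os.close(read_fd)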
For the sake of user privacy, among other reasons, passwords are generally not stored by servers. Typically a user chooses a password, which is stored as a hash of some sort on the server.
Users then authenticate with the web application by checking the stored hash against a hash computed from the user's input. Once the client is authenticated, a session identifier is provided, allowing use of server resources. During this time a user can, for instance, upload the file. Encrypting the file on the server should be unnecessary, assuming the hosting server is secured properly and absent other issues.
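As an aside, here is a minimal sketch of that salted-hash scheme using only the standard library; the iteration count and salt size are illustrative:

import hashlib, hmac, os

def hash_password(password, salt=None):
    # Derive a salted hash with PBKDF2; store both salt and digest
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)  # constant-time compare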
In this case, the authentication mechanism is not made clear, and neither are the threats that pose a danger or the life cycle of the uploaded file.
It seems that a server is receiving an encrypted file plus some type of password. Is the protection of the password a concern during the transmission phase, or in storage on the server? The HTTPS protocol can help guard against threats concerning the transmission of the file/data. From your description, the concern seems to be storage on the server side.
Encrypting the passwords once they have been received by the server (either individually or with a master password) adds another layer of security, but this approach is not without fault: the passphrase either (1) needs to be stored on the server in cleartext for accessing the files, or (2) needs to be entered manually by an administrator as part of any processing requiring the password; note that any resources encrypted with the password are unusable to users in the meantime.
While I am not completely aware of what is going on, the most secure thing to do would be to rework the web application and carefully think through the design and its requirements.