I have a gunicorn + uvicorn + FastAPI stack.
(Basically, I am using the https://hub.docker.com/r/tiangolo/uvicorn-gunicorn-fastapi Docker image.)
I've already implemented SSL-based authentication by providing the appropriate gunicorn configuration options: certfile, keyfile, ca_certs, cert_reqs.
And it works fine: users have to provide a client SSL certificate in order to make API calls.
What I need to do now is obtain the client certificate data and pass it further (add it to the request headers) into my application, since it contains some client credentials.
For example, I've found a way to do it with a gunicorn worker by overriding gunicorn.workers.sync.SyncWorker: https://gist.github.com/jmvrbanac/089540b255d6b40ca555c8e7ee484c13.
But is there a way to do the same thing using UvicornWorker? I've tried to look through the UvicornWorker's source code, but didn't find a way to do it.
I went deeper into the Uvicorn source code, and as far as I understand, in order to access the client TLS certificate data I need to do some tricks with the Python asyncio library (https://docs.python.org/3/library/asyncio-eventloop.html), possibly with the Server class (https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.Server), and override some of the UvicornWorker's methods.
I am still not quite sure if it is possible to achieve the desired result though.
I ended up setting up nginx (OpenResty) in front of my server and added a script that extracts the client certificate and puts it into a header.
Here is a part of my nginx config:
set_by_lua_block $client_cert {
    local client_certificate = ngx.var.ssl_client_raw_cert
    if (client_certificate ~= nil) then
        client_certificate = string.gsub(client_certificate, "\n", "")
        ngx.req.set_header("X-CLIENT-ID", client_certificate)
    end
    return client_certificate
}
It is also possible to extract specific fields from the client certificate (such as the CN or serial number) directly inside the nginx configuration, but I decided to pass the whole certificate further.
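On the application side, the forwarded header then has to be turned back into a certificate object. Here is a minimal sketch of how that might look, assuming the X-CLIENT-ID header produced by the Lua block above and the cryptography package (the function name is mine, purely illustrative):

import base64

from cryptography import x509

def parse_forwarded_cert(header_value):
    # The Lua block stripped the newlines, so remove the PEM armor and any
    # whitespace, base64-decode the remaining body, and load it as DER.
    b64 = (header_value
           .replace("-----BEGIN CERTIFICATE-----", "")
           .replace("-----END CERTIFICATE-----", "")
           .replace(" ", ""))
    return x509.load_der_x509_certificate(base64.b64decode(b64))

# For example, to read the subject (CN and friends) out of a request:
# cert = parse_forwarded_cert(request.headers["X-CLIENT-ID"])
# print(cert.subject.rfc4514_string())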
So my problem is solved without using gunicorn as I originally intended, but this is the only good solution I've found so far.
In my API, I have the following dynamics:
POST a processing request to the API.
Do some processing in the background.
When the processing is done, a computer will PATCH the status field of the original API request using cURL.
These steps work when I test them against a normal server, i.e., python manage.py runserver. However, when I try to automate the tests within Django, I get:
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
Port 80 is what is specified under the django.test.client module with 'SERVER_PORT': '80', so I really don't get why that wouldn't work.
I don't know if this can be considered a full answer, because I haven't tested it enough. But apparently Django's TestCase dummy server is too simple for more specific cases like this one. You can use LiveServerTestCase instead for something more robust. (If you want more info, check out this StackOverflow answer.)
Within the LiveServerTestCase class you will find the self.live_server_url attribute, which holds the address where the server is actually listening. The port assigned to it is somewhat random, apparently for security reasons. You can then pass that address into your PATCH operation.
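Here is a rough sketch of how that might look (the test class name and the endpoint path are placeholders, not from the original post):

import requests
from django.test import LiveServerTestCase

class ProcessingCallbackTests(LiveServerTestCase):
    def test_patch_status(self):
        # live_server_url looks like "http://localhost:54321", with a
        # free port picked at runtime instead of the test client's fixed 80.
        url = self.live_server_url + "/api/requests/1/"
        response = requests.patch(url, data={"status": "done"})
        self.assertEqual(response.status_code, 200)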
I can't seem to get the handshake working properly.
import requests

cert = 'path/to/cert_file.pem'
url = 'https://example.com/api'

requests.get(url, cert=cert, verify=True)
This is fine when I use it locally, where I have the file physically.
We host our application on Heroku and use environment variables.
The requests module doesn't seem to accept certificates as strings, e.g.:
$ export CERTIFICATE="long-list-of-characters"
requests.get(url, cert=os.environ.get('CERTIFICATE'), verify=True)  # needs `import os`
I have also tried something like this:
import tempfile
import requests

cert = tempfile.NamedTemporaryFile()
cert.write(CERTIFICATE)
cert.seek(0)
requests.get(url, cert=cert.name, verify=True)
First of all, it works locally but not on Heroku. Anyway, it doesn't feel like a solid solution.
I get an SSL handshake error.
Any suggestions?
Vasili's answer is technically correct, though per se it doesn't answer your question. The keyfile, truly, must be unencrypted to begin with.
I myself have just resolved a situation like yours. You were on the right path; all you had to do was:
1. Pass delete=False to NamedTemporaryFile(), so the file won't be deleted after calling close()
2. close() the tempfile before using it, so that its contents are actually flushed to disk
Note that this is a very unsafe thing to do. delete=False causes the file to stay on disk even after the reference to it is deleted, so to remove the file you should manually call os.unlink(tmpfile.name).
Doing this with certificates is a huge security risk: you must ensure that the string with the certificate is secured and hidden and nobody has access to the server.
Nevertheless, it is quite a useful practice in case of, for example, managing your app both on a Heroku server as a test environment and in a Docker image built in the cloud, where COPY directives are not an option. It is also definitely better than storing the file in your git repository :D
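Putting both steps together, a minimal sketch (assuming CERTIFICATE holds the certificate contents from the environment variable, and the url from the question):

import os
import tempfile

import requests

url = 'https://example.com/api'
CERTIFICATE = os.environ['CERTIFICATE']

tmp = tempfile.NamedTemporaryFile(mode='w', suffix='.pem', delete=False)
try:
    tmp.write(CERTIFICATE)
    tmp.close()  # close first, so the contents are actually flushed to disk
    response = requests.get(url, cert=tmp.name, verify=True)
finally:
    os.unlink(tmp.name)  # with delete=False, we must remove the file ourselves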
This is an old question, but since I ended up here and it wasn't answered, I figure I'll point to the solution I came up with for a similar question, which can be used to solve the OP's problem.
This can be done by monkey-patching requests using this technique.
One simple hack is to use verify=False and not send the certificates at all. This works in most cases and when you are okay with not verifying the connection.
As per the requests documentation:
The private key to your local certificate must be unencrypted. Currently, Requests does not support using encrypted keys.
You can [also] specify a local cert to use as client side certificate, as a single file (containing the private key and the certificate) or as a tuple of both file's path:
requests.get('https://kennethreitz.com', cert=('/path/client.cert', '/path/client.key'))
You must include the path for both public and private key... or you can include the path to a single file that contains both.
I just set up a RabbitMQ add-on in Heroku. After developing my app to queue up and consume messages against a local instance, I deployed it to Heroku and have not been able to connect successfully yet. The username/password and hostname/port/vhost all come from heroku config. If I change the username or password, the error changes to ProbableAuthenticationError, which makes me believe the authentication is at least correct and the issue is likely with my vhost or some other missing configuration. I haven't seen any similar questions on SO, or after an hour of Googling, that address my issue.
I have tried both the RABBITMQ_BIGWIG_RX_URL and RABBITMQ_BIGWIG_TX_URL environment variables for both sending and consuming, and no combination seems to work. Below is the code I have for attempting to connect.
import pika

url = 'small-laurel-24.bigwig.lshift.net'
port = 10019
vhost = '/notmyrealvhost'

credentials = pika.PlainCredentials('username', 'password')
parameters = pika.ConnectionParameters(url, port, vhost, credentials=credentials)
connection = pika.BlockingConnection(parameters)
Is there something I'm missing or any way to figure out what specifically is configured wrong? I'm at a loss here. Much thanks in advance!
I am running pika 0.9.14 on Python 2.7.3.
The problem was most likely that you added the forward slash character to your virtual-host name. Many users assume the forward slash refers to the root directory, but it is actually just the name of the default virtual-host.
Unless you actually named your virtual-host with a leading forward slash, the name will always be identical to the one you see in the management console, e.g.:
my_virtualhost and not /my_virtualhost
This is why your solution worked as you did not add the extra forward slash when using URLParameters.
Your original code would have looked like this using URLParameters:
amqp://username:password@small-laurel-24.bigwig.lshift.net:10018/%2Fnotmyrealvhost
While the working version you mentioned in your answer above does not have the forward slash (%2F) character.
amqp://username:password@small-laurel-24.bigwig.lshift.net:10018/notmyrealvhost
I ended up solving my problem by using the URLParameters class on pika to parse the URL from Heroku's environment variable.
This takes a string like
amqp://username:password@small-laurel-24.bigwig.lshift.net:10018/notmyrealvhost
and parses everything as needed. I was unnecessarily complicating things by doing it myself.
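For reference, a minimal sketch of that approach (the environment variable name is from the question; everything else is illustrative):

import os

import pika

# URLParameters parses the scheme, credentials, host, port and vhost
# (including any %2F escaping) out of the AMQP URL in one go.
parameters = pika.URLParameters(os.environ['RABBITMQ_BIGWIG_RX_URL'])
connection = pika.BlockingConnection(parameters)
channel = connection.channel()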
In my Pylons config file, I have:
[server:main1]
port = 9090
...config here...
[server:main2]
port = 9091
...config here...
These are run using:
paster serve --server-name=main1 ...(more stuff)...
paster serve --server-name=main2 ...(more stuff)...
Now, using HAProxy and Stunnel, I have all HTTP requests going to main1 and all HTTPS requests going to main2. I would like some of my controllers to react a little differently depending on whether they are being requested over HTTP or HTTPS, but pylons.request.scheme always thinks it is under HTTP even when it is not.
Seeing as main2 always handles all the HTTPS requests, is there a way for the controller to determine which server name it was run under, or what its id is?
I got around this by changing the workflow so it doesn't have to react differently based on the protocol. It doesn't look like there's a way to pass a unique arbitrary identifier to each separate process that it can read.
In my Python script, I am fetching pages, but I already know the IP of the server.
So I could save it the hassle of doing a DNS lookup if I could somehow pass the IP and hostname in the request.
So, if I call
http://111.111.111.111/
and then pass the hostname in the Host header, I should be OK. However, the issue I see is on the server side: if the user looks at the incoming request (i.e. REQUEST_URI), they will see that I went for the IP.
Anyone have any ideas?
First, the main idea is suspicious. You can "know" the IP of the server, but that knowledge is temporary and its validity is controlled by DNS TTLs. For a stable configuration, the server admin can provide a DNS record with a long TTL (e.g. a few days), so DNS requests will always be fulfilled by the nearest caching resolver or nscd. For a changing configuration, the TTL can be reduced to a few seconds or even to 0 (meaning no caching), which can be useful for some kinds of load balancers. You are trying to organize your own resolver cache that ignores TTLs, and this can lead to requests hitting non-functioning or wrong servers with incorrect contents. So, I suggest not doing this.
If you are strictly sure you must do this and can't use external tools such as a custom resolver or even /etc/hosts, try to install a custom "opener" (see the urllib2.build_opener() function in the documentation) which overrides the DNS lookup. However, I have never done this myself; the knowledge comes only from the documentation I just read.
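For what it's worth, here is a rough Python 3 sketch of a related trick: instead of building a custom opener, patch socket.getaddrinfo so the standard library resolves a pinned hostname to the IP you already know (the hostname and IP below are placeholders):

import socket
import urllib.request

# Hypothetical pinned mapping: hostname -> IP you already know.
PINNED = {"example.com": "111.111.111.111"}

_orig_getaddrinfo = socket.getaddrinfo

def pinned_getaddrinfo(host, port, *args, **kwargs):
    # Substitute the known IP; getaddrinfo returns immediately for IP
    # literals, so no DNS query is made for pinned hosts.
    return _orig_getaddrinfo(PINNED.get(host, host), port, *args, **kwargs)

socket.getaddrinfo = pinned_getaddrinfo

# The URL still uses the hostname, so the Host header (and REQUEST_URI on
# the server side) shows the hostname rather than the IP.
response = urllib.request.urlopen("http://example.com/")
print(response.status)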
You can also add the IP address mapping to the hosts file (e.g. a line like 111.111.111.111 example.com in /etc/hosts).