Cannot connect to RabbitMQ on Heroku with pika due to ProbableAccessDeniedError - python

I just set up a RabbitMQ add-on in Heroku. After developing my app to queue up and consume messages against a local instance, I deployed it to Heroku and have not been able to connect successfully yet. The username/password & hostname/port/vhost are all from heroku config. If I change the username or password, the error changes to ProbableAuthenticationError, which makes me believe the authentication is correct and that the issue lies with my vhost or some other missing configuration. After an hour of Googling, I haven't seen any similar questions on SO or elsewhere that address my issue.
I have tried both the RABBITMQ_BIGWIG_RX_URL and RABBITMQ_BIGWIG_TX_URL environment variables for both sending and consuming, and no combination seems to work. Below is the code I have for attempting to connect.
import pika

url = 'small-laurel-24.bigwig.lshift.net'
port = 10019
vhost = '/notmyrealvhost'
credentials = pika.PlainCredentials('username', 'password')
parameters = pika.ConnectionParameters(url, port, vhost, credentials=credentials)
connection = pika.BlockingConnection(parameters)
Is there something I'm missing or any way to figure out what specifically is configured wrong? I'm at a loss here. Much thanks in advance!
I am running pika 0.9.14, python 2.7.3.

The problem is most likely that you added a forward slash character to your virtual-host name. Many users confuse this with the forward slash being a root directory, but it is actually just the name of the default virtual-host.
Unless you actually named your virtual-host with a forward slash, the name will always be identical to the name you see in the management console, e.g.:
my_virtualhost and not /my_virtualhost
This is why your solution worked: you did not add the extra forward slash when using URLParameters.
Your original code would have looked like this using URLParameters:
amqp://username:password@small-laurel-24.bigwig.lshift.net:10018/%2Fnotmyrealvhost
The working version you mention in your answer does not have the encoded forward slash (%2F) character:
amqp://username:password@small-laurel-24.bigwig.lshift.net:10018/notmyrealvhost

I ended up solving my problem by using the URLParameters class on pika to parse the URL from Heroku's environment variable.
This takes a string like
amqp://username:password@small-laurel-24.bigwig.lshift.net:10018/notmyrealvhost
and parses everything as needed. I was unnecessarily complicating things by doing it myself.
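To see why the %2F makes a difference, here is a small stdlib-only sketch of what URLParameters effectively does with the vhost portion of an AMQP URL (the hostname and credentials are the placeholder values from the question, not real ones):

```python
# Sketch (stdlib only): how the vhost is derived from an AMQP URL's path,
# mimicking what pika's URLParameters does. Not pika's actual code.
from urllib.parse import urlparse, unquote

def vhost_from_amqp_url(url):
    """Return the virtual host encoded in an AMQP URL's path."""
    path = urlparse(url).path  # e.g. '/notmyrealvhost' or '/%2Fnotmyrealvhost'
    # The first '/' only separates host:port from the vhost; everything
    # after it, percent-decoded, is the vhost name. No path means the
    # default vhost '/'.
    return unquote(path[1:]) if len(path) > 1 else '/'

# Without %2F the vhost is plain 'notmyrealvhost'
print(vhost_from_amqp_url(
    'amqp://user:pass@small-laurel-24.bigwig.lshift.net:10018/notmyrealvhost'))
# With %2F the vhost becomes '/notmyrealvhost', a name that does not
# exist on the broker, hence the access-denied error
print(vhost_from_amqp_url(
    'amqp://user:pass@small-laurel-24.bigwig.lshift.net:10018/%2Fnotmyrealvhost'))
```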

Related

Python Uvicorn – obtain SSL certificate information

I have a gunicorn + uvicorn + fastApi stack.
(Basically, I am using https://hub.docker.com/r/tiangolo/uvicorn-gunicorn-fastapi docker image).
I've already implemented SSL based authentication by providing appropriate gunicorn configuration options: certfile, keyfile, ca_certs, cert_reqs.
And it works fine: users have to provide a client SSL certificate in order to make API calls.
What I need to do now is to obtain client certificate data and pass it further (add it to request headers) into my application, since it contains some client credentials.
For example, I've found a way to do it with a gunicorn worker by overriding gunicorn.workers.sync.SyncWorker: https://gist.github.com/jmvrbanac/089540b255d6b40ca555c8e7ee484c13.
But is there a way to do the same thing using UvicornWorker? I've tried to look through the UvicornWorker's source code, but didn't find a way to do it.
I went deeper into the Uvicorn source code, and as far as I understand, in order to access the client TLS certificate data I need to do some tricks with the python asyncio library (https://docs.python.org/3/library/asyncio-eventloop.html), possibly with the Server class (https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.Server), and override some of UvicornWorker's methods.
I am still not quite sure if it is possible to achieve the desired result though.
I ended up setting the nginx (Openresty) in front of my server and added a script to get a client certificate and put it into header.
Here is a part of my nginx config:
set_by_lua_block $client_cert {
    local client_certificate = ngx.var.ssl_client_raw_cert
    if (client_certificate ~= nil) then
        client_certificate = string.gsub(client_certificate, "\n", "")
        ngx.req.set_header("X-CLIENT-ID", client_certificate)
    end
    return client_certificate
}
It is also possible to extract some specific field from a client certificate (like CN, serial number etc.) directly inside nginx configuration, but I decided to pass the whole certificate further.
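Because the Lua block above strips every newline before placing the certificate in the X-CLIENT-ID header, the application receiving it has to re-wrap the value before any PEM-aware tool can parse it. A minimal sketch (the header name matches the config above; the helper name and everything else are assumptions):

```python
# Sketch: rebuilding a usable PEM certificate from the newline-stripped
# X-CLIENT-ID header set by the nginx config above. Hypothetical helper,
# not part of any library.
import re
import textwrap

def pem_from_header(header_value):
    """Re-insert the newlines that the nginx Lua block stripped out."""
    match = re.search(
        r'-----BEGIN CERTIFICATE-----(.*?)-----END CERTIFICATE-----',
        header_value)
    if match is None:
        raise ValueError('header does not contain a certificate')
    body = match.group(1).strip()
    # PEM bodies are base64 text wrapped at 64 characters per line
    lines = textwrap.wrap(body, 64)
    return ('-----BEGIN CERTIFICATE-----\n'
            + '\n'.join(lines)
            + '\n-----END CERTIFICATE-----\n')
```

The rebuilt PEM can then be handed to whatever certificate parser the application uses to pull out the CN or serial number.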
My problem is solved without using gunicorn as I originally wanted, but this is the only good solution I've found so far.

DisallowedHost error not going away when adding IP address to ALLOWED_HOSTS

If I set ALLOWED_HOSTS = ['*'] I am able to make a successful call; however, this seems dangerous and counterintuitive.
When I set ALLOWED_HOSTS to the recommended string, it fails. How do I fix this?
Since you've tagged your post with AWS, I assume the host in question is an AWS EC2 instance. If so, try putting in your EC2 private IP or your full domain instead, like:
['ip-XX-XX-XX-XX.XX-XXX-X.compute.internal']
OR
['.yourdomain.com']
The preceding . in your domain name represents a wildcard, as described in Django's docs.
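The leading-dot rule can be sketched as a tiny matcher. This is a simplified re-implementation for illustration only, not Django's actual code (Django does this in django.http.request.validate_host):

```python
# Sketch of Django's ALLOWED_HOSTS matching rule: a pattern with a
# leading dot matches the bare domain and any subdomain; '*' matches
# everything; other patterns must match exactly (case-insensitively).
def host_matches(host, pattern):
    host = host.lower()
    pattern = pattern.lower()
    if pattern == '*':
        return True
    if pattern.startswith('.'):
        # '.yourdomain.com' matches 'yourdomain.com' and 'api.yourdomain.com'
        return host == pattern[1:] or host.endswith(pattern)
    return host == pattern

print(host_matches('api.yourdomain.com', '.yourdomain.com'))  # True
print(host_matches('yourdomain.com', '.yourdomain.com'))      # True
print(host_matches('evil.com', '.yourdomain.com'))            # False
```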
I encountered this and found the reason. There were two different tabs running the server; for testing, I had started the server in another tab. Django doesn't warn you in the second tab, so your requests were most likely going to the other tab running the server.

SOLR mysolr pysolr Python 401 reply

If there is someone out there who has already worked with SOLR and a python library to index/query solr, would you be able to try and answer the following question.
I am using the mysolr python library, but there are others out there (like pysolr) and I don't think the problem is related to the library itself.
I have a default multicore SOLR setup, so no authentication is normally required. I don't need it to access the admin page at http://localhost:8080/solr/testcore/admin/ either.
from mysolr import Solr
solr = Solr('http://localhost:8080/solr/testcore/')
response = solr.search(q='*:*')
print("response")
print(response)
This code used to work, but now I get a 401 reply from SOLR... just like that; no changes have been made to the python virtual env containing mysolr or to the SOLR setup. Still, something must have changed somewhere, but I'm out of clues.
What could be the causes of a SOLR 401 response?
Additional info: this script and more advanced scripts do work on another PC, just not on the one I am working on. Also, adding "/select?q=*:*" behind the URL in the browser does return the correct results, so SOLR is set up correctly; it probably has something to do with my computer itself. Could Windows settings (of any kind) have an impact on how SOLR responds to requests from python? The python env itself has been reinstalled several times to no avail.
Thanks in advance!
The problem was: proxy.
If this exact situation ever occurs to someone and you are behind a proxy, check whether your HTTP and HTTPS proxy environment variables are set. If they are, this might cause the python session to try to use the proxy when it shouldn't (connecting to localhost via the proxy).
It didn't cause any trouble for months, but then out of the blue it did, so whether you encounter this or not may depend on how your IT department set up your proxy or made some other change somewhere.
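A quick way to check for this from Python is to look for the conventional proxy variables that urllib and requests honor. A minimal sketch (the helper name is an assumption):

```python
# Sketch: find proxy environment variables that can make a Python HTTP
# client route even localhost traffic through a proxy, producing
# unexpected 401s. The variable names are the conventional ones
# honored by urllib/requests.
import os

PROXY_VARS = ('HTTP_PROXY', 'HTTPS_PROXY', 'http_proxy', 'https_proxy')

def find_proxy_settings(environ=os.environ):
    """Return any proxy variables that are set in the environment."""
    return {name: environ[name] for name in PROXY_VARS if name in environ}

print(find_proxy_settings())
# If a proxy is set, exempting localhost is usually enough, e.g.:
# os.environ['NO_PROXY'] = 'localhost,127.0.0.1'
```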
thank you everyone!

Sentry (Django) Configuration issue - SENTRY_ALLOW_ORIGIN

I'm having issues with Sentry running on my internal server. I walked through the docs to get this installed on a Centos machine. It seems to run, but none of the asynchronous javascript is working.
Can someone help me find my mistake?
This is what Chrome keeps complaining about:
XMLHttpRequest cannot load
http://test.example.com/api/main-testproject/testproject/poll/. No
'Access-Control-Allow-Origin' header is present on the requested
resource. Origin 'http://test.example.com:9000' is therefore not allowed
access.
I'm new to Django, but I am comfortable with python web services. I figured there was surely a configuration I missed. I found something in the docs referring to a setting I should use: SENTRY_ALLOW_ORIGIN.
# You MUST configure the absolute URI root for Sentry:
SENTRY_URL_PREFIX = 'http://test.example.com' # No trailing slash!
SENTRY_ALLOW_ORIGIN = "http://test.example.com"
I even tried various paths to my server by using the fully qualified domain name, as well as the IP. None of this seemed to help. As you can see from the chrome error, I was actively connected to the domain name that was throwing the error.
I found my issue. The XMLHttpRequest error shows that port 9000 is used. This needs to be specified in SENTRY_URL_PREFIX.
SENTRY_URL_PREFIX = 'http://test.example.com:9000'
edit:
I even found this answer listed in the FAQ:
https://docs.getsentry.com/on-premise/server/faq/
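The underlying reason the port matters is that browsers compare CORS origins as scheme + host + port, so the two URLs in the error are different origins even though the hostname is identical. A stdlib sketch of that comparison:

```python
# Sketch: why 'http://test.example.com' and 'http://test.example.com:9000'
# are different origins for CORS purposes. Simplified illustration of the
# browser's same-origin comparison, not a spec-complete implementation.
from urllib.parse import urlsplit

def origin_of(url):
    """Return the (scheme, host, port) triple a browser compares."""
    parts = urlsplit(url)
    default_ports = {'http': 80, 'https': 443}
    return (parts.scheme, parts.hostname, parts.port or default_ports[parts.scheme])

print(origin_of('http://test.example.com'))       # ('http', 'test.example.com', 80)
print(origin_of('http://test.example.com:9000'))  # ('http', 'test.example.com', 9000)
print(origin_of('http://test.example.com') ==
      origin_of('http://test.example.com:9000'))  # False
```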

Change web service url for a suds client on runtime (keeping the wsdl)

First of all, my question is similar to this one, but it's a little bit different.
What we have is a series of environments, with the same set of services.
For some environments (the local ones) we can get access to the wsdl, and thus generating the suds client.
For external environments, we cannot access the wsdl. But since the services are the same, I was hoping I could change just the URL without regenerating the client.
I've tried cloning the client, but it doesn't work.
Edit: adding code:
host='http://.../MyService.svc'
wsdl_file = 'file://..../wsdl/MyService.wsdl'
client = suds.client.Client(wsdl_file, location=host, cache=None)
#client = baseclient.clone()
#client.options.location = otherhost
client.set_options(port='BasicHttpBinding_IMyService')
result = client.service.IsHealthy()
That gives me this exception:
The message with Action 'http://tempuri.org/IMyService/IsHealthy' cannot be processed at the receiver, due to a ContractFilter mismatch at the EndpointDispatcher. This may be because of either a contract mismatch (mismatched Actions between sender and receiver) or a binding/security mismatch between the sender and the receiver. Check that sender and receiver have the same contract and the same binding (including security requirements, e.g. Message, Transport, None).
The thing is, if I set the client directly to the host, it works fine:
client = suds.client.Client(host)
As you can see, I've tried cloning the client, but with the same exception. I even tried this:
baseclient = suds.client.Client(host)
client = baseclient.clone()
client.options.location = otherhost
....
And got the same exception.
Can anyone help me?
client.sd[0].service.setlocation(new_url)
...is the "manual" way, ie. per service-description.
client.set_options(location=new_url)
...should also work, per the author.
options is a wrapped/protected attr -- direct edits may very well be ignored.
I've got it!
I don't even know how I figured it out, but with a little guessing and a lot of luck I ended up with this:
wsdl_file = 'file://...../MyService.wsdl'
client = suds.client.Client(wsdl_file)
client.wsdl.url = host #this line did the trick
client.set_options(port='BasicHttpBinding_IMyService')
result = client.service.IsHealthy()
And it works!
I can't find any documentation on that property (client.wsdl.url), but it works, so I'm posting it in case someone has the same problem.
You might be able to do that by specifying the location of the service. Assuming you have a Client object called client, you can modify the service location by updating the URL in client.options.location.
Additionally, you can use a local copy of a WSDL file as the url when constructing the client by using a file:// scheme for the URL, e.g. file:///path/to/service.wsdl. So this could be another option for you. Of course, you would also have to specify the location so that the default location from within the WSDL is overridden.
