Authenticating to RabbitMQ using ExternalCredentials - Python

I have a RabbitMQ server and use the pika library with Python to produce/consume messages. For development purposes, I was simply using:
credentials = pika.PlainCredentials(<user-name>, <password>)
I want to change that to use pika.ExternalCredentials with TLS.
I have set up my RabbitMQ server to listen for TLS on port 5671 and have configured it correctly. I am able to communicate with RabbitMQ from localhost, but the moment I try to communicate with it from outside localhost, the connection fails. I have a feeling my "credentials" are based on the "guest" user in RabbitMQ.
rabbitmq.config
%% -*- mode: erlang -*-
[
 {rabbit,
  [
   {ssl_listeners, [5671]},
   {auth_mechanisms, ['PLAIN', 'AMQPLAIN', 'EXTERNAL']},
   {ssl_options, [{cacertfile, "~/tls-gen/basic/result/ca_certificate.pem"},
                  {certfile,   "~/tls-gen/basic/result/server_certificate.pem"},
                  {keyfile,    "~/tls-gen/basic/result/server_key.pem"},
                  {verify, verify_none},
                  {ssl_cert_login_from, common_name},
                  {fail_if_no_peer_cert, false}]}
  ]}
].
I can confirm this works, since in my logs for rabbitmq I see:
2019-08-21 15:34:47.663 [info] <0.442.0> started TLS (SSL) listener on [::]:5671
Server-side, everything seems to be set up; I have also generated the certificates and all the required .pem files.
test_rabbitmq.py
import pika
import ssl
from pika.credentials import ExternalCredentials

context = ssl.create_default_context(cafile="~/tls-gen/basic/result/ca_certificate.pem")
context.load_cert_chain("~/tls-gen/basic/result/client_certificate.pem",
                        "~/tls-gen/basic/result/client_key.pem")
ssl_options = pika.SSLOptions(context, "10.154.0.27")
params = pika.ConnectionParameters(port=5671, ssl_options=ssl_options,
                                   credentials=ExternalCredentials())
connection = pika.BlockingConnection(params)
channel = connection.channel()
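One detail worth double-checking in scripts like this: Python does not expand "~" in file paths the way a shell does, so cafile="~/tls-gen/..." is opened as a literal path. A minimal stdlib sketch (the path is the one from the question, used here as a placeholder):

```python
import os.path

# ssl.create_default_context(cafile=...) opens the path exactly as given;
# "~" is not expanded automatically, so expand it explicitly first.
ca_path = os.path.expanduser("~/tls-gen/basic/result/ca_certificate.pem")

# After expansion the path starts with the real home directory, not "~".
assert not ca_path.startswith("~")
```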
When I run the script locally:
(<Basic.GetOk(['delivery_tag=1', 'exchange=', 'message_count=0', 'redelivered=False', 'routing_key=foobar'])>, <BasicProperties>, b'Hello, world!')
When I run the script from another instance:
Traceback (most recent call last):
File "pbbarcode.py", line 200, in <module>
main()
File "pbbarcode.py", line 187, in main
connection = pika.BlockingConnection(params)
File "/usr/local/lib/python3.7/site-packages/pika/adapters/blocking_connection.py", line 359, in __init__
self._impl = self._create_connection(parameters, _impl_class)
File "/usr/local/lib/python3.7/site-packages/pika/adapters/blocking_connection.py", line 450, in _create_connection
raise self._reap_last_connection_workflow_error(error)
pika.exceptions.AMQPConnectionError
When I run the script locally after deleting the guest user:
Traceback (most recent call last):
File "test_mq.py", line 12, in <module>
with pika.BlockingConnection(conn_params) as conn:
File "/home/daudn/.local/lib/python3.7/site-packages/pika/adapters/blocking_connection.py", line 359, in __init__
self._impl = self._create_connection(parameters, _impl_class)
File "/home/daudn/.local/lib/python3.7/site-packages/pika/adapters/blocking_connection.py", line 450, in _create_connection
raise self._reap_last_connection_workflow_error(error)
pika.exceptions.ProbableAuthenticationError: ConnectionClosedByBroker: (403) 'ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.'
It seems the TLS connection is still authenticating as the "guest" user, and RabbitMQ doesn't allow "guest" to connect from outside localhost. How can I use TLS with a different user?
When I delete the guest user, this is what the rabbitmq log says:
2019-08-22 10:14:40.054 [info] <0.735.0> accepting AMQP connection <0.735.0> (127.0.0.1:59192 -> 127.0.0.1:5671)
2019-08-22 10:14:40.063 [error] <0.735.0> Error on AMQP connection <0.735.0> (127.0.0.1:59192 -> 127.0.0.1:5671, state: starting):
PLAIN login refused: user 'guest' - invalid credentials
2019-08-22 10:14:40.063 [warning] <0.735.0> closing AMQP connection <0.735.0> (127.0.0.1:59192 -> 127.0.0.1:5671):
client unexpectedly closed TCP connection
2019-08-22 10:15:12.613 [info] <0.743.0> Creating user 'guest'
2019-08-22 10:15:28.370 [info] <0.750.0> Setting user tags for user 'guest' to [administrator]
2019-08-22 10:15:51.352 [info] <0.768.0> Setting permissions for 'guest' in '/' to '.*', '.*', '.*'
2019-08-22 10:15:54.237 [info] <0.774.0> accepting AMQP connection <0.774.0> (127.0.0.1:59202 -> 127.0.0.1:5671)
2019-08-22 10:15:54.243 [info] <0.774.0> connection <0.774.0> (127.0.0.1:59202 -> 127.0.0.1:5671): user 'guest' authenticated and granted access to vhost '/'
This also seems to confirm that the TLS connection is still using a username and password to connect to RabbitMQ rather than the client certificate. Help!
References:
tls_official_example
pika_official_tls_docs
added_authentication_external

You will have to enable the rabbitmq-auth-mechanism-ssl plugin; I think you are missing that part.
To enable the plugin, run the following (example shown for a Windows setup):
rabbitmq-plugins.bat enable rabbitmq_auth_mechanism_ssl

Going to leave this here for future reference
ssl_options = pika.SSLOptions(context, "rabbitmq-node-name")
params = pika.ConnectionParameters(host="rabbitmq-node-name", port=5671,
                                   ssl_options=ssl_options,
                                   credentials=ExternalCredentials())
The confusion was that, when doing SSLOptions(context, "rabbitmq-node-name"), I thought I had supplied the host there and did not have to supply it again in the arguments to ConnectionParameters(). That turns out to be incorrect: if no host is supplied, it defaults to localhost, which is why the script ran locally but not from outside the local network.

Related

Failure to authenticate using Fitbit API, worked yesterday, no idea how to interpret error log

I am a complete beginner to programming (I am using Python through Jupyter Notebooks) and I especially have no idea what any of these errors mean or how to debug them. I obtained a Client ID and Client Secret from the Fitbit API and was able to successfully log in and pull some data when I ran my code yesterday, but it no longer runs, apparently due to authentication issues.

As far as I can tell, the only thing I have done that may have tripped something up is trying to set up an API for a friend and using his Client ID/Secret in my code; but I then re-ran my code using my own Client ID/Secret and it still no longer worked. I have absolutely no idea what any of the errors or ports mean. Another post on Stack Exchange mentioned looking into localhost:8080, but it brings up a 404 error. Lastly, when I ran the code yesterday, a screen would pop up on the Fitbit site asking me to log in; it no longer does that, and instead says that it can't connect.

All of my code is based on the following tutorial: https://towardsdatascience.com/collect-your-own-fitbit-data-with-python-ff145fa10873
import os
# use the following to re-direct working directory to where cloned Fitbit GitHub repo is located
%cd C:\Users\David\Documents\python-fitbit-master
cwd = os.getcwd()
import gather_keys_oauth2 as Oauth2
import fitbit
import pandas as pd
import datetime
CLIENT_ID = 'XXXXXX'
CLIENT_SECRET = 'X#X#X#X#...'
server = Oauth2.OAuth2Server(CLIENT_ID, CLIENT_SECRET)
server.browser_authorize()
ACCESS_TOKEN = str(server.fitbit.client.session.token['access_token'])
REFRESH_TOKEN = str(server.fitbit.client.session.token['refresh_token'])
auth2_client = fitbit.Fitbit(CLIENT_ID, CLIENT_SECRET, oauth2=True,
                             access_token=ACCESS_TOKEN,
                             refresh_token=REFRESH_TOKEN)
[16/Mar/2020:01:15:09] ENGINE Listening for SIGTERM.
[16/Mar/2020:01:15:09] ENGINE Bus STARTING
[16/Mar/2020:01:15:09] ENGINE Set handler for console events.
CherryPy Checker:
The Application mounted at '' has an empty config.
[16/Mar/2020:01:15:09] ENGINE Started monitor thread 'Autoreloader'.
[16/Mar/2020:01:15:10] ENGINE Error in 'start' listener <bound method Server.start of <cherrypy._cpserver.Server object at 0x000001B599668748>>
Traceback (most recent call last):
File "C:\Users\David\Anaconda3\envs\Renv\lib\site-packages\portend.py", line 115, in free
Checker(timeout=0.1).assert_free(host, port)
File "C:\Users\David\Anaconda3\envs\Renv\lib\site-packages\portend.py", line 69, in assert_free
list(itertools.starmap(self._connect, info))
File "C:\Users\David\Anaconda3\envs\Renv\lib\site-packages\portend.py", line 85, in _connect
raise PortNotFree(tmpl.format(**locals()))
portend.PortNotFree: Port 127.0.0.1 is in use on 8080.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\David\Anaconda3\envs\Renv\lib\site-packages\cherrypy\process\wspbus.py", line 230, in publish
output.append(listener(*args, **kwargs))
File "C:\Users\David\Anaconda3\envs\Renv\lib\site-packages\cherrypy\_cpserver.py", line 180, in start
super(Server, self).start()
File "C:\Users\David\Anaconda3\envs\Renv\lib\site-packages\cherrypy\process\servers.py", line 177, in start
portend.free(*self.bind_addr, timeout=Timeouts.free)
File "C:\Users\David\Anaconda3\envs\Renv\lib\site-packages\portend.py", line 119, in free
raise Timeout("Port {port} not free on {host}.".format(**locals()))
portend.Timeout: Port 8080 not free on 127.0.0.1.
[16/Mar/2020:01:15:10] ENGINE Shutting down due to error in start listener:
Traceback (most recent call last):
File "C:\Users\David\Anaconda3\envs\Renv\lib\site-packages\cherrypy\process\wspbus.py", line 268, in start
self.publish('start')
File "C:\Users\David\Anaconda3\envs\Renv\lib\site-packages\cherrypy\process\wspbus.py", line 248, in publish
raise exc
cherrypy.process.wspbus.ChannelFailures: Timeout('Port 8080 not free on 127.0.0.1.')
[16/Mar/2020:01:15:10] ENGINE Bus STOPPING
[16/Mar/2020:01:15:10] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('127.0.0.1', 8080)) already shut down
[16/Mar/2020:01:15:10] ENGINE Removed handler for console events.
[16/Mar/2020:01:15:10] ENGINE Stopped thread 'Autoreloader'.
[16/Mar/2020:01:15:10] ENGINE Bus STOPPED
[16/Mar/2020:01:15:10] ENGINE Bus EXITING
I was in the same boat a few days ago, except that in my case the auth URL never popped up. In your case, the traceback shows that port 8080 (the default port CherryPy listens on) is not actually free, so it can't start the local server that serves the auth URL:
raise Timeout("Port {port} not free on {host}.".format(**locals()))
portend.Timeout: Port 8080 not free on 127.0.0.1.
This might have happened when you tried to set up the API calls for your friend.
Regardless, you should free up port 8080 on your machine and re-run your code. (Alternatively, you can modify the redirect URI in the __init__ function of gather_keys_oauth2.py to use a different port, but you'd then also have to change the callback URL in your app settings.)
Hope this helps.
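If it helps, whether port 8080 is actually free can be checked with nothing but the standard library before re-running the notebook (this is a generic sketch; nothing Fitbit-specific is assumed):

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing is currently bound to host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

if not port_is_free(8080):
    print("Port 8080 is busy - find and stop the process holding it, "
          "or point gather_keys_oauth2 at a different port")
```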

Nameko/RabbitMQ: OSError: Server unexpectedly closed connection

I have two nameko services that communicate via RPC over RabbitMQ. Locally, with docker-compose, everything works fine. Then I deployed everything to a Kubernetes/Istio cluster on DigitalOcean and started getting the following errors. The error repeats continuously, once every 10/20/60 minutes. Communication between the services works fine (before and after the reconnect, I suppose), but the logs are cluttered with these unexpected reconnections that should not happen.
Helm RabbitMQ configuration file
I tried to increase the RAM and CPU configuration (to the values in the configuration files above: 512Mb and 400m), but the behavior is the same.
NB: I don't touch the services after deployment; with no messages being sent and no requests made, I still see this error for the first time after around 60 minutes. When I make requests they succeed, but eventually these errors still appear in the logs afterwards.
Nameko service log:
"Connection to broker lost, trying to re-establish connection...",
"exc_info": "Traceback (most recent call last):
File \"/usr/local/lib/python3.6/site-packages/kombu/mixins.py\", line 175, in run for _ in self.consume(limit=None, **kwargs):
File \"/usr/local/lib/python3.6/site-packages/kombu/mixins.py\", line 197, in consume conn.drain_events(timeout=safety_interval)
File \"/usr/local/lib/python3.6/site-packages/kombu/connection.py\", line 323, in drain_events
return self.transport.drain_events(self.connection, **kwargs)
File \"/usr/local/lib/python3.6/site-packages/kombu/transport/pyamqp.py\", line 103, in drain_events
return connection.drain_events(**kwargs)
File \"/usr/local/lib/python3.6/site-packages/amqp/connection.py\", line 505, in drain_events
while not self.blocking_read(timeout):
File \"/usr/local/lib/python3.6/site-packages/amqp/connection.py\", line 510, in blocking_read\n frame = self.transport.read_frame()
File \"/usr/local/lib/python3.6/site-packages/amqp/transport.py\", line 252, in read_frame
frame_header = read(7, True)
File \"/usr/local/lib/python3.6/site-packages/amqp/transport.py\", line 446, in _read
raise IOError('Server unexpectedly closed connection')
OSError: Server unexpectedly closed connection"}
{"name": "kombu.mixins", "asctime": "29/12/2019 20:22:54", "levelname": "INFO", "message": "Connected to amqp://user:**@rabbit-rabbitmq:5672//"}
RabbitMQ log
2019-12-29 20:22:54.563 [warning] <0.718.0> closing AMQP connection <0.718.0> (127.0.0.1:46504 -> 127.0.0.1:5672, vhost: '/', user: 'user'):
client unexpectedly closed TCP connection
2019-12-29 20:22:54.563 [warning] <0.705.0> closing AMQP connection <0.705.0> (127.0.0.1:46502 -> 127.0.0.1:5672, vhost: '/', user: 'user'):
client unexpectedly closed TCP connection
2019-12-29 20:22:54.681 [info] <0.3424.0> accepting AMQP connection <0.3424.0> (127.0.0.1:43466 -> 127.0.0.1:5672)
2019-12-29 20:22:54.689 [info] <0.3424.0> connection <0.3424.0> (127.0.0.1:43466 -> 127.0.0.1:5672): user 'user' authenticated and granted access to vhost '/'
2019-12-29 20:22:54.690 [info] <0.3431.0> accepting AMQP connection <0.3431.0> (127.0.0.1:43468 -> 127.0.0.1:5672)
2019-12-29 20:22:54.696 [info] <0.3431.0> connection <0.3431.0> (127.0.0.1:43468 -> 127.0.0.1:5672): user 'user' authenticated and granted access to vhost '/'
UPD:
Rabbit pod yaml
The issue is the Istio proxy getting injected as a sidecar container inside the RabbitMQ pod. You need to exclude the Istio proxy sidecar from the RabbitMQ pod; then it should work.
Have you tried increasing the heartbeat of the connection? It is likely that your connection gets terminated at a lower level due to inactivity.
Also make sure that you have enough resources to run all containers on the host machine.
I had a similar issue, and I am not sure which of the following solved it for me:
Proper resource management.
Making the entry point in the Dockerfile a bash script that runs the worker file in an infinite loop. (I know that one solved the memory leaks: the bash script executes the file with your code, your code listens for a message, gets a message, executes, and exits; then the bash script loads it again.) I had my workers restarting after each message (the whole worker exits and a new one is started), which was a bad idea.
Hope this gets you somewhere.
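On the heartbeat point: with nameko, the AMQP heartbeat is driven by the service configuration. A sketch of a config.yaml is below; treat the key name and the 60-second value as assumptions to verify against your nameko version rather than a definitive recipe:

```yaml
# config.yaml - sketch only; check the HEARTBEAT key against your nameko version
AMQP_URI: "amqp://user:password@rabbit-rabbitmq:5672/"
HEARTBEAT: 60   # seconds; keep it well below any proxy/LB idle timeout
```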

How to connect to RabbitMQ on a Vagrant host?

I set up a server using Vagrant on a virtual machine. After installing RabbitMQ, I tried to connect to it using a script outside the VM. Django and RabbitMQ are already running on the VM. After running the script I get an exception:
pika.exceptions.IncompatibleProtocolError: StreamLostError: ('Transport indicated EOF',)
How can I solve this problem?
My friend has already run the code below on a Raspberry Pi, where it executed successfully. The only things I changed on my PC were the hostname (from the specified IP to '127.0.0.1') and the port number, which I added.
import pika
import sys
import random
import time

credentials = pika.PlainCredentials(username='admin', password='admin')
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='127.0.0.1', port=15672, credentials=credentials))
channel = connection.channel()
channel.queue_declare(queue='hello', durable=True)
Error message:
$ python send.py
Traceback (most recent call last):
File "send.py", line 8, in <module>
connection = pika.BlockingConnection(pika.ConnectionParameters(host='127.0.0.1',port=15672,credentials=credentials))
File "C:\Users\Pigeonnn\AppData\Local\Programs\Python\Python37\lib\site-packages\pika\adapters\blocking_connection.py", line 360, in __init__
self._impl = self._create_connection(parameters, _impl_class)
File "C:\Users\Pigeonnn\AppData\Local\Programs\Python\Python37\lib\site-packages\pika\adapters\blocking_connection.py", line 451, in _create_connection
raise self._reap_last_connection_workflow_error(error)
pika.exceptions.IncompatibleProtocolError: StreamLostError: ('Transport indicated EOF',)
@Pigeonnn provided the answer to his own question in a comment on this very post:
Actually I've just found a solution. The thing is, if you want to talk to RabbitMQ you need to connect through port 5672 - not 15672. Changed ports, forwarded, and everything works :)
Quoting the docs and highlighting the response, the RabbitMQ listening ports are:
AMQP: 5672
AMQP/SSL: 5671
HTTP management UI: 15672
First, forward a host port to a guest port in the Vagrant configuration file (Vagrantfile). Be careful not to use a host port that is already taken:
Vagrant.configure("2") do |config|
  config.vm.network "forwarded_port", guest: 5672, host: 5671 # RabbitMQ
end
Then connect like so:
credentials = pika.PlainCredentials(username='admin', password='admin')
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='127.0.0.1', port=5671, credentials=credentials))
Don't forget to configure the admin user accordingly.
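Before involving pika at all, it may be worth verifying that the forwarded port answers at the TCP level; a plain-socket probe is enough (the host/port below match the Vagrantfile example above):

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With the port forwarding in place, the host side should answer here:
# can_connect("127.0.0.1", 5671)
```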

How to connect to Tor control port (9051) from a remote host?

I'm trying to connect to the control port (9051) of Tor from a remote machine using the stem Python library.
dum.py
from stem import Signal
from stem.control import Controller

def set_new_ip():
    """Change IP using Tor"""
    with Controller.from_port(address='10.130.8.169', port=9051) as controller:
        controller.authenticate(password='password')
        controller.signal(Signal.NEWNYM)

set_new_ip()
I'm getting the following error
Traceback (most recent call last):
File "/home/jkl/anaconda3/lib/python3.5/site-packages/stem/socket.py", line 398, in _make_socket
control_socket.connect((self._control_addr, self._control_port))
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "dum.py", line 28, in <module>
set_new_ip();
File "dum.py", line 7, in set_new_ip
with Controller.from_port(address = '10.130.4.162', port=9051) as controller:
File "/home/jkl/anaconda3/lib/python3.5/site-packages/stem/control.py", line 998, in from_port
control_port = stem.socket.ControlPort(address, port)
File "/home/jkl/anaconda3/lib/python3.5/site-packages/stem/socket.py", line 372, in __init__
self.connect()
File "/home/jkl/anaconda3/lib/python3.5/site-packages/stem/socket.py", line 243, in connect
self._socket = self._make_socket()
File "/home/jkl/anaconda3/lib/python3.5/site-packages/stem/socket.py", line 401, in _make_socket
raise stem.SocketError(exc)
stem.SocketError: [Errno 111] Connection refused
Then I went through the /etc/tor/torrc config file. It says:
## The port on which Tor will listen for local connections from Tor
## controller applications, as documented in control-spec.txt.
ControlPort 9051
## If you enable the controlport, be sure to enable one of these
## authentication methods, to prevent attackers from accessing it.
HashedControlPassword 16:E5364A963AF943CB607CFDAE3A49767F2F8031328D220CDDD1AE30A471
SocksListenAddress 0.0.0.0:9050
CookieAuthentication 1
My questions are:
How do I connect to the control port of Tor from a remote host?
Is there any workaround or config parameter that I need to set?
This is possibly a duplicate of "Stem is giving the 'Unable to connect to port 9051' error", which has no answers.
Tested with Tor 0.3.3.7.
The ControlListenAddress config option is OBSOLETE; Tor will ignore it and log the following message:
[warn] Skipping obsolete configuration option 'ControlListenAddress'
You can still set ControlPort to 0.0.0.0:9051 in your torrc file. Tor is not very happy about it (and rightly so) and will warn you:
You have a ControlPort set to accept connections from a non-local
address. This means that programs not running on your computer can
reconfigure your Tor. That's pretty bad, since the controller protocol
isn't encrypted! Maybe you should just listen on 127.0.0.1 and use a
tool like stunnel or ssh to encrypt remote connections to your control
port.
Also, you have to set either CookieAuthentication or HashedControlPassword, otherwise the ControlPort will be closed:
You have a ControlPort set to accept unauthenticated connections from
a non-local address. This means that programs not running on your
computer can reconfigure your Tor, without even having to guess a
password. That's so bad that I'm closing your ControlPort for you. If
you need to control your Tor remotely, try enabling authentication and
using a tool like stunnel or ssh to encrypt remote access.
All the risks mentioned in @drew010's answer still stand.
You'd need to set ControlListenAddress in addition to ControlPort. You could set it to 0.0.0.0 (binds to all addresses) or to a specific IP your server listens on.
If you choose to do this, it would be extremely advisable to configure your firewall to only allow control connections from specific IPs and block them from all others.
Also note that the control-port traffic will not be encrypted, so it would also be advisable to use cookie authentication so your password isn't sent over the net.
You could also run a hidden service to expose the control port over Tor and then connect to the hidden service using Stem and Tor.
But the general answer is that ControlListenAddress needs to be set to bind to an IP other than 127.0.0.1 (localhost).
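As both answers note, exposing the control port directly is risky. One of the tools the Tor warning itself suggests is ssh; a sketch of the tunnel approach (user and host are placeholders standing in for the machines in the question):

```shell
# Keep ControlPort bound to 127.0.0.1 on the Tor host, then forward a
# local port to it over SSH instead of exposing 9051 to the network:
ssh -N -L 9051:127.0.0.1:9051 user@10.130.8.169

# With the tunnel up, stem connects to the local end exactly as if Tor
# were local: Controller.from_port(address='127.0.0.1', port=9051)
```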

Celery worker (on remote EC2) cannot communicate with RabbitMQ (on another EC2)

I wrote an asynchronous email service that uses Celery and RabbitMQ for our Flask application. I have RabbitMQ running on one server - I created a user and a vhost and set the permissions, and I created an inbound TCP rule for port 5672; its outbound rules are open to all. The Celery application is on another EC2 instance, with a pretty similar security setup. Before setting up celeryd/supervisord, I tried to start the Celery worker, but unfortunately it is giving me errors.
This is the celery config:
celery = Celery('myapp.celery',
                broker='amqp://user:password@rabbit:5672/cel_host',
                backend='amqp:/cel_host',
                include='myapp.tasks')
This is the relevant traceback:
File "/usr/local/lib/python2.7/dist-packages/celery/backends/__init__.py", line 56, in get_backend_by_url
return get_backend_cls(backend, loader), url
File "/usr/local/lib/python2.7/dist-packages/celery/utils/functional.py", line 133, in _M
value = fun(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/backends/__init__.py", line 45, in get_backend_cls
return symbol_by_name(backend, aliases)
File "/usr/local/lib/python2.7/dist-packages/kombu/utils/__init__.py", line 84, in symbol_by_name
return getattr(module, cls_name) if cls_name else module
AttributeError
:
'module' object has no attribute '/cel_host'
*cel_host is the vhost for RabbitMQ.
Everything was working fine when I was working on my local machine. Any help will be highly appreciated.
**Both instances are on a private subnet in our VPC
EDIT:
When I changed the broker_url to amqp://guest:guest@rabbit:5672//?ssl=1, I get the error:
consumer: Cannot connect to amqp://guest@rabbit:5672//: [Errno -2] Name or service not known.
So it's definitely a connection issue that I am not sure how to fix.
Finally found out what the problem was - and it was a dumb mistake on my part. The broker_url setting was fine, but the backend setting wasn't.
The first part of the URL - amqp:// - is the transport, which is fixed. It is the trailing '/' that can be replaced by another vhost:
broker = 'amqp://<my_app:password>@<hostname>:5672/<vhost>',
backend = 'amqp://<vhost>'
So in a sense, Lycha, you were correct!
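The mix-up is easier to see with nothing but the standard library: in kombu-style URLs the vhost rides in the path component, so a string like amqp:/cel_host (one slash, no authority section) parses into something with no host at all. A quick sketch with placeholder credentials:

```python
from urllib.parse import urlparse

# A well-formed broker URL: the vhost is the path component.
good = urlparse("amqp://my_app:password@rabbit:5672/cel_host")
print(good.hostname, good.port, good.path)   # rabbit 5672 /cel_host

# The original backend string: a single slash means no authority section,
# so there is no host and the whole thing lands in the path.
bad = urlparse("amqp:/cel_host")
print(bad.hostname, bad.path)                # None /cel_host
```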
