RabbitMQ credential issue: only works with localhost - Python

I have a question regarding RabbitMQ. I am trying to set up a messaging system based on the queue name, and so far everything has been fine with localhost. As soon as I set some credentials for the connection, I get a timeout error. (I have lengthened the timeout as well.) I have also given both my user and the guest account administrative privileges. I get this error when running both the consume and produce scripts. Port 5672 is open as well. This is all being done on an Ubuntu 14.04 LTS machine with Python 2.7.14. Guest and my other RabbitMQ user are both allowed to use the default vhost too.
import pika, json

credentials = pika.credentials.PlainCredentials('guest', 'guest')
connection = pika.BlockingConnection(pika.ConnectionParameters('<ip here>',
                                                               5672, '/', credentials))
channel = connection.channel()
result = channel.queue_declare(queue='hello', durable=True)

def callback(ch, method, properties, body):
    print "localhost received %r" % json.loads(body)

channel.basic_consume(callback, queue='hello')

print 'localhost waiting for messages. To exit press CTRL+C'
channel.start_consuming()
channel.close()
Here is my error message too. It's just a timeout, which would make me think the connection is failing and this is a network problem, but when I replace the IP in the connection parameters with 'localhost', everything works fine. Any ideas?
pika.exceptions.ConnectionClosed: Connection to <ip here>:5672 failed: timeout

The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
You are probably running into multiple issues.
First of all, the guest user can only connect via localhost by default. This document goes into detail. FWIW, that document is the first hit when site:rabbitmq.com localhost guest is used as a search term on Google.
Second, timeout means that the TCP connection can't be established. Please see this document for a step-by-step guide to diagnosis.
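To illustrate the first point, a commonly used fix is to create a dedicated user on the broker and connect with it instead of guest. This is only a minimal sketch: the user name, password and IP below are placeholders, and the rabbitmqctl commands in the comment are assumed to have been run on the broker host first.

# Sketch, assuming a dedicated user has been created on the broker:
#   rabbitmqctl add_user myapp s3cret
#   rabbitmqctl set_permissions -p / myapp ".*" ".*" ".*"
# 'myapp', 's3cret' and 192.0.2.10 are placeholders.
import pika

credentials = pika.PlainCredentials('myapp', 's3cret')
params = pika.ConnectionParameters('192.0.2.10', 5672, '/', credentials)
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue='hello', durable=True)
connection.close()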

Related

Mosquitto - unable to connect from remote computer

I am trying to test a Mosquitto MQTT server. My plan is to hopefully get it to ingest some notifications from our monitoring system in my work IT dept. Seems perfect for the task. I was able to install Mosquitto on an Ubuntu 20.04 server, and testing with the mosquitto_sub and mosquitto_pub clients worked right away (see attached screenshot). I was also able to Google around and set up some Python scripts using the paho-mqtt module for subscribe and publish. I set up the subscribe script as a service and it runs fine. I can receive published messages from both mosquitto_pub and the "publish" Python script...
Now the issues come up when I try publishing to the Mosquitto server from a remote computer within my office. For the life of me I cannot determine what the issue is. :(
I am able to telnet to the Mosquitto server on port 1883, so it does not seem to be a firewall issue on the Mosquitto server... Whenever I try to use the publish Python script from the remote computer, it executes but it does not work -- no messages are received at the subscriber running on the server. One odd thing is that the "on_connect" portion of the script does not work when the script is run on the remote computer; no "Connected to MQTT Broker!" message is printed when the script is run remotely. (It does work when it's run locally on the Mosquitto server.)
I am attaching a screenshot of my /etc/mosquitto/conf.d/default.conf file. I have tried all sorts of combinations in the config, and each time things work when run on the server itself but not remotely. I tried putting the IP address of the server in the config as "listener 1883 10.x.x.x". (Note: for this post I am using "10.x.x.x" instead of my real IP address.) The last thing I tried was putting the adapter name, and it still does not work. As you can see, I have also defined authentication (usernames, passwords) for a pubuser and a subuser. The account info does work with the mosquitto_sub and mosquitto_pub clients, and when used in the Python scripts (when the scripts are run on the Mosquitto server), so I don't think it's an account info issue. All signs are pointing to either some misconfiguration in Mosquitto or some firewall policy at my work location. Please help!
Please share any tips or possible fixes. I might be doing something wrong in my config; please do let me know if you see something amiss. I am a newbie with Mosquitto and MQTT but would love for it to work; it kind of ruins the point if I cannot publish to the server from a remote computer though :)..
Below is the example Python code I am using on the remote computer to publish (it works when run locally on the Mosquitto server). I redacted the IP address of the server as 10.x.x.x.
When I run the script from the remote computer I just get "Failed to send message to topic python/mqtt":
import random
import time

from paho.mqtt import client as mqtt_client

broker = '10.x.x.x'
port = 1883
topic = "python/mqtt"
# generate client ID with pub prefix randomly
# client_id = f'python-mqtt-{random.randint(0, 1000)}'
# Set the static client id
client_id = 'python-mqtt01'
username = 'pubuser'
password = 'something'

def connect_mqtt():
    def on_connect(client, userdata, flags, rc):
        if rc == 0:
            print("Connected to MQTT Broker!")
        else:
            print("Failed to connect, return code %d\n", rc)

    client = mqtt_client.Client(client_id)
    client.username_pw_set(username, password)
    client.on_connect = on_connect
    client.connect(broker, port)
    return client

def publish(client):
    msg_count = 0
    while True:
        time.sleep(1)
        msg = f"messages: {msg_count}"
        result = client.publish(topic, msg)
        # result: [0, 1]
        status = result[0]
        if status == 0:
            print(f"Send `{msg}` to topic `{topic}`")
        else:
            print(f"Failed to send message to topic {topic}")
        msg_count += 1

def run():
    client = connect_mqtt()
    client.loop_start()
    publish(client)

if __name__ == '__main__':
    run()
Thanks for reading! Hopefully someone out there has encountered the same issue and fixed it!
P.S. - forgot to mention I also disabled the firewall on the Ubuntu server until I get things working (ufw disable).
Example of my default.conf (before removing the adapter line and adding log_type all):
listener 1883
allow_anonymous false
bind_interface ens33
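For reference, a minimal listener configuration that Mosquitto documents for authenticated remote clients looks roughly like the sketch below; the password_file path is an assumption, and leaving out bind_interface/bind address makes the listener accept connections on all interfaces:

# Sketch only, not the poster's actual file
listener 1883
allow_anonymous false
password_file /etc/mosquitto/passwd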

VerneMQ single publish messages lost when client is offline

I am quite new to MQTT and brokers, but I am having an issue with VerneMQ not sending offline messages to clients. Here is my setup. I have a backend written in Python which uses the Eclipse Paho MQTT library's single() method to send messages to a connected client. The client, a virtual machine on my development station, runs a client written in Go that uses paho.mqtt.golang to connect to the broker and subscribe.
The call to single() on the backend looks like this:
def send_message(device_id, payload):
    token = get_jwt('my_token').decode()
    mqtt.single(
        f'commands/{device_id}',
        payload=payload,
        qos=2,
        hostname=MESSAGING_HOST,
        port=8080,
        client_id='client_id',
        auth={'username': 'username', 'password': f'Bearer {token}'},
        transport='websockets'
    )
On the client, the session is established with the following options:
func startListenerRun(cmd *cobra.Command, args []string) {
    //mqtt.DEBUG = log.New(os.Stdout, "", 0)
    mqtt.ERROR = log.New(os.Stdout, "", 0)
    opts := mqtt.NewClientOptions().AddBroker(utils.GetMessagingHost()).SetClientID(utils.GetClientId())
    opts.SetKeepAlive(20 * time.Second)
    opts.SetDefaultPublishHandler(f)
    opts.SetPingTimeout(5 * time.Second)
    opts.SetCredentialsProvider(credentialsProvider)
    opts.SetConnectRetry(false)
    opts.SetAutoReconnect(true)
    opts.willQos = 2
    opts.SetCleanSession(false)
I am not showing all the code, but hopefully enough to illustrate how the session is being set up.
I am running VerneMQ as a docker container. We are using the following environment variables to change configuration defaults in the Dockerfile:
ENV DOCKER_VERNEMQ_PLUGINS.vmq_diversity on
ENV DOCKER_VERNEMQ_VMQ_DIVERSITY.myscript1.file /etc/vernemq/authentication.lua
ENV DOCKER_VERNEMQ_VMQ_ACL.acl_file /etc/vernemq/vmq.acl
ENV DOCKER_VERNEMQ_PLUGINS.vmq_acl on
ENV DOCKER_VERNEMQ_RETRY_INTERVAL=3000
As long as the client has an active connection to the broker, the server's published messages arrive seamlessly. However, if I manually close the client's connection to the broker, and then publish a message on the backend to that client, when the client's connection reopens, the message is not resent by the broker. As I said, I am new to MQTT, so I may need to configure additional options, but so far I've yet to determine which. Can anyone shed any light on what might be happening on my setup that would cause offline messages to not be sent? Thanks for any information.
As thrashed out in the comments
Messages will only be queued for an offline client that has subscribed at greater than QOS 0
More details can be found here
You need to set the QoS to 1 or 2 depending on your requirement, and you can also use the --retain flag, which is quite useful. The retain flag makes sure the last message is delivered irrespective of any failure, so you can know the last status of the device. Check this http://www.steves-internet-guide.com/mqtt-retained-messages-example/
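A minimal paho-mqtt sketch of both points, assuming a broker on localhost:1883 and a placeholder topic of commands/device-1: the subscriber needs a persistent (non-clean) session and QoS 1 or 2 for the broker to queue messages while it is offline, and retain=True keeps the last message for clients that subscribe later.

import sys

import paho.mqtt.client as mqtt
from paho.mqtt import publish

TOPIC = 'commands/device-1'  # placeholder topic

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload)

def run_subscriber():
    # clean_session=False keeps a persistent session on the broker;
    # qos=1 (or 2) is required for offline messages to be queued for it.
    sub = mqtt.Client(client_id='device-1', clean_session=False)
    sub.on_message = on_message
    sub.connect('localhost', 1883)
    sub.subscribe(TOPIC, qos=1)
    sub.loop_forever()

def run_publisher():
    # Publish at QoS 1/2 so delivery is acknowledged; retain=True additionally
    # stores the last message on the broker for late subscribers.
    publish.single(TOPIC, payload='reboot', qos=1, retain=True,
                   hostname='localhost', port=1883)

if __name__ == '__main__':
    run_subscriber() if 'sub' in sys.argv else run_publisher()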

RetriesExhaustedError on connecting to HPE iLO 5 through Python iLO REST client

The following is a Python-based RESTful library client (recommended by HPE, https://developer.hpe.com/platform/ilo-restful-api/home) that uses the Redfish REST API (https://github.com/HewlettPackard/python-ilorest-library) to connect to a remote HPE iLO 5 on ProLiant DL360 Gen10 hardware:
#! /usr/bin/python
import redfish
iLO_host = "https://xx.xx.xx.xx"
username = "admin"
password = "xxxxxx"
# Create a REST object
REST_OBJ = redfish.redfish_client(base_url=iLO_host,username=username, password=password, default_prefix='/redfish/v1')
# Login into the server and create a session
REST_OBJ.login(auth="session")
# HTTP GET request
response = REST_OBJ.get("/redfish/v1/systems/1", None)
print response
REST_OBJ.logout()
I am getting RetriesExhaustedError when creating the REST object. However, I can successfully SSH to the server from the VM (RHEL 7.4) where I am running this script. The authentication details are given correctly. I verified that the web server is enabled (both ports 443 and 80) in the iLO Security - Access settings. Also, on my VM the firewalld service has been stopped and iptables is flushed. But still the connection could not be established. What other possibilities can I try?
I found the root cause. The issue was the SSL certificate verification done by the Python code.
Setting the environment variable PYTHONHTTPSVERIFY=0 before running the code turned this off and solved the problem.
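As a sketch of that workaround, the variable has to be present in the environment of the Python process that runs the Redfish code, so it is typically exported in the shell (e.g. PYTHONHTTPSVERIFY=0 python ilo_query.py) or passed to a launched process; the script name below is a placeholder.

# Sketch: launch the iLO script with HTTPS certificate verification disabled.
# 'ilo_query.py' is a placeholder for the script shown above.
import os
import subprocess
import sys

env = dict(os.environ, PYTHONHTTPSVERIFY='0')
subprocess.check_call([sys.executable, 'ilo_query.py'], env=env)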
This is a very old topic, but perhaps this will help other people who have a similar issue when accessing the iLO in any way, not just over Python:
You most likely need to update the firmware on your server so that its TLS support is updated. You will probably need to use an old browser to do this, as modern versions of Mozilla/Chrome will not work with old TLS. I have had luck with Konqueror.

Sending Emails through Django - WinError 10060 A connection attempt failed and GetAddrInfo Error

I've been running an instance of Django on Windows Server 2012 R2 for over a year and I've come to a roadblock. Yesterday something happened; I don't know what it could be. The same two errors keep popping up at different times when trying to send an email:
[WinError 10060] A connection attempt failed because the connected
party did not properly respond after a period of time, or established
connection failed because connected host has failed to respond
and
socket.gaierror: [Errno 11001] getaddrinfo failed
Users are able to connect to the IP address and port that Django is running on (192.168.1.5:8000), but they cannot send emails anymore. A percentage do go through, as described here, but very few.
Things I've tried
1) This solution
import socket
socket.getaddrinfo('localhost', 8000)
Since I'm running python manage.py runserver 192.168.1.5:8000, I added that IP as well, and nothing changed.
2) I went into the firewall settings and made sure that the ports were all good: the SMTP port that is declared in the settings.py file of my Django project, and port 25. All of them, inbound and outbound.
3) I tried sending things from my local machine and it does work. I used other programs that do not use Django to send emails, and they work on all other machines except the server. So I know it's not my email server.
4) I changed the email config to use my Gmail account and it works on all other machines except for the server. So it has to be the environment.
5) Editing http_proxy environment variables
The problem, in my case, was that some install at some point defined an environment variable http_proxy on my machine when I had no proxy. Removing the http_proxy environment variable fixed the problem.
As described here
and in my Django project, in the wsgi.py file:
os.environ['http_proxy'] = "http://192.168.1.5:8080"
os.environ['https_proxy'] = "http://192.168.1.5:8080"
6) Given this answer here (can someone please explain how I would apply it to a Django email function?), I've also tried this method of wrapping it, from the solutions here:
import smtplib
import socks
#socks.setdefaultproxy(TYPE, ADDR, PORT)
socks.setdefaultproxy(socks.SOCKS5, '192.168.1.5', 8080)
socks.wrapmodule(smtplib)
smtpserver = 'smtp.live.com'
AUTHREQUIRED = 1
smtpuser = 'example#hotmail.fr'
smtppass = 'mypassword'
RECIPIENTS = 'mailto#gmail.com'
SENDER = 'example#hotmail.fr'
mssg = "test message"
s = mssg
server = smtplib.SMTP(smtpserver,587)
server.ehlo()
server.starttls()
server.ehlo()
server.login(smtpuser,smtppass)
server.set_debuglevel(1)
server.sendmail(SENDER, [RECIPIENTS], s)
server.quit()
Though I wouldn't like to use such a method, as I'd prefer to use Django's built-in email service.
Since you have not changed the code, the errors you shared show that it's a network-related problem.
It's most probably a DNS issue. In your settings.py you have specified EMAIL_HOST, which I believe is a hostname. You need to check your server's DNS settings.
You mention checking your firewall settings, but what you are not doing is checking the actual connection.
To address the problem you can use a couple of command-line utilities like telnet or nslookup. You can check whether you can resolve the hostname:
nslookup smtp.mail_host.com
This command will most probably fail.
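The same check can be done from Python on the affected server; a rough sketch, with smtp.example.com and 587 standing in for whatever EMAIL_HOST and EMAIL_PORT are set to in settings.py:

# Connectivity check for the SMTP host configured in settings.py.
# 'smtp.example.com' and 587 are placeholders for EMAIL_HOST / EMAIL_PORT.
import socket
import smtplib

host, port = 'smtp.example.com', 587

# Name resolution -- this is what raises "getaddrinfo failed"
print(socket.getaddrinfo(host, port))

# TCP + SMTP handshake -- this is what times out with WinError 10060
server = smtplib.SMTP(host, port, timeout=10)
print(server.ehlo())
server.quit()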
I would like to point out what you did wrong in your steps:
1) You tried getaddrinfo against your own service, when you needed to put your SMTP server's hostname there, which would result in the same error. Sockets are a very primitive part of the connection at the application layer; you don't really need to dig into that.
2) Checking firewall settings is OK.
3) This is a good step which shows that there is a problem with your servers network connection.
4) That is another evidence :)
5) You have got this wrong: these settings are used when you have a proxy server on your network for reaching external networks, but you have configured it incorrectly. You should not set your project's URL as the proxy server.
6) This is more low-level coding. You should not use such a low-level script; it will cause you numerous problems that are already handled by high-level modules (see the sketch below).
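For completeness, the high-level route the asker prefers is Django's own mail API, which takes its SMTP settings (EMAIL_HOST, EMAIL_PORT, EMAIL_HOST_USER, EMAIL_HOST_PASSWORD, EMAIL_USE_TLS) from settings.py; the addresses below are placeholders.

# Django's built-in email API; SMTP details come from settings.py.
from django.core.mail import send_mail

send_mail(
    subject='test message',
    message='test message body',
    from_email='sender@example.com',            # placeholder
    recipient_list=['recipient@example.com'],   # placeholder
    fail_silently=False,
)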
I focused my answer on the strange fact that you can get around the problem using a SOCKS5 proxy. (I believe you. There was no time to ask you for details.) You verified that your example SOCKS5 solution works for you. Django uses the same smtplib, and you can easily wrap it the same way with this code added to wsgi.py:
import smtplib
import socks # it is the package SocksiPy or PySocks
socks.setdefaultproxy(socks.SOCKS5, '192.168.1.5', 8080)
socks.wrapmodule(smtplib)
An HTTP(S) proxy (paragraph 5) is not related, because it does not affect SMTP or other protocols besides HTTP(S): "SOCKS operates at a lower level than HTTP proxying".

SSH server routes tunnel by user

Diagram of what I'm trying to accomplish:
$ sftp joe@gatewayserver.horse
SSHCLIENT_JOE --------> GATEWAY_SERVER (does logic by username to determine
                            |           which socket to forward the connection to)
                            |
                            |       127.0.0.1:1030  CONTAINER_SSHD_SALLY
                            \-----> 127.0.0.1:1031  CONTAINER_SSHD_JOE
                                    127.0.0.1:1032  CONTAINER_SSHD_MRAYMOND
This seems closest to what I'm trying to do:
paramiko server mode port forwarding
http://bitprophet.org/blog/2012/11/05/gateway-solutions/
But instead of the client doing a ProxyCommand or requesting a "direct-tcpip" channel, I want the forwarding to be done by the server, invisibly to the client.
I have been trying to do this with a paramiko server by taking the Transport object of the connecting client and making a direct-tcpip channel on behalf of the client, but I'm running into roadblocks.
I'm using https://github.com/paramiko/paramiko/blob/master/demos/demo_server.py as a template:
# There's a ServerInterface class definition that overrides check_channel_request
# (allowing for direct-tcpip and session), and the other expected overrides like
# check_auth_password, etc. that I'm leaving out for brevity.

try:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('', 2200))
except Exception as e:
    print('*** Bind failed: ' + str(e))
    traceback.print_exc()
    sys.exit(1)

try:
    sock.listen(100)
    print('Listening for connection ...')
    client, addr = sock.accept()
except Exception as e:
    print('*** Listen/accept failed: ' + str(e))
    traceback.print_exc()
    sys.exit(1)

print('Got a connection!')

try:
    t = paramiko.Transport(client)
    t.add_server_key(host_key)
    server = Server()
    try:
        t.start_server(server=server)
    except paramiko.SSHException:
        print('*** SSH negotiation failed.')
        sys.exit(1)

    # Waiting for authentication.. returns an unwanted channel object since
    # it isn't the right "kind" of channel
    unwanted_chan = t.accept(20)

    dest_addr = ("127.0.0.1", 1030)   # target container port
    local_addr = ("127.0.0.1", 1234)  # Arbitrary port on gateway server

    # Trying to put words in the client's mouth here.. fails
    # What should I do?
    print(" Attempting creation of direct-tcpip channel on client Transport")
    tunnel_chan = t.open_channel("direct-tcpip", dest_addr, local_addr)
    print("tunnel_chan created.")

    tunnel_client = SSHClient()
    tunnel_client.load_host_keys(host_key)
    print("attempting connection using tunnel_chan")
    tunnel_client.connect("127.0.0.1", port=1234, sock=tunnel_chan)
    stdin, stdout, stderr = tunnel_client.exec_command('hostname')
    print(stdout.readlines())
except Exception as e:
    print('*** Caught exception: ' + str(e.__class__) + ': ' + str(e))
    traceback.print_exc()
    try:
        t.close()
    except:
        pass
    sys.exit(1)
current output:
Read key: bc1112352a682284d04f559b5977fb00
Listening for connection ...
Got a connection!
Auth attempt with key: 5605063f1d81253cddadc77b2a7b0273
Attempting creation of direct-tcpip channel on client Transport
*** Caught exception: <class 'paramiko.ssh_exception.ChannelException'>: (1, 'Administratively prohibited')
Traceback (most recent call last):
File "./para_server.py", line 139, in <module>
tunnel_chan = t.open_channel("direct-tcpip", dest_addr, local_addr)
File "/usr/lib/python2.6/site-packages/paramiko/transport.py", line 740, in open_channel
raise e
ChannelException: (1, 'Administratively prohibited')
We currently have a straightforward SFTP server where clients connect and are chrooted to their respective FTP directories.
We want to move the clients into LXC containers but don't want to alter how they connect to SFTP (since they are probably using GUI FTP clients like FileZilla). I also don't want to make a bridge interface and assign new IPs to all the containers. Thus the containers don't have separate IPs from the host; they share the same network space.
The client containers' sshds would bind to separate ports on localhost. That way they can have unique ports, and the logic of which port is chosen could conceptually be moved out to... a simple server on the physical host.
This is more of a proof-of-concept, and general curiosity on my part.
As I mentioned in a comment above, I don't know anything about paramiko, but I can comment on this from an OpenSSH perspective, and perhaps you can translate the concepts to what you need.
NAT was mentioned in comments. NAT is something done at a lower level than SSH, not something that would be set up on the basis of an SSH login (SOCKS notwithstanding). You'd implement it in your firewall, not in your SSH configuration. The way ProxyCommand works is to negotiate the SSH connection, then hand the client off to the next hop, saying "Here, negotiate with this guy too." It's something implemented right inside the SSH protocol.
You may not be totally out of luck.
A standard ProxyCommand setup might look like this, with the target port specified on the client side:
host joecontainer
User joe
ProxyCommand ssh -x -a -q -Wlocalhost:1031 gatewayserver.horse
An older fashioned version of this might have used Netcat:
host joecontainer
User joe
ProxyCommand ssh -x -a -q gatewayserver.horse nc localhost 1031
The idea here is that nc localhost 1031 is the command which provides SSH access to the "next hop" in the SSH chain. You could run any command here as long as the result of that command is a connection to an SSH daemon.
But you want the port selection to be handled by the GATEWAY rather than by the client. And therein lies a bit of a crunch, because the SSH daemon only uses the target username to select which user account's authorized_keys file to read. It's the keys which are important, not the user. By the time the server gets around to running a script or command associated with a user, the SSH negotiation is complete, and it's too late to forward the connection on to the next hop.
So ... you might consider having everyone connect to a common user, and then have the port selection done on the basis of the SSH key. This is the way, for example, gitolite handles users. In your case, Joe and Sally could both connect to common@gatewayserver.horse using their DSA or RSA keys.
The fun part is that all your port selection gets handled within the "common" user's .ssh/authorized_keys file. The file would look something like this:
command="/usr/bin/nc localhost 1030",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa ... sally#office
command="/usr/bin/nc localhost 1031",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa ... joe#home
You can read about this under the "AUTHORIZED_KEYS FILE FORMAT" section of the sshd(8) man page.
To use this technique, we still need a client-side ProxyCommand, but because port selection happens server-side based on the client's key, everyone can use exactly the same ProxyCommand:
host mycontainer
ProxyCommand ssh -xaq common#gatewayserver.horse
Sally and Joe will run ssh-keygen to create a key pair if they haven't already. They'll send you the public key which you'll add to ~common/.ssh/authorized_keys using the format above.
When Joe connects using his key, the ssh server only runs the nc command associated with his key. And because of the ProxyCommand, that netcat's output is interpreted as the "next hop" for SSH.
I've tested this with sftp (running on my eventual target, akin to your container) and it appears to work for me.
SSH is magic. :-)
Attempting creation of direct-tcpip channel on client Transport
*** Caught exception: <class '[...]'>: (1, 'Administratively prohibited')
The container ssh server is rejecting your direct-tcpip channel request because it has been configured to refuse these requests. I gather the intent here is to proxy SFTP sessions to the correct container? And I imagine the container SSH server has been configured in the usual fashion to only permit these people to do SFTP? SFTP sessions go through a session channel, not a direct-tcpip channel.
I'm not a python coder and can't give you the specific paramiko code, but your relay agent should open a session channel to the container server and invoke the "sftp" subsystem. And if possible, your relay agent should only do this when the remote client requested an SFTP session, not for other types of channel requests.
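A rough paramiko sketch of that suggestion, under the assumption that the gateway authenticates to the per-user container sshd with its own credentials; the key path, username and port below are placeholders, and relaying the bytes back to the end client is not shown.

# Sketch: open a session channel to the container's sshd and request the
# "sftp" subsystem, as suggested above. Key path, username and port are
# placeholders.
import paramiko

gateway_to_container = paramiko.SSHClient()
gateway_to_container.set_missing_host_key_policy(paramiko.AutoAddPolicy())
gateway_to_container.connect('127.0.0.1', port=1031, username='joe',
                             key_filename='/etc/gateway/keys/joe_id_rsa')

transport = gateway_to_container.get_transport()
chan = transport.open_session()   # a session channel, not direct-tcpip
chan.invoke_subsystem('sftp')     # speak the SFTP protocol on this channel
# From here, the relay would shuttle SFTP bytes between `chan` and the
# channel opened by the real client.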
