Connection Error while sending message to RabbitMQ server on EC2 - python

I have my RabbitMQ server running on AWS EC2.
I have run the producer and consumer code locally, and it works.
I am also able to access the RabbitMQ management web app.
But when I try to push data from my laptop to EC2,
I get this error on this line:
connection = pika.BlockingConnection(pika.ConnectionParameters('xx.xx.xx.xx',5672,'/',credentials))
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 339, in init
self._process_io_for_connection_setup()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 374, in _process_io_for_connection_setup
self._open_error_result.is_ready)
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 395, in _flush_output
raise exceptions.ConnectionClosed()
pika.exceptions.ConnectionClosed
where xx.xx.xx.xx is the public IP address of my instance.
Please tell me if I am using the correct parameters. What should the IP address and the virtual host be?
I have checked the credentials: the user I am using exists and has the rights to access the '/' virtual host.
I have also made the needed changes in the Security Groups (screenshot omitted).
When I run the same producer code from within the instance, it works properly: no exceptions, and the consumer receives the message as well.
This is my complete code for reference:
import pika

print("Start")
credentials = pika.PlainCredentials('manish', 'manish')  # RabbitMQ user created on EC2
connection = pika.BlockingConnection(pika.ConnectionParameters('xx.xx.xx.xx', 5672, '/', credentials))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()

I tried socket_timeout and it worked for me; you could try something like:
credentials = pika.PlainCredentials('username', 'password')
connection = pika.BlockingConnection(pika.ConnectionParameters('hostname', port, 'virtual host', credentials, socket_timeout=10000))
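Not from the original thread, but before debugging credentials it can help to confirm that the broker port is reachable at all from your laptop, which separates a Security Group / firewall problem from an authentication problem. A minimal sketch using only the standard library (`port_reachable` is a hypothetical helper name):

```python
import socket

def port_reachable(host, port, timeout=5.0):
    """Return True if a plain TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the broker port before blaming pika
# port_reachable('xx.xx.xx.xx', 5672)
```

If this returns False from your laptop but the same check succeeds on the instance, the problem is at the network level (Security Group, OS firewall), not in the pika parameters.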

Related

Connect Python to MSK with IAM role-based authentication

I've written a Python script with aiokafka to produce to and consume from a Kafka cluster in AWS MSK. I'm running the script from an EC2 instance that is in the same VPC as my cluster, but when I try to connect, the connection is refused:
The script
from aiokafka import AIOKafkaConsumer
import asyncio
import os
import sys

async def consume():
    bootstrap_server = os.environ.get('BOOTSTRAP_SERVER', 'localhost:9092')
    topic = os.environ.get('TOPIC', 'demo')
    group = os.environ.get('GROUP_ID', 'demo-group')
    consumer = AIOKafkaConsumer(
        topic, bootstrap_servers=bootstrap_server, group_id=group
    )
    await consumer.start()
    try:
        # Consume messages
        async for msg in consumer:
            print("consumed: ", msg.topic, msg.partition, msg.offset,
                  msg.key, msg.value, msg.timestamp)
    finally:
        # Will leave consumer group; perform autocommit if enabled.
        await consumer.stop()

def main():
    try:
        asyncio.run(consume())
    except KeyboardInterrupt:
        print("Bye!")
        sys.exit(0)

if __name__ == "__main__":
    print("Welcome to Kafka test script. ctrl + c to exit")
    main()
The exception
Unable to request metadata from "boot-xxxxxxx.cx.kafka-serverless.us-xxxx-1.amazonaws.com:9098": KafkaConnectionError: Connection at boot-xxxxxxx.cx.kafka-serverless.us-xxxx-1.amazonaws.com:9098 closed
Traceback (most recent call last):
File "producer.py", line 33, in <module>
main()
File "producer.py", line 25, in main
asyncio.run(produce_message(message))
File "/usr/lib64/python3.7/asyncio/runners.py", line 43, in run
return loop.run_until_complete(main)
File "/usr/lib64/python3.7/asyncio/base_events.py", line 587, in run_until_complete
return future.result()
File "producer.py", line 12, in produce_message
await producer.start()
File "/home/ec2-user/py-kafka-test/pykafka/lib64/python3.7/site-packages/aiokafka/producer/producer.py", line 296, in start
await self.client.bootstrap()
File "/home/ec2-user/py-kafka-test/pykafka/lib64/python3.7/site-packages/aiokafka/client.py", line 250, in bootstrap
f'Unable to bootstrap from {self.hosts}')
kafka.errors.KafkaConnectionError: KafkaConnectionError: Unable to bootstrap from [('boot-zm5x2eaw.c3.kafka-serverless.us-east-1.amazonaws.com', 9098, <AddressFamily.AF_UNSPEC: 0>)]
Unclosed AIOKafkaProducer
producer: <aiokafka.producer.producer.AIOKafkaProducer object at 0x7f76d123a510>
I've already tested the connection with the kafka shell scripts and it worked fine:
./kafka-console-producer.sh --bootstrap-server boot-xxxxxxx.cx.kafka-serverless.us-xxxx-1.amazonaws.com:9098 --producer.config client.properties --topic myTopic
But whenever I try with Python, it just doesn't work. I've investigated a little and found that the authentication protocol might be the issue: my MSK cluster is protected with IAM role-based authentication, but no matter how much I search, there is no documentation on how to authenticate with IAM in the Python Kafka libraries: aiokafka, kafka-python, faust, etc.
Does anyone have an example of how to successfully connect to an MSK Serverless cluster with IAM role-based authentication using Python?
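Not part of the original thread, but one approach worth sketching: AWS publishes the `aws-msk-iam-sasl-signer-python` package, and aiokafka supports SASL/OAUTHBEARER with a pluggable token provider (`aiokafka.abc.AbstractTokenProvider`). Assuming both packages are installed and the EC2 instance role has MSK permissions, something like the following might work; treat it as an unverified sketch, with the bootstrap address and topic taken from the question and the region from the traceback:

```python
import asyncio

from aiokafka import AIOKafkaProducer
from aiokafka.abc import AbstractTokenProvider
from aws_msk_iam_sasl_signer import MSKAuthTokenProvider


class IAMTokenProvider(AbstractTokenProvider):
    """Produces SASL/OAUTHBEARER tokens signed with the instance's IAM role."""

    async def token(self):
        # generate_auth_token returns the token plus its expiry in ms
        token, _expiry_ms = MSKAuthTokenProvider.generate_auth_token('us-east-1')
        return token


async def produce():
    producer = AIOKafkaProducer(
        bootstrap_servers='boot-xxxxxxx.cx.kafka-serverless.us-xxxx-1.amazonaws.com:9098',
        security_protocol='SASL_SSL',
        sasl_mechanism='OAUTHBEARER',
        sasl_oauth_token_provider=IAMTokenProvider(),
    )
    await producer.start()
    try:
        await producer.send_and_wait('myTopic', b'hello with IAM auth')
    finally:
        await producer.stop()


asyncio.run(produce())
```

Note that IAM auth on port 9098 requires TLS (`SASL_SSL`); plain `PLAINTEXT` bootstrap against that port is one way to get exactly the "connection closed" error above.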

Python Kafka client cannot connect to remote Kafka server

I have an Ubuntu VM in the cloud, where I downloaded Kafka version 2.8.1 from the official Kafka site and followed the instructions in Kafka's official quickstart guide.
I am using a Python client to consume one of the topics I created as part of the quickstart guide. When I run it on the VM, everything runs fine; however, when I run the same program on my local system, I get the error below:
Unable connect to node with id 0: [Errno 8] nodename nor servname provided, or not known
Traceback (most recent call last):
...
...
File "/Path/python3.9/site-packages/aiokafka/client.py", line 547, in check_version
raise KafkaConnectionError(
kafka.errors.KafkaConnectionError: KafkaConnectionError: No connection to node with id 0
The Python program I am using:
import asyncio
import aiokafka

async def consume():
    consumer = aiokafka.AIOKafkaConsumer(
        "quickstart-events", bootstrap_servers="IP:9092"
    )
    try:
        await consumer.start()
        async for msg in consumer:
            print(
                "consumed: ",
                msg.topic,
                msg.partition,
                msg.offset,
                msg.key,
                msg.value,
                msg.timestamp,
            )
    finally:
        await consumer.stop()

asyncio.run(consume())
I have ensured that the necessary port (9092) on Ubuntu is open:
I checked that I could telnet into port 9092 from my local system.
I am not sure why I am unable to access Kafka over the internet. Am I missing something obvious?
Change the following attribute in config/server.properties to the bootstrap server address you are using in your code:
advertised.listeners=PLAINTEXT://IP or FQDN:9092
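Some context on why this works (my explanation, not part of the original answer): during bootstrap the broker returns `advertised.listeners` to the client, so a remote client can complete the initial connection and then fail to reach the address the broker hands back. The relevant lines in config/server.properties might look like this, where 203.0.113.10 stands in for the VM's public address:

```properties
# listeners controls which interfaces the broker binds to
listeners=PLAINTEXT://0.0.0.0:9092
# advertised.listeners is the address the broker tells clients to connect back to;
# it must be reachable from the client's network
advertised.listeners=PLAINTEXT://203.0.113.10:9092
```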

Cannot acquire connection to Neo4j database

I am trying to connect to my Neo4j graph database server from a new machine. I can successfully connect from an older machine but do not wish to use the older one anymore.
I have reduced the problem to a simple script that returns an exception:
from neo4j.v1 import GraphDatabase, basic_auth
auth = basic_auth("username","password")
session = GraphDatabase.driver("bolt://remote.server:7687",auth=auth).session()
statement = """MATCH (a:Protein)
WHERE a.name={name}
RETURN a.Accession"""
tx = session.begin_transaction()
record = tx.run(statement,{'name':"ARCH_HUMAN"}).single()
print record['a.Accession']
session.close()
And the error message is:
File "Test.py", line 10, in <module>
tx = session.begin_transaction()
File "/home/username/anaconda2/lib/python2.7/site-packages/neo4j/v1/api.py", line 432, in begin_transaction
self._connect()
File "/home/username/anaconda2/lib/python2.7/site-packages/neo4j/v1/api.py", line 269, in _connect
self._connection = self._acquirer(access_mode)
File "/home/username/anaconda2/lib/python2.7/site-packages/neo4j/v1/direct.py", line 52, in acquire
raise ServiceUnavailable("Cannot acquire connection to {!r}".format(self.address))
neo4j.exceptions.ServiceUnavailable: Cannot acquire connection to Address(host='remote.server', port=7687)
Port 7687 is open (confirmed via netstat -tulpn and iptables -L), and neo4j is configured to listen to 0.0.0.0:7687. In addition, .neo4j/known_hosts contains an entry for host 0.0.0.0
What's strange is that I get a different error message (neo4j.exceptions.AuthError) if I break the authentication by using an incorrect password. So the connection is being made to check the password, but still I cannot connect with the correct auth.
What's going on?
I had the same issue, and it turned out the driver was the problem.
I did some experiments and found that the last driver version it works with is neo4j-driver==1.1.0; the next version, neo4j-driver==1.2.0, stops working for some reason.
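To test that theory, pin the older driver in a clean environment (the version number is the answerer's claim above, not something I have verified):

```shell
pip install "neo4j-driver==1.1.0"
```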
Try uncommenting dbms.connectors.default_listen_address=0.0.0.0, and check this:
# Bolt connector
dbms.connector.bolt.enabled=true
dbms.connector.bolt.tls_level=OPTIONAL
dbms.connector.bolt.listen_address=:7687
# HTTP Connector. There must be exactly one HTTP connector.
dbms.connector.http.enabled=true
dbms.connector.http.listen_address=:7474
# HTTPS Connector. There can be zero or one HTTPS connectors.
dbms.connector.https.enabled=true
dbms.connector.https.listen_address=:7473

Why can't Python code connect to RabbitMQ remotely?

I'm trying to connect from one machine to a remote server with RabbitMQ installed.
RabbitMQ works perfectly locally, but when I connect to it from another machine, an error occurs:
root#xxx:~# python3 rabbitmq.py
Traceback (most recent call last):
File "rabbitmq.py", line 8, in <module>
connection = pika.BlockingConnection(pika.ConnectionParameters(parameters))
File "/usr/local/lib/python3.4/dist-packages/pika/connection.py", line 652, in __init__
self.host = host
File "/usr/local/lib/python3.4/dist-packages/pika/connection.py", line 392, in host
(value,))
TypeError: host must be a str or unicode str, but got <ConnectionParameters host=111.111.111.111 port=5672 virtual_host=product ssl=False>
root#xxx:~#
The Python code on other remote machine:
import pika
credentials = pika.PlainCredentials(username='remoteuser', password='mypassword')
parameters = pika.ConnectionParameters(host='111.111.111.111', port=5672, virtual_host='/', credentials=credentials)
#connection = pika.BlockingConnection(pika.ConnectionParameters('111.111.111.111:15672')) # --- it doesn't work too
connection = pika.BlockingConnection(pika.ConnectionParameters(parameters))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()
User "remoteuser" has admin rights and access to virtual host "/" (from http://111.111.111.111:15672/#/users):
Name: remoteuser | Tags: administrator | Can access virtual hosts: / | Has password: yes
What is the problem?
You have double-wrapped the parameters. Change:
connection = pika.BlockingConnection(pika.ConnectionParameters(parameters))
to:
connection = pika.BlockingConnection(parameters)

Connection refused to Twitter API on PythonAnywhere

I am trying to connect to the Twitter streaming API on PythonAnywhere, but I always get a connection refused error.
I use Tweepy in my application, and to test the connection I am using the streaming example that can be found in the repo.
Here is a summary of the code:
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream

# Go to http://dev.twitter.com and create an app.
# The consumer key and secret will be generated for you after
consumer_key = ""
consumer_secret = ""

# After the step above, you will be redirected to your app's page.
# Create an access token under the "Your access token" section
access_token = ""
access_token_secret = ""

class StdOutListener(StreamListener):
    """A listener handles tweets that are received from the stream.
    This is a basic listener that just prints received tweets to stdout.
    """
    def on_data(self, data):
        print data
        return True

    def on_error(self, status):
        print status

if __name__ == '__main__':
    l = StdOutListener()
    auth = OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    stream = Stream(auth, l)
    stream.filter(track=['basketball'])
When I run this in a bash console on PythonAnywhere (after having filled in the tokens, of course):
12:02 ~/tweepy/examples (master)$ python streaming.py
I get the following error:
Traceback (most recent call last):
File "streaming.py", line 33, in <module>
stream.filter(track=['basketball'])
File "/usr/local/lib/python2.7/site-packages/tweepy/streaming.py", line 228, in filter
self._start(async)
File "/usr/local/lib/python2.7/site-packages/tweepy/streaming.py", line 172, in _start
self._run()
File "/usr/local/lib/python2.7/site-packages/tweepy/streaming.py", line 106, in _run
conn.connect()
File "/usr/local/lib/python2.7/httplib.py", line 1157, in connect
self.timeout, self.source_address)
File "/usr/local/lib/python2.7/socket.py", line 571, in create_connection
raise err
socket.error: [Errno 111] Connection refused
The domain .twitter.com is in the PythonAnywhere whitelist, though, so I don't understand why the connection would be refused.
The very same code works like a charm on my Ubuntu machine.
Any ideas would be more than welcome. Thanks!
If you're using a free account, tweepy won't work: it does not use the proxy settings from the environment.
There is a fork of tweepy that you might be able to use (http://github.com/ducu/tweepy) until the main line uses the proxy settings correctly.
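For background (this explanation is mine, not the answerer's): free PythonAnywhere accounts can only reach the outside world through an HTTP proxy announced via environment variables, so a library "uses the proxy settings" when it consults them before opening a connection. The standard library exposes the lookup like this (shown with Python 3's urllib; the proxy address is hypothetical):

```python
import os
import urllib.request

# PythonAnywhere sets http_proxy/https_proxy in the environment; simulate that here
os.environ['https_proxy'] = 'http://proxy.example.com:3128'  # hypothetical address

# getproxies() is what proxy-aware clients consult before connecting
proxies = urllib.request.getproxies()
print(proxies['https'])
```

A client that instead opens a raw socket straight to stream.twitter.com, as tweepy did here, bypasses the proxy and gets Errno 111.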
As Glenn said, there currently isn't proxy support in tweepy.
For a reason I cannot explain (and which isn't documented), a pull request adding it was closed without being merged about a month ago:
https://github.com/tweepy/tweepy/pull/152
There is apparently a fork available on GitHub (see Glenn's answer), but I didn't test it.
Knowing that I would need to use my own domain name in the end, I finally got a paid account on PythonAnywhere and got rid of the proxy issue altogether.
