Does anyone have a very basic Pub/Sub example for Python that uses the emulator?
Here is my subscriber code:
## setup subscribers
from google.cloud import pubsub

print("subscribing to topic")
subscriber = pubsub.SubscriberClient()
subscription_path = subscriber.subscription_path(app.config['PUB_SUB_PROJECT'], app.config['PUB_SUB_TOPIC'])

def callback(message):
    print('Received message: {}'.format(message))

subscriber.subscribe(subscription_path, callback=callback)
And here is my code for publishing:
from google.cloud import pubsub

publisher = pubsub.PublisherClient()
topic_path = publisher.topic_path(app.config['PUB_SUB_PROJECT'], app.config['PUB_SUB_TOPIC'])
try:
    topic = publisher.create_topic(topic_path)
except Exception:
    app.logger.info("Topic already exists")
data = "ein test"
data = data.encode('utf-8')
publisher.publish(topic_path, data=data)
print("published topic")
Publishing seems to work, but I think it's actually publishing to the cloud queue and not to the emulator. Therefore my subscriber never receives anything.
Any tips and tricks are welcome. I believe it's as simple as ensuring that the publisher publishes to the emulator and the subscriber reads from the emulator.
In Python you don't need to make any code changes to use the emulator. Instead, you must have the PUBSUB_EMULATOR_HOST and PUBSUB_PROJECT_ID environment variables defined.
The easiest way to set them is to run $(gcloud beta emulators pubsub env-init) before starting your program. If you are using Google App Engine locally, run that command and then start your app with dev_appserver.py app.yaml --env_var PUBSUB_EMULATOR_HOST=${PUBSUB_EMULATOR_HOST}.
This is documented at https://cloud.google.com/pubsub/docs/emulator
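If you want to be explicit about it in code instead, here is a minimal sketch that sets the same variables from Python (localhost:8085 is the emulator's default port; use whatever env-init prints for you, and any project id works against the emulator):

import os
os.environ['PUBSUB_EMULATOR_HOST'] = 'localhost:8085'  # emulator default; adjust to your setup
os.environ['PUBSUB_PROJECT_ID'] = 'my-project-id'      # placeholder project id

# clients created after this point talk to the emulator, not the cloud
from google.cloud import pubsub
publisher = pubsub.PublisherClient()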
What am I doing wrong here? I'm trying to use Stomp to test some things with Artemis 2.13.0, but when I use either the command-line utility or a Python script, I can't subscribe to a queue, even after I use the utility to publish a message to an address.
Also, if I give it a new queue name, it creates it, but then doesn't pull messages I publish to it. This is confusing. My actual Java app behaves nothing like this -- it's using JMS.
I'm connecting like this with the utility:
stomp -H 192.168.56.105 -P 61616 -U user -W password
> subscribe test3.topic::test.A.queue
Which gives me this error:
Subscribing to 'test3.topic::test.A.queue' with acknowledge set to 'auto', id set to '1'
>
AMQ229019: Queue test.A.queue already exists on address test3.topic
Which makes me think Stomp is trying to create the queue when it subscribes, but I don't see how to manage this in the documentation. http://jasonrbriggs.github.io/stomp.py/api.html
I also have a Python script giving me the same issue.
import os
import time
import stomp

def connect_and_subscribe(conn):
    conn.connect('user', 'password', wait=True)
    conn.subscribe(destination='test3.topic::test.A.queue', id=1, ack='auto')

class MyListener(stomp.ConnectionListener):
    def __init__(self, conn):
        self.conn = conn
    def on_error(self, headers, message):
        print('received an error "%s"' % message)
    def on_message(self, headers, message):
        print('received a message "%s"' % message)
        """for x in range(10):
            print(x)
            time.sleep(1)
        print('processed message')"""
    def on_disconnected(self):
        print('disconnected')
        connect_and_subscribe(self.conn)

conn = stomp.Connection([('192.168.56.105', 61616)], heartbeats=(4000, 4000))
conn.set_listener('', MyListener(conn))
connect_and_subscribe(conn)
time.sleep(1000)
conn.disconnect()
I recommend you try the latest release of ActiveMQ Artemis. Since 2.13.0 was released a year ago, a handful of STOMP-related issues have been fixed, specifically ARTEMIS-2817, which looks like your use-case.
It's not clear to me why you're using the fully-qualified queue name (FQQN), so I'm inclined to think it is not the right approach, but regardless, the issue you're hitting should be fixed in later versions. If you want multiple consumers to share the messages on a single subscription, then FQQN would be a good option.
Also, if you want to use the topic/ or queue/ prefix to control routing semantics from the broker, then you should set the anycastPrefix and multicastPrefix appropriately, as described in the documentation.
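For reference, a sketch of what that might look like on the STOMP acceptor in broker.xml (the port and prefix values here are just the conventional ones; adjust to your setup):

<acceptor name="stomp">tcp://0.0.0.0:61613?protocols=STOMP;anycastPrefix=queue/;multicastPrefix=topic/</acceptor>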
This may be a coincidence, but ARTEMIS-2817 was originally reported by "BENJAMIN Lee WARRICK", which is surprisingly similar to "BenW" (i.e. your name).
I am currently underway with my Senior Capstone project, in which I am to write a somewhat basic program which allows a custom interface on my iPhone 6 to remotely control or issue critical commands to a NIDS (Suricata) established on my home Raspberry Pi (3B+) VPN. My question, however, is whether it's feasible to write said program so that it allows remote access control of basic functions/response options on the Pi's IDS, given that I am utilizing it as a device within the VPN network. The main issue would be establishing remote signaling to the iOS device whenever there is an anomaly and allowing it to respond back and execute root-level commands on the NIDS.
If it is of any use, I am currently using Pythonista as a runtime environment on my mobile device and have set my VPN's connection method to UDP, but I'm not sure if enabling SSH would assist me. I have a rather basic understanding of network programming. I very much appreciate any and all help given!
from tkinter import *

window = Tk()
window.geometry("450x450")
window.title("IDS Response Manager")

label1 = Label(window, text="Intrusion Response Options", fg='black', bg='white', relief="solid", font=("times new roman", 12, "bold"))
label1.pack()

button1 = Button(window, text="Terminate Session", fg='white', bg='brown', relief=RIDGE, font=("arial", 12, "bold"))
button1.place(x=50, y=110)  # GROOVE, RIDGE, SUNKEN, RAISED
button2 = Button(window, text="Packet Dump", fg='white', bg='brown', relief=RIDGE, font=("arial", 12, "bold"))
button2.place(x=220, y=110)
button3 = Button(window, text="Block Port", fg='white', bg='brown', relief=RIDGE, font=("arial", 12, "bold"))
button3.place(x=110, y=170)

window.mainloop()  # keeps the window open when run as a script
Very basic options, as shown here.
You can use a Flask server with an API, which you can send POST requests to. You can then send GET requests to receive the commands. To host your API, look at Heroku (a free tier is available and very much functional, with an already-configured app_name.herokuapp.com domain).
Look up how to send a POST request with the technologies you are using to build your app. Send the command under the key command to /send_commands, along with the password "password_here" (changeable to anything you want).
Python:
Modules: Flask (server), requests (client)
Server Code:
from flask import Flask, request

app = Flask(__name__)
commands = []

@app.route('/get_commands', methods=['GET'])
def get_commands():
    global commands  # rebinding the module-level list below requires this
    tmp_commands = commands[::]
    commands = []
    return {'commands': tmp_commands}

@app.route('/send_commands', methods=['POST'])
def send_commands():
    if request.json['password'] == "password_here":
        commands.append(request.json['command'])
        return {'worked': True}
    else:
        return {'worked': False}

if __name__ == '__main__':
    app.run(debug=True)
Client Code:
import os
import requests

URL = "url_here/get_commands"
response = requests.get(url=URL)
for command in response.json()['commands']:  # the server returns {'commands': [...]}
    os.system(command)
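For the sending side, a POST along these lines would queue a command (a sketch; url_here and password_here are the same placeholders as above, and the echo command is just an example):

import requests

resp = requests.post(
    "url_here/send_commands",
    json={"password": "password_here", "command": "echo hello"},
)
print(resp.json())  # {'worked': True} if the password matched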
I have been asked to use Azure Service Bus instead of Celery in a Django application.
I read the documentation provided but didn't get a clear picture of how to use Service Bus in place of a Celery task. Any advice would be of great help.
Before getting into it, I would like to highlight the differences between Azure Service Bus and Celery.
Azure Service Bus :
Microsoft Azure Service Bus is a fully managed enterprise integration message broker.
You could refer to this to know more about the Service Bus.
Celery :
Distributed task queue. Celery is an asynchronous task queue/job queue based on distributed message passing.
I could think of 2 possibilities in your case:
1. You would like to use Service Bus with Celery in place of other message brokers.
2. Replace Celery with the Service Bus.
1. You would like to use Service Bus with Celery in place of other message brokers
You could refer to this to understand why Celery needs a message broker.
I am not sure which message broker you are using currently, but you could use the Kombu library to meet your requirement.
Reference for Azure Service Bus : https://docs.celeryproject.org/projects/kombu/en/stable/reference/kombu.transport.azureservicebus.html
Reference for others : https://docs.celeryproject.org/projects/kombu/en/stable/reference/index.html
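For possibility 1, the change is mostly the broker URL. A minimal sketch using the Kombu transport above, assuming a Service Bus namespace with a shared access policy (policy_name, policy_key and my_namespace are placeholders):

from celery import Celery

app = Celery(
    'tasks',
    broker='azureservicebus://policy_name:policy_key@my_namespace'
)

@app.task
def add(x, y):
    return x + y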
2. Replace Celery with the Service Bus completely
To meet your requirement, consider:
Message senders are producers
Message receivers are consumers
These are two different applications that you will have to work on.
You could refer to the samples below to get more code to build on.
https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/servicebus/azure-servicebus/samples
Explanation:
Every time you would like to execute the actions, you could send messages to a topic from the producer client.
The consumer client (the application that is listening) will receive the message and process it. You can attach your custom process to it; that way, your custom process gets executed whenever a message is received at the consumer client end.
Below is a sample of the receiving client:
from azure.servicebus.aio import SubscriptionClient
import asyncio
import nest_asyncio

nest_asyncio.apply()

Receiving = True

# Topic 1 receiver:
conn_str = "<>"
name = "Allmessages1"
SubsClient = SubscriptionClient.from_connection_string(conn_str, name)
receiver = SubsClient.get_receiver()

async def receive_message_from1():
    await receiver.open()
    print("Opening the Receiver for Topic1")
    async with receiver:
        while Receiving:
            msgs = await receiver.fetch_next()
            for m in msgs:
                print("Received the message from topic 1.....")
                ##### - Your code to execute when a message is received - ########
                print(str(m))
                ##### - Your code to execute when a message is received - ########
                await m.complete()

loop = asyncio.get_event_loop()
topic1receiver = loop.create_task(receive_message_from1())
The section between the lines below is where the instructions executed on every received message go:
##### - Your code to execute when a message is received - ########
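For completeness, a sketch of the sending (producer) side against the same older azure-servicebus SDK the receiver above uses; the topic name is a placeholder:

from azure.servicebus import TopicClient, Message

conn_str = "<>"          # same connection string as the receiver
topic_name = "mytopic"   # placeholder topic name

client = TopicClient.from_connection_string(conn_str, topic_name)
client.send(Message("your payload here"))  # each send triggers the receiver's custom code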
I am stuck on an issue related to a Kafka consumer using confluent-kafka's Python library.
CONTEXT
I have a Kafka topic on AWS EC2 that I need to consume.
SCENARIO
Consumer Script (my_topic_consumer.py) uses confluent-kafka-python to create a consumer (shown below) and subscribe to the 'my_topic' topic. The issue is that the consumer is not able to read messages from the Kafka cluster.
All required security steps are met:
1. SSL - security protocol for the consumer and broker.
2. The consumer EC2 instance's IP block has been added to the Security Group on the cluster.
# my_topic_consumer.py
from confluent_kafka import Consumer, KafkaError

c = Consumer({
    'bootstrap.servers': 'my_host:my_port',
    'group.id': 'my_group',
    'auto.offset.reset': 'latest',
    'security.protocol': 'SSL',
    'ssl.ca.location': '/path/to/certificate.pem'
})

c.subscribe(['my_topic'])

try:
    while True:
        msg = c.poll(5)
        if msg is None:
            print('None')
            continue
        if msg.error():
            print(msg)
            continue
        else:
            pass  # process_msg(msg) - writes messages to a data file
except KeyboardInterrupt:
    print('Aborted by user\n')
finally:
    c.close()
URLS
Broker Host: my_host
Port: my_port
Group ID: my_group
CONSOLE COMMANDS
Working: running the console-consumer script, I am able to see the data:
kafka-console-consumer --bootstrap-server my_host:my_port --consumer.config client-ssl.properties --skip-message-on-error --topic my_topic | jq
Note: client-ssl.properties points to the JKS file which has the certs.
Debugging further on the Kafka cluster (a separate EC2 instance from the consumer), I couldn't see any registration of my consumer under my group ID (my_group):
kafka-consumer-groups --bootstrap-server my_host:my_port --command-config client-ssl.properties --describe --group my_group
This leads me to believe the consumer is not getting registered on the cluster, so maybe the SSL handshake is failing? How do I check this from the consumer side in Python?
Notes:
- The cluster is behind a (corporate) proxy, but I do run the proxy on the consumer EC2 before testing.
- I ran the process via pm2, yet didn't see any error logs like request timeouts etc.
Is there any way I can definitively check that the consumer creation is failing and find the root cause? Any help and feedback is appreciated.
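One way to get a definitive signal from the Python side is librdkafka's built-in diagnostics. A sketch (debug and error_cb are standard confluent-kafka configuration properties; the rest mirrors the config above):

from confluent_kafka import Consumer

def on_error(err):
    # called for global errors such as transport/SSL failures
    print('librdkafka error:', err)

c = Consumer({
    'bootstrap.servers': 'my_host:my_port',
    'group.id': 'my_group',
    'security.protocol': 'SSL',
    'ssl.ca.location': '/path/to/certificate.pem',
    'debug': 'security,broker',  # verbose handshake and broker logs on stderr
    'error_cb': on_error,
})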
I need to have a python client that can discover queues on a restarted RabbitMQ server exchange, and then start up a clients to resume consuming messages from each queue. How can I discover queues from some RabbitMQ compatible python api/library?
There does not seem to be a direct AMQP way to manage the server, but there is a way you can do it from Python. I would recommend using the subprocess module combined with the rabbitmqctl command to check the status of the queues.
I am assuming that you are running this on Linux. From a command line, running:
rabbitmqctl list_queues
will result in:
Listing queues ...
pings 0
receptions 0
shoveled 0
test1 55199
...done.
(well, it did in my case due to my specific queues)
In your code, use this to get the output of rabbitmqctl:
import subprocess

proc = subprocess.Popen("/usr/sbin/rabbitmqctl list_queues", shell=True, stdout=subprocess.PIPE)
stdout_value = proc.communicate()[0]
print(stdout_value)
Then, just come up with your own code to parse stdout_value for your own use.
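For example, a minimal parsing sketch, assuming the tab-separated "name count" lines shown above:

# stdout_value is bytes; each data line is "<queue_name>\t<message_count>"
queues = {}
for line in stdout_value.decode().splitlines():
    parts = line.split('\t')
    if len(parts) == 2 and parts[1].isdigit():
        queues[parts[0]] = int(parts[1])
print(queues)  # e.g. {'pings': 0, 'test1': 55199}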
As far as I know, there isn't any way of doing this. That's nothing to do with Python, but because AMQP doesn't define any method of queue discovery.
In any case, in AMQP it's clients (consumers) that declare queues: publishers publish messages to an exchange with a routing key, and consumers determine which queues those routing keys go to. So it does not make sense to talk about queues in the absence of consumers.
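To illustrate the point, here is what declaring a queue from the consumer side looks like (a sketch using pika, which is an assumption here; any AMQP client works the same way):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# the consumer declares the queue and decides which routing keys reach it
channel.exchange_declare(exchange='logs', exchange_type='direct')
channel.queue_declare(queue='my_queue', durable=True)
channel.queue_bind(queue='my_queue', exchange='logs', routing_key='info')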
You can add the plugin rabbitmq_management:
sudo /usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management
sudo service rabbitmq-server restart
Then use the REST API:
import requests

def rest_queue_list(user='guest', password='guest', host='localhost', port=15672, virtual_host=None):
    url = 'http://%s:%s/api/queues/%s' % (host, port, virtual_host or '')
    response = requests.get(url, auth=(user, password))
    queues = [q['name'] for q in response.json()]
    return queues
I'm using the requests library in this example, but it is not essential.
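One usage note: the default vhost "/" has to be passed URL-encoded, e.g.:

print(rest_queue_list(virtual_host='%2F'))  # lists queues on the default vhost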
I also found a library that does this for us: pyrabbit
from pyrabbit.api import Client
cl = Client('localhost:15672', 'guest', 'guest')
queues = [q['name'] for q in cl.get_queues()]
Since I am a RabbitMQ beginner, take this with a grain of salt, but there's an interesting Management Plugin, which exposes an HTTP interface: "From here you can manage exchanges, queues, bindings, virtual hosts, users and permissions. Hopefully the UI is fairly self-explanatory."
http://www.rabbitmq.com/blog/2010/09/07/management-plugin-preview-release/
I use https://github.com/bkjones/pyrabbit. It talks directly to the management plugin's API and is very handy for interrogating RabbitMQ.
Management features are due in a future version of AMQP, so for now you will have to wait for a new version that comes with that functionality.
I found this works for me (/els being my demo vhost name):
rabbitmqctl list_queues --vhost /els
pyrabbit didn't work so well for me; however, the Management Plugin itself has its own command-line script that you can download from your own admin GUI and use later on (for example, I downloaded mine from http://localhost:15672/cli/ for local use).
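That script is rabbitmqadmin; once downloaded, listing queues looks like this (assuming default guest credentials on localhost):

python rabbitmqadmin list queues name messages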
I would simply use this; just replace the user (default: guest), passwd (default: guest) and port with your values.
import requests
import json

def call_rabbitmq_api(host, port, user, passwd):
    url = 'http://%s:%s/api/queues' % (host, port)
    r = requests.get(url, auth=(user, passwd))
    return r

def get_queue_name(json_list):
    res = []
    for item in json_list:  # renamed from "json" to avoid shadowing the module
        res.append(item["name"])
    return res

if __name__ == '__main__':
    host = 'rabbitmq_host'
    port = 55672
    user = 'guest'
    passwd = 'guest'
    res = call_rabbitmq_api(host, port, user, passwd)
    print("--- dump json ---")
    print(json.dumps(res.json(), indent=4))
    print("--- get queue name ---")
    q_name = get_queue_name(res.json())
    print(q_name)
Adapted from: https://gist.github.com/hiroakis/5088513#file-example_rabbitmq_api-py-L2