I am trying to integrate snmptrapd and RabbitMQ to deliver trap notifications to an external system.
My system is composed of 3 components:
A Linux virtual machine with snmptrapd and RabbitMQ (Publisher);
A Linux virtual machine with RabbitMQ (Consumer);
A bare-metal Linux machine with Docker, so I can have a lot of containers sending traps (using nping)
The snmptrapd part is simple:
authCommunity execute mycom
traphandle default /root/some_script
In my first attempts, some_script was written in Python, but the performance was poor: with 20 containers each sending 1 trap per second for 10 seconds (200 traps total), I only received 160 messages at the consumer.
#!/usr/bin/env python
import pika
import sys

message = ""
for line in sys.stdin:
    message += line

credentials = pika.PlainCredentials('test', 'test')
parameters = pika.ConnectionParameters('my_ip', 5672, '/', credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.queue_declare(queue='snmp')
channel.basic_publish(exchange='',
                      routing_key='snmp',
                      body=message)
connection.close()
I switched to Perl and now I get all 200 traps/messages.
My Perl script uses Net::AMQP::RabbitMQ:
#!/usr/bin/perl
use Net::AMQP::RabbitMQ;

my $message = "";
foreach my $line ( <STDIN> ) {
    chomp( $line );
    $message = "$message\n$line";
}

my $mq = Net::AMQP::RabbitMQ->new();
$mq->connect("my_ip", {
    user     => "test",
    password => "test",
    vhost    => "/"
});
$mq->channel_open(1);
$mq->publish(1, "snmp", $message);
$mq->disconnect();
But I want to do better. I tried 200 containers sending 1 trap per second and it failed miserably: only around 10% of the messages reached the consumer.
I think this has to do with the overhead of having to open a connection, publish, and close it for every trap received, because at the network level I receive all the messages (checked through a tcpdump).
Is there a way to keep an always-open publish channel so I don't have to reopen/create a connection to the queue?
Asking if you can talk to the RabbitMQ server without connecting to it first is like asking if you can talk to someone on the telephone without connecting to their phone first (by dialing and answering).
You really should reuse your connection if you're going to send multiple messages, but you do need a connection first!
Anyway, the problem isn't with the publisher. It's the consumer that's buggy if it's losing messages.
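One way to reuse a single connection across traps, as a rough sketch: instead of publishing from the traphandle script itself, run a long-lived publisher process that holds one connection and channel open, and have the traphandle script only append the trap text to a named pipe. The FIFO path and queue name below are assumptions, not part of the original setup.

#!/usr/bin/env python
# Long-lived publisher sketch: one pika connection reused for every trap.
# /var/run/snmptraps.fifo and the 'snmp' queue are assumed names.
import os
import pika

FIFO = "/var/run/snmptraps.fifo"
if not os.path.exists(FIFO):
    os.mkfifo(FIFO)

credentials = pika.PlainCredentials('test', 'test')
parameters = pika.ConnectionParameters('my_ip', 5672, '/', credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.queue_declare(queue='snmp')

while True:
    # each open() blocks until a traphandle writer appends something
    with open(FIFO) as fifo:
        for trap in fifo:
            channel.basic_publish(exchange='', routing_key='snmp', body=trap)

The traphandle entry then only needs to copy its stdin to the pipe (a one-line shell script would do), so no AMQP connection is created per trap. Note that this sketch publishes one message per line, so multi-line traps would need some framing.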
I want an Expert Advisor to open a trade triggered by a Telegram message.
I successfully set up a Hello-World application using MQ4 as the server and a Python Telegram bot as the client.
When the Telegram bot receives a message, it sends a request to MQ4 and gets a simple response back, without executing a trade.
The running code is below.
# Hello World client in Python
# Connects REQ socket to tcp://localhost:5555
import zmq

context = zmq.Context()

# Socket to talk to server
print("Connecting to trading server…")
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5555")
print("Connecting to trading server succeeded")

#################################################################################
# Use your own values from my.telegram.org
api_id = ######
api_hash = '#####'
bot_token = '#####'
#################################################################################

from telethon import TelegramClient, events

client = TelegramClient('anon', api_id, api_hash)

@client.on(events.NewMessage)
async def my_event_handler(event):
    if "Ascending" in event.raw_text:
        if "AUDUSD" in event.raw_text:
            await event.reply("AUDUSD sell")
            # Do 1 request, waiting for a response
            for request in range(1):
                print("Telegram: AUDUSD sell execution requested %s …" % request)
                socket.send(b"AUDUSD Sell execute")
                # Send 2 variables (Ordertype // Symbol)
                # Get the reply. -> Not necessary for the final application
                # Application just needs to send 2 variables to MQ4 and trigger open_order()
                message = socket.recv()
                print("Received reply %s [ %s ]" % (request, message))

client.start()
client.run_until_disconnected()
// Hello World server in MQ4
#include <Zmq/Zmq.mqh>
//+------------------------------------------------------------------+
void OnTick()
  {
   Context context("helloworld");
   Socket socket(context, ZMQ_REP);
   socket.bind("tcp://*:5555");

   while(!IsStopped())
     {
      ZmqMsg request;

      // Wait for next request from client
      // MetaTrader note: this will block the script thread
      // and if you try to terminate this script, MetaTrader
      // will hang (and crash if you force closing it)
      socket.recv(request);
      Print("Receive: AUDUSD Sell execute");

      Sleep(1000);

      ZmqMsg reply("Trade was executed");
      // Send reply back to client
      socket.send(reply);
      Print("Feedback: Trade was executed");
     }
  }
//+------------------------------------------------------------------+
Now I want to send 2 variables from Python to MQ4:
1. Order type: buy/sell
2. Symbol: EURUSD, AUDUSD, ...
Send "Sell" if the message contains "Ascending",
send "Buy" if the message contains "Descending",
send "AUDUSD" if the message contains "AUDUSD", and so on (a small parsing sketch follows below).
To send these values, I found a library from Darwinex and want to combine it (interpretation of the message, sending the values as an array) with my already functioning Telegram bot.
For testing I wanted to try the example code from Darwinex by itself.
I found the Code v2.0.1:
Python:
https://github.com/darwinex/DarwinexLabs/blob/master/tools/dwx_zeromq_connector/v2.0.1/Python/DWX_ZeroMQ_Connector_v2_0_1_RC8.py
MQ4: (Note: This Library code may replace the whole MQ4 code above in final app.)
https://github.com/darwinex/DarwinexLabs/blob/master/tools/dwx_zeromq_connector/v2.0.1/MQL4/DWX_ZeroMQ_Server_v2.0.1_RC8.mq4
When I copy the code without changing anything, I get an error in Python:
NameError: name '_zmq' is not defined
after running _zmq._DWX_ZeroMQ_Connector() in the Spyder kernel.
What can I do to fix that error?
In the final state I want to run the Python code and the Expert Advisor on the same Windows Server 2012 R2.
Is it enough if I run the .py file in PowerShell on the server, or should I host the file on the website?
I expect to get the whole system/example code running on my VPS or website host server as a testing environment for further coding, but currently I can't get the library code in Python to run properly.
Also, MT4 keeps crashing with the current code, but that should be fixed once I combine my application with the library code example.
(I am running everything on my local PC with Windows 10.)
Q : I think it is a connection-problem between MT4 and Python.
Without a fully reproducible MCVE-code this is undecidable.
Having used ZeroMQ-based bidirectional signalling/messaging between a QuantFX engine in Python and a MetaTrader 4 Terminal trading ecosystem implemented in MQL4, I can report positive experience with this architecture.
Details decide.
The Best Next Step :
Start with a plain PUSH/PULL archetype where Python PUSH-es and the MQL4 script PULL-s, preferably using the tcp:// transport class (Windows platforms may not be ready to use the even simpler, protocol-less ipc:// transport class).
Once you have posack'd (positively acknowledged) this trivial step, move forwards.
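A minimal sketch of the Python PUSH side of that archetype (the port number and the message format are assumptions; the MQL4 script would bind a PULL socket on the same port):

# Python PUSH-es a signal, the MQL4 PULL-side receives it (sketch only)
import zmq

context = zmq.Context()
push = context.socket(zmq.PUSH)
push.connect("tcp://127.0.0.1:32768")   # port chosen for illustration only
push.send_string("Sell|AUDUSD")         # order type and symbol in one delimited string
push.close()
context.term()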
Q : How do I need to set up my server to get a connection between those two, since it should be the same as on my local PC?
It is normal to use ZeroMQ on the same localhost during prototyping, so you may test and debug the integration. For details on ZeroMQ, feel free to read all details in other posts.
Q : Is it enough if I run the .py file in the powershell from the server or should I host the file with the Webside I already have and use that as "Python-Server"?
Yes, in case the .py file was designed that way. No code, no advice. That simple.
Possible issues :
Versions - ZeroMQ has changed a lot between 2.11.x and the recent 4.3.+.
Installation DLL-details matter.
MQL4 has similarly gone through many changes ( string ceased to be a string and became a struct, to name the most impactful one ), so start with simple scenarios and integrate the target architecture in steps / phases, with due testing of whether the completed phases work as expected.
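As a quick sanity check on the version question, this small sketch uses pyzmq's version helpers to show which library and binding Python is actually loading:

import zmq

print("libzmq :", zmq.zmq_version())    # version of the underlying ZeroMQ library (DLL)
print("pyzmq  :", zmq.pyzmq_version())  # version of the Python binding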
To fix that problem you need this:
from DWX_ZeroMQ_Connector_v2_0_1_RC8 import DWX_ZeroMQ_Connector
_zmq = DWX_ZeroMQ_Connector()
(adjust for your version of the connector as appropriate).
That should fix the problem.
I am using node-celery to publish messages to a RabbitMQ instance (3.5.7), which then distributes the messages to several Python Celery workers. Everything seems to work fine, but after testing multiple times it appears that the socket descriptors fill up and further connections are rejected. My guess is that the connections should be closing but are not.
Here is my Node.js code that sends to the corresponding routes. I tried adding client.end() at one point, but then it wouldn't even send the message.
// Initialize connection to rabbitmq
var client = celery.createClient({
    CELERY_BROKER_URL: config.rabbitmq.connection,
    CELERY_ROUTES: config.rabbitmq.routes,
    IGNORE_RESULT: true
});

// send the message to the worker
client.on('connect', function () {
    var calculateTotal = client.createTask("calculateTotal.run");
    var emailUser = client.createTask("emailUser.run");
    var subscribeToMailing = client.createTask("subscribeToMailing.run");

    calculateTotal.call([messageToRabbit]);
    emailUser.call([messageToRabbit]);
    subscribeToMailing.call([messageToRabbit]);
});
I use the apns-client package in Python to push notifications to iOS devices.
self.con = self.session.get_connection("push_production", cert_file=PRO_PEM_FILE, passphrase=PASS_PHRASE)
self.srv = APNs(self.con)
message = Message([token], badge=badge, alert=alert)
res = self.srv.send(message)

for token, reason in res.failed.items():
    code, errmsg = reason
    print "Device failed: {0}, reason: {1}".format(token, errmsg)

for code, errmsg in res.errors:
    print "Error:", errmsg

if res.needs_retry():
    retry_message = res.retry()
With the code above, I can receive notifications most of the time.
However, if several hours have passed since the last notification was received, I cannot receive any notification, and there is no exception on the server side.
I can tell that self.srv.send(message) did execute without any response; according to the apns-client documentation, no response means the notification was sent successfully.
What can I do to make sure the client actually receives the server's notifications?
Any help will be appreciated!
Push notifications are not 100% reliable, and it is not guaranteed that you will receive them 100% of the time.
See: https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/Chapters/ApplePushService.html
Quality of Service
Apple Push Notification service includes a default Quality of Service
(QoS) component that performs a store-and-forward function.
If APNs attempts to deliver a notification but the device is offline,
the notification is stored for a limited period of time, and delivered
to the device when it becomes available.
Only one recent notification for a particular application is stored.
If multiple notifications are sent while the device is offline, each
new notification causes the prior notification to be discarded. This
behavior of keeping only the newest notification is referred to as
coalescing notifications.
If the device remains offline for a long time, any notifications that
were being stored for it are discarded.
If you have an important message that you want the user to receive, I would suggest trying a Local Notification.
Edit:
The main issue is that the 3rd-party RabbitMQ machine seems to kill idle connections every now and then. That's when I start getting "Broken Pipe" exceptions. The only way to get comms back to normal is for me to kill the processes and restart them. I assume there's a better way?
--
I'm a little lost here. I am connecting to a 3rd-party RabbitMQ server to push messages to. Every now and then all the sockets on their machine get dropped and I end up getting a "Broken Pipe" exception.
I've been told to implement a heartbeat check in my code but I'm not sure how exactly. I've found some info here: http://kombu.readthedocs.org/en/latest/changelog.html#version-2-3-0 but no real example code.
Do I only need to add "?heartbeat=x" to the connection string? Does Kombu do the rest? I see I need to call "Connection.heartbeat_check()" at "x/2". Should I create a periodic task to call this? How does the connection get re-established?
I'm using:
celery==3.0.12
kombu==2.5.4
My code looks like this right now. A simple Celery task gets called to send the message through to the 3rd-party RabbitMQ server (logging and comments removed to keep it short):
class SendMessageTask(Task):
    name = "campaign.backends.send"
    routing_key = "campaign.backends.send"
    ignore_result = True
    default_retry_delay = 60  # 1 minute.
    max_retries = 5

    def run(self, send_to, message, **kwargs):
        payload = "Testing message"

        try:
            conn = BrokerConnection(
                hostname=HOSTNAME,
                port=PORT,
                userid=USER_ID,
                password=PASSWORD,
                virtual_host=VHOST
            )

            with producers[conn].acquire(block=True) as producer:
                publish = conn.ensure(producer, producer.publish, errback=sending_errback, max_retries=3)
                publish(
                    body=payload,
                    routing_key=OUT_ROUTING_KEY,
                    delivery_mode=2,
                    exchange=EXCHANGE,
                    serializer=None,
                    content_type='text/xml',
                    content_encoding='utf-8'
                )
        except Exception, ex:
            print ex
Thanks for any and all help.
While you certainly can add heartbeat support to a producer, it makes more sense for consumer processes.
Enabling heartbeats means that you have to send heartbeats regularly; e.g. if the heartbeat is set to 1 second, then you have to send a heartbeat at least every second, or the remote will close the connection.
This means that you have to use a separate thread or async I/O to reliably send heartbeats in time, and since a connection cannot be shared between threads, this leaves us with async I/O.
The good news is that you probably won't get much benefit from adding heartbeats to a produce-only connection.
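For reference, a rough sketch of what enabling heartbeats on a Kombu connection can look like (the heartbeat value and the polling loop are illustrative; this assumes a transport that supports heartbeats, such as py-amqp):

import time
from kombu import Connection

# heartbeat=10 asks the broker to expect a heartbeat roughly every 10 seconds (assumed value)
conn = Connection('amqp://user:password@broker-host:5672//', heartbeat=10)
conn.connect()

# Kombu does not send heartbeats by itself: something in your process has to
# call heartbeat_check() at roughly heartbeat/2 intervals
while True:
    conn.heartbeat_check()
    time.sleep(5)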
I have the nginx upload module handling site uploads, but I still need to transfer files (let's say 3-20 MB each) to our CDN, and would rather not delegate that to a background job.
What is the best way to do this with Tornado without blocking other requests? Can I do this in an async callback?
You may find it useful in the overall architecture of your site to add a message queuing service such as RabbitMQ.
This would let you complete the upload via the nginx module, then in the Tornado handler, post a message containing the uploaded file path and exit. A separate process would be watching for these messages and handle the transfer to your CDN. This type of service would be useful for many other tasks that could be handled offline (sending emails, etc.). As your system grows, this also provides you a mechanism to scale by moving queue processing to separate machines.
I am using an architecture very similar to this. Just make sure to add your message consumer process to supervisord or whatever you are using to manage your processes.
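As a rough sketch of the handler side of that architecture (the queue name "cdn_transfers" and the "file_path" argument name are assumptions, not part of the original setup), the Tornado view only publishes the uploaded file's path and returns immediately:

# sketch of the publishing side, assuming pika's BlockingConnection API
import json
import pika
import tornado.web

class UploadDoneHandler(tornado.web.RequestHandler):
    def post(self):
        # the nginx upload module passes the stored file path as a form field
        file_path = self.get_argument("file_path")

        connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = connection.channel()
        channel.queue_declare(queue="cdn_transfers", durable=True)
        channel.basic_publish(exchange="",
                              routing_key="cdn_transfers",
                              body=json.dumps({"path": file_path}))
        connection.close()

        self.write("queued")

In a real handler you would keep the connection open across requests rather than reconnecting each time.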
In terms of implementation, if you are on Ubuntu installing RabbitMQ is a simple:
sudo apt-get install rabbitmq-server
On CentOS w/EPEL repositories:
yum install rabbitmq-server
There are a number of Python bindings to RabbitMQ. Pika is one of them and it happens to be created by an employee of LShift, who is responsible for RabbitMQ.
Below is a bit of sample code from the Pika repo. You can easily imagine how the handle_delivery method would accept a message containing a filepath and push it to your CDN.
import sys
import pika
import asyncore

conn = pika.AsyncoreConnection(pika.ConnectionParameters(
    sys.argv[1] if len(sys.argv) > 1 else '127.0.0.1',
    credentials=pika.PlainCredentials('guest', 'guest')))

print 'Connected to %r' % (conn.server_properties,)

ch = conn.channel()
ch.queue_declare(queue="test", durable=True, exclusive=False, auto_delete=False)

should_quit = False

def handle_delivery(ch, method, header, body):
    print "method=%r" % (method,)
    print "header=%r" % (header,)
    print "  body=%r" % (body,)
    ch.basic_ack(delivery_tag=method.delivery_tag)
    global should_quit
    should_quit = True

tag = ch.basic_consume(handle_delivery, queue='test')

while conn.is_alive() and not should_quit:
    asyncore.loop(count=1)

if conn.is_alive():
    ch.basic_cancel(tag)

conn.close()

print conn.connection_close
Advice on the Tornado Google group points to using an async callback (documented at http://www.tornadoweb.org/documentation#non-blocking-asynchronous-requests) to move the file to the CDN.
The nginx upload module writes the file to disk and then passes parameters describing the upload(s) back to the view. Therefore the file isn't in memory, but the time it takes to read it from disk (which would cause the request process to block itself, but not other Tornado processes, AFAIK) is negligible.
That said, anything that doesn't need to be processed online shouldn't be, and should be deferred to a task queue like celeryd or similar.
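As a small illustration of that last point (a sketch only; the broker URL and task name are assumptions, and the actual CDN upload is left as a stub):

from celery import Celery

# Celery app backed by a RabbitMQ broker (URL is an assumed example)
app = Celery("uploads", broker="amqp://guest:guest@localhost//")

@app.task
def push_to_cdn(file_path):
    # read the uploaded file from disk and transfer it to the CDN here
    with open(file_path, "rb") as f:
        data = f.read()
    # ... upload `data` to the CDN of your choice ...

The Tornado view would then just call push_to_cdn.delay("/path/to/upload") and return, leaving the transfer to a worker process.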