I'm trying to connect to the Pusher WebSocket API using the following code, based on this library:
https://github.com/nlsdfnbch/Pysher/
import sys
import time
import logging

import pysher

# Add a logging handler so we can see the raw communication data
root = logging.getLogger()
root.setLevel(logging.INFO)
ch = logging.StreamHandler(sys.stdout)
root.addHandler(ch)

pusher = pysher.Pusher('de504dc5763aeef9ff52')

# We can't subscribe until we've connected, so we use a callback handler
# to subscribe when able
def connect_handler(data):
    channel = pusher.subscribe('live_trades')
    channel.bind('trade', callback)  # 'callback' is defined elsewhere

pusher.connection.bind('pusher:connection_established', connect_handler)
pusher.connect()

while True:
    # Do other things in the meantime here...
    time.sleep(1)
Instead of a valid response, I get this every few seconds:
Connection: Error - [WinError 10042] An unknown, invalid, or
unsupported option or level was specified in a getsockopt or
setsockopt call Connection: Connection closed Attempting to connect
again in 10 seconds.
What is the problem?
I saw the same error using a different library that uses websockets. I can see from your description (and the link) that Pysher uses websockets as well.
I found (yet another) websocket client for Python that reported an issue with websockets, specifically with Python 3.6.4: https://github.com/websocket-client/websocket-client/issues/370
It references the bug in the Python tracker as well: https://bugs.python.org/issue32394
Upgrading to Python 3.6.5 worked for me. Alternatively, they suggest that upgrading to Windows 10 1703+ should work too (just for completeness; I have not verified this).
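Since the fix is version-dependent, a quick sanity check of the interpreter you are actually running can save time. This is a generic snippet, not part of Pysher, and the affected version range is based on the bug report linked above:

```python
import sys

# bpo-32394 (WinError 10042 from a setsockopt call) affects Python 3.6.x
# before 3.6.5 on certain Windows builds; 3.6.5+ carries the fix.
print(sys.version)
if sys.platform == "win32" and (3, 6, 0) <= sys.version_info < (3, 6, 5):
    print("This Python may hit WinError 10042; consider upgrading to 3.6.5+")
```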
I am using Pub/Sub to publish logs from an IoT device to the cloud, where they're stored in Cloud Logging by a Cloud Function. This had been working fine, but now I am running into issues where the messages are not being delivered, and eventually the application gets killed. This is the error message:
google.api_core.exceptions.RetryError: Deadline of 60.0s exceeded while calling functools.partial(<function _wrap_unary_errors.<locals>.error_remapped_callable at 0x7487bd20>, topic: "projects/projectid/topics/iot_device_logs"
messages {
  data: "20210612T04:09:22.116Z - ERROR - Failed to create objects on main "
  attributes {
    key: "device_num_id"
    value: "devincenumid"
  }
  attributes {
    key: "logger_name"
    value: "iotXX"
  }
}
, metadata=[('x-goog-request-params', 'topic=projects/projectid/topics/iot_device_logs'), ('x-goog-api-client', 'gl-python/3.7.3 grpc/1.33.2 gax/1.23.0 gccl/2.5.0')]), last exception: 503 Transport closed
20210612T04:21:08.211Z - INFO - QueryDevice object created
20210612T04:38:30.880Z - DEBUG - Analyzer failure counts
20210612T04:42:40.760Z - INFO - Attempting to query device
20210612T04:48:05.126Z - DEBUG - Attempting to publish 'info' log on iotXX
bash: line 1: 609 Killed python3.7 path/to/file.py
The code in question is something like this:
def get_callback(self, f, log):
    def callback(f):
        try:
            self.debug(f"Successfully published log: {f.result()}")
        except Exception as e:
            self.debug(f"Failed to publish log: {e}")
    return callback

def publish_log(self, log, severity):
    # data must be a bytestring
    data = log.encode("utf-8")
    try:
        self.debug(f"Attempting to publish '{severity}' log on {self.name}")
        # add two attributes to distinguish the log once in the cloud
        future = PUBLISHER.publish(
            TOPIC_PATH, data=data, logger_name=self.name, device_num_id=self.deviceid)
        futures[log] = future
        # publish failures shall be handled in the callback function
        future.add_done_callback(self.get_callback(future, log))
    except Exception as e:
        self.debug(f"Error on publish_log: {e}")
I believe this is happening during a connection outage, so I can understand that it might not be able to send the messages. However, I don't understand why the application is being killed.
So far, I am trying to change the retry settings to see if that improves things, but I am concerned that the application will continue to get killed.
Any idea how to determine why it is being killed, instead of simply failing to send and carrying on?
I seem to have found the problem, and it is not what I was thinking. I am posting an answer in case someone else is confused by a similar problem, so hopefully they're not misguided.
In my case, the connection problem coincided with my application being killed. But as far as I can tell, that was not the reason; Pub/Sub and its retry settings had nothing to do with my application getting killed.
I found in the kernel logs a more descriptive message saying that the application had been killed by the out-of-memory (OOM) killer because it was consuming too much RAM.
It turns out I had a memory leak in my program: I was not handling the futures generated by the Pub/Sub publisher properly, so they kept adding up and consuming memory.
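For anyone hitting the same leak, a minimal sketch of one possible fix is to drop each future from the tracking dict once its callback has run, so completed futures can be garbage-collected. The names below mirror the question (`futures`, `publish`) but are illustrative, not the actual application code:

```python
# Minimal sketch: remove each future from the tracking dict as soon as its
# callback fires, so references don't accumulate without bound.
futures = {}

def make_callback(log):
    def callback(future):
        try:
            future.result()  # raises if the publish failed
        except Exception as e:
            print(f"Failed to publish log: {e}")
        finally:
            futures.pop(log, None)  # drop the reference; no unbounded growth
    return callback

def track_publish(publisher, topic_path, log):
    # publisher/topic_path stand in for PUBLISHER/TOPIC_PATH in the question
    future = publisher.publish(topic_path, data=log.encode("utf-8"))
    futures[log] = future
    future.add_done_callback(make_callback(log))
```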
What am I doing wrong here? I'm trying to use STOMP to test some things with Artemis 2.13.0, but when I use either the command-line utility or a Python script, I can't subscribe to a queue, even after I use the utility to publish a message to an address.
Also, if I give it a new queue name, it creates the queue but then doesn't pull messages I publish to it. This is confusing. My actual Java app behaves nothing like this; it's using JMS.
I'm connecting like this with the utility:
stomp -H 192.168.56.105 -P 61616 -U user -W password
> subscribe test3.topic::test.A.queue
Which gives me this error:
Subscribing to 'test3.topic::test.A.queue' with acknowledge set to 'auto', id set to '1'
>
AMQ229019: Queue test.A.queue already exists on address test3.topic
Which makes me think STOMP is trying to create the queue when it subscribes, but I don't see how to manage this in the documentation: http://jasonrbriggs.github.io/stomp.py/api.html
I also have a Python script giving me the same issue.
import time

import stomp

def connect_and_subscribe(conn):
    conn.connect('user', 'password', wait=True)
    conn.subscribe(destination='test3.topic::test.A.queue', id=1, ack='auto')

class MyListener(stomp.ConnectionListener):
    def __init__(self, conn):
        self.conn = conn

    def on_error(self, headers, message):
        print('received an error "%s"' % message)

    def on_message(self, headers, message):
        print('received a message "%s"' % message)
        """for x in range(10):
            print(x)
            time.sleep(1)
        print('processed message')"""

    def on_disconnected(self):
        print('disconnected')
        connect_and_subscribe(self.conn)

conn = stomp.Connection([('192.168.56.105', 61616)], heartbeats=(4000, 4000))
conn.set_listener('', MyListener(conn))
connect_and_subscribe(conn)
time.sleep(1000)
conn.disconnect()
I recommend you try the latest release of ActiveMQ Artemis. Since 2.13.0 was released a year ago, a handful of STOMP-related issues have been fixed, specifically ARTEMIS-2817, which looks like your use case.
It's not clear to me why you're using the fully-qualified queue name (FQQN), so I'm inclined to think this is not the right approach; regardless, the issue you're hitting should be fixed in later versions. If you want multiple consumers to share the messages on a single subscription, then using FQQN would be a good option.
Also, if you want to use the topic/ or queue/ prefix to control routing semantics from the broker, then you should set the anycastPrefix and multicastPrefix appropriately, as described in the documentation.
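For reference, those prefixes are configured on the STOMP acceptor in broker.xml. This is a hedged sketch of the general shape (the port and prefix values are illustrative; check the Artemis STOMP documentation for your version):

```xml
<acceptors>
  <!-- Destinations starting with queue/ get anycast (queue) semantics,
       those starting with topic/ get multicast (topic) semantics -->
  <acceptor name="stomp">tcp://0.0.0.0:61613?protocols=STOMP;anycastPrefix=queue/;multicastPrefix=topic/</acceptor>
</acceptors>
```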
This may be a coincidence, but ARTEMIS-2817 was originally reported by "BENJAMIN Lee WARRICK", which is surprisingly similar to "BenW" (i.e. your name).
I am using this tool to get metrics on the appliance.
The script uses sockets to send syslog messages, and I'm attempting to get it to send the message using native syslog functions instead.
I added the code below, but I cannot seem to get it working.
def sendSyslog2(jsonObj):
    if logHostAvailable["syslog"] == True:
        logging.info("SYSLOG REQUEST: " + json.dumps(jsonObj))
        try:
            syslog.openlog(facility=syslog.LOG_LOCAL5)
            syslog.syslog(syslog.LOG_INFO, json.dumps(jsonObj))
        except:
            logging.error("syslog failed")
Even my test scripts are failing. I'm not well versed in Python or programming, but I can get by with some reference; any pointers in the right direction are appreciated.
From: https://stackoverflow.com/a/38929289/1033927
You can use a remote syslog server: rsyslog, or the Python package loggerglue, which implements the syslog protocol as described in RFC 5424 and RFC 5425. When you use a port above 1024 you can run it as a non-root user.
Within the Python logging module you have a SysLogHandler, which also supports remote syslog logging.
import logging
import logging.handlers

my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)

handler = logging.handlers.SysLogHandler(address=('127.0.0.1', 514))
my_logger.addHandler(handler)

my_logger.debug('this is debug')
my_logger.critical('this is critical')
I am trying to create a simple client with pykafka. For this I need SSL certificates. The client runs under RHEL 7 and Python 3.6.x.
It looks like the connection works, but I don't get any feedback or data, only a blank screen.
How can I check the connection or get error messages?
#!/usr/bin/scl enable rh-python36 -- python3
from pykafka import KafkaClient, SslConfig
from pykafka.common import OffsetType

config = SslConfig(cafile='key/root_ca.crt',
                   certfile='key/cert.crt',
                   keyfile='key/key.key',
                   password='xxxx')

client = KafkaClient(hosts="xxxxxxx:9093", ssl_config=config)

print("topics", client.topics)

topic = client.topics['xxxxxx']
consumer = topic.get_simple_consumer(
    consumer_group="yyyyy",
    auto_offset_reset=OffsetType.EARLIEST,
    reset_offset_on_start=False
)

for message in consumer:
    if message is not None:
        print(message.offset, message.value)
I want an Expert Advisor to open a trade triggered by a Telegram message.
I successfully set up a Hello-World application using MQ4 as the server and a Python Telegram bot as the client.
When the Telegram bot receives a message, it sends a request to MQ4 and gets a simple response back, without executing a trade.
The running code is below.
# Hello World client in Python
# Connects REQ socket to tcp://localhost:5555
import zmq

context = zmq.Context()

# Socket to talk to server
print("Connecting to trading server…")
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5555")
print("Connecting to trading server succeeded")

#################################################################################
# Use your own values from my.telegram.org
api_id = ######
api_hash = '#####'
bot_token = '#####'
#################################################################################

from telethon import TelegramClient, events

client = TelegramClient('anon', api_id, api_hash)

@client.on(events.NewMessage)
async def my_event_handler(event):
    if "Ascending" in event.raw_text:
        if "AUDUSD" in event.raw_text:
            await event.reply("AUDUSD sell")
            # Do 1 request, waiting for a response
            for request in range(1):
                print("Telegram: AUDUSD sell execution requested %s …" % request)
                # Send 2 variables (order type // symbol)
                socket.send(b"AUDUSD Sell execute")
                # Get the reply -> not necessary for the final application;
                # the application just needs to send 2 variables to MQ4 and
                # trigger open_order()
                message = socket.recv()
                print("Received reply %s [ %s ]" % (request, message))

client.start()
client.run_until_disconnected()
// Hello World server in MQ4
#include <Zmq/Zmq.mqh>
//+------------------------------------------------------------------+
void OnTick()
  {
   Context context("helloworld");
   Socket socket(context, ZMQ_REP);
   socket.bind("tcp://*:5555");

   while(!IsStopped())
     {
      ZmqMsg request;
      // Wait for next request from client.
      // MetaTrader note: this will block the script thread
      // and if you try to terminate this script, MetaTrader
      // will hang (and crash if you force closing it)
      socket.recv(request);
      Print("Receive: AUDUSD Sell execute");

      Sleep(1000);

      ZmqMsg reply("Trade was executed");
      // Send reply back to client
      socket.send(reply);
      Print("Feedback: Trade was executed");
     }
  }
//+------------------------------------------------------------------+
Now I want to send 2 variables from Python to MQ4:
1. Order type: buy/sell
2. Symbol: EURUSD, AUDUSD, ...
Send "Sell" if the message contains "Ascending";
send "Buy" if the message contains "Descending";
send "AUDUSD" if the message contains "AUDUSD", and so on.
To do so I found a library from Darwinex and want to combine it (interpretation of the message, sending the values as an array) with my already-working Telegram bot.
For testing I wanted to try the Darwinex example code by itself.
I found the code, v2.0.1:
Python:
https://github.com/darwinex/DarwinexLabs/blob/master/tools/dwx_zeromq_connector/v2.0.1/Python/DWX_ZeroMQ_Connector_v2_0_1_RC8.py
MQ4: (Note: this library code may replace the whole MQ4 code above in the final app.)
https://github.com/darwinex/DarwinexLabs/blob/master/tools/dwx_zeromq_connector/v2.0.1/MQL4/DWX_ZeroMQ_Server_v2.0.1_RC8.mq4
When I copy the code without changing anything, I get an error in Python:
NameError: name '_zmq' is not defined
after running _zmq._DWX_ZeroMQ_Connector() in the Spyder kernel.
What can I do to fix that error?
In the final state I want to run the Python code and the Expert Advisor on the same Windows Server 2012 R2.
Is it enough if I run the .py file in PowerShell on the server, or should I host the file with the website?
I expect to get the whole system/example code running on my VPS or website host server as a testing environment for further coding, but currently I can't get the library code in Python to run properly.
Also, MT4 keeps crashing with the current code, but that should be fixed once I combine my application with the library code example.
(I'm running everything on my local PC with Windows 10.)
Q : I think it is a connection problem between MT4 and Python.
Without a fully reproducible MCVE code this is undecidable.
Having used ZeroMQ-based bidirectional signalling/messaging between a QuantFX in Python and a trading ecosystem implemented in MQL4 in the MetaTrader 4 Terminal, I have had positive experience with this architecture.
Details decide.
The Best Next Step :
Start with a plain PUSH/PULL archetype: Python PUSH-es, the MQL4 script PULL-s, preferably using the tcp:// transport class (Windows platforms may not be ready to use the even simpler, protocol-less ipc:// transport class).
Once you have posack'd this trivial step, move forwards.
Q : How do I need to set up my server to get a connection between those two, since it should be the same as on my local PC?
It is normal to use ZeroMQ on the same localhost during prototyping, so you may test and debug the integration. For details on ZeroMQ, feel free to read all details in other posts.
Q : Is it enough if I run the .py file in PowerShell on the server, or should I host the file with the website I already have and use that as a "Python server"?
Yes, in case the .py file was designed that way. No code, no advice. That simple.
Possible issues :
Versions: ZeroMQ has made a lot of changes between 2.11.x and the recent 4.3.+.
Installation DLL details matter.
MQL4 has similarly gone through many changes (string ceased to be a string and became a struct, to name the most impactful one), so start with simple scenarios and integrate the target architecture in steps/phases, with due testing that the completed phases work as expected.
To fix that problem you need this:
from DWX_ZeroMQ_Connector_v2_0_1_RC8 import DWX_ZeroMQ_Connector
_zmq = DWX_ZeroMQ_Connector()
(adjust for your version of the connector as appropriate).
That should fix the problem.