Websocket code works on Windows but not Linux - python

I'm running the same code on both systems; the following works on Windows, but does not run correctly on Ubuntu (16.04).
import websocket
import json

class WhatEver(object):
    def __init__(self):
        self.ws = websocket.WebSocketApp(
            'wss://beijing.51nebula.com/',
            on_message=self.on_ws_message,
            on_open=self.on_open
        )

    def rin_forever(self):
        print("start run forever")
        self.ws.run_forever()

    def on_ws_message(self, ws, message):
        print(message)
        self.ws.close()

    def _send_msg(self, params):
        call = {"id": 1, "method": "call",
                "params": params}
        self.ws.send(json.dumps(call))

    def on_open(self, ws):
        print("start open function")
        self._send_msg([1, "login", ["", ""]])

if __name__ == '__main__':
    ws = WhatEver()
    print("start")
    ws.rin_forever()
    print("close")
I've tried reinstalling all modules (including the same versions of Python and websocket on both Windows and Ubuntu); the output of this code is correct on Windows:
start
start run forever
start open function
{"id":1,"jsonrpc":"2.0","result":true}
close
But when it runs on Ubuntu, while it does print something, it misses some of the print statements:
start
start run forever
close
When I debugged the code on Ubuntu, I found that the main thread stops in the self.ws.run_forever() call and never enters the on_open function; then it exits.

You are using two different versions of the library, with the version on Windows being older than version 0.53. As of version 0.53, the websocket project differentiates callback behaviour between bound methods and regular functions.
You are passing in bound methods (self.on_open and self.on_ws_message), in which case the ws argument is not passed in. Those methods are apparently expected to have access to the websocket already via their instance, probably because the expected use-case is to create a subclass of the socket class.
This is unfortunately not documented by the project, and the change appears to have caused problems for other people as well.
So for version 0.53 and newer, drop the ws argument from your callbacks:
class WhatEver(object):
    def __init__(self):
        self.ws = websocket.WebSocketApp(
            'wss://beijing.51nebula.com/',
            on_message=self.on_ws_message,
            on_open=self.on_open
        )

    # ...

    def on_ws_message(self, message):
        print(message)
        self.ws.close()

    # ...

    def on_open(self):
        print("start open function")
        self._send_msg([1, "login", ["", ""]])
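If you need the same callbacks to work under both old and new versions of websocket-client, one option (my own sketch, not from the original answer) is to accept a variable argument list and take the message from the last position:

# Compatibility sketch: works whether or not the library passes the ws
# argument to bound-method callbacks (websocket-client before/after 0.53).
def on_ws_message(self, *args):
    message = args[-1]  # the message is always the last positional argument
    print(message)
    self.ws.close()

def on_open(self, *args):
    print("start open function")
    self._send_msg([1, "login", ["", ""]])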
You can discover issues like these by enabling logging; the websocket module logs exceptions it encounters in callbacks to the logging.getLogger('websocket') logger. A quick way to see these issues is to enable tracing:
websocket.enableTrace(True)
which adds a logging handler just to that logger, turns on logging.DEBUG level reporting for it, and in addition enables full echoing of socket data.
Or you can configure logging to output messages in general with the logging.basicConfig() function:
import logging
logging.basicConfig()
which lets you see messages of level logging.WARNING and up.
Using the latter option, the uncorrected version of the code prints:
start
start run forever
ERROR:websocket:error from callback <bound method WhatEver.on_open of <__main__.WhatEver object at 0x1119ec668>>: on_open() missing 1 required positional argument: 'ws'
close
You can verify the version of websocket-client you have installed by printing websocket.__version__:
>>> import websocket
>>> websocket.__version__
'0.54.0'

What's the design solution for this situation?

I have the following situation (all 3 are methods in a Python class) where I have to send a message to a remote device, with 2 callbacks that give detail about the state of the remote device.
# callback when an app has completed downloading on a remote device
def handleAppDownloadComplete():
    # something
    pass

# callback when an app has restarted on a remote device
def handleAppRestart():
    # app restart callback
    pass

def sendMessage(message):
    # do things like validation etc.
    sendMessageToRemoteDevice(message)
My situation is:
1) Call sendMessage() when the handleAppDownloadComplete() callback is called.
2) At any point during sendMessage(), if handleAppRestart() is called, stop execution of sendMessage(), wait for handleAppDownloadComplete() to be called again, and then call sendMessage() again.
I have tried using threading.Event(), but this seems very cyclical to me. And to add, both callbacks are provided by third-party libraries and I can't change them. Any better way/design to handle this situation?
https://docs.python.org/3/library/asyncio-task.html#future (look at the example)
You could model the call to sendMessage() as a task which can be cancelled by handleAppRestart(). So you'd have a class variable task which holds the current task.
import asyncio

class Foo:
    task = None
    loop = asyncio.get_event_loop()

    def handleAppDownloadComplete(self):
        # bar is a placeholder for the message to send
        self.task = asyncio.ensure_future(self.sendMessage(bar))
        self.loop.run_until_complete(self.task)

    # callback when an app has restarted on a remote device
    def handleAppRestart(self):
        self.task.cancel()

    @asyncio.coroutine
    def sendMessage(self, message):
        # do things like validation etc.
        sendMessageToRemoteDevice(message)
Anyway, the answer is: use asynchronous abstractions to do what you want.
EDIT: Wait, you can't change handleAppDownloadComplete(), handleAppRestart() or sendMessage(message)?
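Either way, here is a small runnable toy (my addition, not part of the original answer) demonstrating the cancellation semantics the sketch above relies on:

import asyncio

@asyncio.coroutine
def send_message(message):
    # stand-in for validation + send; the sleep is a yield point
    # where cancel() can interrupt the coroutine
    yield from asyncio.sleep(1)
    print("sent", message)

@asyncio.coroutine
def demo():
    task = asyncio.ensure_future(send_message("hello"))
    task.cancel()  # what handleAppRestart() would do
    try:
        yield from task
    except asyncio.CancelledError:
        print("send cancelled; wait for the next download-complete and retry")

asyncio.get_event_loop().run_until_complete(demo())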

Provide remote shell for Python script

I want to create a convenient, simple way to connect to my running Python script remotely (via file sockets, TCP or whatever) to get a remote interactive shell.
I thought that this would be easy via IPython or so, but I didn't really find any good example. I tried to start IPython.embed_kernel(), but that is blocking. So I tried to run it in another thread, but that had many strange side effects on the rest of my script, and I don't want any side effects (no replacement of sys.stdout, sys.stderr, sys.excepthook or whatever) - and it also didn't work: I could not connect. I found this related bug report and this code snippet which suggest using mock.patch('signal.signal'), but that also didn't work. Also, why would I need that? I don't want IPython to register any signal handlers.
There are also hacks such as pyringe and my own pydbattach to attach to a running Python instance, but they seem too hacky.
Maybe QdbRemotePythonDebugger can help me?
My current solution is to set up an IPython ZMQ kernel. I don't just use
IPython.embed_kernel()
because that has many side effects, such as messing around with sys.stdout, sys.stderr, sys.excepthook, signal.signal, etc., and I don't want those side effects. Also, embed_kernel() is blocking and doesn't really work out of the box in a separate thread (see here).
So, I came up with this code, which is far too complicated in my opinion. (That is why I created a feature request here.)
def initIPythonKernel():
    # You can remotely connect to this kernel. See the output on stdout.
    try:
        import IPython.kernel.zmq.ipkernel
        from IPython.kernel.zmq.ipkernel import Kernel
        from IPython.kernel.zmq.heartbeat import Heartbeat
        from IPython.kernel.zmq.session import Session
        from IPython.kernel import write_connection_file
        import zmq
        from zmq.eventloop import ioloop
        from zmq.eventloop.zmqstream import ZMQStream
        IPython.kernel.zmq.ipkernel.signal = lambda sig, f: None  # Overwrite.
    except ImportError, e:
        print "IPython import error, cannot start IPython kernel. %s" % e
        return
    import os
    import atexit
    import socket
    import logging
    import threading
    # Do this in the main thread to avoid history sqlite DB errors at exit.
    # https://github.com/ipython/ipython/issues/680
    assert isinstance(threading.currentThread(), threading._MainThread)
    try:
        connection_file = "kernel-%s.json" % os.getpid()
        def cleanup_connection_file():
            try:
                os.remove(connection_file)
            except (IOError, OSError):
                pass
        atexit.register(cleanup_connection_file)
        logger = logging.Logger("IPython")
        logger.addHandler(logging.NullHandler())
        session = Session(username=u'kernel')
        context = zmq.Context.instance()
        ip = socket.gethostbyname(socket.gethostname())
        transport = "tcp"
        addr = "%s://%s" % (transport, ip)
        shell_socket = context.socket(zmq.ROUTER)
        shell_port = shell_socket.bind_to_random_port(addr)
        iopub_socket = context.socket(zmq.PUB)
        iopub_port = iopub_socket.bind_to_random_port(addr)
        control_socket = context.socket(zmq.ROUTER)
        control_port = control_socket.bind_to_random_port(addr)
        hb_ctx = zmq.Context()
        heartbeat = Heartbeat(hb_ctx, (transport, ip, 0))
        hb_port = heartbeat.port
        heartbeat.start()
        shell_stream = ZMQStream(shell_socket)
        control_stream = ZMQStream(control_socket)
        kernel = Kernel(session=session,
                        shell_streams=[shell_stream, control_stream],
                        iopub_socket=iopub_socket,
                        log=logger)
        write_connection_file(connection_file,
                              shell_port=shell_port, iopub_port=iopub_port,
                              control_port=control_port, hb_port=hb_port,
                              ip=ip)
        print "To connect another client to this IPython kernel, use:", \
              "ipython console --existing %s" % connection_file
    except Exception, e:
        print "Exception while initializing IPython ZMQ kernel. %s" % e
        return
    def ipython_thread():
        kernel.start()
        try:
            ioloop.IOLoop.instance().start()
        except KeyboardInterrupt:
            pass
    thread = threading.Thread(target=ipython_thread, name="IPython kernel")
    thread.daemon = True
    thread.start()
Note that this code is outdated now. I have made a package here which should contain a more recent version, and which can be installed via pip.
There are other alternatives that attach to a running CPython process without it having been prepared beforehand. They usually use the OS debugging capabilities (or gdb/lldb) to attach to the native CPython process and then inject some code or just analyze the native CPython thread stacks:
pyringe
pyrasite
pystuck
pdb-clone
Here are other alternatives where you prepare your Python script beforehand to listen on some (TCP/file) socket to provide an interface for remote debugging and/or just a Python shell/REPL:
winpdb (cross-platform) remote debugger
PyCharm IDE remote debugger (doc)
PyDev IDE remote debugger
Twisted Conch Manhole: official example, lothar.com example, lysator.liu.se example, related StackOverflow question, blog.futurefoundries.com (2013); see the sketch after this list
very simple manhole, which also has some overview of related projects
ispyd
Eric IDE
Trepan (based on pydb)
rpdb
rconsole (part of rfoo)
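Since Twisted Conch Manhole is one of the more common choices here, a minimal sketch (my own, following the standard recipe; adjust to your Twisted version) that serves a Python REPL on a local TCP port:

from twisted.internet import protocol, reactor
from twisted.conch import manhole
from twisted.conch.insults import insults

# expose a REPL with your objects in its namespace, on localhost only
namespace = {"answer": 42}
factory = protocol.ServerFactory()
factory.protocol = lambda: insults.ServerProtocol(
    manhole.ColoredManhole, namespace)
reactor.listenTCP(2323, factory, interface="127.0.0.1")
reactor.run()

You can then connect with, e.g., telnet localhost 2323.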
Some overviews and collected code examples:
(QGIS) Example code for PyDev, Winpdb, Eric
Python Wiki: Python debugging tools,
Python Wiki: Python debuggers
(This overview is from here.)

Python Tornado - disable logging to stderr

I have a minimalistic Tornado application:
import tornado.ioloop
import tornado.web

class PingHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("pong\n")

if __name__ == "__main__":
    application = tornado.web.Application([("/ping", PingHandler)])
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()
Tornado keeps reporting request errors to stderr:
WARNING:tornado.access:404 GET / (127.0.0.1) 0.79ms
Question: I want to prevent it from logging these error messages. How?
Tornado version 3.1; Python 2.6
It's clear that "someone" initializes the logging subsystem when we start Tornado. Here is the code from ioloop.py that reveals the mystery:
def start(self):
    if not logging.getLogger().handlers:
        # The IOLoop catches and logs exceptions, so it's
        # important that log output be visible.  However, python's
        # default behavior for non-root loggers (prior to python
        # 3.2) is to print an unhelpful "no handlers could be
        # found" message rather than the actual log entry, so we
        # must explicitly configure logging if we've made it this
        # far without anything.
        logging.basicConfig()
basicConfig() is called and configures a default stderr handler.
So to set up proper logging for tornado access, you need to:
Add a handler to the tornado.access logger: logging.getLogger("tornado.access").addHandler(...)
Disable propagation for that logger: logging.getLogger("tornado.access").propagate = False. Otherwise messages will arrive both at your handler and at stderr.
The previous answer was correct, but a little incomplete. This will send everything to the NullHandler:
hn = logging.NullHandler()
hn.setLevel(logging.DEBUG)
logging.getLogger("tornado.access").addHandler(hn)
logging.getLogger("tornado.access").propagate = False
You could also quite simply (in one line) do:
logging.getLogger('tornado.access').disabled = True

Python logging module doesn't work within installed windows service

Why is it that calls to the logging framework within a Python service do not produce output to the log (file, stdout, ...)?
My Python service has the general form:
import logging
import servicemanager
import win32event
import win32service
import win32serviceutil

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
fh = logging.FileHandler('out.log')
logger.addHandler(fh)
logger.error("OUTSIDE")

class Service(win32serviceutil.ServiceFramework):
    _svc_name_ = "example"
    _svc_display_name_ = "example"
    _svc_description_ = "example"

    def __init__(self, args):
        logger.error("NOT LOGGED")
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
        servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
                              servicemanager.PYS_SERVICE_STARTED,
                              (self._svc_name_, ''))

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.hWaitStop)
        self.stop = True

    def SvcDoRun(self):
        self.ReportServiceStatus(win32service.SERVICE_RUNNING)
        self.main()

    def main(self):
        # service logic
        logger.error("NOT LOGGED EITHER")
The first call to logger.error produces output, but not the two inside the service class (even after installing the service and making sure it is running).
I've found that only logging within the actual service loop works with the logging module, and the log file ends up somewhere like C:\python27\Lib\site-packages\win32.
I abandoned logging with the logging module for Windows as it didn't seem very effective. Instead I started to use the Windows logging service, e.g. servicemanager.LogInfoMsg() and related functions. This logs events to the Windows Application log, which you can find in the Event Viewer (start->run->Event Viewer, Windows Logs folder, Application log).
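For example (a minimal sketch; these functions are part of pywin32's servicemanager module):

import servicemanager

# write entries to the Windows Application event log
servicemanager.LogInfoMsg("service is doing work")
servicemanager.LogErrorMsg("something went wrong")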
You have to use the full path of the log file, e.g.:
fh = logging.FileHandler('C:\\out.log')
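This is because a service does not run with your script's directory as its working directory, so a relative path like 'out.log' lands wherever the service process happens to start. A sketch (my addition) that anchors the log next to the script instead:

import logging
import os

# build an absolute path so the log location doesn't depend on the
# service's working directory
log_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'out.log')
fh = logging.FileHandler(log_path)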
Actually, the "outside" logger is initialized twice: the two initializations happen in different processes, one in the regular Python process and one in the Windows service process. For some reason the second one isn't configured successfully, and neither are the "inside" loggers in that process; that's why you can't find the "inside" logs.

Best way to run remote commands thru ssh in Twisted?

I have a twisted application which now needs to monitor processes running on several boxes. The way I do it manually is 'ssh and ps'; now I'd like my twisted application to do the same. I have 2 options:
Use paramiko or leverage the power of twisted.conch
I really want to use twisted.conch, but my research led me to believe that it's primarily intended for creating SSH servers and SSH clients. However, my requirement is a simple remoteExecute(some_cmd).
I was able to figure out how to do this using paramiko, but I didn't want to stick paramiko into my twisted app before looking at how to do this using twisted.conch.
Code snippets showing how to run remote commands over SSH using twisted would be highly appreciated. Thanks.
Followup - Happily, the ticket I referenced below is now resolved. The simpler API will be included in the next release of Twisted. The original answer is still a valid way to use Conch and may reveal some interesting details about what's going on, but from Twisted 13.1 on, if you just want to run a command and handle its I/O, this simpler interface will work.
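For reference, a minimal sketch of that newer API (assuming Twisted 13.1+; the hostname, username and password here are placeholders, and the exact keyword arguments may vary between releases):

from twisted.internet import reactor
from twisted.internet.defer import Deferred
from twisted.internet.protocol import Factory, Protocol
from twisted.conch.endpoints import SSHCommandClientEndpoint

class ShowOutput(Protocol):
    # print the command's output and fire a Deferred when the channel closes
    def connectionMade(self):
        self.done = Deferred()

    def dataReceived(self, data):
        print(data)

    def connectionLost(self, reason):
        self.done.callback(None)

endpoint = SSHCommandClientEndpoint.newConnection(
    reactor, b"ps aux", b"user", b"example.com", port=22,
    password=b"secret")
d = endpoint.connect(Factory.forProtocol(ShowOutput))
d.addCallback(lambda proto: proto.done)
d.addBoth(lambda ignored: reactor.stop())
reactor.run()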
It takes an unfortunately large amount of code to execute a command on an SSH server using the Conch client APIs. Conch makes you deal with a lot of different layers, even if you just want sensible boring default behavior. However, it's certainly possible. Here's some code which I've been meaning to finish and add to Twisted to simplify this case:
import sys, os
from zope.interface import implements

from twisted.python.failure import Failure
from twisted.python.log import err
from twisted.internet.error import ConnectionDone
from twisted.internet.defer import Deferred, succeed, setDebugging
from twisted.internet.interfaces import IStreamClientEndpoint
from twisted.internet.protocol import Factory, Protocol

from twisted.conch.ssh.common import NS
from twisted.conch.ssh.channel import SSHChannel
from twisted.conch.ssh.transport import SSHClientTransport
from twisted.conch.ssh.connection import SSHConnection
from twisted.conch.client.default import SSHUserAuthClient
from twisted.conch.client.options import ConchOptions

# setDebugging(True)

class _CommandTransport(SSHClientTransport):
    _secured = False

    def verifyHostKey(self, hostKey, fingerprint):
        return succeed(True)

    def connectionSecure(self):
        self._secured = True
        command = _CommandConnection(
            self.factory.command,
            self.factory.commandProtocolFactory,
            self.factory.commandConnected)
        userauth = SSHUserAuthClient(
            os.environ['USER'], ConchOptions(), command)
        self.requestService(userauth)

    def connectionLost(self, reason):
        if not self._secured:
            self.factory.commandConnected.errback(reason)

class _CommandConnection(SSHConnection):
    def __init__(self, command, protocolFactory, commandConnected):
        SSHConnection.__init__(self)
        self._command = command
        self._protocolFactory = protocolFactory
        self._commandConnected = commandConnected

    def serviceStarted(self):
        channel = _CommandChannel(
            self._command, self._protocolFactory, self._commandConnected)
        self.openChannel(channel)

class _CommandChannel(SSHChannel):
    name = 'session'

    def __init__(self, command, protocolFactory, commandConnected):
        SSHChannel.__init__(self)
        self._command = command
        self._protocolFactory = protocolFactory
        self._commandConnected = commandConnected

    def openFailed(self, reason):
        self._commandConnected.errback(reason)

    def channelOpen(self, ignored):
        self.conn.sendRequest(self, 'exec', NS(self._command))
        self._protocol = self._protocolFactory.buildProtocol(None)
        self._protocol.makeConnection(self)

    def dataReceived(self, bytes):
        self._protocol.dataReceived(bytes)

    def closed(self):
        self._protocol.connectionLost(
            Failure(ConnectionDone("ssh channel closed")))

class SSHCommandClientEndpoint(object):
    implements(IStreamClientEndpoint)

    def __init__(self, command, sshServer):
        self._command = command
        self._sshServer = sshServer

    def connect(self, protocolFactory):
        factory = Factory()
        factory.protocol = _CommandTransport
        factory.command = self._command
        factory.commandProtocolFactory = protocolFactory
        factory.commandConnected = Deferred()
        d = self._sshServer.connect(factory)
        d.addErrback(factory.commandConnected.errback)
        return factory.commandConnected

class StdoutEcho(Protocol):
    def dataReceived(self, bytes):
        sys.stdout.write(bytes)
        sys.stdout.flush()

    def connectionLost(self, reason):
        self.factory.finished.callback(None)

def copyToStdout(endpoint):
    echoFactory = Factory()
    echoFactory.protocol = StdoutEcho
    echoFactory.finished = Deferred()
    d = endpoint.connect(echoFactory)
    d.addErrback(echoFactory.finished.errback)
    return echoFactory.finished

def main():
    from twisted.python.log import startLogging
    from twisted.internet import reactor
    from twisted.internet.endpoints import TCP4ClientEndpoint

    # startLogging(sys.stdout)

    sshServer = TCP4ClientEndpoint(reactor, "localhost", 22)
    commandEndpoint = SSHCommandClientEndpoint("/bin/ls", sshServer)

    d = copyToStdout(commandEndpoint)
    d.addErrback(err, "ssh command / copy to stdout failed")
    d.addCallback(lambda ignored: reactor.stop())
    reactor.run()

if __name__ == '__main__':
    main()
Some things to note about it:
It uses the new endpoint APIs introduced in Twisted 10.1. It's possible to do this directly on reactor.connectTCP, but I did it as an endpoint to make it more useful; endpoints can be swapped easily without the code that actually asks for a connection knowing.
It does no host key verification at all! _CommandTransport.verifyHostKey is where you would implement that; a sketch follows after this list. Take a look at twisted/conch/client/default.py for some hints about what kinds of things you might want to do.
It takes $USER to be the remote username, which you may want to be a parameter.
It probably only works with key authentication. If you want to enable password authentication, you probably need to subclass SSHUserAuthClient and override getPassword to do something.
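As a sketch of the host-key point above (my addition, with a hypothetical pinned fingerprint; real code would check a known_hosts file instead):

from twisted.internet.defer import succeed, fail

KNOWN_FINGERPRINT = "aa:bb:cc:dd:..."  # hypothetical pinned fingerprint

def verifyHostKey(self, hostKey, fingerprint):
    # accept only the host key we expect; anything else aborts the handshake
    if fingerprint == KNOWN_FINGERPRINT:
        return succeed(True)
    return fail(Exception("host key mismatch: %r" % (fingerprint,)))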
Almost all of the layers of SSH and Conch are visible here:
_CommandTransport is at the bottom, a plain old protocol that implements the SSH transport protocol. It creates a...
_CommandConnection which implements the SSH connection negotiation parts of the protocol. Once that completes, a...
_CommandChannel is used to talk to a newly opened SSH channel. _CommandChannel does the actual exec to launch your command. Once the channel is opened it creates an instance of...
StdoutEcho, or whatever other protocol you supply. This protocol will get the output from the command you execute, and can write to the command's stdin.
See http://twistedmatrix.com/trac/ticket/4698 for progress in Twisted on supporting this with less code.
