Looking at the python socketio documentation I defined a custom namespace:
import socketio

class MyCustomNamespace(socketio.ClientNamespace):
    def on_connect(self):
        pass

    def on_disconnect(self):
        pass

    def on_my_event(self, data):
        self.emit('my_response', data)

sio = socketio.Client()
sio.register_namespace(MyCustomNamespace('/chat'))
sio.connect("http://127.0.0.1:8888")
sio.emit("get", namespace="/chat")
Now this namespace works as long as I start the connection after registering the namespace. Makes sense.
But is there a way to register namespaces after the connection has started? I get the following error:
File "//src/pipeline/reporting/iam.py", line 30, in request_iam_credentials
self.socket_client.emit(
File "/usr/local/lib/python3.8/dist-packages/socketio/client.py", line 393, in emit
raise exceptions.BadNamespaceError(
socketio.exceptions.BadNamespaceError: /chat is not a connected namespace.
Each namespace is essentially a separate space you have to connect to. If you don't specify a namespace explicitly, it defaults to '/'. Look:
sio = socketio.Client()
sio.emit("get") # emit before connect
# socketio.exceptions.BadNamespaceError: / is not a connected namespace.
Your situation is the same, but with the /chat namespace - you connect to / but try emitting to /chat. You need to connect to the /chat namespace yourself:
sio.connect("http://127.0.0.1:8888", namespaces=["/chat"]) # notice you can connect to more namespaces at once
which is exactly what sio.connect() does when you register the namespace beforehand.
I'm a newbie in Python and I have problem with mockito in Python.
My production code looks like below:
from stompest.config import StompConfig
from stompest.sync import Stomp

class Connector:
    def sendMessage(self):
        message = {'message'}
        dest = '/queue/foo'
        self._send(message=message, dest=dest)

    def _send(self, message='', dest=''):
        config = StompConfig(uri="tcp://localhost:61613")
        client = Stomp(config)
        client.connect()
        client.send(body=message, destination=dest, headers='')
        client.disconnect()
As you can see, I would like to send a message using the STOMP protocol. In my test I would like to verify that when I invoke the sendMessage method of the Connector class, the send method from the Stompest library is invoked exactly once.
My unit test looks like:
from Connector import Connector
import unittest
from mockito import *
import stompest
from stompest.config import StompConfig
from stompest.sync import Stomp

class test_Connector(unittest.TestCase):
    def test_shouldInvokeConnectMethod(self):
        stomp_config = StompConfig(uri="tcp://localhost:61613")
        mock_stomp = mock(Stomp(stomp_config))
        connector = Connector()
        connector.sendMessage()
        verify(mock_stomp, times=1).connect()
When I run the test in debug mode I can see that the connect() method is invoked, and the send method as well, but the test result is:
Failure
Traceback (most recent call last):
File "C:\development\systemtest_repo\robot_libraries\test_Connector.py", line 16, in test_shouldInvokeConnectMethod
verify(mock_stomp, times=1).connect()
File "C:\Python27\lib\site-packages\mockito\invocation.py", line 111, in __call__
verification.verify(self, len(matched_invocations))
File "C:\Python27\lib\site-packages\mockito\verification.py", line 63, in verify
raise VerificationError("\nWanted but not invoked: %s" % (invocation))
VerificationError:
Wanted but not invoked: connect()
What did I do wrong?
You don't actually call the connect method on the mock object - you just check that it was called. This is what the error says as well: "Wanted but not invoked: connect()". Perhaps adding a call to mock_stomp.connect() before the call to verify will fix this:
mock_stomp = mock(Stomp(stomp_config))
# call the connect method first...
mock_stomp.connect()
connector = Connector()
connector.sendMessage()
# ...then check it was called
verify(mock_stomp, times=1).connect()
If you are instead trying to check that the mock is called from Connector, you probably at least need to pass in the mock_stomp object via dependency injection. For example
class Connector:
    def __init__(self, stomp):
        self.stomp = stomp

    def sendMessage(self, msg):
        self.stomp.connect()
        # etc ...
and in your test
mock_stomp = mock(Stomp(stomp_config))
connector = Connector(mock_stomp)
connector.sendMessage()
verify(mock_stomp, times=1).connect()
Otherwise, I don't see how the connect() method could be invoked on the same instance of mock_stomp that you are basing your assertions on.
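If mockito is not a hard requirement, the same dependency-injection verification can be written with the standard library's unittest.mock. The Connector below is a hypothetical variant along these lines (the sendMessage signature and defaults are illustrative, not the asker's real code):

```python
from unittest.mock import MagicMock

class Connector:
    def __init__(self, stomp):
        # The Stomp client is injected, so a test can pass in a mock.
        self.stomp = stomp

    def sendMessage(self, message='', dest='/queue/foo'):
        self.stomp.connect()
        self.stomp.send(body=message, destination=dest)
        self.stomp.disconnect()

# In the test: inject a MagicMock and verify the calls it received.
mock_stomp = MagicMock()
Connector(mock_stomp).sendMessage('hello')

mock_stomp.connect.assert_called_once()
mock_stomp.send.assert_called_once_with(body='hello', destination='/queue/foo')
mock_stomp.disconnect.assert_called_once()
```

Because the mock is the same object the Connector uses, the verification now observes the calls made inside sendMessage, which is exactly what the original test was missing.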
How can I manage my rabbit-mq connection in Pyramid app?
I would like to re-use a connection to the queue throughout the web application's lifetime. Currently I am opening/closing connection to the queue for every publish call.
But I can't find any "global" services definition in Pyramid. Any help appreciated.
Pyramid does not need a "global services definition" because you can trivially do that in plain Python:
db.py:
connection = None

def connect(url):
    global connection
    connection = FooBarBaz(url)
your startup file (__init__.py)
from db import connect
if __name__ == '__main__':
    connect(DB_CONNSTRING)
elsewhere:
from db import connection
...
connection.do_stuff(foo, bar, baz)
Having a global (any global) is going to cause problems if you ever run your app in a multi-threaded environment, but is perfectly fine if you run multiple processes, so it's not a huge restriction. If you need to work with threads the recipe can be extended to use thread-local variables. Here's another example which also connects lazily, when the connection is needed the first time.
db.py:
import threading

connections = threading.local()

def get_connection():
    if not hasattr(connections, 'this_thread_connection'):
        connections.this_thread_connection = FooBarBaz(DB_STRING)
    return connections.this_thread_connection
elsewhere:
from db import get_connection
get_connection().do_stuff(foo, bar, baz)
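A quick self-contained check of the thread-local recipe, with a stub class standing in for FooBarBaz (which is only a placeholder name here): each thread lazily creates and then reuses its own connection object.

```python
import threading

class FakeConnection:
    """Stub standing in for the real connection class (FooBarBaz)."""
    pass

connections = threading.local()

def get_connection():
    if not hasattr(connections, 'this_thread_connection'):
        connections.this_thread_connection = FakeConnection()
    return connections.this_thread_connection

# The same thread always gets the same connection object back...
main_conn = get_connection()
assert get_connection() is main_conn

# ...while another thread lazily creates its own, separate one.
other = []
t = threading.Thread(target=lambda: other.append(get_connection()))
t.start()
t.join()
assert other[0] is not main_conn
```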
Another common problem with long-living connections is that the application won't auto-recover if, say, you restart RabbitMQ while your application is running. You'll need to somehow detect dead connections and reconnect.
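One simple shape for that recovery is a wrapper that recreates the connection and retries once when a call fails. This is only a sketch: the factory and its send() API are placeholders, and a real client (e.g. pika for RabbitMQ) has its own exception types you would catch instead of ConnectionError.

```python
class ReconnectingConnection:
    """Sketch: recreate the connection and retry once on failure."""
    def __init__(self, factory):
        self.factory = factory        # callable that builds a connection
        self.conn = factory()

    def publish(self, message):
        try:
            return self.conn.send(message)
        except ConnectionError:
            # Connection died (e.g. broker restart): reconnect, retry once.
            self.conn = self.factory()
            return self.conn.send(message)

# Demo with a stub whose first instance is already dead.
class FlakyConnection:
    created = 0
    def __init__(self):
        FlakyConnection.created += 1
        self.dead = FlakyConnection.created == 1
    def send(self, message):
        if self.dead:
            raise ConnectionError('broker went away')
        return 'sent: ' + message

conn = ReconnectingConnection(FlakyConnection)
assert conn.publish('hi') == 'sent: hi'   # reconnected transparently
assert FlakyConnection.created == 2
```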
It looks like you can attach objects to the request with add_request_method.
Here's a little example app using that method to make one and only one connection to a socket on startup, then make the connection available to each request:
from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response

def index(request):
    return Response('I have a persistent connection: {} with id {}'.format(
        repr(request.conn).replace("<", "&lt;"),
        id(request.conn),
    ))

def add_connection():
    import socket
    s = socket.socket()
    s.connect(("google.com", 80))
    print("I should run only once")

    def inner(request):
        return s
    return inner

if __name__ == '__main__':
    config = Configurator()
    config.add_route('index', '/')
    config.add_view(index, route_name='index')
    config.add_request_method(add_connection(), 'conn', reify=True)
    app = config.make_wsgi_app()
    server = make_server('0.0.0.0', 8080, app)
    server.serve_forever()
You'll need to be careful about threading / forking in this case though (each thread / process will need its own connection). Also, note that I am not very familiar with pyramid, there may be a better way to do this.
I am trying to listen to new socketIO connections on the user's id namespace. The user ID is stored in the flask session object.
@socketio.on('connect', namespace=session['userId'])
def test_connect():
    emit('newMessage')
This code is producing the following error:
raise RuntimeError('working outside of request context')
How can I get the above connect listener to run within the request context?
Thanks!
Unfortunately this cannot be done, because namespaces aren't dynamic, you have to use a static string as a namespace.
The idea of the namespace in SocketIO is not to add information about the connection, but to allow the client to open more than one individual channel with the server. Namespaces allow the SocketIO protocol to multiplex all these channels into a single physical connection.
What you want to do is to provide an input argument of the connection into the server. For that, just add the value to your payload:
@socketio.on('connect', namespace='/chat')
def test_connect():
    userid = session['userId']
    # ...
I'm using HTTPServer for a basic HTTP server using SSL. I would like to log any time a client initiates an SSL Handshake (or perhaps any time a socket is accepted?) along with any associated errors. I imagine that I'd need to extend some class or override some method, but I'm not sure which or how to properly go about implementing it. I'd greatly appreciate any help. Thanks in advance!
Trimmed down sample code:
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn
from threading import Thread
import ssl
import logging
import sys

class MyHTTPHandler(BaseHTTPRequestHandler):
    def log_message(self, format, *args):
        logger.info("%s - - %s" % (self.address_string(), format % args))

    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write('test'.encode("utf-8"))

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    pass

logger = logging.getLogger('myserver')
handler = logging.FileHandler('server.log')
formatter = logging.Formatter('[%(asctime)s] %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

server = ThreadedHTTPServer(('', 443), MyHTTPHandler)
server.socket = ssl.wrap_socket(server.socket, keyfile='server.key', certfile='server.crt',
                                server_side=True, cert_reqs=ssl.CERT_REQUIRED, ca_certs='client.crt')
Thread(target=server.serve_forever).start()

try:
    quitcheck = input("Type 'quit' at any time to quit.\n")
    if quitcheck == "quit":
        server.shutdown()
except KeyboardInterrupt:
    server.shutdown()
From looking at the ssl module, most of the relevant magic happens in the SSLSocket class.
ssl.wrap_socket() is just a tiny convenience function that basically serves as a factory for an SSLSocket with some reasonable defaults, and wraps an existing socket.
Unfortunately, SSLSocket does not seem to do any logging of its own, so there's no easy way to turn up a logging level, set a debug flag or register any handlers.
So what you can do instead is to subclass SSLSocket, override the methods you're interested in with your own that do some logging, and create and use your own wrap_socket helper function.
Subclassing SSLSocket
First, copy over ssl.wrap_socket() from your Python's .../lib/python2.7/ssl.py into your code. (Make sure that any code you copy and modify actually comes from the Python installation you're using - the code may have changed between different Python versions).
Now adapt your copy of wrap_socket() to
create an instance of a LoggingSSLSocket (which we'll implement below) instead of SSLSocket
and use constants from the ssl module where necessary (ssl.CERT_NONE and ssl.PROTOCOL_SSLv23 in this example)
def wrap_socket(sock, keyfile=None, certfile=None,
                server_side=False, cert_reqs=ssl.CERT_NONE,
                ssl_version=ssl.PROTOCOL_SSLv23, ca_certs=None,
                do_handshake_on_connect=True,
                suppress_ragged_eofs=True,
                ciphers=None):
    return LoggingSSLSocket(sock=sock, keyfile=keyfile, certfile=certfile,
                            server_side=server_side, cert_reqs=cert_reqs,
                            ssl_version=ssl_version, ca_certs=ca_certs,
                            do_handshake_on_connect=do_handshake_on_connect,
                            suppress_ragged_eofs=suppress_ragged_eofs,
                            ciphers=ciphers)
Now change your line
server.socket = ssl.wrap_socket (server.socket, ...)
to
server.socket = wrap_socket(server.socket, ...)
in order to use your own wrap_socket().
Now for subclassing SSLSocket. Create a class LoggingSSLSocket that subclasses SSLSocket by adding the following to your code:
class LoggingSSLSocket(ssl.SSLSocket):
    def accept(self, *args, **kwargs):
        logger.debug('Accepting connection...')
        result = super(LoggingSSLSocket, self).accept(*args, **kwargs)
        logger.debug('Done accepting connection.')
        return result

    def do_handshake(self, *args, **kwargs):
        logger.debug('Starting handshake...')
        result = super(LoggingSSLSocket, self).do_handshake(*args, **kwargs)
        logger.debug('Done with handshake.')
        return result
Here we override the accept() and do_handshake() methods of ssl.SSLSocket - everything else stays the same, since the class inherits from SSLSocket.
Generic approach to overriding methods
I used a particular pattern for overriding these methods in order to make it easier to apply to pretty much any method you'll ever override:
def methodname(self, *args, **kwargs):
*args, **kwargs makes sure our method accepts any number of positional and keyword arguments, if any. accept doesn't actually take any of those, but it still works because of Python's packing / unpacking of argument lists.
logger.debug('Before call to superclass method')
Here you get the opportunity to do your own thing before calling the superclass' method.
result = super(LoggingSSLSocket, self).methodname(*args, **kwargs)
This is the actual call to the superclass' method. See the docs on super() for details on how this works, but it basically calls .methodname() on LoggingSSLSocket's superclass (SSLSocket). Because we pass *args, **kwargs to the method, we just pass on any positional and keyword arguments our method got - we don't even need to know what they are, the method signatures will always match.
Because some methods (like accept()) will return a result, we store that result and return it at the end of our method, just before doing our post-call work:
logger.debug('After call.')
return result
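The same wrap-and-delegate pattern works on any class, not just SSLSocket. Here it is applied to a trivial made-up class so you can run it in isolation and watch the before/after log lines around the delegated call:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger('demo')

class Plain:
    def greet(self, name):
        return 'hello ' + name

class LoggingPlain(Plain):
    def greet(self, *args, **kwargs):
        logger.debug('Before call to superclass method')
        # Delegate to the superclass, forwarding all arguments unchanged.
        result = super(LoggingPlain, self).greet(*args, **kwargs)
        logger.debug('After call.')
        return result

assert LoggingPlain().greet('world') == 'hello world'
```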
Logging more details
If you want to include more information in your logging statements, you'll likely have to override the respective methods completely. So copy them over and modify them as required, and make sure you add any missing imports.
Here's an example for accept() that includes the IP address and local port of the client that's trying to connect:
def accept(self):
    """Accepts a new connection from a remote client, and returns
    a tuple containing that new connection wrapped with a server-side
    SSL channel, and the address of the remote client."""
    newsock, addr = socket.accept(self)
    logger.debug("Accepting connection from '%s'..." % (addr,))
    newsock = self.context.wrap_socket(newsock,
                                       do_handshake_on_connect=self.do_handshake_on_connect,
                                       suppress_ragged_eofs=self.suppress_ragged_eofs,
                                       server_side=True)
    logger.debug('Done accepting connection.')
    return newsock, addr
(Make sure to include from socket import socket in your imports at the top of your code - refer to the ssl module's imports to determine where you need to import missing names from if you get a NameError. A good text editor with PyFlakes configured is very helpful in pointing those missing imports out to you.)
This method will result in logging output like this:
[2014-10-24 22:01:40,299] Accepting connection from '('127.0.0.1', 64152)'...
[2014-10-24 22:01:40,300] Done accepting connection.
[2014-10-24 22:01:40,301] Accepting connection from '('127.0.0.1', 64153)'...
[2014-10-24 22:01:40,302] Done accepting connection.
[2014-10-24 22:01:40,306] Accepting connection from '('127.0.0.1', 64155)'...
[2014-10-24 22:01:40,307] Done accepting connection.
[2014-10-24 22:01:40,308] 127.0.0.1 - - "GET / HTTP/1.1" 200 -
Because it involves quite a few changes scattered all over the place, here's a gist containing all the changes to your example code.
I am trying to make a very simple XML RPC Server with Python that provides basic authentication + ability to obtain the connected user's IP. Let's take the example provided in http://docs.python.org/library/xmlrpclib.html :
import xmlrpclib
from SimpleXMLRPCServer import SimpleXMLRPCServer

def is_even(n):
    return n % 2 == 0

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(is_even, "is_even")
server.serve_forever()
So now, the first idea behind this is to make the user supply credentials and process them before allowing him to use the functions. I need very simple authentication, for example just a code. Right now what I'm doing is to force the user to supply this code in the function call and test it with an if-statement.
The second one is to be able to get the user IP when he calls a function or either store it after he connects to the server.
Moreover, I already have an Apache Server running and it might be simpler to integrate this into it.
What do you think?
This is a related question that I found helpful:
IP address of client in Python SimpleXMLRPCServer?
What worked for me was to grab the client_address in an overridden finish_request method of the server, stash it in the server itself, and then access this in an overridden server _dispatch routine. You might be able to access the server itself from within the method, too, but I was just trying to add the IP address as an automatic first argument to all my method calls. The reason I used a dict was because I'm also going to add a session token and perhaps other metadata as well.
from xmlrpc.server import DocXMLRPCServer
from socketserver import BaseServer

class NewXMLRPCServer(DocXMLRPCServer):
    def finish_request(self, request, client_address):
        self.client_address = client_address
        BaseServer.finish_request(self, request, client_address)

    def _dispatch(self, method, params):
        metadata = {'client_address': self.client_address[0]}
        newParams = (metadata,) + params
        return DocXMLRPCServer._dispatch(self, method, newParams)
Note this will BREAK introspection functions like system.listMethods() because that isn't expecting the extra argument. One idea would be to check the method name for "system." and just pass the regular params in that case.
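The parameter-building part of that special case can be sketched as a standalone helper. This is only an illustration of the idea (the function name is made up; in practice the check would live inside _dispatch): introspection methods keep their parameters untouched, while everything else gets the metadata dict prepended.

```python
def add_metadata(method, params, client_ip):
    """Sketch: prepend metadata except for system.* introspection calls."""
    if method.startswith('system.'):
        return params  # leave system.listMethods & friends untouched
    return ({'client_address': client_ip},) + params

# Introspection call: parameters pass through unchanged.
assert add_metadata('system.listMethods', (), '10.0.0.1') == ()

# Application call: metadata becomes an automatic first argument.
assert add_metadata('is_even', (4,), '10.0.0.1') == (
    {'client_address': '10.0.0.1'}, 4)
```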