I'm trying to figure out how to send OSC messages from Python to Max/MSP. I'm using osc4py3 to do so, and I have sample code from the documentation that should in theory work, written out here:
from osc4py3.as_eventloop import *
from osc4py3 import oscbuildparse
# Start the system.
osc_startup()
# Make client channels to send packets.
osc_udp_client("127.0. 0.1", 5000, "tester")
msg = oscbuildparse.OSCMessage("/test/me", ",sif", ["text", 672, 8.871])
osc_send(msg, "tester")
The receiver in Max is just a udpreceive object listening on port 5000. I managed to get Processing to send OSC messages to Max, and it worked pretty simply using the oscP5 library, but I can't seem to have the same luck in Python.
What is it I'm missing? Moreover, I don't entirely understand the structure for building OSC messages in osc4py3, even after doing my best with the documentation; if someone would be willing to explain what exactly is going on (namely, the arguments) in something like
msg = oscbuildparse.OSCMessage("/test/me", ",sif", ["text", 672, 8.871])
then I would be forever grateful.
I'm entirely open to using another OSC library, but all I ask is a run-through on how to send a message (I've attempted using pyOSC but that too proved too confusing for me).
Maybe you have already solved it, but there are two problems in the posted code. One is the IP address format (there is a space before the second "0"). The other is that you need to call osc_process() at the end. So the following should work:
from osc4py3.as_eventloop import *
from osc4py3 import oscbuildparse
# Start the system.
osc_startup()
# Make client channels to send packets.
osc_udp_client("127.0.0.1", 5000, "tester")
msg = oscbuildparse.OSCMessage("/test/me", ",sif", ["text", 672, 8.871])
osc_send(msg, "tester")
osc_process()
Hope it works out.
There are different possible scheduling policies in osc4py3. The documentation uses the event-loop model with as_eventloop, where user code must periodically call osc_process() to have osc4py3 deal with internal messages queues and communications.
The client examples for sending OSC messages wrap the osc_process() call in a loop (generally it goes inside an event processing loop).
You can omit the osc_process() call entirely by importing the names with the full multithreading scheduling policy at the beginning of your code:
from osc4py3.as_allthreads import *
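For example, under the as_allthreads policy the sender from the question could be written as in the sketch below; the comments spell out what the OSCMessage arguments mean, and the short sleep before osc_terminate() is only a precaution to give the background thread time to flush the packet.

from osc4py3.as_allthreads import *
from osc4py3 import oscbuildparse
import time

osc_startup()
osc_udp_client("127.0.0.1", 5000, "tester")

# OSCMessage(addrpattern, typetags, arguments):
#   "/test/me" is the OSC address the receiver matches on
#              (e.g. a [route /test/me] after [udpreceive 5000] in Max),
#   ",sif"     is the type tag string: a leading comma, then one tag per argument
#              (s = string, i = int32, f = float32),
#   the list   holds the arguments themselves, in the same order as the tags.
msg = oscbuildparse.OSCMessage("/test/me", ",sif", ["text", 672, 8.871])
osc_send(msg, "tester")

time.sleep(0.1)    # precaution: let the background sending thread flush the packet
osc_terminate()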
The third scheduling policy is as_comthreads, where communications are processed in background threads, but received messages (on the server side) are processed synchronously at the osc_process() call.
(by the author of osc4py3)
I have a first sender script in Python 3.10 which needs to send some data:
def post_updates(*args):
    sender.send_message("optional_key", args)
Then a second receiver script in Python 3.7 which needs to receive this data:
while True:
    args = receiver.get_message("optional_key", blocking=True)
    print("args received:", args)
Constraints:
Each script should not depend on the presence of the other to run.
The sender should try to send regardless of whether the receiver is running.
The receiver should try to receive regardless of whether the sender is running.
The message can consist of basic python objects (dict, list) and should be serialized automatically.
I need to send over 100 messages per second (minimizing latency if possible).
Local PC only (Windows) and no need for security.
Are there 1-liner solutions to this simple problem? Everything I look up seems overly complicated or requires a TCP server to be started beforehand. I don't mind installing popular modules.
UDP and JSON look perfect for what you're asking for, as long as
you don't need there to be more than one receiver
you don't need very large messages
you just need to send combinations of dicts, lists, strings, and numbers, not Python objects of arbitrary classes
you're not being overly literal about finding a "1-liner": it's a very small amount of code to write, and you're free to define your own helper functions.
Python's standard library has all you need for this. Encoding and decoding from JSON is as simple as json.dumps() and json.loads(). For sending and receiving, I suggest following the example on the Python wiki. You need to create the socket first with
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
regardless of whether you're making the sender or the receiver. The receiver will then need to bind to the local port to listen on it:
sock.bind(('127.0.0.1', PORT))
And then the sender sends with sock.sendto() and the receiver receives with sock.recvfrom().
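Put together, a minimal sketch of the two scripts might look like this (the port number is arbitrary and everything is sent to localhost, matching the constraints above):

# sender.py
import json, socket

PORT = 50007    # arbitrary local port, must match the receiver
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def post_updates(*args):
    # serialize to JSON and fire-and-forget: works whether or not the receiver is running
    sock.sendto(json.dumps(args).encode("utf-8"), ("127.0.0.1", PORT))

post_updates({"temperature": 21.5}, [1, 2, 3])

# receiver.py
import json, socket

PORT = 50007
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", PORT))    # only the receiver binds

while True:
    data, addr = sock.recvfrom(65535)    # blocks until a datagram arrives
    args = json.loads(data.decode("utf-8"))
    print("args received:", args)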
The good old pipe might do the job, but you need to assess how big the buffer needs to be (given the asynchronous nature of your sender/receiver) and change the default pipe buffer size accordingly.
I'm trying to set up TCP communication where the server sends a message every x seconds through a socket, and should stop sending those messages under a certain condition: the client hasn't sent any message for 5 seconds.
To be more detailed, the client also sends constant messages on the same socket as above, all of which are ignored by the server, and it can stop sending them at any unknown time. For simplicity, the messages serve as keep-alive messages to inform the server that the communication is still relevant.
The problem is that if I want to send repeated messages from the server, I cannot allow it to "get busy" and start receiving messages instead; thus I cannot detect when a new message arrives from the other side and act accordingly.
The problem is independent of the programming language, but to be more specific, I'm using Python and cannot access the client's code.
Is there any way to receive and send messages on a single socket simultaneously?
Thanks!
Option 1
Use two threads, one will write to the socket and the second will read from it.
This works since sockets are full-duplex (allow bi-directional simultaneous access).
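A rough sketch of option 1 for the scenario in the question (the 5-second silence limit comes from the question; the port, the send interval, and the message content are illustrative):

import socket, threading, time

KEEPALIVE_TIMEOUT = 5.0            # stop sending after 5 s of client silence
SEND_INTERVAL = 1.0                # "every x seconds"
last_seen = time.monotonic()

def reader(conn):
    # receiving thread: every message from the client just refreshes the timestamp
    global last_seen
    while True:
        data = conn.recv(1024)
        if not data:               # client closed the connection
            break
        last_seen = time.monotonic()

def writer(conn):
    # sending thread: periodic messages until the client goes silent
    while time.monotonic() - last_seen < KEEPALIVE_TIMEOUT:
        conn.sendall(b"server message\n")
        time.sleep(SEND_INTERVAL)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 9000))
server.listen(1)
conn, addr = server.accept()

threading.Thread(target=reader, args=(conn,), daemon=True).start()
writer(conn)
conn.close()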
Option 2
Use a single thread that manages all keep-alives using select.epoll. This way one thread can handle multiple clients. Remember, though, that if this isn't the only thread that uses the sockets, you might need to handle thread safety on your own.
As discussed in another answer, threads are one common approach. The other approach is to use an event loop and non-blocking I/O. Recent versions of Python (3.4 and later) include a package called asyncio that supports this.
You can call the create_connection method on an event_loop to create an asyncio connection. See this example for a simple server that reads and writes over TCP.
In many cases an event loop can permit higher performance than threads, but it has the disadvantage of requiring most or all of your code to be aware of the event model.
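As an illustration (not the linked example), here is a sketch of the keep-alive scenario from the question using asyncio streams; only the 5-second timeout comes from the question, while the port and message contents are made up:

import asyncio

TIMEOUT = 5.0     # stop after 5 s of client silence (from the question)
PERIOD = 1.0      # "every x seconds"; illustrative value

async def handle_client(reader, writer):
    loop = asyncio.get_running_loop()
    last_seen = loop.time()

    async def read_keepalives():
        # every incoming message (its content is ignored) just refreshes the timestamp
        nonlocal last_seen
        while data := await reader.read(1024):
            last_seen = loop.time()

    keepalive_task = asyncio.create_task(read_keepalives())
    while loop.time() - last_seen < TIMEOUT:
        writer.write(b"server message\n")     # periodic message to the client
        await writer.drain()
        await asyncio.sleep(PERIOD)
    keepalive_task.cancel()                   # client went silent: stop sending
    writer.close()

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

asyncio.run(main())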
I am currently implementing a socket server using Python's socketserver module. I am struggling to understand how a client 'signals' the server to perform certain tasks.
As you can tell, I am a beginner in this area. I have looked at many tutorials; however, these only tell you how to perform singular tasks on the server, e.g. modify a message from the client and send it back.
Ideally, what I want to know is whether there is a way for the client to communicate with the server to perform different kinds of tasks.
Is there a standard approach to this issue?
Am I even using the correct type of server?
I was thinking of implementing some form of message passing from the client that tells the server which task it should perform.
"I was thinking of implementing some form of message passing from the client that tells the server which task it should perform."
That's exactly what you need: an application protocol.
A socket (assuming a streaming Internet socket, or TCP) is a stream of bytes, nothing more. To give those bytes any meaning, you need a protocol that determines which byte (or sequence thereof) means what.
The main problem to tackle is that the stream that such a socket provides has no notion of "messages". So when one party sends "HELLO", and "BYE" after that, it all gets concatenated into the stream: "HELLOBYE". Or worse even, your server first receives "HELL", followed by "OBYE".
So you need message framing, or rules how to interpret where messages start and end.
You generally don't want to invent your own application protocol. Usually HTTP or other existing protocols are leveraged to pass messages around.
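Just to illustrate what such framing rules look like in practice, here is a minimal length-prefix sketch (each message is preceded by a 4-byte big-endian length header; the helper names are made up):

import socket, struct

def send_message(sock: socket.socket, payload: bytes) -> None:
    # prefix each message with its length so the receiver knows where it ends
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exactly(sock: socket.socket, n: int) -> bytes:
    # keep reading until we have exactly n bytes (recv may return partial data)
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_message(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)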
I'm working on a really basic "image streaming" server for a school project, and I've done most of the work, but I'm still stuck on the separation between data and control sockets:
My structure is: a TCPServer (my server, used as the control socket) contains a dataSocket (used only to send images and initialized within my TCPServer object when I receive a certain query).
When I'm sending data (images) through my dataSocket, I still need to see whether the client sent a PAUSE or STOP request, but if I use Python's self.request.recv(1024), the server waits for a response instead of continuing to send data (which is quite logical).
What should I do to prevent this behavior? Should I launch my recv(1024) on a separate thread and run it on each loop iteration (checking whether I got any relevant data between two iterations)?
Twisted should do the trick! It handles asynchronous sockets in Python
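A sketch of how that could look with Twisted (the PAUSE/STOP strings come from the question; the port, frame rate and payload are illustrative):

from twisted.internet import protocol, reactor, task

class StreamProtocol(protocol.Protocol):
    def connectionMade(self):
        # send frames on a timer instead of blocking anywhere on recv()
        self.loop = task.LoopingCall(self.sendFrame)
        self.loop.start(1.0 / 25)              # ~25 frames per second, illustrative

    def sendFrame(self):
        self.transport.write(b"...image bytes here...")

    def dataReceived(self, data):
        # control messages from the client arrive here asynchronously, between frames
        if b"PAUSE" in data and self.loop.running:
            self.loop.stop()
        elif b"STOP" in data:
            self.transport.loseConnection()

    def connectionLost(self, reason):
        if self.loop.running:
            self.loop.stop()

factory = protocol.Factory()
factory.protocol = StreamProtocol
reactor.listenTCP(8000, factory)
reactor.run()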
I'm currently writing a project in Python which has a client and a server part. I have troubles with the network communication, so I need to explain some things...
The client mainly does operations the server tells it to do and sends the results of those operations back to the server. I need a way to communicate bidirectionally on a TCP socket.
Current Situation
I currently use a LineReceiver from the Twisted framework on the server side, and a plain Python socket (with ssl) on the client side (because I was unable to correctly implement a Twisted PushProducer). There is a Queue on the client side which gets filled with data that should be sent to the server; a subprocess continuously pulls data from the queue and sends it to the server (see the code below).
This scenario works well, but only for the client pushing its results to the manager. There is no way for the server to send data to the client. More accurately, there is no way for the client to receive data the server has sent.
The Problem
I need a way to send commands from the server to the client.
I thought about listening for incoming data in the client loop I use to send data from the queue:
def run(self):
    while True:
        data = self.queue.get()
        logger.debug("Sending: %s", repr(data))
        data = cPickle.dumps(data)
        self.socket.write(data + "\r\n")
        # Here would be a good place to listen on the socket
But there are several problems with this solution:
the SSLSocket.read() method is a blocking one
if there is no data in the queue, the client will never receive any data
Yes, I could use Queue.get_nowait() instead of Queue.get(), but all in all it's not a good solution, I think.
The Question
Is there a good way to achieve these requirements with Twisted? I really do not have much skill with Twisted to find my way around in there. I don't even know if using the LineReceiver is a good idea for this kind of problem, because it cannot send any data if it does not receive data from the client. There is only a lineReceived event.
Is Twisted (or, more generally, any event-driven framework) able to solve this problem? I don't even have real events on the communication side. If the server decides to send data, it should be able to send it; there should not be a need to wait for any event on the communication side, if possible.
"I don't even know if using the LineReceiver is a good idea for this kind of problem, because it cannot send any data, if it does not receive data from the client. There is only a lineReceived event."
You can send data using protocol.transport.write from anywhere, not just in lineReceived.
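For example, here is a sketch of a LineReceiver-based server that pushes a command to the client every few seconds, regardless of what the client sends (sendLine() is just LineReceiver's convenience wrapper around transport.write; the port and the command string are made up):

from twisted.internet import protocol, reactor, task
from twisted.protocols.basic import LineReceiver

class CommandProtocol(LineReceiver):
    def connectionMade(self):
        # push commands on a timer; nothing here waits for lineReceived
        self.loop = task.LoopingCall(self.sendCommand)
        self.loop.start(5.0)

    def sendCommand(self):
        self.sendLine(b"do_something")          # sent whenever the server decides to

    def lineReceived(self, line):
        print("result from client:", line)      # results from the client still arrive here

    def connectionLost(self, reason):
        if self.loop.running:
            self.loop.stop()

factory = protocol.Factory()
factory.protocol = CommandProtocol
reactor.listenTCP(4321, factory)
reactor.run()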
"I need a way to send commands from the server to the client."
Don't do this. It inverts the usual meaning of "client" and "server". Clients take the active role and send stuff to, or request stuff from, the server.
"Is Twisted (or, more generally, any event-driven framework) able to solve this problem?"
It shouldn't. You're inverting the role of client and server.
"If the server decides to send data, it should be able to send it;"
False, actually.
The server is constrained to wait for clients to request data. That's generally the accepted meaning of "client" and "server".
"One to send commands to the client and one to transmit the results to the server. Does this solution sound more like a standard client-server communication for you?"
No.
If a client sent messages to a server and received responses from the server, it would meet more usual definitions.
Sometimes, this sort of thing is described as having "Agents" which are -- each -- a kind of server and a "Controller" which is a single client of all these servers.
The controller dispatches work to the agents. The agents are servers -- they listen on a port, accept work from the controller, and do work. Each Agent must do two concurrent things (usually via the select API):
Monitor a well-known socket on which it will receive work from the one-and-only client.
Do the work (in the background).
This is what Client-Server usually means.
If each Agent is a Server, you'll find lots of libraries will support this. This is the way everyone does it.