How to communicate with an external Python process? (not a subprocess) - python

The other Python process was launched externally; only its process identifier (PID) is known. It is not a subprocess started from my Python process. The paths to both interpreters could be the same. How do I communicate with that process, and how can I easily send Python data types between the two processes?
Best regards,
Czarek

If you can accept communicating between the processes over a TCP connection, you could use ZeroMQ: http://zeromq.org/
See these threads for examples:
interprocess communication in python
how to communicate two separate python processes?

Related

Java - run a python code from shared memory

When I use shared memory, I can ask the OS for memory that is allowed to execute.
Is it possible to write executable code into that memory from a Python process and then execute it from a Java process?
I have a Java process that receives some data from a remote server, and another process in Python that knows how to process that data. After the Java process receives the data, it needs the processed result back. So instead of sending the data to the Python process and then getting it back, I thought maybe the Python process could share its processing code and the Java process would just use it. That way there's no need to lock the memory while writing.

Python posix IPC - communication between process running as a different user

I am trying to establish communication between two different processes on Linux using POSIX IPC. I am using Python 3 with POSIX message queues, based on this library: http://semanchuk.com/philip/posix_ipc/ .
The problem is that I want to communicate between a server that is running as root and a client that is running with normal user permissions (separate python program).
If the client creates the message queue, it works, presumably because the queue is created under a normal user and the process running as root has higher permissions. However, I want the server to create the message queue, so that it can properly manage closing the queue when the server terminates, etc.
Is it possible for a root process to create an IPC message queue and allow processes running under a different user to write to the queue? If so how?
Or is there any alternative to POSIX IPC that could be used instead (eg. Sys V)?
I'm hoping to avoid UNIX sockets, as I don't want the additional overhead they involve.
-- Update on latest attempt --
I've read all the documentation I can find. The library's README says the author found it to work regardless of permissions, but that's not my experience.
The Linux Programming Interface (on which the library relies) says to set both the mode and the umask, but even if I call os.umask(000) followed by mode=666 within the message queue setup, I still get permission denied on the client.
You might want to try Unix domain sockets.
Access to filesystem-based sockets can be managed with filesystem permissions. Sockets in the abstract namespace can be secured by checking the credentials (PID/UID) of the connecting process; see also SCM_RIGHTS.
Domain sockets are very fast; they are used by Xorg, so kernel developers have optimized them well. They are also more portable than POSIX IPC (they are supported on Android, for example). Stream mode can be a bit awkward for message-oriented IPC, so consider using datagram mode instead.
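The filesystem-permission approach maps directly onto the root-server/user-client setup in the question: the root process binds the socket, then chmods the socket file so other users may write to it. A minimal datagram-mode sketch, with a hypothetical socket path:

```python
import os
import socket

SOCK_PATH = "/tmp/ipc_demo.sock"   # hypothetical path

def make_server():
    """Root-owned side: bind a datagram socket that anyone may write to."""
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)       # stale socket files block bind()
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    srv.bind(SOCK_PATH)
    # Filesystem permissions gate access here; 0o666 plays the role
    # that mode=666 was meant to play on the POSIX message queue.
    os.chmod(SOCK_PATH, 0o666)
    return srv

def send_message(data):
    """Unprivileged client: one connectionless, message-oriented send."""
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        cli.sendto(data, SOCK_PATH)
    finally:
        cli.close()
```

Datagram mode preserves message boundaries, so the server's recv() returns exactly one client message at a time, much like mq_receive.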

Multiprocessing with in a linux daemon written in python

I have a Linux daemon (based on the python-daemon module) that needs to spawn two processes (think of a producer and a consumer) via the multiprocessing module to handle some concurrent I/O (the producer reads from an input stream and the consumer uploads the data using Python requests).
As per the Python docs (https://docs.python.org/2/library/multiprocessing.html), daemonic processes are not allowed to start child processes. How can I handle this? Are there any documents or examples for this approach?
Please advise.
Context:
I have tried the threading module, but due to the GIL the consumer rarely gets a chance to execute. I also looked into tornado and gevent, but those would require rewriting a lot of the code.
I think there is some confusion here. The docs only say that a process created from Python and marked as a daemon (daemon=True) cannot create child processes. Your python-daemon process is a normal Linux daemon.
A Linux daemon is a process running in the background (the python-daemon library creates such a process); these can have subprocesses.
Only a daemonic process created by the multiprocessing library cannot create subprocesses.
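The distinction is easy to see in isolation: a multiprocessing child started with daemon=True is refused a grandchild by the multiprocessing module itself, while an OS-level daemon has no such restriction. A minimal sketch (the queue messages are just illustrative):

```python
import multiprocessing as mp

def noop():
    pass

def daemonic_worker(q):
    # Runs inside a daemon=True multiprocessing child: starting a
    # grandchild is refused by multiprocessing, not by the OS.
    try:
        mp.Process(target=noop).start()
        q.put("started")
    except AssertionError as exc:
        q.put("refused: %s" % exc)

if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=daemonic_worker, args=(q,), daemon=True)
    p.start()
    print(q.get())   # reports that the grandchild was refused
    p.join()
```

A process daemonized with python-daemon never sets this flag, so it can use multiprocessing freely; the practical fix is usually just to leave daemon=True off the producer/consumer processes.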

remote python manager without going through IP stack

The Python multiprocessing package supports a remote manager feature where one Python process can do IPC with another process; however, from the example in the docs it seems this must go through the OS's IP stack.
Is there a way of using the remote manager without going through the IP stack, assuming the two processes are local, thus making it quicker?

Can I send SIGINT to a Python subprocess on Windows?

I've got a Python script managing a gdb process on Windows, and I need to be able to send a SIGINT to the spawned process in order to halt the target process (managed by gdb).
It appears that only SIGTERM is available in Win32, but clearly if I run gdb from the console and press Ctrl+C, it receives a SIGINT. Is there a way I can fake this so the functionality is available on all platforms?
(I am using the subprocess module, on Python 2.5/2.6.)
Windows doesn't have the Unix signals IPC mechanism.
I would look at sending a Ctrl-C console control event to the gdb process.
