Python's multiprocessing package supports a remote manager feature where one Python process can do IPC with another process; however, from the documentation's example it seems this must go through the OS's IP stack.
Is there a way to use the remote manager without going through the IP stack, assuming the two processes are local, thus making it quicker?
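If both processes are local, the standard library already supports this: on Unix, passing a filesystem path as the address to the multiprocessing connection machinery selects an AF_UNIX socket instead of TCP, so the traffic never touches the IP stack. A minimal sketch with the lower-level Listener/Client pair (the socket path and authkey are made up for illustration):

```python
import os
import tempfile
import threading
from multiprocessing.connection import Listener, Client

# A plain string path as the address selects an AF_UNIX socket on Unix,
# so the connection bypasses the IP stack entirely.
ADDRESS = os.path.join(tempfile.mkdtemp(), "demo.sock")
ready = threading.Event()

def serve():
    with Listener(ADDRESS, authkey=b"secret") as listener:
        ready.set()                         # socket is bound; client may connect
        with listener.accept() as conn:
            conn.send(conn.recv().upper())  # echo the message back, uppercased

server = threading.Thread(target=serve)
server.start()
ready.wait()

with Client(ADDRESS, authkey=b"secret") as conn:
    conn.send("hello")
    reply = conn.recv()

server.join()
print(reply)  # prints: HELLO
```

The same kind of string-path address is accepted by the manager classes too (e.g. `BaseManager(address="/tmp/myserver.sock", authkey=b"secret")`), so a remote manager can listen on a Unix domain socket instead of a TCP port.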
I have a CLI application which is executed via Wine on Linux, as it needs some closed-source DLLs which are only available for Windows. However, I also have another tool which is much easier to compile and run on Linux. That Linux application communicates via STDIN/STDOUT.
So I want to spawn a native Linux process from Wine, pass some data (ideally via stdin), wait for the process to complete, and read its result (ideally via stdout). This would be trivial if both processes ran in the same OS environment (pure Linux/POSIX or pure Windows), but it is more complicated in my case.
I can spawn a Linux process using popen but I can't get its stdout (always getting an empty string).
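For comparison, the same-OS version of this really is trivial; a sketch in Python for brevity, with `tr` standing in for the real Linux helper tool. The difficulty in the question is precisely that this pattern breaks down when the parent runs under Wine:

```python
import subprocess

# Same-OS case: feed data to a child process via stdin, block until it
# exits, and collect its stdout. "tr" is a stand-in for the real tool.
result = subprocess.run(
    ["tr", "a-z", "A-Z"],
    input="hello wine\n",
    capture_output=True,
    text=True,
    check=True,              # raise if the child exits non-zero
)
print(result.stdout)         # prints: HELLO WINE
```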
I understand that Wine itself won't/can't provide blocking creation of native processes (probably because this creates a lot of edge cases when trying to maintain Windows semantics), as detailed in Wine bug 18335 and the Stack Overflow answer "Execute Shell Commands from Program running in WINE".
However, the Wine process is still running under Linux, so I think it should be possible to somehow tap into the Linux kernel's functionality and do a blocking read.
Does anyone have some pointers on how to launch a Linux process and get its stdout from Wine?
Any other ideas on how to do IPC without complicated server installs?
Theoretically I could use the file system and wait for a result file to appear, or run a TCP/HTTP server for communication. Ideally the input would only be accessible to the launched application, without a server port which every application on the same host can access.
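For the file-system variant, the "only accessible to the launched application" requirement can be approximated with ordinary file permissions rather than a server port. A sketch, with made-up file names, restricting the exchanged data to the owning user:

```python
import os
import stat
import tempfile

# mkdtemp creates the directory with mode 0o700, so other users on the
# host cannot even list the exchanged files.
workdir = tempfile.mkdtemp(prefix="wine-ipc-")

request = os.path.join(workdir, "request.txt")
# O_EXCL guards against a pre-existing file; mode 0o600 means only the
# owning user can read the input data.
fd = os.open(request, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
with os.fdopen(fd, "w") as f:
    f.write("input data for the helper\n")

mode = stat.S_IMODE(os.stat(request).st_mode)
print(oct(mode))  # prints: 0o600
```

The launcher would then poll (or use inotify) for the helper's result file to appear in the same private directory.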
I read about "winelib" as a way to access native Unix functionality from "Windows" programs, but I'm not sure I fully grasp how to use it or whether it helps me (I can adapt the Wine program, but as I mentioned earlier, I need to access some closed-source DLLs which I cannot modify).
Edit: I just noticed the zugbruecke library, which allows communicating with a Windows DLL from (Unix) Python (via a custom Wine+TCP connection built on Python's multiprocessing). I cannot use it as-is (my DLL library uses a lot of pointers, so I have wrapped it via pybind11), and it would mean I have to rework my application a bit. However, it might result in an elegant solution where the Windows bits are more isolated and I can have more Linux fun. :-)
I'm trying to understand all the methods available to execute remote commands on Windows through the impacket scripts:
https://www.coresecurity.com/corelabs-research/open-source-tools/impacket
https://github.com/CoreSecurity/impacket
I understand the high-level explanation of psexec.py and smbexec.py: they create a service on the remote end and run commands through cmd.exe /c. But I can't understand how you can create a service on a remote Windows host through SMB. Wasn't SMB supposed to be mainly for file transfers and printer sharing?
Reading the source code, I see in the notes that they use DCERPC to create these services. Is that part of the SMB protocol? All the resources on DCERPC I've found were kind of confusing and not focused on its service-creating capabilities.
Looking at the source code of atexec.py, it says that it interacts with the Task Scheduler service of the Windows host, also through DCERPC. Can DCERPC be used to interact with all services running on the remote box?
Thanks!
DCERPC (https://en.wikipedia.org/wiki/DCE/RPC): the initial protocol, which was used as a template for MSRPC (https://en.wikipedia.org/wiki/Microsoft_RPC).
MSRPC is a way to execute functions on the remote end and to transfer data (the parameters to those functions). It is not a way to directly execute OS commands on the remote side.
SMB (https://en.wikipedia.org/wiki/Server_Message_Block) is the file-sharing protocol mainly used to access files on Windows file servers. In addition, it provides Named Pipes (https://msdn.microsoft.com/en-us/library/cc239733.aspx), a way to transfer data between a local process and a remote process.
One common way to use MSRPC is via Named Pipes over SMB, which has the advantage that the security layer provided by SMB is reused directly for MSRPC.
In fact, MSRPC is one of the most important, yet least-known, protocols in the Windows world.
Neither MSRPC nor SMB by itself has anything to do with remote execution of shell commands.
One common way to execute remote commands is:
1. Copy files (via SMB) to the remote side (a Windows service EXE).
2. Create registry entries on the remote side (so that the copied Windows service is installed and startable).
3. Start the Windows service.
The started Windows service can use any network protocol (e.g. MSRPC) to receive commands and execute them.
After the work is done, the Windows service can be uninstalled (remove the registry entries and delete the files).
In fact, this is what PSEXEC does.
"All the resources on DCERPC i've found were kind of confusing, and not focused on its service creating capabilities."
Yes, it's just a remote procedure call protocol. But it can be used to start a procedure on the remote side which can do just about anything, e.g. create a service.
"Looking at the sourcecode of atexec.py, it says that it interacts with the task scheduler service of the windows host, also through DCERPC. Can it be used to interact with all services running on the remote box?"
There are some MSRPC commands which handle Task Scheduler, and others which handle generic service start and stop commands.
A few final words:
SMB/CIFS and the protocols around it are really complex and hard to understand. Trying to understand how to deal with e.g. remote service control is a reasonable goal, but it can be a very long journey.
Perhaps this page (which uses Java to try to control a Windows service) may also help with understanding:
https://dev.c-ware.de/confluence/pages/viewpage.action?pageId=15007754
I am trying to establish communication between two different processes on Linux using POSIX IPC. I am using Python 3 with POSIX message queues, based on this library: http://semanchuk.com/philip/posix_ipc/ .
The problem is that I want to communicate between a server that is running as root and a client that is running with normal user permissions (a separate Python program).
If the client creates the message queue, then it works, presumably because the queue is created under the normal user and the process running as root has sufficient permissions anyway. However, I want the server to create the message queue, so that it can properly manage closing the queue when the server terminates, etc.
Is it possible for a root process to create an IPC message queue and allow processes running under a different user to write to the queue? If so how?
Or is there an alternative to POSIX IPC that could be used instead (e.g. System V)?
I'm hoping to avoid UNIX sockets, as I don't want the additional overhead they introduce.
-- Update on latest attempt --
I've read all the documentation I can find. The library's README says the author found it to work regardless of permissions, but that's not my experience.
The Linux Programming Interface (on which the library is based) says to use both mode and umask, but even if I use os.umask(000) followed by mode=666 within the message queue setup, I still get permission denied on the client.
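One thing worth double-checking here: in Python 3, permission bits must be written as octal literals with the 0o prefix. A plain mode=666 is the decimal number 666 (which is 0o1232), not rw-rw-rw-. A file-based sketch of the umask/mode interaction; the same rules apply to the mode argument of posix_ipc's MessageQueue:

```python
import os
import stat
import tempfile

old_umask = os.umask(0)          # clear the umask so it masks off no bits
try:
    path = os.path.join(tempfile.mkdtemp(), "mode-demo")
    # 0o666 (octal): read/write for owner, group, and others.
    # Plain 666 would be the decimal number 666 (0o1232) -- wrong bits.
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
    os.close(fd)
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))             # prints: 0o666
finally:
    os.umask(old_umask)          # restore the original umask
```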
You might want to try Unix domain sockets.
Access to filesystem-based ones can be managed with filesystem permissions. Domain sockets in the abstract namespace can be secured by checking the credentials (PID/UID/GID) of the connecting process with SO_PEERCRED; see also SCM_RIGHTS, which allows passing open file descriptors between processes.
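As a sketch of the credential check on Linux: SO_PEERCRED returns the pid, uid, and gid of the peer process. Here a socketpair stands in for an accepted connection, so the "peer" is the current process itself; a real server would call this on the socket returned by accept():

```python
import os
import socket
import struct

# A connected pair stands in for the server/client ends of a domain socket.
server_end, client_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# SO_PEERCRED yields a struct ucred: three native ints (pid, uid, gid).
creds = server_end.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                              struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", creds)
print(pid == os.getpid(), uid == os.getuid())  # prints: True True

server_end.close()
client_end.close()
```

The server can compare the returned uid against an allow-list before accepting any commands from the client.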
Domain sockets are very fast; they are used by Xorg, so kernel developers have optimized them well. They are also more portable than POSIX IPC (e.g. they are supported on Android). Stream mode can be a bit awkward for message-oriented IPC, so you should consider using datagram mode instead.
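A minimal datagram-mode sketch (the socket path is made up); each sendto is delivered as one discrete message, so no framing protocol is needed on top:

```python
import os
import socket
import tempfile

sock_dir = tempfile.mkdtemp()                # private 0o700 directory
server_addr = os.path.join(sock_dir, "server.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
server.bind(server_addr)                     # creates the socket file

client = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
client.sendto(b"one message", server_addr)

# Datagram mode preserves message boundaries: one recvfrom per sendto.
data, _ = server.recvfrom(1024)
print(data)  # prints: b'one message'

client.close()
server.close()
```

Because the socket file lives in the filesystem, the root-server/normal-client permission problem from the question reduces to ordinary chmod/chown on the socket path and its directory.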
I have an interactive console application, and I need to work with it using Python (send commands and receive output). The application is started by another program; I can't start it from my Python script.
Is it possible to connect to an already running console application and get access to its stdin/stdout?
Ideally the solution should work both on Windows and Unix, but even a Windows-only version would be helpful. Currently I am using the solution found here:
http://code.activestate.com/recipes/440554/
but it doesn't allow connecting to an existing process.
Thanks for any input,
Have you considered using sockets? They are straightforward for simple streaming use, and they are also platform-independent.
The most critical point is thread safety: having to pass I/O streams between threads/processes tends to be hectic.
If, on the other hand, you use a socket, a lot can be communicated without adding too much complexity to how the processes work (such as coding an error-prone RPC layer).
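As a sketch of that suggestion, assuming the console application can be wrapped or modified to serve on a loopback socket instead of relying on its stdin/stdout (the stub function below stands in for it):

```python
import socket
import threading

def app_stub(server):
    # Stands in for the long-running console application: it reads one
    # command from the connection and writes a reply, instead of using
    # its stdin/stdout.
    conn, _ = server.accept()
    with conn:
        command = conn.recv(1024)
        conn.sendall(b"result of " + command)

server = socket.create_server(("127.0.0.1", 0))   # port 0: pick a free port
port = server.getsockname()[1]
worker = threading.Thread(target=app_stub, args=(server,))
worker.start()

# Any later Python script can now attach to the running application.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"status")
    reply = client.recv(1024)

worker.join()
server.close()
print(reply)  # prints: b'result of status'
```

The same client code works identically on Windows and Unix, which is the portability the question asks for.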
See the Python socket module documentation and the examples it includes.
I've been searching for a library that can handle multiple SSH connections at once. Ruby has a Net::SSH::Multi module that allows multiple simultaneous SSH connections, but I would rather code this in Python. Is there a similar SSH module for Python?
Paramiko is Python's SSH library.
I've never tried concurrent connections with Paramiko, but this answer says it's possible, and this little script seems to make multiple connections in different threads.
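Paramiko doesn't bundle a multi-host helper like Net::SSH::Multi, but the usual pattern is one SSHClient per thread. The sketch below shows the fan-out with ThreadPoolExecutor; run_command is a stub standing in for the real paramiko calls (shown in comments), and the host names are made up, so the shape can be demonstrated without live hosts:

```python
from concurrent.futures import ThreadPoolExecutor

def run_command(host, command):
    # With paramiko, this body would be roughly:
    #   client = paramiko.SSHClient()
    #   client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    #   client.connect(host, username="...", key_filename="...")
    #   stdin, stdout, stderr = client.exec_command(command)
    #   return stdout.read()
    # Each thread owns its SSHClient, so no connection state is shared.
    return f"{host}: ran {command!r}"        # stub result for illustration

hosts = ["web1", "web2", "db1"]              # hypothetical host names
with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
    results = list(pool.map(lambda h: run_command(h, "uptime"), hosts))

for line in results:
    print(line)
```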
The Paramiko mailing list also confirms it's possible to make multiple connections by forking -- there was a security issue regarding that, and it was patched in early 2008.