read stdout of a Linux process spawned by a wine application - python

I have a CLI application which is executed via Wine on Linux because it needs some closed-source DLLs that are only available for Windows. However, I also have another tool which is much easier to compile/run on Linux. That Linux application communicates via STDIN/STDOUT.
So I want to spawn a native Linux process from Wine, pass some data (ideally via stdin), wait for the process to complete, and read its result (ideally via stdout). This would be trivial if both processes ran in the same OS environment (pure Linux/POSIX or pure Windows) but is more complicated in my case.
I can spawn a Linux process using popen, but I can't get its stdout (I always get an empty string).
I understand that Wine itself won't/can't provide blocking process creation (probably because this would create a lot of edge cases when trying to maintain Windows semantics), as detailed in Wine bug 18335 and the Stack Overflow answer "Execute Shell Commands from Program running in WINE".
However, the Wine process is still running under Linux, so I think it should be possible to somehow tap into Linux (i.e. kernel) functionality and do a blocking read.
Does anyone have some pointers on how to launch a Linux process and get its stdout from Wine?
Any other ideas on how to do IPC without complicated server installs?
Theoretically I could use the file system and wait for a result file to appear, or run a TCP/HTTP server for communication. Ideally the input would only be accessible to the launched application, without a server port which every application on the same host can access.
I read about "winelib" as a way to access native Unix functionality from "Windows" programs, but I'm not sure I fully grasp how to use it or whether it helps me (I can adapt the Wine program, but as I mentioned earlier I need to access some closed-source DLLs which I cannot modify).
Edit: I just noticed the zugbruecke library, which allows communicating with a Windows DLL from (Unix) Python (via a custom wine+TCP connection built on Python's multiprocessing). I cannot use it as-is (my DLL library uses a lot of pointers, so I have wrapped it via pybind11) and it would mean I have to rework my application a bit. However, it might result in an elegant solution where the Windows bits are more isolated and I can have more Linux fun. :-)

Related

Interprocess communication in multi-user windows environment

I have multiple GUI utilities written in Python/Qt, and I'd like to be able to do some RPC between them to allow the tools to work together. In the past, for personal projects, I've used rpyc to do Python RPC even across machines/OSes (Windows to Linux), but that doesn't seem to be a great solution in a multi-user environment.
What's the best way to do this in a windows multi-user environment where multiple users can be simultaneously running the same set of tools? It seems rpyc is rather clunky here, as each user's process would have to find a free port to start rpyc on, and then need some way to communicate to the "clients" in the same session which port belongs to which user/session.
I just want a simple, no hassle way of "if process 1 is running in the same session, process 2 can interact with it".
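One low-tech way around the "which port belongs to which user" problem is to derive the port deterministically from the session identity, so every process of the same user computes the same port without any side channel. This is a sketch, not rpyc-specific; the base port and range are arbitrary choices from the dynamic port range, and hash collisions between users are possible but rare:

```python
import getpass
import hashlib

def session_port(base=49152, span=16384):
    """Map the current username to a stable port in the dynamic range.

    Every process run by the same user computes the same value, so the
    first one to start binds it as the server and the rest connect as
    clients. For multiple sessions per user, mix in a session id too.
    """
    user = getpass.getuser().encode("utf-8")
    digest = hashlib.sha256(user).digest()
    return base + int.from_bytes(digest[:2], "big") % span
```

The first tool that manages to bind the port becomes the rpyc server for that user; later tools treat a failed bind as "server already running" and connect instead.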

How can I monitor a python script and restart it in the event of a crash? (Windows)

I have a simple python script to send data from a Windows 7 box to a remote computer via SFTP. The script is set to continuously send a single file every 5 minutes. This all works fine, but I'm worried about the off chance that the process stops or fails and the customer doesn't notice the data files have stopped coming in. I've found several ways to monitor python processes in an Ubuntu/Unix environment but nothing for Windows.
If there are no other mitigating factors in your design or requirements, my suggestion would be to simplify the script so that it doesn't do the polling; it simply sends the file when invoked, and use Windows Scheduler to invoke the script on whatever schedule you need. By relying on a core Windows service, you can factor that complexity out of your script.
You can check out RestartMe; the following link shows how you can use it:
http://www.howtogeek.com/130665/quickly-and-automatically-restart-a-windows-program-when-it-crashes/

How to launch a python process in Windows SYSTEM account

I am writing a test application in python and to test some particular scenario, I need to launch my python child process in windows SYSTEM account.
I can do this by creating an exe from my python script and then using that while creating a Windows service. But this option is not good for me, because in the future, if I change anything in my python script, I would have to regenerate the exe every time.
If anybody has a better idea about how to do this, please let me know.
Bishnu
Create a service that runs permanently.
Arrange for the service to have an IPC communications channel.
From your desktop python code, send messages to the service down that IPC channel. These messages specify the action to be taken by the service.
The service receives the message and performs the action. That is, executes the python code that the sender requests.
This allows you to decouple the service from the python code that it executes and so allows you to avoid repeatedly re-installing a service.
If you don't want to run in a service then you can use CreateProcessAsUser or similar APIs.
You could also use Windows Task Scheduler, it can run a script under SYSTEM account and its interface is easy (if you do not test too often :-) )
To run a file with SYSTEM account privileges, you can use PsExec. Download it here:
Sysinternals
Then you may use:
os.system
or
subprocess.call
and execute:
PSEXEC -i -s -d CMD "path\to\yourfile"
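That call could be wrapped from Python like this. A sketch only: it assumes psexec is on the PATH of the Windows host, and it adds cmd's `/c` switch (so cmd runs the file and exits), which the one-liner above leaves implicit:

```python
def psexec_cmd(path):
    """Build the PsExec invocation above as an argument list.

    -i: interactive, -s: run under the SYSTEM account,
    -d: don't wait for the launched process to terminate.
    Pass the result to subprocess.call() on the Windows host.
    """
    return ["psexec", "-i", "-s", "-d", "cmd", "/c", path]
```

Using an argument list with `subprocess.call` avoids shell-quoting problems with paths that contain spaces.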
Just came across this one - I know, a bit late, but anyway. I encountered a similar situation and solved it with NSSM (Non-Sucking Service Manager). Basically, this program enables you to start any executable as a service, which I did with my Python executable, giving it the Python script I was testing as a parameter.
So I could run the service and edit the script however I wanted. I just had to restart the service when I made any changes to the script.
One point for production environments: try not to rely on third-party software like NSSM. You could also achieve this with the standard SC command (see this answer) or PowerShell (see this MS doc).

Python: connect to already opened console application

I have an interactive console application and I need to work with it using Python (send commands and receive output). The application is started by another one; I can't start it from my Python script.
Is it possible to connect to already running console application and get access to its stdin/stdout?
Ideally the solution should work both on Windows and Unix, but just a Windows version would also be helpful. Currently I am using the solution found here:
http://code.activestate.com/recipes/440554/
but it doesn't allow connecting to an existing process.
Thanks for any input,
Have you considered using sockets? They are straightforward for simple streaming, and they are also platform-independent.
The most critical point is thread safety, since having to pass IO streams between threads/processes tends to be error-prone.
If, on the other hand, you use a socket, a lot can be communicated without adding too much complexity to how the processes work (coding an error-prone RPC, for instance).
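A minimal sketch of the socket route: the console application (or a small wrapper around it) listens on a localhost port, and the Python controller connects to exchange messages. The one-request/one-reply framing here is illustrative; a real protocol would need message delimiting (newlines, length prefixes, etc.):

```python
import socket
import threading

def serve_once():
    """Accept one connection on an OS-chosen port and ack one message."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def run():
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(4096)          # one request
            conn.sendall(b"ack: " + data)   # one reply
        srv.close()

    threading.Thread(target=run, daemon=True).start()
    return port

def send(port, payload):
    """Connect to the peer, send a payload, and return the reply."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(payload)
        return c.recv(4096)
```

Binding to 127.0.0.1 keeps the channel off the network; each side owns its socket, which sidesteps the problem of passing IO streams between processes.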
Try the Documentation or this example.

Starting and stopping server script

I've been building a performance test suite to exercise a server. Right now I run this by hand but I want to automate it. On the target server I run a python script that logs server metrics and halts when I hit enter. On the tester machine I run a bash script that iterates over JMeter tests, setting timestamps and naming logs and executing the tests.
I want to tie these together so that the bash script drives the whole process, but I am not sure how best to do this. I can start my python script via ssh, but how do I halt it when a test is done? If I can do it over ssh then I don't need to mess with the existing configuration, and that is a big win. The python script is quite simple and I don't mind rewriting it if that helps.
The easiest solution is probably to make the Python script respond to signals. Of course, you can just SIGKILL the script if it doesn't require any cleanup, but having the script actually handle a shutdown request seems cleaner. SIGHUP might be a popular choice. Docs here.
You can send a signal with the kill command so there is no problem sending the signal through ssh, provided you know the pid of the script. The usual solution to this problem is to put the pid in a file in /var/run when you start the script up. (If you've got a Debian/Ubuntu system, you'll probably find that you have the start-stop-daemon utility, which will do a lot of the grunt work here.)
Another approach, which is a bit more code-intensive, is to create a fifo (named pipe) in some known location, and use it basically like you are currently using stdin: the server waits for input from the pipe, and when it gets something it recognizes as a command, it executes the command ("quit", for example). That might be overkill for your purpose, but it has the advantage of being a more articulated communications channel than a single hammer-hit.
