Let me start with what I'm really trying to do. We want a platform-independent startup script for invoking a JVM with some system properties and a dynamically generated classpath. We picked Jython in particular because the startup script then only needs to depend on the standalone jython.jar. We decided we could write a Jython script that uses subprocess.Popen to launch our application's JVM and then terminates.
One more thing. Our application uses a lot of legacy debug code that prints to standard out, so the startup script has typically redirected stdout/stderr to a log file. I attempted to reproduce that in our Jython script like this:
subprocess.Popen(args, stdout=logFile, stderr=logFile)
After this line the launcher script and the JVM hosting Jython terminate. The problem is that nothing shows up in logFile. If I instead do this:
subprocess.Popen(args, stdout=logFile, stderr=logFile).wait()
then we get logs. So does the parent process need to keep running in parallel with the application process launched via subprocess? I want to avoid having two JVMs running.
Can you invoke subprocess in such a way that the stdout file will be written even if the parent process terminates? Is there a better way to launch the application JVM from Jython? Is Jython a bad solution anyway?
We want a platform-independent startup script for invoking a JVM with some system properties and a dynamically generated classpath.
You could use a platform-independent script to generate a platform-specific startup script, either at installation time or before each invocation. In the latter case you additionally need a simple, static, platform-specific script that invokes your platform-independent generator script and then the generated script itself. In either case you start your application by calling a static platform-specific script.
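For illustration, a generator along those lines might look roughly like this. This is a sketch only: the lib directory, main class, and system property are made-up placeholders, and Jython is assumed, which is why os.sep rather than sys.platform is used to detect Windows.

import os

# Sketch: emit a platform-specific launcher with a dynamically generated classpath.
# "lib", com.example.Main and app.home are placeholders for the real application.
jars = sorted(os.path.join("lib", f) for f in os.listdir("lib") if f.endswith(".jar"))
classpath = os.pathsep.join(jars)      # ';' on Windows, ':' elsewhere
props = "-Dapp.home=%s" % os.getcwd()
on_windows = (os.sep == "\\")          # sys.platform reports 'java...' under Jython

name = "run.bat" if on_windows else "run.sh"
with open(name, "w") as f:
    if not on_windows:
        f.write("#!/bin/sh\n")
    f.write("java %s -cp %s com.example.Main\n" % (props, classpath))
if not on_windows:
    os.chmod(name, 0o755)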
Can you invoke subprocess in such a way that the stdout file will be written even if the parent process terminates?
You could open the file / set up the redirect in the child process, e.g., using the shell:
from subprocess import Popen

Popen(' '.join(args + ['>', 'logFile', '2>&1']),  # shell-specific command line
      shell=True)  # on Windows, see _cmdline2list to understand what is going on
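This is not from the original answer, but on POSIX systems one variant avoids joining and quoting the argument list by hand: pass the arguments to the shell separately and let it expand them. A sketch, with 'logFile' kept as a literal name as above:

from subprocess import Popen

# The shell receives args as "$@"; exec replaces the shell with the launched
# process, with stdout/stderr already redirected, so the parent can exit at once.
Popen(['/bin/sh', '-c', 'exec "$@" > logFile 2>&1', 'sh'] + args)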
Related
I am using buildbot. Is it possible to write my own build-step class that executes Python code on the worker?
The build-step will consist of:
1. find all files of a certain type in the source
2. start a 3rd-party application that is installed on the worker via PythonCOM
3. command the started app to do some checks for the files found in step 1
4. close the app
Unfortunately the app does not support command line parameters for performing the required operation.
I know I could write my own shell script and have that run on the worker via the RemoteCommand class. But I'd prefer to have all code in one place (in the new build-step) and not have to place such a script on each worker.
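For what it's worth, one compromise that keeps the code in the master configuration without a custom step class is to ship an inline script to the worker through the stock ShellCommand step. This is only a sketch: the COM prog-id, method names, and file pattern are invented, and the worker still needs Python plus pywin32 installed.

from buildbot.plugins import steps

# Worker-side code kept in the master config and executed on the worker
# via `python -c`; Vendor.Application / Check / Quit are hypothetical names.
worker_code = r"""
import glob
import win32com.client

app = win32com.client.Dispatch("Vendor.Application")
for path in glob.glob(r"src\*.dat"):
    app.Check(path)
app.Quit()
"""

check_step = steps.ShellCommand(
    name="run COM checks",
    command=["python", "-c", worker_code],
)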
I have a CLI application which is executed via Wine on Linux, as it needs some closed-source DLLs which are only available for Windows. However, I also have another tool which is much easier to compile and run on Linux. That Linux application communicates via STDIN/STDOUT.
So I want to spawn a native Linux process from Wine, pass some data (ideally via stdin), wait for the process to complete, and read its result (ideally via stdout). This would be trivial if both processes ran in the same OS environment (pure Linux/POSIX/Windows), but it is more complicated in my case.
I can spawn a Linux process using popen but I can't get its stdout (always getting an empty string).
I understand that Wine itself won't/can't provide blocking process creation (probably this creates a lot of edge cases when trying to maintain Windows semantics), as detailed in Wine bug 18335 and the Stack Overflow answer "Execute Shell Commands from Program running in WINE".
However the Wine process is still running under Linux so I think it should be possible to somehow tap into Linux's (= kernel) functionality and do a blocking read.
Does anyone have some pointers on how to launch a Linux process and get its stdout from Wine?
Any other ideas on how to do IPC without complicated server installs?
Theoretically I could use the file system and wait for a result file to appear, or run a TCP/HTTP server for communication. Ideally the input would only be accessible to the launched application, without a server port which every application on the same host can access.
I read about "winelib" as a way to access native Unix functionality from "Windows" programs, but I'm not sure I fully grasp how to use it and whether it helps me (I can adapt the Wine program, but as I mentioned earlier I need to access some closed-source DLLs which I cannot modify).
Edit: I just noticed the zugbruecke library, which allows communicating with a Windows DLL from (Unix) Python (via a custom Wine+TCP connection built on Python's multiprocessing). I cannot use that as-is (my DLL library uses a lot of pointers, so I have wrapped it via pybind11) and it would mean I have to rework my application a bit. However, it might result in an elegant solution where the Windows bits are more isolated and I can have more Linux fun. :-)
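To illustrate the file-system fallback mentioned above, the Wine/Windows side might look roughly like this. This is a sketch only: the paths, the wrapper script, and the Z: mapping to / are assumptions, and the Linux-side wrapper is expected to write its result to a temporary name and rename it so the output file appears atomically.

import os
import subprocess
import time

REQUEST = r"Z:\tmp\job-input.txt"    # /tmp/job-input.txt seen through Wine's Z: drive
RESULT = r"Z:\tmp\job-output.txt"    # renamed into place by the Linux-side wrapper

with open(REQUEST, "w") as f:
    f.write("data for the Linux tool\n")

# Fire and forget; the hypothetical wrapper on the Linux side reads the request
# file and renames its finished output to RESULT when done.
subprocess.Popen(["/usr/local/bin/run-linux-tool.sh"])

while not os.path.exists(RESULT):    # crude polling; fine for a single job
    time.sleep(0.1)
with open(RESULT) as f:
    result = f.read()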
I have a question about Python on Linux. I have a Python application that currently runs on Windows. The application controls some hardware, and each hardware control application runs in its own process, which in turn sometimes starts processes of its own. The processes communicate with the main control process via named pipes, named mutexes, and named memory-mapped files, but each application process has its own console window. This allows the user to select the window for one application process, representing one hardware item, and view what it's doing. It also allows a simple "print" to produce debug statements on the window for that process.
On Windows this is easy because either os.startfile or subprocess.Popen can run a Python script in a separate process, in a new console window, capturing "print" output in that window. The main process starts all the application processes and then minimizes the windows, making it easy for the user to select one (or more) for viewing. The application processes write log files when they are done, but having a console window for each one allows viewing of progress and messages in real time.
I need to make a Linux version of this and I'm running into issues. I can't figure out how to make Linux open an application in a separate, visible process window. If I use subprocess.Popen with shell=True, I get a separate process but no visible window. If I set stdout=subprocess.PIPE, the application isn't in a separate process but uses the main process window for printing, and incidentally hangs the main process until it's done (which is disastrous in my application). I found a workaround where I open the application process with shell=True, and the application process then creates a named pipe and opens its own GUI (using shell=True) for output display. But this means I have to change all the print statements in the application processes to go to the named pipe, which is a huge amount of work. Plus, it would be nice (but not essential) if the Windows and Linux versions looked the same in how the windows appear.
Is there a way, in Python, on Linux, to start a new Python process in an open, visible window that will capture "print" statements? Or am I trying to do something Linux doesn't support? I'd rather not change everything to sockets - it would probably be easier to use the GUI and named pipe method than to do that.
Thanks for any answers or insight.
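For illustration, one way to approximate the Windows behaviour described above is to launch each worker inside its own terminal emulator, for example xterm. A sketch, assuming an X session with xterm installed; worker.py and the window title are placeholders:

import subprocess

# Each hardware worker gets its own visible xterm; print() output from
# worker.py appears in that window while the main process keeps running.
proc = subprocess.Popen(["xterm", "-T", "Hardware unit 1",
                         "-e", "python3", "worker.py"])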
I want to write a Python script which can launch an application. The application being launched can also read Python commands, which I am currently passing through another script.
The problem I am facing is that I need to use two Python scripts: one to launch the application and a second one to run commands in the launched application.
Can I achieve this using a single script? How do I tell Python to run the next few lines of the script in the launched application?
In general, you use subprocess.Popen to launch a command from Python. Popen returns immediately rather than waiting for the command to finish, so your script can keep running Python statements. You also have access to the running subprocess's stdin and stdout, so you can interact with the running application.
If I understand what you're asking, it'd look something like this:
import subprocess

app = subprocess.Popen(["/path/to/app", "-and", "args"],
                       stdin=subprocess.PIPE, text=True)  # text=True: write str, not bytes (Python 3)
app.stdin.write("python command\n")
app.stdin.flush()  # make sure the command actually reaches the app
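If you also want to read the application's replies, the same pattern extends to stdout. A sketch; the read blocks until the launched application writes and flushes a full line:

import subprocess

app = subprocess.Popen(["/path/to/app", "-and", "args"],
                       stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
app.stdin.write("python command\n")
app.stdin.flush()                      # push the command through the pipe
reply = app.stdout.readline()          # blocks until the app prints a line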
I have several Python programs that run in parallel.
I want to write a Python program which will manage the other programs' logs, meaning that the other programs will send log messages to this program and it will write them to the log file.
Another important feature is that if one of the programs crashes, the 'log manager program' will know about it and can write that to the log file.
I tried to use this sample: http://docs.python.org/library/logging.html#sending-and-receiving-logging-events-across-a-network
but I failed.
Can anyone please help me?
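For reference, the sending side of that cookbook sample boils down to attaching a SocketHandler to the root logger in each program; the receiving server from the same page has to be running, otherwise nothing reaches the log file:

import logging
import logging.handlers

# Send every record from this process to the central receiver over TCP.
root = logging.getLogger('')
root.setLevel(logging.DEBUG)
root.addHandler(logging.handlers.SocketHandler(
    'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT))

logging.info('message from one of the parallel programs')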
I wrote a Python logger that does just this (even with MPI support).
It is available at https://github.com/JensTimmerman/VSC-tools/blob/master/vsc/fancylogger.py
This logger can log to a UDP port on a remote machine.
There I run a daemon that collects the logs and writes them to a file:
https://github.com/JensTimmerman/VSC-tools/blob/master/bin/logdaemon.py
This script will start the daemon for you:
https://github.com/JensTimmerman/VSC-tools/blob/master/bin/startlogdaemon.sh
If you then start your Python processes and run them in parallel (with MPI, for example), you only need to call fancylogger.getLogger() and use it as a normal Python logger.
It will pick up the environment variables set by that script, log to that server, and include some extra MPI info in the log records (like the MPI thread number).
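In other words, the MPI case reduces to something like this (a sketch based on the description above, assuming startlogdaemon.sh has already been run):

import fancylogger

logger = fancylogger.getLogger()  # picks up FANCYLOG_SERVER / FANCYLOG_SERVER_PORT from the environment
logger.info("worker started")     # forwarded over UDP to the central log daemon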
If you do not use MPI, you have two options:
- set the 'FANCYLOG_SERVER' and 'FANCYLOG_SERVER_PORT' environment variables manually in each shell where you start the remote Python process, or
- just start the daemon, and in the Python scripts get your logger like this:
import fancylogger

fancylogger.logToUDP(hostname, port=5005)  # host/port where logdaemon.py is listening
logger = fancylogger.getLogger()