I have an executable (main.exe) that I've packaged with PyInstaller, and it appears to be functioning as expected. I execute main.exe from a Node.js server as a child_process, and in Task Manager I can see two main.exe processes running.
It looks like this is a result of the bootloader: https://pyinstaller.readthedocs.io/en/stable/advanced-topics.html#the-bootstrap-process-in-detail
" It begins the setup and then returns itself in another process. This approach of using two processes allows a lot of flexibility and is used in all bundles except one-folder mode in Windows. So do not be surprised if you will see your bundled app as two processes in your system task manager."
My issue is: how can I cleanly access and terminate this second process from within Node.js? Currently I terminate the original child process, but I am left with a single main.exe process still running.
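A common approach on Windows is to let taskkill terminate the whole process tree; the same command can be spawned from Node's child_process just as well. A minimal sketch in Python (the taskkill flags are standard; the wrapper function itself is hypothetical):

import subprocess

def kill_tree(pid):
    # /T terminates the given process and every child it spawned
    # (including the PyInstaller bootloader's second process);
    # /F forces termination.
    subprocess.run(["taskkill", "/pid", str(pid), "/T", "/F"], check=True)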
I am running a Python script on a client machine. It is a multi-process application that spawns several processes, and in the Windows Task Manager I can see all of them running.
I would like to rename them to the company name, but this is proving to be a struggle.
I have tried passing a name when spawning the processes, but that didn't work:
processes = multiprocessing.Process(name="mycompany", target=activateMainProgram, args=(argument1,))
I have also tried renaming python.exe to myPython.exe, but Python still appears in the list.
Is there a solution for this? There are several clients, so doing it manually (manually renaming the processes in Task Manager, if that is even possible) isn't an option.
Thank you.
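One likely reason the rename had no effect is that multiprocessing spawns its children with sys.executable, which still points at the original python.exe. A sketch of pointing it at the renamed interpreter instead, via multiprocessing.set_executable (the interpreter path is hypothetical, and the stub target stands in for the question's activateMainProgram):

import multiprocessing

def activateMainProgram(arg):  # stand-in for the question's target function
    print("child started with", arg)

if __name__ == "__main__":
    # multiprocessing launches children with sys.executable by default,
    # so renaming python.exe on disk alone doesn't help; point it at the
    # renamed copy explicitly (path below is hypothetical).
    multiprocessing.set_executable(r"C:\Python\myPython.exe")
    p = multiprocessing.Process(target=activateMainProgram, args=("argument1",))
    p.start()
    p.join()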
I am using Buildbot. Is it possible to write my own build-step class that executes Python code on the worker?
The build step will:
1. find all files of a certain type in the source,
2. start, via PythonCOM, a third-party application that is installed on the worker,
3. command the started app to perform some checks on the files found in step 1,
4. close the app.
Unfortunately the app does not support command line parameters for performing the required operation.
I know I could write my own shell script and have it run on the worker via the RemoteCommand class. But I'd prefer to have all the code in one place (in the new build step) rather than having to place such a script on each worker.
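One caveat is that custom build-step code runs on the master, not the worker, so the COM logic can't execute inside the step itself. A way to keep everything in one place is to embed the worker-side script as a string in the master configuration and ship it over per build. A sketch assuming a recent Buildbot; the ProgID, file extension, and COM method names below are hypothetical placeholders:

from buildbot.plugins import steps, util

# The worker-side logic lives here as a string, so it never has to be
# pre-installed on the workers.
CHECK_SCRIPT = r'''
import glob
import win32com.client  # PythonCOM / pywin32 must already be on the worker

app = win32com.client.Dispatch("ThirdParty.Application")  # hypothetical ProgID
for path in glob.glob("**/*.dat", recursive=True):        # hypothetical file type
    app.CheckFile(path)                                   # hypothetical COM call
app.Quit()                                                # hypothetical COM call
'''

factory = util.BuildFactory()
factory.addStep(steps.StringDownload(CHECK_SCRIPT, workerdest="run_checks.py"))
factory.addStep(steps.ShellCommand(command=["python", "run_checks.py"]))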
An opened program doesn't shut down when terminate() is called on macOS.
I am trying to open an external file through Python and then close it again. Everything seems to be working except for killing the process (application) on macOS. How can I kill it?
import subprocess
import keyboard

prog = subprocess.Popen(['open', FileName])
while True:
    if keyboard.is_pressed("q"):
        prog.terminate()
        break
Not so easy.
On macOS, the open command searches the system for an application to launch your file, which means it fork-execs another executable as a new process. Once that is done, open itself terminates, leaving the other process running (in the background, as an orphaned process).
So from the subprocess context you will see the process ID of open, but not the process ID of the child process that open launched. Moreover, consider the case where open is given a directory, which on macOS is opened as a new window in Finder: then no new process is created at all! The same applies to other files if the application to be invoked was already running before you called open and prefers to open the new file in a tab of the existing process.
In your situation, if you want better control, you probably need to figure out the right application for your file and launch it directly instead of relying on open.
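For example, a minimal sketch of that direct-launch idea (TextEdit and the file name are just stand-ins for whatever application actually handles your file; whether an app accepts a file as a command-line argument varies per application):

import subprocess

FileName = "notes.txt"  # stand-in for the question's variable

# Launch the binary inside the .app bundle directly, so Popen holds the
# PID of the real application process rather than of open.
prog = subprocess.Popen(
    ["/Applications/TextEdit.app/Contents/MacOS/TextEdit", FileName])

prog.terminate()  # now terminates the application itself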
Edit:
This is the man page of open. You may be able to use some of its switches, such as -W, to make open keep running until the launched application terminates, and so on. But still, I am not sure you can kill the child process by killing open (whether that succeeds depends on a lot of factors). You probably need to figure out the process IDs of the child processes and kill them directly.
I have a question about Python on Linux. I have a Python application that currently runs on Windows. The application controls some hardware, and each hardware control application runs in its own process, which in turn sometimes starts processes of its own. The processes communicate with the main control process through named pipes, named mutexes, and named memory-mapped files, but each application process has its own console window. This allows the user to select the window of one application process, representing one hardware item, and view what it is doing. It also allows a simple "print" to produce debug statements in the window for that process.
On Windows this is easy because either os.startfile or subprocess.Popen can run a Python script in a separate process, in a new console window, capturing "print" output in that window. The main process starts all the application processes and then minimizes the windows, making it easy for the user to select one (or more) for viewing. The application processes write log files when they are done, but having a console window for each one allows progress and messages to be viewed in real time.
I need to make a Linux version of this and I am running into issues. I can't figure out how to make Linux open an application in a separate, visible, process window. If I use subprocess.Popen with shell=True, I get a separate process but no visible window. If I set stdout=subprocess.PIPE, the application isn't in a separate process but uses the main process window for printing, and incidentally hangs the main process until it is done (which is disastrous in my application). I found a workaround where I open the application process with shell=True, and the application process then creates a named pipe and opens its own GUI (again with shell=True) for output display. But this means I would have to change all the print statements in the application processes to write to the named pipe, which is a huge amount of work. Plus, it would be nice (but not essential) if the Windows and Linux versions looked the same in how the windows appear.
Is there a way, in Python on Linux, to start a new Python process in an open, visible window that will capture "print" statements? Or am I trying to do something Linux doesn't support? I'd rather not change everything to sockets; it would probably be easier to use the GUI-and-named-pipe method than to do that.
Thanks for any answers or insight.
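One approach that maps fairly closely to the Windows behaviour is to launch each worker inside its own terminal emulator, so ordinary print output lands in a visible window. A sketch assuming xterm is installed and a hypothetical worker.py:

import subprocess

# "-T" sets the window title, "-hold" keeps the window open after the
# script exits, and "-e" runs the given command; print output from
# worker.py appears in this window, one window per process.
subprocess.Popen(["xterm", "-hold", "-T", "Hardware unit 1",
                  "-e", "python3", "worker.py"])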
Let me start with what I'm really trying to do. We want a platform-independent startup script for invoking a JVM with some system properties and a dynamically generated classpath. We picked Jython in particular because we then only need to depend on the standalone jython.jar in our startup script. We decided we could write a Jython script that uses subprocess.Popen to launch our application's JVM and then terminates.
One more thing: our application uses a lot of legacy debug code that prints to standard out, so the startup script has typically redirected stdout/stderr to a log file. I attempted to reproduce that in our Jython script like this:
subprocess.Popen(args, stdout=logFile, stderr=logFile)
After this line, the launcher script and the JVM hosting Jython terminate. The problem is that nothing shows up in logFile. If I instead do this:
subprocess.Popen(args, stdout=logFile, stderr=logFile).wait()
then we get logs. So the parent process needs to keep running in parallel with the application process launched via subprocess? I want to avoid having two JVMs running.
Can you invoke subprocess in such a way that the stdout file is written even if the parent process terminates? Is there a better way to launch the application JVM from Jython? Is Jython a bad choice altogether?
We want a platform independent startup script for invoking a JVM with some system properties and a dynamically generated classpath.
You could use a platform-independent script to generate a platform-specific startup script, either at installation time or before each invocation. In the latter case you additionally need a simple, static, platform-specific script that invokes your platform-independent generator script and then runs the generated script. In both cases you start your application by calling a static platform-specific script.
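A minimal sketch of such a generator, runnable under Jython or CPython (the main class, property names, and output file names are all hypothetical):

import os

def write_launcher(classpath_entries, sysprops, dest_dir="."):
    # Render -Dkey=value pairs and use the platform's classpath separator.
    props = " ".join("-D%s=%s" % kv for kv in sorted(sysprops.items()))
    if os.name == "nt":
        path = os.path.join(dest_dir, "run.bat")
        body = "@echo off\r\njava %s -cp %s com.example.Main %%*\r\n" % (
            props, ";".join(classpath_entries))
    else:
        path = os.path.join(dest_dir, "run.sh")
        body = "#!/bin/sh\njava %s -cp %s com.example.Main \"$@\"\n" % (
            props, ":".join(classpath_entries))
    f = open(path, "w")
    f.write(body)
    f.close()
    if os.name != "nt":
        os.chmod(path, 0o755)  # make the generated script executable
    return path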
Can you invoke subprocess in such a way that the stdout file will be written even if the parent process terminates?
You could perform the redirection in a child process, e.g. by using the shell, so the child itself owns the log file handle and keeps writing after the Jython JVM exits:
from subprocess import Popen

Popen(' '.join(args + ['>', 'logFile', '2>&1']),  # shell-specific command line
      shell=True)  # on Windows see _cmdline2list to understand what is going on
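Alternatively, since Jython sits on the JVM, you could bypass subprocess and let Java hand the file descriptor to the child directly. This is a sketch assuming Java 7+ and that args is a list of command strings (the command and log path below are hypothetical):

from java.io import File
from java.lang import ProcessBuilder

args = ["java", "-cp", "app.jar", "com.example.Main"]  # hypothetical command

pb = ProcessBuilder(args)
log = File("logFile")
pb.redirectOutput(log)  # the child, not the Jython JVM, writes the file,
pb.redirectError(log)   # so logging continues after the launcher exits
pb.start()              # no wait() needed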