The launched program doesn't shut down when terminate() is called on macOS.
I tried to open an external file through Python and then close it. Everything seems to work except for killing the process (application) on macOS. How can I kill it?
import subprocess
import keyboard  # third-party: pip install keyboard

prog = subprocess.Popen(['open', FileName])
while True:
    if keyboard.is_pressed("q"):
        prog.terminate()
        break
Not so easy.
On macOS, the open command searches the system for an application to launch your file, which means it fork-execs another executable as a new process. Once that is done, open itself terminates, leaving the other process running in the background as an orphan.
So through the subprocess handle you see the process ID of open, but not the process ID of the child that open launched. Moreover, consider the case where open is given a directory: on macOS it is opened as a new Finder window, and no new process is created at all! The same applies to other files when the invoked application was already running before you called open and prefers to open the new file in a tab of the existing process.
In your situation, if you want better control, you probably need to figure out the right application for your file and launch it directly instead of relying on open.
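For instance, a minimal sketch of that approach. The bundle path, the file name, and the convention that the binary inside Contents/MacOS is named after the bundle are assumptions; check the actual bundle on your machine:

```python
import os
import subprocess

def app_executable(app_path):
    """Given an .app bundle path, return the binary inside it.
    Assumes the executable is named after the bundle, which is the
    common convention but not guaranteed."""
    name = os.path.splitext(os.path.basename(app_path))[0]
    return os.path.join(app_path, "Contents", "MacOS", name)

# macOS-only usage: launch TextEdit directly so Popen holds the real PID,
# and terminate() then kills the editor itself rather than `open`.
# prog = subprocess.Popen([app_executable("/Applications/TextEdit.app"),
#                          "notes.txt"])
# prog.terminate()
```

Since Popen now owns the application's actual PID, terminate() behaves the way the question expects.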
Edit:
This is the man page of open. You can use a switch to keep open running until the child process terminates, and so on. Still, I am not sure you can kill the child process by killing open (whether that succeeds depends on many factors). You probably need to figure out the process IDs of the child processes and kill them directly.
Related
I have an executable (main.exe) that I've packaged with PyInstaller, and it appears to function as expected. I execute main.exe from a Node.js server as a child_process, and in Task Manager I can see two main.exe processes running.
It looks like this is a result of the bootloader: https://pyinstaller.readthedocs.io/en/stable/advanced-topics.html#the-bootstrap-process-in-detail
" It begins the setup and then returns itself in another process. This approach of using two processes allows a lot of flexibility and is used in all bundles except one-folder mode in Windows. So do not be surprised if you will see your bundled app as two processes in your system task manager."
My issue is: how can I cleanly access and terminate this second process from within Node.js? Currently I terminate the original child process but am left with a single main.exe process still running.
I am working on Unix systems and have a GUI application that in turn spawns a couple of other processes. These processes are required to run independently of the parent process (the GUI application). Basically, when the GUI crashes or is closed, the child processes should keep running.
One approach could be to daemonize the processes. Here is a useful answer that runs a process in the background through double forking.
What I would like to ask is whether it is possible to get the same result using a terminal multiplexer like tmux or GNU Screen. I am not sure how these terminal multiplexers create and maintain shell sessions, but the basic idea would be for the GUI application to use tmux or screen to create a shell session and run the child processes within it. Would that make the child processes independent of the parent process?
Thanks in advance!
It should work if your GUI runs something like this:
tmux new-session -s test -d vim
which creates a detached session named "test", running the "vim" command. The session can then be attached with:
tmux attach-session -t test
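From the GUI application's side, spawning such a detached session looks roughly like this. A sketch: the session name and worker command are placeholders, and tmux must be on PATH:

```python
import subprocess

def tmux_spawn(session, command):
    """Build the tmux invocation that runs `command` in a detached
    session named `session`. The spawned process then belongs to the
    tmux server, not to the GUI, so it survives a GUI crash."""
    return ["tmux", "new-session", "-d", "-s", session] + list(command)

argv = tmux_spawn("workers", ["python3", "worker.py"])
# subprocess.run(argv, check=True)   # uncomment where tmux is installed
```

Because the tmux server, not the GUI, is the parent of the worker, closing or killing the GUI leaves the session running.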
I have a question about Python on Linux. I have a Python application that currently runs on Windows. The application controls some hardware, and each hardware-control application runs in its own process, which in turn sometimes starts processes of its own. The processes communicate with the main control process through named pipes, named mutexes, and named memory-mapped files, but each application process has its own console window. This allows the user to select the window for one application process, representing one hardware item, and view what it's doing. It also allows a simple print to produce debug statements in the window for that process.
On Windows this is easy because either os.startfile or subprocess.Popen can run a Python script in a separate process, in a new console window, capturing print output in that window. The main process starts all the application processes and then minimizes the windows, making it easy for the user to select one (or more) for viewing. The application processes write log files when they are done, but having a console window for each one allows viewing of progress and messages in real time.
I need to make a Linux version of this and I'm running into issues. I can't figure out how to make Linux open an application in a separate, visible process window. If I use subprocess.Popen with shell=True, I get a separate process but no visible window. If I set stdout=subprocess.PIPE, the application isn't in a separate process but uses the main process window for printing, and incidentally hangs the main process until it's done (this is disastrous in my application). I found a workaround where I open the application process with shell=True, and the application process then creates a named pipe and opens its own GUI (using shell=True) for output display. But this means I have to change all the print statements in the application processes to go to the named pipe, which is a huge amount of work. Plus, it would be nice (but not essential) if the Windows and Linux versions looked the same in how the windows appear.
Is there a way, in Python on Linux, to start a new Python process in an open, visible window that will capture print statements? Or am I trying to do something Linux doesn't support? I'd rather not switch everything to sockets; it would probably be easier to use the GUI-and-named-pipe method than to do that.
Thanks for any answers or insight.
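One common Linux counterpart to the Windows new-console behaviour is to launch each worker inside a terminal-emulator window. A sketch, assuming xterm is installed; the window title and worker.py are placeholders:

```python
import subprocess

def console_command(title, script):
    """Build an xterm invocation that runs `script` in its own visible
    window; print output from the script appears in that window.
    -hold keeps the window open after the script exits."""
    return ["xterm", "-hold", "-T", title, "-e", "python3", script]

cmd = console_command("Hardware 1", "worker.py")
# proc = subprocess.Popen(cmd)   # separate process, separate window
```

Other terminal emulators (gnome-terminal, konsole, etc.) support a similar `-e`/`--`-style option, though the exact flags differ.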
I have an external server that I can SSH into. It runs bots on reddit.
Whenever I close the terminal window the bot is running in, the process stops, which means the bot stops as well.
I've tried using
nohup python mybot.py
but it doesn't work: when I close the window and check the processes (ps -e), python does not show up. Are there any alternatives to nohup? Ideally ones that print the output to the terminal instead of to an external file.
Have you considered using tmux or screen? They have lots of features and can help you detach a terminal and re-attach to it later without disrupting the running process.
In a project I am working on, there is some code that starts up a long-running process using sudo:
subprocess.Popen(['sudo', '/usr/bin/somecommand', ...])
I would like to clean up this process when the parent exits. Currently, the subprocess keeps running when the parent exits (re-attached to init, of course).
I am not sure of the best solution to this problem. The code is limited to only running certain commands via sudo, and granting blanket authority to run sudo kill would be sketchy at best.
I don't have an open pipe to the child process that I can close (the child process is not reading from stdin), and I am not able to modify the code of the child process.
Are there any other mechanisms that might work in this situation?
First of all, I'll just answer the question. Though I do not think it is a good thing to do, it is what you asked for. I would wrap that child process in a small program that listens on stdin. You can then run that program under sudo; it will be able to start the process without a second sudo, will know its PID, and will have the rights needed to kill the process when you ask it to via stdin.
However, such a situation generally means passwordless sudo and poor security. The more common technique is to lower your program's privileges, not to elevate them. In that case you should create a runner program that is started by the superuser; it then starts your main program with lowered privileges and listens on a pipe for communication. When a command needs to be run, your main program tells the runner over the pipe, and the runner does the job. When the command needs to be terminated, you again tell the runner via the pipe.
The common rules are:
If you need superuser rights, you should give them to the very parent process.
If a child process needs to perform a privileged operation, it asks the top-level process to do it on its behalf.
The top-level process should be kept as small as possible and do as little as possible. The larger it is, the more holes in security it creates.
That's what many applications do. The first example that comes to mind is the Apache web server (at least on *nix), which has a small top-level program and pre-forked worker programs that do not run as root/wheel/whatever-else-is-the-superuser-username.
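The "lowering privileges" half of that pattern is just a couple of system calls. A sketch, assuming the runner was started as root and target_uid/target_gid identify the unprivileged account it should hand the main program to:

```python
import os

def drop_privileges(target_uid, target_gid):
    """Give up root permanently for this process. Order matters:
    setgid first, because after setuid we no longer have the
    privilege to change groups."""
    os.setgid(target_gid)
    os.setuid(target_uid)

# In the runner: fork, then in the child call drop_privileges(...)
# and exec the main program; the privileged parent stays tiny and
# only services requests arriving on the pipe.
```

Calling these with the process's own real IDs is a no-op, which is also why the call is safe to exercise unprivileged.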
This will raise OSError: [Errno 1] Operation not permitted on the last line:
p = subprocess.Popen(['sudo', '/usr/bin/somecommand', ...], stdout=subprocess.PIPE)
print(p.stdout.read())
p.terminate()
Assuming sudo will not ask for a password, one workaround is to make a shell script which calls sudo …
#!/bin/sh
sudo /usr/bin/somecommand
… and then do this in Python:
p = subprocess.Popen("/path/to/script.sh", cwd="/path/to", stdout=subprocess.PIPE)
print(p.stdout.read())
p.terminate()