Okay, I have looked at python-daemon and at various other daemon-related code recipes. Are there any 'hello world' tutorials out there that can help me get started with a Python-based daemonized process?
PEP 3143 contains several examples, the simplest of which is:
import daemon
from spam import do_main_program

with daemon.DaemonContext():
    do_main_program()
This seems as straightforward as it gets. If there's something that's unclear, please pose specific questions.
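If you want something slightly more concrete than the PEP snippet, here is a minimal 'hello world' sketch. It assumes the python-daemon package is installed; the log path and the heartbeat loop are placeholders standing in for do_main_program:

import time
import daemon

def do_main_program():
    # placeholder workload: append a heartbeat line once per second
    with open('/tmp/hello_daemon.log', 'a') as log:
        while True:
            log.write('still alive\n')
            log.flush()
            time.sleep(1)

with daemon.DaemonContext():
    do_main_program()

Run it, confirm that the shell prompt returns immediately, and tail the log file to verify the daemonized process is still writing.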
Using subprocess.Popen, you can launch another process that will survive your current process...
In a Python console, run:
import subprocess
subprocess.Popen(["/bin/sh", "-c", "sleep 500"])
Kill your console, then look at the existing processes: sleep is still alive.
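One caveat: the child may still share the console's session and controlling terminal. If you want it fully detached (so it won't receive the terminal's hangup signal), Popen accepts a start_new_session flag on POSIX; a hedged variant of the same call:

import subprocess

# same command as above, but run in its own session (POSIX only)
subprocess.Popen(["/bin/sh", "-c", "sleep 500"], start_new_session=True)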
I would imagine there are many different possible solutions to this multi-layered problem. I am trying to make a Swift-based Mac app that can manage all my Discord bots from one window. I have gotten it to turn on the Discord bots successfully (using a global thread, NOT Process objects). However, when I quit the app, I noticed the Python process launched by the app keeps running, and so does the Discord bot. Instead of the app killing all child processes, the Python process gets reparented (its parent becomes null) when the app quits.

I don't know Swift very well, so I had some trouble getting it to kill all child processes when it closes (and yes, I know there is something with Info.plist, but that is only for newer Xcode versions than I can install). To fix this, I made the applicationWillTerminate code in AppDelegate.swift execute some Python code that kills any process mentioning the files of the one bot the app works with right now. The bot is stored in a folder called roleManager. Here's the Python code:
import os
import re
import subprocess

# renamed from 'subprocess' so the variable doesn't shadow the module
ps = subprocess.Popen(['ps', '-A'], stdout=subprocess.PIPE)
output, error = ps.communicate()
print(output)

# this regex probably could have been better but it works
roleProcesses = re.findall("roleManager.{20}", str(output))
PIDs = [i.split('\\n')[1] for i in roleProcesses]

for pid in PIDs:
    with open('killProcesses.sh', 'w') as file:
        file.write(f'kill {int(pid)}')
    os.system('sudo /Users/nathanwolf/Documents/coding/PycharmProjects/botManagerPythonSide/killProcesses.sh')
ps.communicate() returns a bytes object listing the processes. As far as I can tell, each entry is formatted like: CPU time, then the command associated with the process (like /usr/local/bin/python3.9 some/python/file), then '\n' (a literal backslash-n, not an actual newline character), then the PID, then '??'.
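As an aside, a more robust alternative to regex-scraping the default ps listing might be to ask ps for exactly the columns you need. This is only a sketch of that idea, using the BSD/macOS pid= and command= output-format specifiers to get header-free output:

import subprocess

# ask ps for just "PID COMMAND", one process per line, no header row
out = subprocess.check_output(['ps', '-A', '-o', 'pid=,command='], text=True)
for line in out.splitlines():
    pid, _, command = line.strip().partition(' ')
    if 'roleManager' in command:
        print(pid, command)  # these are the bot processes to kill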
The sudo is only there because one time it said it didn't have permission to kill one of the running bots. This approach has had two problems: one time it killed its own Python process (despite not being in the folder roleManager) and crashed PyCharm, and most of the time it fails to kill the bot. For debugging, I looked for the bot's PID in the ps.communicate() output, and it was associated with the following command:
/Library/Developer/PrivateFrameworks/CoreSimulator.framework/Versions/A/XPCServices/SimulatorTrampoline.xpc/Contents/MacOS/SimulatorTrampoline.
The way I see it, there are two pathways to a possible solution: get Swift to kill child processes (not sure why that isn't the default), or get Python to successfully kill the bot (is the above process related to this?). I would prefer the first one, but either is fine.
Let me know if you need more info.
Thanks so much in advance!
Found the solution in another Stack Overflow question. I just had to execute the following terminal command:
pgrep -f Python | xargs kill -9
This kills all running Python processes, which are all going to be controlled from the app, so that works as a patch for me.
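If you'd rather run that same patch from Python instead of a shell one-liner, a minimal equivalent sketch (same pgrep/kill behavior, and the same caveat that it kills every matching Python process):

import subprocess

# equivalent of: pgrep -f Python | xargs kill -9
pids = subprocess.run(['pgrep', '-f', 'Python'],
                      capture_output=True, text=True).stdout.split()
for pid in pids:
    subprocess.run(['kill', '-9', pid])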
I am working from a windows platform. In my python script, I can make a call to an external program in the following way:
os.system("C:\mainfolder\menu.exe C:\others\file1.inp C:\others\file2.inp")
os.popen("C:\mainfolder\menu.exe C:\others\file1.inp C:\others\file2.inp")
subprocess.call(["C:\mainfolder\menu.exe","C:\others\file1.inp" "C:\others\file2.inp"])
where:
menu.exe: is my external program.
file1 and file2: are input files to my external program.
All the above works fine. Now that my external program has finished successfully, I need to close it completely, along with all the windows it left open. I have gone through lots of other posts, Python documentation, etc., and found commands such as:
os.system("taskkill /im C:\mainfolder\menu.exe")
os.kill(proc.pid,9)
child.kill()
But they did not work. I spent a lot of time trying to find something that worked for me, until I realised that no matter which commands I put after the call, they are never reached: the computer does not know that my external program has finished, so the call never returns. That is why I can easily terminate the program from the command line anytime just by typing taskkill /im menu.exe, but not from Python.
Does anybody know how to sort this out? Should I include something else when I make the call to my external program?
Here's some example code showing how to detect whether a program has opened a window. All you need to know is the title of the message box that menu.exe opens when it is finished:
import subprocess
import time
import win32gui

def enumHandler(hwnd, lParam):
    if win32gui.IsWindowVisible(hwnd):
        if 'Menu.exe Finished' in win32gui.GetWindowText(hwnd):
            proc.kill()

proc = subprocess.Popen([r"C:\mainfolder\menu.exe", r"C:\others\file1.inp", r"C:\others\file2.inp"])
while proc.poll() is None:
    win32gui.EnumWindows(enumHandler, None)
    time.sleep(1)
If you want the process to end immediately, i.e. you wait for it to end, that is a blocking call: os.system() normally waits, and with subprocess you can block via .communicate()[0].
If you want to end the process later in your program (an asynchronous, non-blocking child), you can record its pid; depending on whether shell=True or not, that will be the pid of the spawned shell or of the child process itself.
That pid can then be used to end the process, either immediately via psutil or os, or by waiting until it finishes, which uses little CPU time; meanwhile other tasks can be done, or threads could be used.
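For the "end it later" path, a small hedged sketch of the psutil option; the Popen command is the one from the question, and the timeout value is arbitrary:

import subprocess
import psutil

proc = subprocess.Popen([r"C:\mainfolder\menu.exe", r"C:\others\file1.inp", r"C:\others\file2.inp"])
# ... do other work while menu.exe runs ...

p = psutil.Process(proc.pid)
p.terminate()          # ask the process to exit
try:
    p.wait(timeout=5)  # reap it, giving it a few seconds to comply
except psutil.TimeoutExpired:
    p.kill()           # force it if it ignores terminate()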
It might be a bit late to post my findings, as I asked this question some months back, but they may still be helpful for other readers.
With the help of the people who tried answering my question, especially Daniel's answer, I found a solution to my problem. Not sure if it is the best one, but I got what I was needing.
Basically, instead of looking for the word "finished" in my pop-up window, I look for the results that my external program generates. If those results have been generated, the program has finished, so I then kill the process:
import os
import time
import subprocess

proc = subprocess.Popen([r"C:\mainfolder\menu.exe", r"C:\others\file1.inp", r"C:\others\file2.inp"])
while proc.poll() is None:
    if os.path.exists(r"D:\Results_folder\Solution.txt"):
        time.sleep(10)  # give menu.exe a moment to finish writing
        os.system('taskkill /im menu.exe')
Is there a way to open a second Python console and let the new console run while the original console keeps going, and, when the new console finishes, have it send its data back to the original console in the form of a variable?
Perhaps you should look into the multiprocessing or threading modules. These help you spawn child processes (or threads) from your original (parent) program.
The subprocess module might be what you are looking for. However, the catch is that you get the output of the process only after the entire program finishes. This means that if the program you are trying to run runs forever, you will not be able to see the output until it quits (either by being forced or via a termination method).
An example of how you would assign the output to a variable would be:
output,error=your_process.communicate()
The output part is what you would be using (based on your question). The error part is what you get if there is a problem running it (the process doesn't return 0). If you are not looking to capture errors, you can simply assign it to _.
Also note that if you are passing the command as a single string, I would suggest using the shlex library to split it into arguments: you can take a regular string such as var = "mypythonprogram.py argument1 argument2", call arguments = shlex.split(var), and supply that as the argument list for the subprocess.
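Putting those two points together, a small sketch; mypythonprogram.py and its arguments are just the placeholders from above:

import shlex
import subprocess
import sys

var = "mypythonprogram.py argument1 argument2"
arguments = shlex.split(var)  # ['mypythonprogram.py', 'argument1', 'argument2']

your_process = subprocess.Popen([sys.executable] + arguments,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
output, error = your_process.communicate()  # blocks until the program finishes
print(output.decode())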
Another option, if you don't need to interact with the program, would be using threads; there are many questions on Stack Overflow about them, as well as plenty of documentation, both official and elsewhere on the internet.
Read up on Python multiprocessing. It has examples of exchanging objects between processes.
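For reference, a minimal sketch of that exchanging-objects pattern, passing a result back from the child through a Queue; the worker body is a placeholder:

import multiprocessing

def worker(queue):
    # do the "second console" work here, then send the result back
    queue.put("result from child")

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(queue,))
    p.start()
    result = queue.get()  # blocks until the child puts a value
    p.join()
    print(result)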
It sounds like you want to do something like what the multiprocessing library offers. Without more info, all I can do is point you to the docs.
For instance:
http://www.ibm.com/developerworks/aix/library/au-multiprocessing/
or
http://docs.python.org/library/multiprocessing.html
If you are on Windows, you can use the win32console module to open a second console for your thread or subprocess output.
Here is some sample code:
import win32console
import multiprocessing

def subprocess(queue):
    # frees this process from using the main console
    win32console.FreeConsole()
    # creates a new console; all input and output of this process goes to it
    win32console.AllocConsole()
    while True:
        # prints any output produced by the main script, passed in via the queue
        print(queue.get())

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    multiprocessing.Process(target=subprocess, args=[queue]).start()
    while True:
        print("Hello World")
        # sends the above string to the subprocess, which prints it into its own console
        queue.put("Hello to subprocess console")
        # ...and whatever else you want to do in your main process
You can also do this with threading. You have to use the queue module if you want the queue functionality, as the threading module doesn't have its own queue.
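A hedged sketch of that threading variant, using queue.Queue in place of multiprocessing.Queue. Note that plain threads share one console, so this only mirrors the message-passing part, not the second window:

import queue
import threading
import time

q = queue.Queue()

def worker():
    while True:
        print(q.get())  # blocks until the main thread puts something

threading.Thread(target=worker, daemon=True).start()
q.put("Hello to worker thread")
time.sleep(1)  # give the worker a moment to print before the program exits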
Here is the win32console module documentation
My application creates subprocesses. Usually, these processes run and terminate without any problems. However, sometimes they crash.
I am currently using the python subprocess module to create these subprocesses. I check if a subprocess crashed by invoking the Popen.poll() method. Unfortunately, since my debugger is activated at the time of a crash, polling doesn't return the expected output.
I'd like to be able to see the debugging window (not terminate it) and still be able to detect in the Python code that a process has crashed.
Is there a way to do this?
When your debugger opens, the process isn't finished yet - and subprocess only knows if a process is running or finished. So no, there is not a way to do this via subprocess.
I found a workaround for this problem. I used the solution given in another question Can the "Application Error" dialog box be disabled?
Items of consideration:
subprocess.check_output() for your child processes' return codes
psutil for process and child analysis (and much more)
the threading library, to monitor these child states in your script concurrently, once you've decided how you want to handle the crashes, if desired
import psutil

myprocess = psutil.Process(process_id)  # find your process id in whatever way you choose
for child in myprocess.children():
    print("Status of child process is: {0}".format(child.status()))
You can also use the threading library to load your subprocess into a separate thread, and then perform the above psutil analyses concurrently with your other process.
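A rough sketch of that monitoring-thread idea; the command here is a stand-in child that exits abnormally on purpose, and the crash handling is just a print:

import subprocess
import sys
import threading

def monitor(cmd):
    # run the child and report how it ended
    proc = subprocess.Popen(cmd)
    returncode = proc.wait()
    if returncode != 0:
        print("child exited abnormally with code {0}".format(returncode))

# demo child that crashes immediately; replace with your real command
t = threading.Thread(target=monitor,
                     args=([sys.executable, "-c", "import sys; sys.exit(1)"],))
t.start()
t.join()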
If you find more, let me know; it's no coincidence I found this post.
I am creating a little dashboard for a user that will allow him to run specific jobs. I am using Django, so I want him to be able to click a link to start the job and then get the page back with a message that the job is running. The results of the job will be emailed to him later.
I believe I am supposed to use subprocess.Popen but I'm not sure of that. So in pseudocode, here is what I want to do:
if job == 1:
    run script in background: /path/to/script.py
    return 'Job is running'
import sys
import subprocess

p = subprocess.Popen([sys.executable, '/path/to/script.py'],
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
That will start the subprocess in background. Your script will keep running normally.
Read the documentation here.
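Translated into the pseudocode from the question, a hedged sketch of what the Django view might look like; run_job, its URL wiring, and the script path are placeholders, not part of the original answer:

import sys
import subprocess
from django.http import HttpResponse

def run_job(request, job):
    if job == 1:
        # fire and forget; output is discarded so a full pipe can't block the child
        subprocess.Popen([sys.executable, '/path/to/script.py'],
                         stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL)
    return HttpResponse('Job is running')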
Running this through a message queue is definitely the way to go if you're thinking about long-term scaling. Send a message to a queue whose consumer runs constantly in the background, and write job handlers to deal with the different sorts of messages.
Since you're using Django, I think Beanstalkd is a pretty good fit. Here's a pretty nice tutorial on the subject. The first comment in that article also has some good tips.
Personally, I've rolled with a custom in-memory queue server written in Erlang, with Python bindings written in C. But Redis looks like a great contender for future queuing/messaging needs. Hope this helps!
subprocess.Popen is indeed what you are looking for.
Although if you find that you want to start communicating a lot of information between the subprocess and the parent, you may want to consider a thread, or an RPC framework like Twisted.
But most likely those are too heavy for your application.