Currently I can list my processes with a simple Python script:
import os
os.system("Tasklist")
I would like to list all the threads associated with those processes, if any. The count of threads per process might be sufficient.
Would someone direct me to where I might find this information?
Thank you.
You can use the psutil module for retrieving process information in a cross-platform way.
After installing it, the following code prints the thread count of every running process.
import psutil

# Iterate over all running processes and print each one's thread count.
for proc in psutil.process_iter():
    print(proc.name() + ' [' + str(proc.num_threads()) + ' threads]')
I am looking for a way to do multiprocessing (specifically using Process and Queue) but with an external program instead of a Python class/module. Is this possible? I don't want to use the subprocess module because it doesn't provide message passing the way multiprocessing does, or at least I wasn't able to use it the same way as multiprocessing: running a process, sending it a task every now and then, and reading the results while keeping the process alive.
Any hint is greatly appreciated.
I have a Python script that runs another Python program and then gathers results from the logs. The only problem is that I want it to run for a limited number of seconds, so I want to kill the process after, say, 1 minute.
How can I do this?
I'm running an external program with the command os.system("./test.py")
You need more control over your child process than os.system allows. The subprocess module, especially subprocess.Popen and the Popen objects it returns, gives you enough control for managing child processes. For a timer, see the threading module in the standard library (threading.Timer, for example).
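As a minimal sketch of that approach, assuming the child is started as ./test.py and should be killed after 60 seconds (threading.Timer is one way to arm the timeout):

import subprocess
import threading

# Start the child process under our control instead of via os.system().
proc = subprocess.Popen(['./test.py'])

# Arm a timer that kills the child if it is still running after 60 seconds.
timer = threading.Timer(60, proc.kill)
timer.start()
try:
    proc.wait()       # block until the child exits (or is killed by the timer)
finally:
    timer.cancel()    # harmless no-op if the timer already fired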
Check out the psutil module. It provides a cross-platform interface for retrieving information on all running processes, and it also allows you to kill processes. (It can do more, but that's all you should need!)
Here's the basic idea of how you could use it:
import os
import time
import psutil

# Launch './test.py' in the background; a plain os.system call would block
# until the script finished on its own.
os.system('./test.py &')

# Find the PID for './test.py'.
# psutil lets you scan the running processes to find it.
pid = None
for proc in psutil.process_iter():
    if './test.py' in proc.cmdline():
        pid = proc.pid
        break

time.sleep(60)

if pid is not None:
    psutil.Process(pid).kill()
#!/usr/bin/env python
import time, os, subprocess
process = subprocess.Popen(
    ['yes'], stdout=open(os.devnull, 'w'))
time.sleep(60)
process.terminate()
I'm considering using Python's multiprocessing package for messaging between local Python programs.
This seems like the right way to go IF:
The programs will always run locally on the same machine (and same OS instance)
The programs' implementation will remain in Python
Speed is important
Is it possible in the case where the Python processes were run independently by the user, i.e. one did not spawn the other?
How?
The docs seem to give examples only of cases where one spawns the other.
See Listeners and Clients in the multiprocessing documentation.
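As a rough sketch (Python 3; the address, port and authkey below are arbitrary choices for illustration), two independently started Python programs can talk to each other with multiprocessing.connection like this:

# listener.py -- run this in one independently started process
from multiprocessing.connection import Listener

address = ('localhost', 6000)
with Listener(address, authkey=b'secret') as listener:
    with listener.accept() as conn:
        conn.send({'task': 'ping'})     # send a task to the other program
        print('reply:', conn.recv())    # read its result

# client.py -- run this in another, separately started process
from multiprocessing.connection import Client

address = ('localhost', 6000)
with Client(address, authkey=b'secret') as conn:
    task = conn.recv()                  # receive the task
    conn.send('pong')                   # send a result back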
The programs will always run locally on the same machine (and same OS instance)
multiprocessing also allows for remote concurrency.
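For example, the multiprocessing.managers machinery can expose a queue over a socket, so the two sides don't even have to run on the same machine (the port and authkey here are arbitrary):

# server side
from multiprocessing.managers import BaseManager
import queue

task_queue = queue.Queue()

class QueueManager(BaseManager):
    pass

QueueManager.register('get_queue', callable=lambda: task_queue)
manager = QueueManager(address=('', 50000), authkey=b'abc')
manager.get_server().serve_forever()

# client side (can be on another machine)
from multiprocessing.managers import BaseManager

class QueueManager(BaseManager):
    pass

QueueManager.register('get_queue')
manager = QueueManager(address=('localhost', 50000), authkey=b'abc')
manager.connect()
manager.get_queue().put('hello')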
The programs' implementation will remain in Python
Yes and no. You could wrap an external command in a Python function. This will work, for example:
from multiprocessing import Process
import subprocess

def f(name):
    subprocess.call(["ls", "-l"])

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.start()
    p.join()
Speed is important
That depends on a number of factors:
how much overhead will the co-ordination between processes cause?
how many cores does your CPU have?
how much disk I/O is required by each process? Do they work on the same physical disk?
...
Is it possible in case the python processes were run independently by the user, i.e. one did not spawn the other?
I'm not an expert on the subject, but I implemented something similar once by using files to exchange data [basically one process's output file was monitored as an input source by the other, and vice-versa].
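Very roughly, that file-based exchange might look like this (messages.txt is an arbitrary file name, and a real version would need locking or some framing of messages):

# writer.py -- appends one message per line
with open('messages.txt', 'a') as f:
    f.write('hello from the writer\n')

# reader.py -- polls the file for new lines
import time

with open('messages.txt', 'r') as f:
    while True:
        line = f.readline()
        if line:
            print('got:', line.strip())
        else:
            time.sleep(0.5)   # nothing new yet; wait and poll again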
HTH!
Okay, I have looked at python-daemon and also at various other daemon-related code recipes. Are there any 'hello world' tutorials out there that can help me get started with a Python-based daemonized process?
PEP 3143 contains several examples, the simplest of which is:
import daemon
from spam import do_main_program

with daemon.DaemonContext():
    do_main_program()
This seems as straightforward as it gets. If there's something that's unclear, please pose specific questions.
Using subprocess.Popen, you can launch another process that will survive your current process...
In a Python console, run:
import subprocess
subprocess.Popen(["/bin/sh", "-c", "sleep 500"])
Kill your console and look at the existing processes: sleep is still alive...
My application creates subprocesses. Usually, these processes run and terminate without any problems. However, sometimes, they crash.
I am currently using the python subprocess module to create these subprocesses. I check if a subprocess crashed by invoking the Popen.poll() method. Unfortunately, since my debugger is activated at the time of a crash, polling doesn't return the expected output.
I'd like to be able to see the debugging window (not terminate it) and still be able to detect in the Python code whether a process has crashed.
Is there a way to do this?
When your debugger opens, the process isn't finished yet, and subprocess only knows whether a process is running or has finished. So no, there is no way to do this via subprocess.
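For reference, this is all poll() can tell you (a tiny sketch; sleep just stands in for your real child):

import subprocess
import time

proc = subprocess.Popen(['sleep', '2'])
print(proc.poll())    # None: the child is still running
time.sleep(3)
print(proc.poll())    # 0 here; a nonzero (or negative) value means it exited abnormally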
I found a workaround for this problem: I used the solution given in another question, "Can the 'Application Error' dialog box be disabled?"
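That approach is Windows-specific. A rough sketch of one common technique (not necessarily the exact solution in that question) is to disable the crash dialog with the Win32 SetErrorMode call before spawning children; the child command below is only a placeholder:

import ctypes
import subprocess

# Windows only: stop the "Application Error" dialog from appearing, so a
# crashed child exits immediately and Popen.poll() can see its exit code.
SEM_NOGPFAULTERRORBOX = 0x0002                      # documented Win32 constant
ctypes.windll.kernel32.SetErrorMode(SEM_NOGPFAULTERRORBOX)

# Children normally inherit the parent's error mode.
proc = subprocess.Popen(['my_child.exe'])           # placeholder child program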
Items of consideration:
subprocess.check_output() for your child processes' return codes (a short sketch follows below)
psutil for process & child analysis (and much more)
the threading library, to monitor these child states in your script as well, once you've decided how you want to handle the crashing, if desired
import psutil

myprocess = psutil.Process(process_id)  # you can find your process id in various ways of your choosing
for child in myprocess.children():
    print("Status of child process is: {0}".format(child.status()))
You can also use the threading library to run your subprocess in a separate thread, and then perform the above psutil analyses concurrently with your other work.
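A rough sketch of that combination, assuming the child is launched as ./test.py:

import subprocess
import threading
import time
import psutil

def run_child():
    # Run the child in a worker thread so the main thread stays free to monitor.
    subprocess.call(['./test.py'])      # placeholder child script

worker = threading.Thread(target=run_child)
worker.start()

me = psutil.Process()                   # this (parent) process
while worker.is_alive():
    for child in me.children(recursive=True):
        print("Status of child process is: {0}".format(child.status()))
    time.sleep(1)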
If you find more, let me know; it's no coincidence I found this post.