Determining running programs in Python

How would I use Python to determine what programs are currently running? I am on Windows.

Thanks to @hb2pencil for the WMIC command! Here's how you can pipe the output without a file:
import subprocess

cmd = 'WMIC PROCESS get Caption,Commandline,Processid'
proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
for line in proc.stdout:
    print(line)

import os

os.system('WMIC /OUTPUT:C:\\ProcessList.txt PROCESS get Caption,Commandline,Processid')
f = open('C:\\ProcessList.txt')
plist = f.readlines()
f.close()
Now plist contains a formatted, whitespace-separated list of processes:
The first column is the name of the running executable
The second column is the command line that started the process
The third column is the process ID
This should be simple to parse with Python (see the sketch below). Note that the first row is the column labels, not an actual process.
Note that this method only works on Windows!
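For example, here is a minimal parsing sketch (an untested assumption: the default WMIC column layout). Because the CommandLine column itself may contain spaces, it uses the header row to find where each fixed-width column starts:

import re
import subprocess

output = subprocess.check_output(
    'WMIC PROCESS get Caption,Commandline,Processid',
    shell=True, universal_newlines=True)
lines = [line for line in output.splitlines() if line.strip()]
header = lines[0]
# Column start offsets, taken from the header row; None marks "rest of line".
starts = [m.start() for m in re.finditer(r'\S+', header)] + [None]
for row in lines[1:]:
    caption, cmdline, pid = (row[starts[i]:starts[i + 1]].strip() for i in range(3))
    print(pid, caption)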

Piping output from subprocess commands is not ideal compared to an actual Python tool meant for inspecting processes. Try the psutil module. To get a list of process IDs, do:
psutil.get_pid_list()
(On psutil 2.0 and later this is psutil.pids().) Note that psutil is a third-party module (pip install psutil), not part of the standard library, but it is a better way to solve your problem. To get the name of the process behind a given PID, do:
psutil.Process(<number>).name
(a method, .name(), on psutil 2.0 and later). This should be what you are looking for. Also, here is a way to find whether a specific process is running:
def process_exists(name):
    for pid in psutil.get_pid_list():
        try:
            if psutil.Process(pid).name == name:
                return True
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            # the process may have exited, or we may not be allowed to query it
            pass
    return False
This uses the name of the executable file, so for example, to find a PowerShell window, you would do this:
process_exists("powershell.exe")

I was getting access denied with get_pid_list(). A newer method worked for me on Windows and OS X:
import psutil

for proc in psutil.process_iter():
    try:
        if proc.name() == "chrome.exe":
            print(proc)
            print(proc.cmdline())
    except psutil.AccessDenied:
        print("Permission error or access denied on process")

Getting exception information of sub process

I want to make a simple Windows executable-loading program,
implemented simply with os.system('./calc.exe') in Python
or WinExec(...) / CreateProcess(...) in the Windows API...
This would be a VERY simple and easy task.
However, I want to receive a detailed error report if my child process crashes.
I know I can get the error code as the return value of functions such as subprocess.call() in Python, or something similar...
But when a Windows binary crashes, I can see a detailed error report
which contains the name of the crashed module, the violation code (0xC0000005, etc.),
the offset within the crashed module, the time, and so on...
How can I get this information from the parent process, and what would be the easiest and simplest way to implement this?
Thank you in advance.
I haven't tested this, but something like this should do the trick:
import logging
import subprocess

cmd = "ls -al /directory/that/does/not/exist"  # <- or a Windows equivalent
logging.info(cmd)
try:
    process = subprocess.Popen(cmd, shell=True,
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError as err:
    logging.error(err.child_traceback)
else:
    (stdout, stderr) = process.communicate()
    logging.debug(stdout)
    if stderr is not None:
        logging.error(stderr)
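For the violation code itself, note that on Windows a crashed process reports its NTSTATUS exception code as the exit code, so the parent can at least recover that (a sketch, Windows assumed; richer details such as the faulting module and offset require attaching a debugger or Windows Error Reporting):

import subprocess

STATUS_ACCESS_VIOLATION = 0xC0000005

proc = subprocess.Popen(['calc.exe'])
proc.wait()
# Normalize to an unsigned 32-bit value in case Python reports it signed.
code = proc.returncode & 0xFFFFFFFF
if code == STATUS_ACCESS_VIOLATION:
    print('child crashed with an access violation (0xC0000005)')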

How to execute a command prompt command from python

I tried something like this, but with no effect:
import subprocess

command = "cmd.exe"
proc = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
proc.stdin.write("dir c:\\")
how about simply:
import os
os.system('dir c:\\')
You probably want to try something like this:
command = "cmd.exe /C dir C:\\"
I don't think you can pipe into cmd.exe... If you are coming from a unix background, well, cmd.exe has some ugly warts!
EDIT: According to Sven Marnach, you can pipe to cmd.exe. I tried the following in a Python shell:
>>> import subprocess
>>> proc = subprocess.Popen('cmd.exe', stdin = subprocess.PIPE, stdout = subprocess.PIPE)
>>> stdout, stderr = proc.communicate('dir c:\\')
>>> stdout
'Microsoft Windows [Version 6.1.7600]\r\nCopyright (c) 2009 Microsoft Corporation. All rights reserved.\r\n\r\nC:\\Python25>More? '
As you can see, you still have a bit of work to do (only the first line is returned), but you might be able to get this to work...
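One way to finish the job is to send cmd.exe a complete script on stdin, ending with exit, so the shell runs everything and terminates instead of waiting at the More? prompt (a sketch, Windows assumed):

import subprocess

proc = subprocess.Popen('cmd.exe', stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, universal_newlines=True)
# communicate() writes the whole script, closes stdin, and reads all output.
stdout, _ = proc.communicate('dir c:\\\nexit\n')
print(stdout)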
Try:
import os
os.popen("Your command here")
Using ' and " at the same time works great for me (Windows 10, Python 3):
import os
os.system('"some cmd command here"')
for example to open my web browser I can use this:
os.system(r'"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"')
(Edit)
For an easier way to open your browser, you can use this:
import webbrowser
webbrowser.open('website or leave it alone if you only want to open the browser')
Try adding a call to proc.stdin.flush() after writing to the pipe and see if things start behaving more as you expect. Explicitly flushing the pipe means you don't need to worry about exactly how the buffering is set up.
Also, don't forget to include a "\n" at the end of your command or your child shell will sit there at the prompt waiting for completion of the command entry.
I wrote about using Popen to manipulate an external shell instance in more detail at: Running three commands in the same process with Python
As was the case in that question, this trick can be valuable if you need to maintain shell state across multiple out-of-process invocations on a Windows machine.
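Putting both tips together, a sketch (Windows assumed; note each line ends with "\n" and is flushed immediately):

import subprocess

proc = subprocess.Popen('cmd.exe', stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, universal_newlines=True)
for command in ('cd C:\\', 'dir', 'exit'):
    proc.stdin.write(command + '\n')   # the "\n" completes the command line
    proc.stdin.flush()                 # don't leave it sitting in the buffer
print(proc.stdout.read())              # cmd has exited, so reading to EOF is safe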
Taking some inspiration from Daren Thomas's answer (and edit), try this:
proc = subprocess.Popen('dir C:\\', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, err = proc.communicate()
out will now contain the text output.
The key nugget here is that the subprocess module already provides shell integration via shell=True, so you don't need to call cmd.exe directly.
As a reminder, if you're in Python 3, this is going to be bytes, so you may want to do out.decode() to convert to a string.
Why do you want to call cmd.exe? cmd.exe is a command-line shell. If you want to change directory, use os.chdir("C:\\"). Try not to call external commands if Python can provide the same functionality. In fact, most operating-system commands are provided through the os module (and sys). I suggest you take a look at the os module documentation to see the various methods available; a small sketch follows.
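For instance, native equivalents of a few common cmd.exe commands (a sketch; the directory names are just placeholders):

import os

os.chdir('C:\\')                        # cd C:\
print(os.getcwd())                      # cd  (show current directory)
print(os.listdir('.'))                  # dir
os.makedirs('demo_dir', exist_ok=True)  # mkdir demo_dir
os.rename('demo_dir', 'demo_dir2')      # ren demo_dir demo_dir2
os.rmdir('demo_dir2')                   # rmdir demo_dir2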
It's very simple: you need just two lines of code, using only a built-in function, and it takes input and runs forever until you stop it. Also, leave that 'cmd' in quotes and don't change it. Here is the code:
import os
os.system('cmd')
Now just run this code and see the whole Windows command prompt in your Python project!
Here's a way to just execute a command line command and get its output using the subprocess module:
import subprocess
# You can put the parts of your command in the list below or just use a string directly.
command_to_execute = ["echo", "Test"]
run = subprocess.run(command_to_execute, capture_output=True)
print(run.stdout) # the output "Test"
print(run.stderr) # the error part of the output
Just don't forget the capture_output=True argument and you're fine. Also, you will get the output as a bytes object (b"something" in Python), but you can easily convert it using run.stdout.decode().
In Python, you can run CMD commands using these lines:
import os
os.system("YOUR_COMMAND_HERE")
Just replace YOUR_COMMAND_HERE with the command you like.
From Python you can run a command directly using the code below:
import subprocess

proc = subprocess.check_output(r'C:\Windows\System32\cmd.exe /k %windir%\System32\reg.exe ADD HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLUA /t REG_DWORD /d 0 /f', stderr=subprocess.STDOUT, shell=True)
print(str(proc))
The command in this example happens to change a User Account Control setting via reg.exe; substitute your own command in the first parameter.

What's the problem with executing commands in Windows CMD from Python?

I'm having huge trouble passing commands to CMD from Python.
First, I open a CMD process:
cmdprocess = subprocess.Popen("cmd",
                              stdin=subprocess.PIPE,
                              stdout=subprocess.PIPE,
                              stderr=subprocess.PIPE)
Then, I do something, for example:
for i in range(500):
    #time.sleep(1)
    command = ("dir > " + os.path.join("C:\\", str(i)) + "\r\n").encode("utf-8")
    print(command)
    cmdprocess.stdin.write(command)
So this is supposed to create 500 small text files in a folder. I tested it with Python 3.2 x64 and x86, and the result for both is the same: it counts up to about 250-350 in the Python shell and then just stops. No error, nothing. Only files 1-80 are then in the specified folder.
Now, I thought that maybe Python was too fast, so I made it sleep(1) for one second between commands. This time it counts up to about 200 before the first file even appears in the folder, and then stops at about 270.
What happens here and how can I force CMD to execute a command immediately?
Are you handling the output in the pipes? They might be filling up. If the child's stdout or stderr buffer fills, the process will stop executing.
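A sketch of that fix (assuming Python 3.3+ for subprocess.DEVNULL): if you don't need the output, don't pipe it at all, and flush stdin after each write:

import os
import subprocess

cmdprocess = subprocess.Popen("cmd",
                              stdin=subprocess.PIPE,
                              stdout=subprocess.DEVNULL,   # discard instead of piping
                              stderr=subprocess.DEVNULL)
for i in range(500):
    command = ("dir > " + os.path.join("C:\\", str(i)) + "\r\n").encode("utf-8")
    cmdprocess.stdin.write(command)
    cmdprocess.stdin.flush()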
I think you'd better use the pywin32 package; it provides the win32pipe and win32process modules.
I had the same issue, and I could not resolve it without the pywin32 site-package, so now I am using those modules. If you need sample code and you're on Windows, I can attach it.
If you mean Linux, it's the same idea, but you need something else, like I/O select.

How to start a background process in Python?

I'm trying to port a shell script to the much more readable Python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in Python? I'd like these processes not to die when the Python script completes. I am sure it's related to the concept of a daemon somehow, but I couldn't find out how to do this easily.
While jkp's solution works, the newer way of doing things (and the way the documentation recommends) is to use the subprocess module. For simple commands it's equivalent, but it offers more options if you want to do something complicated.
Example for your case:
import subprocess
subprocess.Popen(["rm", "-r", "some.file"])
This will run rm -r some.file in the background. Note that calling .communicate() on the object returned from Popen will block until it completes, so don't do that if you want it to run in the background:
import subprocess
ls_output = subprocess.Popen(["sleep", "30"])
ls_output.communicate()  # Will block for 30 seconds
See the documentation here.
Also, a point of clarification: "Background" as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I've used "background" here to refer to shell-background-like behavior.
Note: This answer is less current than it was when posted in 2009. Using the subprocess module shown in other answers is now recommended in the docs
(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.)
If you want your process to start in the background you can either use system() and call it in the same way your shell script did, or you can spawn it:
import os
os.spawnl(os.P_DETACH, 'some_long_running_command', 'some_long_running_command')
(Note that spawnl needs the program name repeated as the first argument, and that os.P_DETACH is Windows-only; alternatively, you may try the os.P_NOWAIT flag, which is also available on Unix but does not detach from the console.)
See the documentation here.
You probably want the answer to "How to call an external command in Python".
The simplest approach is to use the os.system function, e.g.:
import os
os.system("some_command &")
Basically, whatever you pass to the system function will be executed the same as if you'd passed it to the shell in a script.
I found this here:
On Windows (XP), the parent process will not finish until longtask.py has finished its work. It is not what you want in a CGI script. The problem is not specific to Python; the PHP community has the same problems.
The solution is to pass the DETACHED_PROCESS process-creation flag to the underlying CreateProcess function in the Windows API. If you happen to have pywin32 installed, you can import the flag from the win32process module; otherwise you should define it yourself:
import subprocess
import sys

DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
Use subprocess.Popen() with the close_fds=True parameter, which will allow the spawned subprocess to be detached from the Python process itself and continue running even after Python exits.
https://gist.github.com/yinjimmy/d6ad0742d03d54518e9f
import os, time, sys, subprocess

if len(sys.argv) == 2:
    # child: do the long-running work
    time.sleep(5)
    print('track end')
    if sys.platform == 'darwin':
        subprocess.Popen(['say', 'hello'])
else:
    # parent: relaunch this script detached, then exit
    print('main begin')
    subprocess.Popen(['python', os.path.realpath(__file__), '0'], close_fds=True)
    print('main end')
Both capture output and run in the background with threading
As mentioned in this answer, if you capture the output with stdout= and then try to read(), the process blocks.
However, there are cases where you need this. For example, I wanted to launch two processes that talk over a port between them, while saving their stdout both to a log file and to stdout.
The threading module allows us to do that.
First, have a look at how to do the output redirection part alone in this question: Python Popen: Write to stdout AND log file simultaneously
Then:
main.py
#!/usr/bin/env python3
import os
import subprocess
import sys
import threading

def output_reader(proc, file):
    while True:
        byte = proc.stdout.read(1)
        if byte:
            sys.stdout.buffer.write(byte)
            sys.stdout.flush()
            file.buffer.write(byte)
        else:
            break

with subprocess.Popen(['./sleep.py', '0'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc1, \
     subprocess.Popen(['./sleep.py', '10'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc2, \
     open('log1.log', 'w') as file1, \
     open('log2.log', 'w') as file2:
    t1 = threading.Thread(target=output_reader, args=(proc1, file1))
    t2 = threading.Thread(target=output_reader, args=(proc2, file2))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
sleep.py
#!/usr/bin/env python3
import sys
import time

for i in range(4):
    print(i + int(sys.argv[1]))
    sys.stdout.flush()
    time.sleep(0.5)
After running:
./main.py
stdout gets updated every 0.5 seconds, two lines at a time, to contain:
0
10
1
11
2
12
3
13
and each log file contains the respective log for a given process.
Inspired by: https://eli.thegreenplace.net/2017/interacting-with-a-long-running-child-process-in-python/
Tested on Ubuntu 18.04, Python 3.6.7.
You probably want to start investigating the os module for forking separate processes (for example, by opening an interactive session and issuing help(os)). The relevant functions are fork and the exec family. To give you an idea of how to start, put something like this in a function that performs the fork (the function needs to take a list or tuple 'args' as an argument, containing the program's name and its parameters; you may also want to define stdin, stdout and stderr for the new process):
try:
    pid = os.fork()
except OSError as e:
    ## some debug output
    sys.exit(1)
if pid == 0:
    ## optionally use os.putenv(..) to set environment variables
    ## os.execv uses args[0] as the program to run and args as its argument list
    os.execv(args[0], args)
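A fuller sketch of such a function (POSIX only; the name spawn_detached and the double fork are my additions, not part of the original answer). The double fork plus setsid is the classic way to keep the child running after the parent exits:

import os

def spawn_detached(args):
    pid = os.fork()
    if pid > 0:
        os.waitpid(pid, 0)       # parent: reap the intermediate child and return
        return
    os.setsid()                  # child: start a new session, drop the terminal
    if os.fork() > 0:
        os._exit(0)              # intermediate child exits immediately
    os.execv(args[0], args)      # grandchild: replace itself with the program

spawn_detached(['/bin/sleep', '30'])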
You can use
import os
pid = os.fork()
if pid == 0:
    # child process: continue with the other code here ...
This will make the Python process run in the background.
I haven't tried this yet, but using .pyw files instead of .py files should help. .pyw files don't get a console, so in theory the script should not show a window and should work like a background process.
