Making a process wait for command - python

I'm making a program which executes commands I type in to linux. For instance:
~> Python myProgram start
~> cd Music (or some other linux command)
~/Music> Python myProgram doSomething
~/Music> cd ..
~>Python myProgram doSomethingElse
I guess the program must look something like this:
if sys.argv[1] == "start":
    get processID
    echo processID >> /dev/shm/ID
    while True:
        wait for command
        method(argument)
if sys.argv[1] == "doSomething":
    processID = read("/dev/shm/ID")
    tell process to run method(doSomething)
def method()
def read()
My question is: Where do I start? Do I have to use Thread, Multiprocessing, Subprocess or Popen?
Any help is appreciated!

Here's a neat interface for creating command-line tools with Python; this may be a good place to start: https://docs.python.org/2/library/cmd.html
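A minimal sketch of what such an interactive command loop could look like with cmd.Cmd (the class name and the do_doSomething body are hypothetical placeholders):
import cmd

class MyProgram(cmd.Cmd):
    prompt = '(myProgram) '

    def do_doSomething(self, arg):
        """Run the doSomething action; 'arg' is whatever follows the command."""
        print('doing something with {!r}'.format(arg))

    def do_quit(self, arg):
        """Exit the command loop."""
        return True

if __name__ == '__main__':
    MyProgram().cmdloop()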

Related

linux python application pid existing check

I have a strange problem with auto-running my Python application. As everybody knows, to run this kind of app I need to run the command:
python app_script.py
Now I try to run this app from crontab, using a simple script to check whether the app is already running. If it isn't, the script starts the application.
#!/bin/bash
pidof appstart.py >/dev/null
if [[ $? -ne 0 ]] ; then
python /path_to_my_app/appstart.py &
fi
The bad side of this approach is that, when checking the PID, the script only matches against the first word of the command column in the ps aux table, which in this example will always be python, and it skips the script name (appstart). So when I run another Python-based app, the check fails... Maybe somebody knows how to check this in a proper way?
This might be a question better suited for Unix & Linux Stack Exchange.
However, it's common to use pgrep instead of pidof for applications like yours:
$ pidof appstart.py # nope
$ pidof python # works, but it can be different python
16795
$ pgrep appstart.py # nope, without -f pgrep matches only against the process name, which is just 'python' here
$ pgrep -f appstart.py # -f is for 'full', it searches the whole commandline (so it finds appstart.py)
16795
From man pgrep: The pattern is normally only matched against the process name. When -f is set, the full command line is used.
Maybe you should instead check for a PID file created by your application?
That also lets you track different instances of the same script if needed. Something like this:
#!/usr/bin/env python3
import os
import sys
import atexit

PID_file = "/tmp/app_script.pid"
PID = str(os.getpid())

if os.path.isfile(PID_file):
    sys.exit('{} already exists!'.format(PID_file))
open(PID_file, 'w').write(PID)

def cleanup():
    os.remove(PID_file)

atexit.register(cleanup)

# DO YOUR STUFF HERE
After that you'll be able to check whether the file exists, and if it does, retrieve the PID of your script:
[ -f /tmp/app_script.pid ] && ps up $(cat /tmp/app_script.pid) >/dev/null && echo "Started" || echo "Not Started"
You could also do the whole thing in Python without the bash script around it, by creating a pidfile somewhere writeable.
import os
import sys

pidpath = os.path.abspath('/tmp/myapp.pid')

def myfunc():
    """
    Your logic goes here
    """
    return

if __name__ == '__main__':
    # check for existing pidfile and fail if true
    if os.path.exists(pidpath):
        print('Script already running.')
        sys.exit(1)
    else:
        # otherwise write current pid to file
        with open(pidpath, 'w') as _f:
            _f.write(str(os.getpid()))
        try:
            # call your function
            myfunc()
        except Exception as e:
            # report the failure; the pidfile is removed in the finally block
            print('Exception: {}'.format(e))
            sys.exit(1)
        finally:
            # clean up after yourself whether the call succeeded or not
            os.remove(pidpath)

How to run a background process and do *not* wait?

My goal is simple: kick off rsync and DO NOT WAIT.
Python 2.7.9 on Debian
Sample code:
rsync_cmd = "/usr/bin/rsync -a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3}".format(remote_user, remote_server, file1, file1)
rsync_cmd2 = "/usr/bin/rsync -a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3} &".format(remote_user, remote_server, file1, file1)
rsync_path = "/usr/bin/rsync"
rsync_args = shlex.split("-a -e 'ssh -i /home/mysuser/.ssh/id_rsa' {0}@{1}:'{2}' {3}".format(remote_user, remote_server, file1, file1))
#subprocess.call(rsync_cmd, shell=True) # This isn't supposed to work but I tried it
#subprocess.Popen(rsync_cmd, shell=True) # This is supposed to be the solution but not for me
#subprocess.Popen(rsync_cmd2, shell=True) # Adding my own shell "&" to background it, still fails
#subprocess.Popen(rsync_cmd, shell=True, stdin=None, stdout=None, stderr=None, close_fds=True) # This doesn't work
#subprocess.Popen(shlex.split(rsync_cmd)) # This doesn't work
#os.execv(rsync_path, rsync_args) # This doesn't work
#os.spawnv(os.P_NOWAIT, rsync_path, rsync_args) # This doesn't work
#os.system(rsync_cmd2) # This doesn't work
print "DONE"
(I've commented out the execution commands only because I'm actually keeping all of my trials in my code so that I know what I've done and what I haven't done. Obviously, I would run the script with the right line uncommented.)
What happens is this...I can watch the transfer on the server and when it's finished, then I get a "DONE" printed to the screen.
What I'd like to have happen is a "DONE" printed immediately after issuing the rsync command and for the transfer to start.
Seems very straight-forward. I've followed details outlined in other posts, like this one and this one, but something is preventing it from working for me.
Thanks ahead of time.
(I have tried everything I can find in StackExchange and don't feel like this is a duplicate because I still can't get it to work. Something isn't right in my setup and need help.)
Here is a verified example for the Python REPL:
>>> import subprocess
>>> import sys
>>> p = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(100)'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT); print('finished')
finished
How to verify that via another terminal window:
$ ps aux | grep python
Output:
user 32820 0.0 0.0 2447684 3972 s003 S+ 10:11PM 0:00.01 /Users/user/venv/bin/python -c import time; time.sleep(100)
Popen() starts a child process; it does not wait for it to exit. You have to call the .wait() method explicitly if you want to wait for the child process. In that sense, all subprocesses are background processes.
On the other hand, the child process may inherit various properties/resources from the parent, such as open file descriptors, the process group, its controlling terminal, and some signal configuration. This may prevent ancestor processes from exiting (e.g., Python subprocess .check_call vs .check_output), or the child may die prematurely on Ctrl-C (the SIGINT signal is sent to the foreground process group) or when the terminal session is closed (SIGHUP).
To disassociate the child process completely, you should make it a daemon. Sometimes something in between is enough; e.g., it may be enough to redirect the inherited stdout in a grandchild so that .communicate() in the parent returns when its immediate child exits.
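Along those lines, a minimal sketch of kicking off the rsync without waiting, assuming Python 3.3+ (subprocess.DEVNULL and start_new_session are not available on 2.7); the user, host, and paths below are placeholders standing in for remote_user, remote_server, and file1 from the question:
import subprocess

# Placeholder command; the question builds the real one from remote_user, remote_server, file1.
cmd = ["/usr/bin/rsync", "-a", "-e", "ssh -i /home/myuser/.ssh/id_rsa",
       "myuser@example.com:/remote/file", "/local/file"]

# No pipes back to the parent and a new session: "DONE" prints immediately,
# and the transfer is not tied to Ctrl-C or the terminal closing.
subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
                 start_new_session=True)
print("DONE")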
I encountered a similar issue while working with QNX devices and wanted a sub-process that runs independently of the main process and even keeps running after the main process terminates.
Here's the solution I found that actually works, creationflags=subprocess.DETACHED_PROCESS (note that this flag is Windows-only):
import subprocess
import time
pid = subprocess.Popen(["python", r"path_to_script\turn_ecu_on.py"], creationflags=subprocess.DETACHED_PROCESS)  # raw string so \t isn't read as a tab
time.sleep(15)
print("Done")
Link to the doc: https://docs.python.org/3/library/subprocess.html#subprocess.Popen
On Ubuntu, the following keeps running even after the Python app exits:
import os

url = "https://www.youtube.com/watch?v=t3kcqTE6x4A"
cmd = f"mpv '{url}' && zenity --info --text 'you have watched {url}' &"
os.system(cmd)

Launch a single python script as different processes differing by command line arguments

I have a Python script that takes command line arguments. I get the command line arguments by reading a Mongo database. I need to iterate over the Mongo query results and launch a separate process of the single script with different command line arguments from each result.
Key is, I need the launched processes to be:
separate processes share nothing
when killing the process, I need to be able to kill them all easily.
I think the command killall -9 script.py would work and satisfies the second constraint.
Edit 1
From the answer below, the launcher.py program looks like this:
def main():
    symbolPreDict = initializeGetMongoAllSymbols()
    keys = sorted(symbolPreDict.keys())
    for symbol in keys:
        # Display key.
        print(symbol)
        command = ['python', 'mc.py', '-s', str(symbol)]
        print command
        subprocess.call(command)

if __name__ == '__main__':
    main()
The problem is that mc.py has a call that blocks
receiver = multicast.MulticastUDPReceiver("192.168.0.2", symbolMCIPAddrStr, symbolMCPort)
while True:
    try:
        b = MD()
        data = receiver.read() # This blocks
        ...
    except Exception, e:
        print str(e)
When I run the launcher, it just executes one of the mc.py instances (there are at least 39). How do I modify the launcher program to run the launched script in the background, so that control returns to the launcher to launch more scripts?
Edit 2
The problem is solved by replacing subprocess.call(command) with subprocess.Popen(command)
One thing I noticed, though: if I run ps ax | grep mc.py, the PIDs all seem to be different. I don't think I care, since I can kill them all pretty easily with killall.
[Correction] kill them with pkill -f xxx.py
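For reference, a minimal sketch of the fixed launcher that also keeps the Popen handles, so the children can be terminated from the launcher itself instead of with killall/pkill (initializeGetMongoAllSymbols and mc.py come from the question above and are assumed to be available):
import subprocess

def launch_all():
    symbolPreDict = initializeGetMongoAllSymbols()
    procs = []
    for symbol in sorted(symbolPreDict.keys()):
        # Popen returns immediately, so all mc.py instances run in parallel.
        procs.append(subprocess.Popen(['python', 'mc.py', '-s', str(symbol)]))
    return procs

def stop_all(procs):
    # Alternative to killall/pkill: terminate exactly the children we started.
    for p in procs:
        p.terminate()

if __name__ == '__main__':
    procs = launch_all()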
There are several options for launching scripts from a script. The easiest are probably to use the subprocess or os modules.
I have done this several times to launch things to separate nodes on a cluster. Using os it might look something like this:
import os

for i in range(len(operations)):
    os.system("python myScript.py {:} {:} > out.log".format(arg1, arg2))
using killall you should have no problem terminating processes spawned this way.
Another option is to use subprocess which has got a wide range of features and is much more flexible than os.system. An example might look like:
import subprocess

for i in range(len(operations)):
    command = ['python', 'myScript.py', 'arg1', 'arg2']
    subprocess.call(command)
In both of these methods, the processes are independent and share nothing other than a parent PID.

python calling shell command (open) and waiting for command finish. How to get a signal once the job has finished?

I'm using an app called ShoeBox.app which can split a PSD file into several PNG files.
The app provides a command-line interface, and I tried it in the terminal, which works fine:
open /Applications/ShoeBox.app --args \"files=xxx.psd\"
and now I want to write a python script to do the work:
import os

if __name__ == '__main__':
    CMD = "open /Applications/ShoeBox.app --args \"files=xxx.psd\""
    os.system(CMD)
this also works fine.
Now I want to do something after the "splitting PSD to PNG" job finishes:
import os

if __name__ == '__main__':
    CMD = "open /Applications/ShoeBox.app --args \"files=xxx.psd\""
    os.system(CMD)
    do_something_to_pngs_func()
The problem is that the split job takes several seconds and I need to wait until it ends before calling my
do_something_to_pngs_func function.
But how can I know when the os.system(CMD) job has actually finished?
My interim solution is to add time.sleep(10) before calling my function, but that's not the best solution, of course.
Any advice will be appreciated, thanks :)
By default, the open command returns immediately. To make open wait for the completion of the application, use the -W and -n switches:
CMD = "open -W -n /Applications/ShoeBox.app --args \"files=xxx.psd\""
Reference:
https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/open.1.html
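A minimal sketch of the script with the waiting open call in place (the ShoeBox arguments are copied from the question, and do_something_to_pngs_func is the question's own function):
import os

if __name__ == '__main__':
    # -W makes open wait until the application exits; -n launches a new instance.
    CMD = "open -W -n /Applications/ShoeBox.app --args \"files=xxx.psd\""
    os.system(CMD)               # returns only after ShoeBox has quit
    do_something_to_pngs_func()  # safe to process the PNG files now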

Is it possible to output text from a Python script to the terminal as an executable command?

To be specific, I want a Python script that accepts a string from the user and interprets that string as a command in the terminal. In other words, my script should be able to be used as follows:
python testScript.py "command -arg1 -arg2 -arg3"
And the output should be as follows:
command -arg1 -arg2 -arg3
which executes the command with 3 arguments: arg1, arg2, and arg3.
i.e.,
python testScript.py "ls -lah"
Outputs the permissions of the current directory.
Likewise,
python testScript.py "/testarea ls -lah"
Would output the permissions of the directory, "/testarea"
Any suggestions or modules?
Running arbitrary user input can be generally considered a Bad Idea©, but if you really want to do it:
# testScript.py
import sys, os

if __name__ == "__main__":
    os.system(" ".join(sys.argv[1:]))
The most robust way of doing this is to use the subprocess module. Take a look at all the possible options.
https://docs.python.org/2/library/subprocess.html
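A minimal sketch of that approach, assuming the whole command is passed as a single argument as in the question (shlex.split keeps quoted arguments together and avoids invoking a shell):
# testScript.py
import shlex
import subprocess
import sys

if __name__ == "__main__":
    # Split the user's string into an argument list and run it without a shell.
    args = shlex.split(sys.argv[1])
    subprocess.call(args)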
Sure...
The most basic way is to use os:
import os, sys
os.system(sys.argv[1])
If you want to have better control over the calls, have a look at
the subprocess module, though. With that module you can do the same
as above, but also a lot more, like capturing the output of
the command and using it inside your program.
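For example, a minimal sketch of capturing the command's output with subprocess.check_output (the command string again comes from the script's first argument, as in the question):
import shlex
import subprocess
import sys

# Run the command and collect its standard output as text.
output = subprocess.check_output(shlex.split(sys.argv[1]), universal_newlines=True)
print(output)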
This is the best answer I came up with. I upvoted anyone who said to use the subprocess module or had a good alternative, as well.
import subprocess, threading

class Command(object):
    def __init__(self, cmd):
        self.cmd = cmd
        self.process = None

    def run(self, timeout):
        def target():
            print 'Thread started'
            self.process = subprocess.Popen(self.cmd, shell=True)
            self.process.communicate()
            print 'Thread finished'

        thread = threading.Thread(target=target)
        thread.start()
        thread.join(timeout)
        if thread.is_alive():
            print 'Terminating process'
            self.process.terminate()
            thread.join()
        print self.process.returncode

# This will run one command for 5 seconds:
command = Command("ping www.google.com")
command.run(timeout=5)
This will run the ping www.google.com command for 5 seconds and then time out. You can add an arbitrary number of arguments to the command string when you create the Command, separated by spaces.
This is an example of the command ls -lah:
command = Command("ls -lah")
command.run(timeout=5)
And an example of multiple commands in a single run:
command = Command("echo 'Process started'; sleep 2; echo 'Process finished'")
command.run(timeout=5)
Easy and robust, just how I like it!
