I have a strange problem with auto-running my Python application. As everybody knows, to run this kind of app I need to run:
python app_script.py
Now I'm trying to run this app from crontab, using one simple script that first checks whether the app is already running. If it isn't, the script starts it:
#!/bin/bash
pidof appstart.py >/dev/null
if [[ $? -ne 0 ]] ; then
    python /path_to_my_app/appstart.py &
fi
The bad side of this approach is that pidof matches only the process name (the first word in the ps aux command column), which in this example is always python, never the script name (appstart). So as soon as I run another Python-based app, the check fails... Does anybody know how to do this check properly?
This might be a question better suited for Unix & Linux Stack Exchange.
However, it's common to use pgrep instead of pidof for applications like yours:
$ pidof appstart.py # nope
$ pidof python # works, but it can be different python
16795
$ pgrep appstart.py # nope, it would match just 'python', too
$ pgrep -f appstart.py # -f is for 'full', it searches the whole commandline (so it finds appstart.py)
16795
From man pgrep: The pattern is normally only matched against the process name. When -f is set, the full command line is used.
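If you'd rather not shell out to pgrep at all, the same full-command-line match can be sketched in pure Python by scanning /proc (Linux-specific; the function name here is made up for illustration):

```python
import os

def find_pids(pattern):
    """Return PIDs whose full command line contains `pattern` (Linux only)."""
    pids = []
    for entry in os.listdir('/proc'):
        if not entry.isdigit():
            continue  # skip non-process entries like 'cpuinfo'
        try:
            with open('/proc/{}/cmdline'.format(entry), 'rb') as f:
                # argv items are NUL-separated in /proc/<pid>/cmdline
                cmdline = f.read().replace(b'\0', b' ').decode(errors='replace')
        except OSError:
            continue  # process exited while we were scanning
        if pattern in cmdline:
            pids.append(int(entry))
    return pids
```

This is essentially what `pgrep -f` does: it matches against the whole command line, so `find_pids('appstart.py')` would catch the interpreter-launched script that pidof misses.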
Maybe you'd be better off checking for a pid-file created by your application? This also lets you track different instances of the same script if needed. Something like this:
#!/usr/bin/env python3
import os
import sys
import atexit

PID_file = "/tmp/app_script.pid"
PID = str(os.getpid())

if os.path.isfile(PID_file):
    sys.exit('{} already exists!'.format(PID_file))

with open(PID_file, 'w') as f:
    f.write(PID)

def cleanup():
    os.remove(PID_file)

atexit.register(cleanup)

# DO YOUR STUFF HERE
After that you'll be able to check if file exists, and if it exists you'll be able to retrieve PID of your script.
[ -f /tmp/app_script.pid ] && ps up $(cat /tmp/app_script.pid) >/dev/null && echo "Started" || echo "Not Started"
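The same liveness check can also be done from Python instead of the shell one-liner; a minimal sketch (signal 0 is the conventional "does this PID exist" probe, and the pid-file path is the illustrative one from above):

```python
import os

def pid_running(pid):
    """Check whether a PID is alive without actually signalling it."""
    try:
        os.kill(pid, 0)  # signal 0: existence/permission check only
    except ProcessLookupError:
        return False     # no such process
    except PermissionError:
        return True      # exists, but owned by another user
    return True

def app_started(pid_file='/tmp/app_script.pid'):
    """Return True if the pid-file exists and names a live process."""
    if not os.path.isfile(pid_file):
        return False
    with open(pid_file) as f:
        try:
            pid = int(f.read().strip())
        except ValueError:
            return False  # stale or corrupt pid-file
    return pid_running(pid)
```

Note the stale-file case: if the app crashed without cleaning up, the file may name a PID that no longer exists (or was reused), which is why the check probes the PID rather than trusting the file alone.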
You could also do the whole thing in Python, without the bash script around it, by creating a pidfile somewhere writable.
import os
import sys

pidpath = os.path.abspath('/tmp/myapp.pid')

def myfunc():
    """
    Your logic goes here
    """
    return

if __name__ == '__main__':
    # check for existing pidfile and fail if true
    if os.path.exists(pidpath):
        print('Script already running.')
        sys.exit(1)
    else:
        # otherwise write current pid to file
        with open(pidpath, 'w') as _f:
            _f.write(str(os.getpid()))
        try:
            # call your function
            myfunc()
        except Exception as e:
            print('Exception: {}'.format(e))
            sys.exit(1)
        finally:
            # clean up after yourself whether or not something broke;
            # the finally block runs in both cases, so remove only here
            os.remove(pidpath)
prompt question generator
class SynthesisPromptGenerator:
    def wait_key(self):
        ''' Wait for a key press on the console and return it. '''
        result = None
        for singlePrompt in ["questionCat", "questionDog"]:
            try:
                result = raw_input(singlePrompt)
                print 'input is: ', result
            except IOError:
                pass
        return result
I have a PromptGenerator that generates multiple terminal prompt questions; after answering the first question, the second pops up, as:
questionCat
(and wait for keyboard input)
questionDog
(and wait for keyboard input)
My goal is to answer the questions automatically and dynamically:
class PromptResponder:
    def respond(self):
        generator = SynthesisPromptGenerator()
        child = pexpect.spawn(generator.wait_key())
        child.expect("\*Cat\*")
        child.sendline("yes")
        child.expect("\*Dog\*")
        child.sendline("no")
        child.expect(pexpect.EOF)

if __name__ == "__main__":
    responder = PromptResponder()
    responder.respond()
if the prompt question contains Cat then answer yes
if the prompt question contains Dog then answer no
So it comes down to:
how to get the prompt string from the terminal and filter based on it?
how to answer multiple prompt questions in Python?
I did some searching, but most of the questions I found are about shell scripts (echo yes | ./script), not about doing it in Python.
Thank you very much.
As suggested in the comments, use pexpect.
See pexpect on github, the official docs and this handy python for beginners walkthrough on pexpect.
For an example. Let's say this is your x.sh file:
#!/bin/bash
echo -n "Continue? [Y/N]: "
read answer
if [ "$answer" != "${answer#[Yy]}" ]; then
    echo -n "continuing.."
else
    echo -n "exiting.."
fi
You can do this:
import os, sys
import pexpect
# It's probably cleaner to use an absolute path here
# I just didn't want to include my directories..
# This will run x.sh from your current directory.
child = pexpect.spawn(os.path.join(os.getcwd(),'x.sh'))
child.logfile = sys.stdout
# Note I have to escape characters here because
# expect processes regular expressions.
child.expect("Continue\? \[Y/N\]: ")
child.sendline("Y")
child.expect("continuing..")
child.expect(pexpect.EOF)
print(child.before)
Result of the python script:
Continue? [Y/N]: Y
Y
continuing..
Although I have to say that it's a bit unusual to use pexpect with a bash script if you have the ability to edit it. It would be simpler to edit the script so that it no longer prompts:
#!/bin/bash
echo -n "Continue? [Y/N]: "
answer=y
if [ "$answer" != "${answer#[Yy]}" ]; then
    echo "continuing.."
else
    echo "exiting.."
fi
Then you're free to just use subprocess to execute it.
import os
import subprocess
subprocess.call(os.path.join(os.getcwd(),"x.sh"))
Or if you want the output as a variable:
import os
import subprocess
p = subprocess.Popen(os.path.join(os.getcwd(),"x.sh"), stdout=subprocess.PIPE)
out, error = p.communicate()
print(out)
I realise this might not be possible for you but it's worth noting.
I have a Java program that uses video from a framegrabber card. This program is launched through a Python script, launcher.py.
The easiest way I found to read the video stream is to make Java read from a named pipe, and this works perfectly. So my session is like:
$ mkfifo videopipe
$ cat /dev/video1>videopipe
and in a second terminal (since the cat command is blocking):
$ python launcher.py
I would like to automate this process. Unfortunately, the result is always the same: the Java application starts (confirmed through a print statement in the Java program), but then the terminal stalls and nothing appears, not even an exception.
Since this process works manually, I guess I am doing something wrong in the Python program. To simplify things, I isolated the piping part:
from subprocess import call, Popen, PIPE, check_call

BASH_SWITCHTO_WINTV = ['v4l2-ctl', '-d /dev/video1', '-i 2', '--set-standard=4']
BASH_CREATE_FIFO_PIPE = ['mkfifo', 'videopipe']
BASH_PIPE_VIDEO = 'cat /dev/video1>videopipe'

def run():
    try:
        print('running bash commands...')
        call(BASH_SWITCHTO_WINTV)
        call(BASH_CREATE_FIFO_PIPE)
        Popen(['cat', '/dev/video1'], stdout=open('videopipe', 'w'))
    except:
        raise RuntimeError('An error occured while piping the video')

if __name__ == '__main__':
    run()
which when run, outputs:
running bash commands...
Failed to open /dev/video1: No such file or directory
A little help would be very much appreciated :-)
If you're using shell=True, just pass a string:
BASH_PIPE_VIDEO = 'cat /dev/video1 > videopipe'
Currently, cat is passed to the shell as your script, and /dev/video1>videopipe is passed to that shell as a literal argument -- not parsed as part of the script text at all, and having no effect, since the script (just calling cat) doesn't look at its arguments.
Alternately, to avoid needless shell use (and thus shell-related bugs such as shellshock, and potential for injection attacks if you were accepting any argument from a non-hardcoded source):
Popen(['cat', '/dev/video1'], stdout=open('videopipe', 'w'))
On a note unrelated to your "cat to named pipe" question -- be sure you get your spaces correct.
BASH_SWITCHTO_WINTV = ['v4l2-ctl', '-d /dev/video1', ...]
...uses the name <space>/dev/video1, with a leading space, as the input device; it's the same as running v4l2-ctl "-d /dev/video1" in shell, which would cause the same problem.
Be sure that you split your arguments correctly:
BASH_SWITCHTO_WINTV = ['v4l2-ctl', '-d', '/dev/video1', ...]
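If you'd rather write the command as a single string and let Python do the word-splitting for you, shlex.split follows shell quoting rules and avoids exactly this class of mistake (a small sketch):

```python
import shlex

# shlex.split applies shell-style word splitting: each option and its
# value become separate list items, while quoted arguments containing
# spaces stay together.
args = shlex.split("v4l2-ctl -d /dev/video1 -i 2 --set-standard=4")
print(args)
# The resulting list is ready to hand to subprocess.call / Popen
# without shell=True.
```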
I'm making a program which executes commands I type into Linux. For instance:
~> Python myProgram start
~> cd Music (or some other linux command)
~/Music> Python myProgram doSomething
~/Music> cd ..
~>Python myProgram doSomethingElse
I guess the program must look something like this:
if sys.argv == "start":
    get processID
    echo processID >> /dev/shm/ID
    while True:
        wait for command
        method(argument)

if sys.argv == "doSomething":
    processID = read("/dev/shm/ID")
    tell process to run method(doSomething)

def Method()
def read()
My question is: Where do I start? Do I have to use Thread, Multiprocessing, Subprocess or Popen?
Any help is appreciated!
Here's a neat interface for creating command-line tools with Python; this may be a good place to start: https://docs.python.org/2/library/cmd.html
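For reference, a minimal cmd.Cmd sketch; the command names here are made up to mirror the question, not part of the library:

```python
import cmd

class MyShell(cmd.Cmd):
    """Tiny interactive shell; cmd dispatches 'foo bar' to do_foo('bar')."""
    prompt = '~> '

    def do_start(self, arg):
        """Hypothetical 'start' command."""
        print('starting...')

    def do_doSomething(self, arg):
        """Hypothetical command that takes an argument."""
        print('doing: {}'.format(arg))

    def do_quit(self, arg):
        """Exit the loop; returning True stops cmdloop()."""
        return True

# Interactively you would call MyShell().cmdloop(); onecmd() dispatches
# a single line, which is handy for non-interactive use:
MyShell().onecmd('doSomething music')
```

Each `do_*` method becomes a command, so adding commands is just adding methods; the loop, prompt, and help text come for free.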
I have a Python app on a remote server that I need to debug; when I run the app locally it prints some debug information (including Python tracebacks) that I need to monitor.
Thanks to jeremy I got to monitor the output file using tail -F, and studying his code I found the following variation of his command:
ssh root@$IP 'nohup python /root/python/run_dev_server.py &>> /var/log/myapp.log &'
This gets me almost exactly what I want: logging information and Python tracebacks. But I do not get any of the information displayed using print from Python, which I need.
So I also tried his command:
ssh root@$IP 'nohup python /root/python/run_dev_server.py 2>&1 >> /var/log/myapp.log &'
It logs the output of the program from print and also the logging information, but all the tracebacks are lost, so I cannot debug the Python exceptions.
Is there a way I can capture all the information produced by the app?
Thanks in advance for any suggestion.
I would suggest doing something like this:
/usr/bin/nohup COMMAND ARGS >> /var/log/COMMAND.log 2>&1 &
/bin/echo $! > /var/run/COMMAND.pid
nohup keeps the process alive after your terminal/ssh session is closed, and the redirections save all stdout and stderr to /var/log/COMMAND.log for you to "tail -f" later on. Note the order: 2>&1 must come after the >> redirection; written the other way around, stderr is duplicated onto the terminal before stdout is redirected, which is exactly why your tracebacks never reached the log.
To get the stacktrace output (which you could print to stdout, or do something fancy like email it) add the following lines to your python code.
import sys
import traceback

_old_excepthook = sys.excepthook

def myexcepthook(exctype, value, tb):
    # if exctype == KeyboardInterrupt: # handle keyboard events
    # Print to stdout for now, but could email or anything here.
    traceback.print_tb(tb)
    _old_excepthook(exctype, value, tb)

sys.excepthook = myexcepthook
This will catch all exceptions (Including keyboard interrupts, so be careful)
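An alternative sketch, if you'd rather route uncaught exceptions through the logging module so they land in the same handlers (and log file) as the rest of your messages; the 'myapp' logger name is illustrative:

```python
import logging
import sys

log = logging.getLogger('myapp')  # illustrative logger name

def log_excepthook(exctype, value, tb):
    # logging renders the full traceback for us via exc_info, so the
    # stacktrace ends up wherever this logger's handlers point
    log.error('Uncaught exception', exc_info=(exctype, value, tb))

sys.excepthook = log_excepthook
```

Since logging handles the formatting, you get the complete traceback in the log without touching print or stderr redirection at all.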
I am using Python's multiprocessing module to spawn a new process, as follows:
import multiprocessing
import os
d = multiprocessing.Process(target=os.system,args=('iostat 2 > a.txt',))
d.start()
I want to obtain the PID of the iostat command, i.e. of the command executed via the multiprocessing module.
When I execute:
d.pid
it gives me the PID of the subshell in which this command is running.
Any help will be valuable. Thanks in advance.
Similar to @rakslice, you can use psutil:
import signal, psutil

def kill_child_processes(parent_pid, sig=signal.SIGTERM):
    try:
        parent = psutil.Process(parent_pid)
    except psutil.NoSuchProcess:
        return
    children = parent.children(recursive=True)
    for process in children:
        process.send_signal(sig)
Since you appear to be using Unix, you can use a quick ps command to get the details of the child processes, like I did here (this is Linux-specific):
import subprocess, os, signal

def kill_child_processes(parent_pid, sig=signal.SIGTERM):
    ps_command = subprocess.Popen("ps -o pid --ppid %d --noheaders" % parent_pid, shell=True, stdout=subprocess.PIPE)
    ps_output = ps_command.stdout.read()
    retcode = ps_command.wait()
    assert retcode == 0, "ps command returned %d" % retcode
    for pid_str in ps_output.split("\n")[:-1]:
        os.kill(int(pid_str), sig)
For your example you may use the subprocess package. By default it executes the command without a shell (unlike os.system(), which always uses one), and it provides a PID:
from subprocess import Popen
p = Popen('iostat 2 > a.txt', shell=True)
processId = p.pid
p.communicate() # to wait until the end
The Popen also provides ability to connect to standard input and output of the process.
note: before using shell=True be aware of the security considerations.
I think with the multiprocess module you might be out of luck since you are really forking python directly and are given that Process object instead of the process you are interested in at the bottom of the process tree.
An alternative way, but perhaps not optimal way, to get that pid is to use the psutil module to look it up using the pid obtained from your Process object. Psutil, however, is system dependent and will need to be installed separately on each of your target platforms.
Note: I'm not currently at a machine I typically work from, so I can't provide working code nor play around to find a better option, but will edit this answer when I can to show how you might be able to do this.
[me@localhost ~]$ echo $$
30399
[me@localhost ~]$ cat iostat.py
#!/usr/bin/env python3.4
import multiprocessing
import os
d = multiprocessing.Process(target=os.system,args=('iostat 2 > a.txt',))
d.start()
[me@localhost ~]$ ./iostat.py &
[1] 31068
[me@localhost ~]$ watch -n 3 'pstree -p 30399'
[me@localhost ~]$
This gave me the PID of iostat in the pstree output (see image).