I am developing some Python (3.6.1) code to install an application on Windows 7. The code I use is this:
winCMD = r'"C:\PowerBuild\setup.exe" /v"/qr /l C:\PowerBuild\TUmsi.log"'
output = subprocess.check_call(winCMD, shell = True)
The application is installed successfully. The problem is that it always requires a reboot after it finishes (a popup with the message "You must restart your system for the configuration changes made to take effect. Click Yes to restart now or No if you plan to restart later.").
I tried to insert the parameter "/forcerestart" (source here) into the installation command, but it still stops and asks for the reboot:
def installApp():
    winCMD = r'"C:\PowerBuild\setup.exe" /v"/qr /forcerestart /l C:\PowerBuild\TUmsi.log"'
    output = subprocess.check_call(winCMD, shell=True)
Another attempt was to follow up with a command like the one below, but since the previous command has not finished yet (as I understand it), I realized it will never be reached:
rebootSystem = 'shutdown -t 0 /r /f'
subprocess.Popen(rebootSystem, stdout=subprocess.PIPE, shell=True)
Has anyone had such an issue and managed to solve it?
As an ugly workaround, if you're not time-critical but you want to emphasise the "automatic" aspect, why not
run the installCMD in a thread
wait sufficiently long to be sure that the command has completed
perform the shutdown
like this:
import subprocess, threading, time

def installApp():
    winCMD = r'"C:\PowerBuild\setup.exe" /v"/qr /l C:\PowerBuild\TUmsi.log"'
    output = subprocess.check_call(winCMD, shell=True)

t = threading.Thread(target=installApp)
t.start()
time.sleep(1800) # half an hour should be enough
rebootSystem = 'shutdown -t 0 /r /f'
subprocess.Popen(rebootSystem, stdout=subprocess.PIPE, shell=True)
Another (safer) way would be to find out which file is created last in the installation, and monitor for its existence in a loop like this:
while not os.path.isfile("somefile"):
    time.sleep(60)
time.sleep(60) # another minute for safety
# perform the reboot
To be clean, you'd have to use subprocess.Popen for the installation process, export it as a global, and call terminate() on it in the main process; but since you're calling a shutdown, that's not necessary.
(to be clean, we wouldn't have to do that hack in the first place)
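The wait-for-file idea can be sketched generically like this. Everything here is a placeholder: the sentinel file name is arbitrary, and a short sleep stands in for the real installer, so the delays are scaled down from the 60-second values suggested above:

```python
import os
import threading
import time

SENTINEL = "install_done.txt"  # hypothetical: the last file the installer creates

def install():
    # Stand-in for the long-running installer: sleep briefly,
    # then create the sentinel file.
    time.sleep(0.5)
    open(SENTINEL, "w").close()

threading.Thread(target=install).start()

# Poll until the sentinel file shows up, then trigger the reboot.
while not os.path.isfile(SENTINEL):
    time.sleep(0.1)  # use 60 s in practice

print("sentinel found, rebooting now")  # replace with the real shutdown call
os.remove(SENTINEL)
```

The polling loop never depends on how long the installer takes, which is what makes this safer than a fixed sleep.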
Related
I have two Python scripts that use two different cameras for a project I am working on, and I am trying to run them both from inside another script or within each other; either way is fine.
import os
os.system('python 1.py')
os.system('python 2.py')
My problem, however, is that they don't run at the same time; I have to quit the first one for the next to open. I also tried doing it with bash, using the & shell operator:
python 1.py &
python 2.py &
And this does in fact make them both run; however, the issue is that they both run endlessly in the background, and I need to be able to close them easily. Any suggestion for what I can do to avoid the issues with these implementations?
You could do it with multiprocessing:
import os
import time
import psutil
from multiprocessing import Process

def run_program(cmd):
    # Function that the processes will run
    os.system(cmd)

# Initiate processes with the desired arguments
program1 = Process(target=run_program, args=('python 1.py',))
program2 = Process(target=run_program, args=('python 2.py',))

# Start our processes simultaneously
program1.start()
program2.start()

def kill(proc_pid):
    process = psutil.Process(proc_pid)
    for proc in process.children(recursive=True):
        proc.kill()
    process.kill()

# Wait 5 seconds and kill the first program
time.sleep(5)
kill(program1.pid)
program1.join()

# Wait another second and kill the second program
time.sleep(1)
kill(program2.pid)
program2.join()

# Print the current status of our programs
print('1.py alive status: {}'.format(program1.is_alive()))
print('2.py alive status: {}'.format(program2.is_alive()))
One possible method is to use systemd to control your process (i.e. treat them as daemons).
This is how I control my Python servers, since they need to run in the background, completely detached from the current tty, so I can exit my connection to the machine and the processes continue. You can then also stop the server later using systemctl, as explained below.
Instructions:
Create a .service file and save it in /etc/systemd/system, with contents along the lines of:
[Unit]
Description=daemon one
[Service]
ExecStart=/path/to/1.py
and repeat with a second unit file pointing to 2.py. (For ExecStart to run the scripts directly, they must be executable and start with a shebang line such as #!/usr/bin/env python.)
Then you can use systemctl to control your daemons.
First reload all config files with:
systemctl daemon-reload
then start either of your daemons (where my_daemon.service is one of your unit files):
systemctl start my_daemon
it should now be running and you should find it in:
systemctl list-units
You can also check its status with:
systemctl status my_daemon
and stop/restart them with:
systemctl stop|restart my_daemon
Use subprocess.Popen. This will create a child process and return an object whose pid attribute gives its process ID.
pid = Popen(["python", "1.py"]).pid
And then check out these functions for communicating with the child process and checking if it is still running.
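For instance, a sketch of checking on and then stopping the child. An inline one-liner stands in for 1.py here so the example is self-contained:

```python
import subprocess
import sys
import time

# Inline stand-in for "python 1.py": a script that sleeps for a while.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(10)"])
pid = child.pid

print(child.poll())   # None -> still running
child.terminate()     # sends SIGTERM on POSIX
child.wait()
print(child.poll() is None)   # False -> no longer running
```

poll() is the non-blocking check: it returns None while the child is alive and the exit code once it has finished.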
Generically, this is a well-answered question. Linux doesn't allow non-privileged users to lower a PID's niceness, and running things as root is its own can of worms.
That said, here are my specifics: I've got a user account that manages a few processes and has passwordless sudo privileges for renice and a few other commands it uses. I also have a script that is the common entry point for all users on this system. This script can run regular user programs as well as the processes managed by the special account. So the script, when run with a specific option, should renice if it can, but fail silently if it cannot.
The code I've got for this looks like:
subprocess.Popen(["sudo", "renice", "-20", str(process.pid)],
                 # shell = True,
                 stdout=subprocess.DEVNULL,
                 stderr=subprocess.STDOUT)
If I have shell = True commented out, the process gets its new niceness, but if I'm running as an unprivileged user, sudo kicks out its password prompt and wrecks my terminal output. Keystrokes become invisible and everything gets stupid looking. If I uncomment shell = True, I get no terminal output. However, the process doesn't get its niceness changed, even if I run it as root.
The corrupted terminal output may well be down to the terminal emulator I'm using (haven't tried it with another one) but I want to merge these behaviors. Silence from sudo no matter what, but a niceness change if the user can sudo successfully.
Any pointers?
I think it's because sudo requires a TTY, even when a password is not necessary.
Try providing one:
import os
import pty
import subprocess
master, slave = pty.openpty()
p = subprocess.Popen(
    ["sudo", "id", "-a"],
    stdin=slave, stdout=slave, stderr=slave
)
os.close(slave)
output = os.read(master, 1026)
os.close(master)
print(output)
The code above should print something like uid=0(root) gid=0(root) groups=0(root). If it does, then replace id with renice, remove unnecessary os.read and you should be good.
Update: in OP's case, it had failed for another reason. Adding start_new_session=True to Popen had it fail silently for unprivileged users and succeed as root.
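The start_new_session fix from the update can be sketched like this. Plain id stands in for the sudo renice call here so the example runs unprivileged; the redirects are what silence any output you don't want on the terminal:

```python
import subprocess

# start_new_session=True detaches the child from our controlling
# terminal, so a tool like sudo cannot grab the TTY to prompt.
# Here `id -u` stands in for the real `sudo renice ...` invocation.
proc = subprocess.Popen(
    ["id", "-u"],
    start_new_session=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.DEVNULL,
)
out, _ = proc.communicate()
print(out.decode().strip())  # the numeric uid
```

start_new_session=True (Python 3.2+) runs setsid() in the child, which is the detail that made the difference for the OP.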
When running a secondary python script:
Is it possible to run subprocess.Popen, subprocess.call, or even execfile in a new terminal (as in, simply a different terminal than the one where the script is run)?
Alternatively, if I open two terminals first, before running my main program, can I then point the secondary script to the second terminal? (So somehow getting the IDs of the open terminals, and then using a specific one among them to perform the subprocess.)
An example: two subprocesses to be run, where first.py should be called first and only then second.py. The two scripts are interdependent (first.py goes into a wait mode until second.py is run, then first.py resumes), and I don't know how to make this communication work between them in terms of subprocesses.
import subprocess
command = ["python", "first.py"]
command2 = ["python", "second.py"]
n = 5
for i in range(n):
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    p2 = subprocess.Popen(command2, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        output = p.stdout.readline().strip()
        print output
        if output == 'stop':
            print 'success'
            p.terminate()
            p2.terminate()
            break
Framework (Ubuntu, python 2.7)
I guess you want something like
subprocess.call(['xterm','-e','python',script])
Good old xterm has almost no frills; on a Freedesktop system, maybe run xdg-terminal instead. On Debian, try x-terminal-emulator.
However, making your program require X11 is in most cases a mistake. A better solution is to run the subprocesses with output to a log file (or a socket, or whatever) and then separately run tail -f on those files (in a different terminal, or from a different server over ssh, or with output to a logger which supports rsyslog, or or or ...) which keeps your program simple and modular, free from "convenience" dependencies.
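A sketch of that log-file approach (the path and the inline worker script are placeholders): the child writes to a log file that you can watch from any other terminal with tail -f:

```python
import subprocess
import sys

# Run the worker with its output captured in a log file; watch it
# live from any other terminal with `tail -f /tmp/worker.log`.
worker = [sys.executable, "-c", "print('frame 1'); print('frame 2')"]
with open("/tmp/worker.log", "w") as log:
    subprocess.call(worker, stdout=log, stderr=subprocess.STDOUT)

print(open("/tmp/worker.log").read())
```

This keeps the program itself free of any terminal-emulator dependency; the "display" is just whatever you point at the file.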
If you're using tmux, you can specify which target you want the command to run in:
tmux send -t foo.0 ls ENTER
So, if you've created a tmux session named foo, you should be able to do:
my_command = 'ls'
tmux_cmd = ['tmux', 'send', '-t', 'foo.0', my_command, 'ENTER']
p = subprocess.Popen(tmux_cmd)
You can specify the tty of the terminal window you wish the command to be carried out in:
ls > /dev/ttys004
However, I would recommend going for the tmux approach for greater control (see my other answer).
I am writing a program that does the submission and bookkeeping of tasks to an external computing grid. In order to submit such a task, I need to set up the correct environment (read: execute a bash setup script) and then execute the bash command to submit the task. To complicate matters, the task being submitted can rely on customized code which needs to be compiled locally in order to be tested before being uploaded to the grid. The compilation takes a certain amount of time, and the compiler produces output to the bash shell at unpredictable and variable intervals. You'll see how this is relevant by looking at my attempt at a solution:
## ---------------------------------------------------------
def shell_command(poll, shell, command):
    """
    Sends a command to the shell
    """
    output = ''
    ## Send command
    shell.stdin.write(command + '\n')
    shell.stdin.flush()
    ## Wait for output
    while poll.poll(500):
        result = shell.stdout.readline()
        ## Print and record output
        print result,
        output += result
    return output
## ---------------------------------------------------------
def start_shell():
    """
    Starts the shell associated to this job
    """
    ## Start the shell
    shell = subprocess.Popen(['bash'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    ## Associate poll to shell
    poll = select.poll()
    poll.register(shell.stdout.fileno(), select.POLLIN)
    ## Set up environment
    shell_command(poll, shell, 'export ENVIRONMENT_VARIABLE=/path/to/stuff')
    shell_command(poll, shell, 'source $ENVIRONMENT_VARIABLE/setup.sh')
    return poll, shell

## Main code
poll, shell = start_shell()
shell_command(poll, shell, 'compile local code')
[...] do some testing on the compiled code [...]
shell_command(poll, shell, 'submit task on the grid')
So the issue I encounter is that the correct execution of the code depends on the timeout I give to poll.poll(timeout). I can always give a ridiculously long timeout, and then the code never fails, but it takes a correspondingly long time before the code finishes. With a short timeout, the execution of the code will be interrupted as soon as the compiler provides no output for longer than timeout.
I tried using subprocess.Popen.communicate(), but it doesn't seem to allow me to pass multiple commands to the same shell (and allow me to keep the shell alive for later), and I don't want to have to setup the environment every time I need to issue a new command.
It seems to me that select.poll can only detect when output is produced on stdout, but what I would really like to do is detect the prompt return. Is this possible in this context? Any other ideas?
Turns out pexpect was present in the environment in which I wish my code to run, so I tried it but with minimal success, and it would take too long to explain why. I'm satisfied for the moment with modifying the shell_command function like this:
## ---------------------------------------------------------
def shell_command(poll, shell, command):
    """
    Sends a command to the shell
    """
    output = ''
    ## Send command
    shell.stdin.write(command + '; echo awesomeapplesauce\n')
    shell.stdin.flush()
    ## Wait for end of process, signaled by awesomeapplesauce
    print 'executing \'{0}\''.format(command)
    while True:
        if poll.poll(500):
            result = shell.stdout.readline()
            output += result
            if 'awesomeapplesauce' in result: break
            print result,
    return output
It's a bit of a hack I guess, but it's sufficiently robust for my purposes. In words, at the end of every command sent to the shell, chain an echo command that echoes a unique string, and then wait for that string to be printed out to terminate the polling loop and move on. If anyone has a more fundamental way of spotting a prompt return, I would still be delighted to learn about it!
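A variant of the same sentinel trick that also recovers each command's exit status: echo the sentinel together with $? and parse it when it appears. The sentinel string is arbitrary, and this sketch is written in Python 3 (the idea is identical in 2), blocking on readline instead of poll:

```python
import subprocess

SENTINEL = "=-=DONE=-="  # arbitrary marker unlikely to appear in real output

shell = subprocess.Popen(
    ["bash"], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT, universal_newlines=True,
)

def shell_command(command):
    # Append an echo of the sentinel plus the command's exit status,
    # then read lines until the sentinel comes back.
    shell.stdin.write(command + '; echo "{} $?"\n'.format(SENTINEL))
    shell.stdin.flush()
    output, status = '', None
    while True:
        line = shell.stdout.readline()
        if line.startswith(SENTINEL):
            status = int(line.split()[-1])
            break
        output += line
    return output, status

out, status = shell_command('echo hello')
print(out.strip(), status)   # hello 0
out, status = shell_command('false')
print(status)                # 1
```

Because readline blocks until the sentinel line arrives, there is no timeout to tune, and the shell (with its environment) stays alive between commands.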
Does looking at the return code like this work?
from subprocess import call

out = open('/tmp/out.log', 'w')
err = open('/tmp/err.log', 'w')
ret = call('export FOO=foobar;echo $FOO', stdout=out, stderr=err, shell=True)
print ret
ret = call('test -e /tmp/whoosh', stdout=out, stderr=err, shell=True)
print ret
I have written a Python application, using Flask, that serves a simple website that I can use to start playback of streaming video on my Raspberry Pi (microcomputer). Essentially, the application allows be to use my phone or tablet as a remote control.
I tested the application on Mac OS, and it works fine. After deploying it to the Raspberry Pi (with the Raspbian variant of Debian installed), it serves the website just fine, and starting playback also works as expected. But, stopping the playback fails.
Relevant code is hosted here: https://github.com/lcvisser/mlbviewer-remote/blob/master/remote/mlbviewer-remote.py
The subprocess is started like this:
cmd = 'python2.7 mlbplay.py v=%s j=%s/%s/%s i=t1' % (team, mm, dd, yy)
player = subprocess.Popen(cmd, shell=True, bufsize=-1, cwd=sys.argv[1])
This works fine.
The subprocess is supposed to stop after this:
player.send_signal(signal.SIGINT)
player.communicate()
This does work on Mac OS, but it does not work on the Raspberry Pi: the application hangs until the subprocess (started as cmd) is finished by itself. It seems like SIGINT is not sent or not received by the subprocess.
Any ideas?
(I have posted this question also here: https://unix.stackexchange.com/questions/133946/application-becomes-non-responsive-to-requests-on-raspberry-pi as I don't know if this is an OS problem or if it a Python/Flask-related problem.)
UPDATE:
Trying to use player.communicate() as suggested by Jan Vlcinsky below (and after finally seeing the warning here) did not help.
I'm thinking about using the solution proposed by Jan Vlcinsky, but if Flask does not even receive the request, I don't think that would resolve the issue.
UPDATE 2:
Yesterday night I was fortunate to have a situation in which I was able to exactly pinpoint the issue. Updated the question with relevant code.
I feel like the solution of Jan Vlcinsky will just move the problem to a different application, which will keep the Flask application responsive, but will let the new application hang.
UPDATE 3:
I edited the original part of the question to remove what I now know not to be relevant.
UPDATE 4: After the comments from @shavenwarthog, the following information might be very relevant:
On Mac, mlbplay.py starts something like this:
rtmpdump <some_options_and_url> | mplayer -
When sending SIGINT to mlbplay.py, it terminates the process group created by this piped command (if I understood correctly).
On the Raspberry Pi, I'm using omxplayer, but to avoid having to change the code of mlbplay.py (which is not mine), I made a script called mplayer, with the following content:
#!/bin/bash
MLBTV_PIPE=mlbpipe

if [ ! -p $MLBTV_PIPE ]
then
    mkfifo $MLBTV_PIPE
fi

cat <&0 > $MLBTV_PIPE | omxplayer -o hdmi $MLBTV_PIPE
I'm now guessing that this last line starts a new process group, which is not terminated by the SIGINT signal and thus making my app hang. If so, I should somehow get the process group ID of this group to be able to terminate it properly. Can someone confirm this?
UPDATE 5: omxplayer does handle SIGINT:
https://github.com/popcornmix/omxplayer/blob/master/omxplayer.cpp#L131
UPDATE 6: It turns out that somehow my SIGINT transforms into a SIGTERM somewhere along the chain of commands. SIGTERM is not handled properly by omxplayer, which appears to be the problem why things keep hanging. I solved this by implementing a shell script that manages the signals and translates them to proper omxplayer commands (sort-of a lame version of what Jan suggested).
SOLUTION: The problem was in player.send_signal(). The signal was not properly handled along the chain of commands, which caused the parent app to hang. The solution is to implement wrappers for commands that don't handle the signals well.
In addition, I used Popen(cmd.split()) rather than shell=True; this works a lot better when sending signals!
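The process-group issue from update 4 can be sketched as follows: start the pipeline in its own session and signal the whole group, so the shell, the fifo reader, and the player all receive the signal. A sleep | cat pipeline stands in for the rtmpdump | mplayer one:

```python
import os
import signal
import subprocess

# Stand-in pipeline for `rtmpdump ... | mplayer -`. With
# start_new_session=True the shell and both children land in a fresh
# process group whose pgid equals the shell's pid.
proc = subprocess.Popen("sleep 30 | cat", shell=True,
                        start_new_session=True)

# Signal the *group*, not just the shell, so the pipeline dies too.
os.killpg(os.getpgid(proc.pid), signal.SIGINT)
proc.wait()
print(proc.returncode)  # non-zero: terminated by the signal
```

proc.send_signal() alone would only reach the shell, leaving the pipeline's children running, which matches the hang described above.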
The problem is marked in following snippet:
@app.route('/watch/<year>/<month>/<day>/<home>/<away>/')
def watch(year, month, day, home, away):
    global session
    global watching
    global player
    # Select video stream
    fav = config.get('favorite')
    if fav:
        fav = fav[0] # TODO: handle multiple favorites
        if fav in (home, away):
            # Favorite team is playing
            team = fav
        else:
            # Use stream of home team
            team = home
    else:
        # Use stream of home team
        team = home
    # End session
    session = None
    # Start mlbplay
    mm = '%02i' % int(month)
    dd = '%02i' % int(day)
    yy = str(year)[-2:]
    cmd = 'python2.7 mlbplay.py v=%s j=%s/%s/%s' % (team, mm, dd, yy)
    # problem is here ----->
    player = subprocess.Popen(cmd, shell=True, cwd=sys.argv[1])
    # <----- problem is here
    # Render template
    game = {}
    game['away_code'] = away
    game['away_name'] = TEAMCODES[away][1]
    game['home_code'] = home
    game['home_name'] = TEAMCODES[home][1]
    watching = game
    return flask.render_template('watching.html', game=game)
You are starting up a new process to execute the shell command, but you do not wait until it completes. You seem to rely on the fact that the command-line process itself is a single one, but your frontend is not taking care of that and can easily start another one.
Another problem could be that you do not call player.communicate(), so your process could block if stdout or stderr gets filled by some output.
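That blocking risk is easy to demonstrate: a child that writes more than a pipe buffer's worth of data will stall unless the parent drains stdout, which communicate() does for you. The 1 MB figure below is just comfortably above a typical 64 KB pipe buffer:

```python
import subprocess
import sys

# Child emits ~1 MB, far more than the OS pipe buffer can hold.
noisy = [sys.executable, "-c", "import sys; sys.stdout.write('x' * 1000000)"]

proc = subprocess.Popen(noisy, stdout=subprocess.PIPE)
out, _ = proc.communicate()       # drains the pipe, so neither side blocks
print(len(out), proc.returncode)  # 1000000 0
```

Replacing communicate() with a bare proc.wait() here would deadlock: the child blocks writing to the full pipe while the parent blocks waiting for it to exit.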
Proposed solution - split process controller from web app
You are trying to create a UI for controlling a player. For this purpose, it would be practical to split your solution into a frontend and a backend. The backend would serve as the player controller and would offer methods like:
start
stop
nowPlaying
To integrate front and backend, multiple options are available, one of them being zerorpc as shown here: https://stackoverflow.com/a/23944303/346478
The advantage would be that you could very easily create other frontends (like a command-line one, or even a remote one).
One more piece of the puzzle: proc.terminate() vs send_signal.
The following code forks a 'player' (just a shell with sleep in this case), then prints its process information. It waits a moment, terminates the player, then verifies that the process is no more, it has ceased to be.
Thanks to #Jan Vlcinsky for adding the proc.communicate() to the code.
(I'm running Linux Mint LMDE, another Debian variation.)
source
# pylint: disable=E1101
import subprocess, time

def show_procs(pid):
    print 'Process Details:'
    subprocess.call(
        'ps -fl {}'.format(pid),
        shell=True,
    )

cmd = '/bin/sleep 123'
player = subprocess.Popen(cmd, shell=True)
print '* player started, PID', player.pid
show_procs(player.pid)

time.sleep(3)
print '\n*killing player'
player.terminate()
player.communicate()
show_procs(player.pid)
output
* player started, PID 20393
Process Details:
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
0 S johnm 20393 20391 0 80 0 - 1110 wait 17:30 pts/4 0:00 /bin/sh -c /bin/sleep 123
*killing player
Process Details:
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD