I'm developing an ADB client in Python, and I am planning to invoke the adb binary with subprocess to get the information.
Here is how I tried to invoke it to start the adb server.
check_output(['adb.exe', 'start-server'], stderr=STDOUT)
I do see adb running, but the program gets stuck after that.
I have tried shell=True, but that didn't change anything.
When I kill adb from Task Manager, the program exits and prints the right output.
How can I fix this? I assume the command doesn't exit because the daemon is still running.
I was able to work around this by starting the command in a separate thread and using the current thread for the other adb commands, since they return immediately.
Is there a more elegant solution?
You can do that with subprocess.Popen.
import subprocess
adb = subprocess.Popen(['adb.exe', 'start-server'])  # returns immediately, no blocking wait
# Do some other stuff while adb is running...
adb.terminate()  # Kill the process once you're done
This also has some advantages, such as the ability to send input to the process through stdin by using Popen.communicate().
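A minimal sketch of how Popen.communicate() works; 'sort' stands in for any command that reads stdin, since adb.exe is not assumed to be available on this machine:

```python
import subprocess

# communicate() sends the given string to stdin, waits for the process to
# exit, and returns its captured (stdout, stderr).
p = subprocess.Popen(
    ['sort'],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
out, err = p.communicate('banana\napple\n')  # send input, wait, collect output
print(out)  # the lines, sorted
```

Note that communicate() waits for the process to finish, so it fits short-lived commands, not a daemon you want to keep running.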
I am using pexpect to run a start command on an in-house application. The start command starts a number of processes. While the processes are starting one by one in the background everything looks good, but when the start command finishes and the pexpect process ends, the processes that were started also die.
child = pexpect.spawn('foo start')
child.logfile = log
child.wait()
For this scenario, I can use nohup and it works as expected.
child = pexpect.spawn('bash -c "nohup foo start"')
However, there is also an installer for the same in-house application that has the same issue, part of the installation is to start the processes. The installer is interactive and requires input, so nohup will not work.
How can I prevent the processes that are started by the installer from dying when the pexpect session ends?
Note: The start and install processes work fine when executed from a standard terminal session. They are not tied to the session in any way.
I couldn't find much about it in the documentation, but including the ignore_sighup=True option in the spawn call fixed my issue.
child = pexpect.spawn('foo start', ignore_sighup=True)
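A rough sketch of what that option appears to do under the hood (an assumption based on its name): the child is started with SIGHUP set to SIG_IGN, and ignored dispositions survive exec, so the child outlives the hangup delivered when the spawning session goes away. The same effect can be reproduced with plain subprocess on POSIX:

```python
import os
import signal
import subprocess
import sys
import time

# Start a child that ignores SIGHUP from birth. The preexec_fn runs in the
# child between fork and exec; SIG_IGN dispositions are inherited across exec.
child = subprocess.Popen(
    [sys.executable, '-c', 'import time; time.sleep(30)'],
    preexec_fn=lambda: signal.signal(signal.SIGHUP, signal.SIG_IGN),
)
time.sleep(0.5)                       # let the child finish starting up
os.kill(child.pid, signal.SIGHUP)     # simulate the session hangup
time.sleep(0.5)
still_running = child.poll() is None  # None means the child survived SIGHUP
child.terminate()                     # clean up the demo child
child.wait()
```

This is POSIX-only; on an interactive installer you would still drive the dialogue through pexpect, with ignore_sighup=True doing the equivalent of the preexec_fn above.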
I have a very simple problem but can't seem to find a simple solution anywhere.
I have an application running on my Pi which I start by typing into terminal and passing some arguments. For example:
sudo $HOME/Projects/myExampleApp MD_DWNC2378
This results in the console application starting and, as expected, it can take keyboard input.
Now, what I want to do is repeat the process described so far from a Python application. My Python application should be able to open myExampleApp in a terminal, get a reference to the console window, and then send any commands from my Python application as keyboard presses to myExampleApp.
On Windows, the pywinauto library does the job.
What is the best option for doing what I described on Linux running on my Pi 3?
Any suggestions would be very helpful.
Have a look at https://docs.python.org/2/library/cmd.html for receiving commands. Your application can be run with subprocess, as suggested by another user:
import cmd
import sys
import subprocess

class Controller(cmd.Cmd):
    cmd = 'while true; do cat /dev/stdin; sleep 1; done'
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stdin=subprocess.PIPE, shell=True)

    def default(self, cmd):
        self.p.stdin.write(cmd + "\n")
        a = self.p.stdout.readline()
        sys.stdout.write("subprocess returned {}".format(a))
        sys.stdout.flush()

if __name__ == '__main__':
    try:
        Controller().cmdloop()
    except:
        print('Exit')
Running this script will present you with a CLI. Any command sent in will be forwarded to the stdin of your application (simulated here by the shell loop shown).
If you don't need to process the returned stdout, you can skip the self.p.stdout.readline() call. If you do need the returned data, it may be better to change the readline() call to read(), which reads until the application sends an EOF.
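The difference between the two calls can be seen with a small sketch: readline() returns as soon as one line arrives, while read() blocks until the child closes its stdout (EOF).

```python
import subprocess
import sys

# A child that prints two lines and exits, closing its stdout.
p = subprocess.Popen(
    [sys.executable, '-c', "print('first'); print('second')"],
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
first = p.stdout.readline()  # just the first line
rest = p.stdout.read()       # everything remaining, up to EOF
p.wait()
```

With a long-running child that never closes stdout, the read() call would block indefinitely, which is why readline() is the safer default in the Controller above.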
The sudo requirement of your application can probably be handled by running the Python script itself with sudo. I'm no security expert, but be aware of the security risks of running the script as sudo, as well as of the shell=True parameter of the Popen call.
In a project I am working on, there is some code that starts up a long-running process using sudo:
subprocess.Popen(['sudo', '/usr/bin/somecommand', ...])
I would like to clean up this process when the parent exits. Currently, the subprocess keeps running when the parent exits (re-parented to init, of course).
I am not sure of the best solution to this problem. The code is limited to only running certain commands via sudo, and granting blanket authority to run sudo kill would be sketchy at best.
I don't have an open pipe to the child process that I can close (the child process is not reading from stdin), and I am not able to modify the code of the child process.
Are there any other mechanisms that might work in this situation?
First of all, I'll just answer the question. Though I do not think it is a good thing to do, it is what you asked for. I would wrap that child process in a small program that listens on stdin. You can then sudo that program; it will be able to run the process without invoking sudo itself, it will know the pid, and it will have the rights needed to kill the process when you ask it to through stdin.
However, such a situation generally means passwordless sudo and poor security. The more common technique is to lower your program's privileges, not to elevate them. In that case you create a runner program that is started by the superuser; it starts your main program with lowered privileges and listens on a pipe for communication. When a command needs to be run, your main program tells the runner, and the runner does the job. When the command needs to be terminated, you again tell the runner via the pipe.
The common rules are:
If you need superuser rights, you should give them to the very parent process.
If a child process needs to perform a privileged operation, it asks the top-level process to do it on its behalf.
The top-level process should be kept as small as possible and do as little as possible. The larger it is, the more holes in security it creates.
That's what many applications do. The first example that comes to mind is the Apache web server (at least on *nix), which has a small top-level program and preforked worker processes that are not run as root/wheel/whatever the superuser's username is.
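The wrapper idea above can be sketched in a few lines. This is only an illustration, with an invented one-word protocol; in real use the function would live in a small script started via sudo, and control would be sys.stdin (the pipe the unprivileged parent writes to):

```python
import subprocess

def run_supervised(argv, control):
    """Start argv as a child and terminate it when 'stop' arrives on control.

    control is any iterable of text lines; in the real wrapper it would be
    sys.stdin. The wrapper owns the child's pid, so it is allowed to kill it.
    """
    child = subprocess.Popen(argv)   # runs with the wrapper's privileges
    for line in control:             # wait for the parent's instruction
        if line.strip() == 'stop':
            break
    child.terminate()                # send SIGTERM to the child
    return child.wait()              # reap it and report how it exited
```

For example, run_supervised(['sleep', '30'], iter(['stop\n'])) starts the child and tears it down as soon as the "stop" line arrives.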
This will raise OSError: [Errno 1] Operation not permitted on the last line:
p = subprocess.Popen(['sudo', '/usr/bin/somecommand', ...], stdout=subprocess.PIPE)
print p.stdout.read()
p.terminate()
Assuming sudo will not ask for a password, one workaround is to make a shell script which calls sudo …
#!/bin/sh
sudo /usr/bin/somecommand
… and then do this in Python:
p = subprocess.Popen("/path/to/script.sh", cwd="/path/to", stdout=subprocess.PIPE)
print p.stdout.read()
p.terminate()
What is the recommended way to start long-running (bash) scripts on several remote servers via fabric, so that you can later re-attach to the process to check its status, eventually SIGTERM it, and get the exit code?
EDIT (10-Nov-2012):
In the meantime I found a question going in the same direction: HOW TO use fabric use with dtach,screen,is there some example
It seems that the preferred way would be to use screen or tmux.
http://www.fabfile.org/faq.html#why-can-t-i-run-programs-in-the-background-with-it-makes-fabric-hang
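screen or tmux remains the practical answer, since it also lets you re-attach interactively. As an aside, the core of what those tools do to keep the job alive is giving it its own session, which subprocess can also do directly. A sketch, with 'sleep 30' standing in for the long-running script and 'job.log' as an assumed log path:

```python
import os
import subprocess

# start_new_session=True makes the child call setsid(), so it is detached
# from the calling session and survives the SSH/fabric connection closing.
log = open('job.log', 'w')
job = subprocess.Popen(
    ['sleep', '30'],
    start_new_session=True,
    stdout=log,
    stderr=subprocess.STDOUT,
)
os.kill(job.pid, 0)       # existence check: raises OSError if the job died
job.terminate()           # later: SIGTERM it ...
exit_code = job.wait()    # ... and collect the exit code
```

Unlike screen, this gives no way to re-attach to the job's terminal; you only keep the pid and the log file, which is often enough for status checks and termination.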
I'm obtaining the PID of a CGI script using Python; however, the PID is not valid, i.e. I can't taskkill it from the command line. I get:
"Process: no process found with pid xxxx" where xxxx is the pid
I thought maybe I had to kill a parent Python shell instance, but os.getppid() doesn't work on Windows.
So then I installed psutil python module and can now get parent PID, but it just shows the parent as the actual WebServer (Abyss), which I don't think I want to kill, since it is the http process that I notice runs constantly and not just a CGI interpreter instance.
Using psutil I CAN get the process status of the actual script, using the pid returned by os.getpid(), and see that it is running. So the pid works for retrieving process information with psutil. But this gets me no closer to obtaining a PID I can actually use to kill the script, either with taskkill on the command line or via kill() from psutil!
What exactly is a cgi shell script from a process perspective, and if it is not a process, why is os.getpid() returning a pid?
Why are you sure that your CGI script is still running when you try to kill it? The web server starts one instance of the CGI script per request, and when the script finishes, it... just finishes.
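In other words, a CGI script is an ordinary short-lived process, so the pid from os.getpid() is only valid while the request is being handled. A sketch that simulates a finished "CGI script" and then tries to signal its now-stale pid:

```python
import os
import subprocess
import sys

# The child plays the role of the CGI script: it reports its own pid and exits.
child = subprocess.Popen(
    [sys.executable, '-c', 'import os; print(os.getpid())'],
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
pid = int(child.stdout.read())
child.wait()              # the "script" has finished; its pid is now stale
try:
    os.kill(pid, 0)       # signal 0 only checks that the process exists
    found = True
except OSError:           # ProcessLookupError on Python 3
    found = False
```

After wait() returns, found is False: the pid was real while the script ran, which is why psutil could report on it, but there is nothing left to kill once the request is over.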