I'm trying to send a command from the Python shell to Ubuntu to find the process listening on a particular port and kill it:
port = 8000
os.system("netstat -lpn | grep %s" % port)
Output:
tcp 0 0 127.0.0.1:8000 0.0.0.0:* LISTEN 22000/python
Then:
os.system("kill -SIGTERM 22000")
but got the following error:
sh: 1: kill: Illegal option -S
For some reason the full signal name -SIGTERM is not passed through to the OS, only -S. I can kill this process directly from the terminal without a problem, so it seems to be a Python or os issue. How can I run the kill command from Python?
Any help is appreciated
os.system uses sh to execute the command, not the bash you get in a terminal. The kill builtin in sh requires giving signal names without the SIG prefix. Change your os.system command line to kill -TERM 22000.
[EDIT] As @DJanssens suggested, using os.kill is a better option than calling the shell for such a simple thing.
You could try using
import os, signal
os.kill(process.pid, signal.SIGKILL)  # process.pid is the PID of the target process, e.g. 22000 here
documentation can be found here.
You could also use signal.CTRL_C_EVENT (Windows only), which corresponds to the Ctrl+C keystroke event.
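Putting the pieces together, here is a minimal sketch of the os.kill approach, assuming the PID can be parsed from the netstat output shown above (the column layout can vary between netstat versions, so the parsing is deliberately rough):

import os
import signal
import subprocess

port = 8000
# List listening sockets and filter for the port; check_output raises
# CalledProcessError if grep finds nothing.
output = subprocess.check_output("netstat -lpn | grep :%d" % port, shell=True)

for line in output.decode().splitlines():
    # The last column looks like "22000/python"; the part before the slash is the PID.
    pid_field = line.split()[-1]
    if "/" in pid_field:
        pid = int(pid_field.split("/")[0])
        os.kill(pid, signal.SIGTERM)  # graceful termination, no shell involved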
Related
I am writing a python script that calls a bash script.
from subprocess import call
rc = call("./try_me.sh")
How can I exit the running bash file without exiting the running python script?
I need something like Ctrl + C.
There are different ways to approach this problem.
It appears that you're writing some sort of CLI tool, since you referenced Ctrl+C. If that's the case, you can run the script in the background with & and send a SIGINT signal to stop it when you need to (see the sketch below).
import os
os.system('nohup ./try_me.sh &')
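A rough sketch of that idea; pkill -INT -f matches the script by its command line, which is a blunt instrument, and try_me.sh is just the example name from the question:

import os

# Launch the script in the background, detached from the terminal
os.system('nohup ./try_me.sh &')

# ... later, send SIGINT (the Ctrl+C signal) to it by name
os.system('pkill -INT -f try_me.sh')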
If you want stricter control, try using asyncio's subprocess management:
https://docs.python.org/3/library/asyncio-subprocess.html
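A minimal asyncio sketch, assuming Python 3.7+ and the same ./try_me.sh script; the 5-second timeout is arbitrary:

import asyncio
import signal

async def run_and_interrupt():
    # Start the bash script as a child process
    proc = await asyncio.create_subprocess_exec("./try_me.sh")
    try:
        # Give it some time to finish on its own
        await asyncio.wait_for(proc.wait(), timeout=5)
    except asyncio.TimeoutError:
        proc.send_signal(signal.SIGINT)  # equivalent to Ctrl+C
        await proc.wait()

asyncio.run(run_and_interrupt())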
After some research, I found that I should have used Popen to run the bash file, as @AKX suggested.
import os
from subprocess import Popen
r1 = Popen('printf "*Systempassword* \n" | sudo -S ./try_me.sh &', shell=True, preexec_fn=os.setsid)
When you need to stop the running bash file:
import os, signal
os.killpg(os.getpgid(r1.pid), signal.SIGTERM)  # send the signal to the whole process group
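A slightly cleaner variant of the same idea, assuming Python 3.2+ where start_new_session=True replaces preexec_fn=os.setsid:

import os
import signal
import subprocess

# Start the script in its own session (and therefore its own process group)
proc = subprocess.Popen(["./try_me.sh"], start_new_session=True)

# ... later, terminate the whole process group, children included
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)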
I'm trying to open Tcpdump to capture UDP packets from a Python script. Here is my code:
os.system("tcpdump -i wlp2s0 -n dst 8.8.8.8 -w decryptedpackets.pcap &")
testfile = urllib.URLopener()
s = socket(AF_INET, SOCK_DGRAM)
host = "8.8.8.8"
port = 5000
buf = 1024
addr = (host, port)
s.connect((host, port))
f = open("file.txt", "rb")
data = f.read(buf)
while (data):
if (s.sendto(data, addr)):
print "sending ..."
data = f.read(buf)
I am able to capture packets (the pcap file has content) if I manually execute this command in a shell:
tcpdump -i wlp2s0 -n dst 8.8.8.8 -w decryptedpackets.pcap &
However, if I use os.system() I can't capture the packets (when I open the pcap file, I find it empty).
I have verified that a process gets created when the Python script is executed:
root 10092 0.0 0.0 17856 1772 pts/19 S 10:25 0:00
tcpdump -i wlp2s0 -n dst 8.8.8.8 -w decryptedpackets.pcap
Also, I'm running this as a sudo user to avoid any permission problems.
Any suggestions on what could be causing this problem?
From the Python documentation:
os.system(command) Execute the command (a string) in a subshell. This
is implemented by calling the Standard C function system(), and has
the same limitations. Changes to sys.stdin, etc. are not reflected in
the environment of the executed command.
On Unix, the return value is the exit status of the process encoded in
the format specified for wait(). Note that POSIX does not specify the
meaning of the return value of the C system() function, so the return
value of the Python function is system-dependent.
On Windows, the return value is that returned by the system shell
after running command, given by the Windows environment variable
COMSPEC: on command.com systems (Windows 95, 98 and ME) this is always
0; on cmd.exe systems (Windows NT, 2000 and XP) this is the exit
status of the command run; on systems using a non-native shell,
consult your shell documentation.
The subprocess module provides more powerful facilities for spawning
new processes and retrieving their results; using that module is
preferable to using this function. See the Replacing Older Functions
with the subprocess Module section in the subprocess documentation for
some helpful recipes.
I think that os.system returns immediately and the script keeps going. There's no problem with the code, but you probably need to create a separate thread and call os.system with the tcpdump command there, since os.system returns right away.
Did you use the -w switch too when running from the command line instead of the script? If not, your problem might be buffering and you should have a look at the -U option. Apart from that, the -w switch should be used before the capture expression, i.e. the expression should be the last thing. In summary: tcpdump -i wlp2s0 -n -w out.pcap -U dst 8.8.8.8
– Steffen Ullrich
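Putting both suggestions together, a hedged sketch that manages tcpdump with subprocess and stops it cleanly so the capture file gets flushed (interface, filter and file names are taken from the question):

import signal
import subprocess
import time

# Start tcpdump as a managed child process instead of a fire-and-forget shell job.
# -U writes each packet to the file as it is captured, so the pcap is not empty
# even if the process is stopped before its output buffer fills.
tcpdump = subprocess.Popen(
    ["tcpdump", "-i", "wlp2s0", "-n", "-U", "-w", "decryptedpackets.pcap", "dst", "8.8.8.8"]
)

time.sleep(1)  # give tcpdump a moment to start capturing

# ... send the UDP traffic here ...

tcpdump.send_signal(signal.SIGINT)  # same as Ctrl+C; lets tcpdump close the file cleanly
tcpdump.wait()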
I am trying to kill a process tree using this shell command:
kill -TERM -- -3333
so in python I use subprocess:
subprocess.call(['kill', '-TERM', '--', '-3333'])
the process is terminated as expected but I get this message:
ERROR: garbage process ID "--".
Usage:
kill pid ... Send SIGTERM to every process listed.
kill signal pid ... Send a signal to every process listed.
kill -s signal pid ... Send a signal to every process listed.
kill -l List all signal names.
kill -L List all signal names in a nice table.
kill -l signal Convert between signal numbers and names.
Why do I get this message and what am I doing wrong?
I am using Python 2.6.5 on Ubuntu 10.04.
You are passing the kill command an argument it doesn't recognise. You could simply drop the --:
subprocess.call(['kill', '-TERM', '-3333'])
You probably should be passing in the PID without a dash as well; if -- is not supported, a negative PID (meaning a process group) probably isn't either, at which point you'd be signalling just the single process.
Note that you are not executing this through a shell. While your shell probably has its own kill implementation, Python instructs the OS to find the first kill binary on the PATH instead. The shell built-in may accept --, but that's not the command you are executing here.
If you must use the shell built-in, then you'll have to set shell=True and pass in a string command line:
subprocess.call('kill -TERM -- -3333', shell=True)
This uses /bin/sh; you can set a different shell to run the command through with the executable argument:
subprocess.call('kill -TERM -- -3333', shell=True, executable='/bin/bash')
Last but not least, you may not need the kill command at all. Python can send signals directly with the os.kill() function:
import os, signal
os.kill(3333, signal.SIGTERM)
and the os.killpg() function can send a signal to a process group:
import os, signal
os.killpg(3333, signal.SIGTERM)
I can't figure out how to close a bash shell that was started via Popen. I'm on windows, and trying to automate some ssh stuff. This is much easier to do via the bash shell that comes with git, and so I'm invoking it via Popen in the following manner:
p = Popen('"my/windows/path/to/bash.exe" | git clone or other commands')
p.wait()
The problem is that after bash runs the commands I pipe into it, it doesn't close. It stays open causing my wait to block indefinitely.
I've tried stringing an "exit" command at the end, but it doesn't work.
p = Popen('"my/windows/path/to/bash.exe" | git clone or other commands && exit')
p.wait()
But still, infinite blocking on the wait. After it finishes its task, it just sits at a bash prompt doing nothing. How do I force it to close?
Try Popen.terminate(); this might help kill your process. If you only have synchronously executing commands, try using subprocess.call() directly.
For example:
import subprocess
subprocess.call(["c:\\program files (x86)\\git\\bin\\git.exe",
"clone",
"repository",
"c:\\repository"])
0
The following is an example of using a pipe, but this is a little overcomplicated for most use cases and makes sense only if you are talking to a service that needs interaction (at least in my opinion).
p = subprocess.Popen(["c:\\program files (x86)\\git\\bin\\git.exe",
                      "clone",
                      "repository",
                      "c:\\repository"],
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
print p.stderr.read()
fatal: destination path 'c:\repository' already exists and is not an empty directory.
print p.wait()
128
This can be applied to ssh as well
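For completeness, a sketch of the Popen.terminate() approach mentioned above; the paths are the same examples as before, and wait(timeout=...) requires Python 3.3+:

import subprocess

p = subprocess.Popen(["c:\\program files (x86)\\git\\bin\\git.exe",
                      "clone", "repository", "c:\\repository"])
try:
    p.wait(timeout=60)   # give the command a minute to finish
except subprocess.TimeoutExpired:
    p.terminate()        # on Windows this calls TerminateProcess()
    p.wait()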
To kill the process tree, you could use the taskkill command on Windows:
Popen("TASKKILL /F /PID {pid} /T".format(pid=p.pid))
As @Charles Duffy said, your bash usage is incorrect.
To run a command using bash, use the -c parameter:
p = Popen([r'c:\path\to\bash.exe', '-c', 'git clone repo'])
In simple cases, you could use subprocess.check_call instead of Popen().wait():
import subprocess
subprocess.check_call([r'c:\path\to\bash.exe', '-c', 'git clone repo'])
The latter command raises an exception if the bash process returns a non-zero status (which indicates an error).
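For example, a small sketch of handling that exception (the bash path and command are the same placeholders as above):

import subprocess

try:
    subprocess.check_call([r'c:\path\to\bash.exe', '-c', 'git clone repo'])
except subprocess.CalledProcessError as exc:
    print("bash exited with status", exc.returncode)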
I am running the Python script shown below. The script does an ssh to the remote machine and runs a C program in the background. But on running the Python script I get the following output:
The output above means that a.out was run and the PID is [1] 2115.
However, when I log in to the remote machine and check for a.out via the ps command, I don't see it.
Another observation is that when I add a delay statement such as thread.sleep(20) to the Python script, and check the remote machine while the script is still running, a.out is active.
#!/usr/bin/python
import HostMod  # where the ssh function is written
COMMAND_PROMPT1 = '[#$] '
p = HostMod.HostModule()
obj1=p.HostLogin('10.200.2.197', 'root', 'newnet') #ssh connection to remote system
obj1.sendline ('./a.out > test.txt &')  # send the program to the remote machine to execute it there
obj1.expect (COMMAND_PROMPT1)
print obj1.before
#a.out program
#include <unistd.h>

int main()
{
    while (1)
    {
        sleep(10);
    }
    return 0;
}
Please try giving the absolute path of ./a.out.
Try using nohup
...
obj1.sendline ('nohup ./a.out > test.txt &')  # send the program to the remote machine to execute it there
but you should really not use a shell to invoke commands over ssh. The ssh protocol has built-in support for running commands. I am not sure how your HostMod module works, but you could try this from your shell (it would be easy to port to use subprocess; see the sketch after the transcript):
ssh somehost nohup sleep 20
<Ctrl+C>
ssh somehost
ps ax | grep sleep
And you should see your sleep process still running. This method does not instantiate a shell, which is much more reliable, since you may or may not have control over which shell is run, what is in your ~/.(...)rc files, etc. All in all, much more portable.
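A rough port of that shell example back to the question's command, using subprocess (the host name is a placeholder; nohup plus the output redirection lets ssh return while a.out keeps running on the remote machine):

import subprocess

# Run the command on the remote host; no interactive shell session is opened.
subprocess.check_call([
    "ssh", "somehost",
    "nohup ./a.out > test.txt 2>&1 &",
])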