I am completely new to the subprocess module, and I was trying to automate deauthentication attack commands. When I run airodump-ng wlan0mon, as you know, it looks for nearby APs and the clients connected to them.
Now when I run this command with, let's suppose, p = subprocess.run(["airodump-ng", "wlan0mon"], capture_output=True) in Python, the command runs until the user hits Ctrl+C, so it should save the last output in the variable when the user hits Ctrl+C. Instead I get this error:
Traceback (most recent call last):
File "Deauth.py", line 9, in <module>
p3 = subprocess.run(["airodump-ng","wlan0"], capture_output=True)
File "/usr/lib/python3.8/subprocess.py", line 491, in run
stdout, stderr = process.communicate(input, timeout=timeout)
File "/usr/lib/python3.8/subprocess.py", line 1024, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File "/usr/lib/python3.8/subprocess.py", line 1866, in _communicate
ready = selector.select(timeout)
File "/usr/lib/python3.8/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
KeyboardInterrupt
What can I try to resolve this?
Just use Python's error handling. Catch any KeyboardInterrupt (within your subprocess function) using try and except statements, like so:
def stuff(things):
    try:
        ...  # do stuff
    except KeyboardInterrupt:
        return last_value
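Applied to the original question, that might look like the sketch below. It is hypothetical: it swaps subprocess.run for Popen so the partial output survives the interrupt, and a harmless echo stands in for the real airodump-ng command.

```python
import subprocess

def run_until_interrupt(cmd):
    """Collect stdout line by line until the command exits or Ctrl+C is hit."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    lines = []
    try:
        for line in proc.stdout:
            lines.append(line)
    except KeyboardInterrupt:
        proc.terminate()  # stop the scan; keep what was captured so far
    finally:
        proc.wait()
    return "".join(lines)

# ["airodump-ng", "wlan0mon"] in the real script; a harmless stand-in here
output = run_until_interrupt(["echo", "scan output"])
```

Because the output is read incrementally, whatever was printed before the Ctrl+C is still in `lines` when the exception arrives, which is exactly what capture_output on run() cannot give you.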
I have Python code as follows:
try:
    print("Running code " + str(sub.id))
    r = subprocess.call("node codes.js > outputs.txt", shell=True)
except:
    print("Error running submission code id " + str(sub.id))
The code runs the node command using subprocess.call, and the node command runs the codes.js file. Sometimes, if there is an error in the JavaScript code, for example a document. call, the command throws an error.
With try and except, the error thrown when the node command fails is not caught.
The error thrown is as follows. There is a document. line in the code, which node cannot understand, so it throws this error:
/home/kofhearts/homework/codes.js:5
document.getElementById("outputalert").innerHTML = "Hacked";
^
ReferenceError: document is not defined
at solve (/home/kofhearts/homework/codes.js:5:3)
at Object.<anonymous> (/home/kofhearts/homework/codes.js:13:28)
at Module._compile (internal/modules/cjs/loader.js:1068:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10)
at Module.load (internal/modules/cjs/loader.js:933:32)
at Function.Module._load (internal/modules/cjs/loader.js:774:14)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
at internal/main/run_main_module.js:17:47
Traceback (most recent call last):
File "manage.py", line 22, in <module>
main()
File "manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/home/kofhearts/.virtualenvs/myenv/lib/python3.7/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
utility.execute()
File "/home/kofhearts/.virtualenvs/myenv/lib/python3.7/site-packages/django/core/management/__init__.py", line 395, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/kofhearts/.virtualenvs/myenv/lib/python3.7/site-packages/django/core/management/base.py", line 330, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/kofhearts/.virtualenvs/myenv/lib/python3.7/site-packages/django/core/management/base.py", line 371, in execute
output = self.handle(*args, **options)
File "/home/kofhearts/homework/assignments/management/commands/police.py", line 73, in handle
if isCorrect(data.strip()[:-1], sub.question.outputs, sub.question, sub.code):
File "/home/kofhearts/homework/assignments/views.py", line 566, in isCorrect
givenans = [json.loads(e.strip()) for e in received.split('|')]
File "/home/kofhearts/homework/assignments/views.py",
How is it possible to catch the error when subprocess.call fails? Thanks for the help!
How is it possible to catch the error when subprocess.call fails?
The 'standard' way to do this is to use subprocess.run:
from subprocess import run, CalledProcessError

cmd = ["node", "codes.js"]
try:
    r = run(cmd, check=True, capture_output=True, encoding="utf8")
    with open("outputs.txt", "w") as f:
        f.write(r.stdout)
except CalledProcessError as e:
    print("oh no!")
    print(e.stderr)
Note that I have dropped the redirect and done it in Python. You might be able to redirect with shell=True, but that is a whole security hole you don't need just for sending stdout to a file.
check=True ensures it will throw on a non-zero return status.
capture_output=True is handy, because stderr and stdout are attached to the exception, allowing you to retrieve them there. Thanks to @OlvinRoght for pointing that out.
Lastly, it is possible to check manually:
r = run(cmd, capture_output=True, encoding="utf8")
if r.returncode:
    print("Failed", r.stderr, r.stdout)
else:
    print("Success", r.stdout)
I would generally avoid this pattern as
try is free for success (and we expect this to succeed)
catching exceptions is how we normally handle problems, so it's the Right Way (TM)
but YMMV.
I am trying the code below to run a command continuously and make 5-second pcaps with different names, but it runs only once, generates just one pcap, and then stops with an exception:
from subprocess import run
command = 'tcpdump -i eno1 -w abc_{}.pcap'
file_counter = 0
while True:
    output = run(command.format(str(file_counter)), capture_output=True, shell=True, timeout=5).stdout.decode()
    file_counter += 1
    print("Captured packet for 5 seconds")
Traceback (most recent call last):
File "one.py", line 17, in <module>
output = run(command.format(str(file_counter)), capture_output=True, shell=True,timeout=5).stdout.decode()
File "/usr/lib/python3.8/subprocess.py", line 491, in run
stdout, stderr = process.communicate(input, timeout=timeout)
File "/usr/lib/python3.8/subprocess.py", line 1024, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File "/usr/lib/python3.8/subprocess.py", line 1867, in _communicate
self._check_timeout(endtime, orig_timeout, stdout, stderr)
File "/usr/lib/python3.8/subprocess.py", line 1068, in _check_timeout
raise TimeoutExpired(
subprocess.TimeoutExpired: Command 'tcpdump -i eno1 -w abc_0.pcap' timed out after 5 seconds
The error is thrown because of the timeout parameter. Essentially you are telling subprocess that the process SHOULD finish BEFORE the timeout, and to raise an error otherwise.
If you want to let the process run for only 5 seconds, you can close it after 5 seconds using
os.killpg(os.getpgid(process.pid), signal.SIGTERM)
Here you would need to keep track of the process PID and a 5-second timer starting from when you launch the process. You can do that in your main using time.time().
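Alternatively, since a timeout here is the expected outcome rather than a failure, the loop can simply catch subprocess.TimeoutExpired and move on. A minimal sketch (the helper name is made up; an echo and a sleep stand in for the asker's tcpdump command):

```python
import subprocess

def capture_for(cmd, seconds):
    """Run cmd for at most `seconds`; treat the timeout as normal completion."""
    try:
        r = subprocess.run(cmd, capture_output=True, timeout=seconds)
        return r.stdout
    except subprocess.TimeoutExpired as e:
        return e.stdout  # whatever was read before the timeout (may be None)

# in the asker's loop this would be the tcpdump command with shell=True;
# a harmless stand-in here:
data = capture_for(["echo", "five seconds of packets"], 5)
```

In the original loop this means `file_counter += 1` is reached every iteration, whether tcpdump finished on its own or was stopped by the timeout.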
In my python script I have:
os.spawnvpe(os.P_WAIT, cmd[0], cmd, os.environ)
where cmd is something like ['mail', '-b', emails, ...], which lets me run mail interactively and return to the Python script after mail finishes.
The only problem is when I press Ctrl-C. It seems that "both mail and the Python script react to it" (*), whereas I'd prefer that while mail is running, only mail reacts, and no exception is raised by Python. Is it possible to achieve that?
(*) What happens exactly on the console is:
^C
(Interrupt -- one more to kill letter)
Traceback (most recent call last):
File "./tutster.py", line 104, in <module>
cmd(cmd_run)
File "./tutster.py", line 85, in cmd
code = os.spawnvpe(os.P_WAIT, cmd[0], cmd, os.environ)
File "/usr/lib/python3.4/os.py", line 868, in spawnvpe
return _spawnvef(mode, file, args, env, execvpe)
File "/usr/lib/python3.4/os.py", line 819, in _spawnvef
wpid, sts = waitpid(pid, 0)
KeyboardInterrupt
and then the mail is in fact sent (which is already bad, because the intention was to kill it), but the body is empty and the content is sent as an attachment with a .bin extension.
Wrap it in a try/except statement:
try:
    os.spawnvpe(os.P_WAIT, cmd[0], cmd, os.environ)
except KeyboardInterrupt:
    pass
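A bare pass keeps the script alive, but the ^C is still delivered to mail as well. An alternative sketch (POSIX-only, and using subprocess instead of os.spawnvpe, so treat it as an assumption rather than the original approach) makes the parent ignore SIGINT for the duration of the child, while the child restores the default handler so only mail reacts:

```python
import signal
import subprocess

def run_foreground(cmd):
    """Run cmd so that Ctrl+C reaches only the child, not this script."""
    old = signal.signal(signal.SIGINT, signal.SIG_IGN)  # parent ignores ^C
    try:
        return subprocess.call(
            cmd,
            # the child restores the default SIGINT handler before exec,
            # otherwise it would inherit the parent's SIG_IGN disposition
            preexec_fn=lambda: signal.signal(signal.SIGINT, signal.SIG_DFL),
        )
    finally:
        signal.signal(signal.SIGINT, old)  # restore the parent's handler

# e.g. run_foreground(["mail", "-b", emails])
```

With this, pressing Ctrl+C interrupts only mail; the Python script never sees a KeyboardInterrupt and simply gets mail's exit code back.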
OpenSolaris derivative (NexentaStor), Python 2.5.5
I've seen numerous examples, and many seem to indicate that the problem is a deadlock. I'm not writing to stdin, so I think the problem is that one of the shell commands exits prematurely.
What's executed in Popen is:
ssh <remotehost> "zfs send tank/dataset#snapshot | gzip -9" | gzip -d | zfs recv tank/dataset
In other words: log in to a remote host, send a replication stream of a storage volume, pipe it to gzip, and pipe that to zfs recv to write it to a local datastore.
I've seen the explanation about buffers, but I'm definitely not filling those up, and gzip is bailing out prematurely, so I think process.wait() never gets an exit status.
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
process.wait()
if process.returncode == 0:
    for line in process.stdout:
        stdout_arr.append([line])
    return stdout_arr
else:
    return False
Here's what happens when I run and interrupt it
# ./zfs_replication.py
gzip: stdout: Broken pipe
^CKilled by signal 2.
Traceback (most recent call last):
File "./zfs_replication.py", line 155, in <module>
Exec(zfsSendRecv(dataset, today), LOCAL)
File "./zfs_replication.py", line 83, in Exec
process.wait()
File "/usr/lib/python2.5/subprocess.py", line 1184, in wait
pid, sts = self._waitpid_no_intr(self.pid, 0)
File "/usr/lib/python2.5/subprocess.py", line 1014, in _waitpid_no_intr
return os.waitpid(pid, options)
KeyboardInterrupt
I also tried the Popen.communicate() method, but that too hangs if gzip bails out. In this case the last part of my command (zfs recv) exits because the local dataset has been modified, so the incremental replication stream will not be applied. Even though that will be fixed, there has got to be a way of dealing with gzip's broken pipes?
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode == 0:
    dosomething()
else:
    dosomethingelse()
And when run:
cannot receive incremental stream: destination tank/repl_test has been modified
since most recent snapshot
gzip: stdout: Broken pipe
^CKilled by signal 2.
Traceback (most recent call last):
File "./zfs_replication.py", line 154, in <module>
Exec(zfsSendRecv(dataset, today), LOCAL)
File "./zfs_replication.py", line 83, in Exec
stdout, stderr = process.communicate()
File "/usr/lib/python2.5/subprocess.py", line 662, in communicate
stdout = self._fo_read_no_intr(self.stdout)
File "/usr/lib/python2.5/subprocess.py", line 1025, in _fo_read_no_intr
return obj.read()
KeyboardInterrupt
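One way to make failures in the middle of such a pipeline visible is to build it from chained Popen objects instead of a single shell string, so each stage has its own return code. This is only a sketch, not from the original thread, with harmless echo/cat commands standing in for the real ssh | gzip -d | zfs recv stages:

```python
from subprocess import Popen, PIPE

# echo and cat stand in for the real pipeline stages
p1 = Popen(["echo", "replication stream"], stdout=PIPE)
p2 = Popen(["cat"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # so p1 gets SIGPIPE if p2 dies early, instead of hanging
out, _ = p2.communicate()
codes = [p1.wait(), p2.returncode]  # inspect every stage, not just the last
```

Closing p1.stdout in the parent is the standard trick from the subprocess documentation: it ensures the upstream process receives SIGPIPE when the downstream one exits prematurely, which is exactly the gzip broken-pipe scenario described above.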
I would like to run an exe from this directory: /home/pi/pi_sensors-master/bin/Release/
The exe is run by typing mono i2c.exe, and it runs fine.
I would like to get its output in my Python script, which is in a completely different directory.
I know that I should use subprocess.check_output to capture the output as a string.
I tried to implement this in python:
import subprocess
import os
cmd = "/home/pi/pi_sensors-master/bin/Release/"
os.chdir(cmd)
process=subprocess.check_output(['mono i2c.exe'])
print process
However, I received this error:
The output would usually be a data stream with a new number each time, is it possible to capture this output and store it as a constantly changing variable?
Any help would be greatly appreciated.
Your command syntax is incorrect, which is actually generating the exception. You want to call mono i2c.exe, so your command list should look like:
subprocess.check_output(['mono', 'i2c.exe']) # Notice the comma separation.
Try the following:
import subprocess
import os
executable = "/home/pi/pi_sensors-master/bin/Release/i2c.exe"
print subprocess.check_output(['mono', executable])
The sudo is not a problem as long as you give the full path to the file and you are sure that running the mono command as sudo works.
I can generate the same error by doing a ls -l:
>>> subprocess.check_output(['ls -l'])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/subprocess.py", line 537, in check_output
process = Popen(stdout=PIPE, *popenargs, **kwargs)
File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
However when you separate the command from the options:
>>> subprocess.check_output(['ls', '-l'])
# outputs my entire folder contents which are quite large.
I strongly advise you to use a subprocess.Popen object to deal with external processes. Use Popen.communicate() to get the data from both stdout and stderr. This way you should not run into blocking problems.
import subprocess

executable = "/home/pi/pi_sensors-master/bin/Release/i2c.exe"
proc = subprocess.Popen(['mono', executable], stdout=subprocess.PIPE)
try:
    outs, errs = proc.communicate(timeout=15)  # times out after 15 seconds
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
Or you can call communicate in a loop if you want a 'data stream' of sorts; an answer from this question:
from subprocess import Popen, PIPE

executable = "/home/pi/pi_sensors-master/bin/Release/i2c.exe"
p = Popen(["mono", executable], stdout=PIPE, bufsize=1)
for line in iter(p.stdout.readline, b''):
    print line,
p.communicate()  # close p.stdout, wait for the subprocess to exit