I am trying the code below to run a command continuously and produce 5-second pcaps with different names, but it runs only once, generates a single pcap, and then stops with an exception.
from subprocess import run

command = 'tcpdump -i eno1 -w abc_{}.pcap'
file_counter = 0
while True:
    output = run(command.format(str(file_counter)), capture_output=True, shell=True, timeout=5).stdout.decode()
    file_counter += 1
    print("Captured packet for 5 seconds")
Traceback (most recent call last):
  File "one.py", line 17, in <module>
    output = run(command.format(str(file_counter)), capture_output=True, shell=True, timeout=5).stdout.decode()
  File "/usr/lib/python3.8/subprocess.py", line 491, in run
    stdout, stderr = process.communicate(input, timeout=timeout)
  File "/usr/lib/python3.8/subprocess.py", line 1024, in communicate
    stdout, stderr = self._communicate(input, endtime, timeout)
  File "/usr/lib/python3.8/subprocess.py", line 1867, in _communicate
    self._check_timeout(endtime, orig_timeout, stdout, stderr)
  File "/usr/lib/python3.8/subprocess.py", line 1068, in _check_timeout
    raise TimeoutExpired(
subprocess.TimeoutExpired: Command 'tcpdump -i eno1 -w abc_0.pcap' timed out after 5 seconds
The error is raised because of the timeout parameter: you are telling subprocess.run that the process should finish before the timeout, otherwise raise TimeoutExpired.
If you only want to let the process run for 5 seconds, you can close it after 5 seconds using
os.killpg(os.getpgid(process.pid), signal.SIGTERM)
Here you need to keep track of the process's PID and a 5-second timer starting from when you launch the process; you can do that in your main loop using time.time().
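A minimal sketch of that approach (using time.sleep(5) in place of a time.time() countdown for brevity), assuming Linux and the same tcpdump command as above; start_new_session=True gives tcpdump its own process group, so os.killpg signals only that capture:

import os
import signal
import subprocess
import time

command = 'tcpdump -i eno1 -w abc_{}.pcap'
file_counter = 0
while True:
    # Launch the capture in its own process group.
    process = subprocess.Popen(command.format(file_counter),
                               shell=True, start_new_session=True)
    time.sleep(5)  # let tcpdump capture for 5 seconds
    os.killpg(os.getpgid(process.pid), signal.SIGTERM)
    process.wait()  # reap the terminated process
    file_counter += 1
    print("Captured packets for 5 seconds")

Sending SIGTERM rather than SIGKILL lets tcpdump flush its buffers, so the pcap files are not truncated.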
On my system (Python 3.6.9) I got a "Too many open files" error while executing a subprocess in Python.
Traceback:
File "/opt/KIDICAP/docengine/Objects/Watcher.py", line 99, in watch, self.check_ulimit()
File "/opt/KIDICAP/docengine/Objects/Watcher.py", line 469, in check_ulimit
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
File "/usr/lib/python3.6/subprocess.py", line 729, in __init__ restore_signals, start_new_session)
File "/usr/lib/python3.6/subprocess.py", line 1254, in _execute_child errpipe_read, errpipe_write = os.pipe()
OSError: [Errno 24] Too many open files
I tried to look it up, but permanently setting the ulimit didn't work. I even built a function that sets the ulimit before executing a subprocess:
process = subprocess.Popen(comand1, shell=True, stdout=subprocess.PIPE)
output = process.stdout.readlines()
process.stdout.close()
process.terminate()
# print(output)
# logger.info(comand2)
process = subprocess.Popen(comand2, shell=True, stdout=subprocess.PIPE)
output = process.stdout.readlines()
process.stdout.close()
process.terminate()
# print(output)
# logger.info(comand3)
process = subprocess.Popen(comand3, shell=True, stdout=subprocess.PIPE)
output = process.stdout.readlines()
process.stdout.close()
process.terminate()
I didn't find the right solution on the Internet.
The solution for me was to edit fs.file-max (/proc/sys/fs/file-max); I set the value in that file to 1000000. I also edited /etc/bash.bashrc to increase the ulimit in every session opened. There I added:
ulimit -n 1000000
ulimit -s unlimited
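If you would rather contain the problem from Python itself, here is a sketch (POSIX-only, since it uses the resource module; 'ls -l' is a stand-in for the real command) that raises the process's soft descriptor limit and uses Popen as a context manager so its pipes are always closed:

import resource
import subprocess

# Raise this process's soft file-descriptor limit to the hard limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

# Popen as a context manager closes its pipes on exit, so repeated
# calls do not leak descriptors.
with subprocess.Popen('ls -l', shell=True, stdout=subprocess.PIPE) as proc:
    output = proc.stdout.readlines()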
I have a main process where I open a multiprocessing.Pipe(False) and send the writing end to a worker Process. Then, in the worker process, I run a Java program using subprocess.Popen(['java', 'myprogram'], stdin=subprocess.PIPE, stdout=subprocess.PIPE). I need to redirect the stderr of this subprocess to the writing end of the multiprocessing.Pipe.
For this I referred to this answer by Ilija, as it is exactly what I want to achieve, but on my machine (Windows) it throws OSError: [Errno 9] Bad file descriptor.
Machine details:
OS - Windows 10 (64bit)
Python version - 3.7.4
Code:
Method 1 (Ilija's answer)
import multiprocessing
import os
import subprocess

def worker(w_conn):
    os.dup2(w_conn.fileno(), 2)
    sp = subprocess.Popen(['java', 'myprogram'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    sp.wait()
    w_conn.close()

def main():
    r_conn, w_conn = multiprocessing.Pipe(False)
    process = multiprocessing.Process(target=worker, args=(w_conn,))
    process.start()

    while not r_conn.poll() and not w_conn.closed:
        pass  # Do stuff
    else:
        # Read error from r_conn, and handle it
        r_conn.close()

    process.join()

if __name__ == '__main__':
    main()
Error:
Process Process-1:
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\User\Desktop\Workspace\Error.py", line 14, in worker
    os.dup2(w_conn.fileno(), 2)
OSError: [Errno 9] Bad file descriptor
Method 2: in the worker function, passing w_conn as the stderr argument to Popen
def worker(w_conn):
    sp = subprocess.Popen(['java', 'myprogram'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=w_conn)
    sp.wait()
    w_conn.close()
Error:
Process Process-1:
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\User\Desktop\Workspace\Error.py", line 13, in worker
    sp = subprocess.Popen(['java', 'myprogram'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=w_conn)
  File "C:\ProgramData\Anaconda3\lib\subprocess.py", line 728, in __init__
    errread, errwrite) = self._get_handles(stdin, stdout, stderr)
  File "C:\ProgramData\Anaconda3\lib\subprocess.py", line 1077, in _get_handles
    errwrite = msvcrt.get_osfhandle(stderr.fileno())
OSError: [Errno 9] Bad file descriptor
Is there any workaround/alternate method to achieve this on Windows?
I still don't know why "Method 1" is not working. Any information regarding this will be appreciated.
"Method 2" is wrong altogether as we can't use Connection object (returned by multiprocessing.Pipe()) as a file handle in subprocess.Popen.
What works is checking for data in stderr of subprocess sp and sending the data through w_conn to main process.
def worker(w_conn):
    sp = subprocess.Popen(['java', 'myprogram'], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    sp.wait()
    # Pipes are not seekable, so read stderr and only send if non-empty.
    err = sp.stderr.read()
    if err:
        w_conn.send(err)
    w_conn.close()
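If the errors are needed while the Java process is still running rather than only after sp.wait(), here is a variant sketch (same hypothetical ['java', 'myprogram'] command) that forwards stderr line by line:

def worker(w_conn):
    sp = subprocess.Popen(['java', 'myprogram'], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # Forward each stderr line as soon as it is produced.
    for line in sp.stderr:
        w_conn.send(line)
    sp.wait()
    w_conn.close()

Note that if the program also writes a lot to stdout, you would need to drain that pipe too (for example from a second thread) to avoid blocking.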
I am completely new to the subprocess module, and I was trying to automate deauthentication-attack commands. When I run airodump-ng wlan0mon, it looks for the nearby APs and the clients connected to them.
Now when I try to run this command with, say, p = subprocess.run(["airodump-ng", "wlan0mon"], capture_output=True) in Python, the command runs until the user hits Ctrl+C, so it should save the last output in the variable when the user hits Ctrl+C, but instead I get this error:
Traceback (most recent call last):
  File "Deauth.py", line 9, in <module>
    p3 = subprocess.run(["airodump-ng","wlan0"], capture_output=True)
  File "/usr/lib/python3.8/subprocess.py", line 491, in run
    stdout, stderr = process.communicate(input, timeout=timeout)
  File "/usr/lib/python3.8/subprocess.py", line 1024, in communicate
    stdout, stderr = self._communicate(input, endtime, timeout)
  File "/usr/lib/python3.8/subprocess.py", line 1866, in _communicate
    ready = selector.select(timeout)
  File "/usr/lib/python3.8/selectors.py", line 415, in select
    fd_event_list = self._selector.poll(timeout)
KeyboardInterrupt
What can I try to resolve this?
Just use Python's error handling. Catch the KeyboardInterrupt (within your subprocess function) using try and except statements, like so:
def stuff(things):
    try:
        ...  # do stuff
    except KeyboardInterrupt:
        return last_value
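Applied to the capture above, here is a minimal sketch (assuming airodump-ng is installed and wlan0mon is the monitor interface) that accumulates output until Ctrl+C and returns whatever was read:

import subprocess

def capture(cmd):
    lines = []
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    try:
        for line in proc.stdout:
            lines.append(line)  # keep output as it arrives
    except KeyboardInterrupt:
        proc.terminate()  # stop the capture when the user hits Ctrl+C
        proc.wait()
    return lines

output = capture(["airodump-ng", "wlan0mon"])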
OpenSolaris derivative (NexentaStor), Python 2.5.5.
I've seen numerous examples, and many seem to indicate that the problem is a deadlock. I'm not writing to stdin, so I think the problem is that one of the shell commands exits prematurely.
What's executed in Popen is:
ssh <remotehost> "zfs send tank/dataset#snapshot | gzip -9" | gzip -d | zfs recv tank/dataset
In other words: log in to a remote host, send a replication stream of a storage volume piped through gzip, then pipe the result to gzip -d and zfs recv to write it to a local datastore.
I've seen the explanation about buffers, but I'm definitely not filling those up, and gzip is bailing out prematurely, so I think process.wait() never sees an exit.
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
process.wait()
if process.returncode == 0:
    for line in process.stdout:
        stdout_arr.append([line])
    return stdout_arr
else:
    return False
Here's what happens when I run and interrupt it:
# ./zfs_replication.py
gzip: stdout: Broken pipe
^CKilled by signal 2.
Traceback (most recent call last):
  File "./zfs_replication.py", line 155, in <module>
    Exec(zfsSendRecv(dataset, today), LOCAL)
  File "./zfs_replication.py", line 83, in Exec
    process.wait()
  File "/usr/lib/python2.5/subprocess.py", line 1184, in wait
    pid, sts = self._waitpid_no_intr(self.pid, 0)
  File "/usr/lib/python2.5/subprocess.py", line 1014, in _waitpid_no_intr
    return os.waitpid(pid, options)
KeyboardInterrupt
I also tried the Popen.communicate() method, but that too hangs if gzip bails out. In this case the last part of my command (zfs recv) exits because the local dataset has been modified, so the incremental replication stream will not be applied. Even once that is fixed, there has got to be a way of dealing with gzip's broken pipes?
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode == 0:
    dosomething()
else:
    dosomethingelse()
And when run:
cannot receive incremental stream: destination tank/repl_test has been modified
since most recent snapshot
gzip: stdout: Broken pipe
^CKilled by signal 2.
Traceback (most recent call last):
  File "./zfs_replication.py", line 154, in <module>
    Exec(zfsSendRecv(dataset, today), LOCAL)
  File "./zfs_replication.py", line 83, in Exec
    stdout, stderr = process.communicate()
  File "/usr/lib/python2.5/subprocess.py", line 662, in communicate
    stdout = self._fo_read_no_intr(self.stdout)
  File "/usr/lib/python2.5/subprocess.py", line 1025, in _fo_read_no_intr
    return obj.read()
KeyboardInterrupt
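No definitive fix here, but capturing stderr as well at least makes the failing stage visible in Python instead of leaving gzip's message on the terminal; a sketch based on the snippet above (it does not by itself prevent the hang):

process = subprocess.Popen(cmd, shell=True,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode != 0:
    # ssh exits with the remote pipeline's status; stderr should show
    # which stage (zfs send, gzip or zfs recv) broke the pipe.
    print stderr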
I would like to run an exe from this directory: /home/pi/pi_sensors-master/bin/Release/
This exe is run by typing mono i2c.exe, and it runs fine.
I would like to get its output in Python, which runs from a completely different directory.
I know that I should use subprocess.check_output to capture the output as a string.
I tried to implement this in Python:
import subprocess
import os

cmd = "/home/pi/pi_sensors-master/bin/Release/"
os.chdir(cmd)
process = subprocess.check_output(['mono i2c.exe'])
print process
However, I received an error.
The output would usually be a data stream with a new number each time; is it possible to capture this output and store it as a constantly changing variable?
Any help would be greatly appreciated.
Your command syntax is incorrect, which is actually generating the exception. You want to call mono i2c.exe, so your command list should look like:
subprocess.check_output(['mono', 'i2c.exe']) # Notice the comma separation.
Try the following:
import subprocess

executable = "/home/pi/pi_sensors-master/bin/Release/i2c.exe"
print subprocess.check_output(['mono', executable])
The sudo is not a problem as long as you give the full path to the file and you are sure that running the mono command as sudo works.
I can generate the same error by running ls -l as a single string:
>>> subprocess.check_output(['ls -l'])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/subprocess.py", line 537, in check_output
    process = Popen(stdout=PIPE, *popenargs, **kwargs)
  File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
However when you separate the command from the options:
>>> subprocess.check_output(['ls', '-l'])
# outputs my entire folder contents which are quite large.
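As an alternative to os.chdir, you can pass cwd= to check_output so the child starts in the Release directory; a sketch using the paths from the question:

import subprocess

# cwd runs the child in the Release directory, so the relative
# name i2c.exe resolves without changing Python's own directory.
output = subprocess.check_output(
    ['mono', 'i2c.exe'],
    cwd='/home/pi/pi_sensors-master/bin/Release/')
print output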
I strongly advise you to use a subprocess.Popen object to deal with external processes. Use Popen.communicate() to get the data from both stdout and stderr; this way you should not run into blocking problems.
import subprocess

executable = "/home/pi/pi_sensors-master/bin/Release/i2c.exe"
proc = subprocess.Popen(['mono', executable], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
try:
    outs, errs = proc.communicate(timeout=15)  # times out after 15 seconds
except subprocess.TimeoutExpired:  # must be qualified (or imported) to be caught
    proc.kill()
    outs, errs = proc.communicate()
Or you can read the output in a loop if you want a data stream of sorts (adapted from an answer to this question):
from subprocess import Popen, PIPE

executable = "/home/pi/pi_sensors-master/bin/Release/i2c.exe"
p = Popen(["mono", executable], stdout=PIPE, bufsize=1)
for line in iter(p.stdout.readline, b''):
    print line,
p.communicate()  # close p.stdout, wait for the subprocess to exit