OpenSolaris derivative (NexentaStor), Python 2.5.5
I've seen numerous examples, and many seem to indicate that the problem is a deadlock. I'm not writing to stdin, so I think the problem is that one of the shell commands exits prematurely.
What's executed in Popen is:
ssh <remotehost> "zfs send tank/dataset#snapshot | gzip -9" | gzip -d | zfs recv tank/dataset
In other words: log in to a remote host and (send a replication stream of a storage volume, pipe it to gzip), then pipe that to zfs recv to write to a local datastore.
I've seen the explanation about buffers, but I'm definitely not filling those up, and gzip is bailing out prematurely, so I think process.wait() never sees an exit.
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
process.wait()
if process.returncode == 0:
    for line in process.stdout:
        stdout_arr.append([line])
    return stdout_arr
else:
    return False
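For what it's worth, the subprocess documentation warns that wait() used with stdout=PIPE can deadlock once the child fills the pipe buffer, and recommends communicate() instead; a minimal sketch of that pattern (assuming the same cmd, and that the output fits in memory):

process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
stdout, _ = process.communicate()  # drains the pipe while waiting
if process.returncode == 0:
    # keepends=True mirrors iterating over process.stdout line by line
    stdout_arr = [[line] for line in stdout.splitlines(True)]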
Here's what happens when I run and interrupt it:
# ./zfs_replication.py
gzip: stdout: Broken pipe
^CKilled by signal 2.
Traceback (most recent call last):
  File "./zfs_replication.py", line 155, in <module>
    Exec(zfsSendRecv(dataset, today), LOCAL)
  File "./zfs_replication.py", line 83, in Exec
    process.wait()
  File "/usr/lib/python2.5/subprocess.py", line 1184, in wait
    pid, sts = self._waitpid_no_intr(self.pid, 0)
  File "/usr/lib/python2.5/subprocess.py", line 1014, in _waitpid_no_intr
    return os.waitpid(pid, options)
KeyboardInterrupt
I also tried to use the Popen.communicate() method, but that too hangs if gzip bails out. In this case the last part of my command (zfs recv) exits because the local dataset has been modified, so the incremental replication stream will not be applied. Even though that will be fixed, there has got to be a way of dealing with gzip's broken pipes?
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode == 0:
    dosomething()
else:
    dosomethingelse()
And when run:
cannot receive incremental stream: destination tank/repl_test has been modified
since most recent snapshot
gzip: stdout: Broken pipe
^CKilled by signal 2.
Traceback (most recent call last):
  File "./zfs_replication.py", line 154, in <module>
    Exec(zfsSendRecv(dataset, today), LOCAL)
  File "./zfs_replication.py", line 83, in Exec
    stdout, stderr = process.communicate()
  File "/usr/lib/python2.5/subprocess.py", line 662, in communicate
    stdout = self._fo_read_no_intr(self.stdout)
  File "/usr/lib/python2.5/subprocess.py", line 1025, in _fo_read_no_intr
    return obj.read()
KeyboardInterrupt
Related
I have a main process where I open a multiprocessing.Pipe(False) and send the writing end to a worker Process. Then, in the worker process, I run a Java program using subprocess.Popen(['java', 'myprogram'], stdin=subprocess.PIPE, stdout=subprocess.PIPE). I need to redirect the stderr of this subprocess to the writing end of the multiprocessing.Pipe.
For this I referred to this answer by Ilija, as it is exactly what I want to achieve, but on my machine (Windows) it throws OSError: [Errno 9] Bad file descriptor.
Machine details:
OS - Windows 10 (64bit)
Python version - 3.7.4
Code:
Method 1 (Ilija's answer)
def worker(w_conn):
    os.dup2(w_conn.fileno(), 2)
    sp = subprocess.Popen(['java', 'myprogram'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    sp.wait()
    w_conn.close()

def main():
    r_conn, w_conn = multiprocessing.Pipe(False)
    process = multiprocessing.Process(target=worker, args=(w_conn,))
    process.start()
    while not r_conn.poll() and not w_conn.closed:
        pass  # Do stuff
    else:
        # Read error from r_conn, and handle it
        r_conn.close()
    process.join()

if __name__ == '__main__':
    main()
Error:
Process Process-1:
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\User\Desktop\Workspace\Error.py", line 14, in worker
    os.dup2(w_conn.fileno(), 2)
OSError: [Errno 9] Bad file descriptor
Method 2: In the worker function, sending w_conn as an argument to Popen
def worker(w_conn):
    sp = subprocess.Popen(['java', 'myprogram'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=w_conn)
    sp.wait()
    w_conn.close()
Error:
Process Process-1:
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\User\Desktop\Workspace\Error.py", line 13, in worker
    sp = subprocess.Popen(['java', 'myprogram'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=w_conn)
  File "C:\ProgramData\Anaconda3\lib\subprocess.py", line 728, in __init__
    errread, errwrite) = self._get_handles(stdin, stdout, stderr)
  File "C:\ProgramData\Anaconda3\lib\subprocess.py", line 1077, in _get_handles
    errwrite = msvcrt.get_osfhandle(stderr.fileno())
OSError: [Errno 9] Bad file descriptor
Is there any workaround/alternate method to achieve this on Windows?
I still don't know why "Method 1" is not working. Any information regarding this will be appreciated.
"Method 2" is wrong altogether as we can't use Connection object (returned by multiprocessing.Pipe()) as a file handle in subprocess.Popen.
What works is checking for data in stderr of subprocess sp and sending the data through w_conn to main process.
def worker(w_conn):
    sp = subprocess.Popen(['java', 'myprogram'], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # A pipe cannot be seek()'d, so read stderr instead of checking its size;
    # communicate() also waits for the process to exit.
    out, err = sp.communicate()
    if err:
        w_conn.send(err)
    w_conn.close()
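communicate() is used here rather than wait() followed by reading sp.stderr directly: a pipe is not seekable, and with stdout and stderr both set to PIPE, wait() can deadlock if the Java program fills a pipe buffer, while communicate() drains both pipes as it waits.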
I am trying the code below to run a command continuously and produce 5-second pcaps with different names, but it runs only once, generates a single pcap, and then stops with an exception:
from subprocess import run

command = 'tcpdump -i eno1 -w abc_{}.pcap'
file_counter = 0

while True:
    output = run(command.format(str(file_counter)), capture_output=True, shell=True, timeout=5).stdout.decode()
    file_counter += 1
    print("Captured packet for 5 seconds")
Traceback (most recent call last):
  File "one.py", line 17, in <module>
    output = run(command.format(str(file_counter)), capture_output=True, shell=True, timeout=5).stdout.decode()
  File "/usr/lib/python3.8/subprocess.py", line 491, in run
    stdout, stderr = process.communicate(input, timeout=timeout)
  File "/usr/lib/python3.8/subprocess.py", line 1024, in communicate
    stdout, stderr = self._communicate(input, endtime, timeout)
  File "/usr/lib/python3.8/subprocess.py", line 1867, in _communicate
    self._check_timeout(endtime, orig_timeout, stdout, stderr)
  File "/usr/lib/python3.8/subprocess.py", line 1068, in _check_timeout
    raise TimeoutExpired(
subprocess.TimeoutExpired: Command 'tcpdump -i eno1 -w abc_0.pcap' timed out after 5 seconds
The error is thrown because of the timeout parameter: essentially you are telling run() that the process should finish before the timeout, otherwise raise TimeoutExpired.
If you only want to let the process run for 5 seconds, you can close it after 5 seconds using
os.killpg(os.getpgid(process.pid), signal.SIGTERM)
Here you would need to keep track of the process PID and a 5-second timer starting from when you launched the process; you can do that in your main loop using time.time(), as sketched below.
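A minimal sketch of that approach (assuming a Linux host; start_new_session=True puts tcpdump in its own process group, so killpg does not also signal the Python script):

import os
import signal
import subprocess
import time

file_counter = 0
while True:
    cmd = 'tcpdump -i eno1 -w abc_{}.pcap'.format(file_counter)
    process = subprocess.Popen(cmd, shell=True, start_new_session=True)
    started = time.time()
    while time.time() - started < 5:   # 5-second capture window
        time.sleep(0.1)
    os.killpg(os.getpgid(process.pid), signal.SIGTERM)  # stop tcpdump
    process.wait()  # reap the child so the pcap file is flushed and closed
    file_counter += 1
    print("Captured packets for 5 seconds")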
I am completely new to the subprocess module, and I was trying to automate deauthentication attack commands. When I run airodump-ng wlan0mon, as you know, it looks for nearby APs and the clients connected to them.
Now when I try to run this command using, let's suppose, p = subprocess.run(["airmon-ng","wlan0mon"], capture_output=True) in Python: as you know, this command runs until the user hits Ctrl+C, so it should save the last output in the variable when the user hits Ctrl+C, but instead I get this error:
Traceback (most recent call last):
  File "Deauth.py", line 9, in <module>
    p3 = subprocess.run(["airodump-ng","wlan0"], capture_output=True)
  File "/usr/lib/python3.8/subprocess.py", line 491, in run
    stdout, stderr = process.communicate(input, timeout=timeout)
  File "/usr/lib/python3.8/subprocess.py", line 1024, in communicate
    stdout, stderr = self._communicate(input, endtime, timeout)
  File "/usr/lib/python3.8/subprocess.py", line 1866, in _communicate
    ready = selector.select(timeout)
  File "/usr/lib/python3.8/selectors.py", line 415, in select
    fd_event_list = self._selector.poll(timeout)
KeyboardInterrupt
What can I try to resolve this?
Just use Python's error handling. Catch any KeyboardInterrupt (within your subprocess function) using try and except statements, like so:
def stuff(things):
    try:
        pass  # do stuff
    except KeyboardInterrupt:
        return last_value
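Applied to the airodump-ng case, a sketch of that idea might look like this (illustrative and untested against airodump-ng itself; Ctrl+C interrupts the blocking communicate() call, and a second communicate() collects whatever output the tool produced before it was stopped):

import subprocess

def capture_scan():
    p = subprocess.Popen(["airodump-ng", "wlan0"],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    try:
        stdout, stderr = p.communicate()   # blocks until the tool exits
    except KeyboardInterrupt:
        p.terminate()                      # make sure airodump-ng stops
        stdout, stderr = p.communicate()   # drain the remaining output
    return stdout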
I am using subprocess in Python to invoke another executable, write some data to its stdin (closing the stream once everything is written, which is how the subprocess knows it has received everything), and then collect all of its stdout after it terminates, which it will after some period of time.
In pseudocode:
open subprocess
write to its stdin
let the program finish
retrieve anything it wrote to stdout
I have tried the following:
import subprocess

p = subprocess.Popen([cmd],
                     stdout=subprocess.PIPE, stdin=subprocess.PIPE)
p.stdin.write(str(data))
p.stdin.close()
p.wait()
result = p.communicate()[0]
However I get the following stack trace:
    result = p.communicate()[0]
  File "/usr/lib64/python2.7/subprocess.py", line 800, in communicate
    return self._communicate(input)
  File "/usr/lib64/python2.7/subprocess.py", line 1396, in _communicate
    self.stdin.flush()
ValueError: I/O operation on closed file
Please advise.
Use communicate:
import subprocess
p = subprocess.Popen([cmd], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
result = p.communicate(data)[0]
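communicate(data) performs all of the steps from the pseudocode in the right order: it writes data to the child's stdin, closes stdin so the child sees end-of-file, reads stdout until the process exits, and then reaps the process. Writing and closing stdin by hand first fails because communicate() then tries to flush the already-closed stdin, which is exactly the ValueError in the traceback above.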
Given the function
def get_files_from_sha(sha, files):
    from subprocess import Popen, PIPE
    import tarfile
    if 0 == len(files):
        return {}
    p = Popen(["git", "archive", sha], bufsize=10240, stdin=PIPE, stdout=PIPE, stderr=PIPE)
    tar = tarfile.open(fileobj=p.stdout, mode='r|')
    p.communicate()
    contents = {}
    doall = files == '*'
    if not doall:
        files = set(files)
    for entry in tar:
        if (isinstance(files, set) and entry.name in files) or doall:
            tf = tar.extractfile(entry)
            contents[entry.name] = tf.read()
            if not doall:
                files.discard(entry.name)
    if not doall:
        for fname in files:
            contents[fname] = None
    tar.close()
    return contents
which is called in a loop for some values of sha, after a while (in my case, 4 iterations) it starts to fail at the call to tf.read(), with the message:
Traceback (most recent call last):
  File "../yap-analysis/extract.py", line 243, in <module>
    commits, identities, identities_by_name, identities_by_email, identities_freq = build_commits(commits)
  File "../yap-analysis/extract.py", line 186, in build_commits
    commit = get_commit(commit)
  File "../yap-analysis/extract.py", line 84, in get_commit
    contents = get_files_from_sha(commit['sha'], files)
  File "../yap-analysis/extract.py", line 42, in get_files_from_sha
    contents[entry.name] = tf.read()
  File "/usr/lib/python2.7/tarfile.py", line 817, in read
    buf += self.fileobj.read()
  File "/usr/lib/python2.7/tarfile.py", line 737, in read
    return self.readnormal(size)
  File "/usr/lib/python2.7/tarfile.py", line 746, in readnormal
    return self.fileobj.read(size)
  File "/usr/lib/python2.7/tarfile.py", line 573, in read
    buf = self._read(size)
  File "/usr/lib/python2.7/tarfile.py", line 581, in _read
    return self.__read(size)
  File "/usr/lib/python2.7/tarfile.py", line 606, in __read
    buf = self.fileobj.read(self.bufsize)
ValueError: I/O operation on closed file
I suspect there is some parallelization that subprocess attempts to make (?).
What is the actual cause, and how can it be solved in a clean and robust way on Python 2?
Do not use .communicate() on the Popen instance; it'll read the stdout stream until it is finished. From the documentation:
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached.
The code for .communicate() even adds an explicit .close() call on the stdout of the pipe.
Simply removing the call to .communicate() should be enough, but do also add a .wait() after reading the tarfile contents:
tar.close()
p.stdout.close()
p.wait()
It could be that tar.close() also closes p.stdout, but an extra .close() there should not hurt.
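Put together, the relevant part of get_files_from_sha would then look something like this (a sketch; the only changes are dropping p.communicate() and closing/reaping at the end):

p = Popen(["git", "archive", sha], bufsize=10240, stdin=PIPE, stdout=PIPE, stderr=PIPE)
tar = tarfile.open(fileobj=p.stdout, mode='r|')
# ... iterate over tar and fill contents exactly as before ...
tar.close()
p.stdout.close()
p.wait()  # reap git only after all of its output has been consumed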
I think your problem is the p.communicate() call. This method sends to stdin, reads everything from stdout and stderr (output your code never looks at), and waits for the process to terminate.
tarfile is trying to read from the process's stdout, but by the time it does, the process is finished and the stream has been closed, hence the error.
I have not tried running your code (I don't have access to git), but you probably don't want the p.communicate() at all; try commenting it out.