Subprocess Timeout in Python

I am trying to check the header of a website, and the code works fine. When the website does not respond within a reasonable amount of time, a timeout kicks in, and that works too.
Unfortunately, the command is not taking parameters, and I am stuck there. Any suggestions would be highly appreciated.
import subprocess
from threading import Timer

kill = lambda process: process.kill()
c1 = 'curl -H'
cmd = [c1, 'google.com']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
my_timer = Timer(10, kill, [p])
try:
    my_timer.start()
    stdout, stderr = p.communicate()
    print stdout
finally:
    print stderr
    my_timer.cancel()
Error while running:
OSError: [Errno 2] No such file or directory
However, if I change c1 as shown below, it works fine:
c1 = 'curl'

With
c1 = 'curl'
use
cmd = [c1, '-H', 'google.com']
Popen treats each list element as a single argument, so 'curl -H' is looked up as the name of one executable, which is why you get the OSError.
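If you would rather start from a command string, the standard library's shlex.split can tokenize it into the list form Popen expects, one element per argument (a minimal sketch):

```python
import shlex

# shlex.split turns a shell-style command string into an argument list,
# one element per argument, suitable for subprocess.Popen.
cmd = shlex.split('curl -H google.com')
print(cmd)  # ['curl', '-H', 'google.com']
```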

Related

Python parallel subprocess commands while suppressing output

I am doing a simple IP scan using ping in Python. I can run commands in parallel, as demonstrated in this answer. However, I cannot suppress the output, since it uses Popen; I can't use check_output, since ping returns with an exit status of 2 if a host is down at a given address, which is the case for most addresses. Using a pipe is also out of the question, since too many processes are running concurrently.
Is there a way to run these child processes in python concurrently while suppressing output? Here is my code for reference:
def ICMP_scan(root_ip):
    host_list = []
    cmds = [('ping', '-c', '1', root_ip + str(block)) for block in range(0, 256)]
    try:
        res = [subprocess.Popen(cmd) for cmd in cmds]
        for p in res:
            p.wait()
    except Exception as e:
        print(e)
How about piping the process output to /dev/null?
Based on this answer:
import os
devnull = open(os.devnull, 'w')
subproc = subprocess.Popen(cmd, stdout=devnull, stderr=devnull)
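On Python 3.3 and newer you can skip managing the file handle yourself and pass subprocess.DEVNULL directly (a sketch; sys.executable stands in here for the ping command):

```python
import subprocess
import sys

# Discard both output streams without opening os.devnull manually.
proc = subprocess.Popen([sys.executable, '-c', 'print("unwanted output")'],
                        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
exit_code = proc.wait()
print(exit_code)  # 0
```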

Capture stdout of subprocess.Popen when an exception is raised

How can I capture the stdout/stderr from a subprocess.Popen CMD call when an exception is raised?
code snippet:
p_cmd = subprocess.Popen(CMD, bufsize=0, shell=True, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(cmd_stdo, cmd_stde) = p_cmd.communicate(timeout=60)
If CMD times out (runs for more than 60 seconds), how can I get its output in cmd_stdo or cmd_stde?
I tried reading them in a try/except block, but they are empty.
And I am pretty sure that there is output when CMD runs.
In case of time-out, the variables cmd_stdo and cmd_stde are never assigned to, because the exception happens before the assignment.
To make sure stdout and stderr are captured even in case of the exception, I'd capture to (temporary) files and read them into variables afterwards.
import subprocess
from tempfile import TemporaryFile as Tmp

CMD = ['echo "before sleep" ; sleep 7 ; echo "after sleep"']
with Tmp() as out, Tmp() as err:
    p_cmd = subprocess.Popen(CMD, bufsize=0, shell=True, stdin=None,
                             stdout=out, stderr=err)
    timed_out = False
    try:
        p_cmd.wait(timeout=5)
    except subprocess.TimeoutExpired:
        timed_out = True
    out.seek(0)
    err.seek(0)
    str_out = out.read()
    str_err = err.read()
print('Has timed out:', timed_out)
print('Stdout:', str_out)
print('Stderr:', str_err)
(When trying this out, play with the sleep and timeout times in the code to make the time-out happen or not.)
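If you would rather keep stdout=PIPE, the subprocess documentation suggests another pattern: catch TimeoutExpired, kill the child, then call communicate() a second time to collect whatever it wrote before dying. A sketch, with sys.executable standing in for the real CMD:

```python
import subprocess
import sys

# A child that prints something, flushes, then sleeps past our timeout.
child = [sys.executable, '-c',
         'import sys, time; print("before sleep"); sys.stdout.flush(); time.sleep(7)']
p = subprocess.Popen(child, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
try:
    out, err = p.communicate(timeout=2)
except subprocess.TimeoutExpired:
    p.kill()                    # stop the stuck child
    out, err = p.communicate()  # second call drains the buffered output
print(out)  # contains b'before sleep'
```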
Since I don't have enough reputation I have to ask for clarification this way.
Are you trying to run it on Windows or Linux?

output is not going to stdout

I have some executable which is a command processor. I'm trying to capture its response to commands. I use python subprocesses.
Below is my script:
import subprocess

# Open the subprocess
proc = subprocess.Popen('comproc.exe',
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)
# Write a command
proc.stdin.write('help cfg_open\n')
# Close the input stream so the subprocess knows to exit
proc.stdin.close()
data = 0
while data != "":
    data = proc.stdout.readline()
    print data
# Wait for subprocess to exit
exit_status = proc.wait()
The output ("Welcome to command proc (c) 2015 ...") printed before I issue "help cfg_open" is captured, while the response to "help cfg_open" is not.
The stderr is captured correctly if I issue some non-existent command.
Redirecting via cmd works perfectly:
c:\>comproc.exe >1.txt
help cfg_open
exit
c:\>
I get the whole output in 1.txt.
Would be grateful for any help!
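One common cause of this symptom is buffering on the pipes: the child may not flush its response, and a manual write/readline loop can hang or miss data. A usual workaround is to hand all input to communicate(), which writes it, closes stdin, and reads until EOF. A sketch, with a small Python child standing in for comproc.exe:

```python
import subprocess
import sys

# Stand-in for comproc.exe: echoes each command it reads, prefixed with 'OK'.
child = [sys.executable, '-c',
         'import sys\nfor line in sys.stdin: print("OK", line.strip())']
proc = subprocess.Popen(child, stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# communicate() sends the input, closes stdin, and collects all output,
# avoiding the deadlocks a manual write/readline loop can run into.
out, _ = proc.communicate(b'help cfg_open\n')
print(out)  # contains b'OK help cfg_open'
```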

Python subprocess.Popen() followed by time.sleep

I want to make a python script that will convert a TEX file to PDF and then open the output file with my document viewer.
I first tried the following:
import subprocess
subprocess.Popen(['xelatex', '--output-directory=Alunos/', 'Alunos/' + aluno + '_pratica.tex'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.Popen(['gnome-open', 'Alunos/'+aluno+'_pratica.pdf'], shell=False)
This way, the conversion from TEX to PDF works all right, but, as it takes some time, the second command (open file with Document Viewer) is executed before the output file is created.
So, I tried do make the program wait some seconds before executing the second command. Here's what I've done:
import subprocess
import time
subprocess.Popen(['xelatex', '--output-directory=Alunos/', 'Alunos/' + aluno + '_pratica.tex'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
time.sleep(10)
subprocess.Popen(['gnome-open', 'Alunos/'+aluno+'_pratica.pdf'], shell=False)
But, when I do so, the output PDF file is not created. I can't understand why. The only change was the time.sleep command. Why does it affect the Popen process?
Could anyone give me some help?
EDIT:
I've followed the advice from Faust and Paulo Bu and in both cases the result is the same.
When I run this command...
subprocess.call('xelatex --output-directory=Alunos/ Alunos/{}_pratica.tex'.format(aluno), shell=True)
... or this...
p = subprocess.Popen(['xelatex', '--output-directory=Alunos/', 'Alunos/' + aluno + '_pratica.tex'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.wait()
...the Xelatex program is run but doesn't make the conversion.
Strangely, when I run the command directly in the shell...
$ xelatex --output-directory=Alunos/ Alunos/name_pratica.tex
... the conversion works perfectly.
Here's what I get when I run the subprocess.call() command:
$ python my_file.py
Enter name:
name
This is XeTeX, Version 3.1415926-2.4-0.9998 (TeX Live 2012/Debian)
restricted \write18 enabled.
entering extended mode
(./Alunos/name_pratica.tex
LaTeX2e <2011/06/27>
Babel <v3.8m> and hyphenation patterns for english, dumylang, nohyphenation, loaded.
)
*
When I write the command directly in the shell, the output is the same, but it followed automatically by the conversion.
Does anyone know why it happens this way?
PS: sorry for the bad formatting. I don't know how to post the shell output properly.
If you need to wait for the termination of the program and you are not interested in its output, you should use subprocess.call:
import subprocess
subprocess.call(['xelatex', '--output-directory=Alunos/', 'Alunos/{}_pratica.tex'.format(aluno)])
subprocess.call(['gnome-open', 'Alunos/{}_pratica.pdf'.format(aluno)])
EDIT:
Also it is generally a good thing to use English when you have to name variables or functions.
If the xelatex command works in a shell but fails when you call it from Python, then xelatex might be blocked on its output: you do not read the pipes despite setting stdout/stderr to PIPE. On my machine the pipe buffer is 64KB, so if the xelatex output is smaller than that, it should not block.
You could redirect the output to os.devnull instead:
import os
import webbrowser
from subprocess import STDOUT, check_call

try:
    from subprocess import DEVNULL  # py3k
except ImportError:
    DEVNULL = open(os.devnull, 'w+b')

basename = aluno + '_pratica'
output_dir = 'Alunos'
root = os.path.join(output_dir, basename)
check_call(['xelatex', '--output-directory', output_dir, root + '.tex'],
           stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)
webbrowser.open(root + '.pdf')
check_call is used to wait for xelatex and raise an exception on error.
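To see the difference from a bare Popen: check_call blocks until the child exits and raises subprocess.CalledProcessError on a nonzero status, so failures are not silently ignored. A small sketch (sys.executable stands in for xelatex):

```python
import subprocess
import sys

# check_call returns None on success and raises on a nonzero exit status.
rc = None
try:
    subprocess.check_call([sys.executable, '-c', 'raise SystemExit(3)'])
except subprocess.CalledProcessError as e:
    rc = e.returncode
print(rc)  # 3
```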

Failing to capture stdout from application

I have the following script:
import subprocess
arguments = ["d:\\simulator","2332.txt","2332.log", "-c"]
output=subprocess.Popen(arguments, stdout=subprocess.PIPE).communicate()[0]
print(output)
which gives me b'' as output.
I also tried this script:
import subprocess
arguments = ["d:\\simulator","2332.txt","atp2332.log", "-c"]
process = subprocess.Popen(arguments,stdout=subprocess.PIPE)
process.wait()
print(process.stdout.read())
print("ERROR:" + str(process.stderr))
which gives me the output: b'', ERROR:None
However, when I run this at the cmd prompt, I get five lines of text:
d:\simulator atp2332.txt atp2332.log -c
I have added to the simulator a message box which pops up when it launches. This is presented in all three cases, so I know that I successfully launch the simulator. However, the Python scripts are not capturing the stdout.
What am I doing wrong?
Barry.
If possible (i.e. the output is not an endless stream of data), you should use communicate(), as noted in the docs.
Try this:
import subprocess
arguments = ["d:\\simulator","2332.txt","atp2332.log", "-c"]
process = subprocess.Popen(arguments, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
sout, serr = process.communicate()
print(sout)
print(serr)
The following code gives me text output on stdout.
Perhaps you could try it, and then substitute your command for help
import subprocess

arguments = ["help", "2332.txt", "atp2332.log", "-c"]
process = subprocess.Popen(arguments, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
process.wait()
print('Return code:', process.returncode)
print('stdout:', process.stdout.read())
print('stderr:', process.stderr.read())
