Use wget from python with Popen - python

I am writing a Python (2.7) script that checks if some files are missing and downloads them via wget. Everything works fine, but after the download has finished and the script should exit, the bash shell I started the Python script from is not displayed correctly.
I have the cursor and can enter things, but the standard prompt is not showing up. I have to resize the terminal window to make the prompt display correctly. What might be the reason for this?
tilenames = ['File1', 'File2', ...]
web_url = 'http://...'
for t in tilenames:
    try:
        open(t, 'r')
    except IOError:
        print 'file %s not found.' % (t)
        command = ['wget', '-P', './SRTM/', web_url + t]
        output = Popen(command, stdout=subprocess.PIPE)
print "Done"
I think it has something to do with the way the wget process is invoked. The last command, print "Done", actually runs before wget has written all of its output to the shell.

Just add a .communicate() after creating the Popen object, like this:
tilenames = ['File1', 'File2', ...]
web_url = 'http://...'
for t in tilenames:
    try:
        open(t, 'r')
    except IOError:
        print 'file %s not found.' % (t)
        command = ['wget', '-P', './SRTM/', web_url + t]
        p = Popen(command, stdout=subprocess.PIPE)
        stdout, stderr = p.communicate()
print "Done"
communicate() will return the output written to stdout, and None for stderr, because stderr is not forwarded to a PIPE (you will see it on the terminal instead).
Btw., you should close opened file objects; and to check whether a file exists, you can use the functions in os.path, e.g. os.path.exists.
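A minimal sketch of that existence check with os.path.exists (the file names here are the question's placeholders):

```python
import os.path

tilenames = ['File1', 'File2']  # placeholder names from the question
missing = [t for t in tilenames if not os.path.exists(t)]
for t in missing:
    print('file %s not found.' % t)
```

This avoids opening (and forgetting to close) a file object just to test for existence.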

wget writes its progress statistics to stderr, which is what scrambles your terminal. stdout and stderr are flushed at different times, so it is possible that your Done shows up before the output from wget.
A fix would be to call wget with -q, or to also redirect stderr, e.g. using stderr=open(os.devnull, "w").
Additionally, you should use .communicate() to avoid pipe issues.
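A sketch of the stderr-redirect variant. Since wget may not be installed everywhere, the child process here is a stand-in that writes noise to stderr and real data to stdout:

```python
import os
import subprocess
import sys

# Stand-in for wget: writes progress noise to stderr, data to stdout.
command = [sys.executable, '-c',
           'import sys; sys.stderr.write("progress noise\\n"); print("payload")']

with open(os.devnull, 'w') as devnull:
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=devnull)
    stdout, _ = p.communicate()  # waits for the child and drains the pipe

print(stdout.decode().strip())
```

The stderr noise never reaches the terminal, and communicate() guarantees the child has finished before the script moves on.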

You could use os.system (but see http://docs.python.org/release/2.5.2/lib/node536.html). Basically, Popen is intended to allow your Python process to read from the command's output. You don't seem to need that, so the fragment below should get you what you want:
import os
import subprocess
p = subprocess.Popen(['wget', 'http://www.aol.com'], stdout=subprocess.PIPE)
os.waitpid(p.pid, 0)
print "done"

If you add the -q option (quiet mode) to wget, it works too.

Related

Run cmd file using python

I have a cmd file "file.cmd" containing hundreds of commands.
Example
pandoc --extract-media -f docx -t gfm "sample1.docx" -o "sample1.md"
pandoc --extract-media -f docx -t gfm "sample2.docx" -o "sample2.md"
pandoc --extract-media -f docx -t gfm "sample3.docx" -o "sample3.md"
I am trying to run these commands using a script so that I don't have to go to a file and click on it.
This is my code, and it results in no output:
import os

file1 = open('example.cmd', 'r')
Lines = file1.readlines()
# print(Lines)
for i in Lines:
    print(i)
    os.system(i)
You don't need to read the cmd file line by line. You can simply try the following:
import os
os.system('myfile.cmd')
or using the subprocess module:
import subprocess
p = subprocess.Popen(['myfile.cmd'], shell=True, close_fds=True)
stdout, stderr = p.communicate()
Example:
myfile.cmd:
@ECHO OFF
ECHO Greetings From Python!
PAUSE
script.py:
import os
os.system('myfile.cmd')
The cmd will open with:
Greetings From Python!
Press any key to continue ...
You can debug the issue by checking the return code:
import os
return_code = os.system('myfile.cmd')
assert return_code == 0  # a return code of 0 indicates success
Note: os.system works by calling the C system() function and is subject to a 16-bit limit of about 65533 on the command's arguments; exceeding it results in the return code 32512 (which corresponds to exit code 127).
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function (os.system('command')).
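For illustration, a sketch of the subprocess equivalent (subprocess.run requires Python 3.5+, capture_output requires 3.7+; the echo command here is a portable stand-in for myfile.cmd):

```python
import subprocess

# subprocess.run replaces os.system('command'); shell=True makes the
# shell interpret the command string, just as the C system() call would.
result = subprocess.run('echo Greetings From Python!',
                        shell=True, capture_output=True, text=True)
print(result.returncode)     # 0 indicates success
print(result.stdout.strip())
```

Unlike os.system, this gives you the return code and the captured output as separate fields.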
Since it is a command file (.cmd) that only the shell can run, the shell argument must be set to True. And since you are setting shell to True, the command needs to be a string, not a list.
Use the Popen method to spawn a new process and communicate() to wait on that process (you can give it a timeout as well). If you wish to communicate with the child process, provide the PIPEs (as in my example below, but you don't have to!).
The code below is for Python 3.3 and beyond:
import subprocess
try:
    proc = subprocess.Popen('myfile.cmd', shell=True,
                            stderr=subprocess.PIPE, stdout=subprocess.PIPE)
    outs, errs = proc.communicate(timeout=15)  # time out the execution if you want; you don't have to
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
For older Python versions:
import time

proc = subprocess.Popen('myfile.cmd', shell=True)
t = 10
while proc.poll() is None and t >= 0:
    print('Still waiting')
    time.sleep(1)
    t -= 1
if proc.poll() is None:  # only kill if it is still running after the timeout
    proc.kill()
In both cases (both Python versions), if you don't need the timeout feature and you don't need to interact with the child process, then just use:
proc = subprocess.Popen('myfile.cmd', shell=True)
proc.communicate()

Redirecting shell command output to a file does not work using subprocess.Popen in Python

I am using Python 2.6.6 and failed to redirect the Beeline (Hive) SQL query output, which returns multiple rows, to a file on Unix using ">". For simplicity's sake, I replaced the SQL query with a simple ls command on the current directory, outputting to a text file.
Please ignore the syntax of the function sendfile. I want help tweaking the function callcmd to pipe its stdout to the text file.
def callcmd(cmd, shl):
    logging.info('> ' + ' '.join(map(str, cmd)))
    #return 0;
    start_time = time.time()
    command_process = subprocess.Popen(cmd, shell=shl, stdin=subprocess.PIPE,
                                       stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                                       universal_newlines=True)
    command_output = command_process.communicate()[0]
    logging.info(command_output)
    elapsed_time = time.time() - start_time
    logging.info(time.strftime("%H:%M:%S", time.gmtime(elapsed_time)) + ' = time to complete (hh:mm:ss)')
    if command_process.returncode != 0:
        logging.error('ERROR ON COMMAND: ' + ' '.join(map(str, cmd)))
        logging.error('ERROR CODE: ' + str(command_process.returncode))
    return command_process.returncode

cmd = ['ls', ' >', '/home/input/xyz.txt']
ret_code = callcmd(cmd, False)
Your command (i.e. cmd) could be ['sh', '-c', 'ls > ~/xyz.txt']. That would mean that the output of ls is never passed to Python, it happens entirely in the spawned shell – so you can't log the output. In that case, I'd have used return_code = subprocess.call(cmd), no need for Popen and communicate.
Equivalently, assuming you use bash or similar, you can simply use
subprocess.call('ls > ~/test.txt', shell=True)
If you want to access the output, e.g. for logging, you could use
s = subprocess.check_output(['ls'])
and then write that to a file like you would regularly in Python. To check for a non-zero exit code, handle the CalledProcessError that is raised in such cases.
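A short sketch of that approach, reusing the question's ls example (xyz.txt is the question's file name; the listed directory here is simply /):

```python
import subprocess

try:
    # check_output raises CalledProcessError on a non-zero exit status
    listing = subprocess.check_output(['ls', '/'])  # returns bytes on Python 3
    with open('xyz.txt', 'wb') as f:
        f.write(listing)
except subprocess.CalledProcessError as e:
    print('command failed with exit code', e.returncode)
```

The output never touches the shell, so no shell redirection is involved; Python itself writes the file.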
Here the stdout captured in command_output is written to a file. You don't need any shell redirection, although an alternative would be to have Python print to stdout and then redirect that to a file in your shell.
#!/usr/bin/python
import logging
import subprocess

cmd = ['ls']
command_process = subprocess.Popen(
    cmd,
    shell=False,  # cmd is a list, so let Popen execute it directly
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    universal_newlines=True
)
command_output = command_process.communicate()[0]
if command_process.returncode != 0:
    logging.error('ERROR ON COMMAND: ' + ' '.join(map(str, cmd)))
    logging.error('ERROR CODE: ' + str(command_process.returncode))
f = open('listing.txt', 'w')
f.write(command_output)
f.close()
I added this piece of code to my code and it works fine. Thanks to @Snohdo:
f = open('listing.txt','w')
f.write(command_output)
f.close()

Redirecting subprocess.call output not working

I'm attempting to write a python script that acts as a simple UNIX bash command shell. I need to block all output from my script and redirect it to one of two files (errors or output). Unfortunately, when running my bash command on a directory I get the output "grep: test_dir: Is a directory". This is after redirecting my sys.stdout to a temporary file. My theory is that grep itself is resetting the output and printing this error as opposed to simply returning the output. Is there a way where I can redirect any and all output to a file? Or am I just misunderstanding how to redirect in python?
Here is my current code.
try:
    temp_out = sys.stdout
    sys.stdout = open('./temp.txt', 'w+')
    output = subprocess.call(['grep', search[0], search[1]])
    sys.stdout = temp_out
except subprocess.CalledProcessError as err:
    print output
    err_output = err.returncode
    print "In error"
Do not replace sys.stdout; open a normal file and pass it as the stdout parameter.
with open('destination.txt', 'w') as redirected_output:
    p = subprocess.Popen(['ls', '-l'], stdout=redirected_output)
    p.communicate()
Just worked for me.
From the Python docs:
stdout is used for the output of print() and expression statements and for the prompts of input();
My guess is that it doesn't apply to subprocesses.
One way to do this:
p = subprocess.Popen(['grep', search[0], search[1]], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
sys.stdout.write(out)  # you have to decode bytes to str on Python 3
sys.stderr.write(err)

Python subprocess.Popen() followed by time.sleep

I want to make a python script that will convert a TEX file to PDF and then open the output file with my document viewer.
I first tried the following:
import subprocess
subprocess.Popen(['xelatex', '--output-directory=Alunos/', 'Alunos/' + aluno + '_pratica.tex'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.Popen(['gnome-open', 'Alunos/'+aluno+'_pratica.pdf'], shell=False)
This way, the conversion from TEX to PDF works all right, but, as it takes some time, the second command (open file with Document Viewer) is executed before the output file is created.
So, I tried do make the program wait some seconds before executing the second command. Here's what I've done:
import subprocess
import time
subprocess.Popen(['xelatex', '--output-directory=Alunos/', 'Alunos/' + aluno + '_pratica.tex'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
time.sleep(10)
subprocess.Popen(['gnome-open', 'Alunos/'+aluno+'_pratica.pdf'], shell=False)
But, when I do so, the output PDF file is not created. I can't understand why. The only change was the time.sleep command. Why does it affect the Popen process?
Could anyone give me some help?
EDIT:
I've followed the advice from Faust and Paulo Bu and in both cases the result is the same.
When I run this command...
subprocess.call('xelatex --output-directory=Alunos/ Alunos/{}_pratica.tex'.format(aluno), shell=True)
... or this...
p = subprocess.Popen(['xelatex', '--output-directory=Alunos/', 'Alunos/' + aluno + '_pratica.tex'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.wait()
...the Xelatex program is run but doesn't make the conversion.
Strangely, when I run the command directly in the shell...
$ xelatex --output-directory=Alunos/ Alunos/name_pratica.tex
... the conversion works perfectly.
Here's what I get when I run the subprocess.call() command:
$ python my_file.py
Enter name:
name
This is XeTeX, Version 3.1415926-2.4-0.9998 (TeX Live 2012/Debian)
restricted \write18 enabled.
entering extended mode
(./Alunos/name_pratica.tex
LaTeX2e <2011/06/27>
Babel <v3.8m> and hyphenation patterns for english, dumylang, nohyphenation, loaded.
)
*
When I write the command directly in the shell, the output is the same, but it followed automatically by the conversion.
Does anyone know why it happens this way?
PS: sorry for the bad formatting. I don't know how to post the shell output properly.
If you need to wait for the termination of the program and you are not interested in its output, you should use subprocess.call:
import subprocess
subprocess.call(['xelatex', '--output-directory=Alunos/', 'Alunos/{}_pratica.tex'.format(aluno)])
subprocess.call(['gnome-open', 'Alunos/{}_pratica.pdf'.format(aluno)])
EDIT:
Also it is generally a good thing to use English when you have to name variables or functions.
If the xelatex command works in a shell but fails when you call it from Python, then xelatex might be blocked on output in your Python code: you do not read the pipes despite setting stdout/stderr to PIPE. On my machine the pipe buffer is 64KB, so if the xelatex output is smaller than that, it should not block.
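A small demonstration of that blocking hazard and why communicate() avoids it; the child here is just a Python one-liner printing more than a typical 64 KiB pipe buffer:

```python
import subprocess
import sys

# A child process that writes well over a typical 64 KiB pipe buffer.
child = [sys.executable, '-c', 'print("x" * 200000)']

p = subprocess.Popen(child, stdout=subprocess.PIPE)
out, _ = p.communicate()  # drains the pipe, so the child never blocks on write
print(len(out) >= 200000)
```

If the parent instead called p.wait() without reading the pipe, the child would fill the buffer and block forever, deadlocking both processes.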
You could redirect the output to os.devnull instead:
import os
import webbrowser
from subprocess import STDOUT, check_call
try:
    from subprocess import DEVNULL  # py3k
except ImportError:
    DEVNULL = open(os.devnull, 'w+b')

basename = aluno + '_pratica'
output_dir = 'Alunos'
root = os.path.join(output_dir, basename)
check_call(['xelatex', '--output-directory', output_dir, root + '.tex'],
           stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)
webbrowser.open(root + '.pdf')
check_call is used to wait for xelatex and raise an exception on error.
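For illustration, check_call's error behavior can be sketched with a command that always fails (false is a POSIX stand-in for a failing xelatex run):

```python
import subprocess

try:
    subprocess.check_call(['false'])  # always exits with a non-zero status
    code = 0
except subprocess.CalledProcessError as e:
    code = e.returncode
    print('command failed with exit code', code)
```

This is why check_call is handy here: a broken xelatex run surfaces as an exception instead of silently producing no PDF.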

Calling external program through Python and passing the output to stdout

How can I call an external program written as a bash script, in such a way that the output produced by that script is available on sys.stdout, so that I can log the output to a file through Python?
For example. I now call them through the following snippet
if os.name == 'nt':
    path = module_dir_run + '/run.bat'
else:
    path = module_dir_run + '/run.sh'
if os.path.isfile(path):
    if (splitargs.arg):
        try:
            call([path, splitargs.arg])
        except:
            pass
    else:
        try:
            call([path])
        except:
            pass
else:
    print "Not found : " + path
When I set sys.stdout = file(filename, "w"), it stores whatever Python itself outputs, but not what the script outputs.
NOTE: The script I am trying to run is an interactive script, so after the call has ended and control has come back to Python, how can I get everything that was written to the terminal?
Any suggestions?
I always use subprocess.Popen() to run another program from inside a Python script. Example:
import subprocess
print "Starting process..."
process = subprocess.Popen(["echo", "a"], shell=False)
process.wait()
print "Done"
You can redirect the output of process to another file like this:
import subprocess
print "Starting process..."
with open("./out.log", "w") as f:
    process = subprocess.Popen(["echo", "a"], shell=False, stdout=f)
    process.wait()
print "Done"
Redirecting stderr is also possible through 'stderr' parameter.
When you have the sys.stdout in current script redirected to write to your own file, you can do this to redirect it in your subprocess too:
import subprocess
import sys
sys.stdout = open("./out.log", "w")
print "Starting process..."
sys.stdout.flush()
process = subprocess.Popen(["echo", "a"], shell=False, stdout=sys.stdout)
process.wait()
print "Done"
You can write the outputs of such a call into a variable using e.g.:
variable = subprocess.Popen(
    ['ls', 'mydir/'],  # the command and its arguments, as a list
    stdout=subprocess.PIPE).communicate()[0]
As Martijn Pieters already states, you'll have to provide an alternative stdout for your call() call.
There are essentially 2 ways:
* either provide an already opened file (as you have changed your sys.stdout, this will probably be fine)
* use stdout=subprocess.PIPE and read and treat the stdout yourself. (Not via subprocess.call() then, but via the object returned by subprocess.Popen().)
Note that changing sys.stdout won't really change your process's stdout from the OS's point of view; it just tells your Python program to send its output to a different file object.
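A quick sketch of that distinction: rebinding sys.stdout captures Python's own print output, but a child process still inherits the OS-level file descriptor:

```python
import io
import subprocess
import sys

buf = io.StringIO()
old_stdout = sys.stdout
sys.stdout = buf
print('captured by Python')              # goes into buf, not the terminal
subprocess.call(['echo', 'from child'])  # still writes to the real fd 1
sys.stdout = old_stdout

print('buffer holds: %r' % buf.getvalue())
```

To actually redirect the child, you must pass an OS-level file (e.g. stdout=open(...)) to Popen, as the other answers show.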
This might be of use, particularly because most of the other answers overlook that you need to redirect your parent shell's stdin to the child process's stdin when the programs/scripts you're calling are interactive:
Python and subprocess input piping
