Checking whether a command produced output - python

I am using the following call for executing the 'aspell' command on some strings in Python:
r, w, e = popen2.popen3("echo " + str(m[i]) + " | aspell -l")
I want to test whether the call succeeded by looking at the stdout file object r: if there is no output, the command was successful.
What is the best way to test that in Python?
Thanks in advance.

Best is to use the subprocess module of the standard Python library -- popen2 is old and not recommended, and the subprocess docs describe how to replace the old popen2 functions.
Anyway, in your existing code, if r.read(1): is a fast way to test whether r has any content at all (if you don't care what that content specifically is).
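A minimal sketch of the same check using subprocess instead (assuming m[i] is the string to check, as in the question, and that aspell's list mode is what you want):

import subprocess

# Feed the text to aspell's list mode on stdin instead of going through echo;
# aspell prints the misspelled words, so empty output means everything passed.
proc = subprocess.run(['aspell', 'list'], input=str(m[i]),
                      capture_output=True, text=True)
ok = not proc.stdout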

Why don't you use aspell -a?
You could use subprocess as indicated by Alex, but keep the pipe open. Follow the directions for using the pipe API of aspell, and it should be pretty efficient.
The upside is that you won't have to check for an empty line. You can always read from stdout, knowing that you will get a response. This takes care of a lot of problematic race conditions.
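A minimal sketch of that pipe mode from Python (assuming aspell is on your PATH; the protocol details are in aspell's manual):

import subprocess

# aspell -a speaks the ispell pipe protocol: after a version banner, each
# input line yields one result line per word plus a blank terminator line,
# so a single process can be reused for many checks.
pipe = subprocess.Popen(['aspell', '-a'], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True, bufsize=1)
pipe.stdout.readline()                   # consume the version banner

def check(word):
    pipe.stdin.write('^' + word + '\n')  # leading ^ escapes protocol commands
    pipe.stdin.flush()
    result = pipe.stdout.readline().strip()
    pipe.stdout.readline()               # consume the blank terminator line
    return result == '*'                 # '*' means correctly spelled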

subprocess.Popen and relative directories

I am writing a script to open notepad.exe using subprocess.Popen()
import subprocess
command = '%windir%\system32\\notepad.exe'
process = subprocess.Popen(command)
output = process.communicate()
print(output[0])
This throws a FileNotFoundError
Is it possible to change/add to the above code to make it work with relative paths?
I did try running the script from C:\Windows after moving it there, which failed again. I also set shell=True, but that failed as well.
Writing a similar script using os.popen() works fine with relative paths, regardless of which directory the script is run from, but as far as I understand os.popen is not the way forward..
Early steps in the world of programming/Python. Any input much appreciated.
Use os.path.expandvars to expand %windir%:
command = os.path.expandvars('%windir%\\system32\\notepad.exe')
The result is a path that then can be passed to subprocess.Popen.
subprocess.Popen does not expand environment variables such as %windir%. The shell might, but you really should not depend on shell=True for that.
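Putting the two together, a minimal sketch (same path as in the question):

import os
import subprocess

# expand %windir% ourselves, then hand the result to Popen as a list
command = os.path.expandvars(r'%windir%\system32\notepad.exe')
process = subprocess.Popen([command])
process.wait()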
Pro tip: whenever you get an error asking the system to execute a command, print the command (and, if applicable, the current working directory). The results will often surprise you.
In your case, I suspect you're also missing a backslash. Use this instead:
command = '%windir%\\system32\\notepad.exe'
Before you make that change, try printing the value of command immediately after assignment. (Python happens to leave the unrecognized escape '\s' intact, so the string survives here, but relying on that is fragile and deprecated in recent Python versions; always double your backslashes in non-raw strings.) The FileNotFoundError itself comes from the unexpanded %windir%, as the other answer explains.
HTH.
You could use raw strings to avoid having to double-up your backslashes.
command = r'%windir%\system32\notepad.exe'

Get the output from python tests in ruby script

I have a Python project with tests in different folders and files. I want to write a script that executes all of the project's tests and prints a summary. I'd like it to be a Ruby script (because I know Ruby better than Python and, for now, enjoy it more): it would collect the output from the tests, parse it with Ruby, and print something like "48 tests run in total, all ok" instead of Python's own output.
Long story short: I want a way to get the output of python test_something.py into a variable or a file, with nothing printed on the screen.
Here is what I have tried:
tests = Dir.glob("**/test_*")
wd = Dir.pwd
output = ''
tests.each do |test|
  Dir.chdir(File.dirname(test))
  # output += `python #{File.basename(test)}`
  # system("python #{File.basename(test)} >> f.txt")
  Dir.chdir(wd)
end
I tried both of the commented-out lines, but each still prints the results to the screen: with the first, the output variable ends up empty, and with the second, the file is created but is empty as well :(
Any ideas? Thank you very much in advance! :)
The test framework probably sent the results to STDERR. Try Open3.capture3 to capture standard error as well:
require 'open3'
...
stdout, stderr, status = Open3.capture3(%{python "#{File.basename(test)}"})
and write the standard output and standard error to the destination:
File.write("f.txt", stdout + stderr)
You may check status.success? to see whether you wrote the external command correctly. However, the test framework may return a non-zero exit code on failed tests; in that case, check stderr to see the actual error output.
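To see for yourself from the Python side that unittest writes its results to standard error, a quick sketch (test_something.py is the file name from the question):

import subprocess

result = subprocess.run(['python', 'test_something.py'],
                        capture_output=True, text=True)
print('stdout:', repr(result.stdout))   # typically empty
print('stderr:', repr(result.stderr))   # the '...' progress and OK/FAILED summary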
Use Open3.capture2 as below (note that capture2 only captures standard output, so if the results go to STDERR you will still need capture3):
require 'open3'
output, _ = Open3.capture2("python #{File.basename(test)}")
To write output to a file do as below:
File.write("f.txt", output)

Calling a subprocess with mixed data type arguments in Python

I am a bit confused as to how to get this done.
What I need to do is call an external command, from within a Python script, that takes as input several arguments, and a file name.
Let's call the executable that I am calling "prog", the input file "file", so the command line (in Bash terminal) looks like this:
$ prog --{arg1} {arg2} < {file}
In the above {arg1} is a string, and {arg2} is an integer.
If I use the following:
#!/usr/bin/python
import subprocess as sbp
sbp.call(["prog","--{arg1}","{arg2}","<","{file}"])
The result is an error output from "prog", where it claims that the input is missing {arg2}
The following produces an interesting error:
#!/usr/bin/python
import subprocess as sbp
sbp.call(["prog","--{arg1} {arg2} < {file}"])
all the spaces seem to have been removed from the second string, and an equals sign appended at the very end:
command not found --{arg1}{arg2}<{file}=
None of this behavior makes any sense to me, and there isn't much to go on in the Python documentation found online. Please note that replacing sbp.call with sbp.Popen does not fix the problem.
The issue is that < {file} isn't actually an argument to the program, but is syntax for the shell to set up redirection. You can tell Python to use the shell, or you can set up the redirection yourself.
from subprocess import check_call

# have the shell interpret the redirection
check_call('wc -l < /etc/hosts', shell=True)

# set up the redirection in Python
with open('/etc/hosts', 'r') as f:
    check_call(['wc', '-l'], stdin=f.fileno())
The advantage of the first method is that it's quicker and easier to type. There are a lot of disadvantages, though: it's potentially slower, since you're launching a shell; it's potentially non-portable, because it depends on the operating system shell's syntax; and it can easily break when there are spaces or other special characters in filenames.
So the second method is preferred.
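Applied to the command from the question (prog, arg1, arg2 and the file name are the question's placeholders, not real values), the second method would look like:

import subprocess

# redirect stdin from the file ourselves instead of relying on the shell's <
with open('file.with.body.info.txt', 'r') as f:
    subprocess.check_call(['prog', '--' + arg1, str(arg2)], stdin=f)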

How can I get a file to autorun before I run any command in ipython?

I have a python file that holds a bunch of functions that I'm continually modifying and then testing in ipython. My current workflow is to run "%run myfile.py" before each command. However, ideally, I'd like that just to happen automatically. Is that possible?
If you really want to use rlwrap for this, write a filter! Just define an input_handler that adds %run myfile.py to the input, and an echo_handler to echo your original input so that you won't see this happening (man RlwrapFilter tells you all you ever wanted to know about filter writing, and then some).
But isn't it more elegant to solve this within ipython itself, using IPython.hooks.pre_runcode_hook?
import IPython
ip = IPython.ipapi.get()

def runMyFile(self):
    ip.magic('%run myFile.py')
    raise IPython.ipapi.TryNext()

ip.set_hook('pre_runcode_hook', runMyFile)
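Note that the ipapi hook API above is from old IPython (0.x); on modern IPython the equivalent is the events API. A sketch, assuming you drop it into a file in your IPython startup directory (the callback signature varies across versions, hence *args):

def run_my_file(*args):
    get_ipython().run_line_magic('run', 'myfile.py')

get_ipython().events.register('pre_run_cell', run_my_file)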
I can't find any elegant way. This is the ugly way. Run:
rlwrap awk '{print "%run myfile.py"} {print} {fflush()}' |ipython
This reads from STDIN, but prints the command you wanted before each command. fflush is there to disable buffering and pass things to ipython immediately. rlwrap is there to keep the readline bindings; you can remove it if you don't have it, but this will be less convenient (no arrow keys, etc.).
Mind that you will have to type your commands before the ipython prompt appears. There might be other more annoying things which break, I haven't tested thoroughly.

Python subprocess: why won't the list of arguments work analogously to the full shell string?

Thanks in advance for any help. I am new to Python, but not particularly new to scripting. I am trying to run a simple, automated email program, but the email module seems to be installed incorrectly on our system (I don't have 75% of the functions described in the Python examples, only message_from_string and message_from_file), and smtplib is overly complicated for what I need.
In fact, in simple bash terms, all I need is:
/bin/email -s "blah" "recipients" < file.with.body.info.txt
or,
echo "my body details" | /bin/email -s "blah" "recipients"
so that I can avoid having to write to a file just to send a message.
I tried using subprocess, either call or Popen, and the only way I could eventually get things to work is if I used:
subprocess.call('/bin/mail -s "blah" "recipients" < file.with.body.info.txt', shell=True)
A few things I specifically don't like about this method:
(1) I couldn't break things into a list or tuple the way it's supposed to work, so I lost the whole advantage of subprocess, as I understand it, in keeping things secure. If I tried:
subprocess.call(['/bin/mail', '-s', subjVariable, recipVariable, '<', 'file.with.body.info.txt'], shell=True)
it would fail. Similarly, it would fail if I tried to use the pipe, '|', instead of reading from a file, and also if I used '-cmd' instead of a pipe. The "failure" was usually that '<' and 'file.with.body.info.txt' were read as if they were further recipients. In other words, whether I passed shell=True or not, subprocess did not interpret the special characters in the list as the special characters they are: '<' wasn't recognized as input redirection from a file, etc., unless I kept everything in one large string.
What I would ideally like to be able to do, because it seems more secure, as well as more flexible, is something like this:
subprocess.call(['/bin/echo', varWithBody, '|', '/bin/mail', '-s', subjVariable, recipVariable,])
but it seems that pipes are not understood at all with subprocess and I cannot figure out how to pipe things together while stuck behind python.
Any suggestions? All help is welcome, except attempts to explain how to use the 'email' or 'smtplib' modules. Regardless of this particular application, I really want to learn how to use subprocess better, so that I can tie together disparate programs. My understanding is that python should be fairly decent at that.
Thanks! Mike
The Python docs seem to cover this situation.
What I'd probably do is something like the following:
from subprocess import Popen, PIPE

readBody = Popen(["/bin/echo", varWithBody], stdout=PIPE)
mail = Popen(["/bin/mail", "-s", subjVariable, recipVariable],
             stdin=readBody.stdout, stdout=PIPE)
readBody.stdout.close()  # let /bin/echo receive SIGPIPE if mail exits first
output = mail.communicate()[0]
| and < are not arguments; they are shell redirections. To replace the | in your code, see the "Replacing shell pipeline" example in the subprocess documentation.
To replace <, use:
subprocess.Popen(["command", "args"], stdin=open("file.txt", 'r'))
eg.
subprocess.Popen(["cat"], stdin=open("file.txt", 'r')) is the same as cat < file.txt
<, | etc. are features of the shell, not of the operating system. Therefore something like subprocess won't know anything about them -- internally it just passes the list of arguments to the equivalent OS functions. The way to do input/output redirection with subprocess is via the stdin, stdout and stderr parameters. You can pass in a file object (it has to contain a real file descriptor, but normally opened files always do), a naked file descriptor, or a pipe object.
The manual has an example for replacing a pipeline, just replace the pipe with a file object and you should be all set.
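For the mail case specifically you don't even need /bin/echo: Popen can feed the body through the process's stdin directly. A sketch using the question's variables:

from subprocess import Popen, PIPE

mail = Popen(['/bin/mail', '-s', subjVariable, recipVariable],
             stdin=PIPE, text=True)
mail.communicate(varWithBody)  # write the body, close stdin, wait for exit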
You need to run the command through the shell using the shell argument:
>>> import subprocess
>>> subprocess.call('ls -a | cat', shell=True)
.
..
.git
.gitignore
doc
generate_rands.py
infile1
infile2
infile3
matrix.pyc
matrix.py~
median.py
problems
simple_median.py
test
test_matrix.py
test_matrix.py~
test_median.py
