Passing a sentence as an argument in a subprocess command - Python

I am executing a Python script inside another script and want to pass two arguments to it:
lines = [line.strip('\n') for line in open('que.txt')]
for l in lines:
    print 'my sentence : '
    print l
    # os.system("find_entity.py")  # this also does not work
    subprocess.call("python find_entity.py l 1", shell=True)  # this runs, but l is passed as the literal string "l", not the sentence that was read
What is the correct approach?
Update:
lines = [line.strip('\n') for line in open('q0.txt')]
for line_num, line in enumerate(lines):
    cmd = ["python", "find_entity.py", line]
    subprocess.call(cmd, shell=True)
With this, it just drops to the interactive Python prompt instead of running the script.

You can use one of Python's string substitution mechanisms:
C-style (printf-style) string formatting,
which in your case would look like
subprocess.call("python find_entity.py %s %d" % (line, line_num), shell=True)
str.format-style string formatting
subprocess.call("python find_entity.py {} {}".format(line, line_num), shell=True)
or string templates.
(A single command string like these needs shell=True; without it, POSIX systems treat the whole string as the name of a single executable.)
Or, since you are using the subprocess library anyway, pass the arguments as a list to the call function:
subprocess.call(["python", "find_entity.py", line, str(line_num)])
Note that line and line_num are passed without any quoting or formatting; their values become separate arguments exactly as they are.
This solution is recommended because it gives cleaner, more obvious code and handles the parameters correctly (whitespace, quoting, and so on).
However, if you do want the shell=True flag for subprocess.call, the list-of-arguments form will not work as expected; use one of the string substitution solutions instead. Keep in mind that shell=True exists to expose shell features such as piping and home-directory expansion (~). If you are writing a big, complicated script, prefer Python libraries for that work instead of relying on shell=True.
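If you do end up building a single command string for shell=True, it is worth quoting the interpolated sentence so spaces and shell metacharacters survive; a minimal sketch (shlex.quote is Python 3; on Python 2, pipes.quote provides the same behaviour):
import shlex
import subprocess

line = "my sentence with spaces"   # example value; in the question this comes from que.txt
line_num = 1

# shlex.quote wraps the sentence so the shell sees it as a single argument
cmd = "python find_entity.py %s %d" % (shlex.quote(line), line_num)
subprocess.call(cmd, shell=True)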

If you already have the command name and its arguments in separate variables, or already in a list, you almost never want to use shell=True. (It's not an error, but with a list on POSIX only the first element is treated as the command, which is generally not what you want.)
cmd = ["python", "find_entity.py", line]
subprocess.call(cmd)

You need the contents of the variable l (I renamed it to line), not the string literal "l":
for line_num, line in enumerate(lines):
    cmd = ["python",
           "find_entity.py",
           line,
           str(line_num)]
    subprocess.call(cmd)
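Putting the pieces together, a minimal sketch of the full loop from the question (assuming que.txt and find_entity.py as in the question):
import subprocess

with open('que.txt') as f:
    lines = [line.strip('\n') for line in f]

for line_num, line in enumerate(lines):
    # Each list element becomes exactly one argument, so the whole
    # sentence reaches find_entity.py as sys.argv[1] and the index as sys.argv[2].
    subprocess.call(["python", "find_entity.py", line, str(line_num)])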

Related

subprocess.call with command having embedded spaces and quotes

I would like to retrieve output from a shell command that contains spaces and quotes. It looks like this:
import subprocess
cmd = "docker logs nc1 2>&1 |grep mortality| awk '{print $1}'|sort|uniq"
subprocess.check_output(cmd)
This fails with "No such file or directory". What is the best/easiest way to pass commands such as these to subprocess?
The absolutely best solution here is to refactor the code to replace the entire tail of the pipeline with native Python code.
import subprocess
from collections import Counter

s = subprocess.run(
    ["docker", "logs", "nc1"],
    text=True, capture_output=True, check=True)

count = Counter()
for line in s.stdout.splitlines():
    if "mortality" in line:
        count[line.split()[0]] += 1

for word, freq in count.most_common():
    print(freq, word)
There are minor differences in how Counter objects resolve ties (if two words have the same count, the one which was seen first is returned first, rather than by sort order), but I'm guessing that's unimportant here.
I am also ignoring the standard error from the subprocess; if you genuinely want to include output from error messages too, just include s.stderr in the loop driver as well.
However, my hunch is that you don't realize your code was doing that, which drives home the point nicely: mixing shell script and Python increases the maintainability burden, because now you have to understand both shell script and Python to understand the code.
(And in terms of shell script style, I would definitely get rid of the useless grep by refactoring it into the Awk script, and probably also fold in the sort | uniq which has a trivial and more efficient replacement in Awk. But here, we are replacing all of that with Python code anyway.)
If you really wanted to stick to a pipeline, then you need to add shell=True to use shell features like redirection, pipes, and quoting. Without shell=True, Python looks for a command whose file name is the entire string you were passing in, which of course doesn't exist.
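For completeness, a sketch of what the shell=True variant of the original pipeline would look like (the command string is the one from the question, unchanged):
import subprocess

# The whole string is handed to the shell, so the redirection, pipes and
# the single quotes around the awk program are interpreted by the shell.
cmd = "docker logs nc1 2>&1 |grep mortality| awk '{print $1}'|sort|uniq"
output = subprocess.check_output(cmd, shell=True, text=True)
print(output)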

How do I input strings in Linux terminal that points to file path using subprocess.call command?

I'm using Ubuntu and have been trying to do a simple automation that requires me to pass the [name of website] and the [file path] into a list of command lines. I'm using subprocess and the call function. I first tried something simpler using the "ls" command.
from subprocess import call
text = raw_input("> ")
("ls", "%s") % (text)
This returned "buffsize must be an integer". I tried to find out what it was, and apparently I had to pass the command as a list. So I tried doing that on the main thing I'm trying to code.
from subprocess import call
file_path = raw_input("> ")
site_name = raw_input("> ")
call("thug", -FZM -W "%s" -n "%s") % (site_name, file_path)
This raised an invalid syntax error on the first "%s". Can anyone point me in the right direction?
You cannot use % on a tuple.
("ls", "%s") % text # Broken
You probably mean
("ls", "%s" % text)
But "%s" % text is obviously just going to return text itself, so there is no need to use formatting here at all.
("ls", text)
This still does nothing useful; did you forget the call?
You also cannot have unquoted strings in the argument to call.
call("thug", -FZM -W "%s" -n "%s") % (site_name, file_path) # broken
needs to have -FZM and -W quoted, and again, if you use format strings, the formatting needs to happen adjacent to the format string.
call(["thug", "-FZM", "-W", site_name, "-n", file_path])
Notice also how the first argument to call() is either a proper list, or a long single string (in which case you need shell=True, which you want to avoid if you can).
If you are writing new scripts, you most definitely should be thinking seriously about targeting Python 3 (in which case you want to pivot to subprocess.run() and input() instead of raw_input() too). Python 2 is already past its originally announced end-of-life date; the deadline was pushed back a few years because Python 3 adoption was slow, but that is no longer the case. You want to be on Python 3 -- that's where the future is.
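A minimal Python 3 sketch of what the question's script could look like with subprocess.run and input() (the thug options -FZM, -W and -n are taken from the question as-is):
import subprocess

file_path = input("> ")
site_name = input("> ")

# Passing a list means no shell is involved, so paths and site names
# containing spaces need no extra quoting.
subprocess.run(["thug", "-FZM", "-W", site_name, "-n", file_path], check=True)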
Here is a complete example of how you would call an executable Python file with subprocess.call, using argparse to properly parse the input.
Your python file to be called (sandboxArgParse.py):
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--filePath", help="Just A test", dest='filePath')
parser.add_argument("--siteName", help="Just A test", dest='siteName')
args = parser.parse_args()
print args.siteName
print args.filePath
Your calling python file:
from subprocess import call
call(["python","/users/dev/python/sandboxArgParse.py", "--filePath", "abcd.txt", "--siteName", "www.google.com"])
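To connect this back to the question, you would forward the values read from the user instead of the hard-coded ones; a sketch (the script path is the example one above):
from subprocess import call

file_path = raw_input("> ")   # use input("> ") on Python 3
site_name = raw_input("> ")

call(["python", "/users/dev/python/sandboxArgParse.py",
      "--filePath", file_path, "--siteName", site_name])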

Running grep through Python - doesn't work

I have some code like this:
f = open("words.txt", "w")
subprocess.call(["grep", p, "/usr/share/dict/words"], stdout=f)
f.close()
I want to grep the MacOs dictionary for a certain pattern and write the results to words.txt. For example, if I want to do something like grep '\<a.\>' /usr/share/dict/words, I'd run the above code with p = "'\<a.\>'". However, the subprocess call doesn't seem to work properly and words.txt remains empty. Any thoughts on why that is? Also, is there a way to apply regex to /usr/share/dict/words without calling a grep-subprocess?
edit:
When I run grep '\<a.\>' /usr/share/dict/words in my terminal, I get words like:
aa
ad
ae
ah
ai
ak
al
am
an
ar
as
at
aw
ax
ay
as results in the terminal (or in a file if I redirect them there). This is what I expect words.txt to contain after I run the subprocess call.
As @woockashek already commented, you are not getting any results because there are no hits for '\<a.\>' in your input file. You are probably hoping to find hits for \<a.\>, but then obviously you need to omit the single quotes, which are messing you up.
Of course, Python knows full well how to look for a regex in a file.
import re

rx = re.compile(r'\ba.\b')
with open('/usr/share/dict/words', 'r') as reader, open('words.txt', 'w') as writer:
    for line in reader:
        if rx.search(line):
            print(line, file=writer, end='')
The single quotes here are part of Python's string syntax, just like the single quotes on the command line are part of the shell's syntax. In neither case are they part of the actual regular expression you are searching for.
The subprocess.Popen documentation vaguely alludes to the frequently overlooked fact that the shell's quoting is not necessary or useful when you don't have shell=True (which usually you should avoid anyway, for this and other reasons).
Python unfortunately doesn't support \< and \> as word boundary operators, so we have to use (the functionally equivalent) \b instead.
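A quick illustration of that equivalence (a small sketch with made-up sample words):
import re

words = ["aa", "ad", "aardvark", "bad"]
rx = re.compile(r'\ba.\b')   # corresponds to grep's \<a.\>

print([w for w in words if rx.search(w)])   # prints ['aa', 'ad']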
The standard input and output channels for the process started by call() are bound to the parent's input and output. That means the calling program cannot capture the output of the command. Use check_output() to capture the output for later processing:
import subprocess

f = open("words.txt", "w")
output = subprocess.check_output(['grep', p, '/usr/share/dict/words'])
f.write(output)
print output
f.close()
PS: I hope it works; I can't verify this answer because I don't have macOS to try it on.
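Note that the original approach of redirecting stdout to the file also works once the extra single quotes are dropped from the pattern; a sketch:
import subprocess

p = r"\<a.\>"   # no surrounding single quotes; those belong to the shell, not the pattern
with open("words.txt", "w") as f:
    # grep writes its matching lines straight into words.txt
    subprocess.call(["grep", p, "/usr/share/dict/words"], stdout=f)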

Using Tshark in Python Subprocess is giving syntax error

I am trying to develop a script that reads a pcap file and extracts some fields from it, using tshark as a subprocess. However, I am getting a syntax error on the cmd line. Can anyone help me out with this?
def srcDestDport(filename):
    cmd = r"tshark -o column.format:"Source","%s", "Destination","%d", "dstport"," %uD"' -r %s"%(filename)
    subcmd = cmd.split(' ')
    lines = subprocess.Popen(subcmd, stdout=subprocess.PIPE)
    return lines
As far as Python is concerned, you appear to be missing some commas in your cmd definition:
cmd = r"tshark -o column.format:"Source","%s", "Destination","%d", "dstport"," %uD"' -r %s"%(filename)
# -- no comma here -^ ----^ ----^ --^
because the first string literal ends at the very next double quote, right before Source; the r raw-string prefix does not change how quotes terminate a string.
If you want to produce a list of arguments, just write it as a list directly; that also saves you from interpolating the filename:
cmd = ["tshark", "-o",
       'column.format:"Source","%s","Destination","%d","dstport"," %uD"',
       "-r", filename]
Note the single quotes around the third element, which preserve the double quotes inside the command-line argument.
This eliminates the need to split the string, and it preserves any whitespace in the filename.
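To actually consume the output, you would read from the pipe; a sketch, reusing the cmd list above:
import subprocess

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, universal_newlines=True)
for line in proc.stdout:
    # each line holds the comma-separated source, destination and dstport columns
    print(line.rstrip("\n"))
proc.wait()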

Python subprocess to call Unix commands, a question about how output is stored

I am writing a python script that reads a line/string, calls Unix, uses grep to search a query file for lines that contain the string, and then prints the results.
from subprocess import call

for line in infilelines:
    output = call(["grep", line, "path/to/query/file"])
    print output
    print line
When I look at my results printed to the screen, I will get a list of matching strings from the query file, but I will also get "1" and "0" integers as output, and line is never printed to the screen. I expect to get the lines from the query file that match my string, followed by the string that I used in my search.
call returns the process return code.
If using Python 2.7, use check_output.
from subprocess import check_output
output = check_output(["grep", line, "path/to/query/file"])
If using anything before that, use communicate.
import subprocess
process = subprocess.Popen(["grep", line, "path/to/query/file"], stdout=subprocess.PIPE)
output = process.communicate()[0]
This will open a pipe for stdout that you can read with communicate. If you want stderr as well, add stderr=subprocess.PIPE.
This will return the full output. If you want to parse it into separate lines, use split.
output.split('\n')
I believe Python takes care of line-ending conversions for you, but since you're using grep I'm going to assume you're on Unix where the line-ending is \n anyway.
http://docs.python.org/library/subprocess.html#subprocess.check_output
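One caveat worth knowing (this comes from grep's documented behaviour, not from the question): grep exits with status 1 when nothing matches, which makes check_output raise CalledProcessError, so you may want to catch it; a sketch:
import subprocess

line = "aardvark"  # example search string; in the question this comes from infilelines

try:
    output = subprocess.check_output(["grep", line, "path/to/query/file"])
    for match in output.splitlines():
        print(match)
except subprocess.CalledProcessError:
    # grep exits with status 1 when it finds no matches
    pass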
The following code works in Python 2 (2.5 or later); the commands module was removed in Python 3:
from commands import getoutput
output = getoutput('grep %s path/to/query/file' % line)
output_list = output.splitlines()
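On Python 3, subprocess provides an equivalent helper; a sketch (like the original, it runs through the shell):
import subprocess

line = "aardvark"   # example pattern; in the question this comes from infilelines
output = subprocess.getoutput('grep %s path/to/query/file' % line)
output_list = output.splitlines()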
Why would you execute a call to external grep when Python can do it itself? That is extra overhead, and your code will then depend on grep being installed. This is how you do a simple grep in Python with the "in" operator.
query = open("/path/to/query/file").readlines()
query = [i.rstrip() for i in query]
f = open("file")
for line in f:
    if line.rstrip() in query:
        print line.rstrip()
f.close()
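If the query file is large, a set makes the membership test constant-time; a small sketch of the same idea (file names are the placeholders from the answer above):
# Build the set of query strings once, then stream the other file past it.
with open("/path/to/query/file") as qf:
    query = {q.rstrip("\n") for q in qf}

with open("file") as f:
    for line in f:
        if line.rstrip("\n") in query:
            print(line.rstrip("\n"))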
