Python subprocess.run not able to handle large argument string - python

I need to invoke a powershell script and capture the output as generated from it.
Since I need to capture the output, I chose to use subprocess.run()
Powershell invocation
powershell DeleteResults -resultscsv '1111,2222,3333,4444'
Python (Python 3.5.2 :: Anaconda 4.1.1 (64-bit)) code
command = "powershell DeleteResults -resultscsv '{}'".format(resultscsv)
output = subprocess.run(command, stdout=subprocess.PIPE).stdout.decode('utf-8')
All goes fine as long as the length of command is less than roughly 33,000 characters.
However, subprocess.run() throws an error when the length exceeds that limit.
(There is no issue on the powershell side as it works perfectly fine when invoked directly)
ERROR: [WinError 206] The filename or extension is too long
Traceback (most recent call last):
File "D:\path\to\python\wrapper.py", line 154, in <module>
output = subprocess.run(command, stdout=subprocess.PIPE).stdout.decode('utf-8')
File "D:\Anaconda3\lib\subprocess.py", line 693, in run
with Popen(*popenargs, **kwargs) as process:
File "D:\Anaconda3\lib\subprocess.py", line 947, in __init__
restore_signals, start_new_session)
File "D:\Anaconda3\lib\subprocess.py", line 1224, in _execute_child
startupinfo)
Any pointer will be great help.
Not sure if relevant - the python script is invoked via Control-M on a windows environment.
--Edit--
Adding this section with more details in response to the answer by Alex.
We don't own the ps script DeleteResults. So, we can't modify it. We just consume it.
As it is done today:
The resultscsv (80K chars) is stored in a results.ini file.
A small piece of inline PowerShell code parses the .ini file and then invokes DeleteResults. Note: there is a powershell call inside the outer powershell invocation (invocation below).
This approach works perfectly fine even when the value exceeds 80K chars.
However, we don't want the inline ini parser to be part of the invocation - looks ugly.
So, the idea is to write a Python wrapper which will parse the .ini file and invoke the PowerShell:
powershell -NoLogo -NonInteractive -Command "Get-Content 'results.ini' | foreach-object -begin {$h=@{}} -process { $k = [regex]::split($_,'='); if(($k[0].compareTo(' ') -ne 0) -and ($k[0].startswith('[') -ne $True)) {$h.Add($k[0], $k[1]) }}; powershell DeleteResults -resultscsv $h.Get_Item('resultscsv')"
So, I am wondering why the above PowerShell one-liner is not hitting the character length limit. Is it that the line powershell DeleteResults -resultscsv $h.Get_Item('resultscsv') is NOT actually expanded inline, thereby not hitting the limit?

There is a command-line string limitation; its exact value depends on the OS version.
It is not possible to pass large data through command-line arguments. Pass a filename instead.
Documentation and workaround: https://support.microsoft.com/en-us/help/830473/command-prompt-cmd-exe-command-line-string-limitation
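A minimal sketch of that filename workaround, assuming the long value can be handed to PowerShell through a file. DeleteResults and -resultscsv come from the question; the Get-Content wrapper and the temp-file name are assumptions, and the actual invocation is guarded so the sketch only attempts it on Windows:

```python
import os
import subprocess
import tempfile

# ~100K characters -- far beyond the ~32K process-creation limit on Windows.
resultscsv = ",".join(str(n) for n in range(20000))

# Write the long value to a temp file instead of putting it on the command line.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(resultscsv)
    path = f.name

# The command line stays short; PowerShell expands the value internally,
# so the Windows command-line length limit is never hit.
command = ("powershell -NoLogo -NonInteractive -Command "
           "\"DeleteResults -resultscsv (Get-Content -Raw '{}')\"".format(path))

if os.name == "nt":  # only meaningful on Windows
    output = subprocess.run(command, stdout=subprocess.PIPE).stdout.decode('utf-8')

print(len(command))  # a few hundred characters regardless of the CSV size
os.remove(path)
```

This mirrors what the working .ini one-liner does: the expansion of the 80K value happens inside PowerShell, never on the Windows process-creation command line.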

Related

Unable to capture result of ls -la with subprocess.Popen

I am trying to capture the output when I execute a custom command using Popen:
import subprocess

def exec_command():
    command = "ls -la"  # will be replaced by my custom command
    result = subprocess.Popen(command, stdout=subprocess.PIPE).communicate()[0]
    print(result)

exec_command()
I get an OSError with following stacktrace:
File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/usr/lib64/python2.7/subprocess.py", line 1327, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Please let me know what I would need to use.
Note: The stacktrace shows the code was executed in Python 2.7, but I got the same error running with Python 2.6
When running without shell=True (which you are doing, correctly; shell=True is dangerous), you should pass your command as a sequence of the command and the arguments, not a single string. Fixed code is:
def exec_command():
    command = ["ls", "-la"]  # list of command and arguments
    ... rest of code unchanged ...
If you had user input involved for some of the arguments, you'd just insert it into the list:
def exec_command(somedirfromuser):
    command = ["ls", "-la", somedirfromuser]
Note: If your commands are sufficiently simple, I'd recommend avoiding subprocess entirely. os.listdir and os.stat (or on Python 3.5+, os.scandir alone) can get you the same info as ls -la in a more programmatically usable form without the need to parse it, and likely faster than launching an external process and communicating with it via a pipe.
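For illustration, a minimal sketch of gathering the kind of metadata ls -la prints using os.scandir and the stat module; the particular fields collected here are an arbitrary choice, not a full ls replacement:

```python
import os
import stat

def list_dir(path="."):
    # Collect name, permission string, size and mtime for each entry --
    # roughly the information ls -la reports, as plain Python data.
    entries = []
    for entry in os.scandir(path):
        info = entry.stat(follow_symlinks=False)
        entries.append({
            "name": entry.name,
            "mode": stat.filemode(info.st_mode),  # e.g. '-rw-r--r--'
            "size": info.st_size,
            "mtime": info.st_mtime,
        })
    return entries

for e in list_dir("."):
    print(e["mode"], e["size"], e["name"])
```

Unlike parsing ls output, each field arrives already typed (sizes as ints, times as timestamps), so no text munging is needed.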

"No child processes" error thrown by OCaml Sys.command function

I am trying to use Frama-C via a Python application. This Python application sets some env variables and the system path. From this application, I am calling Frama-C as a Python subprocess as follows:
cmd = ['/usr/local/bin/frama-c', '-wp', '-wp-print', '-wp-out', '/home/user/temp','/home/user/project/test.c']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=False)
When this code is executed from Python application I am getting following error:
[kernel] Parsing FRAMAC_SHARE/libc/__fc_builtin_for_normalization.i (no preprocessing)
[kernel] warning: your preprocessor is not known to handle option ` -nostdinc'. If pre-processing fails because of it, please add -no-cpp-gnu-like option to Frama-C's command-line. If you do not want to see this warning again, use explicitely -cpp-gnu-like option.
[kernel] warning: your preprocessor is not known to handle option `-dD'. If pre-processing fails because of it, please add -no-cpp-gnu-like option to Frama-C's command-line. If you do not want to see this warning again, use explicitely -cpp-gnu-like option.
[kernel] Parsing 2675891095.c (with preprocessing)
[kernel] System error: /usr/bin/gcc -c -C -E -I. -dD -nostdinc -D__FC_MACHDEP_X86_32 -I/usr/local/share/frama-c/libc -o '/tmp/2675891095.cc8bf16.i' '/home/user/project/test.c': No child processes
I am finding it hard to debug what is causing the error:
System error: /usr/bin/gcc -c -C -E -I. -dD -nostdinc -D__FC_MACHDEP_X86_32 -I/usr/local/share/frama-c/libc -o '/tmp/2675891095.cc8bf16.i' '/home/user/project/test.c': No child
processes
Is there a way to generate more error log from Frama-c that might help me figure out the issue?
Note that this error only occurs when I start the process (to execute Frama-C) from my application, and not if I start it from a Python console. And it happens only on a Linux machine, not on a Windows machine.
Any help is appreciated. Thanks!!
Update :
I realized that by using -kernel-debug flag I can obtain stack trace. So I tried the option and get the following information:
Fatal error: exception Sys_error("gcc -E -C -I. -dD -D__FRAMAC__
-nostdinc -D__FC_MACHDEP_X86_32 -I/usr/local/share/frama-c/libc -o '/tmp/2884428408.c2da79b.i'
'/home/usr/project/test.c': No
child processes")
Raised by primitive operation at file
"src/kernel_services/ast_queries/file.ml", line 472, characters 9-32
Called from file "src/kernel_services/ast_queries/file.ml", line 517,
characters 14-26
Called from file "src/kernel_services/ast_queries/file.ml", line 703,
characters 46-59
Called from file "list.ml", line 84, characters 24-34
Called from file "src/kernel_services/ast_queries/file.ml", line 703,
characters 17-76
Called from file "src/kernel_services/ast_queries/file.ml", line 1587,
characters 24-47
Called from file "src/kernel_services/ast_queries/file.ml", line 1667,
characters 4-27
Called from file "src/kernel_services/ast_data/ast.ml", line 108,
characters 2-28
Called from file "src/kernel_services/ast_data/ast.ml", line 116,
characters 53-71
Called from file "src/kernel_internals/runtime/boot.ml", line 29,
characters 6-20
Called from file "src/kernel_services/cmdline_parameters/cmdline.ml",
line 787, characters 2-9
Called from file "src/kernel_services/cmdline_parameters/cmdline.ml",
line 817, characters 18-64
Called from file "src/kernel_services/cmdline_parameters/cmdline.ml",
line 228, characters 4-8
Re-raised at file "src/kernel_services/cmdline_parameters/cmdline.ml",
line 244, characters 12-15
Called from file "src/kernel_internals/runtime/boot.ml", line 72,
characters 2-127
And I looked at the file "src/kernel_services/ast_queries/file.ml", line 472 and the code executed is Sys.command cpp_command.
I am not sure why the "No child processes" error is thrown when trying to execute gcc.
Update: I have Ocaml version: 4.02.3, Python version: 2.7.8 and Frama-C version: Silicon-20161101
I know nothing about Frama-C. However, the error message is coming from somebody's (OCaml's? Python's?) runtime, indicating that a system call failed with the ECHILD error. The two system calls that system() makes are fork() and waitpid(). It's the latter system call that can return ECHILD. What it means is that there's no child process to wait for. One good possibility is that the fork() failed. fork() fails when the system is full of processes (unlikely) or when a per-user process limit has been reached. You could check whether you're running up against a limit of this kind.
Another possibility that occurs to me is that some other part of the code is already handling child processes using signal handling (SIGCHLD). So the reason there's no child process to wait for is that it has already been handled elsewhere. This gets complicated pretty fast, so I would hope this isn't the problem.
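The SIGCHLD hypothesis is easy to reproduce in a few lines. POSIX specifies that when SIGCHLD is set to SIG_IGN, terminated children are reaped automatically, so a subsequent waitpid() fails with ECHILD. This sketch is Unix-only and stands on its own; it does not involve Frama-C:

```python
import errno
import os
import signal

# Ignoring SIGCHLD makes the kernel auto-reap terminated children, so a
# later waitpid() finds no child to wait for and fails with ECHILD --
# the same "No child processes" error reported above.
signal.signal(signal.SIGCHLD, signal.SIG_IGN)

pid = os.fork()
if pid == 0:
    os._exit(0)  # child: exit immediately

got_echild = False
try:
    os.waitpid(pid, 0)
except OSError as e:
    got_echild = (e.errno == errno.ECHILD)

print("waitpid failed with ECHILD:", got_echild)

# restore default handling so later child waits behave normally
signal.signal(signal.SIGCHLD, signal.SIG_DFL)
```

If the surrounding application (or an embedding framework) installed such a handler before spawning Frama-C, the OCaml runtime's system() call would see exactly this failure.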

File not found error when launching a subprocess containing piped commands

I need to run the command date | grep -o -w '"+tz+"'' | wc -w using Python on my localhost. I am using the subprocess module and its check_output method, since I need to capture the output.
However it is throwing me an error :
Traceback (most recent call last):
File "test.py", line 47, in <module>
check_timezone()
File "test.py", line 40, in check_timezone
count = subprocess.check_output(command)
File "/usr/lib/python2.7/subprocess.py", line 537, in check_output
process = Popen(stdout=PIPE, *popenargs, **kwargs)
File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
raise child_exception-
OSError: [Errno 2] No such file or directory
You have to add shell=True to execute a shell command. check_output is trying to find an executable called date | grep -o -w '"+tz+"'' | wc -w and it cannot find it. (No idea why you removed that essential information from the error message.)
See the difference between:
>>> subprocess.check_output('date | grep 1')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.4/subprocess.py", line 603, in check_output
with Popen(*popenargs, stdout=PIPE, **kwargs) as process:
File "/usr/lib/python3.4/subprocess.py", line 848, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.4/subprocess.py", line 1446, in _execute_child
raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'date | grep 1'
And:
>>> subprocess.check_output('date | grep 1', shell=True)
b'gio 19 giu 2014, 14.15.35, CEST\n'
Read the documentation about the Frequently Used Arguments for more information about the shell argument and how it changes the interpretation of the other arguments.
Note that you should try to avoid using shell=True since spawning a shell can be a security hazard (even if you do not execute untrusted input attacks like Shellshock can still be performed!).
The documentation for the subprocess module has a little section about replacing the shell pipeline.
You can do so by spawning the two processes in python and use subprocess.PIPE:
date_proc = subprocess.Popen(['date'], stdout=subprocess.PIPE)
grep_proc = subprocess.Popen(['grep', '1'], stdin=date_proc.stdout, stdout=subprocess.PIPE)
date_proc.stdout.close()
output = grep_proc.communicate()[0]
You can write some simple wrapper function to easily define pipelines:
import subprocess
from shlex import split
from collections import namedtuple
from functools import reduce

proc_output = namedtuple('proc_output', 'stdout stderr')

def pipeline(starter_command, *commands):
    if not commands:
        try:
            starter_command, *commands = starter_command.split('|')
        except AttributeError:
            pass
    starter_command = _parse(starter_command)
    starter = subprocess.Popen(starter_command, stdout=subprocess.PIPE)
    last_proc = reduce(_create_pipe, map(_parse, commands), starter)
    return proc_output(*last_proc.communicate())

def _create_pipe(previous, command):
    proc = subprocess.Popen(command, stdin=previous.stdout, stdout=subprocess.PIPE)
    previous.stdout.close()
    return proc

def _parse(cmd):
    try:
        return split(cmd)
    except Exception:
        return cmd
With this in place you can write pipeline('date | grep 1') or pipeline('date', 'grep 1') or pipeline(['date'], ['grep', '1'])
The most common cause of FileNotFound with subprocess, in my experience, is the use of spaces in your command. If you have just a single command (not a pipeline, and no redirection, wildcards, etc), use a list instead.
# Wrong, even with a valid command string
subprocess.run(['grep -o -w "+tz+"'])
# Fixed: each argument is its own list element
subprocess.run(["grep", "-o", "-w", '"+tz+"'])
This change results in no more FileNotFound errors, and is a nice solution if you got here searching for that exception with a simpler command.
If you need a pipeline or other shell features, the simple fix is to add shell=True:
subprocess.run(
    '''date | grep -o -w '"+tz+"'' | wc -w''',
    shell=True)
However, if you are using python 3.5 or greater, try using this approach:
import subprocess

a = subprocess.run(["date"], stdout=subprocess.PIPE)
print(a.stdout.decode('utf-8'))

b = subprocess.run(["grep", "-o", "-w", '"+tz+"'],
                   input=a.stdout, stdout=subprocess.PIPE)
print(b.stdout.decode('utf-8'))

c = subprocess.run(["wc", "-w"],
                   input=b.stdout, stdout=subprocess.PIPE)
print(c.stdout.decode('utf-8'))
You should see how one command's output becomes another's input, just like using a shell pipe, but you can easily debug each step of the process in Python. Using subprocess.run is recommended for Python 3.5 and later, but it is not available in prior versions.
The FileNotFoundError happens because - in the absence of shell=True - Python tries to find an executable whose file name is the entire string you are passing in. You need to add shell=True to get the shell to parse and execute the string, or figure out how to rearticulate this command line to avoid requiring a shell.
As an aside, the shell programming here is decidedly weird. On any normal system, date will absolutely never output "+tz+" and so the rest of the processing is moot.
Further, using wc -w to count the number of output words from grep is unusual. The much more common use case (if you can't simply use grep -c to count the number of matching lines) would be to use wc -l to count lines of output from grep.
Anyway, if you can, you want to avoid shell=True; if the intent here is to test the date command, you should probably replace the rest of the shell script with native Python code.
Pros:
The person trying to understand the program only needs to understand Python, not shell script.
The script will have fewer external dependencies (here, date) rather than require a Unix-like platform.
Cons:
Reimplementing standard Unix tools in Python is tiresome and sometimes rather verbose.
With that out of the way, if the intent is simply to count how wany times "+tz+" occurs in the output from date, try
p = subprocess.run(['date'],
                   capture_output=True, text=True,
                   check=True)
result = len(p.stdout.split('"+tz+"')) - 1
The keyword arguments capture_output=True and text=True require Python 3.7; for compatibility back to earlier Python versions, use stdout=subprocess.PIPE and the (misnomer) legacy synonym universal_newlines=True instead. For really old Python versions, maybe fall back to subprocess.check_output().
If you really need the semantics of the -w option of grep, you need to check that the characters adjacent to the match are not word characters, and exclude the matches where they are. I'm leaving that as an exercise, and in fact would assume that the original shell script implementation here was not actually correct. (Maybe try re.split(r'(?:^|(?<=\W))"\+tz\+"(?=\W|$)', p.stdout); note that Python's re module rejects variable-width look-behinds such as (?<=^|\W), so the start-of-string case has to be spelled out as a separate alternative.)
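A runnable sketch of that word-boundary exercise; the pattern below is one possible reading of what grep -o -w '"+tz+"' | wc -w was meant to count, not a verified reimplementation of the original script:

```python
import re

# Count occurrences of "+tz+" (including the surrounding double quotes)
# that are not adjacent to word characters. Python's re module rejects
# variable-width look-behinds like (?<=^|\W), so start-of-string is
# handled as a separate alternative.
PATTERN = re.compile(r'(?:^|(?<=\W))"\+tz\+"(?=\W|$)')

def count_tz(text):
    return len(PATTERN.findall(text))

print(count_tz('no match here'))                  # 0
print(count_tz('"+tz+" and "+tz+" again'))        # 2
print(count_tz('x"+tz+"y is not a whole word'))   # 0
```

The third case shows the -w behaviour: a match glued to a word character on either side is rejected, just as grep -w would reject it.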
In more trivial cases (single command, no pipes, wildcards, redirection, shell builtins, etc) you can use Python's shlex.split() to parse a command into a correctly quoted list of arguments. For example,
>>> import shlex
>>> shlex.split(r'''one "two three" four\ five 'six seven' eight"'"nine'"'ten''')
['one', 'two three', 'four five', 'six seven', 'eight\'nine"ten']
Notice how the regular string split() is completely unsuitable here; it simply splits on every whitespace character, and doesn't support any sort of quoting or escaping. (But notice also how it boneheadedly just returns a list of tokens from the original input:
>>> shlex.split('''date | grep -o -w '"+tz+"' | wc -w''')
['date', '|', 'grep', '-o', '-w', '"+tz+"', '|', 'wc', '-w']
(Even more parenthetically, this isn't exactly the original input, which had a superfluous extra single quote after '"+tz+"').
This is in fact passing | and grep etc as arguments to date, not implementing a shell pipeline! You still have to understand what you are doing.)
The question already has answers above, but just in case those solutions are not working for you: check the path itself, and check that all the environment variables needed for the process to locate the executable are set.
What worked for me on Python 3.8.10 (inspired by @mightypile's solution here: https://stackoverflow.com/a/49986004/12361522) was to remove the splitting of parameters, and I had to enable the shell, too:
this:
c = subprocess.run(["wc -w"], input=b.stdout, stdout=subprocess.PIPE, shell=True)
instead of:
c = subprocess.run(["wc", "-w"], input=b.stdout, stdout=subprocess.PIPE)
If anyone wants to try my solution (at least for v3.8.10), here is mine:
I have a directory with multiple files of at least 2 file types (.jpg and others). I needed to count a specific file type (.jpg), not all files in the directory, via one pipe:
ls *.jpg | wc -l
So eventually I got it working like this:
import subprocess
proc1 = subprocess.run(["ls *.jpg"], stdout=subprocess.PIPE, shell=True)
proc2 = subprocess.run(['wc -l'], input=proc1.stdout, stdout=subprocess.PIPE, shell=True)
print(proc2.stdout.decode())
It would not work with the arguments split up:
["ls", "*.jpg"] would make ls ignore the constraint *.jpg
['wc', '-l'] would return a correct count, but with all 3 outputs, not just the one I was after
and none of it would work without the shell enabled via shell=True
I had this error too, and what worked for me was setting the line endings of the .sh file that I was calling with subprocess to Unix (LF) instead of Windows (CRLF).

Using python subprocess to fake running a cmd from a terminal

We have a vendor-supplied Python tool (it's byte-compiled, and we don't have the source). Because of this, we're also locked into using the vendor-supplied Python 2.4. The way to run the util is:
source login.sh
oupload [options]
The login.sh just sets a few env variables, and then 2 aliases:
odownload () {
    ${PYTHON_CMD} ${OCLIPATH}/ocli/commands/word_download_command.pyc "$@"
}

oupload () {
    ${PYTHON_CMD} ${OCLIPATH}/ocli/commands/word_upload_command.pyc "$@"
}
Now, when I run it their way - works fine. It will prompt for a username and password, then do it's thing.
I'm trying to create a wrapper around the tool to do some extra steps after it's run and provide some sane defaults for the utility. The problem I'm running into is I cannot, for the life of me, figure out how to use subprocess to successfully do this. It seems to realize that the original command isn't running directly from the terminal and bails.
I created a '/usr/local/bin/oupload' and copied from the original login.sh. Only difference is instead of doing an alias at the end, I actually run the command.
Then, in my python script, I try to run my new shell script:
if os.path.exists(options.zipfile):
    try:
        cmd = string.join(cmdargs, ' ')
        p1 = Popen(cmd, shell=True, stdin=PIPE)
But I get:
Enter Opsware Username: Traceback (most recent call last):
File "./command.py", line 31, in main
File "./controller.py", line 51, in handle
File "./controllers/word_upload_controller.py", line 81, in _handle
File "./controller.py", line 66, in _determineNew
File "./lib/util.py", line 83, in determineNew
File "./lib/util.py", line 112, in getAuth
Empty Username not legal
Unknown Error Encountered
SUMMARY:
Name: Empty Username not legal
Description: None
So it seemed like an extra carriage return was getting sent ( I tried rstripping all the options, didn't help ).
If I don't set stdin=PIPE, I get:
Enter Opsware Username: Traceback (most recent call last):
File "./command.py", line 31, in main
File "./controller.py", line 51, in handle
File "./controllers/word_upload_controller.py", line 81, in _handle
File "./controller.py", line 66, in _determineNew
File "./lib/util.py", line 83, in determineNew
File "./lib/util.py", line 109, in getAuth
IOError: [Errno 5] Input/output error
Unknown Error Encountered
I've tried other variations of using p1.communicate, p1.stdin.write() along with shell=False and shell=True, but I've had no luck in trying to figure out how to properly send along the username and password. As a last result, I tried looking at the byte code for the utility they provided - it didn't help - once I called the util's main routine with the proper arguments, it ended up core dumping w/ thread errors.
Final thoughts - the utility doesn't want to seem to 'wait' for any input. When run from the shell, it pauses at the 'Username' prompt. When run through python's popen, it just blazes thru and ends, assuming no password was given. I tried to lookup ways of maybe preloading the stdin buffer - thinking maybe the process would read from that if it was available, but couldn't figure out if that was possible.
I'm trying to stay away from using pexpect, mainly because we have to use the vendor's provided python 2.4 because of the precompiled libraries they provide and I'm trying to keep distribution of the script to as minimal a footprint as possible - if I have to, I have to, but I'd rather not use it ( and I honestly have no idea if it works in this situation either ).
Any thoughts on what else I could try would be most appreciated.
UPDATE
So I solved this by diving further into the bytecode and figuring out what I was missing from the compiled command.
However, this presented two problems -
The vendor code, when called, was doing an exit when it completed
The vendor code was writing to stdout, which I needed to store and operate on ( it contains the ID of the uploaded pkg ). I couldn't just redirect stdout, because the vendor code was still asking for the username/password.
1 was solved easy enough by wrapping their code in a try/except clause.
2 was solved by doing something similar to: https://stackoverflow.com/a/616672/677373
Instead of a log file, I used cStringIO. I also had to implement a fake 'flush' method, since it seems the vendor code was calling that and complaining that the new obj I had provided for stdout didn't supply it - code ends up looking like:
class Logger(object):
    def __init__(self):
        self.terminal = sys.stdout
        self.log = StringIO()

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)

    def flush(self):
        self.terminal.flush()
        self.log.flush()

if os.path.exists(options.zipfile):
    try:
        os.environ['OCLI_CODESET'] = 'ISO-8859-1'
        backup = sys.stdout
        sys.stdout = output = Logger()
        # UploadCommand was the command found in the bytecode
        upload = UploadCommand()
        try:
            upload.main(cmdargs)
        except Exception, rc:
            pass
        sys.stdout = backup
        # now do some fancy stuff with output from output.log
I should note that the only reason I simply do a 'pass' in the except: clause is that the except clause is always called. The 'rc' is actually the return code from the command, so I will probably add handling for non-zero cases.
I tried to lookup ways of maybe preloading the stdin buffer
Do you perhaps want to create a named fifo, fill it with username/password info, then reopen it in read mode and pass it to popen (as in popen(..., stdin=myfilledbuffer))?
You could also just create an ordinary temporary file, write the data to it, and reopen it in read mode, again, passing the reopened handle as stdin. (This is something I'd personally avoid doing, since writing username/passwords to temporary files is often of the bad. OTOH it's easier to test than FIFOs)
As for the underlying cause: I suspect that the offending software is reading from stdin via a non-blocking method. Not sure why that works when connected to a terminal.
AAAANYWAY: no need to use pipes directly via Popen at all, right? I kinda laugh at the hackishness of this, but I'll bet it'll work for you:
# you don't actually seem to need popen here IMO -- call() does better for this application.
statuscode = call('echo "%s\n%s\n" | oupload %s' % (username, password, options), shell=True)
tested with status = call('echo "foo\nbar\nbar\nbaz" |wc -l', shell = True) (output is '4', naturally.)
The original question was solved by just avoiding the issue and not using the terminal and instead importing the python code that was being called by the shell script and just using that.
I believe J.F. Sebastian's answer would probably work better for what was originally asked, however, so I'd suggest people looking for an answer to a similar question look down the path of using the pty module.
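A minimal sketch of that pty direction, assuming a Unix platform. head -n 1 stands in for the vendor tool here, since the real oupload is not available; the point is only that a program reading from a pseudo-terminal behaves as if a user were typing:

```python
import os
import pty

def run_on_pty(argv, input_bytes):
    # Fork with a pseudo-terminal attached, so the child believes it is
    # reading from an interactive terminal rather than a pipe.
    pid, fd = pty.fork()
    if pid == 0:
        os.execvp(argv[0], argv)   # child: run the program on the pty
    os.write(fd, input_bytes)      # parent: feed the "typed" input
    chunks = []
    while True:
        try:
            data = os.read(fd, 1024)
        except OSError:            # raised on Linux once the child closes the pty
            break
        if not data:
            break
        chunks.append(data)
    os.close(fd)
    os.waitpid(pid, 0)
    return b"".join(chunks)

out = run_on_pty(["head", "-n", "1"], b"secret\n")
print(out)  # includes the echoed input, since the pty has echo enabled
```

This is essentially what pexpect automates, without the extra dependency; whether the vendor's non-blocking reads cooperate with it would still need testing.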

Can't seem to get fortran executable to run correctly through python

I have read a bunch of different topics on SO and other sites and cannot get a direct answer to my question/problem. Currently I have this python script that runs completely fine, with the exception of no calls made to run a fortran program are working correctly. I have tried using subprocess commands, os.system commands, opening bash script files that are opened through python, and no luck. Here are some examples and errors I'm getting.
One attempt:
subprocess.Popen(["sh", "{0}{1}".format(SCRIPTS,"qlmtconvertf.sh"), "qlmt"], shell=False, stdout=subprocess.PIPE)
This gives an error that the program has trouble reading the file correctly.
forrtl: severe (24): end-of-file during read, unit 1, file /home/akoufos/lapw/Ar/lda/bcc55_mt1.5_lo_e8_o4/DOS/lat70/qlmt
Another attempt:
subprocess.Popen(["./{0}{1}".format(SOURCE,"qlmtconvertf"), "qlmt"], shell=False, stdout=subprocess.PIPE)
This gives an error of not finding the file.
File "/home/akoufos/lapw/Scripts_Plots/LAPWanalysis.py", line 59, in DOS
subprocess.Popen(["./{0}{1}".format(SOURCE,"qlmtconvertf"), "qlmt"], shell=False, stdout=subprocess.PIPE)
File "/usr/lib64/python2.7/subprocess.py", line 672, in __init__
errread, errwrite)
File "/usr/lib64/python2.7/subprocess.py", line 1202, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Yet another attempt:
os.system("{0}{1}".format(SOURCE,"qlmtconvertf qlmt"))
This gives an error equivalent to the first example. In all examples SOURCE="/home/myusername/lapw/Source/", where the fortran source files are, SCRIPTS="/home/myusername/lapw/Scripts_Plots/", where I have other files and the python scripts in, qlmtconvertf is a compiled fortran program, and qlmt is a file the qlmtconvertf reads. This source code works completely fine if I call it in the shell, like I have done countless times, but I'm trying to automate calling these codes. I have written a bash script as well, that does what I need, but I'm trying to do everything through python instead. Any ideas, suggestions, or answers to what I am doing incorrectly and what is going on would be greatly appreciated. Thank you all in advance.
EDIT: I got it working with the suggestion given below by Francis. I had to keep the complete paths (i.e. /home/username/etc) and the os.path.join to call the program correctly.
import os.path
LAPW = "/home/myusername/lapw/"
SOURCE = os.path.join(LAPW,'Source')
SCRIPTS = os.path.join(LAPW,'Scripts_Plots')
QLMTCONVERT = os.path.join(SOURCE,'qlmtconvertf')
qargs = [QLMTCONVERT,'qlmt']
#CALLING PROGRAM
subprocess.Popen(qargs, stdout=subprocess.PIPE).communicate(input=None)
To get it to work correctly I had to also close the 'qlmt' file I had created during the python script. Also I am working in the directory that contains the 'qlmt' file.
(edit: Also added .communicate(input=None) to the end of the subprocess call. This was unnecessary for this process call, but it was important for a later one in the script that tried to use a file the process was creating. From my understanding, .communicate talks to the process and basically waits for it to finish before the next Python line is executed. Similar to .wait(), but more advanced. If someone who understands this better wants to elaborate, please feel free.)
I'm not exactly sure why this method worked, but using strings as inputs for the subprocess was giving errors. If any one has any insight on this I would be very thankful if you could pass on your knowledge. Thank you everyone for the help.
I think you forgot a slash in your filenames:
"{0}{1}".format(SOURCE,"qlmtconvertf qlmt") == '/home/myusername/lapw/Sourceqlmtconvertf qlmt'
I assume you mean this?
"{0}/{1}".format(SOURCE,"qlmtconvertf qlmt") == '/home/myusername/lapw/Source/qlmtconvertf qlmt'
I recommend using os.path.join rather than direct string construction for pathname creation:
import os.path
import subprocess

executable = os.path.join(SOURCE, 'qlmtconvertf')
args = ['qlmt']
subprocess.Popen([executable] + args, stdout=subprocess.PIPE)

Categories

Resources