I want to know which Python command behaves exactly like the dos() command in MATLAB. For instance, I have the following code block in MATLAB, and I want to do exactly the same in Python.
**DIANA = '"C:\Program Files\Diana 10.3\\bin\\DianaIE.exe"';
MODEL = 'Input.dcp';
INPUTDIR = 'C:\Users\pc\Desktop\Thesis\PSO\';
OUTPUTDIR = 'C:\Users\pc\Desktop\Thesis\PSO\';
OUTPUT = 'FILE_OUTPUT.out';
WORKDIRINPUT = sprintf('%s%s',INPUTDIR,MODEL);
WORKDIROUTPUT = sprintf('%s%s',OUTPUTDIR,OUTPUT);
%
NAMES = sprintf('%s %s %s',DIANA,WORKDIRINPUT,WORKDIROUTPUT);
disp('Start DIANA');
dos(NAMES);
disp('End DIANA');**
To execute a block of code from inside a Python program, you can use the built-in function exec() and pass it the code to be executed as a string. Example:
code = 'x=100\nprint(x)'
exec(code)
Output:
100
And if you want to run Command Prompt or PowerShell commands from Python, you should use the os library:
import os
os.system('cd document')  # note: this runs in a child shell, so the cd does not affect the Python process
You can use the os.path module for manipulating paths and a lot more; to learn more, go through the OS documentation.
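For example, os.path.join can build the input path from the directory and model name, much like the sprintf calls in the question; a minimal sketch on Windows:

import os.path

INPUTDIR = r'C:\Users\pc\Desktop\Thesis\PSO'
MODEL = 'Input.dcp'
workdirinput = os.path.join(INPUTDIR, MODEL)
print(workdirinput)  # C:\Users\pc\Desktop\Thesis\PSO\Input.dcp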
To launch the DIANA executable itself, use subprocess; raw strings keep the backslashes from being treated as escape sequences:

import subprocess
subprocess.call([r'C:\Program Files\Diana 10.3\bin\DianaIE.exe',
                 r'C:\Users\pc\Desktop\Thesis\PSO\Input.py'])
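Putting the pieces together, here is a minimal sketch of the whole MATLAB block with subprocess.run; like the NAMES string in the question, it assumes DianaIE.exe accepts the input and output paths as its two arguments:

import subprocess

diana = r'C:\Program Files\Diana 10.3\bin\DianaIE.exe'
workdirinput = r'C:\Users\pc\Desktop\Thesis\PSO\Input.dcp'
workdiroutput = r'C:\Users\pc\Desktop\Thesis\PSO\FILE_OUTPUT.out'

print('Start DIANA')
# subprocess.run waits for the process to finish, like MATLAB's dos()
subprocess.run([diana, workdirinput, workdiroutput])
print('End DIANA')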
I currently have the following piece of code in bash, and now I want to do the same in Python. However, the Python script being called is very long, and changing it into a function would be a very tedious task. How can I do this in Python without modifying the script being called?
gfs15_to_am10.py $LAT $LON $ALT $GFS_CYCLE $FORECAST_HOUR \
> layers.amc 2>layers.err
You can use the os library:
import os
os.system("bash commands")
Two options for passing parameters:
option A:
import os
LAT = ''
os.system(f"echo {LAT}")
option B:
use the argparse library to read the parameters as script arguments, as sketched below.
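A minimal argparse sketch for the called script; the argument names simply mirror the bash variables and are only illustrative:

import argparse

parser = argparse.ArgumentParser()
for name in ('lat', 'lon', 'alt', 'gfs_cycle', 'forecast_hour'):
    parser.add_argument(name)  # positional arguments, in the same order as the bash call
args = parser.parse_args()
print(args.lat, args.lon, args.alt, args.gfs_cycle, args.forecast_hour)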
I have seen plenty of examples of running a Python script from inside a bash script, either passing in variables as arguments or using export to give the child shell access. I am trying to do the opposite here, though.
I am running a Python script and have a separate file, let's call it myGlobalVariables.bash
myGlobalVariables.bash:
foo_1="var1"
foo_2="var2"
foo_3="var3"
My python script needs to use these variables.
For a very simple example:
myPythonScript.py:
print "foo_1: {}".format(foo_1)
Is there a way I can import them directly? Also, I do not want to alter the bash script if possible since it is a common file referenced many times elsewhere.
If your .bash file is formatted as you indicated, you might be able to just import it directly as a Python module via the imp module.
import imp
bash_module = imp.load_source("bash_module", "/path/to/myGlobalVariables.bash")
print bash_module.foo_1
You can also use os.environ:
Bash:
#!/bin/bash
# the export is what makes the variable visible to processes started from this shell
export testtest=one
Python:
#!/usr/bin/python
import os
os.environ['testtest'] # 'one'
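Note that Python only sees the variable if it was exported before the interpreter started; os.environ.get avoids a KeyError when it is missing (a small sketch):

import os

# 'one' if the variable was exported before Python started, 'not set' otherwise
print(os.environ.get('testtest', 'not set'))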
I am very new to Python, so I would welcome suggestions for more idiomatic ways to do this, but the following code uses bash itself to tell us which values get set. It first calls bash with an empty environment (env -i bash) to see which variables are set as a baseline, then calls it again, telling bash to source your "variables" file, and reports which variables are now set. After removing some false positives and an apparently blank line, I loop through the "additional" output, looking for variables that were not in the baseline. Newly seen variables get split (carefully) and put into the bash dictionary. I've left in (but commented out) my previous idea of using exec to set the variables natively in Python, but I ran into quoting/escaping issues, so I switched gears to using a dict.
If the exact call (path, etc.) to your "variables" file is different from mine, then you'll need to change all of the instances of that value, both in the subprocess.check_output() call and in the list.remove() calls.
Here's the sample variable file I was using, just to demonstrate some of the things that could happen:
foo_1="var1"
foo_2="var2"
foo_3="var3"
if [[ -z $foo_3 ]]; then
foo_4="test"
else
foo_4="testing"
fi
foo_5="O'Neil"
foo_6='I love" quotes'
foo_7="embedded
newline"
... and here's the python script:
#!/usr/bin/env python
import subprocess
output = subprocess.check_output(['env', '-i', 'bash', '-c', 'set'])
baseline = output.split("\n")
output = subprocess.check_output(['env', '-i', 'bash', '-c', '. myGlobalVariables.bash; set'])
additional = output.split("\n")
# these get set when ". myGlobal..." runs and so are false positives
additional.remove("BASH_EXECUTION_STRING='. myGlobalVariables.bash; set'")
additional.remove('PIPESTATUS=([0]="0")')
additional.remove('_=myGlobalVariables.bash')
# I get an empty item at the end (blank line from subprocess?)
additional.remove('')
bash = {}
for assign in additional:
    if assign not in baseline:
        name, value = assign.split("=", 1)
        bash[name] = value
        #exec(name + '="' + value + '"')

print "New values:"
for key in bash:
    print "Key: ", key, " = ", bash[key]
Another way to do it:
Inspired by Marat's answer, I came up with this two-stage hack. Start with a Python program, let's call it "stage 1", which uses subprocess to call bash to source the variable file, as my answer above does; it then tells bash to export all of the variables and then exec the rest of your Python program, which is in "stage 2".
Stage 1 python program:
#!/usr/bin/env python
import subprocess
status = subprocess.call(
    ['bash', '-c',
     '. myGlobalVariables.bash; export $(compgen -v); exec ./stage2.py'
    ])
Stage 2 python program:
#!/usr/bin/env python
# anything you want! for example,
import os
for key in os.environ:
    print key, " = ", os.environ[key]
As stated in @theorifice's answer, the trick here may be that such a file can be interpreted as both bash and Python code. But that answer is outdated: the imp module is deprecated in favour of importlib.
As your file has extension other than ".py", you can use the following approach:
from importlib.util import spec_from_loader, module_from_spec
from importlib.machinery import SourceFileLoader
spec = spec_from_loader("foobar", SourceFileLoader("foobar", "myGlobalVariables.bash"))
foobar = module_from_spec(spec)
spec.loader.exec_module(foobar)
I do not completely understand how this code works (in particular, what the "foobar" parameters are for), but it worked for me. I found it here.
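Once the module is loaded, the variables from the file are available as attributes; continuing the snippet above with the file from the question:

print(foobar.foo_1)  # var1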
I want to run an executable that would normally be run directly on the command line, but via a Python script. I used subprocess.Popen, after reading through here and multiple Google results, and achieved some limited success.
>>> import sys
>>> import subprocess
>>> exe_path = sys.argv[1]
>>> dir_path_in = sys.argv[2]
>>> dir_path_out = sys.argv[3]
>>> subprocess.Popen([exe_path])
It then displays
<subprocess.Popen object at 0x021B7B30>
Followed by
usage: <path to exe> [options] <dir_path> <dir_path_out>
But if I enter the arguments you would normally pass on the command line, it returns:
SyntaxError: invalid token
I have tested what is entered on the command line directly with the exe, and it works fine; it just does not work via Python.
I have had a look through Stack Overflow, and the closest comparison I found was here: How to handle an executable requiring interactive responses?
Ultimately the "usage" part will not even be required in the end, as the declared sys.argv values will provide all the information the executable requires to run automatically.
subprocess.call() achieved the desired result: declare the argv variables, combine them into a single sequence, and pass that to subprocess.call(). This worked better than shlex.split(), which I tried first but which struggled with Windows paths even with the '\' escaped.
import subprocess
import sys

exe_path = sys.argv[1]
dir_path_in = sys.argv[2]
dir_path_out = sys.argv[3]
command = exe_path, dir_path_in, dir_path_out  # a tuple of arguments; no quoting or splitting needed
p = subprocess.call(command)
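If you also want a failure of the executable to raise an error instead of passing silently, subprocess.run with check=True does that; a sketch using the same variable names:

import subprocess
import sys

exe_path, dir_path_in, dir_path_out = sys.argv[1:4]
# raises subprocess.CalledProcessError on a non-zero exit status
subprocess.run([exe_path, dir_path_in, dir_path_out], check=True)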
I have one Python 3 script, and I need to run another script from it via the command line. What function should I use?
I mean something like this:
res = execute('C:\python32\python Z:\home\192.168.0.15\www\start.pyw start=1 module=server > Z:\home\192.168.0.15\www\test.html')
Use the subprocess module. That gives you the most flexibility.
Check out the Process Management section of the os module:
http://docs.python.org/3/library/os.html#module-os
os.popen will work well if you are interested in I/O with the process.
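A minimal os.popen sketch; the command itself is just an example:

import os

pipe = os.popen('python --version')
output = pipe.read()
status = pipe.close()  # None if the command exited with status 0
print(output, status)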
This is a Python program you want to start. It would be better to import the module, run the method you want, and write the output to a file.
However, this would be how you can do it via shell execution:
from subprocess import Popen, PIPE

command_stdout = Popen([r'C:\python32\python', r'Z:\home\192.168.0.15\www\start.pyw',
                        'start=1', 'module=server'], stdout=PIPE).communicate()[0]
res = command_stdout.decode("utf-8")
with open(r'Z:\home\192.168.0.15\www\test.html', "w") as fd:
    fd.write(res)
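On Python 3.5+ the same thing is shorter with subprocess.run, letting it write straight to the file (same assumed paths):

import subprocess

with open(r'Z:\home\192.168.0.15\www\test.html', 'w') as out:
    # stdout=out sends the child's output directly to the file
    subprocess.run([r'C:\python32\python', r'Z:\home\192.168.0.15\www\start.pyw',
                    'start=1', 'module=server'], stdout=out)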
I am working on a Python project that includes a lot of simple example scripts to help new users get used to the system. As well as the source code for each example, I include the output I get on my test machine so users know what to expect when all goes well.
It occurred to me that I could use this as a crude form of unit testing: automatically run all the example scripts and do a load of diffs against the expected output.
All of my example scripts end with extension .py so I can get their filenames easily enough with something like
pythonfiles=[filename for filename in os.listdir(source_directory) if filename[-3:]=='.py']
So, pythonfiles contains something like ['example1.py', 'cool_example.py'] and so on.
What syntax can I use to actually run the scripts referenced in this list?
You could leverage doctest to help you get this done. Write a method that executes each script, and in the docstring for each method paste the expected output:

import os

def run_example1():
    """
    This is example number 1. Running it should give you the following output:

    >>> run_example1()
    "This is the output from example1.py"
    """
    os.system('python example1.py')  # or you could use subprocess here

if __name__ == "__main__":
    import doctest
    doctest.testmod()
Note I haven't tested this.
Alternatively, as Shane mentioned, you could use subprocess. Something like this will work:
import subprocess
cmd = ('example1.py', 'any', 'more', 'arguments')
expected_out = """Your expected output of the script"""
exampleP = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = exampleP.communicate() # out and err are stdout and stderr, respectively
if out != expected_out:
    print "Output does not match"
You want to use the subprocess module.
If they are similarly structured (all are executed with a run function, for example), you can import them as Python modules and call their run function.
import sys
import os
import imp

pythonfiles = [filename for filename in os.listdir(source_directory) if filename[-3:] == '.py']
for py_file in pythonfiles:
    mod_name = os.path.splitext(py_file)[0]
    py_filepath = os.path.join(source_directory, py_file)
    py_mod = imp.load_source(mod_name, py_filepath)
    if hasattr(py_mod, "run"):
        py_mod.run()
    else:
        print '%s has no "run"' % (py_filepath)
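Since imp is deprecated in current Python versions (as one of the answers above notes), the same loop can be written with importlib; a sketch assuming source_directory is defined as in the question:

import os
from importlib.util import spec_from_file_location, module_from_spec

pythonfiles = [f for f in os.listdir(source_directory) if f.endswith('.py')]
for py_file in pythonfiles:
    mod_name = os.path.splitext(py_file)[0]
    py_filepath = os.path.join(source_directory, py_file)
    # load the file as a module, the modern equivalent of imp.load_source
    spec = spec_from_file_location(mod_name, py_filepath)
    py_mod = module_from_spec(spec)
    spec.loader.exec_module(py_mod)
    if hasattr(py_mod, "run"):
        py_mod.run()
    else:
        print('%s has no "run"' % py_filepath)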