I am currently using subprocess to run a Python script from inside my current Python program, but it keeps giving me an error:
import os
import subprocess

for dir in os.listdir(os.path.join(DIR2, dirname)):
    temp = os.path.join(DIR2, dirname, dir)
    files = [os.path.join(temp, f) for f in os.listdir(temp) if f.endswith("json")]
    for lists in files:
        subprocess.Popen(["python", DIR4, os.path.join(temp, lists)])
Above is what I am currently using.
DIR4 is the path of the Python script that I want to run.
The problem is that the script I want to run can only take one file at a time.
However, this subprocess call appears to launch ALL of the runs at ONCE.
I want to run them ONE at a time, instead of ALL at ONCE.
Because they all run at once, the script I am calling does not work the way it should.
What do I need to change?
If you want to wait for the subprocess to terminate before going ahead, I think you could use Popen.wait():
...
p = subprocess.Popen(["python", DIR4, os.path.join(temp,lists)])
p.wait()
...
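Applied to the loop from the question, a minimal sketch might look like the following (note that the entries of files are already full paths, so they can be passed directly):

import os
import subprocess

for dir in os.listdir(os.path.join(DIR2, dirname)):
    temp = os.path.join(DIR2, dirname, dir)
    files = [os.path.join(temp, f) for f in os.listdir(temp) if f.endswith("json")]
    for path in files:
        p = subprocess.Popen(["python", DIR4, path])
        p.wait()  # block until this run finishes before starting the next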
To actually do what you're asking, rather than hacking it together through subprocess, you can use exec, which allows you to run Python code with your own provided globals and locals.
In older versions of Python (meaning pre-3), you can use execfile to achieve the same thing.
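For example, a minimal sketch of running another script in-process with exec; the script path and arguments here are placeholders:

import sys

script_path = "other_script.py"          # hypothetical: path to the script to run
sys.argv = [script_path, "input.json"]   # hypothetical: arguments the script expects

with open(script_path) as f:
    code = compile(f.read(), script_path, "exec")
exec(code, {"__name__": "__main__"})  # run with our own globals so its __main__ block fires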
I had this script working for me before I decided to rewrite everything and make it portable.
Without delving too much into the details, there's a central Bash script which calls 5 other Bash scripts in their own respective folders. I have no intention of porting to Windows anytime soon; for now this is Linux-only.
The execution path of the central Bash script is:
dos.1/1-init.sh dos.1/
dos.2/1-trace-to-file.sh dos.2/ dos.1/
dos.3/1-recognize-categories.sh dos.3/
dos.4/1-ping-in-groups.sh dos.4/ dos.3/
dos.5/init.sh dos.5/ dos.4/
I run with ./init.sh
Before the script was 'portable' I was using explicit file paths inside each respective script. All was well and good. The program itself is a combination of Bash and Python, and writes to files in one directory, so that they can be manipulated in various ways, before being read back into different parts of the program.
I understand that the fastest way to do this would be to write a monolithic Python script, using subprocess calls for the Bash side of things... However, I am doing it this way to ease maintenance, and (before I started making it 'portable') it was lightning fast.
My issue now is this: each time I have to read text into Python (either from SQL or from file) there's always this added garbage. Up until this point, I have been using sed, awk and Python's .rstrip() function to manage this... Which is all well and good, but this one damn function will not play nice... And I feel there must be a better way.
In Bash I call it with:
prog_dir=$1
data_dir=$2
$prog_dir/2fast-ping.py $data_dir/group0.txt > $prog_dir/group0_averages.txt
$prog_dir/2fast-ping.py $data_dir/group1.txt > $prog_dir/group1_averages.txt
...
Now I know that I could write to file from within Python, but in this instance I have other reasons not to.
The issue is that when the 2fast-ping.py script is run, it reads the text file in with commas and a newline char. I have vigorously checked and can confirm that the group#.txt files 100% do not contain commas. Here's the Python:
import sys
import subprocess
import select
from concurrent.futures import ThreadPoolExecutor

filename = sys.argv[1]
f = open(filename, "r")
ips = [elem.rstrip('\n') for elem in f]  # strip the trailing newline from each line
print(ips)  # printing a list shows its repr, i.e. ['a', 'b'] with commas
f.close()
The script goes on to do some work on the IPs afterwards, but this is the painful part. If I call the script directly from the CLI: ./2fast-ping.py ../dos.3/group0.txt, the text is processed PROPERLY and the subsequent instructions actually function. But when called from the first init script, the program basically sh*ts itself because each line is read in with commas. It works until the point where it starts to use the processed info, then:
<actual IP would be here>
ping: ('##.###.###.###',): Name or service not known
Of course, the issue is the ('',) tuple. But Python is adding that in, and I don't know how to stop it :(
Any ideas?
The Python code was okay; I was just passing an additional / with the argument :(
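A minimal defensive tweak, assuming the stray / arrived via the shell argument (an assumption, since the exact culprit isn't shown above): normalize the incoming path before using it.

import os
import sys

# collapses doubled slashes such as dos.3//group0.txt into dos.3/group0.txt
filename = os.path.normpath(sys.argv[1])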
There is a Python script start_test.py.
There is a second Python script simple_test.py.
# pseudo code:
start_test.py --calls--> subprocess(python.exe simple_test.py, args_simple_test[])
The Python interpreter for both scripts is the same. So instead of opening a new instance, I want to run simple_test.py directly from start_test.py. I need to preserve the sys.argv environment. A nice-to-have would be to actually enter the following code section in simple_test.py:
# file: simple_test.py
if __name__ == '__main__':
    some_test_function()
Most important is that the way should be a universal one, not depending on the content of simple_test.py.
This setup would provide two benefits:
The call is much less resource intensive
The whole stack of simple_test.py can be debugged with pycharm
So, how do I execute the call of a python script, from a python script, without starting a new subprocess?
"Executing a script" is a somewhat blurry term.
Typically the if __name__ == "__main__": part does the argument (sys.argv) decoding and then calls a worker function with explicit parameters. For clarity: it should not do anything else, since any extra work there cannot be reused without creating a new process, which causes exactly the overhead you are trying to avoid.
You simply bypass that and call the implementing routine directly.
So you end up with start_test.py containing something like:
from simple_test import worker
# ...
worker(typed_arg1, typed_arg2)
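A minimal sketch of that pattern, assuming simple_test.py is refactored so its argument handling lives in a main() function (the function and file names below are illustrative):

# file: simple_test.py
import sys

def worker(input_path, verbose=False):
    ...  # the actual test logic, taking typed parameters

def main(argv):
    # all sys.argv decoding happens here, and nowhere else
    worker(argv[1], verbose="-v" in argv)

if __name__ == '__main__':
    main(sys.argv)

# file: start_test.py
from simple_test import main

# run simple_test in-process, handing it exactly the argv it
# would have received as a subprocess
main(["simple_test.py", "input.txt", "-v"])

This keeps the call cheap, preserves the argv contract, and lets a debugger step straight into the whole simple_test.py stack.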
I have read way too many threads now and I'm really lost.
Just trying to do something basic before I make it complicated.
So I have a script test.py.
I want to call the script from within runme.py without waiting, so it can process the other chunk of code, but then, when it gets to the end, wait for test.py to finish before continuing on.
I can't seem to figure out the correct syntax for p = subprocess.Popen (I have tried so many variants).
And do I need the path to test.py if it's in the same directory?
Here is what I have, but I can't get it to work:
import subprocess
p = subprocess.Popen(['python test.py'])
#do some code
p.wait()
I can't seem to figure out the correct syntax for p = subprocess.Popen (I have tried so many variants).
You want to pass it a list of arguments. The first argument is the program to run, python (although actually, you probably want sys.executable here); the second is the script that you want python to run. So:
p = subprocess.Popen(['python', 'test.py'])
And do I need the path to test.py if it's in the same directory?
This will work the same way as if you ran python test.py at the shell: it will just pass test.py as-is to python, and python will treat that as a path relative to the current working directory (CWD).
So, if test.py is in the CWD, this will just work.
If test.py is somewhere else, then you need to provide either an absolute path, or one relative to the CWD.
One common thing you want is that test.py is not necessarily in the CWD, but instead in the same directory as the script/module that wants to launch it:
scriptpath = os.path.join(os.path.dirname(__file__), 'test.py')
… or in the same directory as the main script used to start your program: [1]
scriptpath = os.path.join(os.path.dirname(sys.argv[0]), 'test.py')
Either way, you just pass that as the argument:
p = subprocess.Popen(['python', scriptpath])
[1] On some platforms, this may actually be a relative path. If you might have done an os.chdir since startup, it will now be wrong. If you need to handle that, you want to stash os.path.abspath(os.path.dirname(sys.argv[0])) in the main script at startup, then pass it down to other functions for them to use instead of calling dirname(argv[0]) themselves.
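Putting the pieces together with the original goal (start test.py, keep working, then wait at the end), a minimal sketch might be:

import os
import subprocess
import sys

# resolve test.py relative to this module, not the CWD
scriptpath = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'test.py')
p = subprocess.Popen([sys.executable, scriptpath])  # launches test.py without blocking

# ... do some other work here while test.py runs ...

p.wait()  # now block until test.py has finished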
I want to run some command-line scripts from within my Python program. These scripts generate some output files. I want to grab those output files from the subprocess call as objects in my Python program, while preventing the files from ever being written to disk. The problem is I don't know how to do it, or whether it is even possible.
A simple example would look like this:
#foo.py
fout1 = open("temp1.txt","w")
fout2 = open("temp2.txt","w")
fout1.write("fout1")
fout2.write("fout2")
fout1.close()
fout2.close()
#test.py
import subprocess
process = subprocess.Popen(["python","foo.py"], ????????) #what arguments to use to grab temp1.txt and temp2.txt
print(process.??????) #how to access those files
I am familiar with subprocess.Popen so that is what the example code uses, but I am open to the use of other modules too if they could do it.
On Windows there's a 3rd party command line tool I would like to use in my python script. Let's say it's foobar.exe located under C:\Program Files (x86)\foobar. Foobar comes with an additional batch file init_env.bat that will set up the shell environment for foobar.exe to run.
I want to write a Python script that will first call init_env.bat once and then foobar.exe multiple times. However, all the mechanisms I know of (subprocess, os.system and backticks) seem to spawn a new process for each execution. Therefore, calling init_env.bat is useless, because it does not change the environment of the process in which the Python script runs, and thus every subsequent call to foobar.exe fails because its environment is not set up.
Is it possible to call init_env.bat from Python in a way that allows init_env.bat to alter the environment of the calling script's process?
Is it possible to call init_env.bat from Python in a way that allows init_env.bat to alter the environment of the calling script's process?
Not easily, although if the init_env.bat is really simple, you could attempt to parse it and make the changes to os.environ yourself.
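For instance, a minimal sketch of that parsing approach, under the strong assumption that the batch file contains only plain set VAR=value lines:

import os

with open('init_env.bat') as f:
    for line in f:
        line = line.strip()
        # only handle literal "set NAME=value" lines; anything else is ignored
        if line.lower().startswith('set ') and '=' in line:
            k, _, v = line[4:].partition('=')
            os.environ[k.strip()] = v.strip()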
Otherwise it's much easier to spawn it in a sub-shell, followed by a call to set to output the new environment variables, and parse the output from that.
The following works for me...
init_env.bat
@echo off
set FOO=foo
set BAR=bar
foobar.bat
@echo off
echo FOO=%FOO%
echo BAR=%BAR%
main.py
import os
import subprocess

INIT_ENV_BAT = 'init_env.bat'
FOOBAR_EXE = 'foobar.bat'

def init_env():
    # Run the batch file, then 'set' in the same shell, and capture the output.
    # check_output returns bytes on Python 3, so decode before string handling.
    output = subprocess.check_output([INIT_ENV_BAT, '&&', 'set'], shell=True).decode()
    for var in output.splitlines():
        k, _, v = map(str.strip, var.strip().partition('='))
        if k.startswith('?'):
            continue
        os.environ[k] = v

def main():
    init_env()
    subprocess.check_call(FOOBAR_EXE, shell=True)
    subprocess.check_call(FOOBAR_EXE, shell=True)

if __name__ == '__main__':
    main()
...for which python main.py outputs...
FOO=foo
BAR=bar
FOO=foo
BAR=bar
Note that I'm only using a batch file in place of your foobar.exe because I don't have a .exe file handy that can confirm the environment variables are set.
If you're using a .exe file, you can remove the shell=True parameter from the subprocess.check_call(FOOBAR_EXE, shell=True) lines.
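As a variation on the same idea, you could collect the parsed variables into a dict and pass it explicitly via the env parameter instead of mutating os.environ; a minimal sketch:

import subprocess

def capture_env(bat_path):
    # Run the batch file, dump the resulting environment with 'set',
    # and return it as a dict without touching os.environ.
    output = subprocess.check_output([bat_path, '&&', 'set'], shell=True).decode()
    env = {}
    for line in output.splitlines():
        k, _, v = line.partition('=')
        if k and not k.startswith('?'):
            env[k] = v
    return env

env = capture_env('init_env.bat')
subprocess.check_call('foobar.bat', shell=True, env=env)

This keeps the parent process's environment untouched, at the cost of having to thread the dict through every call.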