I have a file, /home/test.sh, which contains some environment variables and which I load with . /home/test.sh (the space between the first . and / is intentional). I need to load this file and then run a .py script. If I first run the command manually on the Linux server and then run the Python script, it generates the required output. However, I want to call . /home/test.sh from within Python to load the profile and then run the rest of the code. If this profile is not loaded, the Python script runs but gives 0 as its output.
The call
subprocess.call('. /home/test.sh', shell=True)
runs fine, but the profile is not loaded into the environment of the Python process, so the rest of the code does not produce the desired output.
Can someone help?
A child process cannot modify the environment of its parent, which is why your simple approach does not work.
If you are trying to pick up environment variables that have been set in your test.sh, then one thing you could do instead is to use env in a sub-shell to write them to stdout after sourcing the script, and then in Python you can parse these and set them locally.
The code below will work provided that test.sh does not write any output itself. (If it does, then one workaround would be to echo some separator string after sourcing it and before running env, and then in the Python code strip off the separator string and everything before it; see the sketch after the code below.)
import subprocess
import os

p = subprocess.Popen(". /home/test.sh; env -0", shell=True,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
out, _ = p.communicate()

for varspec in out.decode().split("\x00")[:-1]:
    pos = varspec.index("=")
    name = varspec[:pos]
    value = varspec[pos + 1:]
    os.environ[name] = value

# just to test whether it works - output of the following should include
# the variables that were set
os.system("env")
It is also worth considering that if all you want to do is set some environment variables every time before you run any Python code, then one option is just to source your test.sh from a shell-script wrapper, rather than trying to set them inside Python at all:
#!/bin/sh
. /home/test.sh
exec /path/to/your/python/script "$@"
Then when you want to run the Python code, you run the wrapper instead.
I am struggling to execute a shell script from a Python program. The actual issue is that the script is a profile-loading script and is run manually as:
. /path/to/file
The script can't be run with sh, because the calling programs load some configuration file, so it must be sourced as . /path/to/file
Please can you guide me on how to integrate this into my Python script? I am using the subprocess.Popen command to run the script, but as said, it only works when run as . /path/to/file, so I am not getting the right result.
Without knowledge of the precise reason the script needs to be sourced, this is slightly speculative.
The fundamental problem is this: How do I get a source command to take effect outside the shell script?
Let's say your sourced file does something like
export fnord="value"
This cannot (usefully) be run in a subshell (as a normally executed script would) because the environment variable and its value will be lost when the script terminates. The solution is to source (aka .) this snippet from an already running shell; then the value stays in that shell's environment until that shell terminates.
But Python is not a shell, and there is no general way for Python to execute arbitrary shell script code, short of reimplementing the shell in Python. You can reimplement a small subset of the shell's functionality with something like
import os

with open('/path/to/file') as shell_source:
    lines = shell_source.readlines()

for line in lines:
    line = line.strip()
    if line.startswith('export '):
        var, value = line[7:].split('=', 1)
        if value.startswith('"'):
            value = value.strip('"')
        elif value.startswith("'"):
            value = value.strip("'")
        os.environ[var] = value
with some very strict restrictions (let's not say naïve assumptions) on the allowable shell script syntax in the file. But what if the file contained something other than a series of variable assignments, or the assignments used something other than trivially quoted strings as values? (Even the export might or might not be there. Its significance is to make the variable visible to subprocesses of the current shell; maybe that is not wanted or required. Also, export variable=value is not portable; proper Bourne shell syntax would be variable=value; export variable, or one of the many variations.)
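As a middle ground, Python's shlex module can at least cope with shell-style quoting. This is still only a sketch under the same one-assignment-per-line assumption, and it still mishandles semicolons, command substitution, and variable references:
import os
import shlex

with open('/path/to/file') as shell_source:
    for line in shell_source:
        tokens = shlex.split(line, comments=True)  # handles quoting and # comments
        if tokens and tokens[0] == 'export':
            tokens = tokens[1:]  # tolerate a leading "export"
        for token in tokens:
            if '=' in token:
                var, value = token.split('=', 1)
                os.environ[var] = value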
If you know what exactly your Python script needs from the shell script, maybe do something like
import os
import subprocess

r = subprocess.run('. /path/to/file; printf "%s\n" "$somevariable"',
                   shell=True, capture_output=True, text=True)
os.environ['somevariable'] = r.stdout.split('\n')[-2]
to source the entire script in a subshell, then print to standard output the part you actually need, and capture that from your Python script (and assign it to an environment variable if that's what you eventually need to accomplish).
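If you need more than one variable, the same pattern extends naturally. A sketch, assuming none of the values contain newlines (the variable names here are hypothetical):
import os
import subprocess

r = subprocess.run(
    '. /path/to/file; printf "%s\n" "$somevariable" "$anothervariable"',
    shell=True, capture_output=True, text=True)
somevalue, anothervalue = r.stdout.splitlines()
os.environ['somevariable'] = somevalue
os.environ['anothervariable'] = anothervalue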
I've been using the following shell command to read an image off a scanner named scanner_name and save it in a file named file_name:
scanimage -d <scanner_name> --resolution=300 --format=tiff --mode=Color 2>&1 > <file_name>
This has worked fine for my purposes.
I'm now trying to embed this in a Python script. What I need is to save the scanned image to a file, as before, and also capture any standard output (say, error messages) to a string.
I've tried
scan_result = os.system('scanimage -d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} '.format(scanner, file_name))
But when I run this in a loop (with different scanners), there is an unreasonably long lag between scans, and the images aren't saved until the next scan starts (the file is created empty and is not filled until the next scan command runs). All this with scan_result = 0, i.e. indicating no error.
The subprocess method run() has been suggested to me, and I have tried
with open(file_name, 'w') as scanfile:
    input_params = '-d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {} '.format(scanner, file_name)
    scan_result = subprocess.run(["scanimage", input_params], stdout=scanfile, shell=True)
but this saved the image in some kind of unreadable file format.
Any ideas as to what may be going wrong? Or what else I can try that will allow me to both save the file and check the success status?
subprocess.run() is definitely preferred over os.system() but neither of them as such provides support for running multiple jobs in parallel. You will need to use something like Python's multiprocessing library to run several tasks in parallel (or painfully reimplement it yourself on top of the basic subprocess.Popen() API).
You also have a basic misunderstanding about how to run subprocess.run(). You can pass in either a string and shell=True or a list of tokens and shell=False (or no shell keyword at all; False is the default).
with_shell = subprocess.run(
    "scanimage -d {} --resolution=300 --format=tiff --mode=Color 2>&1 > {}".format(
        scanner, file_name), shell=True)

with open(file_name, 'wb') as write_handle:
    no_shell = subprocess.run([
        "scanimage", "-d", scanner, "--resolution=300", "--format=tiff",
        "--mode=Color"], stdout=write_handle)
You'll notice that the latter does not support redirection (because that's a shell feature) but this is reasonably easy to implement in Python. (I took out the redirection of standard error -- you really want error messages to remain on stderr!)
If you have a larger working Python program this should not be awfully hard to integrate with a multiprocessing.Pool(). If this is a small isolated program, I would suggest you peel off the Python layer entirely and go with something like xargs or GNU parallel to run a capped number of parallel subprocesses.
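A minimal sketch of the multiprocessing.Pool() route, assuming a list of (scanner, file_name) pairs; the job list here is hypothetical:
import subprocess
from multiprocessing import Pool

def scan_one(job):
    scanner, file_name = job
    with open(file_name, 'wb') as write_handle:
        # stderr is left alone so error messages still reach the terminal
        return subprocess.run(
            ["scanimage", "-d", scanner, "--resolution=300",
             "--format=tiff", "--mode=Color"],
            stdout=write_handle).returncode

if __name__ == '__main__':
    jobs = [("scanner1", "out1.tiff"), ("scanner2", "out2.tiff")]  # hypothetical
    with Pool(processes=2) as pool:
        results = pool.map(scan_one, jobs)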
I suspect the issue is that you're opening the output file and then running subprocess.run() inside the with block. This isn't necessary: the end result is that you open the file via Python, have the command open the file again via the OS, and then close the file via Python.
JUST run the subprocess and let the scanimage command's 2>&1 > filename redirection create the file (just as it would if you ran scanimage at the command line directly).
Capturing the output with subprocess.run() (or subprocess.check_output()) is the preferred method nowadays. One caveat, though: a redirection like 2>&1 > file is a shell feature, so it cannot be passed as an item in the argument list; instead, send stdout to the file and capture the error messages from stderr.
I.e.
from subprocess import run, PIPE

# Command must be a list, with all parameters as separate list items
command = ['scanimage',
           '-d', scanner,
           '--resolution=300',
           '--format=tiff',
           '--mode=Color']
with open(file_name, 'wb') as image_file:
    scan_result = run(command, stdout=image_file, stderr=PIPE)
print(scan_result.stderr.decode())
However, with both run and check_output, shell=True is a big security risk, especially if the input parameters come into the Python script from outside. People can pass in unwanted commands and have them run in the shell with the permissions of the script.
Sometimes shell=True is necessary for the OS command to run properly, in which case the best recommendation is to use an actual Python module to interface with the scanner, rather than having Python pass an OS command to the OS.
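For instance (my suggestion, not something from the question), the python-sane module talks to SANE scanners directly. A rough, untested sketch; the option attribute names can vary by backend:
import sane  # from the python-sane package

sane.init()
devices = sane.get_devices()      # list of (device_name, vendor, model, type) tuples
dev = sane.open(devices[0][0])
dev.resolution = 300              # options are exposed as attributes; names vary by backend
dev.mode = 'color'
image = dev.scan()                # returns a PIL Image
image.save(file_name, format='TIFF')  # file_name as in the question
dev.close()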
I have a Python program that, at the beginning, takes a string variable, let's say "element_name", from the user, builds some sub-folders based on this string, and moves some output files created by the code into those folders.
On the other hand, I have a bash script in which some code should run inside the sub-folders created by the Python program.
How can I reference those folders in bash? How do I pass "element_name" from Python to bash?
In the Python code "a.py" I tried
first = subprocess.Popen(['/bin/echo', element_name], stdout=subprocess.PIPE)
second = subprocess.Popen(['bash', 'path/to/script', '--args'], stdin=first.stdout)
and then in bash
source a.py
echo $element_name
but it doesn't work.
It's not clear from your question what is in your scripts, but I guess
subprocess.run(['/bin/bash', 'path/to/script', '--args', element_name])
is doing what you intend to do, passing the value of element_name to script as an argument.
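Alternatively, if you would rather the bash script read $element_name from its environment than from a positional argument, here is a small sketch (the script path is the same placeholder as above):
import os
import subprocess

child_env = dict(os.environ, element_name=element_name)
subprocess.run(['bash', 'path/to/script', '--args'], env=child_env)
The bash script can then simply use "$element_name".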
I found a way. What I did was pass the argument into a bash file and source that bash file from my main bash script. Now everything works well.
I have a file with some environment variables that I want to use in a Python script.
The following works from the command line:
$ source myFile.sh
$ python ./myScript.py
and from inside the Python script I can access the variables like
import os
os.getenv('myvariable')
How can I source the shell script, and then access the variables, from within the Python script?
If you mean propagating environment variables back to the parent process, sorry, you can't; that's a security restriction. However, sourcing an environment from Python directly is definitely valid. It's more or less a manual process, though.
import subprocess as sp

SOURCE = 'your_file_path'

proc = sp.Popen(['bash', '-c', 'source {} && env'.format(SOURCE)],
                stdout=sp.PIPE, text=True)
source_env = dict(line.rstrip('\n').split('=', 1)
                  for line in proc.stdout if '=' in line)
Then you have everything you need in source_env.
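For example, a child process started with env=source_env sees the sourced variables while your own process stays clean:
# hypothetical child script; it sees the sourced variables
sp.run(['python', './myScript.py'], env=source_env)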
If you do need to write it back to your local environment (which is not recommended, since keeping everything in source_env is cleaner):
import os

for k, v in source_env.items():
    os.environ[k] = v
One more small thing to watch out for: since bash is invoked here, the usual shell rules apply. If you want a variable to be visible to env, you need to export it:
export VAR1='see me'
VAR2='but not me'
You cannot load environment variables from a bash or shell script directly; it is a different language. You will have to use bash to evaluate the file, then somehow print out the variables and read them back in Python. See Forcing bash to expand variables in a string loaded from a file.
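A minimal sketch of that idea, reusing the myFile.sh and myvariable names from the question:
import subprocess

value = subprocess.check_output(
    ['bash', '-c', 'source myFile.sh && echo "$myvariable"'],
    text=True).strip()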
On Windows there's a third-party command-line tool I would like to use in my Python script. Let's say it's foobar.exe, located under C:\Program Files (x86)\foobar. Foobar comes with an additional batch file init_env.bat that sets up the shell environment for foobar.exe to run.
I want to write a Python script that will first call init_env.bat once and then foobar.exe multiple times. However, all the mechanisms I know of (subprocess, os.system and backticks) seem to spawn a new process for each execution. Therefore, calling init_env.bat is useless, because it does not change the environment of the process in which the Python script runs, and thus every subsequent call to foobar.exe fails because its environment is not set up.
Is it possible to call init_env.bat from Python in a way that allows init_env.bat to alter the environment of the calling script's process?
Is it possible to call init_env.bat from python in a way that allows init_env.bat to alter the environment of the calling script's process?
Not easily, although if init_env.bat is really simple, you could attempt to parse it and make the changes to os.environ yourself.
Otherwise it's much easier to run it in a sub-shell, followed by a call to set to output the new environment variables, and then parse the output from that.
The following works for me...
init_env.bat
#echo off
set FOO=foo
set BAR=bar
foobar.bat
#echo off
echo FOO=%FOO%
echo BAR=%BAR%
main.py
import os
import subprocess

INIT_ENV_BAT = 'init_env.bat'
FOOBAR_EXE = 'foobar.bat'

def init_env():
    vars = subprocess.check_output([INIT_ENV_BAT, '&&', 'set'], shell=True)
    for var in vars.decode().splitlines():
        k, _, v = map(str.strip, var.strip().partition('='))
        if k.startswith('?'):
            continue
        os.environ[k] = v

def main():
    init_env()
    subprocess.check_call(FOOBAR_EXE, shell=True)
    subprocess.check_call(FOOBAR_EXE, shell=True)

if __name__ == '__main__':
    main()
...for which python main.py outputs...
FOO=foo
BAR=bar
FOO=foo
BAR=bar
Note that I'm only using a batch file in place of your foobar.exe because I don't have a .exe file handy which can confirm the environment variables are set.
If you're using a .exe file, you can remove the shell=True clause from the lines subprocess.check_call(FOOBAR_EXE, shell=True).
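In other words, with a real executable the call might look like this (the path is hypothetical):
FOOBAR_EXE = r'C:\Program Files (x86)\foobar\foobar.exe'  # hypothetical location
subprocess.check_call([FOOBAR_EXE])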