Sourcing a file to set environment variables from within a python script - python

Running on Ubuntu, I have a python script, written by someone else, which simply runs like this:
python cool_script.py command line args
However, this script needs some environment variables set before it can run. So I created exports.sh located in the same dir as the python script, which looks something like:
export ENV1='foo'
export ENV2='bar'
export ENV3='baz'
so running:
$ source exports.sh
$ python cool_script.py command line args
works great. However, my case is a bit more complex. I need to run this on several environments, where the values of ENV1, ENV2, ENV3 will have to be different. I also want to do some stuff to the outputs of cool_script.py. So I wrote cool_script_wrapper.py, which looks like this:
# set environment variables
import os
exports_file = os.path.dirname(os.path.realpath(__file__)) + '/exports.sh'
os.system('source %s' % exports_file)
# run cool_script.py
os.system('python cool_script.py command line args')
# do some stuff to the output
...
I was planning to have a different exports.sh script for each environment I need to run on, always keeping it in the same dir as the main script. The only problem is that this 'source' approach doesn't work: the environment variables are simply not available to cool_script.py.
Searching around Stack Overflow, I understand now that it's not supposed to work, but I didn't find any solution that suits my needs. I don't want to use os.environ with hard-coded values, and I don't want to parse the exports file. I guess I could make my wrapper a bash script rather than a python, but I hope there is a better solution. Any ideas?

The problem is that source is a shell builtin, and os.system() runs it in a child shell: the variables get exported in that shell, which then exits, so nothing is passed back to your Python process.
This should work:
import os
os.environ['ENV1'] = "foo"
os.system("python cool_script.py command line args")
compare with this answer:
Calling the "source" command from subprocess.Popen
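If you want to keep the per-environment exports file, one workaround is to source it and run the script in a single shell invocation, so the exported variables are still alive when cool_script.py starts. A minimal sketch (the exports file is written inline here only to make the example self-contained):

```python
import os
import subprocess
import sys

# hypothetical exports file, created here only so the sketch is self-contained
with open('exports.sh', 'w') as f:
    f.write("export ENV1='foo'\n")

# source and run in ONE shell invocation, so the variables survive
cmd = ("source ./exports.sh; "
       "%s -c \"import os; print(os.environ['ENV1'])\"" % sys.executable)
output = subprocess.check_output(cmd, shell=True, executable='/bin/bash',
                                 universal_newlines=True)
os.remove('exports.sh')
```

Because the sourcing and the script run in the same shell, the child script sees ENV1 without Python ever parsing the exports file.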

How to make a command that runs a Python script in the Windows command line?

Background:
I'm using Windows. I know some Python programming. I don't know much about batch, which I think I might need to do what I want.
I will show an example so it becomes clearer what I'm trying to do.
Example:
When using git, after you install it, you can call the git command from anywhere on your computer and execute git commands; for example, git init will create a git repository in your current folder.
I don't know exactly how git works or what language it uses, but I want to do the same thing: create my own command that I can execute from anywhere on my computer after I "install" my program.
What I'm trying to do:
I want to make my own command and when I call it, it executes a python script.
e.g.
I install my program and it creates a command called myprogram, and when I type myprogram in the command line, it's as if I typed python myprogram.py. Likewise, myprogram -someargument would be the same as python myprogram.py -someargument.
What I tried until now:
I searched for How to make an environment variable that runs a Python script?, but I never get exactly what I want; it's always something like How do I set an environment variable using a Python script?.
Maybe I'm asking the wrong question, so the results I want aren't showing up?
I'm looking for an explanation of how to do this, or at least a tutorial/guide.
Edit 1:
As UnholySheep said in the comments, these are not environment variables but commands, so I changed the question, even though what I want to know is the same thing.
Files you need:
First you need a python script (obviously) so I created a file called myprogram.py that have this simple line:
print("This should be from a command")
After you need to make a batch file, in my case I used a .cmd file called myprogram.cmd that have:
@ECHO OFF
python_directory\python.exe python_script_directory\myprogram.py %*
Configurations to make:
You need to add the location of the batch file (batch_file_directory\myprogram.cmd) to the PATH environment variable.
And now if you execute myprogram in the command line, it will print This should be from a command.
You can also use .exe or .bat files.
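If you don't want to write the wrapper by hand, a small installer script could generate the .cmd file for you. A sketch under stated assumptions (install_demo and the file names are placeholders, not part of the answer above; in practice the directory should already be on PATH):

```python
import os
import sys

# hypothetical install directory; in practice use a folder that is on PATH
install_dir = os.path.abspath('install_demo')
os.makedirs(install_dir, exist_ok=True)

# generate the batch wrapper that forwards all arguments (%*) to the script
cmd_path = os.path.join(install_dir, 'myprogram.cmd')
with open(cmd_path, 'w') as f:
    f.write('@ECHO OFF\n')
    f.write('"%s" "%s\\myprogram.py" %%*\n' % (sys.executable, install_dir))
```

Using sys.executable pins the wrapper to the same Python interpreter that ran the installer, which avoids relying on python being on PATH.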

Calling script-level Python functions from the Linux command line

How can I create functions and software for my Linux server? Let me explain this in a bit more detail.
For my Linux server, which I access with my SSH client, I have made a few Python scripts that work fine. What I really want is to have these scripts available all the time, so that I can execute functions I've created in a script (such as "def time(): ...") just by typing "time" at the command line, rather than starting the script with ./script-name.py and then typing "time". Do I need to install my Python files into the system in some way?
I struggled searching Google because I didn't fully understand what to search for, and the results that came up weren't really related to my request. I did find the cmd Python module and learned how to create cmd interpreters; however, in order to access the commands I defined in the interpreter, I had to start the script first, which, as I explained above, is not what I'm looking for.
How can I make script-level Python functions callable from the Linux command line?
If you're using Python, you'll still need to fire up the interpreter, but you can make that happen automatically.
Start by making your script executable. Run this command in the shell:
chmod +x script-name.py
ls -l script-name.py
The output of ls should look something like this (note the xs in the left-hand column):
-rwxr-xr-x 1 me me 4 Jan 14 11:02 script-name.py
Now add an interpreter directive line at the top of your script file - this tells the shell to run Python to interpret your script:
#!/usr/bin/python
Then add code at the end of the file that calls your function:
if __name__ == '__main__':
    time()
The if statement checks to see if this is the file that is being executed. It means you can still import your module from another Python file without the time() function being automatically called, if you want.
Finally, you need to put your script in the executable path.
mkdir -p $HOME/bin
mv script-name.py $HOME/bin/
export PATH=$HOME/bin:$PATH
Now you should be able to run script-name.py, and you'll see the output of the time() function. You can rename your file to whatever you like; you can even remove the .py extension.
Additional things you can do:
Use the argparse module to add command line arguments, e.g. so you can run script-name.py time to execute the time() function
Put the script in a system-wide path, like /usr/local/bin, or
Add the export PATH=$HOME/bin:$PATH line to your .bashrc so that it happens by default when you log in
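The argparse suggestion above can be sketched like this (the function is renamed time_command here to avoid shadowing; build_parser and main are illustrative names, not from the original script):

```python
#!/usr/bin/env python3
"""Sketch of a subcommand-style CLI for the script described above."""
import argparse
import datetime

def time_command():
    # stand-in for the asker's time() function
    return datetime.datetime.now().isoformat()

def build_parser():
    parser = argparse.ArgumentParser(prog='script-name.py')
    parser.add_argument('command', choices=['time'], help='subcommand to run')
    return parser

def main(argv):
    args = build_parser().parse_args(argv)
    if args.command == 'time':
        print(time_command())

# in a real script you would call main(sys.argv[1:]) under the __main__ guard,
# so that `script-name.py time` runs the time_command() function
```

Restricting the positional argument with choices=['time'] gives a helpful error message for unknown subcommands for free.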
The answer above is by far more complete and more informative than mine. I just wanted to offer a quick and dirty alternative.
echo "alias time='<your script> time'" >> ~/.bashrc
bash
Like I said, quick and dirty.

Load environment variables of bashrc into python

I'm trying to set the environment variable of my .bashrc using Spyder; in other words I'm looking for a python command that reads my .bashrc. Any idea?
.bashrc should automatically be loaded into the environment when you log in:
import os
print(os.environ)
if you wanted to create a dictionary of values from a bash source file you could in theory do something like
import subprocess
output = subprocess.check_output(
    "source /path/to/.bashrc; env",
    shell=True, executable="/bin/bash", universal_newlines=True)
env = dict(line.split("=", 1) for line in output.splitlines() if "=" in line)
print(env)
The shell's startup file is the shell's startup file. You really want to decouple things so that Python doesn't have to understand Bash syntax, and so that settings you want to use from Python are not hidden inside a different utility's monolithic startup file.
One way to solve this is to put your environment variables in a separate file, and source that file from your .bashrc. Then when you invoke a shell from Python, that code can source the same file if it needs to.
# .bashrc
source $HOME/lib/settings.sh
# Python >= 3.5 code
import subprocess
subprocess.run(
    'source $HOME/lib/settings.sh; exec the command you actually want to run',
    # Need a shell; need the shell to be Bash
    shell=True, executable='/bin/bash',
    # basic hygiene
    check=True, universal_newlines=True)
(If you need to be compatible with older Python versions, try subprocess.check_call(), or even subprocess.call() if you want to give up the safeguards provided by the check_ variants in favor of compatibility all the way back to Python 2.4.)
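If you need the sourced variables inside the Python process itself, not just in a child shell, you can have Bash dump its environment after sourcing and merge the result into os.environ. A sketch using a throwaway settings file so it runs as-is; in practice you would point it at $HOME/lib/settings.sh:

```python
import os
import subprocess
import tempfile

# throwaway stand-in for your settings file
with tempfile.NamedTemporaryFile('w', suffix='.sh', delete=False) as f:
    f.write("export GREETING='hello'\n")
    settings = f.name

# source the file in Bash, then dump the resulting environment with env
output = subprocess.check_output(
    "source %s; env" % settings, shell=True,
    executable='/bin/bash', universal_newlines=True)

# parse KEY=VALUE lines and merge them into this process's environment
env = dict(line.split('=', 1) for line in output.splitlines() if '=' in line)
os.environ.update(env)
os.remove(settings)
```

Splitting with split('=', 1) keeps values that themselves contain '=' intact; note that multi-line values (e.g. exported functions) would need a more careful parser.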

Set environment variable in one script and use this variable in another script

I am trying to set environment variable using python. And this variable is used in another script.
My code is:
#!/usr/bin/env python
import os
os.environ['VAR'] = '/current_working_directory'
After executing the above Python script, I execute a second script which uses the same variable VAR, but it is not working.
But when I do export VAR='/current_working_directory' and run the second script, it works fine. I tried putenv() too.
This depends on how the second Python script gets called.
If you have a shell that first runs the first Python script and then the second, it won't work. The reason is that the first Python script inherits its environment from the shell, and modifying os.environ[] or calling putenv() only modifies that inherited copy --- the first script's own environment, not the shell's.
When the shell then runs the second Python script, it again inherits the environment from the shell, and because the shell's environment is unmodified, the second script cannot see the modification the first Python script made.
One way to achieve your goal is to use a helper file:
#!/bin/bash
rm -f envfile
./first_pythonscript   # expected to write "export VAR=..." lines into envfile
test -f envfile && . envfile
rm -f envfile
./second_pythonscript
That code is crude, it won't work if two instances of the shell script run, so don't use it as-is. But I hope you get the idea.
Even another way is to make your second_pythonscript not a program, but a Python module that the first_pythonscript can import. You can also make it a hybrid, library when imported, program when run via the if __name__ == "__main__": construct.
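The hybrid mentioned above could look like this; main and the VAR fallback are illustrative, not the asker's actual scripts:

```python
# second_pythonscript.py -- library when imported, program when run
import os

def main(var=None):
    # fall back to the environment when no value is passed in explicitly
    if var is None:
        var = os.environ.get('VAR', '')
    return 'VAR is %s' % var

if __name__ == '__main__':
    print(main())
```

first_pythonscript can then simply `import second_pythonscript` and call main(value) directly, sidestepping the environment entirely.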
And finally you can use one of the os functions, e.g. os.spawnvpe.
This code should provide the required environment to your 2nd script:
#!/usr/bin/env python
import os
os.environ['VAR'] = '/current_working_directory'
execfile("/path/to/your/second/script.py")
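Note that execfile() only exists on Python 2. On Python 3, the closest equivalent is exec() on the file's contents; a sketch, with the second script written inline only to keep the example self-contained:

```python
import os

os.environ['VAR'] = '/current_working_directory'

# hypothetical second script, created here so the sketch is runnable as-is
with open('second_script.py', 'w') as f:
    f.write("import os\nresult = os.environ['VAR']\n")

# Python 3 replacement for execfile(): run the script in this same process,
# so it sees the os.environ assignment above
namespace = {}
with open('second_script.py') as f:
    exec(compile(f.read(), 'second_script.py', 'exec'), namespace)
os.remove('second_script.py')
```

Because the second script runs in the same process, any variables it defines end up in the namespace dict for the caller to inspect.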
A child process cannot change the environment of its parent process.
The general shell programming solution to this is to make your first script print out the value you need, and assign it in the calling process. Then you can do
#!/bin/sh
# Assign VAR to the output from first.py
VAR="$(first.py)"
export VAR
exec second.py
or even more succinctly
#!/bin/sh
VAR="$(first.py)" second.py
Obviously, if the output from first.py is trivial to obtain without invoking Python, that's often a better approach; but for e.g. having two scripts call different functions from a library and/or communicating with a common back end, this is a common pattern.
Using Python for the communication between two pieces of Python code is often more elegant and Pythonic, though.
#!/usr/bin/env python
from yourlib import first, second
value=first()
# maybe putenv VAR here if that's really, really necessary
second(value)

How do I run a function in a Python module using only the Windows command line?

To clarify, the Python module I'm writing is a self-written .py file (named converter), not one that comes standard with the Python libraries.
Anyways, I want to somehow overload my function such that typing in
converter file_name
will send the file's name to
def converter(file_name):
    # do something
I've been extensively searching through Google and StackOverflow, but can't find anything that doesn't require the use of special characters like $ or command line options like -c. Does anyone know how to do this?
You can use something like PyInstaller to create an exe out of your .py file.
To use the argument in python:
import sys
if __name__ == "__main__":
    converter(sys.argv[1])
You can type in the Windows shell:
python converter.py file_name.txt
to give the arguments to the sys.argv list within python. So to access them:
import sys
sys.argv[0] # this will be converter.py
sys.argv[1] # this will be file_name.txt
At the bottom of the file you want to run, add:
if __name__ == "__main__":
    converter(sys.argv[1])
To have a second argument:
python converter.py file_name1.txt file_name2.txt
This will be the result:
import sys
sys.argv[0] # this will be converter.py
sys.argv[1] # this will be file_name1.txt
sys.argv[2] # this will be file_name2.txt
I would recommend using something like the builtin argparse (for 2.7/3.2+) or argparse on pypi (for 2.3+) if you're doing many complicated command line options.
The only way I can think of is to create a batch file with the same name and, within it, call your Python script with the parameters provided.
With batch files you don't have to specify the extension (.bat), which gets you closer to where you want to be.
Also, without compiling .py to .exe, you can make your script 'executable', so that if you issue a command line like myscript param param, the system will search for myscript.py and run it for you, as if it were an .exe or .bat file.
In order to achieve this, configure the machine where you plan to run your script:
Make sure you have file associations set (.py to python interpreter, that is, if you doubleclick at your script in the explorer -- it gets executed). Normally this gets configured by the Python installer.
Edit the PATHEXT environment variable (look inside My Computer properties) to include .PY as well as .EXE, .COM, etc.
Start a fresh cmd.exe from the Start menu to pick up the new value of the variable; old instances of any programs will only see the old value.
This setup is handy if you run many scripts on the same machine, and less so if you distribute your scripts to many machines. In the latter case you'd be better off using a converter like py2exe to bundle your application into a self-sufficient package (which doesn't even require Python to be installed).
