I have a script go.py, which needs to run unit tests (with nose) on several different modules, each with their own virtual envs.
How can I activate each virtual env prior to testing, and deactivate it after?
i.e. I want to do this (pseudo-code):
for fn in functions_to_test:
    activate(path_to_env)
    run_test(fn)
    deactivate()
activate()
Inside a virtual env, there is ./bin/activate_this.py
This does what I want. So in go.py I say
import os
activate_this_file = os.path.join(env_dir, 'bin/activate_this.py')
execfile(activate_this_file, dict(__file__=activate_this_file))
run_test()
I've currently got run_test() working using
import sys
from nose.loader import TestLoader
from nose.core import run

suite_x = TestLoader().loadTestsFromName(test_module + ":" + test_class)
r = run(suite=suite_x, argv=[sys.argv[0], "--verbosity=0", "-s"])
deactivate()
This is the part that I can't figure out.
What's the deactivation equivalent of env/bin/activate_this.py?
Broader Context
Each module will be uploaded to AWS by go.py as a lambda function. (Where 'lambda function' has a specific meaning in an AWS context, and is not related to lambda x:foo(x))
I want go.py to run unit tests on each lambda function, inside their respective virtual env (since they'll be executed in those virtual envs once deployed to AWS). Each lambda function uses different libraries, hence they have different virtual envs.
The activate_this.py script is not meant to be used to switch virtual environments in the middle of a computation. It is meant to be run as early as possible in your process and never touched again. If you look at the content of the script, you'll see that it does not take care to record anything for a future deactivation. Once activate_this.py has run, the state the interpreter was in before it started is lost. Moreover, the documentation also warns (with emphasis added):
Also, this cannot undo the activation of other environments, or modules that have been imported. You shouldn’t try to, for instance, activate an environment before a web request; you should activate one environment as early as possible, and not do it again in that process.
Instead of the approach you were hoping to use, I'd have the orchestrator use subprocess to spawn the Python interpreter specific to the virtual environment that needs to be used, and pass it the test runner ("nosetests", presumably) with the arguments needed for it to find the tests to run in that environment.
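A minimal sketch of that approach, assuming each module's virtualenv has nose installed and that env_dir and test_module are placeholders for however go.py locates each module:

import os
import subprocess

def run_tests_in_env(env_dir, test_module):
    # Use the nosetests that belongs to this module's virtualenv, so the
    # tests run against that environment's libraries; go.py itself never
    # has to activate or deactivate anything.
    nosetests = os.path.join(env_dir, 'bin', 'nosetests')
    return subprocess.call([nosetests, '--verbosity=0', '-s', test_module])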
There isn't an easy, complete, and generic way to do this. The reason is that activate_this.py does not just modify the module search path; it also does site configuration with site.addsitedir(), which may execute sitecustomize or usercustomize in the same Python process. In comparison, the shell-script version of activate simply modifies environment variables and lets each Python process re-execute site customization itself, so cleanup is significantly easier.
How to work around this issue? There are several possibilities:
You might want to run your tests under tox. I think this would be the preferable solution.
If you are sure that none of the packages in your virtualenvs do irreversible sitecustomize/usercustomize work, you can write a deactivate() that undoes virtualenv's modifications to sys.path, os.environ, and sys.prefix, or one that records those values in activate() so deactivate() can restore them (a sketch follows this list).
You can fork or create a subprocess in activate() before execfile("activate_this.py"). To deactivate the virtual environment, simply return to the parent process. You'll have to figure out how the child process can report its test results so the parent/main process can compile the final test report.
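A rough sketch of the second option, assuming none of the environments do irreversible site customization. It only restores sys.path, os.environ, and sys.prefix; modules already imported from a previous environment stay imported, which is exactly the caveat quoted above:

import os
import sys

_saved = {}

def activate(env_dir):
    # Remember the interpreter state that activate_this.py is about to change.
    _saved['path'] = list(sys.path)
    _saved['environ'] = dict(os.environ)
    _saved['prefix'] = sys.prefix
    activate_this = os.path.join(env_dir, 'bin', 'activate_this.py')
    execfile(activate_this, dict(__file__=activate_this))

def deactivate():
    # Restore what we recorded; this does not unload any imported modules.
    sys.path[:] = _saved['path']
    os.environ.clear()
    os.environ.update(_saved['environ'])
    sys.prefix = _saved['prefix']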
Related
From what I've read, any changes to the environment variables in a Python instance are only available within that instance, and disappear once the instance is closed. Is there any way to make them stick by committing them to the system?
The reason I need to do this is because at the studio where I work, tools like Maya rely heavily on environment variables to configure paths across multiple platforms.
My test code is
import os
os.environ['FAKE'] = 'C:\\'
Opening another instance of Python and requesting os.environ['FAKE'] yields a KeyError.
NOTE: Portability will be an issue, but the small API I'm writing will be able to check OS version and trigger different commands if necessary.
That said, I've gone the route of using the Windows registry technique and will simply write alternative methods that will call shell scripts on other platforms as they become requirements.
You can use SETX at the command line.
By default these actions go on the USER environment variables.
To set and modify SYSTEM variables, use the /M flag:
import os
env_var = "BUILD_NUMBER"
env_val = "3.1.3.3.7"
os.system("SETX {0} {1} /M".format(env_var,env_val))
make them stick by committing them to the system?
I think you are a bit confused here. There is no 'system' environment. Each process has its own environment as part of its memory. A process can only change its own environment; it can, however, set the initial environment for the processes it creates.
If you really do think you need to set environment variables for the system, you will need to look at changing them in the location they are initially loaded from, such as the registry on Windows or your shell configuration file on Linux.
Under Windows it's possible to make changes to environment variables persistent via the registry with this recipe, though it seems like overkill.
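For reference, a minimal sketch of the per-user registry approach (winreg on Python 3, _winreg on Python 2). It writes to HKEY_CURRENT_USER\Environment; already-running shells will not see the change until they restart or receive a WM_SETTINGCHANGE broadcast, which is omitted here:

import winreg

key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, 'Environment', 0, winreg.KEY_SET_VALUE)
winreg.SetValueEx(key, 'FAKE', 0, winreg.REG_SZ, 'C:\\')
winreg.CloseKey(key)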
To echo Brian's question, what are you trying to accomplish? There is probably an easier way.
It seems there is a simpler solution for Windows:
import subprocess
subprocess.call(['setx', 'Hello', 'World!'], shell=True)
I don't believe you can do this; there are two work-arounds I can think of.
The os.putenv function sets the environment for processes you start, i.e. with os.system, popen, etc. Depending on what you're trying to do, perhaps you could have one master Python instance that sets the variable and then spawns new instances (see the example at the end of this answer).
You could run a shell script or batch file to set it for you, but that becomes much less portable. See this article:
http://code.activestate.com/recipes/159462/
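For example, a change made through os.environ in a master process is inherited by everything it launches:

import os
import subprocess

os.environ['FAKE'] = 'C:\\'
# The child process gets a copy of the parent's (modified) environment.
subprocess.call(['python', '-c', "import os; print(os.environ['FAKE'])"])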
Think about it this way.
You're not setting shell environment variables.
You're spawning a subshell with some given environment variable settings; this subshell runs your application with the modified environment.
According to this discussion, you cannot do it. What are you trying to accomplish?
You are forking a new process, and you cannot change the environment of the parent process from it, just as you cannot do so when you start a new shell process from the shell.
You might want to try Python Win32 Extensions, developed by Mark Hammond, which is included in ActivePython (or can be installed separately). You can learn how to perform many Windows-related tasks in Hammond's and Robinson's book.
Using PyWin32 to access Windows COM objects, a Python program can use the Environment property (a collection of environment variables) of the WScript.Shell object.
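A sketch of that route, assuming pywin32 is installed; the item assignment relies on win32com exposing the collection's default Item property for writing, and, like SETX, changes affect newly started processes, not ones already running:

import win32com.client

shell = win32com.client.Dispatch('WScript.Shell')
user_env = shell.Environment('USER')   # or 'SYSTEM' for machine-wide variables
print(user_env('TEMP'))                # read a persistent variable
user_env['FAKE'] = 'C:\\'              # write one (persists in the registry)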
Try to use py-setenv that will allow you to set variable via registry
python -m pip install py-setenv
From within Python? No, it can't be done!
If you are not bound to Python, you should consider using shell scripts (sh, bash, etc.). The "source" command allows you to run a script that modifies the environment, and the changes will "stick" in the shell you "sourced" the script in. What's going on here is that the shell executes the script directly rather than creating a sub-process to execute it.
This will be quite portable - you can use cygwin on windows to do this.
In case someone might need this info: I realize this was asked 7 years ago, but even I forget how sometimes.
Yes, there is a way to make them "stick" in Windows. Simply go to Control Panel, System, Advanced System Settings; when the System Properties window opens you should see a button for Environment Variables. (The process for getting there differs a little depending on which version of Windows you're using; google it.)
Click that button and the Environment Variables window will open. It has two panes; the top one should be "User variables for yourusername". Choose "New", then simply set the variable, for instance "Database_Password = mypassword".
Then in your app you can access it like this: import os, then password = os.environ.get('Database_Password'). (Note that pass is a reserved word in Python, so pick a different variable name.)
I'm new to Python and am enjoying learning the language. I like using the interpreter in real time, but I still don't completely understand how it works. I would like to be able to define my environment with variables, imports, functions and all the rest, then run the interpreter with those already prepared. When I run my files (using PyCharm, Python 3.6) they just execute and exit.
Is there some line to put in my .py files like a main function that will invoke the interpreter? Is there a way to run my .py files from the interpreter where I can continue to call functions and declare variables?
I understand this is a total newbie question, but please explain how to do this or why I'm completely not getting it.
I think you're asking three separate things, but we'll count it as one question since it's not obvious that they are different things:
1. Customize the interactive interpreter
I would like to be able to define my environment with variables, imports, functions and all the rest then run the interpreter with those already prepared.
To customize the environment of your interactive interpreter, define the environment variable PYTHONSTARTUP. How you do that depends on your OS. It should be set to the pathname of a file (use an absolute path), whose commands will be executed before you get your prompt. This answer (found by Tobias) shows you how. This is suitable if there is a fixed set of initializations you would always like to do.
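For example, with PYTHONSTARTUP pointing at a file like the following (the path and contents are only illustrative), every interactive session starts with these names already defined:

# ~/.pythonstartup: referenced by the PYTHONSTARTUP environment variable
import os
import sys
import math

print('startup file loaded: os, sys, math are available')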
2. Drop to the interactive prompt after running a script
When I run my files (using PyCharm, Python 3.6) they just execute and exit.
From the command line, you can execute a python script with python -i scriptname.py and you'll get an interactive prompt after the script is finished. Note that in this case, PYTHONSTARTUP is ignored: It is not a good idea for scripts to run in a customized environment without explicit action.
3. Call your scripts from the interpreter, or from another script.
Is there a way to run my .py files from the interpreter where I can continue to call functions and declare variables?
If you have a file myscript.py, you can type import myscript in the interactive Python prompt, or put the same in another script, and your script will be executed. Your environment will then have a new module, myscript. You could use the following variant to import your custom definitions on demand (assuming a file myconfig.py where Python can find it):
from myconfig import *
Again, this is not generally a good idea; your programs should explicitly declare all their dependencies by using specific imports at the top.
You can achieve the result you intend by doing this:
Write a Python file with all the imports you want.
Call your script as python -i myscript.py.
Calling with -i runs the script then drops you into the interpreter session with all of those imports, etc. already executed.
If you want to save yourself the effort of calling Python that way every time, add this to your .bashrc file:
alias python='python -i /Users/yourname/whatever/the/path/is/myscript.py'
You can set the environment variable PYTHONSTARTUP, as suggested in this answer:
https://stackoverflow.com/a/11124610/1781434
subprocess.run('export FOO=BAR', shell=True)
This simply doesn't work, and I have no idea why.
All I am trying to do is set an environment variable from my Python (3.5.1) script, and when I run the above line, nothing happens. No errors are raised, and when I check the environment variable myself, it has not been set.
Other shell commands with subprocess.run() do work, such as ls and pwd, but not export.
.run() was added in Python 3.5 (in case you didn't recognise it), but I have also tried the above line with .call() and .Popen(), with no change in results.
I am aware that I can set environment variables in python with os.environ['FOO'] = "BAR", but I will be using shell commands a lot in my project, and I expect that I will need to string multiple commands together, which will make using export easier than os.environ.
My project will run on Linux, which is what my machine is running on.
It works fine; however, the variable setting only exists in the subprocess. You cannot affect the environment of the local process from a child.
os.environ is the correct solution, as it changes the environment of the local process, and those changes will be inherited by any process started with subprocess.run.
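A quick illustration of that behavior on Linux:

import os
import subprocess

os.environ['FOO'] = 'BAR'                 # changes this process's environment
subprocess.run('echo $FOO', shell=True)   # the child inherits it and prints BAR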
You can also use the env argument to run:
subprocess.run(["cmdname", "arg1", "arg number 2"], env=dict(FOO='BAR', **os.environ))
This runs the command in a modified environment that includes FOO=BAR without modifying the current environment.
In my build (I'm using Linux) I need to call a Python script and set some env variables. I need these variables to be set even after I exit the script. I am able to set it using os.environ within the script but whenever I exit the script and try to see if the env variable is set from the terminal (echo $myenv) - I get nothing.
I am new to Python and did quite a bit of googling to figure this out. However, I am not quite sure whether it's possible. I tried using subprocess:
subprocess.call('setenv myenv 4s3', shell=True)
Also tried using os.system:
os.system("setenv myenv 4s3")
So far, I didn't succeed.
You cannot set environment variables from a child process and have them be visible in the parent process. Every process gets its own copy of the environment, and changes do not propagate upwards.
What you could do is have the Python script print the settings it wants to change and have the outside shell execute the appropriate commands.
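For example, a sketch of that pattern: the script only prints the commands, and the calling shell evaluates its output with something like eval "$(python set_env.py)" (set_env.py is a made-up name):

# set_env.py: emit shell commands instead of trying to change the
# parent shell's environment directly (which a child process cannot do).
print('export myenv=4s3')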
Maybe you can find some equivalent of C's vfork for Python.
When you vfork, both processes share memory space, so you might be able to overwrite environment variables of the parent process from the child process.
Warning: vfork has many security issues and is therefore not recommended. Use it only if you are desperate.
(Note: I’ve Linux in mind, but the problem may apply on other platforms.)
Problem: Linux doesn’t do suid on #! scripts nor does it activate “Linux capabilities” on them.
Why do we have this problem? Because during the kernel's interpreter setup to run the script, an attacker may have replaced that file. How? The formerly trusted suid/capability-enabled script file may be in a directory he has control over (e.g. he can delete the not-owned trusted file, or the file is actually a symbolic link he owns).
Proper solution: make the kernel allow suid/cap scripts if a) it is clear that the caller has no power over the script file, or b) as a couple of other operating systems do, the script is passed as /dev/fd/x, referring to the originally kernel-opened trusted file.
Answer I’m looking for: for kernels which can’t do this (all Linux), I need a safe “now” solution.
What do I have in mind? A binary wrapper, which does what the kernel does not, in a safe way.
I would like to
hear about established wrappers for (Python) scripts that pass Linux capabilities and possibly suid from the script file to the interpreter to make them effective.
get comments on my wrapper proposed below
Problems with sudo: sudo is not a good wrapper, because it doesn't help the kernel avoid the "script got replaced" trap just explained ("man sudo" says so under Caveats).
Proposed wrapper
actually, I want a little program, which generates the wrapper
command line, e.g.: sudo suid_capability_wrapper ./script.py
script.py already has the suid bit and capabilities set (no function, just information)
the generator suid_capability_wrapper does
generate C(?) source and compile
name the compiled output: by default, basename script.py .py, or whatever is given with -o
set the wrapper owner, group, suid like script.py
set the permitted capabilities like script.py, ignore inheritable and effective caps
warn if the interpreter (e.g. /usr/bin/python) does not have the corresponding caps in its inheritable set (this is a system limitation: there is no other way to pass on capabilities without suid-root)
the generated code does:
check if file descriptors 0, 1 and 2 are open, abort otherwise (possibly add more checks for too crazy environment conditions)
if the compiled-in target script path is relative, determine self's location via /proc/self/exe
combine own path with relative path to the script to find it
check if the target script's owner, group, permissions, caps, and suid are still like the original (compiled-in) ones [this is the only non-necessary safety check I want to include: otherwise I trust that script]
set the set of inherited capabilities equal to the set of permitted capabilities
execve() the interpreter similarly to how the kernel does it, but use the script path we know and the environment we got (the script should take care of the environment)
A bunch of notes and warnings may be printed by suid_capability_wrapper to educate the user about:
make sure nobody can manipulate the script (e.g. world writable)
be aware that suid/capabilities come from the wrapper, nothing cares about suid/xattr mounts for the script file
the interpreter (python) is execve()ed, it will get a dirty environment from here
it will also get the rest of the standard process environment passed through it, which is ... ... ... (read man-pages for exec to begin with)
use #!/usr/bin/python -E to immunize the python interpreter from environment variables
clean the environment yourself in the script, or be aware that a lot of code you run as a side effect does care about some of these variables
You don't want to use a shebang at all, on any file - you want to use a binary which invokes the Python interpreter, then tells it to start the script file for which you asked.
It needs to do three things:
Start a Python interpreter (from a trusted path, breaking chroot jails and so on). I suggest statically linking libpython and using the CPython API for this, but it's up to you.
Open the script file FD and atomically check that it is both suid and owned by root. Don't allow the file to be altered between the check and the execution - be careful.
Tell CPython to execute the script from the FD you opened earlier.
This will give you a binary which will execute all owned-by-root-and-suid scripts under Python only. You only need one such program, not one per script. It's your "suidpythonrunner".
As you surmised, you must clear the environment before running Python. LD_LIBRARY_PATH is taken care of by the kernel, but PYTHONPATH could be deadly.
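To illustrate the kind of scrubbing meant here (shown in Python only for illustration; the real suidpythonrunner would do the equivalent in C before its execve()), whitelisting a minimal environment is safer than trying to strip every dangerous variable individually:

import os

# Illustration only: /path/to/script.py stands in for the trusted script.
clean_env = {'PATH': '/usr/bin:/bin', 'LANG': 'C'}
os.execve('/usr/bin/python3', ['/usr/bin/python3', '-E', '/path/to/script.py'], clean_env)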