Can't activate python venv environment from Makefile - python

I'm trying to activate my virtual environment with a Makefile target, but I'm getting an error when I run the command below.
Command
make env
Error
Makefile:20: warning: overriding commands for target `make'
Makefile:17: warning: ignoring old commands for target `make'
source ../env/bin/activate
make: source: No such file or directory
make: *** [env] Error 1
Makefile
...
env:
	source ../env/bin/activate
The environment exists one directory above the directory with the Makefile.
Other Makefile commands work.
source ../env/bin/activate on the command line works.
I wonder if there is something special about Makefiles I don't understand that is causing this to fail?

There are more layers to this and you will probably run into them with your next steps, but one thing at a time. Recipes are executed by a shell; more specifically, by default (and, I reckon, in your case as well) by /bin/sh, which does not understand source, so change your Makefile to say:
env:
	. ../env/bin/activate
Or define make's SHELL variable to be /bin/bash and it will appear to work:
SHELL := /bin/bash
env:
	source ../env/bin/activate
But, next thing: each line of a recipe forks its own shell (environment), so changes you make in one shell instance (by sourcing a script) do not affect the next one. You can get around that and spawn just one shell by concatenating multiple recipe commands into one logical line (escaping the newlines and separating the commands with ;):
env:
	. ../env/bin/activate ; \
	SOME_COMMAND
But this is still only effective for the commands that are part of that one logical command line.
You might consider doing the same in multiple recipes, but really, if you need the environment to be active for whatever happens inside make, you may just want to activate it first and then call make.
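Alternatively, you can often skip activation inside the recipe altogether and point it straight at the virtualenv's interpreter, since activation mostly just puts the virtualenv's bin directory first on PATH. A minimal sketch, assuming the ../env layout from the question and a hypothetical main.py (recipe lines must be indented with a tab, as usual):
run:
	. ../env/bin/activate && python main.py
# or, without activating, call the virtualenv's interpreter directly
run-direct:
	../env/bin/python main.py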

Related

Setting up build environment using source file inside bash script does not work

So I have a build environment that gets set up when I do a . setup or source setup.
There are a bunch of commands (let's say command abc, command xyz) that become available once the setup file is sourced, which I need to use inside a bash script. So I have to do something like this:
#!/bin/bash
cd build_dir
. setup
command abc && command xyz
And then on calling my script, I expect command abc and command xyz to have been executed.
But instead I see an error saying command abc not found.
The setup environment is complex enough that I wouldn't want to bog my script down by adding all the commands and env variables manually; I'd rather ditch the script completely.
Why is this happening, and is there any way of doing this with either shell scripting or python?
command abc means: ignore any alias or function named abc and instead search for an executable file in the directories listed in $PATH. Obviously, there is no such file on your system.
So, you need to know where your file abc is located. Then you have two possibilities: Either add this directory to your PATH,
PATH=$PATH:/foo/bar/abcdir
or you don't use command, but execute your program using the explicit path, i.e.
/foo/bar/abcdir/abc && command xyz
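So the fixed script could look something like this sketch (using the placeholder path /foo/bar/abcdir from above):
#!/bin/bash
cd build_dir
. setup
# make abc findable by the normal PATH lookup
PATH=$PATH:/foo/bar/abcdir
command abc && command xyz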

Shell/Terminal: Execute a command for all files in a directory using their absolute path

I'm trying to execute a command for each file in a directory but while using their absolute path (such as /home/richi/mydir/myfile.py) instead of their relative path (such as myfile.py).
In other words, I want to execute a command on files in a directory based on their absolute path - similar to for file in *.py; do thecommand -a "$file"; done but not quite.
I'm asking this because I'm trying to implement a Travis CI script running in an Ubuntu 14.04 environment which will install and use pyminifier to recursively minify all the Python code files in a directory.
Please note that what I'm asking may be similar to this post, but it's not.
Since you're on a standard Linux distro with a full userland, you can just use the realpath command:
Print the resolved absolute file name…
For example:
$ pwd
/home/abarnert/src/test
$ touch 1
$ realpath 1
/home/abarnert/src/test/1
That's it.
If you don't know how to use that from within bash, you can call a subcommand using $(…) syntax:
$ echo $(realpath 1)
/home/abarnert/src/test/1
Of course you want to pass it the value of the variable file, but that's just as easy:
$ file=1
$ echo $(realpath "$file")
/home/abarnert/src/test/1
I'm assuming you're using bash here. With a different sh-style shell, things will be different; with tcsh or zsh or fish or something, it may be even more different.
A really old userland, or a really stripped down one (e.g., for an embedded system), might not include realpath. In that case, you can use readlink, since the GNU version, as usual, adds everything including a couple of kitchen sinks, and can be used as a realpath substitute.
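For example, GNU readlink does the same job with its -f flag (reusing the file from the example above):
$ readlink -f 1
/home/abarnert/src/test/1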
Or, if worst comes to worst, Python has come with a realpath function since 2.2:
$(python -c 'import os,sys; print(os.path.realpath(sys.argv[1]))' "$file")
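Putting it together with the loop from the question (thecommand and its -a flag are the question's own placeholders):
for file in *.py; do
  thecommand -a "$(realpath "$file")"
done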

deactivate conflict in virtualenvwrapper and anaconda

I'm using virtualenv to switch my python dev environments, but when I run workon my_env, I get this error message:
Error: deactivate must be sourced. Run 'source deactivate'
instead of 'deactivate'.
Usage: source deactivate
removes the 'bin' directory of the environment activated with 'source
activate' from PATH.
After some searching on Google, it seems that workon, which is defined in /usr/local/bin/virtualenvwrapper.sh, calls deactivate. And a script with the same name is present in Anaconda's bin directory, so it gets called by workon by mistake.
Any suggestion for working around this conflict?
One solution which works for me is to rename deactivate in Anaconda's bin:
mv deactivate conda-deactivate
I agree with the comment by @FredrikHedman that renaming scripts in the anaconda/miniconda bin directory has the potential to be fragile. His full post led me to what I feel is a more robust answer. (Thanks!)
Rather than simply throwing away any errors thrown by calling deactivate, we can condition that call on whether deactivate is the shell function or the file. As mentioned, virtualenv and virtualenvwrapper create a function named deactivate; the *condas provide a script file of the same name.
So, in the virtualenvwrapper.sh script, we can change the following two lines which test for whether deactivate is merely callable:
type deactivate >/dev/null 2>&1
if [ $? -eq 0 ]
with the stricter test for whether it's a shell function:
if [ -n "$ZSH_VERSION" ] ; then
    nametype="$(type -w deactivate)"
else
    nametype="$(type -t deactivate)"
fi
if [ "${nametype##* }" == "function" ]
This change avoids triggering the spurious error noted in the original question, but doesn't risk redirecting other useful errors or output into silent oblivion.
Note the variable substitution on nametype in the comparison. This is because the output of type -w under zsh returns something like "name: type" as opposed to type -t under bash which returns simply "type". The substitution deletes everything up to the last space character, if any spaces exist, leaving only the type value. This harmlessly does nothing in bash.
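To illustrate the substitution (hypothetical output; the exact wording can vary between shell versions):
# zsh
$ nametype="$(type -w deactivate)"   # e.g. "deactivate: function"
$ echo "${nametype##* }"
function
# bash
$ nametype="$(type -t deactivate)"   # just "function"
$ echo "${nametype##* }"
function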
(Thanks to @toprak for the zsh test and for the correct flag, type -w, under zsh. I look forward to more cross-shell coding tips!)
As ever, I appreciate constructive feedback and comments!
You can edit /usr/local/bin/virtualenvwrapper.sh to make deactivate point to an absolute path to whatever deactivate it is supposed to be referencing.
As I do not have enough reputation to add a comment:
Thomas Capote's suggestion is fine (thanks for that), except zsh does not have a -t option for the built-in command type. Therefore it's necessary to add another conditional statement to get the desired result for nametype:
# Anaconda workaround for "source deactivate" message:
# Start of workaround:
#type deactivate >/dev/null 2>&1
#if [ $? -eq 0 ]
if [ -n "$ZSH_VERSION" ] ; then
    nametype="$(type -w deactivate)"
else
    nametype="$(type -t deactivate)"
fi
if [ "${nametype##* }" == "function" ]
# End of workaround
Hope it helps other zsh users.
In anaconda, activate is an executable script located in the anaconda bin directory, but it is a function in virtualenvwrapper.sh. So this is sort of a namespace collision problem, but also a case of overlap in functionality.
Anaconda is a python distribution and, among many other things, has support for dealing with virtual environments via conda env, while virtualenvwrapper is focused on working with different virtual environments. Just renaming the anaconda/bin/activate script is a brittle solution and may break conda.
The code of virtualenvwrapper.sh (function workon) executes deactivate which happens to use the anaconda script. This script returns an error. The workon code then goes on and removes the deactivate name and sources its own deactivate and also creates a deactivate function on the fly.
In summary, it does the right thing and the "error" can be viewed more as a warning. If you want to make it go away you can modify the workon function (search for the line # Deactivate any current environment "destructively")
deactivate
-->
deactivate >/dev/null 2>&1
(I have suggested this change to the virtualenvwrapper maintainer)

Shell : workon not found [duplicate]

So, once again, I make a nice python program which makes my life ever so much easier and saves a lot of time. Of course, this involves a virtualenv, made with the mkvirtualenv function of virtualenvwrapper. The project has a requirements.txt file with a few required libraries (requests too :D) and the program won't run without these libraries.
I am trying to add a bin/run-app executable shell script which would be in my path (symlink actually). Now, inside this script, I need to switch to the virtualenv before I can run this program. So I put this in
#!/bin/bash
# cd into the project directory
workon "$(cat .venv)"
python main.py
A file .venv contains the virtualenv name. But when I run this script, I get a workon: command not found error.
Of course, I have the virtualenvwrapper.sh sourced in my bashrc but it doesn't seem to be available in this shell script.
So, how can I access those virtualenvwrapper functions here? Or am I doing this the wrong way? How do you launch your python tools, each of which has its own virtualenv!?
Just source the virtualenvwrapper.sh script in your script to import the virtualenvwrapper's functions. You should then be able to use the workon function in your script.
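Applied to the asker's bin/run-app, that could look something like this sketch (assuming virtualenvwrapper.sh is on your PATH; adjust the path otherwise):
#!/bin/bash
# make the virtualenvwrapper functions (workon, deactivate, ...) available
source "$(which virtualenvwrapper.sh)"
# cd into the project directory
workon "$(cat .venv)"
python main.py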
Or, maybe better, you could create a shell script (you could name it venv-run.sh, for example) to run any Python script in a given virtualenv, and place it in /usr/bin, /usr/local/bin, or any directory which is in your PATH.
Such a script could look like this:
#!/bin/bash
# if virtualenvwrapper.sh is in your PATH (i.e. installed with pip)
source `which virtualenvwrapper.sh`
#source /path/to/virtualenvwrapper.sh # if it's not in your PATH
workon "$1"
python "$2"
deactivate
And could be used simply like venv-run.sh my_virtualenv /path/to/script.py
I can't find a way to trigger virtualenvwrapper's commands from a shell script, but this trick can help: assuming your environment's name is myenv, put the following lines at the beginning of your script:
ENV=myenv
source $WORKON_HOME/$ENV/bin/activate
This is a super old thread and I had a similar issue. I started digging for a simpler solution out of curiosity.
gnome-terminal --working-directory='/home/exact/path/here' --tab --title="API" -- bash -ci "workon aaapi && python manage.py runserver 8001; exec bash;"
The --working-directory option forces the tab to open there by default under the hood, and bash -ci makes it behave like an interactive shell, which gets around the issues with virtualenvwrapper not functioning as expected.
You can run as many of these in sequence. It will open tabs, give them a title, and run the script you want.
Personally I dropped an alias into my bashrc to do this when I type startdev in my terminal.
I like this because it's easy, simple to replicate, flexible, and doesn't require any fiddling with variables and whatnot.
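A sketch of such an alias in ~/.bashrc (paths and names are just the placeholders from the command above):
alias startdev='gnome-terminal --working-directory="/home/exact/path/here" --tab --title="API" -- bash -ci "workon aaapi && python manage.py runserver 8001; exec bash"'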
It's a known issue. As a workaround, you can make the content of the script a function and place it in either ~/.bashrc or ~/.profile
function run-app() {
    workon "$(cat .venv)"
    python main.py
}
If your Python script requires a particular virtualenv then put/install it in virtualenv's bin directory. If you need access to that script outside of the environment then you could make a symlink.
main.py from virtualenv's bin:
#!/path/to/virtualenv/bin/python
import yourmodule
if __name__ == "__main__":
    yourmodule.main()
Symlink in your PATH:
pymain -> /path/to/virtualenv/bin/main.py
In bin/run-app:
#!/bin/sh
# cd into the project directory
pymain arg1 arg2 ...
Apparently, I was doing this the wrong way. Instead of saving the virtualenv's name in the .venv file, I should be putting the virtualenv's directory path.
(cdvirtualenv && pwd) > .venv
and in the bin/run-app, I put
source "$(cat .venv)/bin/activate"
python main.py
And yay!
add these lines to your .bashrc or .bash_profile
export WORKON_HOME=~/Envs
source /usr/local/bin/virtualenvwrapper.sh
and reopen your terminal and try again.
You can also call the virtualenv's python executable directly. First find the path to the executable:
$ workon myenv
$ which python
/path/to/virtualenv/myenv/bin/python
Then call from your shell script:
#!/bin/bash
/path/to/virtualenv/myenv/bin/python myscript.py

Why can't environmental variables set in python persist?

I was hoping to write a python script to create some appropriate environment variables by running the script in whatever directory I'll be executing some simulation code in, and I've read that I can't write a script to make these env vars persist in the macOS terminal. So two things:
Is this true?
and
It seems like it would be a useful thing to do; why isn't it possible in general?
You can't do it from python, but some clever bash tricks can do something similar. The basic reasoning is this: environment variables exist in a per-process memory space. When a new process is created with fork() it inherits its parent's environment variables. When you set an environment variable in your shell (e.g. bash) like this:
export VAR="foo"
What you're doing is telling bash to set the variable VAR in its process space to "foo". When you run a program, bash uses fork() and then exec() to run the program, so anything you run from bash inherits the bash environment variables.
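You can see this inheritance, and its one-way nature, directly in a shell; a quick sketch:
$ export VAR="foo"
$ bash -c 'echo $VAR'      # the child inherits VAR from its parent
foo
$ bash -c 'export VAR=bar' # the child changes only its own copy
$ echo $VAR                # the parent is unaffected
foo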
Now, suppose you want to create a bash command that sets some environment variable DATA with content from a file in your current directory called ".data". First, you need to have a command to get the data out of the file:
cat .data
That prints the data. Now, we want to create a bash command to set that data in an environment variable:
export DATA=`cat .data`
That command takes the contents of .data and puts it in the environment variable DATA. Now, if you put that inside an alias command, you have a bash command that sets your environment variable:
alias set-data='export DATA=`cat .data`'
You can put that alias command inside the .bashrc or .bash_profile files in your home directory to have that command available in any new bash shell you start.
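Usage would then look something like this (note the single quotes in the alias: they defer the cat until the alias is actually run):
$ cd /path/to/project   # a directory containing a .data file
$ set-data
$ echo "$DATA"
<contents of .data>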
One workaround is to output export commands, and have the parent shell evaluate them:
thescript.py:
import pipes
import random
r = random.randint(1,100)
print("export BLAHBLAH=%s" % (pipes.quote(str(r))))
...and the bash alias (the same can be done in most shells, even tcsh!):
alias setblahblahenv='eval $(python thescript.py)'
Usage:
$ echo $BLAHBLAH
$ setblahblahenv
$ echo $BLAHBLAH
72
You can output any arbitrary shell code, including multiple commands like:
export BLAHBLAH=23 SECONDENVVAR='something else' && echo 'everything worked'
Just remember to be careful about escaping any dynamically created output (pipes.quote, or shlex.quote on Python 3, is good for this).
If you set environment variables within a python script (or any other script or program), it won't affect the parent shell.
Edit clarification:
So the answer to your question is yes, it is true.
You can however export from within a shell script and source it by using the dot invocation
in fooexport.sh
export FOO="bar"
at the command prompt
$ . ./fooexport.sh
$ echo $FOO
bar
It's not generally possible. The new process created for python cannot affect its parent process's environment. Neither can the parent change the child's environment after the fact, but the parent does get to set up the child's environment as part of creating the new process.
Perhaps you can set them in .bashrc, .profile or the equivalent "runs on login" or "runs on every new terminal session" script in MacOS.
You can also have python start the simulation program with the desired environment. (use the env parameter to subprocess.Popen (http://docs.python.org/library/subprocess.html) )
import subprocess, os
os.chdir('/home/you/desired/directory')
subprocess.Popen(['desired_program_cmd', 'args', ...], env=dict(SOMEVAR='a_value') )
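Note that passing env= replaces the child's entire environment; if the program also needs the variables you already have, you can merge them in. A sketch along the same lines:
import os
import subprocess

os.chdir('/home/you/desired/directory')
# copy the current environment, then add/override the extra variable
env = dict(os.environ, SOMEVAR='a_value')
subprocess.Popen(['desired_program_cmd', 'args'], env=env)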
Or you could have python write out a shell script like this to a file with a .sh extension:
export SOMEVAR=a_value
cd /home/you/desired/directory
./desired_program_cmd
and then chmod +x it and run it from anywhere.
What I like to do is use /usr/bin/env in a shell script to "wrap" my command line when I find myself in similar situations:
#!/bin/bash
/usr/bin/env NAME1="VALUE1" NAME2="VALUE2" "$@"
So let's call this script "myappenv". I put it in my $HOME/bin directory which I have in my $PATH.
Now I can invoke any command using that environment by simply prepending "myappenv" as such:
myappenv dosometask -xyz
Other posted solutions work too, but this is my personal preference. One advantage is that the environment is transient, so if I'm working in the shell only the command I invoke is affected by the altered environment.
Modified version based on new comments
#!/bin/bash
/usr/bin/env G4WORKDIR=$PWD "$@"
You could wrap this all up in an alias too. I prefer the wrapper script approach since I tend to have other environment prep in there too, which makes it easier for me to maintain.
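The alias version would be something like this (a sketch; the rest of the command line simply follows the alias expansion):
alias myappenv='/usr/bin/env G4WORKDIR=$PWD'
# then, as before:
myappenv dosometask -xyz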
As answered by Benson, but the best hack-around is to create a simple bash function to preserve arguments:
upsert-env-var () { eval $(python upsert_env_var.py "$@"); }
You can do whatever you want in your python script with the arguments. To simply add a variable, use something like:
import os
import sys

var = sys.argv[1]
val = sys.argv[2]
if os.environ.get(var, None):
    print("export %s=%s:%s" % (var, val, os.environ[var]))
else:
    print("export %s=%s" % (var, val))
Usage:
upsert-env-var VAR VAL
As others have pointed out, the reason this doesn't work is that environment variables live in per-process memory space and thus die when the Python process exits.
They point out that a solution to this is to define an alias in .bashrc to do what you want such as this:
alias export_my_program='export MY_VAR=`my_program`'
However, there's another (a tad hacky) method which does not require you to modify .bashrc, nor require you to have my_program in $PATH (or specify the full path to it in the alias). The idea is to run the program in Python if it is invoked normally (./my_program), but in Bash if it is sourced (source my_program). (Using source on a script does not spawn a new process and thus does not kill environment variables created within.) You can do that as follows:
my_program.py:
#!/usr/bin/env python3
_UNUSED_VAR=0
_UNUSED_VAR=0 \
<< _UNUSED_VAR
#=======================
# Bash code starts here
#=======================
'''
_UNUSED_VAR
export MY_VAR=`$(dirname $0)/my_program.py`
echo $MY_VAR
return
'''
#=========================
# Python code starts here
#=========================
print('Hello environment!')
Running this in Python (./my_program.py), the first 3 lines will not do anything useful and the triple-quotes will comment out the Bash code, allowing Python to run normally without any syntax errors from Bash.
Sourcing this in bash (source my_program.py), the heredoc (<< _UNUSED_VAR) is a hack used to "comment out" the first-triple quote, which would otherwise be a syntax error. The script returns before reaching the second triple-quote, avoiding another syntax error. The export assigns the result of running my_program.py in Python from the correct directory (given by $(dirname $0)) to the environment variable MY_VAR. echo $MY_VAR prints the result on the command-line.
Example usage:
$ source my_program.py
Hello environment!
$ echo $MY_VAR
Hello environment!
However, if run normally, the script will still do everything it did before except exporting the environment variable:
$ ./my_program.py
Hello environment!
$ echo $MY_VAR
<-- Empty line
As noted by other authors, the environment is thrown away when the Python process exits. But while the Python process is running, you can edit its own environment. For example:
>>> import os
>>> os.environ["foo"] = "bar"
>>> import subprocess
>>> subprocess.call(["printenv", "foo"])
bar
0
>>> os.environ["foo"] = "foo"
>>> subprocess.call(["printenv", "foo"])
foo
0
