How to share a virtual environment with pipenv?

Pipenv virtual environments (venvs) are shared with child folders.
So, for example, if you have created a venv in ~/foo/, it will also be accessible in ~/foo/baz/.
But what if you want to share the same venv between ~/foo/bob/ and ~/baz/alice/?
The following kind of worked for me. I hope it can help.

There's an undocumented feature in pipenv: if you create a file named .venv in the project root containing a path to a virtualenv, pipenv will use that instead of an autogenerated path.
(You'll still need to keep the Pipfiles and Pipfile.locks synchronized. Making them symlinks, as MalikKoné suggests, may work, but not if the Pipfiles are under version control, as they are supposed to be.)
This, however, is better suited to cases where you already have an established set of environments you wish to reuse. Otherwise, placing environments in arbitrary locations tends to create a mess eventually.
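For example, using the paths from the walkthrough below (a sketch only; substitute the real virtualenv path on your machine, the file simply holds that path as plain text):
cd ~/baz/alice/
echo "$HOME/.local/share/virtualenvs/bob-signature" > .venv
pipenv --venv   # should now report the path written into .venv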

To share a virtual env with pipenv
Create a directory ~/foo/bob/
mkdir -p ~/foo/bob/ ; cd ~/foo/bob/
Create a virtual env in ~/foo/bob/
pipenv --three
This will create ~/.local/share/virtualenvs/bob-signature/
Install whatever packages you need. For example
pipenv install jupyter
This will create a Pipfile.lock in ~/foo/bob/
Create another directory, say ~/baz/alice/ and create a venv there
mkdir -p ~/baz/alice ; cd ~/baz/alice/ ; pipenv --three
As before, pipenv will have created alice-signature/ in ~/.local/share/virtualenvs/.
Remove that folder and replace it with a link to bob-signature
cd ~/.local/share/virtualenvs/
rm -r alice-signature/
ln -s bob-signature/ alice-signature
In ~/baz/alice/, link Pipfile and Pipfile.lock to the ones in ~/foo/bob/
cd ~/baz/alice/ ;
rm Pipfile ; rm Pipfile.lock
ln -s ~/foo/bob/Pipfile . ; ln -s ~/foo/bob/Pipfile.lock .
Now you should have a venv accessible from alice/ or bob/, and packages installed from either of those directories will be shared.
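A quick way to confirm the sharing works (not part of the original steps; pipenv --venv prints the environment a project resolves to):
cd ~/foo/bob/ ; pipenv --venv    # ~/.local/share/virtualenvs/bob-signature
cd ~/baz/alice/ ; pipenv --venv  # alice-signature, which now resolves to bob-signature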

Related

Anaconda Sudo PIP permissions Problems

I'm learning Python and have Anaconda installed, and I'm trying to familiarize myself with the process of getting an Eye-Color Detection project working.
I'm running into the following error after going through the README:
Eye-Color-Detection git:(master) ✗ sudo pip install -r requirements.txt
WARNING: The directory '/Users/{user}/Library/Caches/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
When trying to update:
(tweet) ➜ Eye-Color-Detection git:(master) ✗ conda update --all
[Errno 13] Permission denied: '/Users/{user}/opt/anaconda3/envs/tweet/lib/python3.8/site-packages/wrapt/__init__.py' -> '/Users/{user}/opt/anaconda3/envs/tweet/lib/python3.8/site-packages/wrapt/__init__.py.c~'
Q: How might I go about doing this correctly within the same Conda environment?
A sudo pip install is almost never what you really want. While in some cases it may "appear" to work and solve your immediate problem, more often than not you've just broken your system Python without knowing it.
In the context of that repo, I'd ignore the repo's README and do this.
$ git clone https://github.com/ghimiredhikura/Eye-Color-Detection
$ cd Eye-Color-Detection
Create a conda environment; change yourenvname as you like.
$ conda create -n yourenvname python=3.x
$ conda activate yourenvname
Install the dependencies and run the code
$ pip install -r requirements.txt
$ python3 eye-color.py --input_path=sample/2.jpg --input_type=image
Fixing your conda environment may be difficult, depending on what else you've sudo'd while attempting to resolve your issue. If you happen to be familiar with "regular" virtualenvs created using Python's built-in virtual environment tooling, you could also try this to get you going.
$ python3 -m venv .venv --copies
$ source .venv/bin/activate
$ pip install -r requirements.txt
$ python3 eye-color.py --input_path=sample/2.jpg --input_type=image
What you need to do is change the directory's permissions so it is writable.
You can do that with this command:
$ sudo chmod 7777 /Users/{user}/Library/Caches/
To change permissions recursively:
$ sudo chmod -R 7777 /Users/{user}/Library/Caches/
Or you can take ownership of that directory with this command:
$ sudo chown OWNER:GROUP /Users/{user}/Library/Caches/
where OWNER is your username, which you can find in the terminal with this command:
$ whoami
GROUP is optional.
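Putting the two together (a hedged example; {user} is still a placeholder for your actual account, and -R makes the ownership change recursive):
$ sudo chown -R "$(whoami)" /Users/{user}/Library/Caches/pip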

gitignore for a python venv based in the root of a git project

I'm setting up a Python project in git and I want to run the project in a venv. I'm running python3 -m venv . in the root of the git repo.
This creates multiple directories and a file:
bin
include
lib
lib64
share
pyvenv.cfg
Which of these files and folders are important for my colleagues to set up the environment in the same way I did? In the root of the project I've also included this script to set up:
python3 -m venv .
source bin/activate
pip3 install -r requirements.txt
You should ignore the entire venv folder.
To make that easier, you probably want to do python3 -m venv .venv instead, and add .venv to your .gitignore file.
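A minimal sketch of that layout, reusing the setup script from the question (file names are just the conventional ones, adjust to taste):
# .gitignore
.venv/

# setup script
python3 -m venv .venv
source .venv/bin/activate
pip3 install -r requirements.txt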

Why does venv break after rsync?

I am trying to build a CI/CD process for my Python scripts and applications. I am able to build my venv within the testing container but when I rsync it over to the target server, the version of Python seems to break. This is what I am trying:
- cp -a ./. $APP_DIR
- cd $APP_DIR
- python3 -m venv venv
- source venv/bin/activate
- pip3 install -r requirements.txt
...
- rsync...
All environments involved are running Python 3.6.8
When I activate the venv on the target server and run which python3 I get /usr/bin/python3 which is incorrect.
Why? Why does venv break when deployed to a server via rsync?
I'm new to Python development and the virtual environment process. Should venvs only be created on the server (or container) that they need to run on? Sometimes my target servers don't have python3-venv installed on them. Is it possible to deploy a venv with the code and use it to run my scripts?
When venv creates an environment, it stores the environment's absolute path in bin/activate. Additionally, some symlinks are created in the new environment pointing to the existing Python installation.
As a consequence, the environment is only valid on the host and path where venv was executed. This is also stated in the documentation (some parts omitted):
Running this command creates the target directory [...] and places a pyvenv.cfg file in it with a home key pointing to the Python installation from which the command was run. It also creates a bin [...] subdirectory containing a copy/symlink of the Python binary/binaries (as appropriate for the platform or arguments used at environment creation time).
You can easily check this fact by these commands:
mkdir /tmp/example_dir_for_stackoverflow
cd /tmp/example_dir_for_stackoverflow
python3 -m venv venv
grep stackoverflow venv/bin/activate
It will output:
VIRTUAL_ENV="/tmp/example_dir_for_stackoverflow/venv"
If you rsync this environment to another system, onto a different path and/or a different Python installation, the settings in bin/activate no longer match and it doesn't work.
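You can also look at the interpreter symlink itself; after copying the tree to another host it still points at whatever Python the source host had (the target shown is only an illustration and depends on your system):
ls -l venv/bin/python3
# venv/bin/python3 -> /usr/bin/python3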
In my opinion your best bet is to exclude the venv folder from rsync with
rsync --exclude 'venv' source/ destination/
The requirements.txt file is your best friend for keeping your dependencies satisfied everywhere.
I also suggest you install the python3-venv package from your Linux distribution, if you're satisfied with the Python version it provides. Otherwise, install another Python version (you'll find instructions on the Internet for installing a different Python on your distribution).
For example:
Host 1 (This is where you develop and you may add something to your venv)
cd /tmp/
mkdir app_base # base folder for venv/ and app_code/
cd app_base/
mkdir app_code # base folder for code only
# LOCAL virtual environment creation and activation
python3 -m venv venv
source venv/bin/activate
# Just an example of whatever you may need
pip install numpy
# Let's say that it could be enough for your app to work.
# Create requirements.txt
pip3 freeze > requirements.txt
Server, Container, whatever remote..
SETUP
This should run once (or at least before rsync). It's the same first 5 lines from the above snippet.
cd /tmp/
mkdir app_base
cd app_base/
mkdir app_code
python3 -m venv venv
Now that you've done the setup on the remote host, let's return to Host 1, where you develop.
You need to rsync your app_code and requirements.txt (and maybe some other stuff), but not the venv folder
Host 1
You can wrap this in a cron job
rsync -xav -e ssh --exclude 'venv' /tmp/app_base/ user@X.X.X.X:/tmp/app_base/
Then, finally, you can keep the server's virtual environment up to date with your needs by running this directly on the server.
Server, Container, whatever remote..
cd /tmp/app_base
source venv/bin/activate
pip3 install -r requirements.txt
Now, on the remote host, you should be able to run (the unit test of) your code.
The 'strict' answer to your bolded question
Why? Why does venv break when deployed to a server via rsync?
is: some Python packages (like the numpy I've used in the example) provide binary routines, for performance reasons. Copying the virtual environment folder will only work in the very same Linux distribution or Windows version, with the very same architecture and Python version. And it's not the purpose virtual environments were created for.

Changing path of pip to Virtual Env

I've cloned a codebase from Heroku onto a new computer, and when I try to run it none of the Python libraries that I've installed are present. After I run which pip I see that my path is /usr/local/bin/pip.
(1) How do I change the path so all the libraries install into my virtual env, and (2) how can I install everything from my requirements.txt instead of installing libraries individually?
(venv)admins-MacBook-Air:lhv-talenttracker surajkapoor$ which pip
/usr/local/bin/pip
Try looking at your venv/bin/activate file and see if VIRTUAL_ENV matches your current path. If it doesn't, change it to match your path and activate again.
$ cat activate |grep VIRTUAL_ENV=
VIRTUAL_ENV="/does/this/path/match?"
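For the second part of the question: once the right environment is active, which pip should point inside venv/bin/, and the whole requirements file can be installed in one go (plain pip usage, nothing specific to this setup):
$ source venv/bin/activate
$ which pip      # should now point inside your venv's bin/
$ pip install -r requirements.txt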

Python: virtualenv - gtk-2.0

To add gtk-2.0 to my virtualenv I did the following:
$ virtualenv --no-site-packages --python=/usr/bin/python2.6 myvirtualenv
$ cd myvirtualenv
$ source bin/activate
$ cd lib/python2.6/
$ ln -s /usr/lib/pymodules/python2.6/gtk-2.0/
Virtualenv on Ubuntu with no site-packages
Now in the Python interpreter when I do import gtk it says: No module named gtk. When I start the interpreter with sudo it works.
Any reason why I need to use sudo and is there a way to prevent it?
Update:
Forgot to mention that cairo and pygtk work, but they're not the ones I need.
Update2:
Here's the directory listing to show that I ain't crazy.
http://www.friendly-stranger.com/pictures/symlink.jpg
sudo python imports it just fine because that interpreter isn't using your virtual environment. So don't do that.
You only linked in one of the necessary items. Do the others mentioned in the answer to the question you linked as well.
(The pygtk.pth file is of particular importance, since it tells python to actually put that directory you linked onto the python path)
Update
Put that stuff in $VIRTUAL_ENV/lib/python2.6/site-packages/ rather than the directory above it.
It looks like the .pth files aren't read from that directory, just from site-packages.
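For background, a .pth file is just a plain text file listing directories for Python's site module to append to sys.path, and site only processes .pth files found in site-packages, which is why the location matters. Once the links and pygtk.pth are in place, a quick check from inside the activated virtualenv (illustrative only):
python -c "import gtk; print gtk.__file__"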
This works for me (Ubuntu 11.10):
Once you activate your virtualenv, make sure 'dist-packages' exists:
mkdir -p lib/python2.7/dist-packages/
Then, make links:
For GTK2:
ln -s /usr/lib/python2.7/dist-packages/glib/ lib/python2.7/dist-packages/
ln -s /usr/lib/python2.7/dist-packages/gobject/ lib/python2.7/dist-packages/
ln -s /usr/lib/python2.7/dist-packages/gtk-2.0* lib/python2.7/dist-packages/
ln -s /usr/lib/python2.7/dist-packages/pygtk.pth lib/python2.7/dist-packages/
ln -s /usr/lib/python2.7/dist-packages/cairo lib/python2.7/dist-packages/
For GTK3:
ln -s /usr/lib/python2.7/dist-packages/gi lib/python2.7/dist-packages/
Remember to add a link to pygtk.py
ln -s /usr/lib/python2.7/dist-packages/pygtk.py lib/python2.7/dist-packages/
On Debian-based Linux systems (Ubuntu, Mint) you can just install the ruamel.venvgtk package I put on PyPI. It will create the relevant links in your virtualenv during installation (if they are not yet there).
A more detailed explanation can be found in this answer.
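Installing it inside the activated virtualenv is all that's needed to trigger the link creation (package name taken from the answer above; check PyPI if in doubt):
pip install ruamel.venvgtk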
If it is not a requirement that system Python packages stay out of the virtual environment, I would install python-gtk2 (apt install python-gtk2 on Ubuntu) and then create the virtual environment with:
virtualenv --system-site-packages .
That way, you do not pollute the system environment with the pip installs you do in the virtual environment, but you reuse everything from the system, especially pygtk.
