Where is the common location/directory to store configuration in a python virtualenv?
For Linux there is /etc, and for per-user configuration there is XDG_CONFIG_HOME (~/.config), but for a virtualenv ...?
I know that I can store my configuration in any location I want, but maybe there is a common location that would make my application easier for Python experts to understand.
So I think this is the most common approach...
1. postactivate with virtualenvwrapper
I've always done this in the postactivate file myself. In this approach, you can either define environment variables directly in that file (my preference) or in a separate file in your project dir which you source in the postactivate file. To be specific, this is actually a part of virtualenvwrapper, as opposed to virtualenv itself.
http://virtualenvwrapper.readthedocs.io/en/latest/scripts.html#postactivate
(If you want to be really clean, you can also unset your environment vars in the postdeactivate file.)
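For illustration, here is a minimal sketch; the variable names are just placeholders:

# $VIRTUAL_ENV/bin/postactivate
export DATABASE_URL="postgres://localhost/myapp_dev"
export APP_SETTINGS="development"

# $VIRTUAL_ENV/bin/postdeactivate
unset DATABASE_URL
unset APP_SETTINGS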
Alternatively, you can do this directly in the activate file. I find that approach less desirable because there are other things going on in there too.
https://virtualenv.pypa.io/en/latest/userguide.html#activate-script
Two popular alternatives I've also used are:
2. .env with autoenv
Independent of virtualenv, another approach towards solving the same problem is Kenneth Reitz' autoenv, which automatically sources a .env whenever you cd into the project directory. I do not use this one much anymore.
https://github.com/kennethreitz/autoenv
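As a rough sketch (the file contents are made up), the .env is just a shell snippet that autoenv sources when you cd into the directory:

# ~/projects/myproject/.env
source venv/bin/activate
export API_KEY="dev-only-key"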
3. .env with Python Decouple
If you only need the environment variables for Python code (and not, for example, in a shell script inside your project) then Python Decouple is a related approach which uses a simplified .env file in the root of your project. I find myself using it more and more these days personally.
https://github.com/henriquebastos/python-decouple/
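A sketch of what that looks like (the setting names are invented): decouple reads plain KEY=value pairs from the .env file, no export needed:

# .env in the project root
DEBUG=True
SECRET_KEY=dev-secret

$ python -c "from decouple import config; print(config('DEBUG', cast=bool))"
True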
I'm somewhat surprised to see this is not discussed in detail in The Hitchhiker's Guide to Python - Virtual Environments. Perhaps we can generate a pull request about it from this question.
Related
I have a .python-version file, and when I create a Python repo with github and specify that it should have a .gitignore, it adds the .python-version file to it. It seems to me that that file should NOT be ignored since other people running the code on different machines would want to know what version of Python they need.
So why is it .gitignored?
Even though it is quite specific, you can still version that file (meaning: not include it in the default .gitignore), because:
it will be used only by pyenv
it is a good addition to the README, in order to illustrate what version of python is recommended for the specific project,
it can be overridden easily (if you are using pyenv), or simply ignored (if you don't have pyenv).
As the article "How to manage multiple Python versions and virtual environments " states:
When setting up a new project that is to use Python 3.6.4, then pyenv local 3.6.4 would be run in its root directory.
This would both set the version, and create a .python-version file, so that other contributors’ machines would pick it up.
But:
pyenv looks in four places to decide which version of Python to use, in priority order:
The PYENV_VERSION environment variable (if specified).
You can use the pyenv shell command to set this environment variable in your current shell session.
The application-specific .python-version file in the current directory (if present).
You can modify the current directory's .python-version file with the pyenv local command.
The first .python-version file found (if any) by searching each parent directory, until reaching the root of your filesystem.
The global version file. You can modify this file using the pyenv global command.
If the global version file is not present, pyenv assumes you want to use the "system" Python. (In other words, whatever version would run if pyenv weren't in your PATH.)
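In practice, the first, second and fourth mechanisms map onto commands like these (a sketch, assuming 3.6.4 was already installed with pyenv install):

$ pyenv shell 3.6.4    # sets PYENV_VERSION for the current shell session
$ pyenv local 3.6.4    # writes .python-version in the current directory
$ cat .python-version
3.6.4
$ pyenv global 3.6.4   # writes the global version file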
The reason why .python-version should be gitignored is because its version is too specific. Tiny versions of Python (e.g. 2.7.1 vs 2.7.2) are generally compatible with each other, so you don't want to lock down to a specific tiny version. Furthermore, many Python apps or libraries should work with a range of Python versions, not just a specific one. Using .python-version indicates that you want other developers to use an exact, specific Python version, which is usually not a good idea.
If you want to indicate the minimum Python version needed, or otherwise a version range, then I believe documenting that in a README is a more appropriate solution.
It can also be a bit problematic when using python virtual environments, as people may want to use virtual environment names different than 3.7.2/envs/myvenv.
Old post but still relevant.
My answer would be "it depends".
The name of a virtual env can also be used in .python-version, if the environment is managed with the virtualenv plugin of pyenv. That makes the file pretty useless if the project is managed in a public Git repo, and you can exclude it (though keeping it is harmless, as noted in other answers).
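For instance, with the pyenv virtualenv plugin (the environment name here is made up), the file ends up holding the env name rather than a bare version:

$ pyenv virtualenv 3.7.2 myvenv
$ pyenv local myvenv
$ cat .python-version
myvenv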
But (and I am in this situation) if you manage the project in a private repo and share virtual envs, it can make sense not to exclude it from Git. This allows you to work with a different environment (including the Python version) on an experimental branch of the project. Of course, it would have been far cleaner to fork or clone the original project and experiment with the new env in the copy, but sometimes it is easier to just create a new branch.
At the end of the day, IMHO there is no universal answer to the question, and it depends on your workflow.
Well sir, I think the answer to your question is yes. I just opened GitHub's official gitignore repo and checked the Python project .gitignore.
It shows the .python-version file mentioned there.
And if it's not getting ignored for you, simply check that you've listed it correctly.
I am a beginner trying to learn a bit of Python; first practical applications will be data analytics. My learning setup consists of Mac OS X, Miniconda2, Pycharm and Git.
Is it better to set up a project folder 'bar' within a conda environment folder 'foo' (~/miniconda2/env/foo/bar)?
Or is it better to leave the conda environment alone as ~/miniconda2/env/foo and set up a project folder as ~/repos/bar?
Virtualenv users I've seen put the env and the project in a single folder, but I have not seen a similar, popular or recommended workflow for conda.
Thank you in advance for any advice.
While I haven't used conda myself, I expect they aren't trying to change the concept of a virtual environment too much. That being said, I personally find it better to keep them separate, i.e. have a ~/.virtualenvs and a ~/repos folder.
As you mentioned, though, it's pretty common to store both the virtualenv and the project itself in the same folder. What I would stress here is that the virtualenv should then be in the project folder, not the other way around. For example:
~/repos/Foo/.fooenv
The reason for this is that virtualenvs should be disposable, whereas your projects are not. That means that you should be able to freely remove a virtualenv without fearing you've accidentally deleted your project folder along with it.
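A sketch of that layout, using the names from the example above (substitute python -m venv or conda create --prefix as appropriate):

$ cd ~/repos/Foo
$ virtualenv .fooenv
$ source .fooenv/bin/activate
...
$ rm -rf .fooenv    # disposable: deleting the env never touches the project files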
As my limited brain has come to understand it after much reading, relative imports bad, absolute imports good. My question is, how can one effectively manage a "live" and "development" version of a package? That is, if I use absolute imports, my live code and development code are going to be looking at the same thing.
Example
/admin/project1/
    __init__.py
    scripts/
        __init__.py
        main1.py
        main2.py
    modules/
        __init__.py
        helper1.py
with "/admin" on my PYTHONPATH, the contents of project1 all use absolute imports. For example:
main1.py
import project1.modules.helper1
But I want to copy the contents of project1 to another location, and use that copy for development and testing. Because everything is absolute, and because "/admin" is on PYTHONPATH, my copied version is still going to be referencing the live code. I could add my new location to PYTHONPATH and change the names of all files by hand (i.e. add "dev" to the end of everything), do my changes/work, then when I'm ready to go live, once again by hand, remove "dev" from everything. This will work, but is a huge hassle and prone to error.
Surely there must be some better way of handling "live" and "development" versions of a Python project.
You want to use virtualenv (or something like it).
$ virtualenv mydev
$ source mydev/bin/activate
This creates a local Python installation in the mydev directory and modifies several key environment variables to use mydev instead of the default Python directories.
Now, your PYTHONPATH looks in mydev first for any imports, and anything you install (using pip, setup.py, etc) will go in mydev. When you are finished using the mydev virtual environment, run
$ deactivate
to restore your PYTHONPATH to its previous value. mydev remains, so you can always reactivate it later.
@chepner's virtualenv suggestion is a good one. Another option, assuming your project is not installed on the machine as a Python egg, is to just add your development path to the front of PYTHONPATH. Python will find your development project1 before the regular one and everyone is happy. Eggs can spoil the fun because they tend to get resolved before the PYTHONPATH paths.
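A sketch of that (the development path is hypothetical):

$ # the development copy lives at /home/me/dev/project1
$ export PYTHONPATH=/home/me/dev:$PYTHONPATH
$ python -c "import project1; print(project1.__file__)"    # now resolves to the dev copy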
Is there anything equivalent or close in terms of functionality to Python's virtualenv, but for Perl?
I've done some development in Python and a possibility of having non-system versions of modules installed in a separate environment without creating any mess is a huge advantage. Now I have to work on a new project in Perl, and I'm looking for something like virtualenv, but for Perl. Can you suggest any Perl equivalent or replacement for python's virtualenv?
I'm trying to set up X different sets of non-system Perl packages for Y different applications to be deployed. Even worse, these applications may require different versions of the same package, so each of them may need to be installed in a separate module/library environment. You may want to do this manually for X < Y < 3. But you should not do this manually for 10 > Y > X.
Ideally what I'm looking should work like this:
perl virtualenv.pl my_environment
. my_environment/bin/activate
wget http://.../foo-0.1.tar.gz
tar -xzf foo-0.1.tar.gz ; cd foo-0.1
perl Makefile.PL
make install # <-- package foo-0.1 gets installed inside my_environment
perl -MCPAN -e 'install Bar' # <-- now package Bar with all its deps gets installed inside my_environment
There's a tool called local::lib that wraps up all of the work for you, much like virtualenv. It will:
Set up @INC in the process where it's used.
Set PERL5LIB and other such things for child processes.
Set the right variables to convince CPAN, MakeMaker, Module::Build, etc. to install libraries and store configuration in a local directory.
Set PATH so that installed binaries can be found.
Print environment variables to stdout when used from the command line, so that you can put eval $(perl -Mlocal::lib) in your .profile and then mostly forget about it.
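A hedged sketch of the usual bootstrap (the paths are the local::lib defaults, and it assumes local::lib itself is already installed):

$ eval "$(perl -Mlocal::lib=~/perl5)"    # sets PERL5LIB, PATH, PERL_MM_OPT, PERL_MB_OPT, ...
$ perl -MCPAN -e 'install Bar'           # Bar and its deps now land under ~/perl5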
I've used schroot for this purpose. It is a bit heavier than virtualenv but you can be sure that nothing will leak in that shouldn't.
Schroot manages a chroot environment for you, but mounts your home directory in the chroot so it appears like a normal shell session, just using the binaries and libraries in the chroot.
I think it may be debian/ubuntu only though.
After setting up the schroot, your script above would look like
schroot -c my_perl_dev
wget ...
See http://www.debian-administration.org/articles/566 for an interesting article about it
Also check out perl-virtualenv; this seems to be a wrapper around local::lib, as suggested by Hobbs, but it creates a bin/activate and bin/deactivate so you can use it just like the Python tool.
I've been using it quite successfully for a month or so without realising it wasn't as standard as perhaps it should be.
It makes it a lot easier to set up a working virtualenv for Perl: while local::lib will tell you what variables you need to set, perl-virtualenv creates an activate script which does it for you.
While investigating, I discovered this and some other pages (this one is too old and misses new technologies, this reddit post is a slight misdirect).
The problem with perlbrew and plenv is that they seem to be replacements for pyenv, not virtualenv. As noted here, pyenv is for managing Python versions while virtualenv is for managing per-project module versions. So, yes, in some ways similar to local::lib, but with better usability.
I've not seen a proper answer to this question yet, but from what I've read, it looks like the best solution is something along the lines of:
Perl version management: plenv/perlbrew (with most people favouring the more contemporary bash-based plenv over the Perl-based perlbrew, from what I can see)
Module version management: Carton
Module installation: cpan (well, cpanminus anyway, ymmv)
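A rough sketch of how those pieces fit together (the module name and version numbers are made up):

$ plenv install 5.30.0 && plenv local 5.30.0    # pin the Perl version for this project
$ plenv install-cpanm                            # cpanminus for that Perl
$ echo "requires 'Plack';" >> cpanfile
$ carton install                                 # dependencies go into ./local
$ carton exec -- plackup app.psgi                # runs with ./local on @INC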
To be honest, this is not an ideal set up, although I'm still learning, so it may yet prove superior. It just doesn't feel right. It certainly isn't a like-for-like replacement for virtualenv.
There are a couple of posts I've found saying "it is possible" but neither has gone any further.
I am not sure whether this is the same as that virtualenv thing you are talking about, but have a look at the @INC special variable in the perlvar manpage.
Programs can modify which directories they check for libraries with use lib. This lib directory can be relative to the current directory. Libraries from these directories will be used before system libraries, as they are placed at the beginning of the @INC array.
I believe cpan can also install libraries to specific directories. Granted, cpan draws from the CPAN site in order to install things, so this may not be the best option.
It looks like you just need to use the INSTALL_BASE configuration for Makefile.PL (or the --install_base option for Build.PL). What exactly do you need the solution to do for you? It sounds like you just need to get the installed module in the right place. You've presented your problem as an XY Problem by specifying what you think the solution is rather than letting us help you with your task.
See How do I keep my own module/library directory? in perlfaq8, for instance.
If you are downloading modules from CPAN, the latest cpan command (in App::Cpan) has a -j switch to allow you to choose alternate CPAN.pm configuration files. In those configuration files you can set the CPAN.pm options to install wherever you like.
Based on your clarification, it sounds like local::lib might work for you in single, simple cases, but I do this for industrial-strength deployments where I set up custom, private CPANs per application, and install directly from those custom CPANs. See my MyCPAN::App::DPAN module, for instance. From that, I use custom CPAN.pm configs that analyze their environment and set the proper values so that each application can install everything in a directory just for that application.
You might also consider distributing your application as a Task::. You install it like any other Perl module, but dependencies share that same setup (i.e. INSTALL_BASE).
What I do is start the CPAN shell (cpan) and install my own Perl 5.10 from it (I believe the command is install perl-5.10). This will ask for various configuration settings; I make sure to make it point to paths under /usr/local (or some other installation location other than the default).
Then I put its resulting location in my executable $PATH before the standard perl, and use its CPAN shell to install the modules I need (usually, a lot).
My Perl scripts all start with the line
#!/usr/bin/env perl
Never had a problem with this approach.
Another developer and I disagree about whether PYTHONPATH or sys.path should be used to allow Python to find a Python package in a user (e.g., development) directory.
We have a Python project with a typical directory structure:
Project
    setup.py
    package/
        __init__.py
        lib.py
        script.py
In script.py, we need to do import package.lib. When the package is installed in site-packages, script.py can find package.lib.
When working from a user directory, however, something else needs to be done. My solution is to set my PYTHONPATH to include "~/Project". Another developer wants to put this line of code in the beginning of script.py:
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
So that Python can find the local copy of package.lib.
I think this is a bad idea, as this line is only useful for developers or people running from a local copy, but I can't give a good reason why it is a bad idea.
Should we use PYTHONPATH, sys.path, or is either fine?
If the only reason to modify the path is for developers working from their working tree, then you should use an installation tool to set up your environment for you. virtualenv is very popular, and if you are using setuptools, you can simply run setup.py develop to semi-install the working tree in your current Python installation.
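For example (assuming your setup.py is already in place):

$ virtualenv venv
$ source venv/bin/activate
$ python setup.py develop    # or, equivalently: pip install -e .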
I hate PYTHONPATH. I find it brittle and annoying to set on a per-user basis (especially for daemon users) and keep track of as project folders move around. I would much rather set sys.path in the invoke scripts for standalone projects.
However sys.path.append isn't the way to do it. You can easily get duplicates, and it doesn't sort out .pth files. Better (and more readable): site.addsitedir.
And script.py wouldn't normally be the appropriate place to do it, as it's inside the package you want to make available on the path. Library modules should certainly not be touching sys.path themselves. Instead, you'd normally have a hashbanged script outside the package that you use to instantiate and run the app, and it's in this trivial wrapper script you'd put deployment details like sys.path-frobbing.
In general I would consider setting an environment variable (like PYTHONPATH) to be a bad practice. While this might be fine for one-off debugging, using it as a regular practice might not be a good idea.
Relying on an environment variable leads to "it works for me" situations when someone else reports problems in the code base. One might also carry the same practice over to the test environment, leading to situations where the tests run fine for a particular developer but fail when someone else launches them.
Along with the many other reasons already mentioned, you could also point out that hard-coding
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
is brittle because it presumes the location of script.py -- it will only work if script.py is located in Project/package. It will break if a user decides to move/copy/symlink script.py (almost) anywhere else.
Neither hacking PYTHONPATH nor sys.path is a good idea, for the reasons already mentioned. And for linking the current project into the site-packages folder there is actually a better way than python setup.py develop, as explained here:
pip install --editable path/to/project
If you don't already have a setup.py in your project's root folder, this one is good enough to start with:
from setuptools import setup
setup(name='project')
I think that in this case using PYTHONPATH is the better option, mostly because it doesn't introduce (questionable) unnecessary code.
After all, if you think about it, your user doesn't need that sys.path line, because your package will get installed into site-packages, since you will be using a packaging system.
If the user chooses to run from a "local copy", as you call it, then I've observed that the usual practice is to state that the package needs to be added to PYTHONPATH manually if used outside of site-packages.