I have a full python installation with files in /usr/local/, but also have one that I compiled from source sitting in ~/python_dist. If I look at sys.path on each interpreter I see that they indeed import from different libraries.
Currently I can run $ PYTHONPATH=~/other_py_libs ~/python_dist/bin/python to invoke the custom interpreter with some other modules available on the path. However, I don't want to permanently change the global PYTHONPATH variable.
How can I permanently change the python path for only one specific python install?
The easiest way to do this is to use a virtualenv (managed with virtualenvwrapper). With virtual environments you can set up different, isolated Python environments (kind of like little Python playgrounds). Switching between them (with the help of virtualenvwrapper) is as easy as typing workon envname. You don't have to worry about switching PYTHONPATH around, and you can direct scripts to use a specific environment simply by running them with the Python install in that environment, e.g. with a shebang such as #!/home/myname/.virtualenvs/envname/bin/python.
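For example, assuming virtualenvwrapper is already installed and hooked into your shell startup file (envname and somepackage are just placeholder names):
$ mkvirtualenv envname               # create a new isolated environment
$ workon envname                     # switch to it in the current shell
(envname) $ pip install somepackage  # installs into this environment only
(envname) $ deactivate               # drop back to the system Python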
I am trying to use Anaconda and conda environments to allow Python programs for data acquisition* etc. to run from the (Anaconda) command line on Windows. The setup will be that the Python programs are installed to a particular location (cloned from GitHub), within %PATH% or whichever environment variable is more appropriate.
From an Anaconda command prompt in another directory and a particular conda environment, I want (both myself and other users) to be able to run either python test.py <args> or test.py <args> (either solution is acceptable) and have a system-wide conda environment run its Python to execute the program. test.py can/will have an appropriate shebang set.
Right now, python test.py calls the correct Python within the active conda environment but cannot find the test.py program, as Python won't search %PATH% or similar for the program. test.py does something (Windows does not complain that the executable can't be found, and I've been playing with the file associations to get this far), but it doesn't appear to start Python: a simple print function or raise statement as the only entry in the file does nothing.
I've tried setting file associations in Windows, but this hasn't changed anything. I've copied the py.exe/pyw.exe across to the Anaconda environments, with no change.
Is this something that can be done within Anaconda, or am I going to have to fall back on installing base Python directly and trying to use the launcher mechanism there?
Note that I'm also intending to deploy these programs on Raspbian, so any solutions, including non-Anaconda ones, that will work cross platform there would be worth extra effort on my part.
*these programs have significant usage of library packages for accessing external USB/GPIB/serial/ethernet connected lab equipment and use matplotlib, scipy, etc., hence the desire for a cloneable conda environment as the base environment.
It turns out the correct answer to this is fairly simple, but it is hard to find explained well. This might be a little clearer than the other answers I found:
Install the standalone launcher from pylauncher and add #!/usr/bin/env python shebangs to your scripts.
This should register .py files to Python.File, and the launcher will find your Anaconda Pythons in the appropriate environments. If you don't have a non-Anaconda Python, it will use the Anaconda base environment (these two facts were the key elements missing from the various other answers I had looked at around this problem, including the documentation on python.org).
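You can verify the association from a cmd prompt; if the launcher is set up correctly, .py should map to Python.File and Python.File should point at py.exe:
assoc .py
ftype Python.File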
If you have a Python from python.org installed, a standalone command-line shell will use that, defaulting to Python 2.x, then Python 3.x. With a #!/usr/bin/env python shebang, a regular command shell will try the python.org Pythons first, then the Anaconda base environment. An Anaconda prompt will use the active environment. A #!/usr/bin/env python2 or python3 shebang will try the python.org Pythons only and will fail if none is found.
Installing Python 2.7 from the python.org installers (and letting the installer set the file associations) will break pylauncher, and reinstalling pylauncher will not fix it. Instead, set the default value of Computer\HKEY_CLASSES_ROOT\Python.File\Shell\open\command to "C:\WINDOWS\py.exe" "%L" %* to revert to the pylauncher setup (assuming you used the launchwin.* packages to install it).
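For example, from an administrator cmd prompt (run interactively, not from a batch file, where %* would be expanded), something like the following should restore that value; adjust the py.exe path if your launcher lives elsewhere:
reg add "HKCR\Python.File\Shell\open\command" /ve /d "\"C:\WINDOWS\py.exe\" \"%L\" %*" /f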
So, I've been toying around with Python environments and I think I screwed mine up.
When I ran python in a shell, it would tell me I was running 2.7.
I'm on Windows 10, and I need a Python switcher for my next project, so I downloaded pywin and used it to switch to 3.5.
The command I used was
pywin setdefault 3.5
Now when I type python it says
python is not recognized as an internal or external command.
but py produces
Python 2.7.12
Now I can't use pip, easy_install, or virtualenv.
I used to use all of these commands, and suddenly I no longer have access to them.
I tried switching back, but it won't even recognize pywin anymore.
The best way to check which Python version is executed is to check your environment variables. Another way of checking is the where command (open cmd and run where python; in a Unix-like shell the equivalent is which python).
But first you need to start a new cmd prompt, to make sure you are looking at the current environment variables.
On Windows, as on any OS, check the PATH and PYTHONPATH variables.
For Windows, follow the recommendations in the Python documentation.
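For example, from a fresh cmd window (where is the cmd equivalent of which; py -0p needs a py launcher recent enough to support it):
where python
echo %PATH%
py -0p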
If you're not very experienced with Windows, installations, and other similar things, I would recommend that you uninstall Python, delete all versions/folders containing Python (compiled) files (those that were installed with Python, not the ones you've written), and then reinstall Python. The installer should re-set the PATH variable to the correct location.
I have a virtualenv in a structure like this:
venv/
src/
project_files
I want to run a makefile (which calls out to Python) in the project_files, but I want to run it from a virtual environment. Because of the way my deployment orchestration works, I can't simply do a source venv/bin/activate.
Instead, I've tried to export PYTHONPATH={project_path}/venv/bin/python2.7. When I try to run the makefile, however, the python scripts aren't finding the dependencies installed in the virtualenv. Am I missing something obvious?
The PYTHONPATH environment variable is not used to select the path of the Python executable; which executable is selected depends, as in all other cases, on the shell's PATH environment variable. PYTHONPATH is used to augment the search list of directories (sys.path in Python) in which Python will look for modules to satisfy imports.
The interpreter puts certain directories on sys.path before it processes PYTHONPATH, precisely to ensure that replacement modules with standard names do not shadow the standard library names. So any standard library module will be imported from the library associated with the interpreter it was installed with (unless you do some manual fiddling, which I wouldn't recommend).
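A quick way to see both points, i.e. that a PYTHONPATH entry lands in sys.path but does not choose the executable (/tmp/extra_libs is just an example directory):
$ PYTHONPATH=/tmp/extra_libs venv/bin/python -c "import sys; print(sys.executable); print('/tmp/extra_libs' in sys.path)"
This prints the venv interpreter's own path, followed by True.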
venv/bin/activate does a lot of stuff that needs to be handled in the calling shell's namespace, which can make tailoring code rather difficult if you can't find a way to source the script.
You can actually just call the Python interpreter in your virtual environment. So, in your Makefile, instead of calling python, call venv/bin/python.
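A minimal sketch of what that might look like (run.py stands in for whatever the Makefile actually invokes; recipe lines must be indented with a tab):
PYTHON := venv/bin/python

run:
	$(PYTHON) project_files/run.py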
To run a command in a virtualenv, you could use the vex utility:
$ vex venv make
You could also check whether make PYTHON=venv/bin/python would be enough in your case.
PYTHONPATH adjusts the sys.path list. It doesn't change the Python binary. Don't use it here.
I use macvim built with Python interpreter support on OS X Lion for Python coding.
In order for omnicompletion to work with the libraries of a particular Python virtualenv, I would like macvim to recognize that it is opened within an activated virtualenv.
On Ubuntu this works exactly like I expect it to: if I open vim inside an activated virtualenv, all libs specific to that virtualenv are on Python's path. This does not work in macvim when launched from the mvim shell script inside an activated virtualenv. Instead, the Python path consists of the global Python's libs.
I know there is a way around this with some semi-heavy vim scripting, but I would prefer it if it behaved like it does on Ubuntu. I would at least like to know why it doesn't behave that way. Any ideas are welcome.
This might be a very simple question, but I need your help. I work in a network and cannot install the programs I want. Anyway, I need to use another version of Python, which is installed in the directory /new_version/.
Now, when I type "python" in a shell (I use bash), the command points to the version of Python installed on the machine I'm working with. I'd love it if, when I type "python", the command pointed to the /new_version/ I've installed. It would be even better if I could also call this "new version" with another command, e.g. python2.
I tried changing the PYTHONPATH in the .bashrc but it didn't work.
alias newpython="/path/to/your/new_version/python"
Add this to your .bashrc, you can then start the new python with newpython and the standard one with python.
Add the line
export PATH=/new_version/:$PATH
to your ~/.bashrc (or ~/.bash_profile) file. Then, whenever you run python, it will find the new version first in your PATH. Note this is PATH, not PYTHONPATH. See the comment by @Aaron.
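In a new shell (or after re-sourcing the file) you can confirm that the new interpreter now wins the PATH lookup:
$ which python        # should now print /new_version/python
$ python --version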
Edit: Only do it this way if you want python to point to the new version. Use an alias as @cularis suggested if you want to call it something different, or make a symlink with:
ln -s /new_version/python /path/to/a/dir/you/add/to/your/path/newpython
Install virtualenv. With it you can easily set up different Python versions like this:
virtualenv -p /new_version/bin/python myenv
(where myenv is the directory the new environment will be created in)
Also, virtualenv enables you to easily install other Python packages via pip install.
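After creating the environment, activate it to use it (requests is just an example package):
$ source myenv/bin/activate
(myenv) $ python --version      # now reports the new interpreter's version
(myenv) $ pip install requests  # installs into myenv only
(myenv) $ deactivate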
And finally, there's a package called tox which can automate testing with different Python versions...