I am trying to install a new conda environment that will be totally separate from my other environments, so I run:
conda create --name foot35 python=3.5
Anaconda then asks for my approval to install these NEW packages:
asn1crypto: 0.22.0-py35he3634b9_1
ca-certificates: 2017.08.26-h94faf87_0
cachecontrol: 0.12.3-py35h3f82863_0
certifi: 2017.7.27.1-py35hbab57cd_0
cffi: 1.10.0-py35h4132a7f_1
chardet: 3.0.4-py35h177e1b7_1
colorama: 0.3.9-py35h32a752f_0
cryptography: 2.0.3-py35h67a4558_1
distlib: 0.2.5-py35h12c42d7_0
html5lib: 0.999999999-py35h79d4e7f_0
idna: 2.6-py35h8dcb9ae_1
lockfile: 0.12.2-py35h667c6d9_0
msgpack-python: 0.4.8-py35hdef45cb_0
openssl: 1.0.2l-vc14hcac20b0_2 [vc14]
packaging: 16.8-py35h5fb721f_1
pip: 9.0.1-py35h69293b5_3
progress: 1.3-py35ha84af61_0
pycparser: 2.18-py35h15a15da_1
pyopenssl: 17.2.0-py35hea705d1_0
pyparsing: 2.2.0-py35hcabcaab_1
pysocks: 1.6.7-py35hb30ac0d_1
python: 3.5.4-hedc2606_15
requests: 2.18.4-py35h54a615f_1
setuptools: 36.5.0-py35h21a22e4_0
six: 1.10.0-py35h06cf344_1
urllib3: 1.22-py35h8cc84eb_0
vc: 14-h2379b0c_1
vs2015_runtime: 14.0.25123-hd4c4e62_1
webencodings: 0.5.1-py35h5d527fb_1
wheel: 0.29.0-py35hdbcb6e6_1
win_inet_pton: 1.0.1-py35hbef1270_1
wincertstore: 0.2-py35hfebbdb8_0
I don't know why it suggests these specific ones. I looked up lockfile and its website says:
Note: This package is deprecated.
Here is a screenshot of my command prompt as additional information.
I am trying to do a clean install that is unrelated/independent to the root environment.
Why is conda trying to install these things and how do I fix it?
conda create will "Create a new conda environment from a list of specified packages." ( https://conda.io/docs/commands/conda-create.html )
What list??!? The .condarc file is the conda configuration file.
https://conda.io/docs/user-guide/configuration/use-condarc.html#overview
The .condarc file can change many parameters, including:
Where conda looks for packages.
If and how conda uses a proxy server.
Where conda lists known environments.
Whether to update the bash prompt with the current activated environment name.
Whether user-built packages should be uploaded to Anaconda.org.
**Default packages or features to include in new environments.**
Additionally, if you ever typed conda config, even accidentally...
The .condarc file is not included by default, but it is automatically created in your home directory the first time you run the conda config command.
A .condarc file may also be located in the root environment, in which case it overrides any in the home directory.
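To see which .condarc files conda is actually reading, and what they set, you can run this (available on reasonably recent conda versions):
conda config --show-sources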
If you just want a single clean env, Boshika's recommendation of the --no-default-packages flag handles that one instance. However, you can also check and modify the default packages for all future envs. ( https://conda.io/docs/user-guide/configuration/use-condarc.html#always-add-packages-by-default-create-default-packages )
Always add packages by default (create_default_packages)
When creating new environments, add the specified packages by default. The default packages are installed in every environment you create. You can override this option at the command prompt with the --no-default-packages flag. The default is to not include any packages.
EXAMPLE:
create_default_packages:
- pip
- ipython
- scipy=0.15.0
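If you find an unwanted entry there, you can manage the list from the command line instead of editing .condarc by hand, using conda's config subcommands:
conda config --add create_default_packages pip      # add pip to every new environment
conda config --remove create_default_packages pip   # take it back out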
lockfile may be there due to legacy requirements across operating systems. With the above, you have the tools to remove it from your defaults if you choose.
To prevent conda from installing all of the default packages, you can try this:
conda create --name foot35 --no-deps python=3.5
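Alternatively, the flag that specifically skips the create_default_packages entries from your .condarc (quoted in the documentation above) is --no-default-packages:
conda create --name foot35 --no-default-packages python=3.5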
Please don't lose hope; this was very confusing for me as well.
Just follow these steps:
1. Download Anaconda for your system from its official site and install it: https://repo.continuum.io
2. After the installation, you can select the packages you need from there; you don't need to download anything else separately, since it bundles most common packages.
3. If you want to work in Python, download the Spyder IDE; it is very useful for machine learning libraries.
Don't create other environments besides the default root one, otherwise you will have to duplicate all the files again. If there is an error while installing into root, close the window, run it again as administrator, and after that it works fine.
Because all the files are in your root environment, you don't have to worry about paths in the future, and you can easily install and uninstall packages such as numpy, pandas, tensorflow (and its GPU build), scikit-learn, etc. from there.
Thank you
These packages are generally useful if you wish to pip install ... anything. Without many of them, doing a pip install requests could result in errors such as these (and more):
No Module named Setuptools
pip: command not found
pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available
The issue that the conda create ... exposes is that the packages it wants to pull down are variable (based on when you create the environment). If you wish to maintain the same environment for yourself and for those who may collaborate with you, then freezing or pinning conda create's default installed packages may be necessary.
One way of doing this is creating your environment with conda env create using a conda environment YAML file such as this example:
dependencies:
- ca-certificates=2018.03.07
- certifi=2018.4.16
- libedit=3.1.20170329
- libffi=3.2.1
- ncurses=6.1
- openssl=1.0.2o
- pip=10.0.1
- python=3.6.6
- readline=7.0
- setuptools=40.0.0
- sqlite=3.24.0
- tk=8.6.7
- wheel=0.31.1
- xz=5.2.4
- zlib=1.2.11
conda env create -n <NAME_OF_ENVIRONMENT> -f <PATH_TO_CONDA_REQUIREMENTS_FILE>
(note it's conda env create not conda create)
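If you already have a working environment and want to pin exactly what is in it, one way to generate such a YAML file is conda's env export subcommand (a sketch; the --no-builds flag may not exist on very old conda versions):
conda env export -n <NAME_OF_ENVIRONMENT> --no-builds > environment.yml
conda env create -n <NAME_OF_NEW_ENVIRONMENT> -f environment.yml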
I am working on a project where they are using Ansible to run several conda installs. I need to install two additional packages from github that have dependencies that are already covered by the existing conda installs with the second package having a dependency on the first.
Using the Ansible code below, I can get the first package to install without reinstalling the dependencies.
- name: install mypackage
  shell: /home/myname/envs/myproject/bin/pip install --install-option="--prefix=/home/myname/envs/myproject" --egg https://github.com/myname/mypackage/archive/my_branch.zip
This gets me 95% of the way there, however, when I try to install the second package, it doesn't recognize the first package as having been installed and fails.
I am new to this and I have been throwing things at the wall, but I'm not able to install the first package in such a way that:
It recognizes the existing conda installs
The second package identifies the first one
From what I can understand of your task, you are using a venv to install the packages; that's good. I don't understand, though, why you are using the shell module to handle the install. That's not good.
You can handle all of this with Ansible's pip module:
- name: "Install mypackage"
pip:
virtualenv: /home/{{ lookup('env','USER') }}/envs/myproject/
name: "{{ item }}"
with_items:
- "https://github.com/myname/mypackage1/archive/my_branch.zip"
- "https://github.com/myname/mypackage2/archive/my_branch.zip"
This should correctly install the packages in the order you require, without the hassle of having to work your way through shell output.
Note that you can mix normal Python packages with eggs, etc.
As an alternative to virtualenv, you can use executable.
Have a look at the docs
I believe the question is how to use Ansible to pip install packages within a conda environment. Note that it is perfectly possible to use pip install within a conda environment, which is particularly useful in cases where the desired package does not exist in the conda repositories and cannot be installed with conda install.
The goal is thus to use the environment created by conda, and not a virtualenv (for which, btw, ansible's pip module provides specific parameters).
I have managed to do so by using ansible's pip module and pointing the pip executable to the one installed within the desired conda environment.
See code below, notice usage of the executable variable:
- name: Install pip packages WITHIN a designated conda environment
  pip:
    name: some_package_name
    executable: "/home/[username]/[anaconda3]/envs/[conda_env_name]/bin/pip"
    # ^-- Of course you will need to ensure the correct path.
This will pip install the packages inside the designated conda environment.
Using pip3 to install a package in a virtualenv causes the package to be installed in the global site-packages folder instead of the one in the virtualenv folder. Here's how I set up Python3 and virtualenv on OS X Mavericks (10.9.1):
I installed Python3 using Homebrew:
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"
brew install python3 --with-brewed-openssl
Changed the $PATH variable in .bash_profile; added the following line:
export PATH=/usr/local/bin:$PATH
Running which python3 returns /usr/local/bin/python3 (after restarting the shell).
Note: which python still returns /usr/bin/python though.
Installed virtualenv using pip3:
pip3 install virtualenv
Next, create a new virtualenv and activate it:
virtualenv testpy3 -p python3
cd testpy3
source bin/activate
Note: if I don't specify -p python3, pip will be missing from the bin folder in the virtualenv.
Running which pip and which pip3 both return the virtualenv folder:
/Users/kristof/VirtualEnvs/testpy3/bin/pip3
Now, when I try to install e.g. Markdown using pip in the activated virtualenv, pip will install in the global site-packages folder instead of the site-packages folder of the virtualenv.
pip install markdown
Running pip list returns:
Markdown (2.3.1)
pip (1.4.1)
setuptools (2.0.1)
virtualenv (1.11)
Contents of /Users/kristof/VirtualEnvs/testpy3/lib/python3.3/site-packages:
__pycache__/
_markerlib/
easy_install.py
pip/
pip-1.5.dist-info/
pkg_resources.py
setuptools/
setuptools-2.0.2.dist-info/
Contents of /usr/local/lib/python3.3/site-packages:
Markdown-2.3.1-py3.3.egg-info/
__pycache__/
easy-install.pth
markdown/
pip-1.4.1-py3.3.egg/
setuptools-2.0.1-py3.3.egg
setuptools.pth
virtualenv-1.11-py3.3.egg-info/
virtualenv.py
virtualenv_support/
As you can see, the global site-packages folder contains Markdown, the virtualenv folder doesn't.
Note: I had Python2 and Python3 installed before on a different VM (followed these instructions) and had the same issue with Python3; installing packages in a Python2 based virtualenv worked flawlessly though.
Any tips, hints, … would be very much appreciated.
Funny you brought this up, I just had the exact same problem. I solved it eventually, but I'm still unsure as to what caused it.
Try checking your bin/pip and bin/activate scripts. In bin/pip, look at the shebang. Is it correct? If not, correct it. Then on line ~42 in your bin/activate, check to see if your virtualenv path is right. It'll look something like this
VIRTUAL_ENV="/Users/me/path/to/virtual/environment"
If it's wrong, correct it, deactivate, then . bin/activate, and if our mutual problem had the same cause, it should work. If it still doesn't, you're on the right track anyway. I went through the same problem-solving routine as you did, running which pip over and over, following the stack trace, etc.
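A quick way to do those two checks from a shell, assuming you are in the virtualenv's root directory:
head -n 1 bin/pip                  # the shebang; it should point at this virtualenv's python
grep 'VIRTUAL_ENV=' bin/activate   # should print this virtualenv's absolute path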
Make absolutely sure that
/Users/kristof/VirtualEnvs/testpy3/bin/pip3
is what you want, and not referring to another similarly-named test project (I had that problem, and have no idea how it started. My suspicion is running multiple virtualenvs at the same time).
If none of this works, a temporary solution may be to, as Joe Holloway said,
Just run the virtualenv's pip with its full path (i.e. don't rely on searching the executable path) and you don't even need to activate the environment. It will do the right thing.
Perhaps not ideal, but it ought to work in a pinch.
Link to my original question:
VirtualEnv/Pip trying to install packages globally
For me this was not a pip or virtualenv problem. It was a Python problem. I had set my $PYTHONPATH manually in ~/.bash_profile (or ~/.bashrc) after following some tutorial online. This manually set $PYTHONPATH carried over into the virtualenv, as it arguably should.
Additionally add2virtualenv was not adding my project path to my $PYTHONPATH for some reason within the virtualenv.
Just some forking paths for those who might still be stuck! Cheers!
I had the same problem; I solved it by removing the venv directory and recreating it!
deactivate (if venv is activated first deactivate it)
rm -rf venv
virtualenv -p python3 venv
. venv/bin/activate
pip3 install -r requirements.txt
Now everything works like a charm.
Edit: Here's the above code modified for Python3's venv:
deactivate # (if venv is activated first deactivate it)
rm -rf venv # Delete the old venv directory
python3 -m venv venv # Recreate a new, empty venv
. venv/bin/activate # Activate it
pip3 install -r requirements.txt # Install the dependencies
The first thing to check is which location pip is resolving to:
which pip
if you are in a virtualenv you would expect this to give you something like:
/path/to/virtualenv/.name_of_virtualenv/bin/pip
However it may be the case that it's resolving to your system pip for some reason. For example you may see this from within your virtualenv (this is bad):
/usr/local/bin/pip
(or anything that isn't in your virtualenv path).
To solve this, check your pip configuration files:
~/.pip/pip.conf (legacy per-user location)
~/.config/pip/pip.conf
/etc/pip.conf
and make sure that there is nothing that is coercing your Python path or your pip path (this fixed it for me).
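On newer pip versions (10 and up) you can also ask pip itself which configuration files it reads:
pip config list -v    # lists the config files pip considers and any values it found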
Then try starting a new terminal and rebuild your virtualenv (delete then create it again)
I had the same issue on macOS with Python 2 and 3 installed.
Also, I had aliases pointing to python3 and pip3 in my .bash_profile.
alias python=/usr/local/bin/python3
alias pip=/usr/local/bin/pip3
Removing aliases and recreating virtual env using python3 -m venv venv fixed the issue.
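If you want to confirm that an alias is what is intercepting pip before recreating the environment, something like this shows it in bash/zsh:
type pip pip3       # prints "pip is aliased to ..." when an alias is in the way
unalias pip pip3    # removes the aliases for the current shell session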
Go to the bin directory in your virtual environment and run:
./pip3 install <package-name>
I had this problem too. Calling pip install <package_name> from the /bin directory within my Python 3.3 virtual environment on my Mavericks Mac caused the Python package to be installed in the Python 2.7 global site packages directory. This was despite the fact that my $PATH started with the directory containing pip. Weird. This doesn't happen on CentOS. For me, the solution was calling pip3 instead of pip. When I had installed pip within the virtual environment via ez_setup, three "pip" executables had been installed in the /bin directory - pip, pip3, and pip3.3. Curiously, all three files were exactly the same. Calling pip3 install <package_name> caused the Python package to be installed correctly into the local site-packages directory. Calling pip with the full pathname into the virtual environment also worked correctly. I'd be interested to know why my Mac isn't using $PATH the way I would expect it to.
I hit into the same issue while installing a python package from within a virtualenv.
The root cause in my case was different.
From within the virtualenv, I was (out of habit on Ubuntu) doing:
sudo easy_install -Z <package>
This caused the bin/pip shebang to be ignored, and the root's non-virtualenv Python installed the package into the global site-packages.
Since we have a virtual environment, we should install the package without "sudo"
I stumbled upon the same problem running Manjaro. I created the virtual environment using python3 -m venv venv and then activated it using source venv/bin/activate. which python and which pip both pointed to the correct binaries in the virtualenv, however I was not able to install into the virtualenv, even when using the full path of the binaries. It turned out that when I uninstalled the python-pip package with sudo pacman -R python-pip python-reportlab (had to include reportlab to satisfy dependencies), everything started to work as expected. Not sure why, but this is probably due to a double install where the system package takes precedence.
I had a similar problem after updating to pip==8.0.0. Had to resort to debugging pip to trace out the bad path.
As it turns out my profile directory had a distutils configuration file with some empty path values. This was causing all packages to be installed to the same root directory instead of the appropriate virtual environment (in my case /lib/site-packages).
I'm unsure how the config file got there or how it had empty values but it started after updating pip.
In case anyone else stumbles upon this same problem, simply deleting the file ~/.pydistutils.cfg (or removing the empty config path) fixed the problem in my environment because pip went back to the default distributed configuration.
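To check for the file and move it aside rather than deleting it outright (in case something else relies on it):
cat ~/.pydistutils.cfg                        # inspect it first
mv ~/.pydistutils.cfg ~/.pydistutils.cfg.bak  # disable it; pip/distutils fall back to defaults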
Here are some practices that could avoid headaches when using Virtual Environments:
Create a folder for your projects.
Create your Virtualenv projects inside of this folder.
After activating the environment of your project, never use "sudo pip install package".
After finishing your work, always "deactivate" your environment.
Avoid renaming your project folder.
For a better illustration of these practices, here is a simulation:
creating a folder for your projects/environments
$ mkdir venv
creating environment
$ cd venv/
$ virtualenv google_drive
New python executable in google_drive/bin/python
Installing setuptools, pip...done.
activating environment
$ source google_drive/bin/activate
installing packages
(google_drive) $ pip install PyDrive
Downloading/unpacking PyDrive
Downloading PyDrive-1.3.1-py2-none-any.whl
...
...
...
Successfully installed PyDrive PyYAML google-api-python-client oauth2client six uritemplate httplib2 pyasn1 rsa pyasn1-modules
Cleaning up...
package available inside the environment
(google_drive) $ python
Python 2.7.6 (default, Oct 26 2016, 20:30:19)
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import pydrive.auth
>>>
>>> gdrive = pydrive.auth.GoogleAuth()
>>>
deactivate environment
(google_drive) $ deactivate
$
package NOT AVAILABLE outside the environment
$ python
Python 2.7.6 (default, Oct 26 2016, 20:32:10)
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import pydrive.auth
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pydrive.auth
>>>
Notes:
Why not sudo?
Virtualenv creates a whole new environment for you, defining $PATH and some other variables and settings. When you use sudo pip install package, you are running Virtualenv as root, escaping the whole environment which was created, and then, installing the package on global site-packages, and not inside the project folder where you have a Virtual Environment, although you have activated the environment.
If you rename the folder of your project (as mentioned in the accepted answer)...
...you'll have to adjust some variables from some files inside the bin directory of your project.
For example:
bin/pip, line 1 (shebang)
bin/activate, line 42 (VIRTUAL_ENV)
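For example, if the project folder were renamed from old_name to new_name (the paths below are just illustrative), those lines would need to end up looking roughly like:
bin/pip, line 1:        #!/Users/me/projects/new_name/bin/python
bin/activate, line ~42: VIRTUAL_ENV="/Users/me/projects/new_name"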
Came across the same issue today. I simply reinstalled pip globally with sudo easy_install pip (OSX/Mac), then created my virtualenv again with sudo virtualenv nameOfVEnv. Then, after activating the new virtualenv, the pip command worked as expected.
I don't think I used sudo on the first virtualenv creation, and that may have been the reason for not having access to pip from within the virtualenv. I was able to access pip2 before this fix though, which was odd.
I had this problem. It turned out there was a space in one of my folder names that caused the problem. I removed the space, deleted and reinstantiated using venv, and all was well.
This problem occurs when you create a virtualenv instance and then rename the parent folder.
None of the above solutions worked for me.
My venv was active.
pip -V and which pip gave me the correct virtualenv path, but when I pip install-ed packages with the venv activated, my pip freeze stayed empty.
All the environment variables were correct too.
Finally, I just changed pip and removed virtualenv:
easy_install pip==7.0.2
pip install pip==10
sudo pip uninstall virtualenv
Reinstall venv:
sudo pip install virtualenv
Create venv:
python -m virtualenv venv_name_here
And all packages installed correctly into my venv again.
After creating the virtual environment, try using the pip located in yourVirtualEnvName\Scripts.
It should install packages into Lib\site-packages in your virtual environment.
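For example, from a Windows command prompt (the path and the package are just placeholders):
C:\path\to\yourVirtualEnvName\Scripts\pip.exe install requests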
I had a similar problem on Windows. It was caused by renaming folders in my project, including the virtualenv folder name. The paths inside the virtualenv's files didn't change but stayed as they were when the virtual env was created. As Chase Ries mentioned, I changed the paths to VIRTUAL_ENV and python.exe in these files:
./venv/Scripts/activate.bat, set "VIRTUAL_ENV=path_to_venv\venv" (line 11 in my file)
./venv/Scripts/Activate.ps1, $env:VIRTUAL_ENV="path_to_venv\venv" (line 30 in my file)
./venv/Scripts/pip.exe, #!d:\path_to_env\venv\scripts\python.exe (this shebang is near the end of the file, around line 667 in my case; I work on drive d:, so the path starts with that drive letter)
./venv/Scripts/pip3.7.exe, #!d:\path_to_env\venv\scripts\python.exe (same shebang near the end of the file)
./venv/Scripts/pip3.exe, #!d:\path_to_env\venv\scripts\python.exe (same shebang near the end of the file)
I had this problem too. Calling sudo pip install caused Python packages to be installed in the global site-packages directory, while calling pip install just worked fine.
So don't use sudo in a virtualenv.
The same problem here: Python 3.5 and pip 8.0.2 installed from Linux RPMs.
I did not find a primary cause and cannot give a proper answer. It looks like there are multiple possible causes.
However, I hope I can help by sharing my observations and a workaround.
pyvenv with --system-site-packages
./bin does not contain pip, pip is available from system site packages
packages are installed globally (BUG?)
pyvenv without --system-site-packages
pip gets installed into ./bin, but it's a different version (from ensurepip)
packages are installed within the virtual environment (OK)
Obvious workaround for pyvenv with --system-site-packages:
create it without the --system-site-packages option, then
change include-system-site-packages = false to true in the pyvenv.cfg file.
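pyvenv.cfg sits at the root of the environment; the relevant line to flip looks like this:
include-system-site-packages = true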
It's also worth checking that you didn't somehow modify the path to your virtualenv.
In that case the first line in bin/pip (and the rest of the executables) would have an incorrect path.
You can either edit these files and fix the path or remove and install again the virtualenv.
For Python 3ers
Try updating. I had this exact same problem and tried Chase's answer, but with no success. The quickest way to fix this is to update your Python minor/patch version if possible. I noticed that I was running 3.5.1, updated to 3.5.2, and pyvenv once again works.
This happened to me when I created the virtualenv in the wrong location. I then thought I could move the dir to another location without it mattering. It mattered.
mkdir ~/projects
virtualenv myenv
cd myenv
git clone [my repository]
Oh crap, I forgot to cd into projects before creating the virtualenv and cloning the repo. Oh well, I'm too lazy to destroy and recreate. I'll just move the dir with no issues.
cd ~
mv myenv projects
cd projects/myenv/myrepo
pip install -r requirements
Nope, wants more permissions, what the?
I thought it was strange but SUDO AWAY! It then installed the packages into a global location.
The lesson I learned was, just delete the virtualenv dir. Don't move it.
Had this issue after installing Divio: it had changed my PATH or environment in some way, as it launches a terminal.
The solution in this case was just to do source ~/.bash_profile which should already be setup to get you back to your original pyenv/pyenv-virtualenv state.
Somehow a setup.cfg file with prefix="" had ended up in the project folder.
Running pip install from the virtualenv outside the project folder worked, so from inside the project folder it was telling pip to use an empty prefix, which defaults to "/".
Removing the file fixed it.
I had this problem, and after trying all of the above solutions I just removed everything and started afresh.
In my own case I had used sudo when creating one of the folders in which the virtual environment lived, so it was owned by root.
I was very pissed! But it worked!
I have to use sudo to install packages through pip on my Ubuntu system for some reason. This causes the packages to be installed in the global site-packages. Putting this here for anyone who might face this issue in the future.
I had exactly the problem from the title, and I solved it. Pip started to install in the venv site-packages after I cleaned my PATH: it had a path to my local ~/bin directory at the very beginning.
So, my advice: thoroughly check your environment variables for "garbage" or any non-standard things. Unfortunately, virtualenv can be sensitive to those.
Good luck!
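A quick way to scan for suspicious entries, assuming a bash-like shell:
echo "$PATH" | tr ':' '\n'          # one PATH entry per line; look for stray ~/bin or old venv paths
env | grep -iE 'python|virtual'     # leftover PYTHONPATH, PYTHONHOME or VIRTUAL_ENV values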
Short answer: run the virtualenv command with the parameter --no-site-packages.
Long answer with explanation:
After running here and there and going through a lot of threads, I found the problem myself. The answers above give the idea, but I would like to go over everything again.
The problem is that even if you activate the environment, it still refers to the system environment, because of the way we created the virtualenv.
When we run the command virtualenv env -p python3
it installs the virtualenv but does not create no-global-site-packages.txt.
Because of that, when you activate the environment with the source activate command, a file called site.py (the name may differ; I forget exactly) runs and checks for that file; if it is not present, the env path is not added to sys.path and the system Python is used.
To fix this issue, just run virtualenv with the extra parameter --no-site-packages. It will create that file, and when you activate the environment it will add your custom environment path to your PATH variable, making it accessible.
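So the creation command from above becomes something like the following. (On newer virtualenv releases this is already the default behaviour, so the flag mainly matters for older versions.)
virtualenv env -p python3 --no-site-packages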
A lot of good discussion above, but virtualenv examples were used. Since conda is now a widely recommended tool for managing virtual environments, I have summarized the steps for running pip in a conda env as follows
(I'll use py36r as the name of the env, and /opt/conda/envs as the prefix for the envs):
$ source /opt/conda/etc/profile.d/conda.sh # skip if already done
$ conda activate py36r
$ pip install pkg_xyz
$ pip list | grep pkg_xyz
Note that the pip executed should be in /opt/conda/envs/py36r/bin/pip (not /opt/conda/bin/pip).
Alternatively, you can simply run the following without conda activate
$ /opt/conda/envs/py36r/bin/pip
Also, if you install using conda, you can install without activating:
$ conda install -n py36r pkg_abc ...
WINDOWS
For me, the solution was not to use
mkvirtualenv, but:
python -m venv path/to/your/virtualenv
workon works correctly.
While in the virtualenv, pip -V shows the virtualenv's path to pip.
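For completeness, activating such an environment on Windows and checking pip looks roughly like this (the path is illustrative):
path\to\your\virtualenv\Scripts\activate.bat    (cmd.exe)
path\to\your\virtualenv\Scripts\Activate.ps1    (PowerShell)
pip -V                                          (should now print the virtualenv's pip path)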