Update:
It turns out that the virtualenv was not properly initialized before running easy_install. Once that was rectified, things started working as intended. There's no solution to post, since the stated problem did not exist in the first place: the 'when I activate the virtualenv' step was not properly taken (don't ask), so the malfunction described below was an illusion.
Case closed.
Original question:
I have a virtualenv. Inside it, sys.path looks like this:
[...,
'/<inside_virtualenv>/lib/python2.6/site-packages/foo-1.2.egg',
...
'/usr/local/lib/python2.6/dist-packages/foo-2.0.egg'
]
If I import foo from inside the virtualenv, I get foo-1.2 imported, as expected.
I have an egg; its setup file lists another egg as a dependency, and that other egg in turn requires foo==1.2.
When I activate the virtualenv and try to run python <my_egg>/setup.py develop, I get an error:
Processing dependencies for <my egg>
Installed distribution foo 2.0 conflicts with requirement foo==1.2
I even patched setuptools/command/easy_install.py to print sys.path right inside the try statement that raises this exception. The path looks right, listing foo-1.2 first and foo-2.0 a distant second.
What am I doing wrong? Is there any way to make easy_install ignore the non-virtualenv foo-2.0 installation and accept foo-1.2 inside the virtualenv?
Removing the offending entry from sys.path inside my egg's setup.py does not help: even when sys.path contains only the correct version of foo, the process fails with the same error.
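(For reference, the removal attempt looked roughly like this; the egg filename, package name, and dependency name are placeholders:)
# top of the egg's setup.py: drop the system-wide foo-2.0 egg before setuptools resolves dependencies
import sys
sys.path = [p for p in sys.path if not p.endswith('foo-2.0.egg')]

from setuptools import setup

setup(
    name='my_egg',
    version='0.1',
    install_requires=['bar'],  # 'bar' stands in for the dependency that pins foo==1.2
)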
There's another possible case where this can happen, beyond the one you were experiencing directly, but which is easily avoided:
When setting up a new virtualenv, use --no-site-packages to avoid including libraries from your system Python installation unless you're certain they don't (and will never) conflict.
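For instance, a minimal sketch (the environment name is arbitrary):
virtualenv --no-site-packages my_env
source my_env/bin/activate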
Related
pytest appears to be using old source code and failing tests because of it. I'm not sure how to update it.
Test code:
from nba_stats import league
class TestLeaders():
    def test_default(self):
        leaders = league.Leaders()
        print(leaders)
Source code (league.py):
from nba_stats.nba_api import NbaAPI
from nba_stats import constants
class Leaders:
    ...
When I run pytest on my parent directory, I get an error that refers to an old import statement.
_____________________________ ERROR collecting test/test_league.py ______________________________
ImportError while importing test module '/home/mfb/src/nba_stats/test/test_league.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test_league.py:1: in <module>
from nba_stats import league
../../../.virtualenvs/nba_stats_dev/lib/python3.6/site-packages/nba_stats/league.py:1: in <module>
from nba_stats import _api_scrape, _get_json
E ImportError: cannot import name '_api_scrape'
I tried resetting my virtual environment and also reinstalling my package via pip. What do I need to do to tell it to see the new import statement, and why is this happening?
Edit: Deleting my virtual environment completely and then creating a new one seemed to fix it, but it seems to be a recurring issue with any further source code changes. Surely there must be a way to avoid resetting my virtual environment each time?
Looks like you installed that package (possibly as a dependency of something else, if not directly) and also have it cloned locally for development. You can look into local editable installs (https://pip.pypa.io/en/stable/reference/pip_install/#editable-installs), but personally, I prefer to make the test refer directly to the package it is being run against, since then it can be used "as-is" right after cloning. Do this by modifying sys.path in your test_league.py. I.e., assuming a structure where the Python code lives under python/nba_stats in the parent directory of test, add
import os, sys
sys.path = [os.path.join(os.pardir, 'python')] + sys.path
at the top of test_league.py. This puts your local package up front, so import will consider it first.
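(If you instead go the editable-install route linked above, it is typically just the following, run from the root of your local clone; mentioned only as the alternative:)
pip install -e .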
EDIT:
Since you tried it and it still did not work (please do make sure the snippet above points to the local Python package as it exists in your actual structure; the layout above is just a common one, and yours may differ), here is how you can see which directories are considered, in which order, and which are eventually selected:
python -vv -m pytest -svx
You will be able to see if there are spurious directories in sys.path, whether the one tried (as in the snippet above) matches as expected or not, any left-over .pyc files that get picked up, etc.
EDITv2: Since you stated that python -m pytest works but pytest does not, have a look at where that pytest executable comes from with which pytest. Likely it's a system-wide one that refers to a different python than the one in your virtualenv. To see which python it picks up, do:
cat `which pytest`
and look at the top line (the shebang).
If that is not the same as what which python gives you (with your desired virtualenv activated), you may have to install pytest into that virtualenv (python -m pip install pytest).
For example, import trackpy returns the module-not-found error. I have already confirmed that trackpy has been downloaded somewhere on my computer, because attempting to install it again via conda install -c soft-matter trackpy eventually returns something to the effect of "all files already installed". This happens for every "external import" (numpy, scipy, matplotlib), i.e. one that was downloaded from the internet, but not for "internal imports" (sys, os). I believe this is just a matter of Jupyter not looking for the files in the correct place, but I don't know how to fix something like this.
Edit: Relevant info: I ran
import sys
sys.executable
which returns 'c:\\users\\reese\\miniconda3\\python.exe'. In the pkgs folder for miniconda3, there are none of the imports that I want. However, 'c:\\users\\reese\\Anaconda\\pkgs' has all the imports, trackpy and everything else. Is there an easy way to make Jupyter check there for imports? I already tried straight up copying the entire pkgs folder into miniconda3's pkgs folder, but it did not work.
There are two solutions I would propose.
Okay Solution:
Yes, you can add the path to your other packages with sys.path:
import sys
sys.path.insert(0, 'PATH_TO_YOUR_OTHER_PACKAGES')  # e.g. the other installation's site-packages directory
import Packages_of_another_path
By inserting it at index zero, you ensure that your other packages get first priority in case there is another package with the same name.
Better Solution (Recommended):
Always use environments. E.g.
conda create --name your_env python=3.6 pip
conda activate your_env
conda install packages1 packages2
pip install package3
In this environment you can keep all your things together.
Whenever you want to use your packages, activate your environment and start hacking ;)
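(If the goal is to use this environment from Jupyter specifically, you will usually also need to register it as a kernel; this step is an addition beyond the answer above, and the environment name is just an example:)
conda activate your_env
conda install ipykernel
python -m ipykernel install --user --name your_env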
I have recently tried to use pylearn2, a deep machine learning package for Python developed at the University of Montreal.
I've just installed it and tried to run a simple example, but it did not work.
I have been using a pc with an Ubuntu 13.10 system, on which I found ipython installed.
I have installed Theano and later pylearn2, by following this webpage instructions:
http://deeplearning.net/software/pylearn2/
I have also modified the .bashrc file, as suggested.
I thought that everything went well, and then I tried this Quick start example:
http://deeplearning.net/software/pylearn2/tutorial/index.html
I stopped at the first command:
python make_dataset.py
My terminal states:
Traceback (most recent call last):
  File "make_dataset.py", line 14, in <module>
Do you have any ideas on why it is not working?
Do you know why these errors occur?
Thanks a lot
EDIT: line 14 is the first non-commented line of the file. It states
from pylearn2.utils import serial
Without more information, I can only guess, but my first guess is…
You haven't actually installed pylearn2, because if you follow the linked docs to grab the git repo and add a PYLEARN2_DATA_PATH variable, nothing gets installed into site-packages (or dist-packages or anywhere else on sys.path).
This means that pylearn2 will only work when you start Python from within the top-level directory of the pylearn2 repo.
So, if you run a script like this:
$ cd /path/to/pylearn2
$ cd scripts/tutorials/grbm_smd/
$ python make_dataset.py
… it won't actually work.
It looks like there is a setup.py file in the repository. Does it work? I have no idea. Even though the docs don't mention using it, you might want to try. Either this:
$ pip install .
… or, if you don't have pip or it doesn't work on this package:
$ python setup.py install
Either way, of course, you may need sudo or a flag to install to your user site-packages instead of system, etc., as with any other Python package.
If that doesn't work, you might be able to just add /path/to/pylearn2 to your sys.path in some way. The most obvious way is by doing an export PYTHONPATH=/path/to/pylearn2:$PYTHONPATH in your ~/.bashrc.
Also, you will need to either source ~/.bashrc or create a new shell to get any effects of modifying the file.
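(For example, something along these lines; the path is whatever your actual clone location is:)
$ echo 'export PYTHONPATH=/path/to/pylearn2:$PYTHONPATH' >> ~/.bashrc
$ source ~/.bashrc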
If you're wondering why the instructions and the tutorial together don't give you enough information to make this work without a lot of hassle, I think that's covered in the very top of the documentation:
Pylearn2 is still undergoing rapid development. Don’t expect a clean road without bumps!
And the very fact that there is no PyPI download yet implies that this really is not ready for novices to use. If you don't know enough about using Python packages (and bash basics) to muddle through on your own, there's a good chance you won't be able to use this package.
When I'm working on module fork, I'll often add the fork (in progress) to the virtualenv of a full project for integrated testing, using
python setup.py develop
(which updates easy_install.pth to point to the local copy)
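(For context, the entry the develop command adds to that .pth file is just a path line pointing at the working copy; the directory shown here is hypothetical:)
# easy-install.pth, inside the virtualenv's site-packages -- illustrative entry only
/home/me/src/my_fork_in_progress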
When I'm done, the only way I've figured out to cleanly get rid of this is to remove the entry from easy_install.pth or edit it to point to the already-installed version.
I also can't easy_install --upgrade, because it realizes the development version is the latest.
I think pip could force the upgrade, but then it tries to reinstall every single dependency.
Does anyone have a good technique/strategy for managing this sort of thing? I know I'm missing something obvious here.
Getting the following kinds of warnings when running most python scripts in the command line:
/Library/Python/2.6/site-packages/virtualenvwrapper/hook_loader.py:16: UserWarning: Module pkg_resources was already imported from /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/pkg_resources.pyc, but /Library/Python/2.6/site-packages is being added to sys.path
  import pkg_resources
/Library/Python/2.6/site-packages/virtualenvwrapper/hook_loader.py:16: UserWarning: Module site was already imported from /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site.pyc, but /Library/Python/2.6/site-packages is being added to sys.path
  import pkg_resources
I think it has to do with a combination of using distribute and virtualenv, but wanted to check if anyone else has run into this or would know how to go about fixing it.
Perhaps use the virtualenv option --no-site-packages so you won't see any system site-packages within your virtual environment. Having items installed both in your virtualenv and on the system root may be the cause of this issue.
Using --no-site-packages when creating your virtualenv prevents conflicts with system packages. I almost always use that option when creating a new virtualenv. Though I may end up with several copies of some libraries, at least they don't mess with each other.
The Python equivalent of putting a bit of electrical tape over the check-engine light would be to use the -W command-line flag or to add a warning filter.
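For example (a sketch only; note this silences all UserWarnings for the process, not just these two):
# put this before the imports that trigger the warning
import warnings
warnings.filterwarnings("ignore", category=UserWarning)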
In my case, reinstalling things did not help. There were some orphaned .pyc files (specifically pkg_resources.pyc) left in /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python
sudo find . -type f -name "*.pyc" -delete
made it work. This link helped me to track down the problem.
I had this sort of Python packaging hell visit today too.
Running Python 2.7.3 on Ubuntu, using namespace packages and using zc.buildout.
Finally, updating the system-wide Distribute from the older version 0.6.30 to the latest version 0.6.35 resolved the problem.
If the warning shows up in a program you are modifying, try it this way (example with pytz):
try:
    import pytz
except ImportError:
    # fall back to locating the distribution via setuptools' pkg_resources
    from pkg_resources import require
    require('pytz')
    import pytz