Python modules not found after installing pyenv on Mac

I'm still very green with Python, pip, bash, and symlinks. My machine was working just fine, and I was able to run Python CLIs without issue until I needed to run a serverless resource locally for debugging. I followed some instructions in our README to get everything going, and ever since then my machine throws errors saying it cannot find Python modules that I can clearly see are still there. I've been searching for a similar problem with a solution, but haven't found a fix yet.
I'm on a Mac running 10.14.6, using a virtual env with Python 3.7, and iTerm2 with zsh.
Here are the commands I ran in the terminal that borked my local dev environment:
$ brew install pyenv
$ pyenv install 3.7.5
$ pyenv global 3.7.5
This resulted in me not being able to run pip install. I have since used brew to uninstall pyenv and reinstall Python, and am now able to run pip install commands again.
The following command, however, I do not understand and have not been able to undo. I suspect it is the culprit behind my module issues, but to be honest I'm not completely sure:
$ echo -e 'if command -v pyenv 1>/dev/null 2>&1; then\n eval "$(pyenv init -)"\nfi' >> ~/.zshrc
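For reference, the snippet that this command appends to ~/.zshrc reads as follows once unpacked (same code, comments added):

```shell
# Appended to ~/.zshrc, so it runs at the start of every new zsh session:
if command -v pyenv 1>/dev/null 2>&1; then  # "is a pyenv executable on PATH?"
                                            # stdout/stderr are discarded; only the
                                            # exit status of the check matters
  eval "$(pyenv init -)"                    # if so, run pyenv's init code, which
                                            # puts pyenv's shims first on PATH
fi
```

Note that brew uninstall pyenv does not undo this; the block simply stays in ~/.zshrc. With pyenv gone it becomes a harmless no-op (the command -v check fails), but while pyenv was installed it rearranged PATH in every new shell, which is a common source of "wrong python / missing modules" surprises. Deleting these lines from ~/.zshrc undoes the command.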
Here is the traceback from the python module error if that is helpful
File "/Users/<user>/<dir>/<repo dir>/.venv/bin/<evolv>", line 11, in <module>
load_entry_point('evolv==0.1', 'console_scripts', 'evolv')()
File "/Users/<user>/<dir>/<repo dir>/.venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 489, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/Users/<user>/<dir>/<repo dir>/.venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2852, in load_entry_point
return ep.load()
File "/Users/<user>/<dir>/<repo dir>/.venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2443, in load
return self.resolve()
File "/Users/<user>/<dir>/<repo dir>/.venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2449, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
ModuleNotFoundError: No module named 'experiments'
I have triple checked that I ran
pip install -r requirements.txt
And
python setup.py install
Which is required for this CLI to work. One thing to note: our README's instructions were to use
python setup.py install -e .
However, that threw an error saying the -e flag was unknown. I am not sure if this matters. If anyone has some insight that can point me in the direction of a fix, that would be most appreciated. And if someone can help me better understand the bash command I used, I will be eternally grateful, as I am desperately trying to understand bash.
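(One aside that may help future readers: -e / --editable is a pip flag, not a setup.py install option, which is why setup.py rejected it. The README most likely meant pip install -e .; the old setuptools equivalent is python setup.py develop. You can confirm where the flag lives from pip's own help text:)

```shell
# setup.py install has no -e option, but pip install does:
python3 -m pip install --help | grep -i 'editable'
# The README probably intended one of these (run in the project root):
#   pip install -e .            # editable/development install via pip
#   python setup.py develop     # the older setuptools equivalent
```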

Related

How to run a shell-executed python script in a virtual environment

I have a shell script that executes some python code, and installs the necessary dependencies to do so beforehand:
sudo -E PATH="$PATH" python3 -m pip install -r requirements.txt
The script runs successfully when executed like so:
./script.sh
but I get an import error from Python when I run the script with sudo permission:
sudo ./script.sh
Traceback (most recent call last):
File "dsc.py", line 2, in <module>
from imblearn.over_sampling import ADASYN
File "/usr/local/lib/python3.8/dist-packages/imblearn/__init__.py", line 53, in <module>
from . import ensemble
File "/usr/local/lib/python3.8/dist-packages/imblearn/ensemble/__init__.py", line 8, in <module>
from ._forest import BalancedRandomForestClassifier
File "/usr/local/lib/python3.8/dist-packages/imblearn/ensemble/_forest.py", line 28, in <module>
from sklearn.utils.fixes import _joblib_parallel_args
ImportError: cannot import name '_joblib_parallel_args' from 'sklearn.utils.fixes' (/usr/local/lib/python3.8/dist-packages/sklearn/utils/fixes.py)
I don't understand why Python can no longer import ADASYN when the script is given sudo permission.
I am using virtualenv to control package versions per project. When I check where the imblearn package is located, from within the same virtual environment in which the shell script was executed, I get:
❯ python3 -m pip show imblearn
WARNING: Package(s) not found: imblearn
But I can find it when not in a virtual environment:
❯ deactivate
❯ python3 -m pip show imblearn
Name: imblearn
Version: 0.0
Summary: Toolbox for imbalanced dataset in machine learning.
Home-page: https://pypi.python.org/pypi/imbalanced-learn/
Author: UNKNOWN
Author-email: UNKNOWN
License: UNKNOWN
Location: /usr/local/lib/python3.8/dist-packages
Requires: imbalanced-learn
Required-by:
This means that my shell script does install the dependencies, but not within the virtual environment from which it is executed. Removing the imblearn package and re-running the shell script installs it again, so I am certain that it is this shell script that installs the package to the default location. I thought that running Python with the sudo -E PATH="$PATH" option would ensure the python command sees the same path as the virtual environment it is run from, and thus the correct dependencies. But given that the dependencies end up in my default Python package directory, /usr/local/lib/python3.8/dist-packages, this is evidently not the case.
So how can I run python code in a shell script within the same virtual environment from which the shell script was executed? Is this possible at all? I am familiar with Python, but very new to shell.
I'm using Python 3.8.10 (both as default and in the virtualenv), Ubuntu 20.04 and zsh.
If you run sudo ./script.sh, the actual executor of the script is the superuser, which means none of the env-related settings of your virtual env are carried over.
You may need to point the script at the virtual env's interpreter explicitly, i.e., add one line at the top of ./script.sh: #!/path/to/your/virtualenv/bin/python
The problem can be mitigated by running the bash script like so:
sudo -E PATH="$PATH" ./script.sh
instead of just sudo ./script.sh.
This way, sudo executes the shell script, and the python path of your current virtualenv is passed along.
Inside the shell script, you now don't need to execute all python scripts with sudo -E PATH=$PATH script.py, but simply using python script.py suffices, since in this case python refers to the virtualenv python version.
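Either way, the underlying rule is the same: whichever python binary actually runs determines where imports (and pip installs) resolve. A quick throwaway check, using a temporary venv rather than the asker's setup:

```shell
tmp=$(mktemp -d)
python3 -m venv --without-pip "$tmp/venv"   # --without-pip just keeps the demo fast
# The venv's own interpreter reports the venv as its prefix, so calling
# venv/bin/python explicitly (e.g. via the script's shebang) keeps you inside
# the venv even under sudo, where PATH, and hence bare "python3", changes.
"$tmp/venv/bin/python" -c 'import sys; print(sys.prefix)'
```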

pyinstaller asks for dependency that has assuredly been provided in venv and -p

Trying to make an executable python module into an .exe using pyinstaller. So I
created an environment: c:\Python39\python.exe -m venv %DIR_BASE%,
activated it: %DIR_BASE%\scripts\activate, and finally
wrote a requirements.txt including the line jsonschema, saying
python -m pip install -r requirements.txt
Now the main command, which accesses a pyinstaller.exe that per se knows nothing
of the new environment beyond the -p parameter I give it, and which I hoped would suffice:
C:\python39\scripts\pyinstaller
-p "X:\paws\src\shared\python;X:\paws\PyEnv\Py39\Lib\site-packages;
X:\paws\PyEnv\Py39\Lib\site-packages\jsonschema;
X:\paws\PyEnv\Py39\Scripts"
-n ptGui
%DIR_BASE%\prototype_gui\__main__.py
Alas, I got:
File "c:\python39\lib\site-packages\pkg_resources\__init__.py", line 785, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'jsonschema' distribution was not found
and is required by the application
However, I am pretty positive that the environmental(!) installation of that
package succeeded:
(Py39) X:\paws\PyEnv\Py39\Lib\site-packages>ls -1 | grep jsonschema
jsonschema
jsonschema-3.2.0.dist-info
Pretty sure that is some beginner's misconception. Does anyone of you see it,
and could you tell me how to do it right?
I can answer my own question. Apparently it is wrong to install pyinstaller on the base installation, use it from there, and expect everything to work.
Instead of C:\python39\python -m pip install pyinstaller I now use
X:\PyEnv\Py39\Scripts\activate
python -m pip install pyinstaller
yielding an env-specific X:\PyEnv\Py39\Scripts\pyinstaller.exe. Using that, all was good.

How to make pip available to git bash command line on Windows?

I added the pip installation folder in my python site-packages directory to my PATH, but I can still only run it via python -m pip in my git bash. Just pip gives me command not found.
I looked around the pip directory and I don't see any binaries, so this makes sense. Yet clearly pip is a thing that is normally used on the command line without python -m. So what component am I missing here?
Try adding C:/path/to/python/Scripts/ to PATH
There should be a pip.exe there!
In your Anaconda directory there is a pip executable in the Scripts directory.
Add this path: C:\Users\<yourname>\Anaconda3\Scripts
I did the same and it was successful: after that, typing pip in Git Bash worked.
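All of the answers above rely on the same mechanism: any directory listed on PATH makes its executables callable by bare name. A throwaway demonstration with a fake executable (hypothetical name pip-demo, not a real tool):

```shell
tmp=$(mktemp -d)
printf '#!/bin/sh\necho hello from fake-pip\n' > "$tmp/pip-demo"
chmod +x "$tmp/pip-demo"
export PATH="$PATH:$tmp"     # the Git Bash analogue of adding ...\Scripts
pip-demo                     # now found by bare name; prints "hello from fake-pip"
```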

manage.py - ImportError: No module named django

I just ported a working django app from a windows system to ubuntu by just copying all the files to /var/www/some/dir/djangoApp. But now, when executing
python manage.py runserver 8080
I get the Error:
ImportError: no module named django
I have already installed a fresh version of django with python setup.py install to /usr/local/lib/python2.7/dist-packages/django/ and added the path to PYTHONPATH.
The Linux system is not maintained by me and has numerous Python versions installed.
calling >>> import django in the shell does not raise an ImportError.
I'm very confused. Please help me!
Here's the traceback from the console:
Traceback (most recent call last):
File "manage.py", line 13, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 242, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 280, in execute
translation.activate('en-us')
File "/usr/local/lib/python2.7/dist-packages/django/utils/translation/__init__.py", line 130, in activate
return _trans.activate(language)
File "/usr/local/lib/python2.7/dist-packages/django/utils/translation/trans_real.py", line 188, in activate
_active.value = translation(language)
File "/usr/local/lib/python2.7/dist-packages/django/utils/translation/trans_real.py", line 177, in translation
default_translation = _fetch(settings.LANGUAGE_CODE)
File "/usr/local/lib/python2.7/dist-packages/django/utils/translation/trans_real.py", line 159, in _fetch
app = import_module(appname)
File "/usr/local/lib/python2.7/dist-packages/django/utils/importlib.py", line 40, in import_module
__import__(name)
ImportError: No module named django
Since you just migrated to a UNIX environment, I suggest you also migrate to the best practices on such a platform.
Download PIP
sudo apt-get install python-pip
Download and install virtualenv to set up a separate python virtual environment for your apps. This will allow you to run different flavors of django and other software without conflicts.
sudo pip install virtualenv
Create a virtual environment by running the command below. You will get a folder called myvirtualenvironment with a bin folder and a few executables inside it.
virtualenv myvirtualenvironment --no-site-packages
In order to tell your shell that you're working with that newly created virtual environment you need to run the activate script found in /myvirtualenvironment/bin/
source myvirtualenvironment/bin/activate
Now you can install django specifically to that virtual environment.
pip install django OR pip install django==1.6 depending on what version you want to install. If you don't specify, the latest version will be installed.
Now, migrate your Django project inside /myvirtualenvironment/ and run the runserver command.
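Condensed, the steps above amount to the following (sketched here with python3 -m venv, the modern stdlib stand-in for the virtualenv tool used above; the original answer predates it):

```shell
cd "$(mktemp -d)"                         # work somewhere disposable for the demo
python3 -m venv myvirtualenvironment      # create the environment
. myvirtualenvironment/bin/activate       # activate it ("source" also works)
command -v python                         # sanity check: resolves inside the venv now
# pip install django                      # would install into the venv, not system-wide
```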
Sometimes there are stale .pyc files in the directories and you don't get any error in the console. Try installing Django from pip:
sudo pip install django
Best practice is to create a requirements.txt file (from your Windows installation):
pip freeze > requirements.txt
Then create a new virtualenv and install every package into it:
mkvirtualenv myapp
pip install -r requirements.txt
I landed on this page after getting the same error (on a site I've been actively developing just fine for months). For me, #asaji's answer reminded me that I had forgotten to launch my virtual env.
After launching my virtual env with . Scripts/activate, it worked great!
It seems like quite a big job for a problem that (MIGHT) be very small.
I had this exact problem: it was working one day, and the next day it wasn't working anymore. I'm pretty new to Linux and Django in general but know Python well, so I didn't really know where to look other than "virtual environment".
I started installing virtual environments again (like some people suggest) BUT DON'T!
At least not until you've tried this and thought about it:
Did you install your virtual environment as a temporary one (did you perhaps install it like this: pip install pipenv)?
If you did (as you should have), you will have two files somewhere around your current Django project: Pipfile and Pipfile.lock.
Open your terminal and cd to the path of those files (same folder).
Write in the terminal: pipenv shell
BOOM: you just reactivated your "temporary" virtual environment, and Django works exactly as it should, will, and did.

Python easy_install in a virtualenv gives setuptools error

There are a number of other StackOverflow questions similar to this one, but in each case, the platform was different or the error message was different or the solution had no effect or was outdated. I am trying to set up a Python 2.7.6 virtualenv and install modules into it but easy_install gives me errors indicating that setuptools is not available. But AFAIK easy_install is part of setuptools, so this makes no sense.
The problem only happens in a virtualenv. Here's what I've done:
Created a brand new Red Hat 5 virtual machine
Did a yum -y update to get the latest stuff, rebooted
Downloaded Python-2.7.6.tar.gz, unzipped, ./configure; make; sudo make install
Confirmed that python -V gives me 2.7.6 and sudo python -V also gives me 2.7.6
wget https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py
Modified ez_setup.py to add the --no-check-certificate flag to wget to get around proxy server problems in our network
sudo python ez_setup.py
sudo easy_install pip
sudo pip install virtualenv
virtualenv virtpy
. virtpy/bin/activate
easy_install elementtree
All of these steps succeed except for the last one, which fails with:
Traceback (most recent call last):
File "/home/gperrow/virtpy/bin/easy_install", line 7, in <module>
from setuptools.command.easy_install import main
File "/home/gperrow/virtpy/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 44, in <module>
from setuptools.package_index import PackageIndex
File "/home/gperrow/virtpy/lib/python2.7/site-packages/setuptools/package_index.py", line 203, in <module>
sys.version[:3], require('setuptools')[0].version
File "/usr/local/bin/scripts/pkg_resources.py", line 584, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/local/bin/scripts/pkg_resources.py", line 482, in resolve
raise DistributionNotFound(req) # XXX put more info here
pkg_resources.DistributionNotFound: setuptools
I'm starting with a clean VM and I've done nothing really unusual but I'm finding "easy_install" anything but. Am I doing something wrong or am I missing one or more steps?
I can't tell why exactly you get errors, but I am confident that there is a systematic approach that lets you cleanly install your custom Python including a working pip and virtualenv. In the following, I describe the procedure that I would use.
First of all, leave your system's Python untouched, for a number of reasons. One of them is that parts of your Linux distribution might depend on the specifics of its default Python. You don't want to break these parts. Another reason is that the vanilla Python installed to default locations might become confused by residuals of the original Python (distributions may have a specific Python/dist-packages/site-packages directory layout that differs from the vanilla one). This might or might not be a real problem in practice -- you can conceptually prevent these issues by not overwriting the system's Python. Another argument is that there is no need to install Python 2.7.6 as root. Install it as unprivileged user (called 'joe' from here on) and put it into /opt or something. This would be a clean start.
After having set up your custom Python, create yourself a little shell script, e.g. setup.sh that sets up the environment for using your custom Python version. Make sure to adjust and clean up the environment. Obviously, this especially affects PATH and PYTHONPATH. I would make sure that PYTHONPATH is unset and that PATH properly points to the custom install. Look at env and try to identify if there is anything left that might configure python in unexpected ways. After all, make sure that
$ command -v python
$ python -V
, executed as joe, look right.
Still being joe and under the proper environment, install pip for the custom Python. According to http://pip.readthedocs.org/en/latest/installing.html, download https://raw.github.com/pypa/pip/master/contrib/get-pip.py and execute it: python get-pip.py. Validate that it installed properly and that your environment is still right:
$ command -v pip
/CUSTOM/PYTHON/bin/pip
$ pip --version
pip 1.x.x from /CUSTOM/PYTHON/lib/python2.7/site-packages
At this point you should make sure that your environment does not contain any VIRTUALENV_* variables (which might have been set by your distribution or whatever component (unlikely, but worth checking)). If any VIRTUALENV_* variable is set, it most likely configures virtualenv in an unexpected way. Get rid of this (unset, or change). Then go ahead and install virtualenv into your new Python with the new pip, via pip install virtualenv. It might also be worth a try to install the latest development version of virtualenv via pip install https://github.com/pypa/virtualenv/tarball/develop.
Create and activate a new virtual environment. Using command -v pip, verify that pip comes from the virtual environment. Then install your custom package(s).
Note: I would definitely use pip to install things to the new virtual environment, and not easy_install, if possible. pip will quite soon be the official installer tool (it will be included with Python 3.4). If for some reason you really depend on easy_install, this should be possible (the easy_install command is provided by the virtual environment), but just to be sure you should also verify this via command -v easy_install.
I have a couple of suggestions, and also what I believe is your problem. Let's go for the problem first.
I noticed you said in your third bullet point
Confirmed that python -V gives me 2.7.6 and sudo python -V also gives me 2.7.6
But you did NOT display the python version visible after the 2nd to last bullet point, when you activate your virtualenv. Since that step plays with your path, it's possibly not invoking the python you think.
What does python -V give AFTER you activate your virtualenv? I strongly suspect after the activate step you are being redirected and invoking the system python (which on RHEL is typically <= 2.5). It's important to RHEL that you not upgrade the system installed version of python, and the folks at RedHat go through several hoops to ensure this.
My first suggestion is to go back to the step where you installed python, and specify an alternate installation. Something like:
./configure --enable-shared --prefix=/opt/python2.7 && make && sudo make install
(note: --enable-shared is not specifically required...just a good idea)
The second suggestion I have is related to Python package management. We will typically use easy_install to get pip installed. Once we have pip, we switch over to using pip for everything. It would be interesting to know what happens if, in your last step after activating your virtualenv, you instead run:
pip install elementtree
One more suggestion. After you've installed python2.7, and then installed virtualenv, pip & easy_install, you should then have *-2.7 version of these scripts available. It might be better to try invoking them & specify the version. This removes any ambiguity about the version you're requesting. eg:
virtualenv-2.7 virtpy
pip-2.7 install elementtree
easy_install-2.7 elementtree
Your approach is correct, and so are the other answers (Jan-Philip's and Piotr's), but your problem is simple:
You are using part of an old installation of setuptools on the Python sys.path together with the new installation. It is apparent that the line numbers in pkg_resources.py should be approximately 100 lines greater for the current version of setuptools than they are in your traceback:
...
File "/usr/local/bin/scripts/pkg_resources.py", line 669, in require ## 584 is too old
needed = self.resolve(parse_requirements(requirements))
File "/usr/local/bin/scripts/pkg_resources.py", line 572, in resolve ## 482 is too old
raise DistributionNotFound(req)
The line numbers of the first three setuptools files in the traceback are correct: "virtpy/bin/easy_install", "virtpy/.../site-packages/setuptools/command/easy_install.py", "virtpy/.../site-packages/setuptools/package_index.py". Mixing different versions of the same package is always a big problem.
Examine your Python sys.path with python -c "import sys; print sys.path" and consider why "/home/gperrow/virtpy/lib/python2.7/site-packages" is not before "/usr/local/bin/scripts", or search everywhere for the string "/usr/local/bin/scripts". Fix it. One possible solution is to install setuptools again locally into your active virtualenv: python ez_setup.py. (A fast way to find the cause is to first determine whether it is triggered by user settings: create a new user account, run the last three commands (virtualenv virtpy; ...) from that account, look at the result, and delete the user. If it works there, examine which configuration file in your profile causes the problem.)
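The shadowing itself is easy to reproduce in miniature (throwaway modules, nothing to do with the machine above): two directories each provide a module of the same name, and whichever directory comes first on the path wins, exactly as the stale /usr/local/bin/scripts/pkg_resources.py is winning over the virtualenv's copy here:

```shell
a=$(mktemp -d); b=$(mktemp -d)
echo 'VERSION = "new"' > "$a/shadow.py"   # stand-in for the venv's fresh copy
echo 'VERSION = "old"' > "$b/shadow.py"   # stand-in for the stale system copy
# The first directory on the path that has the module wins:
PYTHONPATH="$a:$b" python3 -c 'import shadow; print(shadow.VERSION)'   # prints: new
PYTHONPATH="$b:$a" python3 -c 'import shadow; print(shadow.VERSION)'   # prints: old
```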
Verify finally that the new pkg_resources are used:
(virtpy)$ python -c "import pkg_resources; print pkg_resources"
<module 'pkg_resources' from '/home/you/virtpy/lib/python2.7/site-packages/pkg_resources.pyc'>
# not the obsoleted /usr/local/bin/scripts/...
Did you try to use Software Collections? It is a standard approach on RHEL to get more up to date packages like python27:
http://developerblog.redhat.com/2013/08/08/software-collections-quickstart/
Then, to use python27, you have to prefix your commands with
scl enable python27
e.g. to have a bash with python27:
scl enable python27 bash
This setup will possibly have a more compatible environment.
