I am having problems installing modules and then importing them into specific Jupyter Notebook kernels. I want to install them into the specific environment backing a kernel, as opposed to into Anaconda as a whole, so that each project's dependencies stay separate. Here is how the problem goes:
First, I want a package, for example nltk
I navigate to and activate the conda environment (called python3) and run 'conda install nltk'
I then load that environment into Jupyter using ipykernel with the command 'python -m ipykernel install --user --name python3'
When I try to import the package in the notebook, it tells me that it cannot be found
I have been struggling with this for a while. Where am I going wrong? I greatly appreciate all the help.
NOTE: I have somehow managed to install and import many packages into notebooks using the aforementioned process. I'd really like a method to do this in a foolproof manner.
Not entirely clear where things go wrong, but perhaps clarifying some of the terminology could help:
"navigate to...the conda environment" - navigating has zero effect on anything. Most end-users should never enter or directly write to any environment directories.
"...and activate the conda environment" - activation is unnecessary - a more robust installation command is always to use a -n,--name argument:
conda install -n python3 nltk
This is more robust because it is not context-sensitive, i.e., it doesn't matter what (if any) environment is currently activated.
"load that environment into Jupyter using ipykernel" - that command registers the environment as a kernel at a user-level. That only ever needs to be run once per kernel - not after each new package installation. Loading the kernel happens when you are creating (or changing the settings of) a notebook. That is, you choose the kernel in the Jupyter GUI.
Even better, keep jupyter in a dedicated environment with an installation of nb_conda_kernels and Jupyter (launched from that dedicated environment) will auto-discover all Conda environments that have valid kernels installed (e.g., ipykernel, r-irkernel).
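A minimal sketch of that setup (environment names here are just examples):

conda create -n jupyter_env -c conda-forge jupyter nb_conda_kernels   # dedicated Jupyter env
conda install -n python3 ipykernel nltk                               # project env needs only a kernel + its packages
conda activate jupyter_env
jupyter notebook                                                      # python3 appears in the kernel picker automatically

Launched this way, the kernel picker will list every Conda environment that has ipykernel (or r-irkernel) installed, with no per-environment registration step.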
Related
In a conda environment with Python 3.8.15 I did
pip install ultralytics
successfully installed ...,ultralytics-8.0.4
But when running from ultralytics import YOLO, it says
ModuleNotFoundError: No module named 'ultralytics'
I ran it using Colab.
First I installed ultralytics using the pip command
!pip install ultralytics
then
from ultralytics import YOLO
and it worked.
Were you using a Jupyter notebook? If so, Jupyter might not be using the correct Python interpreter, or you may be running a Jupyter installed on the system rather than one installed inside a conda environment.
To check which Python interpreter Jupyter uses, run this code in a cell:
import sys
print(sys.executable)
To list all available Python interpreters, use this command:
!which -a python
The Python inside the conda environment should be at a path something like this:
~/.conda/envs/{myenv}/bin/python
To use the correct interpreter inside a conda environment, you need a separate Jupyter installation inside that environment. See this answer: How to use Jupyter notebooks in a conda environment?
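A minimal sketch of that approach (myenv is a placeholder name):

conda create -n myenv python jupyter   # the env gets its own Python and Jupyter
conda activate myenv
pip install ultralytics                # installs against this env's interpreter
jupyter notebook                       # launched from the same env, so sys.executable will match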
You can use the magic %pip install from a cell inside the notebook to ensure the installation occurs in the environment that the Jupyter notebook kernel is using. Mikhael's answer points out the thorough way to be really sure how to deal with this and fully control things. However, it is nice to have convenient, quick alternatives when trying to get past a hurdle.
For those using actual Jupyter anywhere (not Google Colab), the install command would be:
%pip install ultralytics
Be sure to let the installation process finish fully; depending on your system and network, this can take a bit. After running any magic install command, you'll see a message to restart the kernel, and it is always best to do that before trying the import statement. Finally, after restarting the kernel, you can run the suggested import, from ultralytics import YOLO, and hopefully not encounter ModuleNotFoundError: No module named 'ultralytics' anymore.
The magic command was added to ensure that installation occurs in the environment where the kernel backing the notebook is found. See here for more about the modern magic install commands in Jupyter. (For those using conda/Anaconda/mamba as the primary package manager, for when packages have conda install recipes, there's a related %conda install variant that likewise ensures installation into the environment that the kernel is using.)
See JATIN's answer if you are using Google Colab at this time, because I don't believe Google Colab has the magic pip install; they have sadly not kept up with current Jupyter abilities.
Using the exclamation point in conjunction with pip install is outdated for typical Jupyter given the addition of the magic command. Occasionally, the exclamation point's failure to ensure that the install occurs in the same environment where the kernel is running could lead to issues and confusion, and so the magic command was added a few years ago to make installs more convenient. For more about the shortcoming of the exclamation-point variant for this particular task, see the first sentence here.
In fact, these days no symbol at all is better than an exclamation point in front of pip install or conda install when running such commands inside a vanilla Jupyter notebook. This is because automagics are enabled by default on most Jupyter installations, so without the symbol, the magic command variant gets used behind the scenes. It is typically better to be explicit and use the magic symbol, but you may see the no-symbol form work or be suggested and wonder what is happening.
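To see the distinction at a glance, here are the three forms as notebook cells (a sketch, assuming a modern Jupyter with automagics enabled):

pip install ultralytics    # no symbol: automagics silently rewrites this to %pip install
%pip install ultralytics   # explicit magic: installs into the kernel's own environment
!pip install ultralytics   # shell escape: may hit whichever pip is first on PATH instead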
I have a conda environment containing all packages for Jupyter notebook (say it's called jupyter_env). In a different conda environment I have R installed, including r-irkernel (say the env is called R_env).
For python kernels I can easily make a python kernel in a specific environment (called e.g. pyth27) available to my jupyter installation in a different environment:
(pyth27) > python -m ipykernel install --prefix=/path/to/jupyter/env --name "python27"
Is there anything similar possible for the R kernel? So far I can only run the R kernel using a Jupyter installation within the same environment (R_env).
One solution might be the nb_conda_kernels package. However, I'm not clear whether it always adds all available kernels from all environments or whether I can specify which environments should be searched.
My question is similar to this one: https://github.com/jupyter/jupyter/issues/397, except that I don't want to use the base environment to start Jupyter, but a dedicated environment.
As described on https://github.com/IRkernel/IRkernel, the r-irkernel package provides a mechanism similar to python -m ipykernel install, to be run in R:
R> IRkernel::installspec()
To run this from Bash, you can do
(R_env)> Rscript -e "IRkernel::installspec()"
Now the tricky part, due to Jupyter and R being in different environments: According to https://github.com/IRkernel/IRkernel/issues/499, IRkernel::installspec() requires the jupyter-kernelspec command. I've tested two methods to provide it (to be done before issuing the above commands):
jupyter-kernelspec is part of Jupyter and hence in the file tree of jupyter_env, so add its path to PATH (I found it's better to add to the end so as to not disrupt other path lookups during the Rscript call)
(R_env)> export PATH="$PATH:</path/to/conda>/envs/jupyter_env/bin"
jupyter-kernelspec is included in the jupyter_client conda package, so you can do
(R_env)> conda install jupyter_client
Caveat: this installs a number of dependencies, including Python.
I opted for the first method to keep R_env free of Python packages.
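Putting the first method together, the whole flow looks roughly like this (paths are placeholders, as above):

(R_env)> export PATH="$PATH:</path/to/conda>/envs/jupyter_env/bin"   # expose jupyter-kernelspec
(R_env)> Rscript -e "IRkernel::installspec()"                        # register the R kernel
(jupyter_env)> jupyter kernelspec list                               # verify the kernel (named ir by default) shows up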
In a project where I have to run some Jupyter notebooks, I created a virtual environment using pipenv and installed some packages (note that I used the --site-packages flag).
Although I am now able to run the notebooks with pipenv run papermill ..., I cannot run them from Jupyter using pipenv run or pipenv shell because of some ModuleNotFoundError exceptions.
In particular, the modules that are not found in the second case are the ones installed in the virtual environment only, not inherited from global site-packages.
Indeed, if I check the sys.path I can see the difference in the two cases: in the second there is no ~/.local/share/virtualenvs/... entry.
Why am I having this issue and how can it be solved? (If possible, I would prefer not to pollute my ~/.local/share/jupyter/kernels with other kernels from virtualenvs).
As was suggested here, you need to make sure that the kernel is also under the venv:
python -c "import IPython"   # sanity check: IPython must be importable from the venv's interpreter
python -m ipykernel install --user --name=my-virtualenv-name   # register the venv as a named kernel
and then switch to the kernel named "my-virtualenv-name" in the Jupyter user interface.
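With pipenv specifically, a minimal sketch of the same steps might be:

pipenv install ipykernel                                                   # add the kernel package to the venv
pipenv run python -m ipykernel install --user --name=my-virtualenv-name   # register it with Jupyter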
I'm new to using Jupyter on Miniconda, and I was having a problem while importing packages (ImportError: DLL load failed). Looking for answers, the solution was to initialize the base environment in my bash.
I used to start Jupyter by typing jupyter notebook in bash, but following the given solution, I now have to run conda activate base and then type jupyter notebook. What is the difference between starting Jupyter the way I used to and this new way?
The conda activate command activates a virtual environment. It is an isolated environment, so packages you installed in a virtual environment cannot be used outside it. It seems that your Jupyter is installed in the base environment, so you need to activate base before you can use it; a Jupyter installed in one environment cannot be used from another, and vice versa. It may be a little annoying at the beginning, but it lets you use different environments for different purposes. For example, since pip only allows one version of a specific package to be installed, different environments let you test a new version of a package without breaking the functionality of the original program.
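A quick way to see the difference is to check which jupyter binary the shell resolves before and after activation (a sketch; actual paths will vary by install):

which jupyter         # before: whatever jupyter, if any, is first on PATH
conda activate base
which jupyter         # after: the base environment's jupyter, e.g. ~/miniconda3/bin/jupyter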
I'm currently experiencing some trouble with Jupyter notebook and system shell commands. I use nb_conda_kernels to be able to access all of my conda environments from a Jupyter notebook launched in the base environment, and this works perfectly in most of my use cases. For simplicity's sake, let's assume I have two environments: the base one, and one named work_env. I launch Jupyter notebook in the base environment and select the work_env kernel upon opening the notebook I'm working on.
Today I came across this line:
! pip install kaggle --upgrade
Upon execution of the cell (with the work_env kernel correctly selected), pip installed the kaggle package into my base environment. The intended result was to install this package into work_env. Any ideas on how to make shell commands execute in the "right" environment from a Jupyter notebook?
Try specifying the current python interpreter.
import sys
!$sys.executable -m pip install kaggle --upgrade
sys.executable holds the path to the Python interpreter you are currently running. $ passes that variable to your terminal (! runs the command in the terminal).
Aliases expand Python variables just like system calls using ! or !! do: all expressions prefixed with ‘$’ get expanded. For details of the semantic rules, see PEP-215
from https://ipython.org/ipython-doc/3/interactive/magics.html
-m is used to run a library module (pip in this case) as a script (check python -h). Running pip as a script guarantees that you are using the pip linked to the current Python interpreter rather than whichever pip your system's PATH points to.
So, in this way you can be sure that pip is installing dependencies against the very same Python interpreter you are working on (the one installed in your current environment), which does the trick.
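To double-check that the package landed next to the kernel's interpreter, something like this should work (using kaggle, the package from the question):

import sys
print(sys.executable)                  # the interpreter backing this kernel
!$sys.executable -m pip show kaggle    # the Location: line should point inside work_env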