I am working with Jupyter Notebooks and several virtualenvs on Ubuntu 18.04 (I'm not using Conda).
I usually create a new virtualenv for each Python project I work on, and since I work on many projects I would like to AVOID creating multiple Jupyter kernels, as suggested here.
Instead, I would like to tell Jupyter to start its Python kernel in whichever virtualenv I run it from, BY DEFAULT.
I know this is possible because it is the way it used to work, but I don't know what I did wrong: now, unless I create a kernel for each virtualenv, Jupyter doesn't let me use the Python modules installed in that environment.
The way it used to work was the following:
I activated the virtualenv:
source bin/activate
I installed Jupyter in that virtualenv
I ran Jupyter from that virtualenv:
jupyter notebook
I selected the Python 3 (or Python 2, or R) kernel, depending on what I needed
I could then import all the modules installed in that virtualenv
Now it no longer works: if I try to import a module installed in the virtualenv, I get the following error:
ModuleNotFoundError: No module named 'modulename'
This happens even though I have checked that the Jupyter notebook is looking in the right paths:
! which python
/home/user/venv_name/bin/python
and
! which pip
/home/user/venv_name/bin/pip
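For completeness, I also check the interpreter the kernel itself is running, since ! which python is executed in a subshell and only reports the shell's python, not necessarily the kernel's:
import sys
# sys.executable is the interpreter actually running this kernel; if it points
# outside the venv, the kernel is not using the venv's python even though
# `which python` says otherwise
print(sys.executable)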
How can I bring back the old setup?
Related
I'm trying to import tensorflow as tf in my script. While everything works fine in a notebook, when I try to recreate the same script in a .py file the import fails with the following, rather popular, message:
ModuleNotFoundError: No module named 'tensorflow'
Note that both Jupyter and the terminal are using the same virtual environment: in the notebook it is selected as the kernel, and in the terminal it is activated with conda.
I was convinced the whole kernel/environment idea was about preventing exactly this kind of situation. I'm new to Python, though, so maybe I'm missing something basic.
Also, this might have something to do with the fact that I'm on an M1 Mac running macOS, so I have tensorflow-macos installed. But please consider that there might be other causes.
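For reference, this is the check one can run in both the notebook and the terminal session (standard library only) to compare the two interpreters:
import sys
# if the notebook and the .py script print different paths here, they are not
# running on the same interpreter, whatever the environment names suggest
print(sys.executable)
print(sys.version)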
Make sure your pyenv installation isn't interfering with your conda environments. pyenv can override which Python installation is used even when a conda environment has selected another one; in my case, that was the cause (Jupyter Notebook itself wasn't affected by pyenv). There are two solutions, and a quick diagnostic is sketched after them:
Remove pyenv
Make sure you understand how both pyenv and conda affect your setup, and correct it. Some considerations and suggestions are mentioned in this question: Installing anaconda with pyenv, unable to configure virtual environment
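As a diagnostic (standard library only), compare the interpreter running your code with the one your PATH would resolve; if pyenv shims are hijacking python, the two paths will typically disagree:
import shutil
import sys
# interpreter executing this code vs. what a bare `python` command would launch
print("Running on:  ", sys.executable)
print("PATH resolves:", shutil.which("python"))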
I am having problems installing modules and then importing them into specific Jupyter Notebook kernels. I want to install them directly into the kernel, as opposed to Anaconda-wide, to keep project dependencies separate. Here is how the problem goes:
First, I want a package, for example nltk
I navigate to and activate the conda environment (called python3) and run 'conda install nltk'
I then load that environment into Jupyter using ipykernel with the command 'python -m ipykernel install --user --name python3'
When I try to import the package in the notebook, it tells me that it cannot be found
I have been struggling with this for a while. Where am I going wrong? I greatly appreciate all the help.
NOTE: I have somehow managed to install and import many packages into notebooks using the aforementioned process. I'd really like a method to do this in a foolproof manner.
Not entirely clear where things go wrong, but perhaps clarifying some of the terminology could help:
"navigate to...the conda environment" - navigating has zero effect on anything. Most end-users should never enter or directly write to any environment directories.
"...and activate the conda environment" - activation is unnecessary - a more robust installation command is always to use a -n,--name argument:
conda install -n python3 nltk
This is more robust because it is not context-sensitive, i.e., it doesn't matter what (if any) environment is currently activated.
"load that environment into Jupyter using ipykernel" - that command registers the environment as a kernel at a user-level. That only ever needs to be run once per kernel - not after each new package installation. Loading the kernel happens when you are creating (or changing the settings of) a notebook. That is, you choose the kernel in the Jupyter GUI.
Even better, keep Jupyter in a dedicated environment with nb_conda_kernels installed, and Jupyter (launched from that dedicated environment) will auto-discover all conda environments that have a valid kernel package installed (e.g., ipykernel, r-irkernel).
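To verify what is registered, here is a short sketch using jupyter_client (which ships with Jupyter) that lists every kernel spec and the interpreter it launches; kernel names and paths will of course differ on your machine:
from jupyter_client.kernelspec import KernelSpecManager
# get_all_specs() maps kernel name -> spec dict; spec["spec"]["argv"] is the
# command the kernel runs, and its first element is the interpreter path
for name, spec in KernelSpecManager().get_all_specs().items():
    print(name, "->", spec["spec"]["argv"][0])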
I'm new to using Jupyter on Miniconda, and I was having a problem importing packages (ImportError: DLL load failed). Looking for answers, the solution I found was to activate the base environment in my bash shell.
I used to start Jupyter by typing jupyter notebook in bash; with the solution given, I first have to run conda activate base and then type jupyter notebook. What is the difference between starting Jupyter the way I used to and this new way?
The conda activate command activates a virtual environment. A conda environment is isolated, so all the packages you install in it cannot be used outside it. It seems you installed your Jupyter into one particular environment, so you cannot use that environment's Jupyter from another environment, and vice versa.
It may be a little annoying at the beginning, but it lets you use different environments for different purposes. For example, since pip only allows one version of a given package to be installed at a time, separate environments let you test a new version of a package without breaking the functionality of the original program.
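To see which environment a given shell or notebook process is actually using, here is a simple standard-library check (note that CONDA_DEFAULT_ENV reflects the environment that was active when the process started):
import os
import sys
print("Interpreter:", sys.executable)
print("Conda env:  ", os.environ.get("CONDA_DEFAULT_ENV", "<none active>"))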
I'm currently experiencing some trouble with Jupyter notebook and system shell commands. I use nb_conda_kernels to be able to access all of my conda environments from a Jupyter notebook launched in the base environment, and this works perfectly in most of my use cases. For simplicity's sake, let's assume I have two environments, the base one and one named work_env. I launch Jupyter notebook in the base environment, and select the work_env kernel upon opening the notebook I'm working on.
Today I came across this line:
! pip install kaggle --upgrade
Upon execution of the cell (with the work_env kernel correctly selected), pip installed the kaggle package into my base environment. The intended result was to install the package into work_env. Any ideas on how to make shell commands execute in the "right" environment from a Jupyter notebook?
Try specifying the current Python interpreter explicitly:
import sys
!$sys.executable -m pip install kaggle --upgrade
sys.executable returns the path to the Python interpreter you are currently running, and the $ prefix expands that variable into the shell command (! runs the command in a shell).
Aliases expand Python variables just like system calls using ! or !! do: all expressions prefixed with ‘$’ get expanded. For details of the semantic rules, see PEP-215
from https://ipython.org/ipython-doc/3/interactive/magics.html
-m is used to run a library module (pip in this case) as a script (check python -h). Running pip as a script guarantees that you are using the pip tied to the current Python interpreter rather than whichever pip your PATH resolves first.
This way you can be sure that pip installs dependencies for the very same Python interpreter you are working on (the one installed in your current environment), which does the trick.
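On recent IPython versions (7.3 and later) there is also a built-in %pip magic that targets the kernel's own environment directly, avoiding the variable expansion entirely:
# %pip is a line magic, not a shell escape, so it always installs into the
# kernel's interpreter rather than whatever pip the subshell's PATH finds
%pip install kaggle --upgrade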
I'm pretty new to programming so forgive my ignorance.
When I install certain Python packages/modules from the command prompt, I am able to import them fine in Jupyter Notebook. But some modules Jupyter Notebook cannot import unless I install them via conda first. Why is this?
The problem seems to be related to the system and environment you are using, not to the programming :)
Since you are a beginner, let us understand the concepts first rather than solving the problem.
Python code is run on an interpreter that is installed on your machine.
Jupyter is a web application that runs code through a given language's interpreter. So Jupyter, on its own, doesn't run your code; it uses an interpreter installed on a system (your local machine).
Conda is a package manager and also an environment manager. That means using conda, you can create a virtual environment on your machine and that virtual environment can have its own installation of an interpreter. This virtual environment can have its own copy of the packages/modules as well.
Now comes the best part: the Jupyter notebook can be asked to use any of these interpreters, including the ones installed in a virtual environment.
Most likely, you are running the Jupyter notebook from an environment that doesn't have the required dependencies. Either run the Jupyter notebook from an environment that does, or install the required packages into the environment where your Jupyter notebook is running.
To know which environment is being used by your Jupyter notebook, run the lines below in a Jupyter notebook cell:
import sys
sys.executable
If you don't get something like /usr/bin/python, then Jupyter is running inside an environment, and you have to install all the packages/modules inside that environment.
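As a follow-up check, you can also ask the interpreter where it would import a given module from; numpy below is just a placeholder for whichever module refuses to import:
import importlib.util
# find_spec returns None when the module is not installed in this environment
spec = importlib.util.find_spec("numpy")
print(spec.origin if spec else "module not found in this environment")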