Multiple Python 3 kernels in a Docker Jupyter Notebook installation - python

I’m trying to add several different Python 3 kernels to a Docker-based Jupyter Notebook installation. This would be ideal, since many different notebooks need to run on this installation.
So far I have tried installing a virtual environment and adding a second kernel, but however I order the instructions, I always end up with the same kernel in both. Is this even possible? Everything I find online seems to be about separate Python 2 and Python 3 kernels. Any examples you could share? Thanks!
I tried creating a virtual environment, activating it and then adding it with ipykernel, but in the end both kernels pointed at the same Python version in Jupyter Notebook.
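One pattern that avoids this (a sketch, not taken from the question itself) is to skip activation entirely and register each kernel with that virtual environment's own python binary, so the generated kernel.json points at the venv interpreter rather than the base one. The paths and kernel name below are illustrative assumptions:

    # inside the running container (or as RUN steps in the Dockerfile)
    python3 -m venv /opt/venvs/py3-alt                  # hypothetical location for the second environment
    /opt/venvs/py3-alt/bin/pip install ipykernel        # ipykernel has to live inside the venv
    # register the kernel with the venv's own interpreter, so kernel.json points at the venv python
    /opt/venvs/py3-alt/bin/python -m ipykernel install --user \
        --name py3-alt --display-name "Python 3 (alt venv)"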

Related

Will there be a problem if I don't add Python to the environment variables? I have installed Python, worked in PyCharm, and haven't had any problems

I am working with Python 3.9 and PyCharm and it has worked fine so far. Any time I need to check something from the command prompt I write 'py [rest of the command]' and it does the work. I am trying to install Jupyter now, and will be learning Machine Learning in the future. Will this cause any problems?
There should not be any issues. The fact that you can run Python code with the py command means the py launcher is already reachable from your command prompt (the launcher is installed system-wide by the official Windows installer, so python.exe itself does not need to be on PATH). I am assuming that running Python programs is what you mean by 'does the work'.
Jupyter can be installed in two ways: with the Anaconda distribution, or with pip, Python's package manager. I would recommend the Anaconda distribution because, even though it is fairly large, it contains most of the other essentials required for machine learning.
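If you go the pip route instead, a minimal sketch using the py launcher you already rely on might look like this (the exact package choice is an assumption; Anaconda would give you Jupyter without these steps):

    # install Jupyter Notebook with pip, using the same interpreter the py launcher points to
    py -m pip install --upgrade pip
    py -m pip install notebook
    # start the notebook server from the folder you want to work in
    py -m notebook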

How to not install Jupyter in every conda environment?

TL;DR: Is there really no way to just tell jupyter console to run in some conda environment, without first unnecessarily installing (and hence depending on) Jupyter in that environment?
I really did try to make this not appear entirely like a rant... I hope you see there is an actual question here.
It seems like getting Jupyter to work with conda environments requires either
Installing a new Jupyter in every conda environment you want to use, or
Installing ipykernel in every conda environment you want to use (which depends on the jupyter package...), and creating a new kernel from within the environment.
I find this a bit astonishing, since I do not think of Jupyter as a requirement of the project, but rather as just another editor/IDE-like thing, making use of environments. Conda's purpose is to manage reproducible dependencies; Jupyter's should be to interpret code within the environment I tell it. Since I'd like to store the environment.yml in git and share it with others, I see no purpose in also requiring them to install Jupyter; they might not even use it.
Yet it seems not to work that way at all. It feels as if, in order to use Emacs with an environment, I had to install an "emacskernel" package into every environment -- and that's not how editors work.
What I would like is one globally installed Jupyter that can simply be pointed at different environments -- similar to how the Julia REPL works with julia --project=... (yes, I know conda is not a built-into-the-language package manager, but you get the analogy). This would sort of work if conda environments "inherited", i.e. fell back to the "global environment" for dependencies they don't contain, and you could just use the global Jupyter from within each one; but as I understand it, they don't.
Is this possible at all? What am I missing? Are there any better alternatives providing global Jupyter + local environments? (I must admit I have never used virtualenv or the like...)
(This older question seems to cover the same topic for pipenv, but there's no real answer there... neither a definite NO, nor an explanation of why.)
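For what it's worth, the closest widely used approximation to "one global Jupyter, many environments" still needs ipykernel (though not the full jupyter metapackage) inside each environment. A rough sketch, with an assumed environment name:

    # one Jupyter installation lives in the base environment
    conda install -n base notebook
    # a project environment only needs ipykernel, not the full jupyter metapackage
    conda create -n myproject python=3.10 ipykernel       # 'myproject' is a hypothetical name
    conda run -n myproject python -m ipykernel install \
        --user --name myproject --display-name "Python (myproject)"
    # the base environment's Jupyter now offers "Python (myproject)" as a kernel
    jupyter notebook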

How to use more than one conda environment in a project

I am working on a research project in which I need to use some scientific packages, each of which comes with its own requirements file listing the libraries it needs. I am coding in Python in Jupyter notebooks, using Anaconda on Windows 10.
Based on what I've read on the web, each project should have its own environment, so I created one (say project_env) using conda. In some parts of the project I need to use external scientific packages (let's call them 'bst' and 'MDN'), cloned from GitHub, each of which has its own specific dependencies.
My current practice is simply to install all of these dependencies in the same environment (project_env) and to write the whole project in one notebook. However, as the project grows, things are getting more complicated, and I am running into conflicts between installed packages even when installing them with conda. So I came up with the idea of keeping things as separate as possible, i.e. creating two additional environments for the external packages (bst_env and MDN_env) and using them whenever I need them. Under this scenario I cannot keep all my project code in one Jupyter notebook, because as far as I know there is no way to switch between environments from inside a notebook. But running different notebooks for different parts of the project is quite difficult and messy.
My question is: Is there a way to run more than one environment from a notebook? If not, what would be the best practice for handling these environments in a project? Should I export variables from my source code (run in project_env) to the other environments (bst_env or MDN_env) and activate and run their corresponding environments and notebooks every time, or is there a better way to do this?
I found a great package (nb_conda_kernels), which is exactly what I wanted. It lets you switch between environments (kernels) inside a Jupyter notebook, just by selecting from a list of available environments.
As described here (https://github.com/Anaconda-Platform/nb_conda_kernels), just run 'conda install nb_conda_kernels' in a conda terminal to install the package in the environment (kernel) from which you want to run the other environments (kernels). In my case (the question above) that is project_env. Also, make sure ipykernel is installed in the external environments you want to use in your notebook (in my case bst_env and MDN_env).
Now, while working in a notebook under environment 'A', you can use dependencies installed in environments 'B' or 'C' just by selecting those environments from the list of kernels in Jupyter Notebook.
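Put together, and reusing the environment names from the question, the workflow described above looks roughly like this (a sketch, not a verbatim command log):

    # in the environment that runs Jupyter (project_env in the question)
    conda install -n project_env nb_conda_kernels notebook
    # the external environments only need ipykernel in order to show up
    conda install -n bst_env ipykernel
    conda install -n MDN_env ipykernel
    # start Jupyter from project_env; bst_env and MDN_env appear in the kernel list
    conda activate project_env
    jupyter notebook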

Conflicts between multiple versions of conda

I use Miniconda to manage my Python environments on Windows 10. Additionally, I use software called ESRI ArcGIS Pro that comes bundled with its own versions of conda and Python, which are somewhat modified to work with their software. I must use ESRI's conda to manage environments that interact with this application.
I have the same setup on both my laptop and desktop, and until recently had no issues. Recently, however, ESRI's conda stopped working on my laptop. Any conda command (e.g. conda list, conda info --envs, conda create -n myenv, even just conda by itself) produces no output whatsoever. At first I suspected that PATH was set incorrectly, but I've checked that this is not the case (even calling ESRI's conda.exe with its full path still does not work). I then suspected that the conda.exe file itself was corrupted, but this also is not the case (I copied it to my desktop and it works fine there).
I suspect it may have something to do with my separate Miniconda installation. It doesn't seem to be an issue of environment variables being set incorrectly (again, checked against the working system), but I'm wondering whether there could be Registry entries (perhaps set by my Miniconda install) that are causing this issue.
Any thoughts on why this might be the case? Or advice on how to proceed with diagnosing the issue?
EDIT:
Per merv's request, here are my conda environment variables:
CONDA_DEFAULT_ENV=C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3
CONDA_PREFIX=C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3
CONDA_PS1_BACKUP=$P$G
Clearly these paths are different than normal due to the custom distribution.
To answer your other questions: no, other conda commands do not generate any output whatsoever either. As for activate, I don't have any other environments to try activating (the arcgispro-py3 env you see above is the name of the 'base' env that ships with the software), but deactivate seems to work. Another slight difference worth mentioning is that conda activate ... is not a command in this special conda; you have to use activate by itself, which AFAICT calls a shell script.
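Since the question asks how to proceed with diagnosing this, a few generic checks from a plain cmd prompt might help narrow it down; the full path to ESRI's conda.exe below is an assumption inferred from the environment paths shown above, not confirmed by the question:

    REM confirm which conda.exe wins on PATH, and whether more than one is found
    where conda
    REM list any CONDA_* variables that may be leaking in from the Miniconda install
    set CONDA
    REM call ESRI's conda directly and capture output, in case it is written but swallowed
    "C:\Program Files\ArcGIS\Pro\bin\Python\Scripts\conda.exe" info > conda_out.txt 2>&1
    type conda_out.txt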

Should I install the programming IDE inside or outside the conda environment? (TensorFlow 2 on Windows 10)

I have already set up everything required to run TensorFlow 2 on Windows 10. I also installed Miniconda and created an environment. I tested the TensorFlow 2 library by importing it in a Python console, and it runs like a charm. What I am missing now is an IDE for programming.
I just wonder where the best place to install the IDE (such as Jupyter, Spyder or others) is. Should I install it inside or outside the environment? Among the tutorials spread across the internet, some install it inside the environment and some install it outside instead.
I don't really know what the advantages and consequences of each choice are.
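Both options from the question can be written down as commands; the environment name tf2_env is an illustrative assumption:

    # option 1: install the IDE inside the environment (simple, but repeated for every env)
    conda activate tf2_env
    conda install jupyter spyder

    # option 2: keep one IDE outside and expose the environment to it as a kernel
    conda activate tf2_env
    conda install ipykernel
    python -m ipykernel install --user --name tf2_env --display-name "Python (TF2)"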
