Get conda environment variable inside Jupyter Notebook - python

With my conda environment activated I set an environment variable with the command
conda env config vars set ROOT_DIRECTORY=$PWD
Now, if I run echo $ROOT_DIRECTORY, the output shows /home/augusto/myproject
How can I get that variable inside Jupyter Notebook? I tried with the command below, but the output shows None.
import os
print(os.getenv('ROOT_DIRECTORY'))
By the way, I made sure that Jupyter Notebook is using the correct kernel. Running the above code inside a .py file works correctly, i.e. the output shows /home/augusto/myproject.

Find the kernel directory:
jupyter kernelspec list
cd path_to_kernel_dir
sudo nano kernel.json
Add an "env" section to the JSON.
The kernel.json should then look something like this:
{
"argv": [
"/home/jupyter-admin/.conda/envs/tf/bin/python3",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"env": {"LD_LIBRARY_PATH":"/home/jupyter-admin/.conda/envs/tf/lib/"},
"display_name": "tf_gpu",
"language": "python",
"metadata": {
"debugger": true
}
}
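After editing the kernel.json and restarting the kernel, you can confirm from a notebook cell that the variable is now visible. A minimal sketch (the fallback string is just a placeholder for when the edit has not taken effect yet):

```python
import os

# Restart the kernel after editing kernel.json, otherwise the
# variable will still be missing from the kernel's environment.
root = os.environ.get("ROOT_DIRECTORY", "<not set>")
print(root)
```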

Related

VS Code Jupyter integration does not consider custom LD_LIBRARY_PATH

I recently set up a fresh EC2 instance for development running Amazon Linux 2. To run the recent version of prefect (https://orion-docs.prefect.io/) I had to install an up-to-date version of SQLite3, which I compiled from source. I then set the LD_LIBRARY_PATH environment variable to "/usr/local/lib", and installed Python 3.10.5 with the LDFLAGS and CPPFLAGS compiler arguments including that folder as well, so that the new sqlite libraries are found by Python. All good so far: when running the jupyter notebook server or the prefect orion server from the terminal, everything works fine. But if I want to use the integrated Jupyter environment in VS Code, I run into the issue that the kernel does not start:
Failed to start the Kernel.
ImportError: /home/mickelj/.pyenv/versions/3.10.5/lib/python3.10/lib-dynload/_sqlite3.cpython-310-x86_64-linux-gnu.so: undefined symbol: sqlite3_trace_v2.
This leads me to believe that the system sqlite library is being used, as this is the same error I get when I unset the LD_LIBRARY_PATH env variable. However, when calling
ldd /home/mickelj/.pyenv/versions/3.10.5/lib/python3.10/lib-dynload/_sqlite3.cpython-310-x86_64-linux-gnu.so
I get the following:
linux-vdso.so.1 (0x00007ffcde9c8000)
libsqlite3.so.0 => /usr/local/lib/libsqlite3.so.0 (0x00007f96a3339000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f96a311b000)
libc.so.6 => /lib64/libc.so.6 (0x00007f96a2d6e000)
libz.so.1 => /lib64/libz.so.1 (0x00007f96a2b59000)
libm.so.6 => /lib64/libm.so.6 (0x00007f96a2819000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f96a2615000)
/lib64/ld-linux-x86-64.so.2 (0x00007f96a3870000)
Here the new sqlite3 library is correctly referenced. If I unset the LD_LIBRARY_PATH variable, the second line changes to:
libsqlite3.so.0 => /lib64/libsqlite3.so.0 (0x00007f9dce52e000)
So my guess is that the VS Code jupyter integration does not consider environment variables, so my question is: is there a way to specify them (and in particular the LD_LIBRARY_PATH) globally for VS Code or for the built-in jupyter server at runtime or anywhere else to fix this?
Recently, the Jupyter extension has been fixing .env-related problems.
You can try installing VS Code Insiders along with the pre-release version of the Jupyter extension.
Using ipykernel to create a custom kernel spec with the env variable solved this for me.
Steps:
Create a kernelspec for your environment:
conda activate myenv  # activate that env, using conda as an example
# pip install ipykernel  # in case you don't have it
python -m ipykernel install --user --name myenv_ldconf
Edit the kernelspec file and add the env variable to the JSON object:
nano ~/.local/share/jupyter/kernels/myenv_ldconf/kernel.json
You will see something like this:
{
"argv": [
"/home/alice/miniconda3/envs/myenv/bin/python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "myenv_ldconf",
"language": "python",
"metadata": {
"debugger": true
}
}
After adding the env variable:
{
"argv": [
"/home/alice/miniconda3/envs/myenv/bin/python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "myenv_ldconf",
"language": "python",
"env": {"LD_LIBRARY_PATH": "/home/alice/miniconda3/envs/myenv/lib"},
"metadata": {
"debugger": true
}
}
Ref: How to set env variable in Jupyter notebook
Finally, change the kernel in VS Code to myenv_ldconf.

Is there any way to use jupyter notebook to run different python version in the same notebook?

I have added the conda environment to my Jupyter notebook, but the Python version is still 3.8 (Fig 1). What I would like to do is to create an environment which contains Python version 3.7 in a Jupyter notebook without starting from a command prompt and running two separate Jupyter notebooks (Fig 2). Is it possible to have simply one jupyter notebook with two separate environments and two different Python versions?
Fig1
Fig2
Please read this page of the docs for ipykernel.
In your environment you want to install only ipykernel (not full Jupyter), and use the ipykernel install --name command to register the kernel with Jupyter.
If that does not work, use jupyter kernelspec list to see which kernels Jupyter sees and where. A kernelspec is at minimum a kernel.json file in the right place that tells Jupyter how to launch a kernel.
For example I have the following
$ cat ~/miniconda3/share/jupyter/kernels/python3/kernel.json
{
"argv": [
"~/miniconda3/bin/python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "Python 3 (ipykernel)",
"language": "python",
"metadata": {
"debugger": true
}
}
I can use the documentation above, or create the following by hand:
$ cat ~/miniconda3/share/jupyter/kernels/python3.6/kernel.json
{
"argv": [
"~/miniconda3/envs/mypython36env/bin/python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "Python 3.6 !!",
"language": "python",
"metadata": {
"debugger": true
}
}
and assuming I have the corresponding Python 3.6 env, I'll get two kernels, one of them being Python 3.6.

How can I use venv with SublimeREPL in Sublime Text 3?

For starters, here is my dev environment:
Windows 7 (although I have the same issue on another machine that is Windows 10)
Python 3.6
Git Bash
Sublime Text 3 (version 3.1.1, Build 3176)
SublimeREPL
In Git Bash, I created a new virtual environment:
$ mkdir ~/.venv
$ cd ~/.venv
$ python -m venv test-env
To activate that virtual environment, I use:
$ source ~/.venv/test-env/Scripts/activate
NOTE: I had to modify the activate script (NOT activate.bat) to get the venv to activate properly. Specifically, I changed line 40 which looked something like:
VIRTUAL_ENV="C:\Users\my_user_name\.venv\test-env"
to
VIRTUAL_ENV="/c/Users/my_user_name/.venv/test-env"
Now, when I am in the test-env virtual environment (as evidenced by the "(test-env)" text in Git Bash), I can do the usual stuff like
(test-env)
$ pip install numpy
I also have the SublimeREPL package installed in Sublime Text 3. I set up a new build system (SublimeREPL-python.sublime-build) that looks like:
{
"target": "run_existing_window_command",
"id": "repl_python_run",
"file": "config/Python/Main.sublime-menu"
}
Now, suppose I have a script
# test.py
import numpy as np
print('numpy was imported without error')
I can type Ctrl+Shift+B, then start typing 'repl', which autoselects the SublimeREPL-python build, then hit Enter. The SublimeREPL appears, but generates an error:
Traceback (most recent call last):
File "test.py", line 2, in <module>
import numpy as np
ModuleNotFoundError: No module named 'numpy'
>>>
SublimeREPL was called without using my virtual environment, which causes it to throw the error because numpy wasn't installed in my global python environment.
How can I run my Python script from Sublime Text 3 using SublimeREPL and accessing my virtual environment that was created using venv?
FWIW, I already tried creating a Sublime Project for this code and adding the following to the .sublime-project file:
"build_systems":
[
{
"file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
"name": "test-env",
"selector": "source.python",
"shell_cmd": "\"C:\\Users\\my_user_name\\.venv\\test-env\\Scripts\\python\" -u \"$file\""
}
]
This allowed me to type Ctrl+Shift+B, then "test-env", then Enter (to build with the test-env build system I just created), which worked as expected and ran without error. However, it does not use SublimeREPL, which I'd like so that I can debug my code (which is more complicated than the simple test script I posted above!) and explore the variables in the REPL rather than just running code in the console.
I know it's not a full answer, but a partial solution that might help anyone else in the same situation as me. Also, SublimeREPL support is almost nonexistent.
Note: This solution requires modifying Main.sublime-menu for every environment and assumes SublimeREPL already runs interactively (adding "-i" for execution).
Browse packages and open SublimeREPL/config/Python/Main.sublime-menu.
Search for the line with "id": "repl_python_run".
Duplicate the nearest content inside the curly brackets,
basically pasting the following after the same closing bracket (note the "-i" option in "cmd" to run interactively):
{"command": "repl_open",
"caption": "Python - RUN current file",
"id": "repl_python_run",
"mnemonic": "R",
"args": {
"type": "subprocess",
"encoding": "utf8",
"cmd": ["python", "-u", "-i", "$file_basename"],
"cwd": "$file_path",
"syntax": "Packages/Python/Python.tmLanguage",
"external_id": "python",
"extend_env": {"PYTHONIOENCODING": "utf-8"}
}
},
Change the id repl_python_run to whatever you like, e.g., repl_my_first_env_python.
Change the python command from "cmd": ["python", "-u", "-i", "$file_basename"] to use wherever your virtual environment's Python executable is, e.g., /home/me/.virtualenvs/my_first_env/bin/python (Linux example). Save the file.
Edit your project to include the following inside the square brackets of "build_systems":
(If you have no project, create a new build system and paste the next block of code.)
{
"name": "my first env",
"target": "run_existing_window_command",
"id": "repl_my_first_env_python",
"file": "config/Python/Main.sublime-menu"
},
Finally, save.
Now when you press Ctrl+Shift+B you'll see my first env as an option.

How to automatically load profile in IPython/Jupyter notebooks?

In earlier versions of IPython it was possible to load a specific profile by
ipython notebook --profile=my_profile
so that I can include things like autoreload in my_profile.
Since using IPython 4.0.1 (Jupyter actually) I'm getting the
[W 09:21:32.868 NotebookApp] Unrecognized alias: '--profile=my_profile', it will probably have no effect.
warning, and the profile is not loaded. Have you come across a workaround?
In Jupyter create a kernel for each profile.
Find the kernels directory under the jupyter configuration directory {jupyter configuration}/kernels (on my Mac this is $HOME/Library/Jupyter/kernels):
$ mkdir profile-x-kernel
$ cat << EOF > profile-x-kernel/kernel.json
{
"display_name": "Profile X (Python 3)",
"language": "python",
"env": { },
"argv": [ "python3", "-m", "ipykernel", "--profile", "profile_x", "-f", "{connection_file}" ]
}
EOF
Then you can select the kernel from the kernel menu in Jupyter, without having to restart the whole notebook server.
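To confirm from a running notebook which profile the kernel actually loaded, the interactive shell object exposes it. A sketch (get_ipython() only exists inside IPython/Jupyter, hence the guard):

```python
# Run in a notebook cell; get_ipython() is injected by IPython.
try:
    ip = get_ipython()
    print(ip.profile)  # the profile name the kernel was started with
except NameError:
    print("not running under IPython")
```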

IPython kernels: specifying default directory in kernel.json

When defining my own custom ipython/jupyter kernels, is there a way to set a kernel specific default directory directly in the kernel.json file?
{
"argv": [ "python2", "-m", "IPython.kernel",
"-f", "{connection_file}"],
"display_name": "tmp_kernel",
"language": "python"
}
My goal is to create a kernel that per default creates its notebooks in my /tmp directory.
At the moment I am starting ipython in multiple terminals, which is rather inconvenient.
