Bokeh not working for Livelossplot in conda Jupyter notebook - python

I'm exploring livelossplot (https://github.com/stared/livelossplot) for plotting training and validation loss and accuracy in PyTorch, and the basic plots work.
I wanted an interactive plot (one I can zoom into). The library has a Bokeh-enabled facility (https://github.com/stared/livelossplot/blob/master/examples/bokeh.ipynb). I tried it, but it is not working for me.
Interface: Conda -> Jupyter notebook
OS: Linux
Browser: Firefox
Library: PyTorch
Suggestions on one or more of the following would be appreciated:
Is there a way to resolve the issue?
Is there an alternative to making this livelossplot interactive?
Is there an easy alternative to livelossplot?
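One interactive alternative (addressing the second question above) is to drive Bokeh directly from the training loop. The sketch below is only illustrative and is not livelossplot's API; the loop and loss values are placeholders. It calls output_notebook() so figures render inline, streams new values into a ColumnDataSource, and refreshes the figure with push_notebook, which gives a zoomable live plot.
from bokeh.io import output_notebook, push_notebook, show
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure

output_notebook()  # render Bokeh figures inline in the notebook

source = ColumnDataSource(data=dict(epoch=[], loss=[]))
fig = figure(title="Training loss", x_axis_label="epoch", y_axis_label="loss")
fig.line(x="epoch", y="loss", source=source)
handle = show(fig, notebook_handle=True)  # keep a handle so the figure can be updated in place

for epoch in range(10):                    # placeholder training loop
    loss = 1.0 / (epoch + 1)               # placeholder loss value
    source.stream({"epoch": [epoch], "loss": [loss]})
    push_notebook(handle=handle)           # push the new point to the live, zoomable figure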

Related

Tensorflow slows intellisense (type-hint) in jupyter notebook

I'm fairly new to TensorFlow, but type hints taking forever to pop up are getting on my nerves, so I was curious if anybody knew any fixes.
While using TensorFlow 2.0 in a VS Code Jupyter notebook, the IntelliSense dropdown takes up to 2 seconds to actually appear. This only happens when I import TensorFlow; no other library slows down my IntelliSense.
I'm on an Intel Mac for reference. Not the fastest machine, but still.
What I have tried so far: restarting everything and updating TensorFlow.
Using these two imports seemed to help a bit:
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model
What I recommend is starting over with TensorFlow in a Jupyter notebook, using a dedicated virtual environment created in Anaconda just for TensorFlow 2.0.
Here is a link to detailed instructions:
Anaconda installs TensorFlow 1.15 instead of 2.0
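As a concrete sketch (the environment name tf2 and the exact Python/TensorFlow versions below are only illustrative choices), the fresh environment and its notebook kernel could be created like this:
conda create -n tf2 python=3.7
conda activate tf2
pip install tensorflow==2.0.0 ipykernel
python -m ipykernel install --user --name tf2 --display-name "Python (tf2)"
After that, select the "Python (tf2)" kernel in the notebook.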

Screen freeze when training deep learning model from terminal but not Pycharm

I have an extremely weird issue: if I run PyTorch model training from PyCharm, it works fine, but when I run the same code in the same environment from the terminal, it freezes the screen. All windows become non-interactable. The freeze affects only me, not other users, and for them running top shows that the model is no longer training. The issue is consistent and reproducible across machines, users, and GPU slots.
All dependencies are installed into a conda environment, dl_segm_auto. In PyCharm I have it selected as the interpreter, and parameters are passed through Run -> Edit Configurations.
From terminal, I run
conda activate dl_segm_auto
python training.py [parameters]
After the first epoch the entire remote session freezes.
Suggestions greatly appreciated!
The issue was caused by matplotlib's backend taking over the screen on Linux. Any of the following can solve the problem:
Installing PyQt5 (which changes the environment's default matplotlib backend)
Running from PyCharm (which uses a backend selector on startup)
Calling matplotlib.use('Qt5Agg') (or another suitable backend) at the start of the plotting functions or the top-level script, as in the sketch below.
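As a sketch of the third option (the plot_history helper name is only illustrative and not from the original code), the backend can be pinned before pyplot is imported; with the non-interactive Agg backend, plots go to files and nothing grabs the screen during training:
import matplotlib
matplotlib.use("Agg")              # or "Qt5Agg" if PyQt5 is installed and a display is available
import matplotlib.pyplot as plt    # import pyplot only after the backend is set

def plot_history(losses, path="losses.png"):  # illustrative helper
    plt.figure()
    plt.plot(losses)
    plt.xlabel("epoch")
    plt.ylabel("loss")
    plt.savefig(path)              # write to disk instead of opening a window
    plt.close()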

Create a custom SageMaker image with recent Python release

I am using SageMaker Notebook Instances on AWS.
Looks like we can only use Python 3.6 kernels.
I would like to be able to use Python 3.10 (latest version, or at least Python 3.9) in a notebook.
So far, what I have tried is based on lifecycle configurations: https://docs.aws.amazon.com/sagemaker/latest/dg/studio-byoi-create-sdk.html
But somehow it didn't work (I was not able to use the recent kernel in the notebook).
I have found an interesting link: https://github.com/aws-samples/sagemaker-studio-custom-image-samples
but my knowledge is a bit limited and I do not know exactly which example I should follow.
Any advice or leads you could suggest, please?
Thanks
The SageMaker Data Science kernel supports Python 3.6 at the moment.
If you need a persistent custom kernel in SageMaker Studio, you can create an ECR repository and build a Docker image with custom environment configurations. This image can then be attached to SageMaker Studio notebooks. Reference link!
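As a rough sketch (the repository name custom-python310 and the <account-id>/<region> placeholders are hypothetical; the Dockerfile itself would install Python 3.10 plus ipykernel, as in the samples repository linked in the question), the image can be built and pushed to ECR like this:
aws ecr create-repository --repository-name custom-python310
docker build -t custom-python310 .
docker tag custom-python310:latest <account-id>.dkr.ecr.<region>.amazonaws.com/custom-python310:latest
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/custom-python310:latest
Once the image is in ECR, it can be attached to SageMaker Studio by following one of the examples in the sagemaker-studio-custom-image-samples repository.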

Confused with setting up ML and DL on GPU

My goal is to set up my PC for machine learning and deep learning on my GPU. I've read about all the different components; however, I cannot connect the dots on what I actually need to do.
OS: Ubuntu 20.04
GPU: Nvidia RTX 2070 Super
Anaconda: 4.8.3
I've installed the nvidia-cuda-toolkit (10.1.243), but now what?
How does this integrate with jupyter notebook?
The Python modules and tools I want to work with are:
turicreate - I've gotten this to run off CPU but not GPU
scikit-learn
tensorflow
matlab
I know cuDNN and pyCUDA fit in there somewhere.
Any help is appreciated. Thanks
First of all, my experience is limited to Ubuntu 18.04 and 16.xx and Python DL frameworks, but I hope some suggestions will be helpful.
If I were familiar with Docker, I would consider using Docker rather than setting everything up from scratch. This approach is described in the section about the TensorFlow container.
If you decide to set up all the components yourself, please see this guideline.
I used some of its contents for 18.04, successfully.
Be careful with automatic updates: after the configuration is finished and tested, protect it from being overwritten by a newer version of CUDA or TensorRT.
Answering one of your sub-questions, "How does this integrate with jupyter notebook?": it does not, because it does not need to. The CUDA library cooperates with a framework such as TensorFlow, not with Jupyter. Jupyter is just an editor and execution controller on the server side.
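Once a framework is installed, a quick sanity check from a notebook cell tells you whether the GPU is actually visible to it. A minimal sketch, assuming a recent TensorFlow 2.x (adapt it to whichever framework you end up using):
import tensorflow as tf

# Lists the GPUs TensorFlow can see; an empty list means it will run on CPU only
print(tf.config.list_physical_devices("GPU"))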

When using tensorflow and numpy in one notebook: The kernel appears to have died. It will restart automatically

When I try to run code using Keras and NumPy, or Keras and Matplotlib, in one Jupyter notebook, I always get the message: The kernel appears to have died. It will restart automatically.
When I run the code in two different notebooks it works perfectly fine. I installed everything using Anaconda and I am on macOS. I would really appreciate an answer; everything else I've found and tried so far did not work. Thank you!
