IPython (Jupyter) vs Python (PyCharm) performance

Is there any performance difference between code run in IPython (Jupyter, for example) and the same code run in "standard" Python (PyCharm, for example)?
I'm working on a neural network for a project where I need some kind of presentation, and Jupyter + IPython does the job, but I was wondering whether there is any difference in performance between Python and IPython, since I need to train my network and I obviously prefer the faster method.

According to this link, there shouldn't be a difference between the two if you are running a fresh run of the script, although IPython has enhanced features compared to the normal Python interpreter (I would stick with it).
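If you want to verify this on your own workload, one simple approach is to put the timing inside the script itself, so the exact same measurement runs no matter how the script is launched. This is just an illustrative sketch; the script name and the workload are made up, so substitute your own training code:

    # benchmark_me.py -- hypothetical script; replace workload() with your training loop
    import time

    def workload():
        total = 0.0
        for i in range(10_000_000):
            total += i * 0.5
        return total

    start = time.perf_counter()
    workload()
    print(f"elapsed: {time.perf_counter() - start:.2f} s")

Run it with python benchmark_me.py from a terminal and with %run benchmark_me.py in IPython/Jupyter. Because the timer lives inside the script, any difference you see comes from the interpreter, not from the frontend.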

Related

What is the difference between a PyTorch kernel and a Python kernel in Jupyter (or PySpark, TensorFlow, XGBoost)?

I noticed that when I create a new Jupyter notebook, I can choose between different kernels that are all Python-based.
I understand when you would choose an R kernel vs a Python kernel, because you would be writing different code in those situations.
However, what I am really curious about is the difference between a PyTorch kernel (or another Python-based package's kernel) and the plain Python kernel.
Are there certain advantages to using the PyTorch kernel for a PyTorch project, or would the Python kernel work just as well?
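Such kernels are usually the same CPython interpreter pointed at different environments (for instance, a conda environment that already has PyTorch installed), not a different language runtime. Assuming that's the case for your setup, a quick check you can run in a cell of each kernel:

    import sys
    print(sys.executable)   # path of the interpreter backing this kernel

    # succeeds only if this kernel's environment has PyTorch installed
    import torch
    print(torch.__version__)

If both kernels point at environments that have PyTorch, there is no performance reason to prefer one over the other; the choice is only about which packages are available.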

IntelliSense not recognizing variables returned by sklearn as an array type

(My environment is VS Code using Pylance.)
For example, arrays returned from train_test_split() or LinearRegression.predict() are not recognized as arrays and do not get any autocomplete suggestions. In Google Colab and Spyder, after typing my_returned_array. I get a long list of available array functions, but in VS Code with Pylance I get nothing.
Is there some additional configuration I need to do, or is there some other extension I need to use?
Google Colab and Spyder maintain the state of all variables as parts of the program are run. VS Code does a similar thing with the Jupyter Notebooks extension while running in debug mode, allowing me to explore the states of variables, but Pylance doesn't seem to pick up state information from the Jupyter notebook. While this would be a wonderful feature, it is not necessary behavior.
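Pylance analyzes types statically, and some sklearn functions are annotated so loosely that the inferred return type gives it nothing to complete on. One common workaround (a sketch, not an official fix; the variable names and random data here are made up) is to annotate the variable yourself so the static analyzer knows it is an ndarray:

    import numpy as np
    from numpy.typing import NDArray
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    X = np.random.rand(100, 3)
    y = np.random.rand(100)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

    model = LinearRegression().fit(X_train, y_train)

    # The explicit annotation tells Pylance the static type, so array
    # methods autocomplete again even though predict()'s stub is vague.
    preds: NDArray[np.float64] = model.predict(X_test)
    preds.mean()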

SampleRNN - PyTorch implementation beginner

I'm trying to start working with this: https://github.com/deepsound-project/samplernn-pytorch
I've installed all the library dependencies through the Anaconda console, but I'm not sure how to run the Python training scripts.
I guess I just need general help with getting a Git-hosted RNN in Python working? I've found a lot of tutorials that show working from notebooks in Jupyter, or even from scratch, but I can't find ones working from Python code files.
I'm sorry if my terminology is backward; I'm an architect who is attempting coding, not a software engineer.
There are instructions for getting the SampleRNN implementation working in a terminal on the GitHub page. All of the commands listed there are for calling the Python scripts from the terminal, not from a Jupyter notebook. If you've installed all the correct dependencies, then in theory all you should need to do is run those terminal commands to try it out (see the sketch below if you'd rather launch them from Python).
FYI, it took me a while to find a combination of parameters with which this model would train without running into memory errors, though I was working with my own dataset, not the one provided. It's also very intensive: the default training time is 1000 epochs, which even on my relatively capable GPU was prohibitively long, so you might want to reduce that value considerably just to reach the end of a training cycle, unless you have a sweet setup :)
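If you'd rather stay inside Python (or a notebook cell) than type into a terminal, you can invoke the same script through the subprocess module. This is a generic sketch: train.py is the repo's script, but the arguments shown are placeholders you should replace with the ones listed in the project's README:

    import subprocess

    # Equivalent to typing the command in a terminal from the repo's directory.
    # Replace the placeholder arguments with the ones from the README.
    subprocess.run(
        ["python", "train.py", "--exp", "my_experiment"],
        cwd="samplernn-pytorch",  # path to your clone of the repo
        check=True,               # raise an error if the script exits non-zero
    )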

Is there any way to get two different languages' kernels to run on the same Jupyter notebook?

I am creating a course on switching from MATLAB to Python, and it would be great if I could have a (series of) Jupyter notebook(s) that run both Python and MATLAB code. I successfully managed to create notebooks running MATLAB code; however, is there any way to have two kernels running in the same notebook, one for Python and one for MATLAB?
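A notebook is bound to a single kernel, so you can't literally run two at once in one file. One workaround (a sketch, assuming you have MATLAB and its Python engine installed) is to use a Python kernel and drive MATLAB through the official MATLAB Engine API for Python, so Python and MATLAB calls sit side by side in the same notebook:

    # Requires MATLAB plus its Python engine, installed from
    # <matlabroot>/extern/engines/python with `python setup.py install`.
    import matlab.engine

    eng = matlab.engine.start_matlab()  # launches a background MATLAB session
    result = eng.sqrt(4.0)              # call a MATLAB built-in from Python
    print(result)                       # 2.0
    eng.quit()

Alternatively, multi-language kernels such as SoS (Script of Scripts) exist specifically for mixing languages cell by cell, and may be worth a look for a teaching notebook.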

Does running IPython/Jupyter Notebook affect the speed of the program?

I am developing a program for simulation (kind of like a numerical solver), and I am developing it in an IPython notebook. I am wondering if the speed of code running in the notebook is the same as the speed of the same code run from the terminal.
Would browser memory, overhead from the notebook, and things like that make code run slower in the notebook compared to a native run from the terminal?
One thing that could slow things down a lot is having a lot of print statements in your simulation.
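You can get a feel for this with a toy loop: in a notebook, each printed line is serialized and shipped to the browser, so the chatty version can be dramatically slower than the quiet one. The numbers will vary a lot by machine and frontend; this is just an illustrative sketch:

    import time

    def run(n, chatty):
        start = time.perf_counter()
        total = 0
        for i in range(n):
            total += i
            if chatty:
                print(i)  # in a notebook, each line round-trips through the messaging layer
        return time.perf_counter() - start

    print("quiet :", run(100_000, chatty=False))
    print("chatty:", run(100_000, chatty=True))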
If you run the kernel, server, and browser on the same machine, and assuming your simulation would otherwise have used all the cores of your computer, then yes, using the notebook will slow things down. But no more than browsing Facebook or YouTube while the simulation is running. Most of the overhead of using IPython actually occurs when you press Shift-Enter: at a pure Python prompt the REPL might react in 100 ms, while in IPython it might take 150 ms or so. But if you are concerned about performance, the overhead of IPython is not the first thing you should be concerned about.
I have found that Jupyter is significantly slower than IPython, whether or not many print statements are used. Nearly all functions suffer decreased performance, but especially if you are analyzing large dataframes or performing complex calculations, I would stick with IPython.
I tested training the same small neural net (1) under Jupyter and (2) running Python under the Anaconda prompt (either with exec(open('foo.py').read()) inside python, or with python foo.py directly at the Anaconda prompt).
It takes 107.4 s or 108.2 s under the Anaconda prompt, and 105.7 s under Jupyter.
So no, there is no significant difference, and the minor difference is in favor of Jupyter.
