Does running IPython/Jupyter Notebook affect the speed of the program?

I am developing a simulation program (something like a numerical solver) in an IPython notebook. I am wondering whether the speed of code running in the notebook is the same as the speed of the same code run from the terminal.
Would browser memory, notebook overhead, or anything like that make the code run slower in the notebook compared to a native run from the terminal?

One thing that can slow a simulation down considerably is having a lot of print statements, since every piece of output has to be sent from the kernel to the browser and rendered there.
If you run the kernel, the notebook server, and the browser on the same machine, and your simulation would otherwise use all the cores of your computer, then yes, using the notebook will slow things down, but no more than browsing Facebook or YouTube while the simulation is running. Most of the overhead of using IPython is incurred when you press Shift-Enter: at a plain Python prompt the REPL might respond in 100 ms, while in IPython it might take 150 ms or so. But if you are concerned about performance, the overhead of IPython is not the first thing you should worry about.
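To illustrate the print-statement point above, a minimal sketch of one common workaround is to throttle output so that only a fraction of iterations actually write to the notebook. The names run_step and n_steps are hypothetical placeholders for the actual simulation code:

```python
# Hypothetical sketch: throttle notebook output so printing doesn't dominate runtime.
# run_step() and n_steps stand in for the real solver code.

n_steps = 1_000_000
report_every = 10_000  # print progress only once per 10,000 iterations

def run_step(i):
    return i * i  # stand-in for one iteration of the solver

for i in range(n_steps):
    result = run_step(i)
    if i % report_every == 0:
        print(f"step {i}: result = {result}")
```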

I have found that Jupyter is significantly slower than IPython, whether or not many print statements are used. Nearly all functions suffer decreased performance, but especially if you are analyzing large dataframes or performing complex calculations, I would stick with IPython.

I tested training the same small neural net (1) under Jupyter and (2) running Python under the Anaconda prompt (either with exec(open('foo.py').read()) inside python, or with python foo.py directly at the Anaconda prompt).
It takes 107.4 s or 108.2 s under the Anaconda prompt, and 105.7 s under Jupyter.
So no, there is no significant difference, and the minor difference is in favor of Jupyter.
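A minimal sketch of how such a comparison can be set up, assuming a train() placeholder for the real workload; the same file can be run with python foo.py, exec'd at a prompt, or pasted into a notebook cell, and time.perf_counter gives comparable wall-clock numbers in all three cases:

```python
# foo.py -- hypothetical benchmark harness; train() stands in for the real workload.
import time

def train():
    # placeholder for the actual neural-net training loop
    total = 0.0
    for i in range(10_000_000):
        total += i * 0.5
    return total

start = time.perf_counter()
train()
elapsed = time.perf_counter() - start
print(f"training took {elapsed:.1f} s")
```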

Related

Python using less than 1% of the CPU, how do I get Python to use more resources?

I am working on translating a rather complex script from Matlab to Python, and the results are fine.
However, Matlab takes around 5 seconds to complete, whereas Python takes over 2 minutes for the same starting conditions.
Surprised by Python's poor performance, I took a look at CPU usage and noticed that Python does not use more than 1% while executing. CPU usage stays around 0.2% and barely changes whether or not the script is running. My system has 8 logical cores, so this does not appear to be a multi-core issue. Also, no other hardware shows any high usage (at least according to Task Manager).
I am running my program with IPython 3.9.10 through Spyder 5.3.0. The Python installation is fairly fresh, and I have not changed much except for installing a few standard modules.
I did not post any code because there would be too much of it. Essentially it is just a big chunk of "basic" operations on scalars and vectors, plus one function that gets solved with scipy.optimize.minimize; nothing in the program relies on other software or inputs.
So, all in all, my question is: where does this limitation come from, and are there ways to "tell" Python to use more resources?
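One common cause of this pattern (a general observation, not from the original thread): Matlab-style element-by-element loops are slow in pure Python because every scalar operation goes through the interpreter, and a single-threaded script also registers as only a small share of total CPU on a multi-core machine. A minimal sketch contrasting a pure-Python loop with a vectorized NumPy equivalent:

```python
# Hypothetical illustration: pure-Python scalar loops vs. vectorized NumPy.
# Matlab code translated element-by-element usually needs to be vectorized in Python.
import time
import numpy as np

n = 2_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Slow: each scalar operation is dispatched through the Python interpreter.
start = time.perf_counter()
c_slow = [a[i] * b[i] + 1.0 for i in range(n)]
print(f"pure-Python loop: {time.perf_counter() - start:.2f} s")

# Fast: one call, and the loop runs in compiled code inside NumPy.
start = time.perf_counter()
c_fast = a * b + 1.0
print(f"vectorized NumPy: {time.perf_counter() - start:.2f} s")
```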

IPython (jupyter) vs Python (PyCharm) performance

Is there any performance difference between code run under IPython (Jupyter, for example) and the same code run under "standard" Python (PyCharm, for example)?
I'm working on a neural network for a project where I need some kind of presentation, and Jupyter + IPython does the job, but I was wondering whether there are any performance differences between Python and IPython, since I need to train my network and I obviously prefer the faster method.
According to this link:
There shouldn't be a difference between the two if you are running a fresh run of the script, although IPython has enhanced features compared to the normal Python interpreter (I would stick with it).
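One way to verify this yourself (a sketch, not from the original answer): benchmark the training step with the standard timeit module, which behaves identically under plain Python, IPython, and Jupyter. train_one_epoch below is a hypothetical placeholder:

```python
# Hypothetical sketch: timeit works the same in python, IPython, and Jupyter.
import timeit

def train_one_epoch():
    # placeholder for one epoch of network training
    return sum(i * i for i in range(100_000))

# Run the workload 10 times and report the best (least noisy) timing.
best = min(timeit.repeat(train_one_epoch, number=1, repeat=10))
print(f"best of 10 runs: {best:.3f} s")
```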

Jupyter notebook is extremely slow when re-running cells

I have a relatively large Jupyter notebook (about 40 GB of Pandas DataFrames in RAM). I'm running a Python 3.6 kernel installed with Conda.
I have about 115 cells that I'm executing. If I restart the kernel and run the cells, the whole notebook runs in about 3 minutes. If I then re-run a simple cell that's not doing much work (e.g. a function definition), it takes an extremely long time to execute (~15 minutes).
I cannot find any documentation online on Jupyter notebook installation best practices. My disk usage is low, available RAM is high, and CPU load is very low.
My swap space does seem to be maxed out, but I'm not sure what would be causing this.
Any recommendations on troubleshooting a poorly performing Jupyter notebook server? This seems to be related to re-running cells only.
If the Variable Inspector nbextension is activated, it might slow down the notebook when you have large variables in memory (such as your Pandas dataframes).
See: https://github.com/ipython-contrib/jupyter_contrib_nbextensions/issues/1275
If that's the case, try disabling it in Edit -> nbextensions config.
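If disabling the extension is not an option, another workaround (my own suggestion, not from the original answer) is to keep fewer large objects alive in the kernel, so there is less for the inspector to serialize after each cell execution:

```python
# Hypothetical workaround: drop references to large DataFrames you no longer need,
# so extensions that scan the namespace after every cell have less to inspect.
import gc
import pandas as pd

big_df = pd.DataFrame({"x": range(1_000_000)})  # stand-in for a 40 GB frame

# ... work with big_df ...

del big_df    # remove the reference from the notebook namespace
gc.collect()  # ask the garbage collector to release the memory promptly
```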

Very high memory usage when profiling python in PyCharm

I'm trying to profile a Python application in PyCharm; however, when the application terminates and the profiler results are displayed, PyCharm consumes all 16 GB of RAM that I have, which makes it unusable.
The application in question is doing reinforcement learning, so it does take a while to run (~10 min or so), but while running it does not require large amounts of RAM.
I'm using the newest version of PyCharm on Ubuntu 16.04, and cProfile is what PyCharm uses for profiling.
I'd be very glad if one of you knows a solution.
EDIT: It seems this was an issue within PyCharm, which has since been fixed (as of 2017/11/21)
It's a defect within PyCharm: https://youtrack.jetbrains.com/issue/PY-25768
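Until the fix lands, one workaround (my own suggestion, not from the thread) is to run cProfile directly and inspect the results outside PyCharm; the module and file names below are placeholders:

```python
# Hypothetical workaround: profile without PyCharm's results viewer.
# First, run from a terminal:  python -m cProfile -o results.prof my_app.py
# Then inspect the dump with the standard pstats module:
import pstats

stats = pstats.Stats("results.prof")
stats.sort_stats("cumulative").print_stats(20)  # top 20 functions by cumulative time
```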

Kernel crashes when increasing iterations

I am running a Python script using Spyder 2.3.9. I have a fairly large script, and when I run it with 300x600 iterations (a loop inside another loop), everything appears to work fine and takes approximately 40 minutes. But when I increase the count to 500x600 iterations, after 2 hours the output yields:
It seems the kernel died unexpectedly. Use 'Restart kernel' to continue using this console.
I've been going through the code but don't see anything that might be causing this in particular. I am using Python 2.7.12 64-bit, Qt 4.8.7, PyQt4 (API v2) 4.11.4 (Anaconda2-4.0.0-MacOSX-x86_64).
I'm not entirely sure what additional information is pertinent, but if you have any suggestions or questions, I'd be happy to read them.
https://github.com/spyder-ide/spyder/issues/3114
This issue has been opened on the Spyder GitHub repository and, given the project's track record, should be addressed soon.
Some possible solutions:
If possible, modify your script for faster convergence; very often, for most practical purposes, the incremental value of iterations beyond a certain point is negligible.
Upgrading or downgrading the Spyder environment may help.
Check your local firewall for blocked connections to 127.0.0.1 from pythonw.exe.
If nothing else works, try using Spyder on Ubuntu.
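Beyond the suggestions above: kernel deaths that appear only at higher iteration counts are often caused by memory accumulating across the loops (an assumption here, since the code was not posted). A sketch of writing partial results to disk per outer iteration so the process stays small; the computation below is a stand-in:

```python
# Hypothetical sketch: flush results to disk periodically so a long
# nested loop does not accumulate everything in memory.
import numpy as np

n_outer, n_inner = 500, 600
chunk = np.empty(n_inner)

for i in range(n_outer):
    for j in range(n_inner):
        chunk[j] = (i + 1) * (j + 1) * 0.001  # stand-in for the real computation
    # Save this outer iteration's results, then reuse the buffer.
    np.save("checkpoint_%04d.npy" % i, chunk)
```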
