Why does my Google Colab session keep crashing? - python

I am using Google Colab on a dataset with 4 million rows and 29 columns. When I run the statement sns.heatmap(dataset.isnull()), it runs for some time, but after a while the session crashes and the instance restarts. This has been happening a lot, and so far I haven't seen any output. What could be the possible reason? Is the data/calculation too much? What can I do?

I'm not sure what is causing your specific crash, but a common cause is an out-of-memory error. It sounds like you're working with a large enough dataset that this is probable. You might try working with a subset of the dataset and see if the error recurs.
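For example, a minimal sketch of that idea (assuming dataset is a pandas DataFrame; the sample size is arbitrary and not from the original answer):

import seaborn as sns

# Plot the heatmap for a random subset first to see whether the crash is memory-related.
subset = dataset.sample(n=100000, random_state=0)
sns.heatmap(subset.isnull())

# A much cheaper way to inspect missing values on the full data: one number per
# column instead of a 4-million-row boolean image.
print(dataset.isnull().mean().sort_values(ascending=False))

If the subset works but the full dataset crashes, that points strongly at memory.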
Otherwise, Colab keeps logs in /var/log/colab-jupyter.log. You may be able to get more insight into what is going on by printing its contents. Either run:
!cat /var/log/colab-jupyter.log
Or, to get the messages alone (easier to read):
import json

with open("/var/log/colab-jupyter.log", "r") as fo:
    for line in fo:
        print(json.loads(line)['msg'])

Another cause: if you're using PyTorch and assign your model to the GPU, but don't assign an internal tensor to the GPU (e.g. a manually created hidden state).
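A minimal sketch of that situation (the model and tensor names here are illustrative, not from the original answer):

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

rnn = nn.GRU(input_size=8, hidden_size=16, batch_first=True).to(device)
x = torch.randn(4, 10, 8).to(device)

# Problem: a manually created hidden state that stays on the CPU ...
h0_cpu = torch.zeros(1, 4, 16)
# out, h = rnn(x, h0_cpu)   # device mismatch

# Fix: create (or move) every tensor the model touches on the same device.
h0 = torch.zeros(1, 4, 16, device=device)
out, h = rnn(x, h0)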

This error mostly happens if you enable the GPU but do not actually use it. Change your runtime's hardware accelerator to "None" and you should not face this issue again.

I would first suggest closing your browser and restarting the notebook. Look at the runtime logs and check whether CUDA is mentioned anywhere. If not, do a factory runtime reset and rerun the notebook. Check your logs again and you should find CUDA mentioned there.

For me, passing certain arguments to the tfms augmentation broke the dataloader and crashed the session.
I wasted a lot of time checking that the images weren't corrupt, running the garbage collector, and more...

What worked for me was to click on the RAM/Disk Resources drop down menu, then 'Manage Sessions' and terminate my current session which had been active for days. Then reconnect and run everything again.
Before that, my code kept crashing even though it had been working perfectly the previous day, so I knew there was nothing wrong coding-wise.
After doing this, I also realized that the parameter n_jobs in GridSearchCV (https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) plays a massive role in GPU RAM consumption. For example, for me it works fine and execution doesn't crash if n_jobs is set to None, 1 (same as None), or 2. Setting it to -1 (using all processors) or >3 crashes everything.
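A minimal sketch of the n_jobs setting in question (the estimator, parameter grid, and data are placeholders, not from the original answer):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
param_grid = {"n_estimators": [50, 100], "max_depth": [None, 10]}

# n_jobs controls how many parallel workers the search spawns; each worker
# holds its own copy of the data, so large values can exhaust RAM.
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=3, n_jobs=2)
search.fit(X, y)
print(search.best_params_)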

Related

Python script gets "killed"

I am facing a problem with a python script getting killed. I had always used this script with no problem at all until two days ago, then it started to print, without any change in the code, the string 'killed' before aborting the execution.
Other people have tried to run the same code on their system and it works fine, as it used to do with me until two days ago.
I have read some similar older questions, and I gather the problem could be an out-of-memory issue due to bad memory management in my code. That sounds a little strange to me, since the script used to work perfectly until a few days ago and the problem appears on my system only.
Do you have any idea on how to inspect the problem and find a possible solution, please?
Python version: Python 2.7.14+
System: Scientific Linux CERN 7
In your case, it's highly probable that the script reached some limit on the amount of resources it's able to use, which depends on your OS and other parameters. Are you running something else alongside the script, or are there many open files, etc.?
The most likely reason for such an error is exceeding memory use, which forces the system to kill the process when further allocations start failing. Maybe you can print the total memory you're using in parallel to get a glimpse of what's happening, since the information you've given is not enough to pin it down:
import os, psutil
process = psutil.Process(os.getpid())
then (for Python 3):
print(process.memory_info().rss)
or (for Python 2.7, tested):
print(process.memory_info()[0])

Mallet stops working for large data sets?

I am trying to use LDA Mallet to assign my tweets to topics, and it works perfectly well when I feed it with up to 500,000 tweets, but it seems to stop working when I use my whole data set, which is about 2,500,000 tweets. Do you have any solutions for that?
I am monitoring my CPU and RAM usage when I run my code as one way to make sure the code is actually running (I use Jupyter notebook). I use the code below to assign my tweets to topics.
import os
from gensim.models.wrappers import LdaMallet
os.environ.update({'MALLET_HOME':r'C:/new_mallet/mallet-2.0.8/'})
mallet_path = 'C:/new_mallet/mallet-2.0.8/bin/mallet'
ldamallet = LdaMallet(mallet_path, corpus=corpus, num_topics=10, id2word=id2word)
The code seems to work when my data set contains fewer than 500,000 tweets: it spits out the results, and I can see python and/or java use my RAM and CPU. However, when I feed the code my entire data set, Java and Python temporarily show some CPU and RAM usage in the first few seconds, but after that the CPU usage drops to below 1 percent and the RAM usage starts to shrink gradually. I tried running the code several times, but after waiting on the code for 6-7 hours, I saw no increase in the CPU usage and the RAM usage dropped after a while. Also, the code did not produce any results. I finally had to stop the code.
Has this happened to you? Do you have any solutions for it?
Thank you!
This sounds like a memory issue, but the interaction with gensim may be masking the error? I don't know enough about gensim's java interaction to be able to suggest anything. You might try running from the command line directly in hopes that errors might be propagated more clearly.
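A rough sketch of invoking Mallet directly from Python, so that any Java errors show up on the console (the file names are placeholders, the exact flags should be checked against your Mallet version, and on Windows the launcher may be mallet.bat rather than mallet):

import subprocess

mallet_path = 'C:/new_mallet/mallet-2.0.8/bin/mallet'

# Import the corpus (tweets.txt is a placeholder: one document per line).
subprocess.run([mallet_path, 'import-file',
                '--input', 'tweets.txt',
                '--output', 'tweets.mallet',
                '--keep-sequence'], check=True)

# Train the topic model; check=True raises if Mallet exits with an error.
subprocess.run([mallet_path, 'train-topics',
                '--input', 'tweets.mallet',
                '--num-topics', '10',
                '--output-topic-keys', 'topic_keys.txt',
                '--output-doc-topics', 'doc_topics.txt'], check=True)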

PyCharm: Storing variables in memory to be able to run code from a "checkpoint"

I've been searching everywhere for an answer to this but to no avail. I want to be able to run my code and have the variables stored in memory so that I can perhaps set a "checkpoint" which I can run from in the future. The reason is that I have a fairly expensive function that takes some time to compute (as well as user input) and it would be nice if I didn't have to wait for it to finish every time I run after I change something downstream.
I'm sure a feature like this exists in PyCharm but I have no idea what it's called and the documentation isn't very clear to me at my level of experience. It would save me a lot of time if someone could point me in the right direction.
Turns out this is (more or less) possible by using the PyCharm console. I guess I should have realized this earlier because it seems so simple now (though I've never used a console in my life so I guess I should learn).
Anyway, the console lets you run blocks of your code presuming the required variables, functions, libraries, etc... have been specified beforehand. You can actually highlight a block of your code in the PyCharm editor, right click and select "Run in console" to execute it.
This feature is not implemented in PyCharm (see the PyCharm forum) but appears to be implemented in Spyder.
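Another way to approximate a checkpoint, independent of the IDE, is to cache the expensive result on disk. A minimal sketch using pickle (the function name and cache file are made up for illustration):

import os
import pickle

CHECKPOINT = "expensive_result.pkl"  # hypothetical cache file

def expensive_function():
    # ... long computation and user input ...
    return {"answer": 42}

if os.path.exists(CHECKPOINT):
    with open(CHECKPOINT, "rb") as f:
        result = pickle.load(f)   # resume from the "checkpoint"
else:
    result = expensive_function()
    with open(CHECKPOINT, "wb") as f:
        pickle.dump(result, f)    # save it for future runs

# ... the downstream code you keep changing uses `result` ...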

How to trace random MemoryError in python script?

I have a python script, which is used to perform a lab measurement using several devices. The whole setup is rather involved, including communication over serial devices, API calls as well as the use of self-written and commercial drivers. In the end, however, everything boils down to two nested loops, which vary some parameters, collect data and write it to a file.
My problem is that I observe random occurrences of a MemoryError, typically after 10 hours, equivalent to ~15k runs of the loops. At the moment, I have no idea where it comes from or how I can trace it further, so I would be happy for suggestions on how to approach the problem. My observations up to this point are as follows.
The error occurs at random states of the program. Different runs will throw the MemoryError at different lines of my script.
There is never any helpful error message. Python only says MemoryError without any error string. The traceback leads me to some point in the script where memory is needed (e.g. when building a list), but no specific instruction appears to be the problem.
My RAM is far from full. The Python process in question typically consumes some tens of MB of RAM when viewed in the task manager. In addition, the RAM usage appears to be stable for hours. Usually, it increases slowly for some time, just to drop back down to the previous level quickly, which I interpret as the garbage collector kicking in periodically.
So far I did not find any indications for a memory leak. I used memory_profiler to trace the memory usage of my functions and found it to be stable. In addition, I followed this blog entry to observe what the garbage collector does in detail. Again, I could not find any hints for undeleted objects.
I am stuck on Win7 x86 due to a driver which will only work on a 32-bit system. So I cannot follow suggestions like this to move to a 64-bit version of Windows. Anyway, I do not see how this would help in my situation.
The IPython console from which the script is launched often behaves strangely after the error has occurred. Sometimes, a new MemoryError is thrown even for very simple operations. Often, the console is marked by Windows as "not responding" after some time. A menu pops up where, besides the usual options to wait for the process or to terminate it, there is a third option to "restore" the program (whatever that means). Doing so usually causes the console to work normally again.
At this point, I am somewhat out of ideas on how to proceed. The general recipe of commenting out parts of the script until it works is highly undesirable in my case. As stated above, each test run takes several hours, meaning a potential downtime of weeks for my lab equipment. Going in that direction appears unfeasible to me. Is there a more direct approach to learn what is crashing behind the scenes? How can I understand why Python apparently fails to malloc?
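One way to gather more data between runs, in the spirit of the memory_profiler approach mentioned above, is to periodically log the process's memory use inside the measurement loops. A minimal sketch assuming psutil is installed (the loop body is a placeholder for the actual measurement code):

import logging
import os

import psutil

logging.basicConfig(filename="memory_trace.log", level=logging.INFO)
process = psutil.Process(os.getpid())

for run in range(15000):
    # ... vary parameters, collect data, write to file ...

    if run % 100 == 0:
        info = process.memory_info()
        # rss is physical memory in use; vms is the address space reserved by
        # the process, which is the tighter limit on a 32-bit interpreter.
        logging.info("run %d: rss=%d MB, vms=%d MB",
                     run, info.rss // 2**20, info.vms // 2**20)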

Python crashes in rare cases when running code - how to debug?

I have a problem that I have seriously spent months on now!
Essentially I am running code that needs to read from and save to HDF5 files. I am using h5py for this.
It's very hard to debug because the problem (whatever it is) only occurs in about 5% of cases (each run takes several hours), and when it happens it crashes Python completely, so debugging with Python itself is impossible. Using simple logs it's also impossible to pinpoint the exact crashing situation - it appears to be very random, crashing at different points within the code, or with a lag.
I tried using OllyDbg to figure out what's happening and can safely conclude that it consistently crashes at the following location: http://i.imgur.com/c4X5W.png
It seems to be shortly after calling the Python-native PyObject_ClearWeakRefs, with an access violation error message. The weird thing is that the file is successfully written to. What would cause the access violation error? Or is that Python-internal (e.g. the stack?) and not file (i.e. my code) related?
Does anyone have an idea what's happening here? If not, is there a smarter way of finding out what exactly is happening? Maybe some hidden Python logs or something I don't know about?
Thank you
PyObject_ClearWeakRefs is in the Python interpreter itself. But if it only happens in a small number of runs, it could be hardware-related. Things you could try:
Run your program on a different machine. If it doesn't crash there, it is probably a hardware issue.
Reinstall Python, in case the installed version has somehow become corrupted.
Run a memory test program.
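If the crash does turn out to be software-related, one additional option (not from the original answer, and assuming a Python version of 3.3 or later) is the built-in faulthandler module, which prints a Python traceback when the interpreter dies on a fatal error such as an access violation:

import faulthandler

# Keep this file open for the lifetime of the program; faulthandler writes
# the traceback into it if the interpreter crashes hard.
crash_log = open("crash_trace.log", "w")
faulthandler.enable(file=crash_log)

# ... run the h5py read/write code as usual ...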
Thanks for all the answers. I ran two versions this time, one with a new python install and my same program, another one on my original computer/install, but replacing all HDF5 read/write procedures with numpy read/write procedures.
The program continued to crash on my second computer at odd times, but on my primary computer I had zero crashes with the changed code. I think it is thus safe to conclude that the problems were HDF5-related, or more specifically h5py-related. It appears that more people have encountered issues with h5py in that respect. Given that any error in my application translates to potentially large financial losses, I decided to dump HDF5 completely in favor of other, stable solutions.
Use a try/except statement. This can be put into the program to stop it from crashing when erroneous data is entered.
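A minimal sketch of that suggestion (the file name and dataset layout are made up; note that try/except only catches Python-level exceptions, not hard interpreter crashes such as an access violation):

import h5py
import numpy as np

data = np.random.rand(100, 10)

try:
    with h5py.File("results.h5", "a") as f:   # hypothetical output file
        f.create_dataset("run_001", data=data)
except (OSError, ValueError) as exc:
    # h5py raises OSError for I/O problems and ValueError for e.g. duplicate
    # dataset names; log the failure and continue instead of aborting.
    print("HDF5 write failed:", exc)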
