When does Dash release memory? - python

I wrote a python Dash app and made it available within my organization using OpenShift. I’m not really knowledgeable about OpenShift but it seems to be running correctly, including when multiple users are involved.
My problem is with memory management. Each time a user initiates a new session, the memory used by the Dash app increases by ~200 MB according to OpenShift. When the user closes the browser tab, the consumed memory does not go down (not even after weeks). Essentially, the amount of memory the Dash app consumes keeps growing.
I am probably missing something, but how do I get Dash to clear memory after the user closes the browser tab, or after some time passes since the last action? The dcc.Store objects in my code have storage_type='memory'. But from what I understand, dcc.Store keeps all the stored data on the client side in the browser, so this should not increase the memory on the server.
I deployed my app with
app.run_server(debug=True, dev_tools_hot_reload=False, port=8080, host="0.0.0.0")
in case this matters.
Any help would be really appreciated! Right now I keep manually restarting the app to clear the memory but this is not practical at all. Thank you!

How much memory are you allocating to the container? Also, does the memory continually go up? Or once it reaches a certain level does it plateau? Are you tracking any GC behavior in Python?
I'm not an expert on Python memory management, and know nothing about Dash, but Python does manage its own memory heap and has a garbage collector. Thus it is completely normal behavior for Python to never deallocate memory; Python is essentially reserving the memory for potential future use. Once it needs memory, it will garbage-collect the unreferenced objects.
As long as you aren't running out of memory or seeing undesirable GC behavior, the best thing to do is just set reasonable memory requests/limits for the container and let the Python GC manage the memory it has been allocated.
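If you do want to check GC behavior rather than guess, the stdlib gc module can report it; a minimal sketch (gc.get_stats() assumes Python 3.4+):
import gc

# Print per-collection statistics to stderr every time the collector runs.
gc.set_debug(gc.DEBUG_STATS)

# Or poll the counters yourself from anywhere in the app:
print(gc.get_count())  # current counters checked against the gc thresholds
print(gc.get_stats())  # per-generation collections/collected/uncollectable (3.4+)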

I also faced a similar issue with Dash on one of our company's legacy apps. Unfortunately, I cannot share the code.
I tried calling gc.collect() after each callback; it is not recommended, and it didn't help.
My problem was that the script didn't properly initialize Dash.
I added the following at the start of the script:
import dash

app = dash.Dash(__name__)
server = app.server
This mostly solved the memory issue.
app.run_server is meant for the development environment only (there should be a warning when you run it). For production, you need to use something like gunicorn. It has a learning curve, but not a steep one. If you insist on using app.run_server, you should set debug=False; it will reduce memory consumption and increase speed.
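For illustration, assuming the snippet above lives in a file named app.py (so app:server points at the server object it defines), the switch can be as small as running
gunicorn --workers 2 --bind 0.0.0.0:8080 app:server
from the shell; the worker count here is an arbitrary example. gunicorn can also recycle each worker after a fixed number of requests via --max-requests, which is a common blunt mitigation for slow leaks.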

Related

Profiling Django Startup Memory Footprint

I'm trying to optimize my Django 1.8 project's memory usage, and I have noticed that even at initialization it's using 80+ MB, which seems excessive. I observe this when I simply run ./manage.py shell --plain. By contrast, an empty Django project at startup uses only 30 MB. I want to know what's consuming so much memory.
I have tried various things to reduce memory consumption, including a fair amount of stripping down of the project by removing its apps and URLs. I tried poking around gc.get_objects(), but its output isn't comprehensible. I got excited about tracemalloc, so I built a custom Python 2.7.8 with tracemalloc included, only to realize that it won't start tracking until I call start() from the prompt, at which point the memory has already been consumed.
Questions:
What kinds of things could be causing this high memory floor?
What process can I use to determine the consumer?
And yes, I realize the versions badly need upgrades. Thanks!
UPDATE #1
I did manage to get some use out of tracemalloc. I inserted the import and the start() call at the beginning of manage.py.
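A minimal sketch of that insertion, assuming the tracemalloc backport exposes the same API as the Python 3 module:
# Very first lines of manage.py, before any Django imports:
import tracemalloc
tracemalloc.start()

# Later, from the shell prompt, list the top allocation sites:
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics('lineno')[:10]:
    print(stat)
The top entries it reported: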
/Users/john/.venv/proj-tracemalloc/lib/python2.7/site-packages/zinnia/comparison.py:19: size=25.0 MiB (+25.0 MiB), count=27168 (+27168), average=993 B
/usr/local/tracemalloc-py2.7.8/py27/lib/python2.7/importlib/__init__.py:37: size=3936 KiB (+3936 KiB), count=9581 (+9581), average=420 B
...
It did reveal one interesting thing: this first line is a blogging app. The generator expression below seems to use a lot of memory, although I guess it's temporary. By line 3 of the output, everything is 1.5 MB and smaller.
import sys
import unicodedata
import six
# Map every Unicode punctuation code point to None (e.g. for use with translate()).
PUNCTUATION = dict.fromkeys(
    i for i in range(sys.maxunicode)
    if unicodedata.category(six.unichr(i)).startswith('P')
)
UPDATE #2
I went through the tedious process of uninstalling packages, removing apps, and fixing broken code. In the end, it looks like maybe 10 MB was due to my internal apps and 25 MB to the Django Debug Toolbar. That got me down to 45 MB. It doesn't seem unreasonable for my core app to use 15 MB, taking it down to the 30 MB core. I don't use the toolbar in production, but it clearly needs extra memory to do its job.
In the end, I didn't learn much, but at least nothing seems insanely wrong. I was disappointed with tracemalloc, but hopefully it will work better as integrated into Python 3.

Can I use gc.collect()?

TL;DR: Adding a gc.collect() call to my script fixed the memory leak. How did this happen?
Long version: I was having a memory leak in my Flask server after making a change to how the database is updated. Before the change, the server processes would have a resident set size of 28 kB. After applying the change, it would grow to 250 MB within two days.
I did some tests with the heap, but I didn't get any clue as to where the dangling references might be. So I just added a gc.collect() after the database commit (which happens every 15 seconds).
This somehow mysteriously solved it: it has been running for an hour now and stays below 29.5 kB (before the fix it would be way higher by now). I'm not sure why this change solves the problem, since Python has automatic GC and I'm just forcing an immediate collection. Is using gc.collect() a viable solution for fixing the leak (i.e., no side effects)?
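One fact worth knowing when reasoning about this: reference counting frees most objects immediately, and the collector (automatic or forced via gc.collect()) is only needed for reference cycles. A minimal illustration of the kind of garbage only the cyclic collector can reclaim:
import gc

class Node(object):
    pass

# Two objects pointing at each other: their refcounts never reach zero,
# so plain reference counting can never free them.
a, b = Node(), Node()
a.other, b.other = b, a
del a, b

# gc.collect() returns the number of unreachable objects it found;
# it will be non-zero here because of the cycle above.
print(gc.collect())
If the automatic collector is enabled (it is by default), such cycles are eventually reclaimed anyway; forcing a collection just makes it happen sooner.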

Difference between shared and unshared memory size

I am trying to find out how to see, from within a Python script (without any external lib), the RAM currently used by that script.
Some searching here pointed me to the resource module: http://docs.python.org/2/library/resource.html#resource-usage
There, I see there are two kinds of memory, shared and unshared.
I was wondering what they describe. Hard drive versus RAM? Something about multi-thread memory? Or something else?
Also, I do not think this actually helps me find the current RAM usage, right?
Thanks
RAM is allocated in chunks called pages. Some of these pages can be marked read-only, such as those in the text segment that contain the program's instructions. If a page is read-only, it is available to be shared between more than one process. This is the shared memory you see. Unshared memory is everything else that is specific to the currently running process, such as allocations from the heap.
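On the practical part of the question: the resource module can report usage from within the script, but with platform caveats; a minimal sketch (Unix only):
import resource

usage = resource.getrusage(resource.RUSAGE_SELF)
# ru_maxrss is the peak resident set size: kilobytes on Linux, bytes on macOS.
print("peak RSS: %s" % usage.ru_maxrss)
# ru_ixrss (shared) and ru_idrss (unshared) are the fields described above,
# but Linux leaves them at 0, so they are rarely useful in practice.
print("shared: %s unshared: %s" % (usage.ru_ixrss, usage.ru_idrss))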

Python process consuming increasing amounts of system memory, but heapy shows roughly constant usage

I'm trying to identify a memory leak in a Python program I'm working on. I'm currently running Python 2.7.4 on 64-bit Mac OS. I installed heapy to hunt down the problem.
The program involves creating, storing, and reading a large database using the shelve module. I am not using the writeback option, which I know can create memory problems.
Heapy shows that the memory usage is roughly constant during program execution. Yet my activity monitor shows rapidly increasing memory. Within 15 minutes, the process has consumed all my system memory (16 GB), and I start seeing page-outs. Any idea why heapy isn't tracking this properly?
Take a look at this fine article. You are, most likely, not seeing memory leaks but memory fragmentation. The best workaround I have found is to identify what the output of your large working set operation actually is, load the large dataset in a new process, calculate the output, and then return that output to the original process.
This answer has some great insight and an example, as well. I don't see anything in your question that seems like it would preclude the use of PyPy.

Huge memory usage of Python's json module?

When I load the file with json, Python's memory usage spikes to about 1.8 GB and I can't seem to get that memory released. I put together a test case that's very simple:
import json
with open("test_file.json", 'r') as f:
    j = json.load(f)
I'm sorry that I can't provide a sample json file; my test file has a lot of sensitive information, but for context, I'm dealing with a file on the order of 240 MB. After running the snippet above I have the previously mentioned 1.8 GB of memory in use. If I then do del j, memory usage doesn't drop at all. If I follow that with gc.collect(), it still doesn't drop. I even tried unloading the json module and running another gc.collect().
I'm trying to run some memory profiling, but heapy has been churning at 100% CPU for about an hour now and has yet to produce any output.
Does anyone have any ideas? I've also tried the above using cjson rather than the packaged json module. cjson used about 30% less memory but otherwise displayed exactly the same issues.
I'm running Python 2.7.2 on Ubuntu server 11.10.
I'm happy to load up any memory profiler and see if it does better than heapy, and I can provide any diagnostics you might think are necessary. I'm hunting around for a large test json file that I can provide for anyone else to give it a go.
I think these two links address some interesting points about this not necessarily being a json issue, but rather just a "large object" issue, and about how memory works in Python versus the operating system.
See Why doesn't Python release the memory when I delete a large object? for why memory released by Python is not necessarily reflected by the operating system:
If you create a large object and delete it again, Python has probably released the memory, but the memory allocators involved don’t necessarily return the memory to the operating system, so it may look as if the Python process uses a lot more virtual memory than it actually uses.
About running large object processes in a subprocess to let the OS deal with cleaning up:
The only really reliable way to ensure that a large but temporary use of memory DOES return all resources to the system when it's done, is to have that use happen in a subprocess, which does the memory-hungry work then terminates. Under such conditions, the operating system WILL do its job, and gladly recycle all the resources the subprocess may have gobbled up. Fortunately, the multiprocessing module makes this kind of operation (which used to be rather a pain) not too bad in modern versions of Python.
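Applied to this question, that pattern might look like the following sketch; the reducing function and what it returns are illustrative assumptions:
import json
import multiprocessing

def load_and_reduce(path):
    # All the memory-hungry parsing happens inside the worker process.
    with open(path) as f:
        data = json.load(f)
    return len(data)  # only this small result is pickled back to the parent

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=1)
    result = pool.apply(load_and_reduce, ('test_file.json',))
    pool.close()
    pool.join()
    # The worker has exited, so the OS reclaims everything it allocated.
    print(result)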
