What is the "soft private memory limit" in GAE? (python)

A user of my application attempted to send a file as an email attachment. However, doing so raised the following exception, which I'm having trouble deciphering:
Exceeded soft private memory limit with 192.023 MB after servicing
2762 requests total
While handling this request, the process that handled this request was
found to be using too much memory and was terminated. This is likely to
cause a new process to be used for the next request to your application.
If you see this message frequently, you may have a memory leak in
your application.
What is the "soft private memory limit" and what was likely to bring about this exception?

The "soft private memory limit" is the memory threshold at which App Engine stops sending an instance new requests, waits for any outstanding requests to finish, and then terminates the instance. Think of it as a graceful shutdown when you are using too much memory.
Hitting the soft limit once in a while is OK, since all your requests finish as they should. However, every time this happens, the next request may have to start up a new instance, which can add cold-start latency.

I assume you are using the lowest-class frontend or backend instance (F1 or B1). Both have a 128 MB memory quota, so your app most likely went over that limit. However, the quota does not appear to be strictly enforced, and Google allows some leniency (hence the term "soft" limit): I have had several F1 instances consuming ~200 MB of memory for minutes before App Engine terminated them.
Try increasing your instance class to the next level up (F2 or B2), which has a 256 MB memory quota, and see if the error recurs. Also, investigate whether the error is reproducible every time you send an e-mail with attachments. It is possible that what you are seeing is a symptom rather than the cause, and that the part of your app consuming a lot of memory lies somewhere else.
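As a hedged illustration (the exact file layout depends on your app), raising the instance class is a one-line change in app.yaml:

```yaml
# app.yaml — raise the instance class to get a 256 MB memory quota.
# F2 applies to automatic scaling; use B2 for basic/manual scaling.
instance_class: F2
```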

Related

Memory leak in simple Google App Engine example

I seem to have a memory leak in my Google App Engine app but I cannot figure out why.
After narrowing down the lines of code responsible, I have reduced the problem to a simple cron job that runs regularly; all it does is load some entities using a query.
I log the memory usage with logging.info(runtime.memory_usage()) and I can see that it increases from one call to the next until it exceeds the soft private memory limit.
Below is the code I have used:
import logging

import webapp2
from google.appengine.api import runtime
from google.appengine.ext import ndb

class User(ndb.Model):
    _use_cache = False
    _use_memcache = False
    name = ndb.StringProperty(required=True)
    ...[OTHER_PROPERTIES]...

class TestCron(webapp2.RequestHandler):
    def get(self):
        is_cron = self.request.headers.get('X-AppEngine-Cron') == 'true'
        if is_cron:
            logging.info("Memory before keys:")
            logging.info(runtime.memory_usage())
            keys = User.query().fetch(1000, keys_only=True)
            logging.info("Memory before get_multi:")
            logging.info(runtime.memory_usage())
            user_list = ndb.get_multi(keys)
            logging.info("Memory after:")
            logging.info(runtime.memory_usage())
            logging.info(len(user_list))

app = webapp2.WSGIApplication([
    ('/test_cron', TestCron)
], debug=True)
And in cron.yaml I have:
cron:
- description: Test cron
  url: /test_cron
  schedule: every 1 mins from 00:00 to 23:00
When running this task every minute, a new instance has to be started every two iterations. The first run starts with 36 MB and, on completion, it says:
This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This request may thus take longer and use more CPU than a typical request for your application.
But the second execution starts with 107 MB of memory already in use (meaning the memory from the previous iteration was not released?), exceeds the soft private memory limit, and terminates the process with:
Exceeded soft private memory limit of 128 MB with 134 MB after servicing 6 requests total
After handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application.
The full output simply alternates between these two logs. Note that I have disabled the caches in the model definition, so shouldn't the memory usage be reset on every call?
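As a hedged suggestion that is not from the original post: since the behaviour seems tied to NDB's caching, one thing worth trying is clearing the in-context cache explicitly after the batch get (ndb.get_context().clear_cache() is part of the NDB API). A minimal sketch:

```python
from google.appengine.ext import ndb

def count_users(keys):
    """Fetch entities by key, then drop both our references and any
    references NDB's in-context cache may still hold, so the entities
    can be garbage-collected before the handler returns."""
    users = ndb.get_multi(keys)
    count = len(users)
    del users
    ndb.get_context().clear_cache()
    return count
```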

Why Does the Google Sheet API and Google Drive API consume so much memory upon requests [duplicate]


App Engine instance memory constantly increasing

I'd expect the memory usage of my App Engine instances (Python) to be relatively flat after an initial startup period. Each request to my app is short-lived, and it seems all memory used by a single request should be released shortly afterwards.
This is not the case in practice however. Below is a snapshot of instance memory usage provided by the console. My app has relatively low traffic so I generally have only one instance running. Over the two-day period in the graph, the memory usage trend is constantly increasing. (The two blips are where two instances were briefly running.)
I regularly get memory exceeded errors so I'd like to prevent this continuous increase of memory usage.
At the time of the snapshot:
Memcache is using less than 1MB
Task queues are empty
Traffic is low (0.2 count/second)
I'd expect the instance memory usage to fall in these circumstances, but it isn't happening.
Because I'm using Python with its automatic garbage collection, I don't see how I could have caused this.
Is this expected app engine behavior and is there anything I can do to fix it?
I found another answer that explains part of what is going on here. I'll give a summary based on that answer:
When using NDB, entities are stored in a context cache, and the context cache is part of your memory usage.
From the documentation, one would expect that memory to be released upon the completion of an HTTP request.
In practice, the memory is not released upon the completion of the HTTP request. Apparently, context caches are reused, and the cache is cleared before its next use, which can take a long time to happen.
For my situation, I am adding _use_cache=False to most entities to prevent them from being stored in the context cache. Because of the way my app works, I don't need the context caches for these entities, and this reduces memory usage.
The above is only a partial solution, however!
Even with caching turned off for most of my entities, my memory usage is still constantly increasing! Below is a snapshot over a 2.5-day period during which memory continuously increases from 36 MB to 81 MB. This was over the 4th of July weekend with low traffic.
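For reference, a hedged sketch of the two NDB cache opt-outs discussed above (the model attributes are per-model; the policy call disables the in-context cache for the current request's context):

```python
from google.appengine.ext import ndb

class User(ndb.Model):
    _use_cache = False     # opt this model out of the in-context cache
    _use_memcache = False  # opt this model out of memcache
    name = ndb.StringProperty()

# Alternatively, disable the in-context cache wholesale for the
# current request:
ndb.get_context().set_cache_policy(False)
```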

App Engine Deferred: Tracking Down Memory Leaks

We have an App Engine application that writes many relatively large files to Google Cloud Storage. These files are CSVs that are dynamically created, so we use Python's StringIO.StringIO as a buffer and csv.writer as the interface for writing to that buffer.
In general, our process looks like this:
import csv
import StringIO

import cloudstorage as gcs  # the Google Cloud Storage client

file_buffer = StringIO.StringIO()
writer = csv.writer(file_buffer)
# ...
# write some rows
# ...
data = file_buffer.getvalue()  # StringIO exposes getvalue(), not getdata()
filename = 'someFilename.csv'
try:
    with gcs.open(filename, content_type='text/csv', mode='w') as file_stream:
        file_stream.write(data)
        # no explicit file_stream.close() needed; the with block closes it
except Exception as e:
    pass  # handle exception
finally:
    file_buffer.close()
As we understand it, the csv.writer does not itself need to be closed; only the buffer above and the file_stream need to be closed.
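As an aside that is not from the original post: since csv.writer only needs an object with a write() method, the rows could be written directly to the GCS file stream, so the whole CSV never has to sit in an in-memory buffer. A minimal, GCS-free sketch of that pattern (io.StringIO stands in for the file stream here):

```python
import csv
import io

def write_rows(stream, rows):
    """Stream CSV rows straight to a writable file-like object,
    so only one row at a time is held in memory."""
    writer = csv.writer(stream)
    for row in rows:
        writer.writerow(row)

# io.StringIO stands in for the GCS file stream in this sketch.
sink = io.StringIO()
write_rows(sink, [["id", "name"], [1, "alice"]])
print(sink.getvalue().splitlines())  # → ['id,name', '1,alice']
```

In the real code, the same write_rows call would target the stream returned by gcs.open(..., mode='w') inside the with block.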
We run the above process in a deferred, invoked by App Engine's task queue. Ultimately, we get the following error after a few invocations of our task:
Exceeded soft private memory limit of 128 MB with 142 MB after servicing 11 requests total
Clearly, then, there is a memory leak in our application. However, if the above code is correct (which we admit may not be the case), then our only other idea is that some large amount of memory is being held through the servicing of our requests (as the error message suggests).
Thus, we are wondering if some entities are kept by App Engine during the execution of a deferred. We should also note that our CSVs are ultimately written successfully, despite these error messages.
The symptom described isn't necessarily an indication of an application memory leak. Potential alternate explanations include:
the app's baseline memory footprint (which, for scripting-language sandboxes like Python, can be bigger than the footprint at instance startup time; see Memory usage differs greatly (and strangely) between frontend and backend) may be too high for the instance class configured for the app/module. To fix this, choose a higher-memory instance class (which, as a side effect, also means a faster instance). Alternatively, if the rate at which instances are killed for exceeding the memory limit is tolerable, just let GAE recycle the instances :)
peaks of activity, especially if multi-threaded request handling is enabled, mean higher memory consumption and can also overload the garbage collector. Limiting the number of requests handled in parallel, adding (higher) delays to lower-priority deferred task processing, and other measures that reduce the average request-processing rate per instance can give the garbage collector a chance to clean up leftovers from previous requests. Scalability should not be harmed (with dynamic scaling), as other instances will be started to help with the activity peak.
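For example (a hedged sketch; the value below is a placeholder to tune for your app), concurrency per instance can be capped in app.yaml for automatically scaled modules:

```yaml
# app.yaml — limit concurrent requests per instance so fewer requests
# hold memory at the same time.
automatic_scaling:
  max_concurrent_requests: 4
```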
Related Q&As:
How does app engine (python) manage memory across requests (Exceeded soft private memory limit)
Google App Engine DB Query Memory Usage
Memory leak in Google ndb library

GAE: Does execution continue after hitting "Exceeded soft private memory limit"?

One of my GAE task-queue requests exceeded the soft memory limit (log below). My understanding of the soft memory limit is that it lets the request complete and then after it finishes, it shuts down the instance.
However, from the logs it looks like execution stops when I hit the soft memory limit: I see no more logging after the memory-limit message, and from inspecting my state it does not look like the request completed. I'm not sure if it matters, but this request was executing within a task queue task run via the deferred library.
So, if a task queue task hits the soft private memory limit, does execution continue until the request completes, or does it halt immediately? Is it possible that only the logging is no longer being recorded?
Log:
2012-04-11 23:45:13.203
Exceeded soft private memory limit with 145.848 MB after servicing 3 requests total
W 2012-04-11 23:45:13.203
After handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application.
What happens here is that, at the end of the request, the handler checks the memory status; if it is above the limit, it logs the error and shuts down the instance.
Since the task completed successfully (you can see it terminates with status 200), it will not be retried.
If, while the handler is still executing, memory usage climbs far above the limit, the instance is shut down immediately and a 500 error is returned; in that case the task will be retried.
From my experience: if your instance hits the soft memory limit, your request will still be finished, but the response status will be 500.
