We are using the sched module in Python 2.6 to run a function every 60 seconds. After executing, each call re-schedules itself with sched.enter() using a delay of 60 and a priority of 1. This has been working fine.
However, we have found a situation where the next execution of the scheduled function doesn't happen for several minutes, even up to 5-6 minutes later. This has been observed on a virtual machine.
What could be causing this? Is there any workaround to ensure the task gets executed regularly?
How long does the processing that happens before sched.enter is called take? I would suggest monitoring this, and perhaps taking that processing time into account in the delay parameter of sched.enter; a sketch of the idea is below.
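This is only a minimal sketch, assuming your job looks roughly like the do_work placeholder below; the point is just to subtract the measured processing time from the 60-second delay:

import sched
import time

INTERVAL = 60  # target period in seconds
scheduler = sched.scheduler(time.time, time.sleep)

def do_work():
    pass  # placeholder for your actual processing

def task():
    start = time.time()
    do_work()
    elapsed = time.time() - start
    # Re-schedule so the next run starts roughly INTERVAL seconds
    # after this one began, not after it finished.
    scheduler.enter(max(0, INTERVAL - elapsed), 1, task, ())

scheduler.enter(INTERVAL, 1, task, ())
scheduler.run()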
The load of the VM and of the VM host may also be important factors. I've seen performance degradation in some apps caused by the VM host being under too high a load and swapping.
Related
I am trying to figure out the best way to run a Python process that typically takes 10-30 minutes (an hour at most) on my local machine. The process will be manually triggered, and may not be triggered for hours or days.
I am a bit confused, because I read official ms-docs stating that one should avoid long-running processes in function apps (https://learn.microsoft.com/en-us/azure/azure-functions/performance-reliability#avoid-long-running-functions), but at the same time the functionTimeout for the Premium and Dedicated plans can be unlimited.
I am hesitant to use a standard web app with an API since it seems overkill to have it running 24/7.
Are there any ideal resources for this?
You can use consumption-based Azure Durable Functions; they can run for hours or even days.
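As a rough sketch of the classic Python Durable Functions programming model (the activity name RunLongJob and its input are placeholders, not something from your setup), the orchestrator hands the long-running work off to an activity function:

# orchestrator/__init__.py
import azure.durable_functions as df

def orchestrator_function(context):
    # The activity function does the actual 10-30 minute job;
    # the orchestrator only coordinates and can wait for hours or days.
    result = yield context.call_activity("RunLongJob", "some-input")
    return result

main = df.Orchestrator.create(orchestrator_function)

You would still need the RunLongJob activity function itself, plus a small client function (for example HTTP-triggered) that starts the orchestration on demand.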
First, it looks like this thread but it is not: An unknown error has occurred in Cloud Function: GCP Python
I have deployed Cloud Functions a couple of times and they are still working fine. Nevertheless, since last week, following the same procedure I can deploy correctly, but when testing them I get the error "An unknown error has occurred in Cloud Functions. The attempted action failed. Please try again, send feedback".
Run on its own, the script works perfectly and writes to Cloud Storage.
My Cloud Function is a zip with a Python script that loads a CSV into Cloud Storage.
The CSV weighs 160 kB and the Python script 5 kB, so I allocated 128 MiB of memory.
The execution time is 38 secs, almost half of the default timeout.
It is configured to allow only traffic within the project.
Env variables are not the problem
It's triggered by pub/sub and what I want is to schedule it when I can make it work.
I'm quite puzzled. I have such a lack of ideas right now that I've started to think everything works fine and it's the Google testing method that fails... Nevertheless, when I run the pub/sub topic from Cloud Scheduler it produces the error log without much info. By any chance has anyone had the same problem?
Thanks
Answer of myself from the past:
Finally "solved". I'm a processing a csv in the CF of 160kB, in my computer the execution time lasts 38 seconds. For some reason in the CF I need 512MB of Allocated Memory and a timeout larger than 60 secs.
Answer of myself from a closer past:
Don't test a CF using the test button, because sometimes it takes more than the max available timeout to finish, hence you'll get errors.
If you want to test it easily:
Write prints after milestones in your code to check how the script is evolving.
Use the logs interface. The prints will be displayed there ;)
Also, logs show valuable info (sometimes even readable).
Also, if you're writing output somewhere, for example to buckets, check them after the CF has finished; maybe you get a surprise.
To sum up, don't believe blindly in the testing button.
Answer of myself from the present (already regretting the prints thing):
There are nice Python libraries for logging; don't use prints for that (if you have time).
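For instance, a minimal sketch that routes the standard logging module to Cloud Logging via the google-cloud-logging client library (the entry point name and messages are just placeholders for a Pub/Sub-triggered CF):

import logging

import google.cloud.logging  # pip install google-cloud-logging

# Send records from the standard logging module to Cloud Logging
client = google.cloud.logging.Client()
client.setup_logging()

def main(event, context):
    logging.info("CF started, event id: %s", context.event_id)
    # ... process the CSV and write the result to the bucket ...
    logging.info("CF finished")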
I've already read how to avoid slow ("cold") startup times on AppEngine, and implemented the solution from the cookbook using 10 second polls, but it doesn't seem to help a lot.
I use the Python runtime, and have installed several handlers to handle my requests, none of them doing something particularly time consuming (mostly just a DB fetch).
Although the Hot Handler is active, I experience slow load times (up to 15 seconds or more per handler), and after the app has been idle for a while the log frequently shows the This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time ... message.
This is very odd. Do I have to fetch each URL separately in the Hot Handler?
The "appropriate" way of avoiding slow too many slow startup times is to use the "always on" option. Of course, this is not a free option ($0.30 per day).
I have a [python] AppEngine app which creates multiple tasks and adds them to a custom task queue. dev_appserver.py seems to ignore the rate/scheduling parameters I specify in queue.yaml and executes all the tasks immediately. This is a problem [at least for dev/testing purposes] as my tasks call a rate-throttled URL; immediate execution of all tasks breaches the throttling limits and returns me a bunch of errors.
Does anyone know if task scheduling in dev_appserver.py is disabled? I can't find anything that suggests this in the AppEngine docs. Can anyone suggest a workaround?
Thank you.
When your app is running in the development server, tasks are automatically executed at the appropriate time just as in production.
You can examine and manipulate tasks from the developer console:
http://localhost:8080/_ah/admin/taskqueue
Documentation here
The documentation lies: the development server doesn't appear to support rate limiting. (This is documented for the Java dev server, but not for Python). You can demonstrate this by pausing a queue by giving it a 0/s rate, but you'll find it executes tasks anyway. When such an app is uploaded to production, it behaves as expected.
I opened a defect.
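If you want to reproduce it, a queue.yaml along these lines (the queue name is hypothetical) should pause the queue in production but not in the dev server:

queue:
- name: paused-queue
  rate: 0/s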
The rate parameter is not for setting an absolute upper bound on TaskQueue processing. In fact, if you use for example:
rate: 10/s
bucket_size: 20
the processing can burst up to 20/s. Something more useful would be:
max_concurrent_requests: 1
which sets the maximum number of concurrent executions to 1.
However, this will not stop tasks from executing. If you are adding multiple tasks at a time but know that they need to be executed at a later time, you should probably use countdown (see the sketch after the links below).
_countdown using deferred library
countdown using Task class
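A rough sketch of both variants; do_work, the worker URL and the queue name are just placeholders:

from google.appengine.api import taskqueue
from google.appengine.ext import deferred

def do_work(arg):
    pass  # the actual task payload

# deferred library: run do_work("some-arg") roughly 60 seconds from now
deferred.defer(do_work, "some-arg", _countdown=60, _queue="throttled-queue")

# Task class / taskqueue.add: enqueue a POST to /worker with the same delay
taskqueue.add(url="/worker", params={"arg": "some-arg"},
              countdown=60, queue_name="throttled-queue")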
I've created a script to monitor the output of a serial port that receives 3-4 lines of data every half hour. The script runs fine and grabs everything that comes off the port, which at the end of the day is what matters...
What bugs me, however, is that the CPU usage seems rather high for a program that's just monitoring a single serial port: one core is always at 100% usage while this script is running.
I'm basically running a modified version of the code in this question: pyserial - How to Read Last Line Sent from Serial Device
I've tried polling the inWaiting() function at regular intervals and having it sleep when inWaiting() is 0 - I've tried intervals from 1 second down to 0.001 seconds (basically, as often as I can without driving up the cpu usage) - this will succeed in grabbing the first line but seems to miss the rest of the data.
Adjusting the timeout of the serial port doesn't seem to have any effect on CPU usage, nor does putting the listening function into its own thread (not that I really expected a difference, but it was worth trying).
Should python/pyserial be using this much cpu? (this seems like overkill)
Am I wasting my time on this quest / Should I just bite the bullet and schedule the script to sleep for the periods that I know no data will be coming?
Maybe you could issue a blocking read(1) call, and when it succeeds use read(inWaiting()) to get the right number of remaining bytes.
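A minimal sketch of that idea; the port name, baud rate and handle_data function are placeholders for your own:

import serial

# timeout=None makes read() block until data arrives, so the loop
# waits in the OS instead of busy-polling.
ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=None)

def handle_data(data):
    print(data)  # placeholder for the real processing

while True:
    first = ser.read(1)                # blocks until at least one byte arrives
    rest = ser.read(ser.inWaiting())   # then drain whatever else is buffered
    handle_data(first + rest)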
Would a system-style solution be better? Create the Python script and have it executed via Cron/Scheduled Task?
pySerial shouldn't be using that much CPU, but if it's just sitting there polling for an hour I can see how it might happen. Sleeping may be a better option, in conjunction with periodic wakeups and polls.