So I'm getting the very common
"Web process failed to bind to $PORT within 60 seconds of launch"
But none of the solutions I've tried have worked, so my question is much more conceptual.
What is supposed to be binding? My understanding is that I don't need to write code specifically to bind the worker dyno to $PORT, but rather that this failure is caused primarily by computationally intensive processes.
I don't have any really great code snippets to show here, but I've included the link to the GitHub repo for the project I'm working on.
https://github.com/therightnee/RainbowReader_MKII
There is a long start-up time when the RSS feeds are first parsed, but I've never seen it go past 30 seconds. Even so, in the current setup, going to the page should just render a template; initially there is no data processing being done at all. Testing locally, everything runs great, and even with the data parsing it doesn't take more than a minute in any test case.
This leads me to believe that somewhere I need to be setting or using the $PORT variable, but I don't know where or how.
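For reference, here is the kind of binding I assume Heroku expects from the web process, sketched for a generic Flask app (hypothetical names, not taken from my repo):

    import os
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "hello"

    if __name__ == "__main__":
        # Heroku injects the port to listen on via the PORT environment variable;
        # the web process has to bind to it (on 0.0.0.0) within the boot window.
        port = int(os.environ.get("PORT", 5000))
        app.run(host="0.0.0.0", port=port)

(If the app is started with gunicorn instead, I gather the equivalent Procfile line would be something like web: gunicorn app:app --bind 0.0.0.0:$PORT.)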
Thanks!
Related
I've seen this Stack Overflow question, as well as this one. The first says the white space is from being blocked by local work, but stepping through my program, the ~20 second delay occurs right when I call dask.compute() and not in the surrounding code. The asker said their issue was resolved by disabling garbage collection, but that did nothing for me. The second says to check the scheduler profiler, but that doesn't seem to be taking a long time either.
My task graph is dead simple: I'm calling a function on 500 objects with no task dependencies (and repeating this 3 times; I'll link the functions once I figure out this issue). Here is my Dask performance report HTML, and here is the section of code that calls dask.compute().
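In structure it boils down to something like this (a stripped-down sketch with placeholder names, not the actual code):

    import dask

    @dask.delayed
    def preprocess(case):
        # stand-in for the real per-object function
        return case * 2

    cases = list(range(500))                 # ~500 independent objects
    tasks = [preprocess(c) for c in cases]   # no dependencies between tasks
    results = dask.compute(*tasks)           # the ~20 second delay appears right at this call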
Any suggestions as to what could be causing this? Any suggestions as to how I can better profile to figure this out?
This doesn't seem to be the main problem, but lines 585/587 will result in the computed results being transferred to the local machine, which could slow things down or introduce a bottleneck. If the results are not used locally downstream, one option is to leave the computations on the remote workers by calling client.compute (assuming the client is named client):
# changing line 587: preprocessedcases = dask.compute(*preprocessedcases)
# client.compute takes the list of delayed objects and returns futures that stay on the cluster
preprocessedcases = client.compute(preprocessedcases)
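If the results do turn out to be needed on the local machine later, they can be pulled back explicitly once the futures finish. A minimal sketch (assuming a dask.distributed client and a list of delayed objects; the names are placeholders, not your code):

    import dask
    from dask.distributed import Client

    client = Client()  # assumes a scheduler/cluster is already running

    @dask.delayed
    def preprocess(case):
        return case * 2  # stand-in for the real function

    preprocessedcases = [preprocess(c) for c in range(500)]

    futures = client.compute(preprocessedcases)  # work and results stay on the workers
    results = client.gather(futures)             # transfer back only when actually needed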
First, this looks like the following thread, but it is not the same issue: An unknown error has occurred in Cloud Function: GCP Python
I have deployed Cloud Functions a couple of times before and they are still working fine. Nevertheless, since last week, following the same procedure, I can deploy correctly, but when testing them I get the error "An unknown error has occurred in Cloud Functions. The attempted action failed. Please try again, send feedback".
Outside of Cloud Functions, the script works perfectly and writes to Cloud Storage.
My Cloud Function is a zip containing a Python script that loads a CSV into Cloud Storage.
The CSV weighs 160 kB and the Python script 5 kB, so I allocated 128 MiB of memory.
The execution time is 38 seconds, well within the default timeout.
It is configured to allow only internal traffic within the project.
Environment variables are not the problem.
It's triggered by Pub/Sub, and what I want is to schedule it once I can make it work.
I'm quite puzzled. I'm so out of ideas right now that I've started to think everything works fine and it's Google's testing method that fails... Nevertheless, when I trigger the Pub/Sub topic from Cloud Scheduler, it produces the error log without much info. By any chance, has anyone had the same problem?
Thanks
Answer from my past self:
Finally "solved". I'm processing a 160 kB CSV in the CF; on my computer the execution takes 38 seconds. For some reason, in the CF I need 512 MB of allocated memory and a timeout larger than 60 seconds.
Answer from my more recent self:
Don't test a CF using the test button, because sometimes it takes longer than the maximum available timeout to finish, so you'll get errors.
If you want to test it easily:
Write prints after milestones in your code to check how the script is evolving.
Use the logs interface. The prints will be displayed there ;)
Also, logs show valuable info (sometimes even readable).
Also, if you're writing output, for example to buckets, check them after the CF has finished; you might get a surprise.
To sum up, don't blindly trust the testing button.
Answer from my present self (already regretting the prints thing):
There are nice Python libraries for logging; don't use prints for that (if you have time).
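For example, the standard logging module already gives you severities and filtering that plain prints don't, and its output still ends up in Cloud Logging. A minimal sketch of what I mean (the function name is just a placeholder, not my actual CF):

    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)

    def process_csv(event, context):
        # Background (Pub/Sub-triggered) functions receive the event payload and a context object.
        logger.info("function triggered, event id: %s", context.event_id)
        # ... load and process the CSV ...
        logger.info("upload to Cloud Storage finished")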
I have a Django app with a Celery instance that consumes and synchronizes a very large amount of data multiple times a day. I'll note that I am using asyncio to call a library for an API that wasn't made for async.
Looking at my host's profiler, RAM and CPU usage aren't going wild, but I know it's becoming slower and slower every week, because that Celery instance also handles emails at a specific time, and they go out hours and hours later as the weeks pass.
Restarting the instance seems to fix everything instantly, leading me to believe I have something like a memory leak (but the RAM isn't going wild) or something like unclosed threads (I have no idea how to detect this, and the CPU isn't going wild).
Any ideas?
This sounds like a very familiar issue with Celery which is still open on GitHub: here
We are experiencing similar issues and unfortunately didn't find a good workaround.
It seems that this comment found the cause, but we didn't have time to find and implement a workaround, so I can't say for sure. Please update if you find something that helps. As this is open source, no one is responsible for making a fix but the community itself :)
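In the meantime, one way to confirm whether it really is leaked memory or runaway threads is to log both from inside a small periodic task and watch the trend over days. A rough sketch (the task name is a placeholder; assumes your existing Celery setup and a Linux host, where ru_maxrss is reported in kB):

    import resource
    import threading

    from celery import shared_task
    from celery.utils.log import get_task_logger

    logger = get_task_logger(__name__)

    @shared_task
    def log_worker_health():
        # Peak resident memory of this worker process so far (kB on Linux).
        rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        # Number of live threads in this worker process.
        thread_count = threading.active_count()
        logger.info("worker health: rss=%s kB, threads=%s", rss_kb, thread_count)

Scheduling it every few minutes with Celery beat and comparing the numbers right after a restart versus a week later should at least tell you which resource is creeping up.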
I have a Python script that sometimes runs a process that lasts ~5-60 seconds. During this time, ten calls to session.publish() appear to be ignored until the process is done. As soon as it finishes, all ten messages are published in a flood.
I have corroborated this by running the Crossbar.io router in debug mode: it shows the log entries for the published messages only after that time is over, not during the run as expected.
The script in question is long, complex and includes a combined frontend and backend for Crossbar/Twisted/AutobahnPython. I feel I would risk misreporting the problem if I tried to condense and include it here.
What reasons are there for publish to not happen instantaneously?
A couple of unsuccessful tries so far:
Source: Twisted needs 'non-blocking code'. So I tried to incorporate reactor.callLater, but without success (I also don't really know how to do this for a publish event).
I looked into the idea of using Pool to spawn workers to perform the publish.
The AutobahnPython repo doesn't seem to have any examples that really include this kind of situation.
Thanks!
What reasons are there for publish to not happen instantaneously?
The reactor has to get a chance to run for I/O to happen. The example code doesn't let the reactor run because it keeps execution in a while loop in user code for a long time.
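One common way around this is to push the long-running, blocking work off the reactor thread so the reactor stays free to service I/O, including the publishes. A minimal sketch using deferToThread (the function and topic names are placeholders, not taken from the original script):

    import time

    from twisted.internet.threads import deferToThread

    def long_blocking_job():
        # stand-in for the 5-60 second process; it must not touch the session or reactor itself
        time.sleep(30)
        return "done"

    def start_job(session):
        # Run the blocking work in a thread pool; the callback fires back on the
        # reactor thread, so the publish goes out immediately instead of queueing.
        d = deferToThread(long_blocking_job)
        d.addCallback(lambda result: session.publish("com.example.job_finished", result))
        return d

If the worker thread itself needs to publish progress mid-run, those calls have to be handed back to the reactor thread, e.g. with reactor.callFromThread(session.publish, topic, payload).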
I've already read how to avoid slow ("cold") startup times on App Engine, and implemented the solution from the cookbook using 10-second polls, but it doesn't seem to help much.
I use the Python runtime, and have installed several handlers to handle my requests, none of them doing something particularly time consuming (mostly just a DB fetch).
Although the Hot Handler is active, I experience slow load times (up to 15 seconds or more per handler), and the log frequently shows the "This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time ..." message after the app has been idle for a while.
This is very odd. Do I have to fetch each URL separately in the Hot Handler?
The "appropriate" way of avoiding slow too many slow startup times is to use the "always on" option. Of course, this is not a free option ($0.30 per day).