My job is migrating calendar data; we are using the Google Calendar API.
There are 280,000 target records.
The method we execute is as follows:
・Calendar API v3 events.insert
https://developers.google.com/google-apps/calendar/v3/reference/events/insert
・Batch requests
https://developers.google.com/google-apps/calendar/batch
・Exponential backoff
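For reference, a rough sketch of what such a batched insert with exponential backoff might look like using google-api-python-client (the batch size, retry limits, and all names below are placeholders, not our actual code):

import random
import time

def migrate(service, calendar_id, pending, max_attempts=8):
    """Insert `pending` event bodies in small batches, retrying failures with backoff.

    `service` is an authorized Calendar API client, e.g.
    googleapiclient.discovery.build("calendar", "v3", credentials=creds).
    """
    attempt = 0
    while pending and attempt < max_attempts:
        retry = []

        def callback(request_id, response, exception):
            # Rate-limited or otherwise failed items go back on the pile.
            if exception is not None:
                retry.append(pending[int(request_id)])

        batch = service.new_batch_http_request(callback=callback)
        for i, event in enumerate(pending[:50]):  # keep batches small
            batch.add(
                service.events().insert(calendarId=calendar_id, body=event),
                request_id=str(i),
            )
        batch.execute()

        pending = retry + pending[50:]
        if retry:
            # Exponential backoff with jitter before retrying the failed items.
            time.sleep((2 ** attempt) + random.random())
            attempt += 1
        else:
            attempt = 0
    return pending  # anything still unsent after max_attempts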
This has been run many times during testing, and previously it completed without problems.
Currently, however, every attempt fails with the error "Calendar usage limits exceeded"; it cannot be executed even once, and this state has lasted for about a week.
I understand the cause to be the usage limits described here:
https://support.google.com/a/answer/2905486?hl=en
Support has already raised our QPD (queries per day) quota to 2,000,000.
Therefore, I believe the quota problem itself should be resolved.
However, I am still in a state where I cannot execute the API even once when I run the program.
I want to get out of this situation.
I suspect the restriction needs to be lifted on Google's side.
Could I ask for your advice?
We are having recurring problems with our Python container instances running on Cloud Run. We currently have 20 services deployed, which run fine for weeks at a time and then get sudden spikes in request latency, along with failing ping checks and rising container instance time. We cannot see any added traffic during these spells of higher latency in our systems. Common access points such as the database and cache all seem normal.
The region is europe-west1.
Does anyone have any tips on what to check? Or have you experienced similar problems?
Latency and container instance time: (graphs from the original post not included)
I had to buy Google Cloud support to get a good answer to this. They told me to make adjustments to my Cloud Run service instances, but none of them had any effect. They later admitted that this was due to a problem on their end. It is a shame that as a user you do not get any feedback on problems like these when using Google Cloud Platform; a simple notification in the Google Cloud console for affected users would be a great help, but I suspect they prefer to keep these things quiet so as not to worsen their service availability numbers.
First, this looks like this thread, but it is not: An unknown error has occurred in Cloud Function: GCP Python
I have deployed Cloud Functions a few times and they are still working fine. Nevertheless, since last week, following the same procedure I can deploy correctly, but when testing them I get the error "An unknown error has occurred in Cloud Functions. The attempted action failed. Please try again, send feedback".
Outside of Cloud Functions the script works perfectly and writes to Cloud Storage.
My Cloud Function is a zip containing a Python script that loads a CSV into Cloud Storage.
The CSV weighs 160 kB and the Python script 5 kB, so I allocated 128 MiB of memory.
The execution time is 38 seconds, almost half of the default timeout.
It is configured to allow only traffic within the project.
Environment variables are not the problem.
It is triggered by Pub/Sub, and what I want is to schedule it once I can make it work.
I'm quite puzzled. I have such a lack of ideas right now that I've started to think everything works fine and it is the Google testing method that fails... Nevertheless, when I run the Pub/Sub topic from Cloud Scheduler it produces an error log without much info. By any chance has anyone had the same problem?
Thanks
Answer from my past self:
Finally "solved". I'm processing a 160 kB CSV in the CF; on my computer the execution takes 38 seconds. For some reason, in the CF I need 512 MB of allocated memory and a timeout larger than 60 seconds.
Answer from my more recent past self:
Don't test a CF using the test button, because sometimes it takes longer than the maximum available timeout to finish, hence you'll get errors.
If you want to test it easily:
Write prints after milestones in your code to check how the script is progressing.
Use the Logs interface; the prints will be displayed there ;)
Also, the logs show valuable info (sometimes even readable).
Also, if you're sending output to, say, buckets, check them after the CF has finished; you may be in for a surprise.
To sum up, don't blindly trust the testing button.
Answer from my present self (already regretting the prints thing):
There are nice Python libraries for logging; don't use prints for that (if you have time).
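For what it's worth, a minimal sketch of what that might look like in a Pub/Sub-triggered function, using only the standard logging module (the entry point name and the processing steps are assumptions, not the actual function):

import logging

# Entries from the standard logging module show up in Cloud Logging with
# proper severities, unlike bare print().
logging.basicConfig(level=logging.INFO)

def process_csv(event, context):
    # Hypothetical Pub/Sub-triggered entry point (name is an assumption).
    logging.info("Triggered by message ID %s", context.event_id)
    try:
        # ... load the CSV, process it, write the result to Cloud Storage ...
        logging.info("Milestone: CSV processed, result written to the bucket")
    except Exception:
        # logging.exception records the full traceback at ERROR severity.
        logging.exception("CSV processing failed")
        raise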
I'm building an installation that will run for several days and needs to get notifications from a GMail inbox in real time. The Gmail API is great for many of the features I need, so I'd like to use it. However, it has no IDLE command like IMAP.
Right now I've created a Gmail API implementation that polls the mailbox every couple of seconds. This works great, but times out after a while (I get "connection reset by peer"). So, is it reasonable to tear down the session and restart it every half an hour or so to keep it active (like with IDLE)? Is that a terrible, terrible hack that will have Google busting down my door in the middle of the night?
Would the proper solution be to log in with IMAP as well and use IDLE to notify my GMail API module to start up and pull in changes when they occur? Or should I just suck it up and create an IMAP only implementation?
I would definitely recommend against IMAP; note that even with the IMAP IDLE command it isn't real time: under the covers it's just polling every few (5?) seconds and then pushing out to the connection. (Experiment yourself and see the delay there.)
Querying history.list() frequently is quite cheap and should be fine. If this is for a sizeable number of users, you may want to do a bit of optimization, such as intelligent backoff for inactive mailboxes (e.g. every time there are no updates, back off by an extra 5 s, up to some maximum like a minute or two).
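A rough sketch of that polling loop, assuming an authorized Gmail `service` built with google-api-python-client and a starting history ID from an earlier call (the delays, the variable names, and the `handle_change` helper are just illustrative):

import time

def poll_history(service, start_history_id, min_delay=2, max_delay=120):
    """Poll users.history.list, backing off while the mailbox is idle."""
    history_id = start_history_id
    delay = min_delay
    while True:
        resp = service.users().history().list(
            userId="me", startHistoryId=history_id).execute()
        changes = resp.get("history", [])
        if changes:
            for change in changes:
                handle_change(change)      # your handler (placeholder)
            history_id = resp["historyId"]
            delay = min_delay              # activity: go back to fast polling
        else:
            # No updates: back off by an extra 5 s up to the maximum.
            delay = min(delay + 5, max_delay)
        time.sleep(delay)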
Google will definitely not bust down your door or likely even notice unless you're doing it every second with 1M users. :)
Real push notifications for the API are definitely something that's called for.
You are getting "connection reset by peer" because you are exceeding the Google quota. Every Gmail API request has a quota cost, defined here.
I'm working on a GAE app using Python. The app involves a crowd-sourced data collection system, and the data used in the app is submitted by users from all over the country. I'm currently on the default (free) quotas, but I'm faced with the problem of ensuring at least 99% uptime for my app.
The challenge is that Google blocks any further requests being routed to your app once you exhaust your allocated quotas. During a recent testing spree, one person was able to build an automated posting script that quickly exhausted the CPU quota; after that, the app would only serve an HTTP 403 Forbidden status code for each request instead of calling a request handler. I have since patched the system not to allow automated postings, but how can I guarantee that human users don't cause a similar "blackout" in production?
I know of the Quota API, but I think it can only give me profiling info for my app. I want a way of slowing down the rate of requests (e.g. per minute, for the per-minute quotas) without serving error pages or blackouts.
Any suggestions?
One common solution to this problem is to delegate the tasks to a rate-limited task queue.
For example:
queue:
- name: mail-throttle
  rate: 2000/d
  bucket_size: 10
- name: background-processing-throttle
  rate: 5/s
In this way you can control the usage of all parts of your application, forcing them to stay within the range of the available quotas.
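A minimal sketch of what enqueuing onto such a throttled queue might look like with the App Engine taskqueue API (the handler URL and payload names below are placeholders):

from google.appengine.api import taskqueue

def handle_submission(data):
    # Enqueue the work on the throttled queue defined above instead of
    # processing it inside the user-facing request.
    taskqueue.add(
        queue_name="background-processing-throttle",
        url="/tasks/process-submission",   # your task handler (assumed path)
        params={"payload": data},
    )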
A couple of caveats:
1. Queues deliver a best effort FIFO order
2. Enqueuing/Execution of a task counts toward several quotas
I created a Hello World website on Google App Engine. It uses Django 1.1 without any patches.
Even though it is just a very simple web page, it takes a long time to respond and often times out.
Any suggestions on how to solve this?
Note: it responds quickly after the first call.
Google has now added a payment option, "Always On", which costs $0.30 a day.
Using this feature, your application will not have to cold start any more.
Always On

While warmup requests help your application scale smoothly, they do not help if your application has very low amounts of traffic. For high-priority applications with low traffic, you can reserve instances via App Engine's Always On feature.

Always On is a premium feature which reserves three instances of your application, never turning them off, even if the application has no traffic. This mitigates the impact of loading requests on applications that have small or variable amounts of traffic. Additionally, if an Always On instance dies accidentally, App Engine automatically restarts the instance with a warmup request. As a result, Always On applications should be sure to do as much initialization as possible during warmup requests.

Even after enabling Always On, your application may experience loading requests if there is a sudden increase in traffic.

To enable Always On, go to the Billing Settings page in your application's Admin Console, and click the Always On checkbox.
http://code.google.com/intl/de-DE/appengine/docs/adminconsole/instances.html
This is a horrible suggestion but I'll make it anyway:
Build a little client application or just use wget with cron to periodically access your app, maybe once every 5 minutes or so. That should keep Google from putting it into a dormant state.
I say this is a horrible suggestion because it's a waste of resources and an abuse of Google's free service. I'd expect you to do this only during a short testing/startup phase.
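If you do go that route, a minimal stand-alone pinger along those lines might look like this (the URL and interval are placeholders):

import time
import urllib.request

APP_URL = "https://your-app-id.appspot.com/"  # placeholder

while True:
    try:
        # Hitting the app keeps an instance loaded so later requests skip the cold start.
        urllib.request.urlopen(APP_URL, timeout=10).read()
    except Exception as exc:
        print("ping failed:", exc)
    time.sleep(300)  # every 5 minutes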
To summarize this thread so far:
Cold starts take a long time
Google discourages pinging apps to keep them warm, but people do not know of an alternative
There is an issue filed to pay for a warm instance (for the Java runtime)
There is an issue filed for Python. Among other things, .py files are not precompiled.
Some apps are disproportionately affected (can't find the Google Groups ref or issue)
A March 2009 thread about Python says <1 s (!)
I see less talk about Python on this issue.
If it's responding quickly after the first request, it's probably just a case of getting the relevant process up and running. Admittedly it's slightly surprising that it takes so long that it times out. Is this after you've updated the application and verified that the AppEngine dashboard shows it as being ready?
"First hit slowness" is quite common in many web frameworks. It's a bit of a pain during development, but not a problem for production.
One more tip which might improve the response time:
Enabling billing does increase the quotas and, in my personal experience, improves the overall responsiveness of an application as well, probably because Google gives billing-enabled applications higher priority. For instance, an app with billing disabled can send up to 5-10 emails per request, while an app with billing enabled easily copes with 200 emails per request.
Just be sure to set low billing limits; you never know when Slashdot, Digg, or Hacker News will notice your site :)
I encountered the same with a Pylons-based app. I serve the initial page as static and include a dummy AJAX call in it to bring the app up before the user types in their credentials. It is usually enough to avoid a lengthy response... Just an idea you might use before you actually have a million users ;).
I used Pingdom, for obvious reasons; no cold starts is a bonus. Of course the customers will soon come flocking and it will be a non-issue.
You may want to try CloudUp. It pings your Google apps periodically to keep them active. It's free and you can add as many apps as you want. It also supports Azure and Heroku.