I have set up a Python script that can pull data from a Gmail account, but I would like to set it up so that it only pulls new messages since the last time I made the API call (I will be pinging the server regularly).
I have looked at push notifications and Pub/Sub, but I am not quite sure whether these are relevant or I should be looking at something else. Gmail also has a Users.history: list method, and I am wondering whether that can be used in any useful way.
You could list messages as you usually do, but restrict the query to messages received after a certain timestamp. This way you can poll for new messages, e.g. every minute, passing the last time you checked for messages as seconds since the epoch:
Request
q = is:unread AND after:<time_since_epoch_in_seconds>
GET https://www.googleapis.com/gmail/v1/users/me/messages?q=is%3Aunread+AND+after%3A1446461721&access_token={YOUR_API_KEY}
Response
{
  "messages": [
    {
      "id": "150c7d689ef7cdf7",
      "threadId": "150c7d689ef7cdf7"
    }
  ],
  "resultSizeEstimate": 1
}
Then you just save the timestamp when you issued the request, and use this timestamp one minute later.
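For example, here is a minimal Python sketch of that polling loop, assuming you already have an authorized Gmail API credentials object for google-api-python-client (the creds object and the one-minute interval are just placeholders):

import time
from googleapiclient.discovery import build

def poll_unread(creds):
    # creds: an already-authorized google.oauth2 credentials object.
    service = build('gmail', 'v1', credentials=creds)
    last_check = int(time.time())
    while True:
        time.sleep(60)  # poll once a minute
        now = int(time.time())
        query = 'is:unread AND after:%d' % last_check
        response = service.users().messages().list(userId='me', q=query).execute()
        for msg in response.get('messages', []):
            print(msg['id'], msg['threadId'])
        last_check = now  # reuse this timestamp on the next iteration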
Related
In the Firebase console I set up audiences based on various user properties and am now able to send notifications to different user segments via the console. Is there a way to do the same via HTTP requests to the FCM servers? There should be a trick with the "to" field, but I couldn't figure it out.
firebaser here
There is currently no way to send a notification to a user segment programmatically. It can only be done from the Firebase Console as you've found.
We're aware that allowing this through an API would expand the potential for Firebase Notifications a lot. So we're considering adding it to the API. But as usual: no commitment and no timelines, since those tend to change as priorities shift.
This has been a popular request, but unfortunately it is not yet possible. We are looking into this.
Please check Firebase Cloud Messaging announcements for any updates in the future.
You can try topic subscriptions. It is not a perfect solution, but it is the best one for me at this time:
{
  "to": "/topics/audience1_subscription",
  "data": {
    "title": "Sample title",
    "body": "Sample body"
  }
}
Yes. No solid solution is available as of now, but I have a workaround that, while not able to handle every scenario, will get the work done.
For that, you need to figure out the audience within the app and segment users with topics. Then you can send a push notification to that particular topic via the API.
Let's take an example.
Send notifications to users who didn't open the app in the last 7 days
Each time the user opens the app, unsubscribe from the topic of the previous app open and subscribe to a new topic named for the current date, e.g. "app-open?date=09-21-2022".
Then you just need to build the topic string for the current day minus 7 and send to that topic.
You can also create multiple topics for the same user for different behaviors and use them to send push notifications via the API to segmented users.
Since there is no limit on topics per user or topics per project, you can create as many topics as you need.
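As a rough sketch of that topic-per-date idea, here is what it could look like with the firebase_admin Python SDK instead of the raw HTTP endpoint (the service-account path, the topic naming scheme, and the notification text are all illustrative; note that FCM topic names only allow letters, digits and -_.~%, so the date is encoded with dashes rather than the ?date= form above):

from datetime import date, timedelta

import firebase_admin
from firebase_admin import credentials, messaging

firebase_admin.initialize_app(credentials.Certificate('service-account.json'))

def open_topic_for(day):
    # FCM topic names only allow [a-zA-Z0-9-_.~%], so encode the date with dashes.
    return 'app-open-%s' % day.isoformat()

def record_app_open(token, last_open_day=None):
    # Move the device token from the previous open-date topic to today's topic.
    if last_open_day is not None:
        messaging.unsubscribe_from_topic([token], open_topic_for(last_open_day))
    messaging.subscribe_to_topic([token], open_topic_for(date.today()))

def notify_inactive_users():
    # Target everyone whose last recorded app open was exactly 7 days ago.
    topic = open_topic_for(date.today() - timedelta(days=7))
    message = messaging.Message(
        notification=messaging.Notification(
            title='We miss you!',
            body='Open the app to see what is new.'),
        topic=topic)
    messaging.send(message)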
Yes. There is a trick with the "to" field, as shown below.
The web URL is: https://fcm.googleapis.com/fcm/send
Content-Type: application/json
Authorization: key=YOUR_SERVER_KEY
JSON data format:
{
  "to": "USER_FIREBASE_TOKEN",
  "data": {"message": "This is a Firebase Cloud Messaging Topic Message"},
  "notification": {"body": "This is firebase body"}
}
I'm writing a Python program that monitors my Gmail inbox. Whenever a new email arrives, my program should receive the actual email content. I think the best way to do this is via Google push notifications using the Gmail API.
I have made a topic and subscription, as well as manually sent and received messages using them. I have completed the Google Pub/Sub setup and have called watch() on my inbox. If I understand this correctly, a successful watch() call means that my inbox will be constantly monitored. Whenever I receive a new email, a message of the form {emailAddress, historyId} should be sent to my topic.
From this, how would I be able to actually get the email content? According to the tutorial, I would have to do something like history.list() to get the "change details for the user since their last known historyId." What exactly will these "change details" be? Will they be the actual email content?
Should my next step be to set up a REST pull subscription? I am thinking of using this link: https://cloud.google.com/pubsub/docs/reference/rest/v1/projects.subscriptions/pull so that my program can actually receive the messages sent to my topic.
To get the content of an email after a push notification
As you have seen, the users.watch call will return:
{
  "historyId": string,
  "expiration": string
}
With this historyId you can call the users.history.list endpoint to get:
{
  "history": [
    {
      "id": "[historyId]",
      "messages": [
        {
          "id": "[MSG ID]",
          "threadId": "[THREAD ID]"
        }
      ]
    }
  ]
}
From there you would call the users.messages.get endpoint to get the actual message.
NOTE: For some history IDs you will get more than one message, thread, or event per historyId, so depending on your needs you may have to handle this in your receiver.
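A minimal Python sketch of those two calls with google-api-python-client might look like the following, assuming creds is an authorized credentials object and start_history_id is the historyId carried in the Pub/Sub notification (or the last one you saved):

from googleapiclient.discovery import build

def fetch_new_messages(creds, start_history_id):
    service = build('gmail', 'v1', credentials=creds)
    history = service.users().history().list(
        userId='me', startHistoryId=start_history_id).execute()
    for record in history.get('history', []):
        for ref in record.get('messages', []):
            # users.messages.get returns the actual message content.
            msg = service.users().messages().get(
                userId='me', id=ref['id'], format='full').execute()
            print(msg['snippet'])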
There are a few options, but you could potentially use a Cloud Function to receive the notifications.
You could also simply set up a cron job to run your Python script every X minutes; again, it depends on your specific needs.
References
users.watch
users.history.list
users.messages.get
I'm working on a Python/Flask application and my logging is handled on a different server. The way I currently have it set up, a function sends a request to the external server whenever somebody visits a webpage.
This, of course, extends my TTB, because execution only continues after the request to the external server has completed. I've heard about threading, but read that it also adds a little extra time.
Summary of current code:
import os

import requests
from flask import Flask

app = Flask(__name__)

log_auth_token = os.environ["log_auth"]

def send_log(data):
    post_data = {
        "data": data,
        "auth": log_auth_token
    }
    r = requests.post("https://example.com/log", data=post_data)

@app.route('/log')
def log():
    send_log("/log was just accessed")
    return "OK"
In short:
Intended behavior: User requests webpage -> User receives response -> Request is logged.
Current behavior: User requests webpage -> Request is logged -> User receives response.
What would be the fastest way to achieve my intended behavior?
What would be the fastest way to achieve my intended behavior?
Log locally and periodically send the log files to a separate server. More specifically, create rotating log files and archive them so you don't end up with one huge file. To do this you need to configure your reverse proxy (such as NGINX).
Or log locally and create an application that lets you read the log files remotely.
Sending a log to a separate server on every request simply isn't efficient unless you have another process do it. Users shouldn't have to wait for your log action to complete.
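A minimal sketch of the "log locally" approach using Python's standard logging module with a rotating file handler (the file name, size limits, and the idea of shipping the archives with a cron job or rsync are all assumptions):

import logging
from logging.handlers import RotatingFileHandler

from flask import Flask

app = Flask(__name__)

# Rotate after ~1 MB and keep 5 archived files; a separate cron job or rsync
# can periodically ship these files to the logging server.
handler = RotatingFileHandler('access.log', maxBytes=1_000_000, backupCount=5)
handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))

logger = logging.getLogger('access')
logger.setLevel(logging.INFO)
logger.addHandler(handler)

@app.route('/log')
def log():
    logger.info('/log was just accessed')  # local write, no network round trip
    return 'OK'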
I need a small billing report for the usage of VMs inside OpenStack after they are stopped. So far I have already found a way to get the flavor information (vCPU, disk, memory) from the instance name.
Now I also want to know each VM's startup time so I can do the calculation.
Are there any good ways to fetch it from the OpenStack Python API?
It would be nice if you could paste some code as well.
(I got the answer from the china-openstack community and am sharing it here.)
In the novaclient usage module, all instances (active or terminated) can be fetched with the list API, and the detailed information is fetched via the get API; it is not clear from the Python documentation what information is exposed.
Fortunately the OpenStack API os-simple-tenant-usage documents the data structure, and its uptime field is what I want:
"tenant_usage": {
"server_usages": [
{
... (skipped)
"uptime": 3600,
"vcpus": 1
}
],
The OpenStack dashboard (at least the Folsom version) uses this API as well.
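For illustration, a minimal sketch of querying os-simple-tenant-usage through the novaclient Python bindings might look like this (the credentials, auth URL, tenant ID, and the 30-day window are placeholders):

import datetime

from novaclient import client

# Credentials, auth URL, and tenant ID are placeholders; adjust for your cloud.
nova = client.Client('2', 'admin', 'secret', 'admin', 'http://rdo:5000/v2.0/')

end = datetime.datetime.utcnow()
start = end - datetime.timedelta(days=30)

usage = nova.usage.get('4e1900cf21924a098709c23480e157c0', start, end)
for server in usage.server_usages:
    print(server['name'], server['state'], server['uptime'], server['vcpus'])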
I just wanted to retrieve a server's uptime, meaning the real time the server has been UP, not the time since its creation.
I created a new machine; while the machine was running I got an uptime value that was nicely incremented.
Then I stopped the machine and issued the request again: the response correctly reports "state": "stopped", but the uptime attribute keeps incrementing. So again, in this extension it is not really uptime, it is time since creation.
Request to the os-simple-tenant-usage extension (after obtaining an auth. token):
GET http://rdo:8774/v2/4e1900cf21924a098709c23480e157c0/os-simple-tenant-usage/4e1900cf21924a098709c23480e157c0 (with the correct tenant ID)
Response (notice the machine is stopped and uptime is a non-zero value):
{
  "tenant_usage": {
    "total_memory_mb_usage": 0.000007111111111111112,
    "total_vcpus_usage": 1.388888888888889e-8,
    "start": "2014-02-25T14:20:19.660179",
    "tenant_id": "4e1900cf21924a098709c23480e157c0",
    "stop": "2014-02-25T14:20:19.660184",
    "server_usages": [
      {
        "instance_id": "ca4465a8-38ca-40de-b138-82efcc88c7cf",
        "uptime": 1199,
        "started_at": "2014-02-25T14:00:20.000000",
        "ended_at": null,
        "memory_mb": 512,
        "tenant_id": "4e1900cf21924a098709c23480e157c0",
        "state": "stopped",
        "hours": 1.388888888888889e-8,
        "vcpus": 1,
        "flavor": "m1.tiny",
        "local_gb": 1,
        "name": "m1"
      }
    ],
    "total_hours": 1.388888888888889e-8,
    "total_local_gb_usage": 1.388888888888889e-8
  }
}
So despite its name, uptime is just the time since the server was created.
Why not just use metadata?
Custom server metadata can also be supplied at launch time.
At creation you can save a datetime, and then when the server starts up you can calculate the difference.
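A rough sketch of that metadata idea with novaclient (the nova client object, server_id, and the 'last_boot' key name are all assumptions):

import datetime

# Assumes `nova` is an authenticated novaclient Client (as in the sketch above)
# and `server_id` is the instance UUID; the 'last_boot' key name is made up.
server = nova.servers.get(server_id)

# When you (re)start the instance, stamp the current time into its metadata.
nova.servers.set_meta(server, {'last_boot': datetime.datetime.utcnow().isoformat()})

# Later, compute how long the instance has really been up.
server = nova.servers.get(server_id)
last_boot = datetime.datetime.fromisoformat(server.metadata['last_boot'])
print('up for', datetime.datetime.utcnow() - last_boot)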
I am using the facepy Facebook API to fetch messages from my Facebook account. I have got myself a long-lived access token, valid for 60 days, using the API. Now, in my program, before querying for messages I want to check whether my token has expired and, if it has, fetch a new one.
I am using get_extended_access_token, which also returns a datetime instance describing when the token expires. I don't think it is efficient to call get_extended_access_token every time I query for new messages, because it will fetch the access token again (I know it is the same as before), which is overhead.
So I googled and found that we can also use
https://graph.facebook.com/debug_token?input_token=INPUT_TOKEN&access_token=ACCESS_TOKEN
to debug the token.
So I supplied my long-lived access token for both INPUT_TOKEN and ACCESS_TOKEN, and it gave me this JSON response:
{
  "data": {
    "app_id": XXXXX,
    "is_valid": true,
    "application": "YYYYY",
    "user_id": ZZZZZZ,
    "issued_at": 1349261684,
    "expires_at": 1354445684,
    "scopes": [
      "read_mailbox"
    ]
  }
}
Now if you look at the expires_at field, it shows 1354445684 seconds; when I tried to convert that into days/months it gave me 15676 days, yet when I checked the same token in the Graph API Explorer using the debug option it showed
expires_at: 1354445684 (about 2 months)
Now, what I don't understand is how 1354445684 is equivalent to 2 months, and how to achieve this conversion in Python.
Also, please comment on which is the better approach to check whether the token has expired: the API or the Facebook URL?
Now if you look at expires_at field it is showing 1354445684 seconds and when I tried to convert it into days/months it gave me 15676 days
Then you’ve done (or understood) something wrong.
expires_at 1354445684 is a Unix Timestamp, and equals Sun, 02 Dec 2012 10:54:44 +0000 translated into a human-readable date.
And that is pretty much two months after the issued_at timestamp 1349261684, a.k.a. Wed, 03 Oct 2012 10:54:44 +0000.
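In Python you can convert those Unix timestamps with the standard datetime module, for example:

from datetime import datetime, timezone

issued_at = datetime.fromtimestamp(1349261684, tz=timezone.utc)
expires_at = datetime.fromtimestamp(1354445684, tz=timezone.utc)

print(issued_at)               # 2012-10-03 10:54:44+00:00
print(expires_at)              # 2012-12-02 10:54:44+00:00
print(expires_at - issued_at)  # 60 days, 0:00:00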
I recommend you rely on exceptions to verify whether the token is valid, instead of making a separate request to check it:
from facepy import GraphAPI

graph = GraphAPI(token)

try:
    graph.get('me')
except GraphAPI.OAuthError:
    # The token is invalid or expired; redirect the user to renew his or her token.
    pass