I use an API named Sheetsu to retrieve data from a Google Sheet.
Here is an example of Sheetsu in Python to retrieve data depending on the date parameter:
data = client.search(sheet="Session", date="03-06-2018")
This code lets me retrieve every row from my sheet called Session where the date column equals 03-06-2018.
What I can't manage to do is to retrieve, given a time value like 16:30:00, every row where 16:30:00 falls between two datetimes.
So I would like to know whether it is possible to retrieve this data with Sheetsu, whether I should use another API, or whether I could use a library like datetime to pick the data out of what Sheetsu returns.
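If Sheetsu's search only does exact matching, one option is to fetch the day's rows and then filter them in Python with datetime. A minimal sketch; the "start" and "end" column names are assumptions for illustration:
from datetime import datetime

def rows_between(client, target_time_str, date_str):
    """Return rows whose [start, end] interval contains target_time_str."""
    target = datetime.strptime(target_time_str, "%H:%M:%S").time()
    rows = client.search(sheet="Session", date=date_str)  # same call as above
    matches = []
    for row in rows:
        # "start" and "end" are hypothetical column names holding times
        start = datetime.strptime(row["start"], "%H:%M:%S").time()
        end = datetime.strptime(row["end"], "%H:%M:%S").time()
        if start <= target <= end:
            matches.append(row)
    return matches

data = rows_between(client, "16:30:00", "03-06-2018")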
The Google Sheets API is available, and there's a Dialogflow-Google Sheets sample up on GitHub that I'd recommend taking a look at to get started. You'll need to ensure that your service account's client_email has permission to access the specific spreadsheetId of interest. The sample goes into the necessary auth steps, but take a look at the Sheets documentation as well.
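A minimal sketch of reading the sheet that way, assuming google-api-python-client and google-auth are installed; the spreadsheet ID and range are placeholders:
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/spreadsheets.readonly"]
# The service account's client_email must have access to the spreadsheet
creds = service_account.Credentials.from_service_account_file(
    "service_account.json", scopes=SCOPES)
service = build("sheets", "v4", credentials=creds)

result = service.spreadsheets().values().get(
    spreadsheetId="YOUR_SPREADSHEET_ID",  # placeholder
    range="Session!A:D").execute()
rows = result.get("values", [])
Once the rows are in Python, the datetime-based filtering sketched above applies unchanged.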
I have deployed a GCP Cloud Function that updates time_created and time_updated fields in Firestore. My front-end app first creates these fields, and my function updates them after processing the documents. The snippet of code below generates the timestamps, and I use Firestore's update function to update the document. In a few instances the fields end up in Firestore as a dictionary with "seconds" and "nanoseconds" keys and their values, rather than as a Timestamp. I have been trying to track down where the issue comes from; I suspect datetime.now() sometimes does not generate a timestamp value. Help me if you have an idea or have seen something like this before. The snapshot I attached showed an instance of the wrongly formatted date returned from Firestore to my front-end.
Affected documents have the field showing as this:
time_created: {'seconds': 1637694047.0, 'nanoseconds': 580592000.0}
from datetime import datetime

# Built inside the Cloud Function and passed to Firestore's update()
update_doc = {
    u"time_created": datetime.now(),
    u"time_updated": datetime.now()
}
Per @mark-tolonen, please don't include images in questions when it's trivial to copy-and-paste the text. Various reasons.
I experienced a different issue with Firestore timestamps using the Go SDK. When I read your question, I wondered if the issues were related, but I think not.
That said, you can do some diagnosis. You can, of course, log the Python datetime.now() values to ensure you know exactly what's being applied.
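For example, a small diagnostic sketch; the logging call is just illustrative:
from datetime import datetime
import logging

# Log the exact type and value right before the update, to rule out
# datetime.now() as the culprit; it should always be a datetime instance.
now = datetime.now()
logging.info("time value: type=%s value=%r", type(now), now)
update_doc = {u"time_created": now, u"time_updated": now}
If the goal is simply a reliable creation/update time, the SERVER_TIMESTAMP sentinel in google.cloud.firestore may also be worth a look, since it sidesteps the client clock entirely.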
You could then use the underlying REST API directly to mimic/reproduce the calls that your code is making, to determine whether the error arises in the API itself, the Python SDK, or your code.
Here's projects.databases.documents.patch, which I think underlies the Set. There's also projects.databases.documents.create. In both cases, APIs Explorer provides a way for you to try the API methods in the browser and will yield e.g. the curl equivalents for you.
NOTE
The API requires a parent parameter, defined to be something of the form projects/{project_id}/databases/{databaseId}/documents. Replace {project_id} with your project ID and use (default) (with the parentheses) for the value of {databaseId}.
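A hedged sketch of driving documents.patch from Python with requests, to compare the raw API's behaviour with the SDK's; the project ID, document path, and token step are placeholders:
import requests
from datetime import datetime, timezone

project_id = "my-project"      # placeholder
doc_path = "sessions/doc123"   # placeholder collection/document
parent = f"projects/{project_id}/databases/(default)/documents"
url = f"https://firestore.googleapis.com/v1/{parent}/{doc_path}"

access_token = "..."  # e.g. the output of: gcloud auth print-access-token

body = {
    "fields": {
        "time_created": {
            # The REST API expects an RFC 3339 string as the timestampValue
            "timestampValue": datetime.now(timezone.utc).isoformat()
        }
    }
}
resp = requests.patch(
    url,
    params={"updateMask.fieldPaths": "time_created"},
    headers={"Authorization": f"Bearer {access_token}"},
    json=body,
)
print(resp.status_code, resp.json())
If the REST call always stores a proper Timestamp while the SDK path intermittently stores the seconds/nanoseconds dictionary, that points at the SDK or your code rather than the API.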
I would like to create an Excel / Google Sheets file using Python that contains about 10-12k rows fetched from a MongoDB database.
The flow is that the user requests this data from an admin panel, which calls an API to generate the report and email it to the team.
The sheet will contain 25-30 columns and about 10-12k rows.
How can I do this effectively with Python 3+?
It would be expensive to call the API for each order, so I figured I'd paginate the data; however, what is the best way to go about it?
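A hedged sketch of one approach: stream the documents from MongoDB in batches with pymongo and write a single .xlsx with pandas (openpyxl under the hood); the connection string, database, and collection names are placeholders:
import pandas as pd
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
orders = client["mydb"]["orders"]                  # placeholder names

BATCH = 1000
frames = []
buffer = []
# batch_size controls how many documents the driver fetches per round trip
for doc in orders.find({}, batch_size=BATCH):
    doc.pop("_id", None)  # ObjectId isn't Excel-friendly
    buffer.append(doc)
    if len(buffer) == BATCH:
        frames.append(pd.DataFrame(buffer))
        buffer = []
if buffer:
    frames.append(pd.DataFrame(buffer))

pd.concat(frames, ignore_index=True).to_excel("report.xlsx", index=False)
At 10-12k rows, a single find() with a reasonable batch_size is typically enough; explicit skip/limit pagination mostly matters when you have to go through a paginated API rather than the database directly.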
I have a dataframe in PySpark called tweets, with a column "tweet_id", and I want to get the full tweets using their IDs and put them in a new dataframe. Can I do that using Tweepy, twarc, or twython?
I recommend using Tweepy for handling tweets with Python. Here are the docs.
If you want to hydrate a tweet ID with its metadata, there is a method for that purpose: api.get_status(). As the docs describe:
Returns a single status specified by the ID parameter.
You'll find the information about the method and all the parameters available here.
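A minimal sketch of that hydration step feeding a new Spark dataframe; the credentials are placeholders, and tweet_mode="extended" requests the full, untruncated text:
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")  # placeholders
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# `tweets` and `spark` come from your existing PySpark session
ids = [row["tweet_id"] for row in tweets.select("tweet_id").collect()]

hydrated = []
for tweet_id in ids:
    try:
        status = api.get_status(tweet_id, tweet_mode="extended")
        hydrated.append((tweet_id, status.full_text))
    except Exception:  # deleted or protected tweets raise an API error
        continue

full_tweets = spark.createDataFrame(hydrated, ["tweet_id", "full_text"])
If your Tweepy version provides it, api.statuses_lookup() accepts up to 100 IDs per call and cuts the number of requests considerably.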
Is it possible to get lifetime data using the facebookads API in Python? I tried to use date_preset:lifetime and time_increment:1, but got a server error instead. Then I found this on their website:
"We use data-per-call limits to prevent a query from retrieving too much data beyond what the system can handle. There are 2 types of data limits:
By number of rows in response, and
By number of data points required to compute the total, such as summary row."
Is there any way I can do this? And another question: is there any way to pull raw data from a Facebook ad account, like a dump of all the data that resides on Facebook for an ad account?
The first thing to try is adding the limit parameter, which limits the number of results returned per page.
However, if the account has a large amount of history, the likelihood is that the total amount of data is too great, and in this case, you'll have to query ranges of data.
As you're looking for data by individual day, I'd start by trying to query for month blocks, and if that is still too much data, query for each date individually.
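A hedged sketch of querying one month block with the facebook_business Python SDK; the token, account ID, and field list are placeholders:
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount

FacebookAdsApi.init(access_token="ACCESS_TOKEN")  # placeholder
account = AdAccount("act_<AD_ACCOUNT_ID>")        # placeholder

params = {
    "time_range": {"since": "2018-01-01", "until": "2018-01-31"},  # one month
    "time_increment": 1,  # still one row per day
    "limit": 500,         # smaller pages, to stay under data-per-call limits
    "level": "ad",
}
insights = account.get_insights(
    fields=["date_start", "impressions", "spend"], params=params)
for row in insights:  # the SDK cursor follows paging for you
    print(row)
Wrapping this in a loop over month boundaries then reassembles the lifetime data one block at a time.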
I have a large number of users (over 400k) who have been sent a survey to complete. As part of logging into my site, I'm using the SurveyMonkey API to check whether they completed their assigned survey. I'm keying on email address. I'm thinking of using:
https://developer.surveymonkey.com/mashery/get_respondent_list
however, I don't want to page through all 400k users to find a specific email. Is there any way to do this search more efficiently?
I'm using a Django backend to ping the SurveyMonkey API.
get_respondent_list allows you to search for respondents by modified date/time range. For 400K respondents, you should store the results in a local database and only query the API when the email address you're looking for isn't found locally.
To avoid having to parse the whole list every time, you should fetch only the respondents added since the last time you checked, using that date/time range feature, and add the new respondents to your DB. There is some example code which illustrates polling for new respondents based on date/time range in SurveyMonkey's public GitHub here:
https://github.com/SurveyMonkey/python_guides/blob/master/guides/polling.py
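A hedged sketch of that pattern, modelled on the polling guide; the v2 endpoint shape, the start_modified_date filter, and the response fields should be double-checked against the docs, and the token, survey ID, and save_respondent helper are placeholders:
import requests

API_KEY = "YOUR_API_KEY"            # placeholders
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
URL = "https://api.surveymonkey.net/v2/surveys/get_respondent_list"

def fetch_new_respondents(survey_id, last_checked):
    """Pull only respondents modified since last_checked ('YYYY-MM-DD HH:MM:SS')."""
    body = {"survey_id": survey_id, "start_modified_date": last_checked}
    resp = requests.post(
        URL,
        params={"api_key": API_KEY},
        headers={"Authorization": f"bearer {ACCESS_TOKEN}",
                 "Content-Type": "application/json"},
        json=body,
    )
    return resp.json()["data"]["respondents"]

# Upsert into your Django model, then look emails up locally at login time.
for respondent in fetch_new_respondents("1234567890", "2014-01-01 00:00:00"):
    save_respondent(respondent)  # hypothetical helper writing to your DB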