Create a report of 10k rows from MongoDB - Python API

I would like to create an Excel/Google sheet using Python that contains about 10-12k rows fetched from a MongoDB database.
The flow is that the user requests this data from an admin panel, which calls an API to generate the report and email it to the team.
The sheet will contain 25-30 columns and about 10-12k rows.
How can I do this effectively with Python 3+?
It would be expensive to call the API for each order, so I figured I'd paginate the data; however, what is the best way to go about it?
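One common pattern is to page through the collection on _id and build the spreadsheet in a single pass. A minimal sketch, assuming pymongo and pandas (plus openpyxl for the Excel writer); the connection string, database, collection, and file names are placeholders:

import pandas as pd
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed connection string
orders = client["reports_db"]["orders"]             # hypothetical db/collection names

BATCH_SIZE = 2000
rows, last_id = [], None

while True:
    # Page on _id instead of skip() so later pages stay cheap.
    query = {"_id": {"$gt": last_id}} if last_id else {}
    batch = list(orders.find(query).sort("_id", 1).limit(BATCH_SIZE))
    if not batch:
        break
    rows.extend(batch)
    last_id = batch[-1]["_id"]

df = pd.DataFrame(rows)
df.to_excel("orders_report.xlsx", index=False)       # needs openpyxl installed

At 10-12k rows this fits comfortably in memory; the same DataFrame could also be pushed to a Google Sheet (e.g. with gspread) or attached to the email.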

Related

How to deal with SQL database in GitHub

I'm doing a web scraping + data analysis project that consists of scraping product prices every day, cleaning the data, and storing it in a PostgreSQL database. The final user won't have access to the data in this database (the daily scrapes grow huge, so eventually I won't be able to upload them to GitHub), but I want to explain how to replicate the project. The steps are basically:
Scrape with Selenium (Python) and save the raw data into CSV files (already in GitHub);
Read these CSV files, clean the data, and store it in the database (the cleaning script is already in GitHub);
Retrieve the data from the database to create dashboards and anything else I want (not yet implemented).
To clarify, my question is how I can teach someone who sees my project to replicate it, given that this person won't have the database info (tables, columns). My idea is:
Add SQL queries in a folder, showing how to create the database skeleton (same tables and columns);
Add in README info like creating the environment variables to access the database;
Is it okay to do that? I'm looking for best practices in this context. Thanks!
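For the replication instructions themselves, a minimal sketch of what the README could point to, assuming a checked-in schema script and hypothetical environment variable names:

import os
import psycopg2

# Connection details come from environment variables documented in the README
# (the variable names here are assumptions).
conn = psycopg2.connect(
    host=os.environ["DB_HOST"],
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
)

# Apply the schema scripts checked into the repo (hypothetical path).
with conn, conn.cursor() as cur:
    with open("sql/create_tables.sql") as f:
        cur.execute(f.read())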

How to get an individual row from a BigQuery table in less than a second?

I have an aggregated data table in BigQuery that has millions of rows. This table is growing every day.
I need a way to get one row from this aggregated table in milliseconds to append data to a real-time event.
What is the best way to tackle this problem?
BigQuery is not built to respond in milliseconds, so you need another solution in between. It is perfectly fine to use BigQuery to do the large aggregation calculation, but you should never serve directly from BQ where response time is a matter of milliseconds.
Also be aware that, if this is a web application for example, many reloads of a page could cost you a lot of money, as you pay per query.
There are many architectural solutions to fix such issues, but which one you should use is hard to tell without project context and objectives.
For real-time data we often use Pub/Sub somewhere in between, but that might be an issue if the (near) real-time demand is an aggregate.
You could also use a materialized-view style approach by exporting the aggregated data to a sub-component, for example Cloud Storage -> Pub/Sub, a SQL instance, Memorystore, or any other kind of microservice, as sketched below.
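As one illustration of the "serve from a sub-component" idea, a minimal sketch that refreshes a Redis/Memorystore cache from the aggregated BigQuery table and serves single-key lookups from there; the table, column, key, and endpoint names are assumptions:

import redis
from google.cloud import bigquery

bq = bigquery.Client()
cache = redis.Redis(host="10.0.0.3", port=6379)   # assumed Memorystore endpoint

# Refresh job: copy the aggregate into the cache on a schedule.
rows = bq.query(
    "SELECT entity_id, metric_value FROM `my-project.my_dataset.aggregates`"
).result()
for row in rows:
    cache.set(f"agg:{row.entity_id}", row.metric_value)

# Request path: a single key lookup returns in milliseconds.
def get_aggregate(entity_id):
    raw = cache.get(f"agg:{entity_id}")
    return float(raw) if raw is not None else None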

BigQuery - Update Tables With Changed/Deleted Records

Presently, we send entire files to the Cloud (Google Cloud Storage) to be imported into BigQuery and do a simple drop/replace. However, as the file sizes have grown, our network team doesn't particularly like the bandwidth we are taking while other ETLs are also trying to run. As a result, we are looking into sending up changed/deleted rows only.
Trying to find the path/help docs on how to do this. Scope: I will start with a simple example. We have a large table with 300 million records. Rather than sending 300 million records every night, send over the X million that have changed or been deleted. I then need to incorporate the changed/deleted records into the BigQuery tables.
We presently use Node JS to move from Storage to BigQuery and Python via Composer to schedule native table updates in BigQuery.
Hope to get pointed in the right direction for how to start down this path.
Stream the full row on every update to BigQuery.
Let the table accommodate multiple rows for the same primary entity.
Write a view, e.g. table_last, that picks the most recent row.
This way all your queries run near-real-time on real data.
You can occasionally deduplicate the table by running a query that rewrites the table with only the latest row per entity.
Another approach is to keep one final table plus one staging table that you stream into, and schedule a MERGE statement every X minutes to write the updates from the streamed table into the final table, as sketched below.
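A minimal sketch of the scheduled MERGE approach, run from Python with the google-cloud-bigquery client; the project, dataset, table, and column names are placeholders, and the is_deleted flag handling is an assumption:

from google.cloud import bigquery

client = bigquery.Client()

merge_sql = """
MERGE `my-project.my_dataset.orders_final` T
USING (
  -- keep only the latest streamed row per id
  SELECT * EXCEPT(row_num) FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC) AS row_num
    FROM `my-project.my_dataset.orders_staging`
  )
  WHERE row_num = 1
) S
ON T.id = S.id
WHEN MATCHED AND S.is_deleted THEN
  DELETE
WHEN MATCHED THEN
  UPDATE SET payload = S.payload, updated_at = S.updated_at
WHEN NOT MATCHED AND NOT S.is_deleted THEN
  INSERT (id, payload, updated_at) VALUES (S.id, S.payload, S.updated_at)
"""

client.query(merge_sql).result()   # schedule this, e.g. from Composer, every X minutes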

Python: How to update (overwrite) Google BigQuery table using pandas dataframe

I have a table in Google BigQuery (GBQ) with almost 3 million records (rows) so far, created from data coming from a MySQL db every day. This data is inserted into the GBQ table using a Python pandas DataFrame (.to_gbq()).
What is the optimal way to sync changes from MySQL to GBQ, in this direction, with Python?
Several different ways to import data from MySQL to BigQuery that might suit your needs are described in this article. For example, binlog replication:
This approach (sometimes referred to as change data capture - CDC) utilizes MySQL’s binlog. MySQL’s binlog keeps an ordered log of every DELETE, INSERT, and UPDATE operation, as well as Data Definition Language (DDL) data that was performed by the database. After an initial dump of the current state of the MySQL database, the binlog changes are continuously streamed and loaded into Google BigQuery.
Seems to be exactly what you are searching for.
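If a periodic full overwrite is acceptable instead of CDC, a minimal sketch of the .to_gbq() path the question already uses (assuming pandas-gbq and SQLAlchemy with PyMySQL; all connection details and table names are placeholders):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:password@host/mydb")   # assumed MySQL URL
df = pd.read_sql("SELECT * FROM orders", engine)                    # hypothetical table

# if_exists="replace" drops and recreates the BigQuery table;
# if_exists="append" would add rows instead.
df.to_gbq(
    destination_table="my_dataset.orders",
    project_id="my-project",
    if_exists="replace",
)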

Dialogflow - Retrieve data between a time period from a Google Sheet

I use an API named Sheetsu to retrieve data from a Google Sheet.
Here is an example of Sheetsu in Python to retrieve data depending on the date parameter:
data = client.search(sheet="Session", date="03-06-2018")
This code allows me to retrieve every row from my sheet called Session where the date column equals 03-06-2018.
What I can't manage to do is retrieve, given a time value like 16:30:00, every row where that value falls between two datetimes.
So I would like to know whether it is possible to retrieve the data with Sheetsu, whether I should use another API, or whether I could use a library like datetime to filter the data pulled from Sheetsu.
The Google Sheets API is available, and there's a Dialogflow-Google Sheets sample up on GitHub that I'd recommend taking a look at to get started. You'll need to ensure that your service account client_email has permission to access the specific spreadsheetId of interest. The sample goes through the necessary auth steps, but take a look at the Sheets documentation as well.
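Whichever API returns the rows, the time-window filter itself can be done client-side with the standard datetime module. A minimal sketch, assuming the rows come back as dicts with a time column in HH:MM:SS format (the column name and window values are placeholders):

from datetime import datetime

def rows_in_window(rows, start="16:00:00", end="17:00:00"):
    # rows: list of dicts as returned by the date search; "time" column assumed.
    fmt = "%H:%M:%S"
    start_t = datetime.strptime(start, fmt).time()
    end_t = datetime.strptime(end, fmt).time()
    return [r for r in rows if start_t <= datetime.strptime(r["time"], fmt).time() <= end_t]

# data = client.search(sheet="Session", date="03-06-2018")   # as in the question
# afternoon_sessions = rows_in_window(data)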
