Batch updating specific Google spreadsheet cells using python gdata

I found some information here about updating several cells at once using the python gdata library.
However, the example code refers to cells based on a single index, for instance updating only the first entry of the spreadsheet:
# 'cells' here is a CellsFeed fetched earlier, e.g.:
# cells = client.GetCellsFeed(key, wksht_id)
batchRequest = gdata.spreadsheet.SpreadsheetsCellsFeed()
cells.entry[0].cell.inputValue = 'x'
batchRequest.AddUpdate(cells.entry[0])
Suppose I want to update specific cells knowing their location, e.g. R1C3 and R2C2. How would I go about doing this? In other words, what do I replace cells.entry[0] with to access a specific row and column?
This related answer would be promising, except all the links are dead.
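One hedged sketch, going from memory of the old gdata API: if you fetch the cells feed with a CellQuery whose return_empty is set to 'true', the feed lists every cell in row-major order, so a 1-based (row, col) pair maps to a flat index into cells.entry. The helper name cell_index is made up for illustration; the commented gdata calls are untested.

```python
def cell_index(row, col, num_cols):
    """Map a 1-based (row, col) to the 0-based position in a full cells feed
    fetched with return_empty='true' (cells listed row by row)."""
    return (row - 1) * num_cols + (col - 1)

# With gdata itself the fetch might look like (untested, old API):
# query = gdata.spreadsheet.service.CellQuery()
# query.return_empty = 'true'
# cells = client.GetCellsFeed(key, wksht_id, query=query)
# cells.entry[cell_index(1, 3, num_cols)].cell.inputValue = 'x'   # R1C3
# cells.entry[cell_index(2, 2, num_cols)].cell.inputValue = 'y'   # R2C2
```

Without return_empty='true' the feed skips blank cells and the flat-index arithmetic no longer holds, so in that case you would have to match entries by their cell.row and cell.col attributes instead.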

Related

Is there a way to paste a pandas dataframe as formulas starting on a specific cell in a google sheet?

I have been researching this for a while but am coming up with nothing. I have a process that takes data from a dataframe and adds it to a Google Sheet starting at a specific cell (e.g. B3). The sheet is templated with columns that have formulas in the middle of the output, and I have blank columns in the dataframe to match their position. I want to paste the values using the Google Sheets "paste as formulas" option so that I do not overwrite any data.
After doing a lot of research I think I have two options. The first is to break the dataframe into two separate dfs and write one on each side of the columns that have formulas, so the formulas are not overridden.
The second is to create a new tab, add the data to that tab, then copy and paste it as values using the Sheets API.
Not sure if there is a way to do this all at once. Just wondering if this is possible and I am missing arguments in some functions somewhere that allow this.
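A minimal sketch of the first option above: split each row of the dataframe's values into a left and a right block around the formula columns, then write each block to its own range with value_input_option='USER_ENTERED', which makes anything that looks like a formula paste as a formula. The column positions, range names, and helper name here are invented for illustration, not from gspread itself.

```python
def split_rows(rows, formula_start, formula_end):
    """Split list-of-lists row data around the column block
    [formula_start, formula_end), leaving those columns untouched."""
    left = [row[:formula_start] for row in rows]
    right = [row[formula_end:] for row in rows]
    return left, right

rows = [['a', 1, None, 2],      # None marks the placeholder formula column
        ['b', 3, None, 4]]
left, right = split_rows(rows, 2, 3)   # column index 2 holds the formulas

# With gspread this could then become something like (untested sketch):
# ws.update('B3', left, value_input_option='USER_ENTERED')
# ws.update('E3', right, value_input_option='USER_ENTERED')
```

That keeps everything in two write calls per paste instead of needing a staging tab and a copy-paste request.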

Excel data extraction using regular expressions through Python

This is part 1 of a series of questions I will make on this forum, so bear with me on this one. I am a novice programmer who took on a large project because I like to torture myself, so please be kind.
I am writing a Python script to process an Excel document full of accounts (See example below), each one being the same format, extract specific type of data from it, and then export that data to a SQL table. This is the process flow I have in mind when illustrating the script on paper:
The input is a large Excel document containing bookkeeping accounts with this format below:
[Account format example, with the data to be extracted highlighted. I believe the software used to produce this is an antiquated accounting package named "Zeus"](https://i.stack.imgur.com/Htdze.png)
The data to be extracted is the account name and number (they're on the same cell so I find it easier to extract them altogether so that I can use them as a primary key in a SQL table; will talk about that on another post) and the whole table of details of the account as highlighted above. Mind you, there are thousands of bookkeeping accounts of this format on the document and multiple of these are used for the same account name and number, meaning they have the same header, but different details.
The data processing will go like this:
Use regular expressions to match, extract, and store in an array, each account name and number (so that I can keep record of every account number and use them as a primary key in a SQL table)
Extract and match the content of each account details table to their respective account name and number (haven't figured out how to do that yet, however, I will be using a relationship table to link them to their primary key once data is exported).
Export the extracted data into a database software (mySQL or MS Access... will most likely use MS Access).
After the data is extracted and processed, an Excel report is to be created, consisting of a table with the name and number of the account in the first column and the details of the account in the following columns (will post about that later on).
Part 1: Excel data extraction/"scraping"
Quick note: I have tried multiple methods (MS Access, VBA, and MS Power Automate) to do this and avoid having to manually code everything, but ended up failing miserably, so I decided to bite the bullet and just do it.
So here's the question: after doing some research, I came across multiple methods to extract data from an Excel document, and several methods to use regex for web scraping and PDF data extraction.
Is there a way to extract data from an Excel document through Python using regex match? If so, how could I do that?
PS: I will be documenting my journey through this forum on another post in order to help other fellow data entry workers.
Look into these python modules:
import pandas as pd             # reading and traversing the spreadsheet
from pandas import ExcelWriter  # only needed if you write results back to Excel
Then you can use pandas dataframe like:
data = pd.read_excel('testacct.xlsx')
This reads the first sheet into a DataFrame, with generic column names if the sheet has no header row.
If there are multiple sheets, pass sheet_name=None and you get a dict of DataFrames, one per sheet. Each column then behaves like a list of row values.
You can traverse the rows like:
cols = list(data.columns)
for row in range(len(data)):
    for col in cols:
        print(data[col][row])
    print("--")
You can join the column data and strip out spaces.
Then you can apply a regex to any of the header values.
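To make that concrete, here is a regex pass over row data that pulls the account number and name out of a single header cell. The pattern assumes a hypothetical header format like "1010-001 CASH ON HAND"; the real layout is only visible in the screenshot, so adjust the regex to whatever the Zeus export actually contains.

```python
import re

# Hypothetical header format: "<number>[-<sub>] <NAME>"; tighten this pattern
# if detail rows can also start with digits.
ACCOUNT_RE = re.compile(r'^(?P<number>\d{3,4}(?:-\d{1,3})?)\s+(?P<name>.+)$')

def extract_accounts(rows):
    """Scan rows (lists of cell values, e.g. from pd.read_excel) and collect
    (account_number, account_name) pairs from any header cells found."""
    accounts = []
    for row in rows:
        text = ' '.join(str(cell) for cell in row if cell is not None).strip()
        match = ACCOUNT_RE.match(text)
        if match:
            accounts.append((match.group('number'), match.group('name')))
    return accounts

rows = [['1010-001 CASH ON HAND'],
        ['detail line', 123],
        ['2020-5 ACCOUNTS RECEIVABLE']]
```

The resulting pairs can then serve directly as the primary keys described in the question.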

Gspread update value without getting it first / addition to cell directly

I am currently using gspread with Python. To update values on my spreadsheet I have to grab the values from the sheet, update them, and then write them back.
What I am looking for is: say I have a cell with a value of 5, is there a way to make an API call that says "add 5 to cell x"? I would no longer need to grab the values first, which even saves an API call. I don't know if this command, or anything similar, is available, but I would appreciate any help here!
Poring over both the documentation for gspread and the Sheets API I'm fairly confident that this cannot be done. All the methods that update values on a sheet require you to specify a range and the list of values, but you are unable to refer to the current values within the range without first retrieving them.
Your best way to reduce API calls is to just retrieve and update the data in batches as much as possible using batch_get() and batch_update().
If you'd like to submit feedback to Google to implement something like this in a future API update you can check out their issue tracker.
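As a concrete version of the batched read-modify-write approach, the helper below builds the request body that gspread's Worksheet.batch_update() accepts, from values already fetched with batch_get(). The range names and numbers are invented for illustration, and the surrounding gspread calls are an untested sketch.

```python
def increment_payload(current, deltas):
    """current: {a1_range: current numeric value}; deltas: {a1_range: amount
    to add}. Returns the list of value-range dicts batch_update() takes."""
    return [{'range': rng, 'values': [[current[rng] + delta]]}
            for rng, delta in deltas.items()]

# Usage sketch (untested):
# fetched = ws.batch_get(['B2', 'C5'])          # one read call
# current = {'B2': 5, 'C5': 10}                 # parsed from `fetched`
# ws.batch_update(increment_payload(current, {'B2': 5}))  # one write call
```

So every round of increments costs two API calls total, no matter how many cells are touched.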

google spreadsheets api python - append content to a blank row

I am building an app with the Google Drive API. The final step is to take a bunch of data and append it to an existing spreadsheet. The problem is that when I append a row it is written in cell "A1"; it needs to be written in the next blank row, i.e. instead of overwriting the old data it has to go into a new blank row.
You should look into the Sheets API instead, specifically spreadsheets.values.append.
A spreadsheets.values.append request detects the existing table of data within the given range and adds the new row(s) of values after its last row, rather than overwriting anything.
Detailed documentation is here.
The behavior and sample output of spreadsheets.values.append are discussed here.
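For reference, a minimal append call with the google-api-python-client could look roughly like the sketch below. The body construction is plain dict building; the spreadsheet ID, sheet name, and helper name are placeholders, and the commented service call is untested.

```python
def build_append_request(sheet_name, rows):
    """Return (range, body) for spreadsheets.values.append. The API scans the
    given range for the existing 'table' and writes after its last row."""
    return f"{sheet_name}!A1", {"values": rows, "majorDimension": "ROWS"}

rng, body = build_append_request("Sheet1", [["2023-01-01", 42]])

# With an authorized Sheets service object (untested sketch):
# service.spreadsheets().values().append(
#     spreadsheetId=SPREADSHEET_ID, range=rng, body=body,
#     valueInputOption="USER_ENTERED", insertDataOption="INSERT_ROWS",
# ).execute()
```

insertDataOption="INSERT_ROWS" makes the API insert fresh rows for the new data instead of overwriting whatever follows the table.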

gspread: Retrieve Filtered Data from GoogleSheets

I have a Google Sheet with a filter/query that only shows the data that satisfies the filter's conditions. To retrieve the data in Python I use gspread, but hidden rows appear too (as if there were no filter at all).
How can I differentiate the rows selected from the ones that weren't?
I don't understand if this can work without adding more functions to gspread, or if I need to create a new function. If so, what should the function be?
I found that the fetch_sheet_metadata() method of gspread's Spreadsheet class returns the filters and the hiddenValues for each filter, which is enough to solve my problem.
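A sketch of how that can be used: pull the hiddenValues out of the fetch_sheet_metadata() result, then drop the rows whose filtered column matches one of them. The nested keys below follow my reading of the Sheets API spreadsheet resource, so print your own metadata to confirm they line up; the sample dict and helper names are invented.

```python
def hidden_values(metadata, sheet_index=0):
    """Collect the basicFilter hiddenValues for one sheet of the metadata."""
    sheet = metadata["sheets"][sheet_index]
    criteria = sheet.get("basicFilter", {}).get("criteria", {})
    hidden = set()
    for col_criteria in criteria.values():
        hidden.update(col_criteria.get("hiddenValues", []))
    return hidden

def visible_rows(rows, filter_col, hidden):
    """Keep only the rows whose filtered column value is not hidden."""
    return [row for row in rows if str(row[filter_col]) not in hidden]

# Invented sample, shaped like the Sheets API spreadsheet resource:
meta = {"sheets": [{"basicFilter": {"criteria": {"0": {"hiddenValues": ["B"]}}}}]}
rows = [["A", 1], ["B", 2], ["C", 3]]
```

Note this only covers values hidden by the filter criteria; rows hidden manually (via "Hide row") show up elsewhere in the metadata, under the grid data's row properties.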
