I am using the Google Drive API to extract metadata from the files in a Drive folder and then storing that data in a CSV file. I need to use batch processing to get data for multiple files, but I cannot figure out how to use the batch method.
This is the code I am using right now.
service = build('drive', 'v3', credentials=creds)
# Call the Drive v3 API
results = service.files().list(q=f"parents = '{drive_folder_id}'", pageSize=20,
                               fields=drive_data_fields).execute()
# gets 4 responses: kind, nextPageToken, files, incompleteSearch
items = results.get('files', [])
if not items:
    print('No files found.')
    return
print('Files:')
df = pd.DataFrame(items)
df.to_csv(drive_data_file)
service.files().list(q=f"parents = '{drive_folder_id}'", pageSize=5, fields=drive_data_fields) returns the metadata of the files in the Drive folder. What I need is to somehow send this as a batch request that returns the same metadata. However, batch.add does not give back a response, and result is None.
This is how I want to use the code:
service = build('drive', 'v3', credentials=creds)
# Call the Drive v3 API
batch = service.new_batch_http_request()
batch.add(service.files().list(q = f"parents = '{drive_folder_id}'", pageSize=5, fields=drive_data_fields))
result = batch.execute()
#above should return a response that I can write to a csv file.
The problem is that I have to get the data back so that I can write it, but here result is None, and I can't understand how batch responses work.
I've read the docs but couldn't find an answer; any help will be appreciated.
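For what it's worth, the batch interface in google-api-python-client hands each sub-request's result to a callback instead of returning it from execute(); execute() itself returns None by design. A minimal sketch of that pattern (the callback, the request_id, and the responses dict are illustrative, not from the original code):

responses = {}

def on_response(request_id, response, exception):
    # Called once per sub-request added to the batch.
    if exception is not None:
        print(f'Request {request_id} failed: {exception}')
    else:
        responses[request_id] = response.get('files', [])

batch = service.new_batch_http_request(callback=on_response)
batch.add(
    service.files().list(q=f"'{drive_folder_id}' in parents",
                         pageSize=20, fields=drive_data_fields),
    request_id='folder-listing',
)
batch.execute()  # returns None; results arrive in `responses` via the callback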
"""
BEFORE RUNNING:
---------------
1. If not already done, enable the Google Sheets API
and check the quota for your project at
https://console.developers.google.com/apis/api/sheets
2. Install the Python client library for Google APIs by running
`pip install --upgrade google-api-python-client`
"""
# TODO: Change placeholder below to generate authentication credentials. See
# https://developers.google.com/sheets/quickstart/python#step_3_set_up_the_sample
#
# Authorize using one of the following scopes:
# 'https://www.googleapis.com/auth/drive'
# 'https://www.googleapis.com/auth/drive.file'
# 'https://www.googleapis.com/auth/spreadsheets'
SCOPES = ['https://www.googleapis.com/auth/spreadsheets']
creds = None
if os.path.exists('google.json'):
    creds = Credentials.from_authorized_user_file('google.json', SCOPES)
if not creds or not creds.valid:
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())
    else:
        flow = InstalledAppFlow.from_client_secrets_file(
            'CLIENT.json', SCOPES)
        creds = flow.run_local_server(port=0)
    with open('google.json', 'w') as token:
        token.write(creds.to_json())
service = discovery.build('sheets', 'v4', credentials=creds)
spreadsheet_body = {
    'sheets': [{
        'properties': {
            'title': str(files[0])
        }
    }]
}
request = service.spreadsheets().create(body=spreadsheet_body)
if request == str(files[0]):
    pass
else:
    response = request.execute()
    pprint(response)
How can I create a condition so that, if a Google Sheet with that name already exists, the script doesn't proceed to create it? I read the documentation and didn't see a possible answer, or maybe I'm just misunderstanding the documentation. Please help, thank you.
I believe your goal is as follows.
You want to check whether a file (a Google Spreadsheet) exists in Google Drive, using its filename.
You want to achieve this using googleapis for Python.
In that case, how about the following sample script? In order to search for the file by filename, the Drive API is used.
Sample script:
filename = str(files[0])
service = build("drive", "v3", credentials=creds)
results = service.files().list(pageSize=1, fields="files(id, name)", q="name='" + filename + "' and mimeType='application/vnd.google-apps.spreadsheet' and trashed=false",).execute()
files = results.get("files", [])
if not files:
    # When the file of filename is not found, this script is run.
    print("No files were found.")
else:
    # When the file of filename is found, this script is run.
    print("Files were found.")
When this script is run, you can check whether a file with that filename exists in Google Drive.
For this, please add the scope "https://www.googleapis.com/auth/drive.metadata.readonly" as follows, and reauthorize the scopes: remove the google.json file and run the script again.
SCOPES = [
    "https://www.googleapis.com/auth/spreadsheets",
    "https://www.googleapis.com/auth/drive.metadata.readonly",
]
From your question, I couldn't tell whether you are trying to use the script in a shared drive. As written, this modification cannot be used with a shared drive. If you do want that, please include corpora="allDrives", includeItemsFromAllDrives=True, supportsAllDrives=True in the request, as sketched below.
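For reference, a shared-drive-aware version of the list call would look roughly like this (a sketch only; the query is the same as above):

results = service.files().list(
    q="name='" + filename + "' and mimeType='application/vnd.google-apps.spreadsheet' and trashed=false",
    pageSize=1,
    fields="files(id, name)",
    corpora="allDrives",
    includeItemsFromAllDrives=True,
    supportsAllDrives=True,
).execute()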
Reference:
Files: list
I am using Python to access my Google Drive account via the Google Drive API. I would like to list only those files whose names start with a certain long character string. To do this, I am using the files.list contains clause, as described in this documentation. I have discovered that this approach does not work as expected when the search string exceeds 26 characters.
To replicate, do the following:
Set up a Google Drive service account.
Create a folder in your Google Drive account and give the service account access.
Upload two empty files to this folder, one named abcdefghijklmnopqrstuvwxyz1 and another named abcdefghijklmnopqrstuvwxyza1.
Use the API to list the files in the folder, first matching on files that contain abcdefghijklmnopqrstuvwxyz and then on files that contain abcdefghijklmnopqrstuvwxyza. The first search returns both files, as expected; however, the second search should return the second file but instead returns none.
Here is the Python code I used to test:
import httplib2
from oauth2client.service_account import ServiceAccountCredentials
from apiclient import discovery
SCOPES = 'https://www.googleapis.com/auth/drive'
CLIENT_SECRET_FILE = 'client_secret.json'
credentials = ServiceAccountCredentials.from_json_keyfile_name(CLIENT_SECRET_FILE, SCOPES)
http = credentials.authorize(httplib2.Http())
service = discovery.build('drive', 'v3', http=http)
folder = '1mkebcpZ-mIhq4dLWwePqHqwzw6kUAj1-'
prefix = 'abcdefghijklmnopqrstuvwxyz'
results = service.files().list(
    fields="files(id, name)",
    q="'%s' in parents and name contains '%s'" % (folder, prefix)
).execute()
print('First result:')
print(results['files'])
prefix = 'abcdefghijklmnopqrstuvwxyza'
results = service.files().list(
    fields="files(id, name)",
    q="'%s' in parents and name contains '%s'" % (folder, prefix)
).execute()
print('Second result:')
print(results['files'])
Here is my output:
First result:
[{u'id': u'1Wwzym1ZVMTWzuX-K0eT0-xJrWI8t_YGN', u'name': u'abcdefghijklmnopqrstuvwxyza1'}, {u'id': u'16i4SSwYSbhV9hCFUaCwE7nrYdxBntIi_', u'name': u'abcdefghijklmnopqrstuvwxyz1'}]
Second result:
[]
Why doesn't the file abcdefghijklmnopqrstuvwxyza1 appear in the second result?
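I don't have an authoritative explanation for this behaviour, but one workaround (not from the original post, just a sketch) is to drop the contains clause, list the folder's children, and filter by prefix on the client side:

results = service.files().list(
    fields="files(id, name)",
    q="'%s' in parents" % folder
).execute()
# note: add pageToken handling if the folder contains many files
matches = [f for f in results['files'] if f['name'].startswith(prefix)]
print(matches)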
With the following, I can programmatically create a spreadsheet in Google sheets, but the owner of the sheet is the developer account (a crazy string ending in "gserviceaccount.com"), and my normal account can't view the spreadsheet. What else do I need to do in order to add Google users to the read/write permissions?
from oauth2client.service_account import ServiceAccountCredentials
from googleapiclient import discovery
# ... json_key is the json blob that has the credentials
scope = ['https://spreadsheets.google.com/feeds']
credentials = ServiceAccountCredentials.from_json_keyfile_dict(json_key, scope)
service = discovery.build('sheets', 'v4', credentials=credentials)
spreadsheet = {
    "properties": {"title": "my test spreadsheet"}
}
service.spreadsheets().create(body=spreadsheet).execute()
Edit:
I tried changing the scope to ['https://www.googleapis.com/auth/drive'] but the answer below still doesn't work for me. When I run
print [xx for xx in dir(service) if not xx.startswith('_')]
I get
['new_batch_http_request', u'spreadsheets']
In other words, permissions() isn't a method in service as I have service defined. What should I be doing differently?
I figured it out from reading the comment left by Chris. All that was missing from his comments is that you do in fact need to use particular scopes for his drive_service. Notice the changes in scope I use to build the different objects:
from oauth2client.service_account import ServiceAccountCredentials
from googleapiclient.discovery import build
key = '/path/to/service_account.json'
# Build 'Spreadsheet' object
spreadsheets_scope = [ 'https://www.googleapis.com/auth/spreadsheets' ]
sheets_credentials = ServiceAccountCredentials.from_json_keyfile_name(key, spreadsheets_scope)
sheets_service = build('sheets', 'v4', credentials=sheets_credentials)
# returns 'Spreadsheet' dict
# https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets#resource-spreadsheet
spreadsheet = sheets_service.spreadsheets().create(
    body={
        "properties": {
            'title': 'spreadsheets test',
        },
        "sheets": [],
    }
).execute()
# id for the created file
spreadsheetId = spreadsheet['spreadsheetId']
# url of your file
spreadsheetUrl = spreadsheet['spreadsheetUrl']
# Build 'Permissions' object
drive_scope = [ 'https://www.googleapis.com/auth/drive' ]
drive_credentials = ServiceAccountCredentials.from_json_keyfile_name(key, drive_scope)
drive_service = build('drive', 'v3', credentials=drive_credentials)
# returns 'Permissions' dict
permissions = drive_service.permissions().create(
    fileId=spreadsheetId,
    transferOwnership=True,
    body={
        'type': 'user',
        'role': 'owner',
        'emailAddress': 'example@email.com',
    }
).execute()

# apply permission
drive_service.files().update(
    fileId=spreadsheetId,
    body={'permissionIds': [permissions['id']]}
).execute()
print ('\nOpen me:\n\n%s\n' % spreadsheetUrl)
So the logic is: a 'Spreadsheet' resource is made from build with all its properties and sheet data, with the owner set to your service account. Next, a 'Drive' resource is made; this is the resource with the permissions() method. execute() returns a newly created permission id, which is then used to update() the spreadsheet file.
Service is just a generic name for the result of a discovery.build call. In this case, the 'permissions' method simply isn't available on the same service. The following code should be sufficient if changing the owner isn't required; to add someone with read and write access, this works for me:
def grant_permissions(spreadsheet_id, writer):
    drive_service = discovery.build('drive', 'v3')
    permission = drive_service.permissions().create(
        fileId=spreadsheet_id,
        body={
            'type': 'user',
            'role': 'writer',
            'emailAddress': writer,
        }
    ).execute()
    drive_service.files().update(
        fileId=spreadsheet_id,
        body={'permissionIds': [permission['id']]}
    ).execute()
To actually change the owner, the transfer ownership flag must be set:
def change_owner(spreadsheet_id, writer):
    drive_service = discovery.build('drive', 'v3')
    permission = drive_service.permissions().create(
        fileId=spreadsheet_id,
        transferOwnership=True,
        body={
            'type': 'user',
            'role': 'owner',
            'emailAddress': writer,
        }
    ).execute()
    drive_service.files().update(
        fileId=spreadsheet_id,
        body={'permissionIds': [permission['id']]}
    ).execute()
The service account being used must have the right permissions, though. I believe what worked for me was checking the G Suite box when first creating the service account.
Try to use the method Permissions: insert from the documentation. You will be able to insert a permission for a file or a Team Drive.
Here is the sample code provided from the documentation:
from apiclient import errors
# ...
def insert_permission(service, file_id, value, perm_type, role):
"""Insert a new permission.
Args:
service: Drive API service instance.
file_id: ID of the file to insert permission for.
value: User or group e-mail address, domain name or None for 'default'
type.
perm_type: The value 'user', 'group', 'domain' or 'default'.
role: The value 'owner', 'writer' or 'reader'.
Returns:
The inserted permission if successful, None otherwise.
"""
new_permission = {
'value': value,
'type': perm_type,
'role': role
}
try:
return service.permissions().insert(
fileId=file_id, body=new_permission).execute()
except errors.HttpError, error:
print 'An error occurred: %s' % error
return None
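For example, sharing a file with read/write access would be a call along these lines (the e-mail address is just a placeholder):

new_perm = insert_permission(service, file_id, 'someone@example.com', 'user', 'writer')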
Use Try it now to test live data and see the API request and response.
For further reading, check this SO post.
Can you produce a Python example of how to download a Google Sheets spreadsheet given its key and worksheet ID (gid)? I can't.
I've scoured versions 1, 2 and 3 of the API. I'm having no luck: I can't figure out their complicated ATOM-like feeds API, the gdata.docs.service.DocsService._DownloadFile private method says that I'm unauthorized, and I don't want to write an entire Google login authentication system myself. I'm about to stab myself in the face out of frustration.
I have a few spreadsheets and I want to access them like so:
username = 'mygooglelogin@gmail.com'
password = getpass.getpass()

def get_spreadsheet(key, gid=0):
    ... (help!) ...

for row in get_spreadsheet('5a3c7f7dcee4b4f'):
    cell1, cell2, cell3 = row
    ...
Please save my face.
Update 1: I've tried the following, but no combination of Download() or Export() seems to work. (Docs for DocsService here)
import gdata.docs.service
import getpass
import os
import tempfile
import csv
def get_csv(file_path):
    return csv.reader(file(file_path).readlines())

def get_spreadsheet(key, gid=0):
    gd_client = gdata.docs.service.DocsService()
    gd_client.email = 'xxxxxxxxx@gmail.com'
    gd_client.password = getpass.getpass()
    gd_client.ssl = False
    gd_client.source = "My Fancy Spreadsheet Downloader"
    gd_client.ProgrammaticLogin()

    file_path = tempfile.mktemp(suffix='.csv')
    uri = 'http://docs.google.com/feeds/documents/private/full/%s' % key
    try:
        entry = gd_client.GetDocumentListEntry(uri)
        # XXXX - The following dies with RequestError "Unauthorized"
        gd_client.Download(entry, file_path)
        return get_csv(file_path)
    finally:
        try:
            os.remove(file_path)
        except OSError:
            pass
The https://github.com/burnash/gspread library is a newer, simpler way to interact with Google Spreadsheets than the old answers here that suggest the gdata library, which is not only too low-level but also overly complicated.
You will also need to create and download (in JSON format) a Service Account key: https://console.developers.google.com/apis/credentials/serviceaccountkey
Here's an example of how to use it:
import csv
import gspread
from oauth2client.service_account import ServiceAccountCredentials
scope = ['https://spreadsheets.google.com/feeds']
credentials = ServiceAccountCredentials.from_json_keyfile_name('credentials.json', scope)
docid = "0zjVQXjJixf-SdGpLKnJtcmQhNjVUTk1hNTRpc0x5b9c"
client = gspread.authorize(credentials)
spreadsheet = client.open_by_key(docid)
for i, worksheet in enumerate(spreadsheet.worksheets()):
    filename = docid + '-worksheet' + str(i) + '.csv'
    with open(filename, 'wb') as f:
        writer = csv.writer(f)
        writer.writerows(worksheet.get_all_values())
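As a side note, newer gspread releases bundle their own service-account helper, so the oauth2client setup above isn't strictly needed anymore. A minimal sketch, assuming the key file is saved as credentials.json:

import gspread

client = gspread.service_account(filename='credentials.json')
spreadsheet = client.open_by_key(docid)  # same docid as above
print(spreadsheet.sheet1.get_all_values())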
In case anyone comes across this looking for a quick fix, here's another (currently) working solution that doesn't rely on the gdata client library:
#!/usr/bin/python
import re, urllib, urllib2
class Spreadsheet(object):
    def __init__(self, key):
        super(Spreadsheet, self).__init__()
        self.key = key

class Client(object):
    def __init__(self, email, password):
        super(Client, self).__init__()
        self.email = email
        self.password = password

    def _get_auth_token(self, email, password, source, service):
        url = "https://www.google.com/accounts/ClientLogin"
        params = {
            "Email": email, "Passwd": password,
            "service": service,
            "accountType": "HOSTED_OR_GOOGLE",
            "source": source
        }
        req = urllib2.Request(url, urllib.urlencode(params))
        return re.findall(r"Auth=(.*)", urllib2.urlopen(req).read())[0]

    def get_auth_token(self):
        source = type(self).__name__
        return self._get_auth_token(self.email, self.password, source, service="wise")

    def download(self, spreadsheet, gid=0, format="csv"):
        url_format = "https://spreadsheets.google.com/feeds/download/spreadsheets/Export?key=%s&exportFormat=%s&gid=%i"
        headers = {
            "Authorization": "GoogleLogin auth=" + self.get_auth_token(),
            "GData-Version": "3.0"
        }
        req = urllib2.Request(url_format % (spreadsheet.key, format, gid), headers=headers)
        return urllib2.urlopen(req)

if __name__ == "__main__":
    import getpass
    import csv

    email = ""  # (your email here)
    password = getpass.getpass()
    spreadsheet_id = ""  # (spreadsheet id here)

    # Create client and spreadsheet objects
    gs = Client(email, password)
    ss = Spreadsheet(spreadsheet_id)

    # Request a file-like object containing the spreadsheet's contents
    csv_file = gs.download(ss)

    # Parse as CSV and print the rows
    for row in csv.reader(csv_file):
        print ", ".join(row)
You might try using the AuthSub method described in the Exporting Spreadsheets section of the documentation.
Get a separate login token for the spreadsheets service and substitute that for the export. Adding this to the get_spreadsheet code worked for me:
import gdata.spreadsheet.service
def get_spreadsheet(key, gid=0):
    # ...
    spreadsheets_client = gdata.spreadsheet.service.SpreadsheetsService()
    spreadsheets_client.email = gd_client.email
    spreadsheets_client.password = gd_client.password
    spreadsheets_client.source = "My Fancy Spreadsheet Downloader"
    spreadsheets_client.ProgrammaticLogin()

    # ...
    entry = gd_client.GetDocumentListEntry(uri)
    docs_auth_token = gd_client.GetClientLoginToken()
    gd_client.SetClientLoginToken(spreadsheets_client.GetClientLoginToken())
    gd_client.Export(entry, file_path)
    gd_client.SetClientLoginToken(docs_auth_token)  # reset the DocList auth token
Notice I also used Export, as Download seems to give only PDF files.
(Jul 2016) All other answers are pretty much outdated or will be, either because they use GData ("Google Data") Protocol, ClientLogin, or AuthSub, all of which have been deprecated. The same is true for all code or libraries that use the Google Sheets API v3 or older.
Modern Google API access occurs using API keys (for accessing public data), OAuth2 client IDs (for accessing data owned by users), or service accounts (for accessing data owned by applications/in the cloud) primarily with the Google Cloud client libraries for GCP APIs and Google APIs Client Libraries for non-GCP APIs. For this task, it would be the latter for Python.
To make it happen your code needs authorized access to the Google Drive API, perhaps to query for specific Sheets to download, and then to perform the actual export(s). Since this is likely a common operation, I wrote a blogpost sharing a code snippet that does this for you. If you wish to pursue this even more, I've got another pair of posts along with a video that outlines how to upload files to and download files from Google Drive.
Note that there is also a Google Sheets API v4, but it's primarily for spreadsheet-oriented operations, i.e., inserting data, reading spreadsheet rows, cell formatting, creating charts, adding pivot tables, etc., not file-based requests like exporting, for which the Drive API is the correct one to use.
I wrote a blog post that demos exporting a Google Sheet as CSV from Drive. The core part of the script:
# setup
FILENAME = 'inventory'
SRC_MIMETYPE = 'application/vnd.google-apps.spreadsheet'
DST_MIMETYPE = 'text/csv'
DRIVE = discovery.build('drive', 'v3', http=creds.authorize(Http()))
# query for file to export
files = DRIVE.files().list(
    q='name="%s" and mimeType="%s"' % (FILENAME, SRC_MIMETYPE),
    orderBy='modifiedTime desc,name').execute().get('files', [])

# export 1st match (if found)
if files:
    fn = '%s.csv' % os.path.splitext(files[0]['name'].replace(' ', '_'))[0]
    print('Exporting "%s" as "%s"... ' % (files[0]['name'], fn), end='')
    data = DRIVE.files().export(fileId=files[0]['id'], mimeType=DST_MIMETYPE).execute()
    if data:
        with open(fn, 'wb') as f:
            f.write(data)
        print('DONE')
To learn more about using Google Sheets with Python, see my answer for a similar question. You can also download a Sheet in XLSX and other formats supported by Drive.
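For instance, exporting the same file as XLSX instead of CSV is just a matter of swapping the target MIME type in the snippet above (the output filename here is illustrative):

data = DRIVE.files().export(
    fileId=files[0]['id'],
    mimeType='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet').execute()
with open('inventory.xlsx', 'wb') as f:
    f.write(data)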
If you're completely new to Google APIs, then you need to take a further step back and review these videos first:
How to use Google APIs & create API projects -- the UI has changed but the concepts are still the same
Walkthrough of authorization boilerplate code (Python) -- you can use any supported language to access Google APIs; if you don't do Python, use it as pseudocode to help get you started
Listing your files in Google Drive and code deep dive post
If you already have experience with Google Workspace (formerly G Suite, Google Apps, Google "Docs") APIs and want to see more videos on using both APIs:
Sheets API video library
Drive API video library
Google Workspace (G Suite) Dev Show video series I produced
This no longer works as of gdata 2.0.1.4:
gd_client.SetClientLoginToken(spreadsheets_client.GetClientLoginToken())
Instead, you have to do:
gd_client.SetClientLoginToken(gdata.gauth.ClientLoginToken(spreadsheets_client.GetClientLoginToken()))
I wrote pygsheets as an alternative to gspread, but using the Google Sheets API v4. It has an export method to export a spreadsheet.
import pygsheets
gc = pygsheets.authorize()
# Open spreadsheet and then worksheet
sh = gc.open('my new ssheet')
wks = sh.sheet1
#export as csv
wks.export(pygsheets.ExportType.CSV)
The following code works in my case (Ubuntu 10.4, Python 2.6.5, gdata 2.0.14):
import gdata.docs.service
import gdata.spreadsheet.service
gd_client = gdata.docs.service.DocsService()
gd_client.ClientLogin(email,password)
spreadsheets_client = gdata.spreadsheet.service.SpreadsheetsService()
spreadsheets_client.ClientLogin(email,password)
#...
file_path = file_path.strip()+".xls"
docs_token = gd_client.auth_token
gd_client.SetClientLoginToken(spreadsheets_client.GetClientLoginToken())
gd_client.Export(entry, file_path)
gd_client.auth_token = docs_token
I've simplified Cameron's answer even further by removing the unnecessary object orientation. This makes the code smaller and easier to understand. I also edited the URL, which might work better.
#!/usr/bin/python
import re, urllib, urllib2
def get_auth_token(email, password):
    url = "https://www.google.com/accounts/ClientLogin"
    params = {
        "Email": email, "Passwd": password,
        "service": 'wise',
        "accountType": "HOSTED_OR_GOOGLE",
        "source": 'Client'
    }
    req = urllib2.Request(url, urllib.urlencode(params))
    return re.findall(r"Auth=(.*)", urllib2.urlopen(req).read())[0]

def download(spreadsheet, worksheet, email, password, format="csv"):
    url_format = 'https://docs.google.com/spreadsheets/d/%s/export?exportFormat=%s#gid=%s'
    headers = {
        "Authorization": "GoogleLogin auth=" + get_auth_token(email, password),
        "GData-Version": "3.0"
    }
    req = urllib2.Request(url_format % (spreadsheet, format, worksheet), headers=headers)
    return urllib2.urlopen(req)

if __name__ == "__main__":
    import getpass
    import csv

    spreadsheet_id = ""  # (spreadsheet id here)
    worksheet_id = ''  # (gid here)
    email = ""  # (your email here)
    password = getpass.getpass()

    # Request a file-like object containing the spreadsheet's contents
    csv_file = download(spreadsheet_id, worksheet_id, email, password)

    # Parse as CSV and print the rows
    for row in csv.reader(csv_file):
        print ", ".join(row)
I'm using this:
curl 'https://docs.google.com/spreadsheets/d/1-lqLuYJyHAKix-T8NR8wV8ZUUbVOJrZTysccid2-ycs/gviz/tq?tqx=out:csv' on a sheet that is set to publicly readable.
So you would need a Python version of curl (sketched below), if you can work with public sheets.
If you have a sheet with some tabs you don't want to reveal, create a new sheet, and import the ranges you want to publish into tabs on it.
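If you prefer to stay in Python rather than shell out to curl, the requests package covers the same ground. A minimal sketch against the same publicly readable sheet:

import csv
import io
import requests

url = ('https://docs.google.com/spreadsheets/d/'
       '1-lqLuYJyHAKix-T8NR8wV8ZUUbVOJrZTysccid2-ycs/gviz/tq?tqx=out:csv')
response = requests.get(url)
response.raise_for_status()  # fails loudly if the sheet is not public
for row in csv.reader(io.StringIO(response.text)):
    print(', '.join(row))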
Downloading a spreadsheet from Google Docs is pretty simple using the gsheets package.
You can follow the detailed documentation at
https://pypi.org/project/gsheets/
or follow the steps given below. I recommend reading through the documentation for better coverage.
pip install gsheets
Log in to the Google Developers Console with the Google account whose spreadsheets you want to access. Create (or select) a project and enable the Drive API and Sheets API (under Google Apps APIs).
Go to the Credentials for your project and create New credentials > OAuth client ID > of type Other. In the list of your OAuth 2.0 client IDs click Download JSON for the Client ID you just created. Save the file as client_secrets.json in your home directory (user directory).
Use the following code snippet.
from gsheets import Sheets
sheets = Sheets.from_files('client_secrets.json')
print(sheets) # will ensure authenticate connection
s = sheets.get("{SPREADSHEET_URL}")
print(s) # will ensure your file is accessible
s.sheets[1].to_csv('Spam.csv', encoding='utf-8', dialect='excel') # will download the file as csv
This isn't a complete answer, but Andreas Kahler wrote up an interesting CMS solution using Google Docs + Google App Engine + Python. Not having any experience in the area, I cannot see exactly what portion of the code may be of use to you, but check it out. I know it interfaces with a Google Docs account and plays with files, so I have a feeling you'll recognize what's going on. It should at least point you in the right direction.
Google AppEngine + Google Docs + Some Python = Simple CMS
Gspread is indeed a big improvement over GoogleCL and Gdata (both of which I've used and thankfully phased out in favor of Gspread). I think that this code is even quicker than the earlier answer to get the contents of the sheet:
username = 'sdfsdfsds@gmail.com'
password = 'sdfsdfsadfsdw'
sheetname = "Sheety Sheet"
client = gspread.login(username, password)
spreadsheet = client.open(sheetname)
worksheet = spreadsheet.sheet1
contents = []
for rows in worksheet.get_all_values():
    contents.append(rows)
(Mar 2019, Python 3) My data is usually not sensitive and I usually use a table format similar to CSV.
In such a case, one can simply publish the sheet to the web and then use it as a CSV file on a server.
(One publishes it using File -> Publish to the web ... -> Sheet 1 -> Comma separated values (.csv) -> Publish).
import csv
import io
import requests
url = "https://docs.google.com/spreadsheets/d/e/<GOOGLE_ID>/pub?gid=0&single=true&output=csv" # you can get the whole link in the 'Publish to the web' dialog
r = requests.get(url)
r.encoding = 'utf-8'
csvio = io.StringIO(r.text, newline="")
data = []
for row in csv.DictReader(csvio):
    data.append(row)
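Since pandas already appears earlier in this thread, the same published-to-the-web URL can also be read in one call if a DataFrame suits you better (just a sketch, not part of the original answer):

import pandas as pd

df = pd.read_csv(url)  # same 'Publish to the web' CSV link as above
print(df.head())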