I am relatively new to Python and am struggling to figure out a way to copy and paste data from one Google Sheet to another using gspread. Does anyone know how to do this without using win32 to copy into an Excel file as a bridge? Please see the code and error message below:
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import pandas as pd
import numpy as np
Scope = ["https://spreadsheets.google.com/feeds",'https://www.googleapis.com/auth/spreadsheets',"https://www.googleapis.com/auth/drive.file","https://www.googleapis.com/auth/drive"]
creds = ServiceAccountCredentials.from_json_keyfile_name(r'C:\Users\Documents\Scripts\FX Rates Query\key.json', Scope)
client = gspread.authorize(creds)
sheet = client.open("Capital").sheet1
data = sheet.get_all_records()
df = pd.DataFrame(data)
df.to_excel(r'C:\Users\Documents\Reserves_extract.xlsx')
sheet1 = client.open("Cash Duration ").sheet1
mgnt_fees = sheet1.col_values(5)
fees = pd.DataFrame(mgnt_fees)
fees1 = fees[fees!=0]
print(fees1)
update = sheet1.update('B7',fees1)
##^^ERROR MSG IS COMING FROM HERE
Error msg:
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type DataFrame is not JSON serializable
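For what it's worth, the error comes from gspread trying to JSON-serialize the DataFrame passed to update(). A minimal sketch of a workaround, assuming the same sheet1 and fees1 objects as above and a reasonably recent gspread version, is to hand update() a plain list of lists instead:
# Sketch: gspread's update() expects a list of lists, not a DataFrame.
values = fees1.dropna().values.tolist()   # e.g. [['1.2'], ['3.4'], ...], one inner list per row
sheet1.update('B7', values)               # writes the values into the column starting at B7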
From your reply saying you would like to copy a specific column from Google Spreadsheet A to Google Spreadsheet B, in this case, how about the following modification?
Modified script:
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import pandas as pd
import numpy as np
Scope = ["https://spreadsheets.google.com/feeds",'https://www.googleapis.com/auth/spreadsheets',"https://www.googleapis.com/auth/drive.file","https://www.googleapis.com/auth/drive"]
creds = ServiceAccountCredentials.from_json_keyfile_name(r'C:\Users\Documents\Scripts\FX Rates Query\key.json', Scope)
client = gspread.authorize(creds)
# I modified below script.
spreadsheetA = client.open("Capital")
spreadsheetB = client.open("Cash Duration ")
srcCol = "'" + spreadsheetA.sheet1.title + "'!A1:A" # This is the column "A" of the 1st tab of Spreadsheet A.
dstCol = "'" + spreadsheetB.sheet1.title + "'!B1:B" # This is the column "B" of the 1st tab of Spreadsheet B.
src = spreadsheetA.values_get(srcCol)
del src['range']
spreadsheetB.values_update(dstCol, params={'valueInputOption': 'USER_ENTERED'}, body=src)
In this modified script, column "A" of the 1st tab of Spreadsheet A is copied to column "B" of the 1st tab of Spreadsheet B. Please modify this for your actual situation.
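If the goal is closer to the original script (for example, copying column "E" rather than column "A", and pasting starting at B7), the same pattern can be adapted as below. The exact ranges here are my assumption, so adjust them to your sheets:
# Hedged adaptation of the same values_get / values_update pattern (ranges are assumptions).
srcCol = "'" + spreadsheetA.sheet1.title + "'!E1:E"   # column "E" of the 1st tab of Spreadsheet A
dstCol = "'" + spreadsheetB.sheet1.title + "'!B7:B"   # paste starting at cell B7 of Spreadsheet B
src = spreadsheetA.values_get(srcCol)
del src['range']   # the returned range refers to Spreadsheet A, so drop it before updating B
spreadsheetB.values_update(dstCol, params={'valueInputOption': 'USER_ENTERED'}, body=src)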
References:
values_get
values_update
When I run this script, I do not get any output. It appears to have run successfully, since I do not get any errors telling me otherwise: when I run the notebook, a cell appears below the 5th cell indicating that the script ran, but nothing is populated. All of my auth is correct, because when I use the same auth in Postman to pull tag data values it works. This script used to run fine and output a table in addition to a graph.
What gives? Any help would be greatly appreciated.
Sample dataset when pulling tag data values from the Azure API:
{
    "c": 100,
    "s": "opc",
    "t": "2021-06-11T16:45:55.04Z",
    "v": 80321248.5
}
#Code
import pandas as pd
from modules.services_factory import ServicesFactory
from modules.data_service import TagDataValue
from modules.model_service import ModelService
from datetime import datetime
import dateutil.parser
pd.options.plotting.backend = "plotly"
#specify tag list, start and end times here
taglist = ['c41f-ews-systemuptime']
starttime = '2021-06-10T14:00:00Z'
endtime = '2021-06-10T16:00:00Z'
# Get data and model services.
services = ServicesFactory('local.settings.production.json')
data_service = services.get_data_service()
tagvalues = []
for tag in taglist:
    for tagvalue in data_service.get_tag_data_values(tag, dateutil.parser.parse(starttime), dateutil.parser.parse(endtime)):
        tagvaluedict = tagvalue.__dict__
        tagvaluedict['tag_id'] = tag
        tagvalues.append(tagvaluedict)
df = pd.DataFrame(tagvalues)
df = df.pivot(index='t',columns='tag_id')
fig = df['v'].plot()
fig.update_traces(connectgaps=True)
fig.show()
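Since there is no error, one way to narrow this down is to check whether the API actually returned any rows and to set the Plotly renderer explicitly. This is just a diagnostic sketch under those assumptions (same variables as above, running in a Jupyter notebook), not a confirmed fix:
# Diagnostic sketch: confirm data came back before suspecting the plotting step.
import plotly.io as pio

print('rows returned:', len(tagvalues))   # 0 here would mean the query matched no data
print(df.head())                          # confirm the pivoted DataFrame actually has values

# If the DataFrame has data but no chart appears, the renderer may be the issue.
pio.renderers.default = "notebook"        # or "browser" when running outside Jupyter
fig.show()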
I am trying to use Python's InfluxDB package to upload a DataFrame into the database.
I am using the write_points method to write points into the database, as described in the documentation (https://influxdb-python.readthedocs.io/en/latest/api-documentation.html).
Every time I use it, it only writes the last line of the DataFrame instead of the complete DataFrame.
Is this the usual behavior, or is there some problem here?
Given below is my script:
from influxdb import InfluxDBClient, DataFrameClient
import pathlib
import numpy as np
import pandas as pd
import datetime
db_client = DataFrameClient('dbserver', port, 'username', 'password', 'database',
ssl=True, verify_ssl=True)
today = datetime.datetime.now().strftime('%Y%m%d')
path = pathlib.Path('/dir1/dir/2').glob(f'pattern_to_match*/{today}.filename.csv')
for file in path:
    order_start = pd.read_csv(f'{file}')
    if not order_start.empty:
        order_start['data_line1'] = (order_start['col1'] - order_start['col2'])*1000
        order_start['data_line2'] = (order_start['col3'] - order_start['col4'])*1000
        d1 = round(order_start['data_line1'].quantile(np.arange(0,1.1,0.1)), 3)
        d2 = round(order_start['data_line2'].quantile(np.arange(0,1.1,0.1)), 3)
        out_file = pd.DataFrame()
        out_file = out_file.append(d1)
        out_file = out_file.append(d2)
        out_file = out_file.T
        out_file.index = out_file.index.set_names(['percentile'])
        out_file = out_file.reset_index()
        out_file['percentile'] = out_file.percentile.apply(lambda x: f'{100*x:.0f}%')
        out_file['tag_col'] = str(file).split('/')[2]
        out_file['time'] = pd.to_datetime('today').strftime('%Y%m%d')
        out_file = out_file.set_index('time')
        out_file.index = pd.to_datetime(out_file.index)
        db_client.write_points(out_file, 'measurement', database='database',
                               retention_policy='rp')
Can anyone please help?
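One likely explanation, offered as an assumption based on how InfluxDB stores points rather than anything confirmed from your output: every row in out_file gets the same timestamp (today's date) and the same tag set, and InfluxDB overwrites points that share a measurement, tag set, and timestamp, so only the last row survives. A hedged sketch that keeps the rows distinct by writing the existing 'percentile' and 'tag_col' columns as tags:
# Sketch under the assumption above: make each row a distinct series by tagging it.
# tag_columns is a DataFrameClient.write_points parameter; 'percentile' and 'tag_col'
# are the columns created earlier in the loop.
db_client.write_points(out_file, 'measurement',
                       database='database',
                       retention_policy='rp',
                       tag_columns=['percentile', 'tag_col'])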
I have this code that basically gives me the latitude and longitude when I give it an address. It is set up to use the Google API, but now I need to use the TomTom API and I don't know how to do it.
import openpyxl
from openpyxl import load_workbook
import pandas as pd
ds_address=('C:/Users...
wb = load_workbook(filename = ds_address)
data = wb['Sheet1']
#########################
######### 02 - API GOOGLE
#########################
import googlemaps
YOUR_API_KEY =
gmaps = googlemaps.Client(key=YOUR_API_KEY)
def geo_coding_google(adrss):
geocode_result = gmaps.geocode(adrss)
lat,lng = geocode_result[0]['geometry']['location']['lat'],geocode_result[0]['geometry']['location']['lng']
return(lat,lng)
#########################
######### 03 - Build Data
#########################
out = []
for r in range(2,101):#42207):
    try:
        id = str(data.cell(row=r, column=1).value).lower().encode('utf-8').strip()
        adrss = str(data.cell(row=r, column=2).value).lower().encode('utf-8').strip()
        adrss = str(adrss) + ",medellin,antioquia,colombia"
        print(adrss)
        g_c = geo_coding_google(str(adrss))
        #### Save Data
        out.append({'id': id, 'direccion': adrss, 'lat': g_c[0], 'lon': g_c[1]})
    except IndexError:
        # note: "NA"[0] and "NA"[1] would only store 'N' and 'A', so store "NA" directly
        out.append({'id': "NA", 'direccion': "NA", 'lat': "NA", 'lon': "NA"})
#########################
######### 04 - Export data
#########################
df = pd.DataFrame(out)
file_name="C:/Users
Thanks
You need a library that supports the TomTom Geocoding endpoint.
For example: https://geocoder.readthedocs.io/providers/TomTom.html
>>> import geocoder
>>> g = geocoder.tomtom(adrss, key='<API KEY>')
>>> result = g.latlng
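As a hedged sketch of how that could slot into the script above (assuming the geocoder package is installed, that g.latlng returns [lat, lng], and that g.ok flags failed lookups), a drop-in replacement for geo_coding_google might look like this:
import geocoder

TOMTOM_API_KEY = '<API KEY>'   # placeholder, use your own key

def geo_coding_tomtom(adrss):
    # Sketch of a drop-in replacement for geo_coding_google() above (not tested on your data)
    g = geocoder.tomtom(adrss, key=TOMTOM_API_KEY)
    if not g.ok:
        raise IndexError('no result for ' + adrss)   # keeps the existing except IndexError branch working
    lat, lng = g.latlng
    return (lat, lng)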
I'm trying to autofill the value of cell A11 into A12 with "fill series" formatting on two worksheets. This needs to be achieved using the win32com module. My code is:
from win32com.client import Dispatch
from win32com.client import constants
xl = Dispatch('Excel.Application')
xl.Visible = True
wb = xl.Workbooks.Open ('S:\\Height Peak.xls')
ws = wb.Worksheets(['Sheet1','Sheet2'])
ws.Select()
ws.Range('A10:A11').AutoFill(ws.Range('A11:A12'), xlFillSeries)
As soon as I run the code, I'm encountering the following error:
AttributeError: unknown.Range
There were 3 problems:
1) You need to iterate over your worksheets!
2) The source Range needs to be a subrange of the fill Range. That is not documented well, and I basically just figured it out from looking at examples in the docs.
3) You import constants, but you need to actually specify your constants' source! (see below)
Code:
from win32com.client import Dispatch
from win32com.client import constants as const
xl = Dispatch('Excel.Application')
xl.Visible = True
wb = xl.Workbooks.Open ('S:\\Height Peak.xls')
ws = wb.Worksheets
for sheet in ws:
    if sheet.Name.endswith("1") or sheet.Name.endswith("2"):
        sourceRange = sheet.Range('A1:A10')
        fillRange = sheet.Range('A1:A12')
        sourceRange.AutoFill(fillRange, const.xlFillSeries)
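One possible caveat, offered as a hedge: win32com.client.constants is only populated once the Excel type library has been generated (early binding), so const.xlFillSeries can itself raise an AttributeError with a plain Dispatch. A sketch of two ways around that, reusing the same sourceRange and fillRange as in the loop above:
# Option 1: force early binding so win32com generates and caches the Excel constants
from win32com.client import gencache, constants as const
xl = gencache.EnsureDispatch('Excel.Application')

# Option 2: pass the enum's literal value directly (xlFillSeries = 2 in the Excel object model)
sourceRange.AutoFill(fillRange, 2)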
I am running a pandas.io.ga query and absolutely no object is being returned. I have copied the client_secrets.json file to two folders just to be safe, because I don't know which pandas/io folder to put it in. I suspect the problem is that it can't find the client_secrets.json file, but with NO error at all I have no clue:
sudo cp ~/Desktop/client_secrets.json /Users/atrombley/anaconda/pkgs/pandas-0.14.1-np19py27_0/lib/python2.7/site-packages/pandas/io/
sudo cp ~/Desktop/client_secrets.json /Users/atrombley/anaconda/lib/python2.7/site-packages/pandas/io/
import numpy as np; import pandas as pd; import pandas.io.ga as ga; import os
print ga.read_ga(
account_id = "private",
property_id = "private",
metrics = ['users', 'pageviews'],
dimensions = ['dayOfWeek'],
start_date = "2015-01-01",
end_date = "2015-01-02",
index_col = 0,
)
>>> ## THIS IS JUST BLANK? NOTHING PRINTED?
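Since nothing is printed and no exception is raised, a small diagnostic sketch (assuming the same call and credentials as above) to check whether an empty DataFrame is coming back rather than no object at all:
# Diagnostic sketch: capture the result instead of printing it directly.
df = ga.read_ga(
    account_id="private",
    property_id="private",
    metrics=['users', 'pageviews'],
    dimensions=['dayOfWeek'],
    start_date="2015-01-01",
    end_date="2015-01-02",
    index_col=0,
)
print(type(df))    # confirms something was actually returned
print(df.shape)    # (0, n) would mean the query authenticated but matched no rows
print(repr(df))    # repr makes an empty DataFrame visible instead of printing nothing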