Good day!
I'm trying to run this program:
import pandas as pd
import gmaps  # jupyter-gmaps plotting library
import gmaps.datasets  # bundled sample datasets
gmaps.configure(api_key='< API Key >')  # fill in with your own API key
earthquake_df = gmaps.datasets.load_dataset_as_df('earthquakes')  # load the sample earthquake data as a DataFrame
earthquake_df.head()
locations = earthquake_df[['latitude', 'longitude']]  # latitude/longitude pairs for the heatmap
weights = earthquake_df['magnitude']  # weight each point by quake magnitude
fig = gmaps.figure()
fig.add_layer(gmaps.heatmap_layer(locations, weights=weights))
fig
But no image is displayed; instead, this error occurs:
Something went wrong authenticating with Google Maps. This may be because you did not pass in an API key, or the key you passed in was incorrect. Check the browser console, look for errors that start with Google Maps API error and compare the message against the Google Maps documentation.
I don't know what that means. Can anyone help? Thanks!
I am using Python to send a query to Athena and get the table DDL, using the start_query_execution and get_query_execution functions in the awswrangler package.
import boto3
import awswrangler as wr
import time

boto3.setup_default_session(region_name="us-east-1")

sql = "show create table `table-name`"  # Athena quotes identifiers with backticks
query_exec_id = wr.athena.start_query_execution(sql=sql, database='database-name')
time.sleep(20)  # wait for the query to finish
res = wr.athena.get_query_execution(query_execution_id=query_exec_id)
The code above returns a dict that includes an S3 link to the query results.
The link can be accessed via
res['ResultConfiguration']['OutputLocation']. It's a text link: s3://.....txt
Can someone help me figure out how to access the output at that link? I tried using readlines() but it seems to error out.
Here is what I did
import urllib3
target_url = res['ResultConfiguration']['OutputLocation']
f = urllib3.urlopen(target_url)
for l in f.readlines():
    print(l)
Or, can someone suggest an easier way to get the table DDL in Python?
Keep in mind that the returned link will time out after a short while... and make sure your credentials allow you to get the data from the URL specified. If you drop the error message here, we can help you better.
Oh... "It's a text link: s3://.....txt" is not a standard URL. You can't read that with urllib3. You can use awswrangler to read the bucket.
I think the form is
wr.s3.read_fwf(...)
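For what it's worth, a minimal sketch of that idea (assuming the res dict from the question's code is still in scope, and treating the fixed-width read of the DDL text as a best guess rather than a documented recipe):

import awswrangler as wr

# S3 location of the query output, taken from the question's res dict
target_url = res['ResultConfiguration']['OutputLocation']

# read the plain-text DDL straight from S3; a single wide column keeps each line intact
ddl_df = wr.s3.read_fwf(target_url, header=None, widths=[10000])
print('\n'.join(ddl_df[0].astype(str)))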
I want to import a public dataset from Kaggle (https://www.kaggle.com/unsdsn/world-happiness?select=2017.csv) into a local Jupyter notebook. I don't want to use any credentials in the process.
I have seen various solutions, including pd.read_html, pd.read_csv, and pd.read_table (pd = pandas).
I have also found solutions that require a login.
The first set of solutions is the one I am interested in, though I see that they work on other websites because those sites expose a link to the raw data.
I have been clicking everywhere in the Kaggle interface but cannot find a direct URL to the raw data.
Bottom line: is it possible to use, say, pd.read_csv to get data from the website directly into a local notebook? If so, how?
You can automate kaggle.cli.
Follow the instructions to download and save kaggle.json for authentication: https://github.com/Kaggle/kaggle-api
import kaggle.cli
import sys
import pandas as pd
from zipfile import ZipFile

# download the data set
# https://www.kaggle.com/unsdsn/world-happiness?select=2017.csv
dataset = "unsdsn/world-happiness"

# drive the kaggle CLI programmatically by faking its command line
sys.argv = [sys.argv[0]] + f"datasets download {dataset}".split(" ")
kaggle.cli.main()

# the CLI saves a zip named after the dataset; read every CSV inside it
zfile = ZipFile(f"{dataset.split('/')[1]}.zip")
dfs = {f.filename: pd.read_csv(zfile.open(f)) for f in zfile.infolist()}
dfs["2017.csv"]
Is there a way to refresh a Tableau embedded data source using Python? I am currently using the Tableau Server Client library to refresh published data sources, which works fine. Can someone help me figure out a way?
The way you reach them is kind of annoying, from my perspective.
You need to use the populate_connections() function to load embedded data sources. It is easier if you know the name of the workbook.
import tableauserverclient as TSC

# sign in using a personal access token
server = TSC.Server(server_address='server_name', use_server_version=True)
server.auth.sign_in_with_personal_access_token(
    auth_req=TSC.PersonalAccessTokenAuth(token_name='tokenName',
                                         personal_access_token='tokenValue',
                                         site_id='site_name'))

# use RequestOptions() with a filter to pull a specific workbook
def get_workbook(name):
    req_opt = TSC.RequestOptions()
    req_opt.filter.add(TSC.Filter(req_opt.Field.Name, req_opt.Operator.Equals, name))
    # workbooks.get() returns a list you can iterate over; here we assume it finds exactly one result
    return server.workbooks.get(req_opt)[0][0]

workbook = get_workbook(name='workbook_name')  # gets the workbook

server.workbooks.populate_connections(workbook)  # loads all the embedded data sources in the workbook

for datasource in workbook.connections:  # iterate over the data source list
    # Note: each element of this list is not a TSC.DatasourceItem, so you need to load a valid one
    # using the element's "datasource_id" attribute.
    # If you try server.datasources.refresh(datasource) directly, it will fail.
    ds = server.datasources.get_by_id(datasource.datasource_id)  # loads a valid TSC.DatasourceItem
    server.datasources.refresh(ds)  # finally, you can refresh it
...
The best practice is not to embed data sources, but to publish them independently.
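When they are published independently, the refresh is direct; a minimal sketch, where 'datasource_name' is a hypothetical name and server is the signed-in object from above:

# filter data sources by name, analogous to get_workbook() above
req_opt = TSC.RequestOptions()
req_opt.filter.add(TSC.Filter(req_opt.Field.Name, req_opt.Operator.Equals, 'datasource_name'))
datasource = server.datasources.get(req_opt)[0][0]  # again assuming exactly one match
server.datasources.refresh(datasource)  # no populate_connections() round-trip needed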
Update:
There is an easier way to achieve this. There are two types of extract refresh tasks, workbook and data source; so, for embedded data sources, you perform a workbook refresh.
workbook = get_workbook(name='workbook_name')
server.workbooks.refresh(workbook.id)
You can use the "tableauserverclient" Python package. You can pip install it from PyPI.
After installing it, you can consult the docs.
I will attach an example I used some time ago:
import tableauserverclient as TSC

tableau_auth = TSC.TableauAuth('user', 'pass', 'homepage')
server = TSC.Server('server')

with server.auth.sign_in(tableau_auth):
    all_datasources, pagination_item = server.datasources.get()
    print("\nThere are {} datasources on site:".format(pagination_item.total_available))
    print([datasource.name for datasource in all_datasources])
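From that list you can then trigger a refresh of whichever data source you need; a minimal sketch, with 'datasource_name' as a hypothetical name (run inside the sign_in block above so the session is still authenticated):

for datasource in all_datasources:
    if datasource.name == 'datasource_name':  # hypothetical name
        server.datasources.refresh(datasource)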
I'm using Python's geocoder lib, and I'm trying to find city names based on lat and lng. The problem is that I always get None values. Any ideas why?
import geocoder

lat = 44.0207472303
lng = 20.9033038427
print(lat, lng)

city_name = geocoder.google([lat, lng], method='reverse')
city_name = str(city_name.city)
print(city_name)  # None
In order to use Google's geocoding API with geocoder, you need to have the Google API key set in the environment variables. See here.
To get a Geocoding API key, you must first activate the API in the Google Cloud Platform Console and obtain the proper authentication credentials. You can get API credentials here:
https://cloud.google.com/maps-platform/#get-started
After getting the API key, add the following code in order to use it:
import os
os.environ["GOOGLE_API_KEY"] = "API_KEY"
Source
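With the key set, the original snippet should return a city name instead of None; a quick sanity check using the same coordinates as above:

import os
import geocoder

os.environ["GOOGLE_API_KEY"] = "API_KEY"  # your real key here
g = geocoder.google([44.0207472303, 20.9033038427], method='reverse')
print(g.city)  # should now print a city name rather than None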
I need to pull information on a long list of JIRA issues that live in a CSV file. I'm using the JIRA REST API in Python in a small script to see what kind of data I can expect to retrieve:
#!/usr/bin/python
import csv
import sys

from jira.client import JIRA

*...redacted*

csvfile = list(csv.reader(open(sys.argv[1])))
for row in csvfile:
    r = str(row).strip("'[]'")  # strip the list formatting around each issue key
    i = jira.issue(r)
    print(i.id, i.fields.summary, i.fields.fixVersions, i.fields.resolution, i.fields.resolutiondate)
The ID (Key), Summary, and Resolution date are human-readable as expected. The fixVersions and Resolution fields come back as resource objects, as follows:
[<jira.resources.Version object at 0x105096b11>], <jira.resources.Resolution object at 0x105096d91>
How do I use the API to get the set of available fixVersions and Resolutions, so that I can populate this correctly in my output CSV?
I understand how JIRA stores these values, but the jira-python documentation doesn't explain how to harness it to grab those base values. I'd be happy to just snag the available fixVersion and Resolution values globally, but the resource info I receive doesn't map to them in an obvious way.
You can use fixVersion.name and resolution.name to get the string versions of those values.
User mdoar answered this question in his comment:
How about using version.name and resolution.name?
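Applied to the loop above, a minimal sketch (hedged on the field shapes: fixVersions is a list of Version resources, and resolution is None for unresolved issues):

for row in csvfile:
    key = str(row).strip("'[]'")
    issue = jira.issue(key)
    fix_versions = ', '.join(v.name for v in issue.fields.fixVersions)  # join the Version names
    resolution = issue.fields.resolution.name if issue.fields.resolution else 'Unresolved'
    print(issue.id, issue.fields.summary, fix_versions, resolution, issue.fields.resolutiondate)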