Google Analytics API not yielding Custom Segments [duplicate] - python

This question already has answers here:
Google Analytics API - retrieve Custom Segment Id by its name
(2 answers)
Closed 4 years ago.
I am using a Python script to get Segment info from Google Analytics. All of the built in Segments that come with Google Analytics are printing fine, but the Custom Segments I have created are not showing up.
Here are the relevant portions of the Script:
def get_service(api_name, api_version, scope, key_file_location,
                service_account_email):
    """Get a service that communicates to a Google API.

    Args:
      api_name: The name of the api to connect to.
      api_version: The api version to connect to.
      scope: A list of auth scopes to authorize for the application.
      key_file_location: The path to a valid service account p12 key file.
      service_account_email: The service account email address.

    Returns:
      A service that is connected to the specified API.
    """
    credentials = ServiceAccountCredentials.from_p12_keyfile(
        service_account_email, key_file_location, scopes=scope)
    http = credentials.authorize(httplib2.Http())

    # Build the service object.
    service = build(api_name, api_version, http=http)
    return service
def get_segments(service):
    try:
        segments = service.management().segments().list().execute()
    except TypeError, error:
        # Handle errors in constructing a query.
        print 'There was an error in constructing your query : %s' % error
        return
    except HttpError, error:
        # Handle API errors.
        print ('There was an API error : %s : %s' %
               (error.resp.status, error.resp.reason))
        return

    # The results of the list method are stored in the segments object.
    # The following code shows how to iterate through them.
    for segment in segments.get('items', []):
        print 'Segment Id = %s' % segment.get('id')
        print 'Segment kind = %s' % segment.get('kind')
        print 'Segment segmentId = %s' % segment.get('segmentId')
        print 'Segment Name = %s' % segment.get('name')
        print 'Segment Definition = %s' % segment.get('definition')
        if segment.get('created'):
            print 'Created = %s' % segment.get('created')
            print 'Updated = %s' % segment.get('updated')
        print
def main():
    # Define the auth scopes to request.
    scope = ['https://www.googleapis.com/auth/analytics.readonly']

    # Use the developer console and replace the values with your
    # service account email and relative location of your key file.
    service_account_email = '************'
    key_file_location = '*********'

    # Authenticate and construct service.
    service = get_service('analytics', 'v3', scope, key_file_location,
                          service_account_email)
    get_segments(service)

if __name__ == '__main__':
    main()

You need to have the Collaborate permission enabled for the custom segments you have created:
Manage Segments#Set Segment availability
Visit this link, go to the 'Set Segment availability' section there, and apply the 'Collaborate' permission option as shown in the link.
After this, your segments will be pulled by the API code you posted ;)
Please note: the Analytics Core Reporting API doesn't have access to custom segments; they can only be accessed through the Analytics Management API.
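Once the Collaborate permission is set, custom segments appear in the same list() response alongside the built-in ones, and can be told apart by their type field. A minimal sketch of that (the response dict here is hand-made sample data; the BUILT_IN/CUSTOM values follow the Management API v3 segments resource):

```python
def split_segments(response):
    """Separate built-in from custom segments in a
    management().segments().list() response."""
    built_in, custom = [], []
    for segment in response.get('items', []):
        if segment.get('type') == 'CUSTOM':
            custom.append(segment)
        else:
            built_in.append(segment)
    return built_in, custom

# Hand-made sample data mimicking the API response shape.
sample = {'items': [
    {'id': '-1', 'name': 'All Sessions', 'type': 'BUILT_IN'},
    {'id': 'gaid::abc', 'name': 'My Segment', 'type': 'CUSTOM'},
]}
built_in, custom = split_segments(sample)
print([s['name'] for s in custom])  # ['My Segment']
```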

Related

How to add permission to builtin role in storage account using Azure Python SDK?

I have a storage account, and I want to give one of my App Services permission via the "Storage Account Key Operator Service Role", similar to the action below in the Azure portal.
Any good workarounds are also welcome.
Here are some workarounds for you.
1. Use PowerShell; refer to this link.
New-AzureRmRoleAssignment -ObjectId <ObjectId> -RoleDefinitionName "Storage Account Key Operator Service Role" -Scope "<your storage account resourceID>"
2. Use the Azure CLI; refer to this link.
az role assignment create --role "Storage Account Key Operator Service Role" --assignee-object-id "<object-id>" --scope "<your storage account resourceID>"
3. Use the REST API; refer to this link.
PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentName}?api-version=2015-07-01
4. Use an ARM template; refer to this link.
After spending a lot of time on this, I was able to authorize the App Service using Python. Here's the approach I followed.
The credentials you use should belong to a subscription Owner, because a Contributor is not allowed to make access changes.
Here are the Python packages one needs to install:
azure-mgmt-authorization==0.50.0
azure-graphrbac==0.51.0
Here's the code snippet:
import uuid

# Imports for the packages listed above.
from azure.common.credentials import ServicePrincipalCredentials
from azure.graphrbac import GraphRbacManagementClient
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentProperties

# `config` holds the service-principal settings (loaded elsewhere).
subscription_id = config['SUBSCRIPTION_ID']

credentials = ServicePrincipalCredentials(
    client_id=config['AZURE_CLIENT_ID'],
    secret=config['AZURE_CLIENT_SECRET'],
    tenant=config['AZURE_TENANT_ID']
)

graph_credentials = ServicePrincipalCredentials(
    client_id=config['AZURE_CLIENT_ID'],
    secret=config['AZURE_CLIENT_SECRET'],
    tenant=config['AZURE_TENANT_ID'],
    resource="https://graph.windows.net"
)

def get_object_id(full_app_name, resource_name_prefix=None, resource_type="Microsoft.Web/sites"):
    gcli = GraphRbacManagementClient(graph_credentials, config['AZURE_TENANT_ID'])
    sp = gcli.service_principals.list(filter="displayName eq '%s'" % full_app_name)
    sp = next(sp, False)
    if sp:
        print("Found Service Principal %s" % sp.display_name)
        return sp.object_id
    else:
        raise Exception("Service Principal not found")

def delete_keylistrole_appservice(resource_group_name, storage_name, role_assignment_name):
    resource_provider = "Microsoft.Storage"
    resource_type = "storageAccounts"
    scope = '/subscriptions/%s/resourceGroups/%s/providers/%s/%s/%s' % (
        subscription_id, resource_group_name, resource_provider, resource_type, storage_name)
    auth_cli = AuthorizationManagementClient(credentials, subscription_id, api_version="2015-07-01")
    resp = auth_cli.role_assignments.delete(scope, role_assignment_name)
    print("%s App Service access revoked %s Storage account" % (role_assignment_name, storage_name))

def assign_keylistrole_appservice(resource_group_name, storage_name, app_service_name):
    resource_provider = "Microsoft.Storage"
    resource_type = "storageAccounts"
    scope = '/subscriptions/%s/resourceGroups/%s/providers/%s/%s/%s' % (
        subscription_id, resource_group_name, resource_provider, resource_type, storage_name)
    role_assignment_name = str(uuid.uuid4())
    # Role definition ID of "Storage Account Key Operator Service Role".
    role_id = "/subscriptions/%s/providers/Microsoft.Authorization/roleDefinitions/%s" % (
        subscription_id, "81a9662b-bebf-436f-a333-f67b29880f12")
    principal_id = get_object_id(app_service_name)
    props = RoleAssignmentProperties(role_definition_id=role_id, principal_id=principal_id)
    auth_cli = AuthorizationManagementClient(credentials, subscription_id, api_version="2015-07-01")
    resp = auth_cli.role_assignments.create(scope, role_assignment_name, properties=props)
    print("%s App Service authorized to access %s Storage account" % (app_service_name, storage_name))
    return role_assignment_name
Note that graph_credentials differ from credentials: they require resource="https://graph.windows.net".
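For reference, the two resource-ID formats the snippet builds inline can be sketched as small helpers. The GUID below is the built-in "Storage Account Key Operator Service Role" definition ID used above; the subscription and resource names are placeholders:

```python
def storage_scope(subscription_id, resource_group, storage_name):
    """Fully-qualified resource ID (scope) of a storage account."""
    return ('/subscriptions/%s/resourceGroups/%s/providers/'
            'Microsoft.Storage/storageAccounts/%s'
            % (subscription_id, resource_group, storage_name))

def role_definition_id(subscription_id, role_guid):
    """Fully-qualified ID of a role definition within a subscription."""
    return ('/subscriptions/%s/providers/Microsoft.Authorization/'
            'roleDefinitions/%s' % (subscription_id, role_guid))

print(storage_scope('my-sub-id', 'my-rg', 'mystorage'))
# /subscriptions/my-sub-id/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorage
```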

Youtube API commentThreads.update return 400 error when replying to topLevelComment

Reporting a possible bug on Youtube Api commentThreads.update
Name of API affected: CommentThreads: update
Issue summary: When calling CommentThreads.update to update the topLevelComment returned by CommentThreads.list, I get a 400 error. I was using this API in an internal tool that lets me respond to comments. It had been working fine for months, and in the last few days (~2016-11-07), with no changes to any code on my side, the API started returning a 400 error. All my internal comment systems are halted right now, as I could not figure out how to fix this. Everything I tried and tested leads to the 400 error and the message below:
"While this can be a transient error, it usually indicates that the requests input is invalid."
Even using the API console, I got the same error!
My bet? This is a transient error that I hope Google can fix.
Steps to reproduce issue:
Retrieve "heldForReview" threads using CommentThreads.list
Try to update the returned topLevelComment changing ["topLevelComment"]["snippet"]["textOriginal"] = "some text". Optionally, change the "moderationStatus" to "published". The error is the same.
This behaviour can be easily spotted using the Python API samples, specifically the file comment_threads.py from https://github.com/youtube/api-samples/blob/master/python/comment_threads.py
Expected output: The top level comment from Thread should become visible (after setting the moderationStatus to published) and have the "some text" as the first child comment as its answer.
Actual results:
{
  "error": {
    "errors": [
      {
        "domain": "youtube.commentThread",
        "reason": "processingFailure",
        "message": "The API server failed to successfully process the request. While this can be a transient error, it usually indicates that the requests input is invalid. Check the structure of the \u003ccode\u003ecommentThread\u003c/code\u003e resource in the request body to ensure that it is valid.",
        "locationType": "other",
        "location": "body"
      }
    ],
    "code": 400,
    "message": "The API server failed to successfully process the request. While this can be a transient error, it usually indicates that the requests input is invalid. Check the structure of the \u003ccode\u003ecommentThread\u003c/code\u003e resource in the request body to ensure that it is valid."
  }
}
Note: I am providing sample code below.
#!/usr/bin/python

# Usage example:
# python comment_threads.py --channelid='<channel_id>' --videoid='<video_id>' --text='<text>'

import httplib2
import os
import sys

from apiclient.discovery import build_from_document
from apiclient.errors import HttpError
from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage
from oauth2client.tools import argparser, run_flow

# The CLIENT_SECRETS_FILE variable specifies the name of a file that contains
# the OAuth 2.0 information for this application, including its client_id and
# client_secret. You can acquire an OAuth 2.0 client ID and client secret from
# the {{ Google Cloud Console }} at
# {{ https://cloud.google.com/console }}.
# Please ensure that you have enabled the YouTube Data API for your project.
# For more information about using OAuth2 to access the YouTube Data API, see:
# https://developers.google.com/youtube/v3/guides/authentication
# For more information about the client_secrets.json file format, see:
# https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
CLIENT_SECRETS_FILE = "client_secrets.json"

# This OAuth 2.0 access scope allows for full read/write access to the
# authenticated user's account and requires requests to use an SSL connection.
YOUTUBE_READ_WRITE_SSL_SCOPE = "https://www.googleapis.com/auth/youtube.force-ssl"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"

# This variable defines a message to display if the CLIENT_SECRETS_FILE is
# missing.
MISSING_CLIENT_SECRETS_MESSAGE = """
WARNING: Please configure OAuth 2.0

To make this sample run you will need to populate the client_secrets.json file
found at:
   %s
with information from the APIs Console
https://console.developers.google.com

For more information about the client_secrets.json file format, please visit:
https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
""" % os.path.abspath(os.path.join(os.path.dirname(__file__), CLIENT_SECRETS_FILE))

# Authorize the request and store authorization credentials.
def get_authenticated_service(args):
    flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE,
                                   scope=YOUTUBE_READ_WRITE_SSL_SCOPE,
                                   message=MISSING_CLIENT_SECRETS_MESSAGE)
    storage = Storage("%s-oauth2.json" % sys.argv[0])
    credentials = storage.get()
    if credentials is None or credentials.invalid:
        credentials = run_flow(flow, storage, args)
    # Trusted testers can download this discovery document from the developers
    # page and it should be in the same directory with the code.
    with open("youtube-v3-discoverydocument.json", "r") as f:
        doc = f.read()
    return build_from_document(doc, http=credentials.authorize(httplib2.Http()))

# Call the API's commentThreads.list method to list the existing comments.
def get_comments(youtube, video_id, channel_id):
    results = youtube.commentThreads().list(
        part="snippet",
        videoId=video_id,
        channelId=channel_id,
        # ONLY FILTER COMMENTS THAT ARE HELD FOR REVIEW.
        # THOSE THREADS ONLY HAVE ONE TOP-LEVEL COMMENT.
        moderationStatus="heldForReview",
        textFormat="plainText").execute()
    for item in results["items"]:
        comment = item["snippet"]["topLevelComment"]
        author = comment["snippet"]["authorDisplayName"]
        text = comment["snippet"]["textDisplay"]
        print "Comment by %s: %s" % (author, text)
    return results["items"]

# Call the API's commentThreads.insert method to insert a comment.
def insert_comment(youtube, channel_id, video_id, text):
    insert_result = youtube.commentThreads().insert(
        part="snippet",
        body=dict(
            snippet=dict(
                channelId=channel_id,
                videoId=video_id,
                topLevelComment=dict(
                    snippet=dict(
                        textOriginal=text
                    )
                )
            )
        )
    ).execute()
    comment = insert_result["snippet"]["topLevelComment"]
    author = comment["snippet"]["authorDisplayName"]
    text = comment["snippet"]["textDisplay"]
    print "Inserted comment for %s: %s" % (author, text)

# Call the API's commentThreads.update method to update an existing comment.
def update_comment(youtube, comment):
    comment["snippet"]["topLevelComment"]["snippet"]["textOriginal"] = 'updated'
    update_result = youtube.commentThreads().update(
        part="snippet",
        body=comment
    ).execute()
    comment = update_result["snippet"]["topLevelComment"]
    author = comment["snippet"]["authorDisplayName"]
    text = comment["snippet"]["textDisplay"]
    print "Updated comment for %s: %s" % (author, text)

if __name__ == "__main__":
    # The "channelid" option specifies the YouTube channel ID that uniquely
    # identifies the channel for which the comment will be inserted.
    argparser.add_argument("--channelid",
        help="Required; ID for channel for which the comment will be inserted.")
    # The "videoid" option specifies the YouTube video ID that uniquely
    # identifies the video for which the comment will be inserted.
    argparser.add_argument("--videoid",
        help="Required; ID for video for which the comment will be inserted.")
    # The "text" option specifies the text that will be used as comment.
    argparser.add_argument("--text", help="Required; text that will be used as comment.")
    args = argparser.parse_args()

    if not args.channelid:
        exit("Please specify channelid using the --channelid= parameter.")
    if not args.videoid:
        exit("Please specify videoid using the --videoid= parameter.")
    if not args.text:
        exit("Please specify text using the --text= parameter.")

    youtube = get_authenticated_service(args)
    try:
        # All the available methods are used in sequence just for the sake of
        # an example.
        # Insert channel comment by omitting videoId:
        # REMOVED AS IT IS NOT RELEVANT TO THE ERROR REPORT:
        # insert_comment(youtube, args.channelid, None, args.text)
        # Insert video comment:
        # REMOVED AS IT IS NOT RELEVANT TO THE ERROR REPORT:
        # insert_comment(youtube, args.channelid, args.videoid, args.text)
        video_comments = get_comments(youtube, args.videoid, None)
        if video_comments:
            update_comment(youtube, video_comments[0])
    except HttpError, e:
        print "An HTTP error %d occurred:\n%s" % (e.resp.status, e.content)
    else:
        print "Inserted, listed and updated top-level comments."
Of course, you will need your authorization files inside the same directory.
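One thing worth trying when debugging this 400 is to send back a minimal commentThread body instead of echoing the full resource returned by list, since extra read-only fields in the body are a common cause of processingFailure errors. This is only a sketch of that idea, not a confirmed fix; the field names follow the commentThread resource:

```python
def build_update_body(thread, new_text, publish=True):
    """Build a stripped-down commentThreads.update body from a thread
    previously returned by commentThreads.list."""
    top = thread['snippet']['topLevelComment']
    snippet = {'textOriginal': new_text}
    if publish:
        snippet['moderationStatus'] = 'published'
    return {
        'id': thread['id'],
        'snippet': {
            'topLevelComment': {
                'id': top['id'],
                'snippet': snippet,
            }
        }
    }
```

The result would then be passed as body= to youtube.commentThreads().update(part="snippet", ...) in place of the full comment dict.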

BigQuery with Python

Is there any way to run queries repeatedly on Google BigQuery using a Python script?
I want to query a dataset on the Google BigQuery platform for a week's data at a time, and I want to do this over a whole year. It is a bit too tedious to query the dataset 52 times; instead, I would prefer to write a Python script (as I know Python).
I hope someone could point me in the right direction regarding this.
BigQuery supplies client libraries for several languages -- see https://cloud.google.com/bigquery/client-libraries -- and in particular for Python, with docs at https://developers.google.com/resources/api-libraries/documentation/bigquery/v2/python/latest/?_ga=1.176926572.834714677.1415848949 (you'll need to follow the hyperlinks to understand the docs).
https://cloud.google.com/bigquery/bigquery-api-quickstart gives an example of a command-line program, in either Java or Python, that uses the Google BigQuery API to run a query on one of the available Sample Datasets and display the result. After imports, and setting a few constants, the Python script boils down to
storage = Storage('bigquery_credentials.dat')
credentials = storage.get()

if credentials is None or credentials.invalid:
    # Run oauth2 flow with default arguments.
    credentials = tools.run_flow(FLOW, storage, tools.argparser.parse_args([]))

http = httplib2.Http()
http = credentials.authorize(http)

bigquery_service = build('bigquery', 'v2', http=http)

try:
    query_request = bigquery_service.jobs()
    query_data = {'query': 'SELECT TOP( title, 10) as title, COUNT(*) as revision_count FROM [publicdata:samples.wikipedia] WHERE wp_namespace = 0;'}
    query_response = query_request.query(projectId=PROJECT_NUMBER,
                                         body=query_data).execute()
    print 'Query Results:'
    for row in query_response['rows']:
        result_row = []
        for field in row['f']:
            result_row.append(field['v'])
        print '\t'.join(result_row)
except HttpError as err:
    print 'Error:', pprint.pprint(err.content)
except AccessTokenRefreshError:
    print ("Credentials have been revoked or expired, please re-run "
           "the application to re-authorize")
As you see, just 30 lines, mostly concerned with getting and checking authorization and handling errors. The "core" part, net of such considerations, is really just half those lines:
bigquery_service = build('bigquery', 'v2', http=http)
query_request = bigquery_service.jobs()
query_data = {'query': 'SELECT TOP( title, 10) as title, COUNT(*) as revision_count FROM [publicdata:samples.wikipedia] WHERE wp_namespace = 0;'}
query_response = query_request.query(projectId=PROJECT_NUMBER,
                                     body=query_data).execute()
print 'Query Results:'
for row in query_response['rows']:
    result_row = []
    for field in row['f']:
        result_row.append(field['v'])
    print '\t'.join(result_row)
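To address the original ask of running the same query 52 times, the weekly date windows can be generated in plain Python and substituted into the query string inside query_data. The start date below is an arbitrary example:

```python
from datetime import date, timedelta

def weekly_ranges(start, weeks=52):
    """Yield (start, end) date pairs for consecutive 7-day windows."""
    for i in range(weeks):
        week_start = start + timedelta(days=7 * i)
        yield week_start, week_start + timedelta(days=6)

for week_start, week_end in weekly_ranges(date(2014, 1, 6), weeks=2):
    print('%s .. %s' % (week_start, week_end))
# 2014-01-06 .. 2014-01-12
# 2014-01-13 .. 2014-01-19
```

Each pair can then be formatted into the WHERE clause of the query template before calling execute(), so one script covers the whole year.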
You can use Google Cloud Dataflow for Python, and if it's a one-time thing, run it from your terminal or equivalent. Alternately, you can have a shell script in an App Engine cron job that loops through the code 52 times to get your data; see Google Dataflow scheduling.

Authorization headaches for Freebase API (Python)

I'm trying to get a very simple Python script to talk to Freebase.
All the examples I've found use the simple / api key authorization model. So I made a Google Developer account, made a project, and tried to get a key as Google says to. It demands I provide a list of numeric IP addresses that I'll call from. Not feasible, since I don't have a fixed IP (I do have dyndns set up, but that doesn't help since Google won't take a domain name, only numerics).
So I tried OAuth2, which is overkill for what I need (I'm not accessing any non-public user data). But I couldn't find even one online example of using OAuth2 for Freebase. I tried adjusting other examples, but after bouncing around between appengine, Decorator, several obsolete Python libraries, and several other approaches, I got nowhere.
Can anyone either explain or point to a good example of how to do this (without spending 10x more time on authorization, than on the app I'm trying to authorize)? A working example with OAuth2, preferably without many layers of "simplifying" APIs; or a tip on how to get around the fixed-IP requirement for API key authorization, would be fantastic. Thanks!
Steve
I had to do this for Google Drive, but as far as I know this should work for any Google API.
When you create a new Client ID in the developer console, you should have the option to create a Service Account. This will create a public/private key pair, and you can use that to authenticate without any OAuth nonsense.
I stole this code out of our GDrive library, so it may be broken, and it is GDrive-specific, so you will need to replace anything that says "drive" with whatever Freebase wants.
But I hope it's enough to get you started.
# Sample code that connects to Google Drive
from apiclient.discovery import build
import httplib2
from oauth2client.client import SignedJwtAssertionCredentials, VerifyJwtTokenError

SERVICE_EMAIL = "you@gmail.com"
PRIVATE_KEY_PATH = "./private_key.p12"

# Load private key
key = open(PRIVATE_KEY_PATH, 'rb').read()

# Build the credentials object
credentials = SignedJwtAssertionCredentials(SERVICE_EMAIL, key,
                                            scope='https://www.googleapis.com/auth/drive')

try:
    http = httplib2.Http()
    http = credentials.authorize(http)
except VerifyJwtTokenError as e:
    print(u"Unable to authorize using our private key: VerifyJwtTokenError, {0}".format(e))
    raise

connection = build('drive', 'v2', http=http)
# You can now use connection to call anything you need for freebase - see
# their API docs for more info.
Working from @Rachel's sample code, with a bit of fiddling I got to this, which works and illustrates the topic, search, and query features.
You must import the urllib and json libraries, plus the client code from https://code.google.com/p/google-api-python-client/downloads/list
You must enable billing from 'Settings' for the specific project.
The mqlread() interface for Python is broken as of April 2014.
The documented 'freebase.readonly' scope doesn't work.
import json
import sys
import urllib

import httplib2
from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials, VerifyJwtTokenError

# (Argument parsing is omitted here; `args` holds the command-line options.)

# Set up needed constants
#
SERVICE_EMAIL = args.serviceEmail
PRIVATE_KEY_PATH = args.privateKeyFile
topicID = args.topicID
query = args.query

search_url = 'https://www.googleapis.com/freebase/v1/search'
topic_url = 'https://www.googleapis.com/freebase/v1/topic'
mql_url = 'https://www.googleapis.com/freebase/v1/mqlread'

key = open(PRIVATE_KEY_PATH, 'rb').read()
credentials = SignedJwtAssertionCredentials(SERVICE_EMAIL, key,
                                            scope='https://www.googleapis.com/auth/freebase')

try:
    http = httplib2.Http()
    http = credentials.authorize(http)
except VerifyJwtTokenError as e:
    print(u"Unable to authorize via private key: VerifyJwtTokenError, {0}".format(e))
    raise

connection = build('freebase', 'v1', http=http)

# Search for a topic by Freebase topic ID
# https://developers.google.com/freebase/v1/topic-overview
#
params = {'filter': 'suggest'}
url = topic_url + topicID + '?' + urllib.urlencode(params)
if args.verbose: print("URL: " + url)
resp = urllib.urlopen(url).read()
if args.verbose: print("Response: " + resp)
respJ = json.loads(resp)

print("Topic property(s) for '%s': " % topicID)
for property in respJ['property']:
    print('  ' + property + ':')
    for value in respJ['property'][property]['values']:
        print('    - ' + value['text'])
print("\n")

# Do a regular search
# https://developers.google.com/freebase/v1/search-overview
#
params = {'query': query}
url = search_url + '?' + urllib.urlencode(params)
if args.verbose: print("URL: " + url)
resp = urllib.urlopen(url).read()
if args.verbose: print("Response: " + resp)
respJ = json.loads(resp)

print("Search result for '%s': " % query)
theKeys = {}
for res in respJ['result']:
    print("%-40s %-15s %10.5f" %
          (res['name'], res['mid'], res['score']))

    # Run a query on the retrieved ID, to get its types:
    params = '{ "id": "%s", "type": []}' % (res['mid'])
    url = mql_url + '?query=' + params
    resp = urllib.urlopen(url).read()
    respJ2 = json.loads(resp)
    print("    Type(s): " + `respJ2['result']['type']`)

    otherKeys = []
    for k in res:
        if k not in ['name', 'mid', 'score']:
            otherKeys.append(k)
    if len(otherKeys):
        print("    Other keys: " + ", ".join(otherKeys))

sys.exit(0)
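The URL-building pattern used throughout the script, shown in isolation: parameters are encoded with urlencode and appended to the endpoint. The endpoint and parameters below are illustrative only (and note that the Freebase API itself was later shut down):

```python
# urlencode lives in different modules on Python 2 and 3.
try:
    from urllib import urlencode           # Python 2
except ImportError:
    from urllib.parse import urlencode     # Python 3

search_url = 'https://www.googleapis.com/freebase/v1/search'
params = {'query': 'Bob Dylan', 'limit': 3}

# Spaces become '+', and key=value pairs are joined with '&'.
url = search_url + '?' + urlencode(params)
print(url)
```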

Getting started with Google Analytics Reporting API for Python

I am just starting with the Google Analytics Reporting API and used the Hello API tutorial to get started. (https://developers.google.com/analytics/solutions/articles/hello-analytics-api)
Unfortunately, I am stuck before I even start. I read it (twice), created the project, updated the client_secrets.json file... but when I run the main, it crashes.
  File "C:\Python27\New Libraries Downloaded\analytics-v3-python-cmd-line\hello_analytics_api_v3.py", line 173, in <module>
    main(sys.argv)
  File "C:\Python27\New Libraries Downloaded\analytics-v3-python-cmd-line\hello_analytics_api_v3.py", line 56, in main
    service, flags = sample_tools.init(argv, 'analytics', 'v3', __doc__, __file__, scope='https://www.googleapis.com/auth/analytics.readonly')
NameError: global name '__file__' is not defined
I'm new (really really new) to this, so any help (and a more detailed tutorial) would be much appreciated.
Thanks !
EDIT: I haven't changed anything from the original code in the tutorial. I'll worry about modifications after I get this running. Thanks!
CODE: hello_analytics_api_v3.py
import argparse
import sys

from apiclient.errors import HttpError
from apiclient import sample_tools
from oauth2client.client import AccessTokenRefreshError

def main(argv):
    # Authenticate and construct service.
    service, flags = sample_tools.init(
        argv, 'analytics', 'v3', __doc__, __file__,
        scope='https://www.googleapis.com/auth/analytics.readonly')

    # Try to make a request to the API. Print the results or handle errors.
    try:
        first_profile_id = get_first_profile_id(service)
        if not first_profile_id:
            print 'Could not find a valid profile for this user.'
        else:
            results = get_top_keywords(service, first_profile_id)
            print_results(results)
    except TypeError, error:
        # Handle errors in constructing a query.
        print ('There was an error in constructing your query : %s' % error)
    except HttpError, error:
        # Handle API errors.
        print ('Arg, there was an API error : %s : %s' %
               (error.resp.status, error._get_reason()))
    except AccessTokenRefreshError:
        # Handle Auth errors.
        print ('The credentials have been revoked or expired, please re-run '
               'the application to re-authorize')

def get_first_profile_id(service):
    """Traverses Management API to return the first profile id.

    This first queries the Accounts collection to get the first account ID.
    This ID is used to query the Webproperties collection to retrieve the first
    webproperty ID. And both account and webproperty IDs are used to query the
    Profile collection to get the first profile id.

    Args:
      service: The service object built by the Google API Python client library.

    Returns:
      A string with the first profile ID. None if a user does not have any
      accounts, webproperties, or profiles.
    """
    accounts = service.management().accounts().list().execute()
    if accounts.get('items'):
        firstAccountId = accounts.get('items')[0].get('id')
        webproperties = service.management().webproperties().list(
            accountId=firstAccountId).execute()
        if webproperties.get('items'):
            firstWebpropertyId = webproperties.get('items')[0].get('id')
            profiles = service.management().profiles().list(
                accountId=firstAccountId,
                webPropertyId=firstWebpropertyId).execute()
            if profiles.get('items'):
                return profiles.get('items')[0].get('id')
    return None

def get_top_keywords(service, profile_id):
    """Executes and returns data from the Core Reporting API.

    This queries the API for the top 25 organic search terms by visits.

    Args:
      service: The service object built by the Google API Python client library.
      profile_id: String The profile ID from which to retrieve analytics data.

    Returns:
      The response returned from the Core Reporting API.
    """
    return service.data().ga().get(
        ids='ga:' + profile_id,
        start_date='2012-01-01',
        end_date='2012-01-15',
        metrics='ga:visits',
        dimensions='ga:source,ga:keyword',
        sort='-ga:visits',
        filters='ga:medium==organic',
        start_index='1',
        max_results='25').execute()

def print_results(results):
    """Prints out the results.

    This prints out the profile name, the column headers, and all the rows of
    data.

    Args:
      results: The response returned from the Core Reporting API.
    """
    print
    print 'Profile Name: %s' % results.get('profileInfo').get('profileName')
    print

    # Print header.
    output = []
    for header in results.get('columnHeaders'):
        output.append('%30s' % header.get('name'))
    print ''.join(output)

    # Print data table.
    if results.get('rows', []):
        for row in results.get('rows'):
            output = []
            for cell in row:
                output.append('%30s' % cell)
            print ''.join(output)
    else:
        print 'No Rows Found'

if __name__ == '__main__':
    main(sys.argv)
According to the error, the program doesn't recognize __file__. In IPython this error comes up (I'm not 100% sure why), but it shouldn't come up when running a file; in a file, the __file__ argument resolves to the script's full path and file name.
Try creating a file and running from there, or simply paste in the full path and file name instead.
Also be sure that the client secrets file is located in the same folder as your script!
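If you do need to run the tutorial code from IPython or another interactive session, one workaround (an untested sketch; the library only uses the path to locate files next to the script) is to substitute a fallback for __file__:

```python
import os
import sys

try:
    script_path = __file__
except NameError:
    # Interactive sessions (e.g. IPython) don't define __file__;
    # fall back to the path of the invoked script.
    script_path = os.path.abspath(sys.argv[0])

print(script_path)
```

The resulting script_path could then be passed to sample_tools.init in place of __file__.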
