I am just starting with the Google Analytics Reporting API and used the Hello API tutorial to get started. (https://developers.google.com/analytics/solutions/articles/hello-analytics-api)
Unfortunately, I am stuck before I even start. I read it (twice), created the project, updated the client_secrets.json file... but when I run the main script, it crashes.
File "C:\Python27\New Libraries Downloaded\analytics-v3-python-cmd-line\hello_analytics_api_v3.py", line 173, in <module>
main(sys.argv)
File "C:\Python27\New Libraries Downloaded\analytics-v3-python-cmd-line\hello_analytics_api_v3.py", line 56, in main
service, flags = sample_tools.init(argv, 'analytics', 'v3', __doc__, __file__, scope='https://www.googleapis.com/auth/analytics.readonly')
NameError: global name '__file__' is not defined
I'm new (really really new) to this, so any help (and a more detailed tutorial) would be much appreciated.
Thanks !
EDIT: I haven't changed anything from the original code in the tutorial. I'll worry about modifications after I get this running. Thanks!
CODE: hello_analytics_api_v3.py
import argparse
import sys
from apiclient.errors import HttpError
from apiclient import sample_tools
from oauth2client.client import AccessTokenRefreshError
def main(argv):
    # Authenticate and construct service.
    service, flags = sample_tools.init(
        argv, 'analytics', 'v3', __doc__, __file__,
        scope='https://www.googleapis.com/auth/analytics.readonly')

    # Try to make a request to the API. Print the results or handle errors.
    try:
        first_profile_id = get_first_profile_id(service)
        if not first_profile_id:
            print 'Could not find a valid profile for this user.'
        else:
            results = get_top_keywords(service, first_profile_id)
            print_results(results)

    except TypeError, error:
        # Handle errors in constructing a query.
        print ('There was an error in constructing your query : %s' % error)

    except HttpError, error:
        # Handle API errors.
        print ('Arg, there was an API error : %s : %s' %
               (error.resp.status, error._get_reason()))

    except AccessTokenRefreshError:
        # Handle Auth errors.
        print ('The credentials have been revoked or expired, please re-run '
               'the application to re-authorize')
def get_first_profile_id(service):
    """Traverses Management API to return the first profile id.

    This first queries the Accounts collection to get the first account ID.
    This ID is used to query the Webproperties collection to retrieve the first
    webproperty ID. And both account and webproperty IDs are used to query the
    Profile collection to get the first profile id.

    Args:
        service: The service object built by the Google API Python client library.

    Returns:
        A string with the first profile ID. None if a user does not have any
        accounts, webproperties, or profiles.
    """
    accounts = service.management().accounts().list().execute()

    if accounts.get('items'):
        firstAccountId = accounts.get('items')[0].get('id')
        webproperties = service.management().webproperties().list(
            accountId=firstAccountId).execute()

        if webproperties.get('items'):
            firstWebpropertyId = webproperties.get('items')[0].get('id')
            profiles = service.management().profiles().list(
                accountId=firstAccountId,
                webPropertyId=firstWebpropertyId).execute()

            if profiles.get('items'):
                return profiles.get('items')[0].get('id')

    return None
def get_top_keywords(service, profile_id):
    """Executes and returns data from the Core Reporting API.

    This queries the API for the top 25 organic search terms by visits.

    Args:
        service: The service object built by the Google API Python client library.
        profile_id: String The profile ID from which to retrieve analytics data.

    Returns:
        The response returned from the Core Reporting API.
    """
    return service.data().ga().get(
        ids='ga:' + profile_id,
        start_date='2012-01-01',
        end_date='2012-01-15',
        metrics='ga:visits',
        dimensions='ga:source,ga:keyword',
        sort='-ga:visits',
        filters='ga:medium==organic',
        start_index='1',
        max_results='25').execute()
def print_results(results):
    """Prints out the results.

    This prints out the profile name, the column headers, and all the rows of
    data.

    Args:
        results: The response returned from the Core Reporting API.
    """
    print
    print 'Profile Name: %s' % results.get('profileInfo').get('profileName')
    print

    # Print header.
    output = []
    for header in results.get('columnHeaders'):
        output.append('%30s' % header.get('name'))
    print ''.join(output)

    # Print data table.
    if results.get('rows', []):
        for row in results.get('rows'):
            output = []
            for cell in row:
                output.append('%30s' % cell)
            print ''.join(output)
    else:
        print 'No Rows Found'


if __name__ == '__main__':
    main(sys.argv)
According to the error, the program doesn't recognize __file__. In IPython this error comes up (I'm not 100% sure why), but it shouldn't come up when running the code as a script file: there, __file__ resolves to the script's full path and file name.
Try saving the code to a .py file and running it from there, or simply paste in the full path and file name in place of __file__.
Also be sure that client_secrets.json is located in the same folder as your script!
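If you do want to keep running it from an interactive session where __file__ is undefined, one workaround (just a sketch; the path below is a placeholder you would replace with your own) is to pass an explicit path to sample_tools.init instead of __file__:

import sys

from apiclient import sample_tools

# Placeholder path: point this at the real hello_analytics_api_v3.py,
# with client_secrets.json sitting in the same folder.
SCRIPT_PATH = r'C:\path\to\hello_analytics_api_v3.py'

service, flags = sample_tools.init(
    sys.argv, 'analytics', 'v3', __doc__, SCRIPT_PATH,
    scope='https://www.googleapis.com/auth/analytics.readonly')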
As stated in the question, I would like to check the status of a list of Twitter user ids. I had about 20k Twitter users. I was able to get the timelines of about half of them; the other half are probably either suspended, deactivated, or have 0 tweets. I found this script online that supposedly allows checking the status of a Twitter user. Here is the script (https://github.com/dbrgn/Twitter-User-Checker/blob/master/checkuser.py):
#!/usr/bin/env python2
# Twitter User Checker
# Author: Danilo Bargen
# License: GPLv3
import sys
import tweepy
import urllib2
try:
    import json
except ImportError:
    import simplejson as json

from datetime import datetime

auth = tweepy.AppAuthHandler("xxx", "xxxx")
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)

if (not api):
    print ("Can't Authenticate")
    sys.exit(-1)

# Continue with rest of code
try:
    user = sys.argv[1]
except IndexError:
    print 'Usage: checkuser.py [username]'
    sys.exit(-1)

url = 'https://api.twitter.com/1.1/users/show.json?id=%s' % user

try:
    request = urllib2.urlopen(url)
    status = request.code
    data = request.read()
except urllib2.HTTPError, e:
    status = e.code
    data = e.read()

data = json.loads(data)
print data

if status == 403:
    print "helloooooooo"
    # if 'suspended' in data['error']:
    #     print 'User %s has been suspended' % user
    # else:
    #     print 'Unknown response'
elif status == 404:
    print 'User %s not found' % user
elif status == 200:
    days_active = (datetime.now() - datetime.strptime(data['created_at'],
                   '%a %b %d %H:%M:%S +0000 %Y')).days
    print 'User %s is active and has posted %s tweets in %s days' % \
        (user, data['statuses_count'], days_active)
else:
    print 'Unknown response'
I get the following error:
File "twitter_status_checker.py", line 16, in <module>
auth = tweepy.AppAuthHandler("xxx", "xxxx")
File "/Users/aloush/anaconda/lib/python2.7/site-packages/tweepy/auth.py", line 170, in __init__
'but got %s instead' % data.get('token_type'))
tweepy.error.TweepError: Expected token_type to equal "bearer", but got None instead
Could anyone help me fix the error, as well as adapt the script to check a list of users rather than one user per run?
Here is a list of the HTTP Status Codes I would like to check for: https://dev.twitter.com/overview/api/response-codes
Thank you.
It seems that you failed to authenticate with Twitter. For the latest version (3.5), tweepy uses OAuthHandler to authenticate; please check how to use Tweepy. Also, the linked script you used checks accounts one by one, which can be very slow.
To check the status of a large set of Twitter accounts with Tweepy, particularly if you want to know the reason why an account is inactive (e.g., not found, suspended), you need to be aware of the following:
Which API should be used?
Twitter provides two related APIs: one is users/show and the other is users/lookup. The former returns the profile of one specified user, while the latter returns profiles for a block of up to 100 users. The corresponding tweepy APIs are API.get_user and API.lookup_users (I cannot find the latter in the documentation, but it does exist in the code). You should definitely use the second one. However, when some of the users are inactive, the lookup_users API returns only those that are active. This means that you still have to call the get_user API to get the detailed reason for each inactive account.
How to determine the status of a user?
Of course, you should pay attention to the response code provided by Twitter. However, when using tweepy, instead of the HTTP error codes you should pay more attention to the error message. Here are some common cases:
If the profile is successfully fetched, it is an active user;
Otherwise, we could check the error code:
50 User not found.
63 User has been suspended.
... maybe more codes about the user account
For tweepy, when fetching the profile fails, a TweepError is raised, and TweepError.message[0] is the error message from the Twitter API.
Okay, here is the logic to process:
(1) Divide the large block of users into pieces of size 100;
(2) for each of these pieces, do (3) and (4);
(3) call lookup_users; the returned users are treated as the active users and the remaining users are treated as inactive;
(4) call get_user for each of the inactive users to get the detailed reason.
Here is a sample code for you:
import logging

import tweepy

logger = logging.getLogger(__name__)


def to_bulk(a, size=100):
    """Transform a list into list of list. Each element of the new list is a
    list with size=100 (except the last one).
    """
    r = []
    qt, rm = divmod(len(a), size)
    i = -1
    for i in range(qt):
        r.append(a[i * size:(i + 1) * size])
    if rm != 0:
        r.append(a[(i + 1) * size:])
    return r
def fast_check(api, uids):
    """ Fast check the status of specified accounts.

    Parameters
    ---------------
    api: tweepy API instance
    uids: account ids

    Returns
    ----------
    Tuple (active_uids, inactive_uids).
    `active_uids` is a list of active users and
    `inactive_uids` is a list of inactive uids,
    either suspended or deactivated.
    """
    try:
        users = api.lookup_users(user_ids=uids,
                                 include_entities=False)
        active_uids = [u.id for u in users]
        inactive_uids = list(set(uids) - set(active_uids))
        return active_uids, inactive_uids
    except tweepy.TweepError as e:
        if e[0]['code'] == 50 or e[0]['code'] == 63:
            logger.error('None of the users is valid: %s', e)
            # None of the looked-up accounts is active, so every uid in
            # this block is inactive.
            return [], uids
        else:
            # Unexpected error
            raise
def check_inactive(api, uids):
    """ Check inactive accounts, one by one.

    Parameters
    ---------------
    uids : list
        A list of inactive account ids

    Returns
    ----------
    Yield tuple (uid, reason). Where `uid` is the account id,
    and `reason` is a string.
    """
    for uid in uids:
        try:
            u = api.get_user(user_id=uid)
            logger.warning('This user %r should be inactive', uid)
            yield (u, dict(code=-1, message='OK'))
        except tweepy.TweepError as e:
            yield (uid, e[0][0])
def check_one_block(api, uids):
    """Check the status of users for one block (<100). """
    active_uids, inactive_uids = fast_check(api, uids)
    inactive_users_status = list(check_inactive(api, inactive_uids))
    return active_uids, inactive_users_status


def check_status(api, large_uids):
    """Check the status of users for any size of users. """
    active_uids = []
    inactive_users_status = []
    for uids in to_bulk(large_uids, size=100):
        au, iu = check_one_block(api, uids)
        active_uids += au
        inactive_users_status += iu
    return active_uids, inactive_users_status
def main(twitter_credential, large_uids):
    """ The main function to call check_status. """
    # First prepare the tweepy API
    auth = tweepy.OAuthHandler(twitter_credential['consumer_key'],
                               twitter_credential['consumer_secret'])
    auth.set_access_token(twitter_credential['access_token'],
                          twitter_credential['access_token_secret'])
    api = tweepy.API(auth, wait_on_rate_limit=True)

    # Then, call check_status
    active_uids, inactive_user_status = check_status(api, large_uids)
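To call it, you might do something like this (the credential values and the user-id list are placeholders you would fill in yourself):

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)

    # Placeholder credentials -- replace with your own app's keys and tokens.
    twitter_credential = dict(
        consumer_key='xxx',
        consumer_secret='xxx',
        access_token='xxx',
        access_token_secret='xxx')

    # Placeholder ids -- replace with your ~20k user ids.
    large_uids = [783214, 6253282]

    main(twitter_credential, large_uids)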
Because of the lack of data, I never tested the code. There may be bugs; you should take care of them.
Hope this is helpful.
This question already has answers here:
Google Analytics API - retrieve Custom Segment Id by its name
(2 answers)
Closed 4 years ago.
I am using a Python script to get Segment info from Google Analytics. All of the built-in Segments that come with Google Analytics print fine, but the Custom Segments I have created are not showing up.
Here are the relevant portions of the Script:
def get_service(api_name, api_version, scope, key_file_location,
                service_account_email):
    """Get a service that communicates to a Google API.

    Args:
        api_name: The name of the api to connect to.
        api_version: The api version to connect to.
        scope: A list auth scopes to authorize for the application.
        key_file_location: The path to a valid service account p12 key file.
        service_account_email: The service account email address.

    Returns:
        A service that is connected to the specified API.
    """
    credentials = ServiceAccountCredentials.from_p12_keyfile(
        service_account_email, key_file_location, scopes=scope)

    http = credentials.authorize(httplib2.Http())

    # Build the service object.
    service = build(api_name, api_version, http=http)

    return service


def get_segments(service):
    try:
        segments = service.management().segments().list().execute()

    except TypeError, error:
        # Handle errors in constructing a query.
        print 'There was an error in constructing your query : %s' % error

    except HttpError, error:
        # Handle API errors.
        print ('There was an API error : %s : %s' %
               (error.resp.status, error.resp.reason))

    # Example #2:
    # The results of the list method are stored in the segments object.
    # The following code shows how to iterate through them.
    for segment in segments.get('items', []):
        print 'Segment Id = %s' % segment.get('id')
        print 'Segment kind = %s' % segment.get('kind')
        print 'Segment segmentId = %s' % segment.get('segmentId')
        print 'Segment Name = %s' % segment.get('name')
        print 'Segment Definition = %s' % segment.get('definition')
        if segment.get('created'):
            print 'Created = %s' % segment.get('created')
            print 'Updated = %s' % segment.get('updated')
        print


def main():
    # Define the auth scopes to request.
    scope = ['https://www.googleapis.com/auth/analytics.readonly']

    # Use the developer console and replace the values with your
    # service account email and relative location of your key file.
    service_account_email = '************'
    key_file_location = '*********'

    # Authenticate and construct service.
    service = get_service('analytics', 'v3', scope, key_file_location,
                          service_account_email)

    get_segments(service)


if __name__ == '__main__':
    main()
You need to have Collaborate permission enabled for the custom segments you have created
Manage Segments#Set Segment availability
Visit this link and go to 'Set Segment availability' section in there.
Apply 'Collaborate permission' option as shown in the link.
After this, your segments will be pulled from the API code you posted ;)
Please note: the Analytics Core Reporting API doesn't have access to custom segments; they can only be accessed through the Analytics Management API.
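Once the permission is set, you can sanity-check that your own segments come back by filtering on the segment type field (custom segments are reported with type 'CUSTOM', built-in ones with 'BUILT_IN'). This is just an illustrative addition to the get_segments() function from the question, not part of the original script:

segments = service.management().segments().list().execute()

# Keep only the segments you created yourself.
custom_segments = [s for s in segments.get('items', [])
                   if s.get('type') == 'CUSTOM']
for segment in custom_segments:
    print 'Custom segment: %s (%s)' % (segment.get('name'), segment.get('segmentId'))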
I mentioned trying to get Google's Java sample code working in one of my previous questions on StackOverflow, but abandoned that after realizing how deprecated the samples were. Since I dabbled in Python a bit about 4 years ago, I decided that I would take a look at the Google Blogger API for Python.
While most of the API calls make sense, I cannot seem to get this sample to run correctly!
Here is the sample I am trying to run:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright 2014 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Simple command-line sample for Blogger.
Command-line application that retrieves the users blogs and posts.
Usage:
$ python blogger.py
You can also get help on all the command-line flags the program understands
by running:
$ python blogger.py --help
To get detailed log output run:
$ python blogger.py --logging_level=DEBUG
"""
from __future__ import print_function
__author__ = 'jcgregorio@google.com (Joe Gregorio)'
import sys
from oauth2client import client
from googleapiclient import sample_tools
def main(argv):
    # Authenticate and construct service.
    service, flags = sample_tools.init(
        argv, 'blogger', 'v3', __doc__, __file__,
        scope='https://www.googleapis.com/auth/blogger')

    try:
        users = service.users()

        # Retrieve this user's profile information
        thisuser = users.get(userId='self').execute()
        print('This user\'s display name is: %s' % thisuser['displayName'])

        blogs = service.blogs()

        # Retrieve the list of Blogs this user has write privileges on
        thisusersblogs = blogs.listByUser(userId='self').execute()
        for blog in thisusersblogs['items']:
            print('The blog named \'%s\' is at: %s' % (blog['name'], blog['url']))

        posts = service.posts()

        # List the posts for each blog this user has
        for blog in thisusersblogs['items']:
            print('The posts for %s:' % blog['name'])
            request = posts.list(blogId=blog['id'])
            while request != None:
                posts_doc = request.execute()
                if 'items' in posts_doc and not (posts_doc['items'] is None):
                    for post in posts_doc['items']:
                        print(' %s (%s)' % (post['title'], post['url']))
                request = posts.list_next(request, posts_doc)

    except client.AccessTokenRefreshError:
        print ('The credentials have been revoked or expired, please re-run'
               'the application to re-authorize')


if __name__ == '__main__':
    main(sys.argv)
I've run this sample in both PyCharm and the terminal, and while the code compiles and runs (which is more than I could say for the Java sample!) I cannot seem to follow where the sample is getting its information.
The sample requires a client_secrets.json file, which I populated with my client ID and client secret obtained from the Google API console. However, I don't see how the sample is supposed to get data on the current Blogger user, as there doesn't seem to be an input to select a user, enter an email address, or anything of the like. The service is apparently supposed to obtain the current user, but it doesn't actually appear to do so.
client_secrets.json:
{
  "web": {
    "client_id": "[[INSERT CLIENT ID HERE]]",
    "client_secret": "[[INSERT CLIENT SECRET HERE]]",
    "redirect_uris": [],
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://accounts.google.com/o/oauth2/token"
  }
}
In fact, upon running this code, I receive the following error:
/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/bin/python2.7 /google-api-python-client-master/samples/blogger/blogger.py
This user's display name is: Unknown
Traceback (most recent call last):
File "/google-api-python-client-master/samples/blogger/blogger.py", line 83, in <module>
main(sys.argv)
File "/google-api-python-client-master/samples/blogger/blogger.py", line 62, in main
for blog in thisusersblogs['items']:
KeyError: 'items'
Process finished with exit code 1
If someone could help me understand what I'm missing in my understanding of how this sample works, I would definitely appreciate it. My python is definitely rusty, but I'm hoping playing around with this sample code will help me in using it again.
The sample code is self-explanatory:
#libraries used to connect with googles api
from oauth2client import client
from googleapiclient import sample_tools
def main(argv):
    # Authenticate and construct service.
    service, flags = sample_tools.init(
        argv, 'blogger', 'v3', __doc__, __file__,
        scope='https://www.googleapis.com/auth/blogger')
The code above uses the OAuth2 flow; you are redirected and need to authenticate (at least the first time you run this).
    try:
        users = service.users()  # googleapiclient.discovery.Resource object

        # Retrieve this user's profile information
        thisuser = users.get(userId='self').execute()
        print('This user\'s display name is: %s' % thisuser['displayName'])

        blogs = service.blogs()  # googleapiclient.discovery.Resource object

        # Retrieve the list of Blogs this user has write privileges on
        thisusersblogs = blogs.listByUser(userId='self').execute()  # retrieves all blogs from the user (you = self)
        for blog in thisusersblogs['items']:  # for loop that iterates over a JSON (dictionary) to get key value 'items'
            print('The blog named \'%s\' is at: %s' % (blog['name'], blog['url']))

        posts = service.posts()  # googleapiclient.discovery.Resource object for posts
        # List the posts for each blog this user has
        for blog in thisusersblogs['items']:
            print('The posts for %s:' % blog['name'])
            request = posts.list(blogId=blog['id'])  # uses the googleapiclient.discovery.Resource object for posts to get the blog by id
            while request != None:
                posts_doc = request.execute()
                if 'items' in posts_doc and not (posts_doc['items'] is None):
                    for post in posts_doc['items']:
                        print(' %s (%s)' % (post['title'], post['url']))
                request = posts.list_next(request, posts_doc)

    except client.AccessTokenRefreshError:
        print ('The credentials have been revoked or expired, please re-run'
               'the application to re-authorize')


if __name__ == '__main__':
    main(sys.argv)
Running this returns all your posts as follows:
This user's display name is: "something"
The blog named 'myTest' is at: http://BLOGNAME.blogspot.com/
The posts for myTest:
POST NAME (http://BLOGNAME.blogspot.com/2016/06/postname.html)
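As for the KeyError: 'items' from the question: the response of blogs.listByUser(userId='self').execute() only contains an 'items' key when the authorized account actually owns at least one blog, so the error usually means you authorized with a Google account that has no blogs. A small defensive tweak (my own suggestion, not part of the original sample) avoids the crash:

thisusersblogs = blogs.listByUser(userId='self').execute()

# 'items' is missing when the authenticated account has no blogs,
# e.g. when you authorized with the wrong Google account.
for blog in thisusersblogs.get('items', []):
    print('The blog named \'%s\' is at: %s' % (blog['name'], blog['url']))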
Maybe you want to get started with basic requests, rather than code samples, to get familiar with the API?
https://developers.google.com/blogger/docs/3.0/using#RetrievingABlog
start with the basics, Ex:
Retrieving a blog
You can retrieve information for a particular blog by sending an HTTP
GET request to the blog's URI. The URI for a blog has the following
format:
https://www.googleapis.com/blogger/v3/blogs/blogId
Depending on the Python version you are using, you can import different libraries to do your requests. For example, following the Requests quickstart (http://docs.python-requests.org/en/master/user/quickstart/#make-a-request):
import requests
r = requests.get('https://www.googleapis.com/blogger/v3/blogs/blogId')
print r.text
This should return JSON like:
{
  "kind": "blogger#blog",
  "id": "2399953",
  "name": "Blogger Buzz",
  "description": "The Official Buzz from Blogger at Google",
  "published": "2007-04-23T22:17:29.261Z",
  "updated": "2011-08-02T06:01:15.941Z",
  "url": "http://buzz.blogger.com/",
  "selfLink": "https://www.googleapis.com/blogger/v3/blogs/2399953",
  "posts": {
    "totalItems": 494,
    "selfLink": "https://www.googleapis.com/blogger/v3/blogs/2399953/posts"
  },
  "pages": {
    "totalItems": 2,
    "selfLink": "https://www.googleapis.com/blogger/v3/blogs/2399953/pages"
  },
  "locale": {
    "language": "en",
    "country": "",
    "variant": ""
  }
}
you probably want to check https://developers.google.com/blogger/docs/3.0/reference/#Blogs
Adding a basic working example for working with Google Drive with the Python client library.
Recommendation for a Python client library
I'll add another example for using Google Drive with a Python client library.
First of all, I would highly suggest using the PyDrive library, which makes the OAuth process much easier (install with pip: pip install PyDrive).
More details here.
How to run basic Python code that interacts with Google Drive?
1 ) Go to APIs Console and make your own project.
2 ) Search for ‘Google Drive API’, select the entry, and click ‘Enable’.
3 ) Select ‘Credentials’ from the left menu, click ‘Create Credentials’, select ‘OAuth client ID’.
4 ) Now, the product name and consent screen need to be set -> click ‘Configure consent screen’ and follow the instructions. Once finished:
a. Select ‘Application type’ to be Web application.
b. Enter an appropriate name.
c. Input http://localhost:8080 for Authorized JavaScript origins.
d. Input http://localhost:8080/ for Authorized redirect URIs.
e. Click ‘Create’.
5 ) Click ‘Download JSON’ on the right side of Client ID to download client_secret_<long ID>.json.
The downloaded file has all authentication information of your application.
Rename the file to “client_secrets.json” and place it in your working directory.
Basic code example
Add your code and run it - for example, basic code that returns all files in the root directory:
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
gauth = GoogleAuth()
gauth.LocalWebserverAuth()
drive = GoogleDrive(gauth)
# Auto-iterate through all files that matches this query
file_list = drive.ListFile({'q': "'root' in parents and trashed=false"}).GetList()
for file1 in file_list:
    print('title: %s, id: %s' % (file1['title'], file1['id']))
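Uploading works the same way. For example (a minimal sketch based on the PyDrive quickstart; the file name and contents are just placeholders), to create a small text file in your Drive:

# Create a file in Drive and upload some text content.
file1 = drive.CreateFile({'title': 'Hello.txt'})
file1.SetContentString('Hello World!')
file1.Upload()
print('Created file %s with id %s' % (file1['title'], file1['id']))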
If you receive a 403 authorization error in the authentication process
You will have to add your Gmail account at least as a test user in order to proceed with the authentication flow.
I am trying to use the YouTube API v3 search. I have a Flask web application and am trying to output the results into rows on the webpage.
I started with the Google Developers pages (which work perfectly in the terminal), then found this great-sounding example which would output thumbnails and hyperlinks from the results. However, it ran on Google App Engine, and I am trying to create a Flask application.
So with my very limited knowledge I have tried to merge them to fit my needs.
#!/usr/bin/python
from apiclient.discovery import build
from apiclient.errors import HttpError
from oauth2client.tools import argparser
import sys
import os
import urllib
# Set API_KEY to the "API key" value from the "Access" tab of the
# Google APIs Console http://code.google.com/apis/console#access
# Please ensure that you have enabled the YouTube Data API and Freebase API
# for your project.
API_KEY = "REPLACE ME" #Yes I did replace this with my API KEY
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
QUERY_TERM = "dog"
def search_by_keyword():
    youtube = build(
        YOUTUBE_API_SERVICE_NAME,
        YOUTUBE_API_VERSION,
        developerKey=API_KEY
    )

    search_response = youtube.search().list(
        q=QUERY_TERM,
        part="id,snippet",
        maxResults=25
    ).execute()

    videos = []
    for search_result in search_response.get("items", []):
        if search_result["id"]["kind"] == "youtube#video":
            videos.append("%s (%s)" % (search_result["snippet"]["title"],
                                       search_result["id"]["videoId"]))

print "Videos:\n", "\n".join(videos), "\n"

if __name__ == "__main__":
    try:
        youtube_search()
    except HttpError, e:
        print "An HTTP error %d occurred:\n%s" % (e.resp.status, e.content)
What I actually want it to do is return results to my webpage rather than just print them, but I thought it was best to take one step at a time. If I can make it work in the terminal I can then sort it out on a webpage, but it's not as simple as I thought.
Currently I receive this error:
Traceback (most recent call last):
File "search.py", line 38, in <module>
print "Videos:\n", "\n".join(videos), "\n"
NameError: name 'videos' is not defined
This is my first main hurdle, as I was expecting it to work as it did in both examples I took it from.
My second problem is how to convert the results (assuming I can fix the above) into a table on the webpage.
I plan to have
@app.route('/search')
include the above code and then:
return render_template('search.html')
But I'm unsure how to pass the results back to it. Do I still need the print for this, or is there another term I should be using? I want to be able to use the videoId returned from each result to sort thumbnails and links.
Your list of videos is a local variable and doesn't exist outside your function search_by_keyword(). To access the videos variable, you could return it, like so:
def search_by_keyword():
    youtube = build(
        YOUTUBE_API_SERVICE_NAME,
        YOUTUBE_API_VERSION,
        developerKey=API_KEY
    )

    search_response = youtube.search().list(
        q=QUERY_TERM,
        part="id,snippet",
        maxResults=25
    ).execute()

    videos = []
    for search_result in search_response.get("items", []):
        if search_result["id"]["kind"] == "youtube#video":
            videos.append("%s (%s)" % (search_result["snippet"]["title"],
                                       search_result["id"]["videoId"]))
    return videos
In your main function you could then use something like:
videos = search_by_keyword()
That way you can print the videos to the console or send them to a template with Flask.
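For the Flask side, a minimal sketch could look like the following (the route and template names are just examples, and it assumes search_by_keyword() with its API setup from above lives in the same module):

from flask import Flask, render_template

app = Flask(__name__)

@app.route('/search')
def search():
    # Run the YouTube query and hand the result list to the template.
    videos = search_by_keyword()
    return render_template('search.html', videos=videos)

In search.html you could then loop over the list with Jinja2, e.g. {% for video in videos %}<li>{{ video }}</li>{% endfor %}.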
This code is available online to build a map of your connections on LinkedIn.
It uses the LinkedIn API.
I'm able to connect fine and everything runs okay until the last part of the script, which actually writes the data to a CSV.
Whenever I run the code
import oauth2 as oauth
import urlparse
import simplejson
import codecs
CONSUMER_KEY = "xxx"
CONSUMER_SECRET = "xxx"
OAUTH_TOKEN = "xxx"
OAUTH_TOKEN_SECRET = "xxx"
OUTPUT = "linked.csv"
def linkedin_connections():
    # Use your credentials to build the oauth client
    consumer = oauth.Consumer(key=CONSUMER_KEY, secret=CONSUMER_SECRET)
    token = oauth.Token(key=OAUTH_TOKEN, secret=OAUTH_TOKEN_SECRET)
    client = oauth.Client(consumer, token)

    # Fetch first degree connections
    resp, content = client.request('http://api.linkedin.com/v1/people/~/connections?format=json')
    results = simplejson.loads(content)

    # File that will store the results
    output = codecs.open(OUTPUT, 'w', 'utf-8')

    # Loop thru the 1st degree connection and see how they connect to each other
    for result in results["values"]:
        con = "%s %s" % (result["firstName"].replace(",", " "), result["lastName"].replace(",", " "))
        print >>output, "%s,%s" % ("John Henry", con)

        # This is the trick, use the search API to get related connections
        u = "https://api.linkedin.com/v1/people/%s:(relation-to-viewer:(related-connections))?format=json" % result["id"]
        resp, content = client.request(u)
        rels = simplejson.loads(content)

        try:
            for rel in rels['relationToViewer']['relatedConnections']['values']:
                sec = "%s %s" % (rel["firstName"].replace(",", " "), rel["lastName"].replace(",", " "))
                print >>output, "%s,%s" % (con, sec)
        except:
            pass


if __name__ == '__main__':
    linkedin_connections()
When I run this I get an error message:
Traceback (most recent call last):
File "linkedin-2-query.py", line 51, in <module>
linkedin_connections()
File "linkedin-2-query.py", line 35, in linkedin_connections
for result in results["values"]:
KeyError: 'values'
Any suggestions or help would be greatly appreciated!
I encountered the same issue working through the post Visualizing your LinkedIn graph using Gephi – Part 1.
Python raises a KeyError whenever a dict() object is requested (using the format a = adict[key]) and the key is not in the dictionary. KeyError - Python Wiki
After searching a bit and adding some print statements, I realized that my OAuth session had expired, so the OAuth token in my linkedin-2-query.py script was no longer valid.
Since the OAuth token is invalid, the LinkedIn API does not return a dictionary with the key "values" like the script expects. Instead, the API returns the string 'N'. Python tries to find the dict key "values" in the string 'N', fails, and generates the KeyError: 'values'.
So a new, valid OAuth token & secret should get the API to return a dict containing connection data.
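As a side note (this guard is my own addition, not part of the original script), you can make that failure mode more obvious by checking the response inside linkedin_connections() before indexing it:

resp, content = client.request('http://api.linkedin.com/v1/people/~/connections?format=json')
results = simplejson.loads(content)

# With an expired or invalid token the API does not return the expected
# dictionary, so fail with a clear message instead of a bare KeyError.
if not isinstance(results, dict) or 'values' not in results:
    raise SystemExit('Unexpected API response (expired OAuth token?): %r' % results)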
I run the linkedin-1-oauth.py script again, and then visit the LinkedIn Application details page to find my new OAuth token. (The screenshot omits the values for my app. You should see alphanumeric values for each Key, Token, & Secret.)
...
I then update my linkedin-2-query.py script with the new OAuth User Token and OAuth User Secret
OAUTH_TOKEN = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" # your updated OAuth User Token
OAUTH_TOKEN_SECRET = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" # your updated OAuth User Secret
After updating the OAuth token & secret, I immediately run my linkedin-2-query.py script. Hooray, it runs without errors and retrieves my connection data from the API.