I mentioned trying to get the Google Java sample code working in one of my previous questions on Stack Overflow, but abandoned that after realizing how deprecated the samples were. Since I dabbled in Python a bit about 4 years ago, I decided to take a look at the Google Blogger API for Python.
While most of the API calls make sense, I cannot seem to get this sample to run correctly!
Here is the sample I am trying to run:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright 2014 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Simple command-line sample for Blogger.
Command-line application that retrieves the users blogs and posts.
Usage:
$ python blogger.py
You can also get help on all the command-line flags the program understands
by running:
$ python blogger.py --help
To get detailed log output run:
$ python blogger.py --logging_level=DEBUG
"""
from __future__ import print_function
__author__ = 'jcgregorio@google.com (Joe Gregorio)'
import sys
from oauth2client import client
from googleapiclient import sample_tools
def main(argv):
    # Authenticate and construct service.
    service, flags = sample_tools.init(
        argv, 'blogger', 'v3', __doc__, __file__,
        scope='https://www.googleapis.com/auth/blogger')

    try:
        users = service.users()

        # Retrieve this user's profile information
        thisuser = users.get(userId='self').execute()
        print('This user\'s display name is: %s' % thisuser['displayName'])

        blogs = service.blogs()

        # Retrieve the list of Blogs this user has write privileges on
        thisusersblogs = blogs.listByUser(userId='self').execute()
        for blog in thisusersblogs['items']:
            print('The blog named \'%s\' is at: %s' % (blog['name'], blog['url']))

        posts = service.posts()

        # List the posts for each blog this user has
        for blog in thisusersblogs['items']:
            print('The posts for %s:' % blog['name'])
            request = posts.list(blogId=blog['id'])
            while request != None:
                posts_doc = request.execute()
                if 'items' in posts_doc and not (posts_doc['items'] is None):
                    for post in posts_doc['items']:
                        print('  %s (%s)' % (post['title'], post['url']))
                request = posts.list_next(request, posts_doc)

    except client.AccessTokenRefreshError:
        print('The credentials have been revoked or expired, please re-run '
              'the application to re-authorize')


if __name__ == '__main__':
    main(sys.argv)
I've run this sample in both PyCharm and the terminal, and while the code compiles and runs (which is more than I could say for the Java sample!), I cannot seem to follow where the sample is getting its information.
The sample requires a client_secrets.json file, which I populated with my client ID and client secret obtained from the Google API console. However, I don't see how the sample is supposed to get data on the current Blogger user, as there doesn't seem to be an input to select a user, enter an email address, or anything of the like. The service is apparently supposed to determine the current user, but it doesn't actually appear to do so.
client_secrets.json:
{
  "web": {
    "client_id": "[[INSERT CLIENT ID HERE]]",
    "client_secret": "[[INSERT CLIENT SECRET HERE]]",
    "redirect_uris": [],
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://accounts.google.com/o/oauth2/token"
  }
}
In fact, upon running this code, I receive the following error:
/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/bin/python2.7 /google-api-python-client-master/samples/blogger/blogger.py
This user's display name is: Unknown
Traceback (most recent call last):
  File "/google-api-python-client-master/samples/blogger/blogger.py", line 83, in <module>
    main(sys.argv)
  File "/google-api-python-client-master/samples/blogger/blogger.py", line 62, in main
    for blog in thisusersblogs['items']:
KeyError: 'items'
Process finished with exit code 1
If someone could help me understand what I'm missing about how this sample works, I would definitely appreciate it. My Python is rusty, but I'm hoping that playing around with this sample code will help me get back into it.
The sample code is self-explanatory:
# libraries used to connect with Google's API
import sys

from oauth2client import client
from googleapiclient import sample_tools

def main(argv):
    # Authenticate and construct service.
    service, flags = sample_tools.init(
        argv, 'blogger', 'v3', __doc__, __file__,
        scope='https://www.googleapis.com/auth/blogger')
The code above uses the OAuth2 flow; you are redirected and need to authenticate (at least the first time you run this).
    try:
        users = service.users()  # googleapiclient.discovery.Resource object

        # Retrieve this user's profile information
        thisuser = users.get(userId='self').execute()
        print('This user\'s display name is: %s' % thisuser['displayName'])

        blogs = service.blogs()  # googleapiclient.discovery.Resource object

        # Retrieve the list of Blogs this user has write privileges on
        thisusersblogs = blogs.listByUser(userId='self').execute()  # retrieves all blogs of the user (you = 'self')
        for blog in thisusersblogs['items']:  # iterates over the 'items' key of the returned JSON (a dictionary)
            print('The blog named \'%s\' is at: %s' % (blog['name'], blog['url']))

        posts = service.posts()  # googleapiclient.discovery.Resource object for posts

        # List the posts for each blog this user has
        for blog in thisusersblogs['items']:
            print('The posts for %s:' % blog['name'])
            request = posts.list(blogId=blog['id'])  # uses the posts resource to list the posts of this blog by id
            while request != None:
                posts_doc = request.execute()
                if 'items' in posts_doc and not (posts_doc['items'] is None):
                    for post in posts_doc['items']:
                        print('  %s (%s)' % (post['title'], post['url']))
                request = posts.list_next(request, posts_doc)

    except client.AccessTokenRefreshError:
        print('The credentials have been revoked or expired, please re-run '
              'the application to re-authorize')


if __name__ == '__main__':
    main(sys.argv)
Running this returns all your posts as follows:
This user's display name is: "something"
The blog named 'myTest' is at: http://BLOGNAME.blogspot.com/
The posts for myTest:
POST NAME (http://BLOGNAME.blogspot.com/2016/06/postname.html)
Maybe you want to get started with basic requests, rather than code samples, to get familiar with the API?
https://developers.google.com/blogger/docs/3.0/using#RetrievingABlog
Start with the basics, e.g.:
Retrieving a blog
You can retrieve information for a particular blog by sending an HTTP
GET request to the blog's URI. The URI for a blog has the following
format:
https://www.googleapis.com/blogger/v3/blogs/blogId
Depending on the Python version you are using, you can import different libraries to make your requests, e.g.
from http://docs.python-requests.org/en/master/user/quickstart/#make-a-request
import requests
r = requests.get('https://www.googleapis.com/blogger/v3/blogs/blogId')
print r.text
This should return a JSON response:
{
  "kind": "blogger#blog",
  "id": "2399953",
  "name": "Blogger Buzz",
  "description": "The Official Buzz from Blogger at Google",
  "published": "2007-04-23T22:17:29.261Z",
  "updated": "2011-08-02T06:01:15.941Z",
  "url": "http://buzz.blogger.com/",
  "selfLink": "https://www.googleapis.com/blogger/v3/blogs/2399953",
  "posts": {
    "totalItems": 494,
    "selfLink": "https://www.googleapis.com/blogger/v3/blogs/2399953/posts"
  },
  "pages": {
    "totalItems": 2,
    "selfLink": "https://www.googleapis.com/blogger/v3/blogs/2399953/pages"
  },
  "locale": {
    "language": "en",
    "country": "",
    "variant": ""
  }
}
You probably want to check https://developers.google.com/blogger/docs/3.0/reference/#Blogs as well.
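The same lookup can also be done through the Python client library instead of a raw HTTP request. Here is a minimal sketch (untested), assuming the service object is built with sample_tools.init exactly as in the sample above; the blog ID 2399953 is just the Blogger Buzz example from the docs:
# Rough sketch (untested): fetch a single blog with the discovery-based client,
# reusing the `service` object from the sample above.
blog = service.blogs().get(blogId='2399953').execute()
print('%s -> %s' % (blog['name'], blog['url']))

# Or look a blog up by its address instead of its ID:
blog = service.blogs().getByUrl(url='http://buzz.blogger.com/').execute()
print('Total posts: %s' % blog['posts']['totalItems'])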
Adding a basic working example for working with Google Drive with the Python client library.
Recommendation for a Python client library
I'll add another example for using Google Drive with the Python client library.
First of all, I would highly suggest using the PyDrive library, which makes the OAuth process much easier (install with pip: pip install PyDrive).
More details here.
How to run basic Python code that interacts with Google Drive?
1 ) Go to APIs Console and make your own project.
2 ) Search for ‘Google Drive API’, select the entry, and click ‘Enable’.
3 ) Select ‘Credentials’ from the left menu, click ‘Create Credentials’, select ‘OAuth client ID’.
4 ) Now, the product name and consent screen need to be set -> click ‘Configure consent screen’ and follow the instructions. Once finished:
a. Select ‘Application type’ to be Web application.
b. Enter an appropriate name.
c. Input http://localhost:8080 for Authorized JavaScript origins.
d. Input http://localhost:8080/ for Authorized redirect URIs.
e. Click ‘Create’.
5 ) Click ‘Download JSON’ on the right side of Client ID to download client_secret_<long ID>.json.
The downloaded file has all authentication information of your application.
Rename the file to “client_secrets.json” and place it in your working directory.
Basic code example
Add your code and run it. For example, here is basic code that returns all files in the root directory:
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

gauth = GoogleAuth()
gauth.LocalWebserverAuth()
drive = GoogleDrive(gauth)

# Auto-iterate through all files that match this query
file_list = drive.ListFile({'q': "'root' in parents and trashed=false"}).GetList()
for file1 in file_list:
    print('title: %s, id: %s' % (file1['title'], file1['id']))
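If the listing works, uploading is nearly as short. A minimal sketch (untested), based on the standard PyDrive quickstart and reusing the authenticated drive object from above; the file name and contents are arbitrary:
# Minimal upload sketch (untested), reusing the `drive` object created above.
file1 = drive.CreateFile({'title': 'Hello.txt'})  # create a file object with an arbitrary title
file1.SetContentString('Hello World!')            # set the file body from a string
file1.Upload()                                    # upload it to Drive
print('Uploaded file with id: %s' % file1['id'])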
If you receive a 403 authorization error during the authentication process, you will have to add your Gmail account at least as a test user in order to proceed with the authentication flow.
I'm using the MSAL library for Python, and I have completed all registration on Microsoft Azure, including setting http://localhost as a redirect URL in Active Directory so that I can use the acquire_token_interactive function to fetch a token:
import logging
from pprint import pprint

import msal
import requests

config = {
    "authority": "https://login.microsoftonline.com/organizations",
    "client_id": "XXXXXXXXXXXXXXX",
    "username": "XXXXX@XXXX.com",
    "client_secret": "XXXXXXXXXXXXXXXXX",
    "scope": ["User.ReadBasic.All"],
    "endpoint": "https://graph.microsoft.com/v1.0/users"
}

app = msal.PublicClientApplication(
    config["client_id"], authority=config["authority"],
    # allow_broker=True,  # If opted in, you will be guided to meet the prerequisites, when applicable
    # See also: https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-desktop-acquire-token-wam#wam-value-proposition
    # token_cache=...  # Default cache is in memory only.
    # You can learn how to use SerializableTokenCache from
    # https://msal-python.readthedocs.io/en/latest/#msal.SerializableTokenCache
)
result = None

accounts = app.get_accounts(username=config.get("username"))
if accounts:
    logging.info("Account(s) exists in cache, probably with token too. Let's try.")
    print("Account(s) already signed in:")
    for a in accounts:
        print(a["username"])
    chosen = accounts[0]  # Assuming the end user chose this one to proceed
    print("Proceed with account: %s" % chosen["username"])
    # Now let's try to find a token in cache for this account
    result = app.acquire_token_silent(config["scope"], account=chosen)

if not result:
    logging.info("No suitable token exists in cache. Let's get a new one from AAD.")
    print("A local browser window will be open for you to sign in. CTRL+C to cancel.")
    result = app.acquire_token_interactive(
        port=5001,
        # Only works if your app is registered with redirect_uri as http://localhost
        scopes=config["scope"],
        # parent_window_handle=...,  # If broker is enabled, you will be guided to provide a window handle
        login_hint=config.get("username"),  # Optional.
        # If you know the username ahead of time, this parameter can pre-fill
        # the username (or email address) field of the sign-in page for the user.
        # Often, apps use this parameter during reauthentication,
        # after already extracting the username from an earlier sign-in
        # by using the preferred_username claim from returned id_token_claims.
        on_before_launching_ui=lambda ui="browser", **kwargs: print(
            "A {} will be launched; you can close the tab".format(ui))
        # prompt=msal.Prompt.SELECT_ACCOUNT,  # Or simply "select_account". Optional. It forces the account selector page to show.
        # prompt=msal.Prompt.CREATE,  # Or simply "create". Optional. It brings the user to a self-service sign-up flow.
        # Prerequisite: https://docs.microsoft.com/en-us/azure/active-directory/external-identities/self-service-sign-up-user-flow
    )

if "access_token" in result:
    # print(result["access_token"])
    pprint(result)
    # Calling Graph using the access token
    graph_response = requests.get(  # Use token to call downstream service
        config["endpoint"],
        headers={'Authorization': 'Bearer ' + result['access_token']})
    # print("Graph API call result: %s ..." % graph_response.text)
    # pprint(graph_response.json())
else:
    print(result.get("error"))
    print(result.get("error_description"))
    print(result.get("correlation_id"))  # You may need this when reporting a bug
The content of the redirect page after successful authentication is:
Authentication completed. You can close this window now.
with link: localhost:XXX/?code=0.ATAXXXXXX&session_state=8beXXXXXX
How can I close the localhost tab automatically? Is there a way to automate this process?
After some investigation, I found the success_template parameter. We need to add it to the acquire_token_interactive() function. I injected JS code into it so that the window is closed after 3 seconds:
success_template="""<html><body><script>setTimeout(function(){window.close()}, 3000);</script></body></html> """
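Putting it together, the interactive call then looks roughly like this (a sketch based on the snippet above; only success_template is new relative to the original acquire_token_interactive call):
result = app.acquire_token_interactive(
    scopes=config["scope"],
    port=5001,  # only works if the app is registered with http://localhost as a redirect URI
    login_hint=config.get("username"),
    # Page served on the localhost redirect after a successful sign-in;
    # the injected JS closes the tab after 3 seconds.
    success_template="""<html><body>Authentication completed. This tab will close itself.
<script>setTimeout(function(){window.close()}, 3000);</script></body></html>""",
)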
I'm trying to upgrade a legacy mail bot to authenticate via OAuth2 instead of Basic authentication, which will be deprecated two days from now.
The documentation states that applications can retain their original logic while swapping out only the authentication bit:
Application developers who have built apps that send, read, or
otherwise process email using these protocols will be able to keep the
same protocol, but need to implement secure, Modern authentication
experiences for their users. This functionality is built on top of
Microsoft Identity platform v2.0 and supports access to Microsoft 365
email accounts.
Note that I've explicitly chosen the client credentials flow, because the documentation states:
This type of grant is commonly used for server-to-server interactions
that must run in the background, without immediate interaction with a
user.
So I've got a Python script that retrieves an access token using the MSAL Python library. Now I'm trying to authenticate with the IMAP server using that access token. There are some existing threads out there showing how to connect to Google; I imagine my case is pretty close to that one, except I'm connecting to an Office 365 IMAP server. Here's my script:
import imaplib
import msal
import logging

app = msal.ConfidentialClientApplication(
    'client-id',
    authority='https://login.microsoftonline.com/tenant-id',
    client_credential='secret-key'
)

result = app.acquire_token_for_client(scopes=['https://graph.microsoft.com/.default'])

def generate_auth_string(user, token):
    return 'user=%s\1auth=Bearer %s\1\1' % (user, token)

# IMAP time!
mailserver = 'outlook.office365.com'
imapport = 993
M = imaplib.IMAP4_SSL(mailserver, imapport)
M.debug = 4
M.authenticate('XOAUTH2', lambda x: generate_auth_string('user@mydomain.com', result['access_token']))

print(result)
The IMAP authentication is failing, and despite setting M.debug = 4, the output isn't very helpful:
22:56.53 > b'DBDH1 AUTHENTICATE XOAUTH2'
22:56.53 < b'+ '
22:56.53 write literal size 2048
22:57.84 < b'DBDH1 NO AUTHENTICATE failed.'
22:57.84 NO response: b'AUTHENTICATE failed.'
Traceback (most recent call last):
  File "/home/ubuntu/mini-oauth.py", line 21, in <module>
    M.authenticate("XOAUTH2", lambda x: generate_auth_string('user@mydomain.com', result['access_token']))
  File "/usr/lib/python3.10/imaplib.py", line 444, in authenticate
    raise self.error(dat[-1].decode('utf-8', 'replace'))
imaplib.IMAP4.error: AUTHENTICATE failed.
Any idea where I might be going wrong, or how to get more robust information from the IMAP server about why the authentication is failing?
Things I've looked at
Note this answer no longer works as the suggested scopes fail to generate an Access Token.
The client credentials flow seems to mandate the https://graph.microsoft.com/.default grant. I'm not sure whether that includes the scope required for the IMAP resource, https://outlook.office.com/IMAP.AccessAsUser.All?
I verified that the code lifted from the Google thread produces the SASL XOAUTH2 string correctly, per the example in the MS docs:
import base64
user = 'test#contoso.onmicrosoft.com'
token = 'EwBAAl3BAAUFFpUAo7J3Ve0bjLBWZWCclRC3EoAA'
xoauth = "user=%s\1auth=Bearer %s\1\1" % (user, token)
xoauth = xoauth.encode('ascii')
xoauth = base64.b64encode(xoauth)
xoauth = xoauth.decode('ascii')
xsanity = 'dXNlcj10ZXN0QGNvbnRvc28ub25taWNyb3NvZnQuY29tAWF1dGg9QmVhcmVyIEV3QkFBbDNCQUFVRkZwVUFvN0ozVmUwYmpMQldaV0NjbFJDM0VvQUEBAQ=='
print(xoauth == xsanity) # prints True
This thread seems to suggest multiple tokens need to be fetched, one for graph, then another for the IMAP connection; could that be what I'm missing?
Try the below steps.
For the client credentials flow you need to assign "Application permissions" in the app registration, instead of "Delegated permissions".
Add the permission "Office 365 Exchange Online / IMAP.AccessAsApp" (application).
Grant admin consent to your application.
Register the service principal with Exchange: once a service principal is registered with Exchange Online, administrators can run the Add-MailboxPermission cmdlet to assign receive permissions to the service principal.
Use the scope 'https://outlook.office365.com/.default'.
Now you can generate the SASL authentication string by combining this access token and the mailbox username to authenticate with IMAP4.
# Python code
import imaplib
from msal import ConfidentialClientApplication

def get_access_token():
    tenantID = 'abc'
    authority = 'https://login.microsoftonline.com/' + tenantID
    clientID = 'abc'
    clientSecret = 'abc'
    scope = ['https://outlook.office365.com/.default']
    app = ConfidentialClientApplication(clientID,
                                        authority=authority,
                                        client_credential=clientSecret)
    access_token = app.acquire_token_for_client(scopes=scope)
    return access_token

def generate_auth_string(user, token):
    auth_string = f"user={user}\1auth=Bearer {token}\1\1"
    return auth_string

# IMAP AUTHENTICATE
imap_host = 'outlook.office365.com'
imap = imaplib.IMAP4_SSL(imap_host, 993)
imap.debug = 4
access_token = get_access_token()
imap.authenticate("XOAUTH2", lambda x: generate_auth_string(
    'useremail',
    access_token['access_token']))
imap.select('inbox')
The imaplib.IMAP4.error: AUTHENTICATE failed error occurred because one point in the documentation is not that clear.
When setting up the service principal via PowerShell, you need to enter the App ID and an Object ID. Many people will think it is the Object ID you see on the overview page of the registered app, but it's not!
At this point you need the Object ID from "Azure Active Directory -> Enterprise Applications -> Your-App -> Object-ID".
New-ServicePrincipal -AppId <APPLICATION_ID> -ServiceId <OBJECT_ID> [-Organization <ORGANIZATION_ID>]
Microsoft says:
The OBJECT_ID is the Object ID from the Overview page of the
Enterprise Application node (Azure Portal) for the application
registration. It is not the Object ID from the Overview of the App
Registrations node. Using the incorrect Object ID will cause an
authentication failure.
Of course you also need to take care of the API permissions and the other steps, but for me this was the key point.
So let's go through it again, as it is explained on the documentation page.
Authenticate an IMAP, POP or SMTP connection using OAuth
Register the Application in your Tenant
Setup a Client-Key for the application
Setup the API permissions, select the APIs my organization uses tab and search for "Office 365 Exchange Online" -> Application permissions -> Choose IMAP and IMAP.AccessAsApp
Setup the Service Principal and full access for your Application on the mailbox
Check if IMAP is activated for the mailbox
That's the code I use to test it:
import imaplib
import msal
import pprint

conf = {
    "authority": "https://login.microsoftonline.com/XXXXyourtenantIDXXXXX",
    "client_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXXX",  # AppID
    "scope": ['https://outlook.office365.com/.default'],
    "secret": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",  # Key-Value
    "secret-id": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",  # Key-ID
}

def generate_auth_string(user, token):
    return f"user={user}\x01auth=Bearer {token}\x01\x01"

if __name__ == "__main__":
    app = msal.ConfidentialClientApplication(conf['client_id'], authority=conf['authority'],
                                             client_credential=conf['secret'])

    result = app.acquire_token_silent(conf['scope'], account=None)

    if not result:
        print("No suitable token in cache. Get new one.")
        result = app.acquire_token_for_client(scopes=conf['scope'])

    if "access_token" in result:
        print(result['token_type'])
        pprint.pprint(result)
    else:
        print(result.get("error"))
        print(result.get("error_description"))
        print(result.get("correlation_id"))

    imap = imaplib.IMAP4('outlook.office365.com')
    imap.starttls()
    imap.authenticate("XOAUTH2", lambda x: generate_auth_string("target_mailbox@example.com", result['access_token']).encode("utf-8"))
After setting up the Service Principal and giving the App full access on the mailbox, wait 15 - 30 minutes for the changes to take effect and test it.
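To check that the authenticated connection really has access to the mailbox, you can follow up with a quick plain-imaplib query (a sketch; it assumes the imap object from the code above authenticated successfully):
    # Quick sanity check (sketch): reuse the authenticated `imap` connection from above.
    typ, data = imap.select('INBOX', readonly=True)   # e.g. ('OK', [b'42']) -> message count
    print(typ, data)
    typ, msg_nums = imap.search(None, 'ALL')          # sequence numbers of all messages
    print('Last messages:', msg_nums[0].split()[-5:])
    imap.logout()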
Try with this script:
import json
import msal
import requests

client_id = '***'
client_secret = '***'
tenant_id = '***'
authority = f"https://login.microsoftonline.com/{tenant_id}"

app = msal.ConfidentialClientApplication(
    client_id=client_id,
    client_credential=client_secret,
    authority=authority)

scopes = ["https://graph.microsoft.com/.default"]

result = None
result = app.acquire_token_silent(scopes, account=None)

if not result:
    print("No suitable token exists in cache. Let's get a new one from Azure Active Directory.")
    result = app.acquire_token_for_client(scopes=scopes)

# if "access_token" in result:
#     print("Access token is " + result["access_token"])

if "access_token" in result:
    userId = "***"
    endpoint = f'https://graph.microsoft.com/v1.0/users/{userId}/sendMail'
    toUserEmail = "***"
    email_msg = {'Message': {'Subject': "Test Sending Email from Python",
                             'Body': {'ContentType': 'Text', 'Content': "This is a test email."},
                             'ToRecipients': [{'EmailAddress': {'Address': toUserEmail}}]
                             },
                 'SaveToSentItems': 'true'}
    r = requests.post(endpoint,
                      headers={'Authorization': 'Bearer ' + result['access_token']}, json=email_msg)
    if r.ok:
        print('Sent email successfully')
    else:
        print(r.json())
else:
    print(result.get("error"))
    print(result.get("error_description"))
    print(result.get("correlation_id"))
Source: https://kontext.tech/article/795/python-send-email-via-microsoft-graph-api
Reporting a possible bug in the YouTube API commentThreads.update
Name of API affected: CommentThreads: update
Issue summary: When calling CommentThreads: update to update the topLevelComment returned by CommentThreads.list, I get a 400 error. I was using this API in an internal tool that lets me respond to comments. It was working fine for the last few months, and in the last days (~2016-11-07), with no changes to any code on my side, the API started to return a 400 error. I have all my internal comment systems halted right now, as I could not figure out how to fix this. Everything I tried and tested brings me to the 400 error and the message below.
"While this can be a transient error, it usually indicates that the requests input is invalid."
Even using the API console, I get the same error!
My bet? This is a transient error that I hope Google can fix.
Steps to reproduce issue:
Retrieve "heldForReview" threads using CommentThreads.list
Try to update the returned topLevelComment changing ["topLevelComment"]["snippet"]["textOriginal"] = "some text". Optionally, change the "moderationStatus" to "published". The error is the same.
This behaviour can be easily spotted using the Python API samples, specifically the file comment_threads.py from https://github.com/youtube/api-samples/blob/master/python/comment_threads.py
Expected output: The top-level comment of the thread should become visible (after setting the moderationStatus to published) and have "some text" as the first child comment, i.e. as its answer.
Actual results:
{
"error": {
"errors": [
{
"domain": "youtube.commentThread",
"reason": "processingFailure",
"message": "The API server failed to successfully process the request. While this can be a transient error, it usually indicates that the requests input is invalid. Check the structure of the \u003ccode\u003ecommentThread\u003c/code\u003e resource in the request body to ensure that it is valid.",
"locationType": "other",
"location": "body"
}
],
"code": 400,
"message": "The API server failed to successfully process the request. While this can be a transient error, it usually indicates that the requests input is invalid. Check the structure of the \u003ccode\u003ecommentThread\u003c/code\u003e resource in the request body to ensure that it is valid."
}
}
Notes: I am providing sample code below.
#!/usr/bin/python

# Usage example:
# python comment_threads.py --channelid='<channel_id>' --videoid='<video_id>' --text='<text>'

import httplib2
import os
import sys

from apiclient.discovery import build_from_document
from apiclient.errors import HttpError
from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage
from oauth2client.tools import argparser, run_flow

# The CLIENT_SECRETS_FILE variable specifies the name of a file that contains
# the OAuth 2.0 information for this application, including its client_id and
# client_secret. You can acquire an OAuth 2.0 client ID and client secret from
# the {{ Google Cloud Console }} at
# {{ https://cloud.google.com/console }}.
# Please ensure that you have enabled the YouTube Data API for your project.
# For more information about using OAuth2 to access the YouTube Data API, see:
#   https://developers.google.com/youtube/v3/guides/authentication
# For more information about the client_secrets.json file format, see:
#   https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
CLIENT_SECRETS_FILE = "client_secrets.json"

# This OAuth 2.0 access scope allows for full read/write access to the
# authenticated user's account and requires requests to use an SSL connection.
YOUTUBE_READ_WRITE_SSL_SCOPE = "https://www.googleapis.com/auth/youtube.force-ssl"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"

# This variable defines a message to display if the CLIENT_SECRETS_FILE is
# missing.
MISSING_CLIENT_SECRETS_MESSAGE = """
WARNING: Please configure OAuth 2.0

To make this sample run you will need to populate the client_secrets.json file
found at:
   %s
with information from the APIs Console
https://console.developers.google.com

For more information about the client_secrets.json file format, please visit:
https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
""" % os.path.abspath(os.path.join(os.path.dirname(__file__), CLIENT_SECRETS_FILE))


# Authorize the request and store authorization credentials.
def get_authenticated_service(args):
    flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE, scope=YOUTUBE_READ_WRITE_SSL_SCOPE,
                                   message=MISSING_CLIENT_SECRETS_MESSAGE)

    storage = Storage("%s-oauth2.json" % sys.argv[0])
    credentials = storage.get()

    if credentials is None or credentials.invalid:
        credentials = run_flow(flow, storage, args)

    # Trusted testers can download this discovery document from the developers page
    # and it should be in the same directory with the code.
    with open("youtube-v3-discoverydocument.json", "r") as f:
        doc = f.read()

    return build_from_document(doc, http=credentials.authorize(httplib2.Http()))


# Call the API's commentThreads.list method to list the existing comments.
def get_comments(youtube, video_id, channel_id):
    results = youtube.commentThreads().list(
        part="snippet",
        videoId=video_id,
        channelId=channel_id,
        # ONLY FILTER COMMENTS THAT ARE HELD FOR REVIEW
        # THOSE COMMENTS ONLY HAVE ONE TOP LEVEL COMMENT
        moderationStatus="heldForReview",
        textFormat="plainText").execute()

    for item in results["items"]:
        comment = item["snippet"]["topLevelComment"]
        author = comment["snippet"]["authorDisplayName"]
        text = comment["snippet"]["textDisplay"]
        print "Comment by %s: %s" % (author, text)

    return results["items"]


# Call the API's commentThreads.insert method to insert a comment.
def insert_comment(youtube, channel_id, video_id, text):
    insert_result = youtube.commentThreads().insert(
        part="snippet",
        body=dict(
            snippet=dict(
                channelId=channel_id,
                videoId=video_id,
                topLevelComment=dict(
                    snippet=dict(
                        textOriginal=text
                    )
                )
            )
        )
    ).execute()

    comment = insert_result["snippet"]["topLevelComment"]
    author = comment["snippet"]["authorDisplayName"]
    text = comment["snippet"]["textDisplay"]
    print "Inserted comment for %s: %s" % (author, text)


# Call the API's commentThreads.update method to update an existing comment.
def update_comment(youtube, comment):
    comment["snippet"]["topLevelComment"]["snippet"]["textOriginal"] = 'updated'
    update_result = youtube.commentThreads().update(
        part="snippet",
        body=comment
    ).execute()

    comment = update_result["snippet"]["topLevelComment"]
    author = comment["snippet"]["authorDisplayName"]
    text = comment["snippet"]["textDisplay"]
    print "Updated comment for %s: %s" % (author, text)


if __name__ == "__main__":
    # The "channelid" option specifies the YouTube channel ID that uniquely
    # identifies the channel for which the comment will be inserted.
    argparser.add_argument("--channelid",
                           help="Required; ID for channel for which the comment will be inserted.")
    # The "videoid" option specifies the YouTube video ID that uniquely
    # identifies the video for which the comment will be inserted.
    argparser.add_argument("--videoid",
                           help="Required; ID for video for which the comment will be inserted.")
    # The "text" option specifies the text that will be used as comment.
    argparser.add_argument("--text", help="Required; text that will be used as comment.")
    args = argparser.parse_args()

    if not args.channelid:
        exit("Please specify channelid using the --channelid= parameter.")
    if not args.videoid:
        exit("Please specify videoid using the --videoid= parameter.")
    if not args.text:
        exit("Please specify text using the --text= parameter.")

    youtube = get_authenticated_service(args)

    try:
        # All the available methods are used in sequence just for the sake of an example.

        # Insert channel comment by omitting videoId
        # REMOVED AS IT IS NOT RELEVANT TO THE ERROR REPORT: insert_comment(youtube, args.channelid, None, args.text)

        # Insert video comment
        # REMOVED AS IT IS NOT RELEVANT TO THE ERROR REPORT: insert_comment(youtube, args.channelid, args.videoid, args.text)

        video_comments = get_comments(youtube, args.videoid, None)
        if video_comments:
            update_comment(youtube, video_comments[0])
    except HttpError, e:
        print "An HTTP error %d occurred:\n%s" % (e.resp.status, e.content)
    else:
        print "Inserted, listed and updated top-level comments."
Of course, you will need your authorization files inside the same directory.
I am just starting with the Google Analytics Reporting API and used the Hello API tutorial to get started (https://developers.google.com/analytics/solutions/articles/hello-analytics-api).
Unfortunately, I am stuck before I even start. I read it (twice), created the project, and updated the client_secrets.json file... but when I run the main script, it crashes:
File "C:\Python27\New Libraries Downloaded\analytics-v3-python-cmd-line\hello_analytics_api_v3.py", line 173, in <module>
main(sys.argv)
File "C:\Python27\New Libraries Downloaded\analytics-v3-python-cmd-line\hello_analytics_api_v3.py", line 56, in main
service, flags = sample_tools.init(argv, 'analytics', 'v3', __doc__, __file__, scope='https://www.googleapis.com/auth/analytics.readonly')
NameError: global name '__file__' is not defined
I'm new (really really new) to this, so any help (and a more detailed tutorial) would be much appreciated.
Thanks!
EDIT: I haven't changed anything from the original code in the tutorial. I'll worry about modifications after I get this running. Thanks!
CODE: hello_analytics_api_v3.py
import argparse
import sys

from apiclient.errors import HttpError
from apiclient import sample_tools
from oauth2client.client import AccessTokenRefreshError


def main(argv):
    # Authenticate and construct service.
    service, flags = sample_tools.init(argv, 'analytics', 'v3', __doc__, __file__, scope='https://www.googleapis.com/auth/analytics.readonly')

    # Try to make a request to the API. Print the results or handle errors.
    try:
        first_profile_id = get_first_profile_id(service)
        if not first_profile_id:
            print 'Could not find a valid profile for this user.'
        else:
            results = get_top_keywords(service, first_profile_id)
            print_results(results)

    except TypeError, error:
        # Handle errors in constructing a query.
        print ('There was an error in constructing your query : %s' % error)

    except HttpError, error:
        # Handle API errors.
        print ('Arg, there was an API error : %s : %s' % (error.resp.status, error._get_reason()))

    except AccessTokenRefreshError:
        # Handle Auth errors.
        print ('The credentials have been revoked or expired, please re-run ', 'the application to re-authorize')


def get_first_profile_id(service):
    """Traverses Management API to return the first profile id.

    This first queries the Accounts collection to get the first account ID.
    This ID is used to query the Webproperties collection to retrieve the first
    webproperty ID. And both account and webproperty IDs are used to query the
    Profile collection to get the first profile id.

    Args:
        service: The service object built by the Google API Python client library.

    Returns:
        A string with the first profile ID. None if a user does not have any
        accounts, webproperties, or profiles.
    """
    accounts = service.management().accounts().list().execute()

    if accounts.get('items'):
        firstAccountId = accounts.get('items')[0].get('id')
        webproperties = service.management().webproperties().list(
            accountId=firstAccountId).execute()

        if webproperties.get('items'):
            firstWebpropertyId = webproperties.get('items')[0].get('id')
            profiles = service.management().profiles().list(
                accountId=firstAccountId,
                webPropertyId=firstWebpropertyId).execute()

            if profiles.get('items'):
                return profiles.get('items')[0].get('id')

    return None


def get_top_keywords(service, profile_id):
    """Executes and returns data from the Core Reporting API.

    This queries the API for the top 25 organic search terms by visits.

    Args:
        service: The service object built by the Google API Python client library.
        profile_id: String The profile ID from which to retrieve analytics data.

    Returns:
        The response returned from the Core Reporting API.
    """
    return service.data().ga().get(
        ids='ga:' + profile_id,
        start_date='2012-01-01',
        end_date='2012-01-15',
        metrics='ga:visits',
        dimensions='ga:source,ga:keyword',
        sort='-ga:visits',
        filters='ga:medium==organic',
        start_index='1',
        max_results='25').execute()


def print_results(results):
    """Prints out the results.

    This prints out the profile name, the column headers, and all the rows of
    data.

    Args:
        results: The response returned from the Core Reporting API.
    """
    print
    print 'Profile Name: %s' % results.get('profileInfo').get('profileName')
    print

    # Print header.
    output = []
    for header in results.get('columnHeaders'):
        output.append('%30s' % header.get('name'))
    print ''.join(output)

    # Print data table.
    if results.get('rows', []):
        for row in results.get('rows'):
            output = []
            for cell in row:
                output.append('%30s' % cell)
            print ''.join(output)
    else:
        print 'No Rows Found'


if __name__ == '__main__':
    main(sys.argv)
According to the error, the program doesn't recognize __file__. In IPython this error comes up (I'm not 100% sure why), but it shouldn't come up when running a script file; in a file, __file__ returns the script's full path and file name.
Try creating a file and running it from there, or simply paste in the full path and file name instead.
Also be sure that the client secrets file is located in the same folder as your script!
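If you are running the code from IPython or another interactive shell where __file__ is not defined, one workaround (a sketch, untested; the path below is just a placeholder) is to hand sample_tools.init an explicit path to a file that sits next to your client_secrets.json, since that argument is only used to locate the secrets file:
# Workaround sketch (untested) for environments where __file__ is undefined.
# The path is a placeholder; point it at the directory containing client_secrets.json.
import os
script_path = os.path.join(r'C:\path\to\your\project', 'hello_analytics_api_v3.py')
service, flags = sample_tools.init(
    argv, 'analytics', 'v3', __doc__, script_path,
    scope='https://www.googleapis.com/auth/analytics.readonly')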
Can a Python script upload a photo to Photobucket and then retrieve the URL for it? If so, how?
I found a script at this link: http://www.democraticunderground.com/discuss/duboard.php?az=view_all&address=240x677
But I just found that confusing.
Many thanks,
Phil
Yes, you can. Photobucket has a well-documented API, and someone wrote a wrapper around it.
Download it and put it into your Python path, then download httplib2 (you can use easy_install or pip for this one).
Then, you have to request a key for the Photobucket API.
If you did everything right, you can write your script now. The Python wrapper is great, but it is not documented at all, which makes it very difficult to use. I spent hours understanding it (compare the question and response times here). As an example, the wrapper even has form/multipart support, but I had to read the code to find out how to use it: I had to prefix the filename with a #.
This library is a great example of how you should NOT document your code!
I finally got it working; enjoy the script (it even has OAuth handling!):
import pbapi
import webbrowser
import cPickle
import os
import re
import sys

from xml.etree import ElementTree

__author__ = "leoluk"

###############################################
##              CONFIGURATION                ##
###############################################

# File in which the oAuth token will be stored
TOKEN_FILE = "token.txt"

IMAGE_PATH = r"D:\Eigene Dateien\Bilder\SC\foo.png"

IMAGE_RECORD = {
    "type": 'image',
    "uploadfile": '#' + IMAGE_PATH,
    "title": "My title",  # <---
    "description": "My description",  # <---
}

ALBUM_NAME = None  # default album if None

API_KEY = "149[..]"
API_SECRET = "528[...]"

###############################################
##                  SCRIPT                   ##
###############################################

api = pbapi.PbApi(API_KEY, API_SECRET)
api.pb_request.connection.cache = None

# Test if the service is online
api.reset().ping().post()
result = api.reset().ping().post().response_string

ET = ElementTree.fromstring(result)
if ET.find('status').text != 'OK':
    sys.stderr.write("error: Ping failed \n" + result)
    sys.exit(-1)

try:
    # If there is already a saved oAuth token, no need for a new one
    api.username, api.pb_request.oauth_token = cPickle.load(open(TOKEN_FILE))
except (ValueError, KeyError, IOError, TypeError):
    # If error, there's no valid oAuth token

    # Getting request token
    api.reset().login().request().post().load_token_from_response()

    # Requesting user permission (you have to log in with your account)
    webbrowser.open_new_tab(api.login_url)
    raw_input("Press Enter when you finished access permission. ")

    # Getting oAuth token
    api.reset().login().access().post().load_token_from_response()

# This is needed for getting the right subdomain
infos = api.reset().album(api.username).url().get().response_string

ET = ElementTree.fromstring(infos)
if ET.find('status').text != 'OK':
    # Remove the invalid oAuth token
    os.remove(TOKEN_FILE)
    # This happens if the user deletes the oAuth permission online
    sys.stderr.write("error: Permission deleted. Please re-run.")
    sys.exit(-1)

# Fresh values for username and subdomain
api.username = ET.find('content/username').text
api.set_subdomain(ET.find('content/subdomain/api').text)

# Default album name
if not ALBUM_NAME:
    ALBUM_NAME = api.username

# Debug :-)
print "User: %s" % api.username

# Save the new, valid oAuth token
cPickle.dump((api.username, api.oauth_token), open(TOKEN_FILE, 'w'))

# Posting the image
result = (api.reset().album(ALBUM_NAME).
          upload(IMAGE_RECORD).post().response_string)

ET = ElementTree.fromstring(result)
if ET.find('status').text != 'OK':
    sys.stderr.write("error: File upload failed \n" + result)
    sys.exit(-1)

# Now, as an example of what you could do next, open the image in the browser
webbrowser.open_new_tab(ET.find('content/browseurl').text)
Use the Python API by Ron White, which was written to do just this.