I've been trying to make a script that updates a YouTube video's title with its view count, like the Tom Scott video. But I keep getting this error with the authentication.
I get the code to authenticate, but it just says I don't have enough permission:
<HttpError 400 when requesting https://youtube.googleapis.com/youtube/v3/videos?part=%28%27snippet%2Cstatus%2Clocalizations%27%2C%29&alt=json returned "'('snippet'". Details: "[{'message': "'('snippet'", 'domain': 'youtube.part', 'reason': 'unknownPart', 'location': 'part', 'locationType': 'parameter'}]">
  File "C:\Users\erik\Documents\PYTHON CODE\view_changer\example.py", line 49, in main
    response_update = request_update.execute()
Here's my code:
import os
import google_auth_oauthlib.flow
import googleapiclient.discovery
import googleapiclient.errors
scopes = ["https://www.googleapis.com/auth/youtube"]
api_key = "..."
video_1_part_view = 'statistics'
video_1_id_view = 'hVuoPgZJ3Yw'
video_1_part_update = "snippet,status,localizations",
video_1_part_body_update = {
    "id": "1y94SadSsMU",
    "localizations": {
        "es": {
            "title": "no hay nada a ver aqui",
            "description": "Esta descripcion es en español."
        }
    }
}
def main():
    # Disable OAuthlib's HTTPS verification when running locally.
    # *DO NOT* leave this option enabled in production.
    os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"

    api_service_name = "youtube"
    api_version = "v3"
    client_secrets_file = (r"C:\Users\erik\Documents\PYTHON CODE\view_changer\client_secret_379587650859-bbl63je7sh6kva19l33auonkledt09a9.apps.googleusercontent.com.json")

    # Get credentials and create an API client
    flow = google_auth_oauthlib.flow.InstalledAppFlow.from_client_secrets_file(client_secrets_file, scopes)
    credentials = flow.run_console()
    youtube = googleapiclient.discovery.build(api_service_name, api_version, credentials=credentials)

    # Check views
    request_view = youtube.videos().list(part=video_1_part_view, id=video_1_id_view)
    respons_view = request_view.execute()
    views = respons_view['items'][0]['statistics']['viewCount']
    print(views)

    # Update video
    request_update = youtube.videos().update(part=video_1_part_update, body=video_1_part_body_update)
    response_update = request_update.execute()
    print(response_update)

main()

# if __name__ == "__main__":
#     main()
If the problem is not in the code, I have another idea of what it might be. When I was first setting up OAuth, I didn't know I needed to pick what the user would be giving permission to (rookie mistake, I know), and since I updated that, the consent screen I get after following the authentication link the code gives me doesn't ask for permission for all the things I told it to do.
According to the official documentation, the request parameter part of the Videos.list API endpoint is defined as:
part (string)
The part parameter specifies a comma-separated list of one or more video resource properties that the API response will include.
[...]
But your request has that parameter set to:
('snippet,status,localizations',).
That's because you have the variable video_1_part_update defined as:
video_1_part_update = "snippet,status,localizations",
(Do notice the trailing comma.) That makes video_1_part_update a Python tuple, which is not what you need to send to the API endpoint.
The same would apply to video_1_part_view if it were defined with a trailing comma.
Just remove the trailing comma and things will work OK.
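As a sketch of the fix, using the same variable names as in the question, the part values should be plain strings:

video_1_part_view = 'statistics'
video_1_part_update = "snippet,status,localizations"  # no trailing comma, so this stays a str

request_update = youtube.videos().update(
    part=video_1_part_update,  # sends part=snippet,status,localizations
    body=video_1_part_body_update
)
response_update = request_update.execute()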
I wrote a Python script that adds videos to a YouTube playlist via the YouTube API, using a text file of video IDs and a batch request. It works, but incorrectly: it appears that some videos are being skipped and not added, and which ones get skipped seems random.
Edit: I ran this script about 10 times during testing, but the image attached below shows a new playlist where I ran the program twice, hence the "1st" and "2nd" I painted on it.
That's an example list (numbered for easier reading in this post):
videoIds = [ 1'd7ypnPjz81I', 2'vZv9-TWdBJM', 3'FndmvPkI1Ms', 4'q-rXs9Bu6lM', 5'63K5VMx2BZM', 6'9WS6hC-6i68', 7'7uG6E6bVKU0', 8'8EzfBYFU8Q0']
That's a snippet of my code:
youtube = googleapiclient.discovery.build(
    api_service_name, api_version, credentials=credentials)

batch = youtube.new_batch_http_request()

for videoId in videoIds:
    batch.add(youtube.playlistItems().insert(
        part="snippet",
        body={
            "snippet": {
                "playlistId": "my playlist id",
                "position": 0,
                "resourceId": {
                    "kind": "youtube#video",
                    "videoId": videoId
                }
            }
        }
    ))

response = batch.execute()
I see only two possible explanations. The first is my batch request messing up, possibly merging two IDs together, which I think is highly unlikely. The second is that it's failing on YouTube's side: either not reading my batch correctly (which, I have to admit, I don't understand beyond knowing that it bundles small chunks into one single big piece), not keeping up with the requests and therefore skipping some, or something else entirely that I have no idea about. I'm adding an image for clarity.
EDIT
Also, if I'm allowed to ask: why does the Google quota eat so much of my usage? It counts 150 or more units each time I run my script. Shouldn't a batch count as only one interaction costing only 50? And is it possible to get around the usage limit of 10,000 by running the script on a Google site, in some testing window? I'm not sure, but I think I heard about such a trick during my research while writing this program.
EDIT: I tried adding a callback to my script; from what I understand, it reports back that each request has been sent and completed? Anyway, it added only three videos at random, and since I set up callback=main incorrectly, it gave me an error:
TypeError: main() takes 0 positional arguments but 3 were given
with my poor attempt:
batch = youtube.new_batch_http_request(callback=main)
That number 3 must be the key to it, if only I could somehow extract information about the whole process to see what is actually happening.
EDIT: I don't understand how to set the callback properly: https://github.com/googleapis/google-api-python-client/blob/master/docs/batch.md
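For reference, the batch documentation linked above says the callback is called with three arguments per request (a request ID, the parsed response, and an exception or None), which is where the "3 were given" in that TypeError comes from. A minimal sketch of a logging callback (the function name here is made up) would be:

def insert_callback(request_id, response, exception):
    # Called once for every request added to the batch.
    if exception is not None:
        print("Request {} failed: {}".format(request_id, exception))
    else:
        print("Request {} created playlist item {}".format(request_id, response.get("id")))

batch = youtube.new_batch_http_request(callback=insert_callback)

Logging the exception for each request should show whether the "skipped" videos are actually failed inserts.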
The following test code snippet inserts the videos in the correct order; substitute playlistId with the one matching your setup.
Note: videoId '9WS6hC-6i68' is unavailable
Note: create an empty test playlist, then run the test code snippet to verify the insertion order.
# -*- coding: utf-8 -*-

# Sample Python code for youtube.playlistItems.insert
# See instructions for running these code samples locally:
# https://developers.google.com/explorer-help/guides/code_samples#python

import os

import google_auth_oauthlib.flow
import googleapiclient.discovery
import googleapiclient.errors

scopes = ["https://www.googleapis.com/auth/youtube.force-ssl"]

def main():
    # Disable OAuthlib's HTTPS verification when running locally.
    # *DO NOT* leave this option enabled in production.
    os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"

    api_service_name = "youtube"
    api_version = "v3"
    client_secrets_file = "client_secret.json"

    playlistId = "your_playlist_id"
    videoIds = [
        "d7ypnPjz81I",
        "vZv9-TWdBJM",
        "FndmvPkI1Ms",
        "q-rXs9Bu6lM",
        "63K5VMx2BZM",
        "7uG6E6bVKU0",
        "8EzfBYFU8Q0"
    ]

    # Get credentials and create an API client
    flow = google_auth_oauthlib.flow.InstalledAppFlow.from_client_secrets_file(
        client_secrets_file, scopes)
    credentials = flow.run_console()
    youtube = googleapiclient.discovery.build(
        api_service_name, api_version, credentials=credentials)

    for videoId in videoIds:
        request = youtube.playlistItems().insert(
            part="snippet",
            body={
                "kind": "youtube#playlistItem",
                "snippet": {
                    "playlistId": playlistId,
                    "position": 0,
                    "resourceId": {
                        "kind": "youtube#video",
                        "videoId": videoId
                    }
                },
            }
        )
        response = request.execute()
        print(response)

if __name__ == "__main__":
    main()
I was using the YouTube Data API v3 to put the current view count into my video title, just like in this video (https://www.youtube.com/watch?v=BxV14h0kFs0), and it works!
I set it up on PythonAnywhere for this video ID: 7vSw7XRivWY; it worked like a charm and is still running.
Then I tried to do the same thing for another video (ID: fDh-qcbIxtU), and now it throws
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://www.googleapis.com/youtube/v3/videos?part=snippet&alt=json returned "The request metadata is invalid.">
{
  "error": {
    "code": 403,
    "message": "The request is missing a valid API key.",
    "errors": [
      {
        "message": "The request is missing a valid API key.",
        "domain": "global",
        "reason": "forbidden"
      }
    ],
    "status": "PERMISSION_DENIED"
  }
}
this error. I'm using the exact same code I used for the other ID; all I did was change the video ID variable to the other one.
I have no clue why this happens. I retried it multiple times, tried different API scopes, and tried another API client key; same result, not working.
In my Python code I implemented a method change_title that takes the API client and the video ID as parameters, first fetches the view count and the snippet part (which contains the title), edits the title in the snippet, and then updates the video accordingly.
As already stated, it works for any of my other videos, but not for this one (ID: fDh-qcbIxtU). I'm 100% sure I put in the correct ID and that I didn't change anything else.
Here's the whole file that is being executed:
# -*- coding: utf-8 -*-

# Sample Python code for youtube.channels.list
# See instructions for running these code samples locally:
# https://developers.google.com/explorer-help/guides/code_samples#python

import os

import google_auth_oauthlib.flow
import googleapiclient.discovery
import googleapiclient.errors
import time

scopes = ['https://www.googleapis.com/auth/youtube.force-ssl']

def like_video(youtube, videoID):
    youtube.videos().rate(
        id=videoID,
        rating='like'
    ).execute()

def change_title(youtube, videoID):
    video_list_response_statistics = youtube.videos().list(
        id=videoID,
        part="statistics"
    ).execute()

    videos_list_statistics = video_list_response_statistics['items'][0]['statistics']
    views = videos_list_statistics['viewCount']
    print("Views: " + str(views))

    snippetPart = youtube.videos().list(
        id=videoID,
        part="snippet"
    ).execute()

    # Since the request specified a video ID, the response only contains one
    # video resource. This code extracts the snippet from that resource.
    videos_list_snippet = snippetPart['items'][0]['snippet']
    print(videos_list_snippet)

    videos_list_snippet['title'] += " [" + str(views) + " Views]"

    videos_update_response = youtube.videos().update(
        part='snippet',
        body=dict(
            id=videoID,
            snippet=videos_list_snippet
        )).execute()

def main():
    # Disable OAuthlib's HTTPS verification when running locally.
    # *DO NOT* leave this option enabled in production.
    os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "0"

    api_service_name = "youtube"
    api_version = "v3"
    client_secrets_file = "credentials.json"

    # Get credentials and create an API client
    flow = google_auth_oauthlib.flow.InstalledAppFlow.from_client_secrets_file(
        client_secrets_file, scopes)
    credentials = flow.run_console()
    youtube = googleapiclient.discovery.build(
        api_service_name, api_version, credentials=credentials)

    i = 1
    videoID = "fDh-qcbIxtU"
    while True:
        print("Update No" + str(i))
        response = change_title(youtube, videoID)
        print(response)
        i += 1
        time.sleep(1000)

if __name__ == "__main__":
    main()
Thanks to anyone who's taking the time to try and solve my issue.
I was getting this error for one particular YouTube channel of mine, and that was because it was managed by a Brand Account rather than by its owner account.
It might not be helpful in your case, but maybe it can help others facing this issue.
Just authorize with the right account.
The code I've written seems to be what I need; however, it doesn't work and I get a 401 (authentication) error. I've tried everything:
1. Service account permissions
2. Creating a secret ID and key (not sure how to use those to get an access token, though)
3. Basically everything else over the past 2 days.
import requests
from google.oauth2 import service_account

METADATA_URL = 'http://metadata.google.internal/computeMetadata/v1/'
METADATA_HEADERS = {'Metadata-Flavor': 'Google'}
SERVICE_ACCOUNT = [NAME-OF-SERVICE-ACCOUNT-USED-WITH-CLOUD-FUNCTION-WHICH-HAS-COMPUTE-ADMIN-PRIVILEGES]

def get_access_token():
    url = '{}instance/service-accounts/{}/token'.format(
        METADATA_URL, SERVICE_ACCOUNT)

    # Request an access token from the metadata server.
    r = requests.get(url, headers=METADATA_HEADERS)
    r.raise_for_status()

    # Extract the access token from the response.
    access_token = r.json()['access_token']

    return access_token

def start_vms(request):
    request_json = request.get_json(silent=True)
    request_args = request.args

    if request_json and 'number_of_instances_to_create' in request_json:
        number_of_instances_to_create = request_json['number_of_instances_to_create']
    elif request_args and 'number_of_instances_to_create' in request_args:
        number_of_instances_to_create = request_args['number_of_instances_to_create']
    else:
        number_of_instances_to_create = 0

    access_token = get_access_token()

    address = "https://www.googleapis.com/compute/v1/projects/[MY-PROJECT]/zones/europe-west2-b/instances?sourceInstanceTemplate=https://www.googleapis.com/compute/v1/projects/[MY-PROJECT]/global/instanceTemplates/[MY-INSTANCE-TEMPLATE]"
    headers = {'token': '{}'.format(access_token)}

    for i in range(1, number_of_instances_to_create):
        data = {'name': 'my-instance-{}'.format(i)}
        r = requests.post(address, data=data, headers=headers)
        r.raise_for_status()
        print("my-instance-{} created".format(i))
Any advice or guidance? It would help if someone could tell me how to get an access token using the secret ID and key. Also, I'm not too sure OAuth 2.0 will work, because I essentially want to turn these machines on, have them do some processing, and then have them self-destruct, so there is no user involved to grant access. If OAuth 2.0 is the wrong way to go about it, what else can I use?
I tried using gcloud, but calling gcloud commands via subprocess isn't recommended.
I did something similar to this, though I used the Node 10 Firebase Functions runtime; it should be very similar nevertheless.
I agree that OAuth is not the correct solution since there is no user involved.
What you need to use is 'Application Default Credentials', which is based on the permissions available to your Cloud Function's default service account; that will be the one labelled "App Engine default service account" here:
https://console.cloud.google.com/iam-admin/serviceaccounts?folder=&organizationId=&project=[YOUR_PROJECT_ID]
(For my project that service account already had the permissions necessary for starting and stopping GCE instances, but for other APIs I had to grant it permissions manually.)
ADC is for server-to-server API calls. To use it I called google.auth.getClient (from the Google APIs auth library) with just the scope, i.e. "https://www.googleapis.com/auth/cloud-platform".
This API is very versatile in that it returns whatever credentials you need, so when I am running on Cloud Functions it returns a 'Compute' object, and when I'm running in the emulator it gives me a 'UserRefreshClient' object.
I then include that auth object in my call to compute.instances.insert() and compute.instances.stop().
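Since the rest of this thread is Python, here is a rough sketch of the same Application Default Credentials approach with the google-auth and google-api-python-client libraries; the project, zone, and template URL are placeholders taken from the question, not a tested drop-in replacement:

import google.auth
import googleapiclient.discovery

# Application Default Credentials: on Cloud Functions this resolves to the
# function's runtime service account; locally it falls back to your own
# gcloud / ADC credentials. project_id may be None outside GCP.
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"])

compute = googleapiclient.discovery.build("compute", "v1", credentials=credentials)

# Placeholder template URL, mirroring the one in the question.
template_url = ("https://www.googleapis.com/compute/v1/projects/[MY-PROJECT]"
                "/global/instanceTemplates/[MY-INSTANCE-TEMPLATE]")

operation = compute.instances().insert(
    project=project_id,
    zone="europe-west2-b",
    sourceInstanceTemplate=template_url,
    body={"name": "my-instance-1"}
).execute()
print(operation.get("status"))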
Here's the template I used for testing my code:
{
    name: 'base',
    description: 'Temporary instance used for testing.',
    tags: { items: [ 'test' ] },
    machineType: `zones/${zone}/machineTypes/n1-standard-1`,
    disks: [
        {
            autoDelete: true, // you will want this!
            boot: true,
            type: 'PERSISTENT',
            initializeParams: {
                diskSizeGb: '10',
                sourceImage: "projects/ubuntu-os-cloud/global/images/ubuntu-minimal-1804-bionic-v20190628",
            }
        }
    ],
    networkInterfaces: [
        {
            network: `https://www.googleapis.com/compute/v1/projects/${projectId}/global/networks/default`,
            accessConfigs: [
                {
                    name: 'External NAT',
                    type: 'ONE_TO_ONE_NAT'
                }
            ]
        }
    ],
}
Hope that helps.
If you're getting a 401 error, that means the access token you're using is either expired or invalid.
This guide will be able to show you how to request OAuth 2.0 access tokens and make API calls using a Service Account: https://developers.google.com/identity/protocols/OAuth2ServiceAccount
The .json file mentioned is the private key you create in IAM & Admin under your service account.
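A minimal Python sketch of that flow with the google-auth library (the key file name and scope here are placeholders) looks roughly like this:

from google.oauth2 import service_account
from google.auth.transport.requests import Request

# The JSON key file created under IAM & Admin > Service Accounts.
credentials = service_account.Credentials.from_service_account_file(
    "service-account-key.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"])

# Refresh to obtain a short-lived OAuth 2.0 access token.
credentials.refresh(Request())
print(credentials.token)

You can then pass the credentials object to a client library, or send credentials.token as a Bearer token in the Authorization header.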
Reporting a possible bug in the YouTube API's CommentThreads.update.
Name of API affected: CommentThreads: update
Issue summary: when calling CommentThreads.update to update the topLevelComment returned by CommentThreads.list, I get a 400 error. I was using this API in an internal tool that lets me respond to comments. It had been working fine for months, and then in the last few days (around 2016-11-07), with no changes to any code on my side, the API started returning a 400 error. All my internal comment systems are halted right now because I could not figure out how to fix this. Everything I tried and tested leads to the 400 error and the message below.
"While this can be a transient error, it usually indicates that the requests input is invalid."
Even using the API console, I get the same error!
My bet? This is a transient error that I hope Google can fix.
Steps to reproduce issue:
Retrieve "heldForReview" threads using CommentThreads.list
Try to update the returned topLevelComment, changing ["topLevelComment"]["snippet"]["textOriginal"] = "some text". Optionally, change the moderationStatus to "published"; the error is the same.
This behaviour can be easily spotted using the Python API samples, specifically the file comment_threads.py from https://github.com/youtube/api-samples/blob/master/python/comment_threads.py
Expected output: the top-level comment of the thread should become visible (after setting the moderationStatus to published) and have "some text" as the first child comment, as its answer.
Actual results:
{
  "error": {
    "errors": [
      {
        "domain": "youtube.commentThread",
        "reason": "processingFailure",
        "message": "The API server failed to successfully process the request. While this can be a transient error, it usually indicates that the requests input is invalid. Check the structure of the \u003ccode\u003ecommentThread\u003c/code\u003e resource in the request body to ensure that it is valid.",
        "locationType": "other",
        "location": "body"
      }
    ],
    "code": 400,
    "message": "The API server failed to successfully process the request. While this can be a transient error, it usually indicates that the requests input is invalid. Check the structure of the \u003ccode\u003ecommentThread\u003c/code\u003e resource in the request body to ensure that it is valid."
  }
}
Notes: I am providing sample code below.
#!/usr/bin/python

# Usage example:
# python comment_threads.py --channelid='<channel_id>' --videoid='<video_id>' --text='<text>'

import httplib2
import os
import sys

from apiclient.discovery import build_from_document
from apiclient.errors import HttpError
from oauth2client.client import flow_from_clientsecrets
from oauth2client.file import Storage
from oauth2client.tools import argparser, run_flow

# The CLIENT_SECRETS_FILE variable specifies the name of a file that contains
# the OAuth 2.0 information for this application, including its client_id and
# client_secret. You can acquire an OAuth 2.0 client ID and client secret from
# the {{ Google Cloud Console }} at
# {{ https://cloud.google.com/console }}.
# Please ensure that you have enabled the YouTube Data API for your project.
# For more information about using OAuth2 to access the YouTube Data API, see:
# https://developers.google.com/youtube/v3/guides/authentication
# For more information about the client_secrets.json file format, see:
# https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
CLIENT_SECRETS_FILE = "client_secrets.json"

# This OAuth 2.0 access scope allows for full read/write access to the
# authenticated user's account and requires requests to use an SSL connection.
YOUTUBE_READ_WRITE_SSL_SCOPE = "https://www.googleapis.com/auth/youtube.force-ssl"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"

# This variable defines a message to display if the CLIENT_SECRETS_FILE is
# missing.
MISSING_CLIENT_SECRETS_MESSAGE = """
WARNING: Please configure OAuth 2.0
To make this sample run you will need to populate the client_secrets.json file
found at:
%s
with information from the APIs Console
https://console.developers.google.com
For more information about the client_secrets.json file format, please visit:
https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
""" % os.path.abspath(os.path.join(os.path.dirname(__file__), CLIENT_SECRETS_FILE))

# Authorize the request and store authorization credentials.
def get_authenticated_service(args):
    flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE, scope=YOUTUBE_READ_WRITE_SSL_SCOPE,
                                   message=MISSING_CLIENT_SECRETS_MESSAGE)

    storage = Storage("%s-oauth2.json" % sys.argv[0])
    credentials = storage.get()

    if credentials is None or credentials.invalid:
        credentials = run_flow(flow, storage, args)

    # Trusted testers can download this discovery document from the developers page
    # and it should be in the same directory with the code.
    with open("youtube-v3-discoverydocument.json", "r") as f:
        doc = f.read()

    return build_from_document(doc, http=credentials.authorize(httplib2.Http()))

# Call the API's commentThreads.list method to list the existing comments.
def get_comments(youtube, video_id, channel_id):
    results = youtube.commentThreads().list(
        part="snippet",
        videoId=video_id,
        channelId=channel_id,
        # ONLY FILTER COMMENTS THAT ARE HELD FOR REVIEW
        # THOSE COMMENTS ONLY HAVE ONE TOP LEVEL COMMENT
        moderationStatus="heldForReview",
        textFormat="plainText").execute()

    for item in results["items"]:
        comment = item["snippet"]["topLevelComment"]
        author = comment["snippet"]["authorDisplayName"]
        text = comment["snippet"]["textDisplay"]
        print "Comment by %s: %s" % (author, text)

    return results["items"]

# Call the API's commentThreads.insert method to insert a comment.
def insert_comment(youtube, channel_id, video_id, text):
    insert_result = youtube.commentThreads().insert(
        part="snippet",
        body=dict(
            snippet=dict(
                channelId=channel_id,
                videoId=video_id,
                topLevelComment=dict(
                    snippet=dict(
                        textOriginal=text
                    )
                )
            )
        )
    ).execute()

    comment = insert_result["snippet"]["topLevelComment"]
    author = comment["snippet"]["authorDisplayName"]
    text = comment["snippet"]["textDisplay"]
    print "Inserted comment for %s: %s" % (author, text)

# Call the API's commentThreads.update method to update an existing comment.
def update_comment(youtube, comment):
    comment["snippet"]["topLevelComment"]["snippet"]["textOriginal"] = 'updated'
    update_result = youtube.commentThreads().update(
        part="snippet",
        body=comment
    ).execute()

    comment = update_result["snippet"]["topLevelComment"]
    author = comment["snippet"]["authorDisplayName"]
    text = comment["snippet"]["textDisplay"]
    print "Updated comment for %s: %s" % (author, text)

if __name__ == "__main__":
    # The "channelid" option specifies the YouTube channel ID that uniquely
    # identifies the channel for which the comment will be inserted.
    argparser.add_argument("--channelid",
        help="Required; ID for channel for which the comment will be inserted.")

    # The "videoid" option specifies the YouTube video ID that uniquely
    # identifies the video for which the comment will be inserted.
    argparser.add_argument("--videoid",
        help="Required; ID for video for which the comment will be inserted.")

    # The "text" option specifies the text that will be used as comment.
    argparser.add_argument("--text", help="Required; text that will be used as comment.")

    args = argparser.parse_args()

    if not args.channelid:
        exit("Please specify channelid using the --channelid= parameter.")
    if not args.videoid:
        exit("Please specify videoid using the --videoid= parameter.")
    if not args.text:
        exit("Please specify text using the --text= parameter.")

    youtube = get_authenticated_service(args)
    try:
        # All the available methods are used in sequence just for the sake of an example.
        # Insert channel comment by omitting videoId:
        # REMOVED AS IT IS NOT RELEVANT TO THE ERROR REPORT. insert_comment(youtube, args.channelid, None, args.text)
        # Insert video comment:
        # REMOVED AS IT IS NOT RELEVANT TO THE ERROR REPORT. insert_comment(youtube, args.channelid, args.videoid, args.text)
        video_comments = get_comments(youtube, args.videoid, None)
        if video_comments:
            update_comment(youtube, video_comments[0])
    except HttpError, e:
        print "An HTTP error %d occurred:\n%s" % (e.resp.status, e.content)
    else:
        print "Inserted, listed and updated top-level comments."
Of course, you will need your authorization files inside the same directory.
Can a Python script upload a photo to Photobucket and then retrieve the URL for it? If so, how?
I found a script at this link: http://www.democraticunderground.com/discuss/duboard.php?az=view_all&address=240x677
But I just found that confusing.
Many thanks,
Phil
Yes, you can. Photobucket has a well-documented API, and someone wrote a wrapper around it.
Download it and put it on your Python path, then download httplib2 (you can use easy_install or pip for this one).
Then, you have to request a key for the Photobucket API.
If you did everything right, you can write your script now. The Python wrapper is great, but it is not documented at all, which makes it very difficult to use. I spent hours understanding it (compare the question and answer times here). For example, the wrapper even has form/multipart support, but I had to read the code to find out how to use it: the filename has to be prefixed with a #.
This library is a great example of how you should NOT document your code!
I finally got it working; enjoy the script (it even has OAuth handling!):
import pbapi
import webbrowser
import cPickle
import os
import re
import sys

from xml.etree import ElementTree

__author__ = "leoluk"

###############################################
##              CONFIGURATION                ##
###############################################

# File in which the oAuth token will be stored
TOKEN_FILE = "token.txt"

IMAGE_PATH = r"D:\Eigene Dateien\Bilder\SC\foo.png"

IMAGE_RECORD = {
    "type": 'image',
    "uploadfile": '#' + IMAGE_PATH,
    "title": "My title",  # <---
    "description": "My description",  # <---
}

ALBUM_NAME = None  # default album if None

API_KEY = "149[..]"
API_SECRET = "528[...]"

###############################################
##                  SCRIPT                   ##
###############################################

api = pbapi.PbApi(API_KEY, API_SECRET)
api.pb_request.connection.cache = None

# Test if the service is online
api.reset().ping().post()
result = api.reset().ping().post().response_string

ET = ElementTree.fromstring(result)
if ET.find('status').text != 'OK':
    sys.stderr.write("error: Ping failed \n" + result)
    sys.exit(-1)

try:
    # If there is already a saved oAuth token, no need for a new one
    api.username, api.pb_request.oauth_token = cPickle.load(open(TOKEN_FILE))
except (ValueError, KeyError, IOError, TypeError):
    # If this fails, there's no valid oAuth token

    # Getting the request token
    api.reset().login().request().post().load_token_from_response()

    # Requesting user permission (you have to log in with your account)
    webbrowser.open_new_tab(api.login_url)
    raw_input("Press Enter when you have finished granting access permission. ")

    # Getting the oAuth token
    api.reset().login().access().post().load_token_from_response()

# This is needed for getting the right subdomain
infos = api.reset().album(api.username).url().get().response_string

ET = ElementTree.fromstring(infos)
if ET.find('status').text != 'OK':
    # Remove the invalid oAuth token
    os.remove(TOKEN_FILE)
    # This happens if the user deletes the oAuth permission online
    sys.stderr.write("error: Permission deleted. Please re-run.")
    sys.exit(-1)

# Fresh values for username and subdomain
api.username = ET.find('content/username').text
api.set_subdomain(ET.find('content/subdomain/api').text)

# Default album name
if not ALBUM_NAME:
    ALBUM_NAME = api.username

# Debug :-)
print "User: %s" % api.username

# Save the new, valid oAuth token
cPickle.dump((api.username, api.oauth_token), open(TOKEN_FILE, 'w'))

# Posting the image
result = (api.reset().album(ALBUM_NAME).
          upload(IMAGE_RECORD).post().response_string)

ET = ElementTree.fromstring(result)
if ET.find('status').text != 'OK':
    sys.stderr.write("error: File upload failed \n" + result)
    sys.exit(-1)

# Now, as an example of what you could do next, open the image in the browser
webbrowser.open_new_tab(ET.find('content/browseurl').text)
Use the Python API by Ron White that was written to do just this.