I want to use django-s3direct to upload multiple images in the admin panel.
1) Every time I try to upload an image/file, I get the error "Oops, file upload failed, please try again".
When I refresh the page, the file name appears in the input, but my "Save" button is disabled. :/
Edit:
I removed these from settings:
AWS_SECRET_ACCESS_KEY = ''
AWS_ACCESS_KEY_ID = ''
AWS_STORAGE_BUCKET_NAME = ''
and now I don't get the error, but the file never uploads :/ The progress bar just stays black the whole time.
2) How do I upload multiple images, without inlines? Please help me and give me some advice. I'm a newbie.
I'm on Django 1.5.5. For now I'm using an inline and I don't know what to do next.
You will need to edit some of the permissions properties of the target S3 bucket so that the final request has sufficient privileges to write to the bucket. Sign in to the AWS console and select the S3 section. Select the appropriate bucket and click the ‘Properties’ tab. Select the Permissions section and three options are provided (Add more permissions, Edit bucket policy and Edit CORS configuration).
CORS (Cross-Origin Resource Sharing) will allow your application to access content in the S3 bucket. Each rule should specify a set of domains from which access to the bucket is granted and also the methods and headers permitted from those domains.
For this to work in your application, click ‘Add CORS Configuration’ and enter the following XML:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
Click ‘Save’ in the CORS window and then ‘Save’ again in the bucket’s ‘Properties’ tab.
This tells S3 to allow any domain access to the bucket and that requests can contain any headers. For security, you can change the ‘AllowedOrigin’ to only accept requests from your domain.
If you wish to use S3 credentials specifically for this application, then more keys can be generated in the AWS account pages. This provides further security, since you can designate a very specific set of requests that this set of keys are able to perform. If this is preferable to you, then you will need to also set up an IAM user in the Edit bucket policy option in your S3 bucket. There are various guides on AWS’s web pages detailing how this can be accomplished.
Setting up the client-side code
This setup does not require any additional, non-standard Python libraries, but some scripts are necessary to complete the implementation on the client-side.
This article covers the use of the s3upload.js script. Obtain this script from the project's repo (using Git or otherwise) and store it somewhere appropriate in your application's static directory. This script currently depends on both the jQuery and Lo-Dash libraries. Inclusion of these in your application will be covered later on in this guide.
The HTML and JavaScript can now be created to handle the file selection, obtain the request and signature from your Python application, and then finally make the upload request.
Firstly, create a file called account.html in your application’s templates directory and populate the head and other necessary HTML tags appropriately for your application. In the body of this HTML file, include a file input and an element that will contain status updates on the upload progress.
<input type="file" id="file" onchange="s3_upload();"/>
<p id="status">Please select a file</p>
<div id="preview"><img src="/static/default.png" /></div>
<form method="POST" action="/submit_form/">
    <input type="hidden" id="image_url" name="image_url" value="/static/default.png" />
    <input type="text" name="example" placeholder="" /><br />
    <input type="text" name="example2" placeholder="" /><br /><br />
    <input type="submit" value="Submit" />
</form>
The preview element initially holds a default image, and the hidden input in the form holds its URL. Both of these are updated by the JavaScript, discussed below, when the user selects a new image.
Thus when the user finally clicks the submit button, the URL of the image is submitted, along with the other details of the user, to your desired endpoint for server-side handling. The JavaScript method, s3_upload(), is called when a file is selected by the user. The creation and population of this method is covered below.
Next, include the three dependency scripts in your HTML file, account.html. You may need to adjust the src attribute for the file s3upload.js if you put this file in a directory other than /static:
<script type="text/javascript" src="http://code.jquery.com/jquery-1.9.1.js"></script>
<script type="text/javascript" src="https://raw.github.com/bestiejs/lodash/v1.1.1/dist/lodash.min.js"></script>
<script type="text/javascript" src="/static/s3upload.js"></script>
The ordering of the scripts is important, as the dependencies need to be satisfied in this sequence. If you wish to host your own versions of jQuery and Lo-Dash, adjust the src attributes accordingly.
Finally, in a script block in the same file, declare a JavaScript function, s3_upload(), to process the file upload. This block will need to sit below the inclusion of the three dependencies:
function s3_upload(){
    var s3upload = new S3Upload({
        file_dom_selector: 'file',
        s3_sign_put_url: '/sign_s3/', // must match the Flask route defined below

        onProgress: function(percent, message) {
            $('#status').html('Upload progress: ' + percent + '% ' + message);
        },
        onFinishS3Put: function(url) {
            $('#status').html('Upload completed. Uploaded to: ' + url);
            $('#image_url').val(url);
            $('#preview').html('<img src="' + url + '" style="width:300px;" />');
        },
        onError: function(status) {
            $('#status').html('Upload error: ' + status);
        }
    });
}
This function creates a new instance of S3Upload, to which is passed the file input element, the URL from which to retrieve the signed request and three functions.
Initially, the function makes a request to the URL denoted by the s3_sign_put_url argument, passing the file name and mime type as GET parameters. The server-side code (covered in the next section) interprets the request and responds with a preview of the URL of the file to be uploaded to S3 and the signed request, which this function then uses to asynchronously upload the file to your bucket.
The function will post upload updates to the onProgress() function and, if the upload is successful, onFinishS3Put() is called and the URL returned by the Python application view is received as an argument. If, for any reason, the upload should fail, onError() will be called and the status parameter will describe the error.
If you find that the page isn't working as you intend after implementing the system, then consider using console.log() to record any errors that occur inside the onError() callback and use your browser's error console to help diagnose the problem.
If successful, the preview div will now be updated with the user’s chosen image, and the hidden input field will contain the URL for the image. Now, once the user has completed the rest of the form and clicked submit, all pieces of information can be posted to the same endpoint.
It is good practice to inform the user of any prolonged activity in any form of application (web- or device-based) and to display updates on changes. Thus the status methods could be used, for example, to show a loading GIF to indicate that an upload is in progress, which can then be hidden when the upload has finished. Without this sort of information, users may suspect that the page has crashed, and could try to refresh the page or otherwise disrupt the upload process.
Setting up the server-side Python code
The server-side code needs to generate a temporary signature with which the upload request can be signed. This temporary signature uses the account details (the AWS access key and secret access key) as a basis for the signature, but users will not have direct access to this information. After the signature has expired, upload requests with the same signature will not be successful.
As mentioned previously, this article covers the production of an application for the Flask framework, although the steps for other Python frameworks will be similar. Readers using Python 3 should consider the relevant information on Flask's website before continuing.
Start by creating your main application file, application.py, and set up your skeleton application appropriately:
from flask import Flask, render_template, request, redirect, url_for
from hashlib import sha1
import time, os, json, base64, hmac, urllib

app = Flask(__name__)

if __name__ == '__main__':
    port = int(os.environ.get('PORT', 5000))
    app.run(host='0.0.0.0', port=port)
The currently-unused import statements will be necessary later on.
Readers using Python 3 should import urllib.parse in place of urllib.
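If you want the same file to run under both versions, one option (my own suggestion, not from the article) is a small compatibility shim, after which you call quote_plus(...) directly instead of urllib.quote_plus(...):
try:
    # Python 3
    from urllib.parse import quote_plus
except ImportError:
    # Python 2
    from urllib import quote_plus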
Next, in the same file, you will need to create the views responsible for returning the correct information to the user's browser when requests are made to various URLs. First, define a view for requests to /account that returns the page account.html, which contains the form for the user to complete:
@app.route("/account/")
def account():
    return render_template('account.html')
Please note that the views for the application will need to be placed between the app = Flask(__name__) and if __name__ == '__main__': lines in application.py.
Now create the view, in the same Python file, that is responsible for generating and returning the signature with which the client-side JavaScript can upload the image. This is the first request made by the client before attempting an upload to S3. This view responds to requests to /sign_s3/:
@app.route('/sign_s3/')
def sign_s3():
    AWS_ACCESS_KEY = os.environ.get('AWS_ACCESS_KEY_ID')
    AWS_SECRET_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
    S3_BUCKET = os.environ.get('S3_BUCKET')

    object_name = request.args.get('s3_object_name')
    mime_type = request.args.get('s3_object_type')

    expires = int(time.time() + 10)
    amz_headers = "x-amz-acl:public-read"

    put_request = "PUT\n\n%s\n%d\n%s\n/%s/%s" % (mime_type, expires, amz_headers, S3_BUCKET, object_name)

    # Python 2 shown; under Python 3, encode the key and message to bytes
    # and use base64.encodebytes in place of base64.encodestring.
    signature = base64.encodestring(hmac.new(AWS_SECRET_KEY, put_request, sha1).digest())
    signature = urllib.quote_plus(signature.strip())

    url = 'https://%s.s3.amazonaws.com/%s' % (S3_BUCKET, object_name)

    return json.dumps({
        'signed_request': '%s?AWSAccessKeyId=%s&Expires=%d&Signature=%s' % (url, AWS_ACCESS_KEY, expires, signature),
        'url': url
    })
Readers using Python 3 should use urllib.parse.quote_plus() to quote the signature.
This code performs the following steps:
• The request is received to /sign_s3/ and the AWS keys and S3 bucket name are loaded from the environment.
• The name and mime type of the object to be uploaded are extracted from the GET parameters of the request (this stage may differ in other frameworks).
• The expiry time of the signature is set and forms the basis of the temporary nature of the signature. As shown, this is best used as a function relative to the current UNIX time. In this example, the signature will expire 10 seconds after Python has executed that line of code.
• The headers line tells S3 what access permissions to grant. In this case, the object will be publicly available for download.
• Now the PUT request can be constructed from the object information, headers and expiry time.
• The signature is generated as an HMAC-SHA1 digest of the constructed PUT request string, keyed with the AWS secret key.
• In addition, surrounding whitespace is stripped from the signature and special characters are escaped (using quote_plus) for safer transmission through HTTP.
• The prospective URL of the object to be uploaded is produced as a combination of the S3 bucket name and the object name.
• Finally, the signed request can be returned, along with the prospective URL, to the browser in JSON format.
You may wish to assign another, customised name to the object instead of using the one that the file is already named with, which is useful for preventing accidental overwrites in the S3 bucket. This name could be related to the ID of the user’s account, for example. If not, you should provide some method for properly quoting the name in case there are spaces or other awkward characters present. In addition, this is the stage at which you could provide checks on the uploaded file in order to restrict access to certain file types. For example, a simple check could be implemented to allow only .png files to proceed beyond this point.
It is sometimes possible for S3 to respond with 403 (forbidden) errors for requests which are signed by temporary signatures containing special characters. Therefore, it is important to appropriately quote the signature as demonstrated above.
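As an aside, if you are able to add a dependency, recent versions of boto3 can produce an equivalent presigned PUT URL without hand-rolling the signature. A minimal sketch, assuming the same S3_BUCKET environment variable and a hypothetical object name and mime type:
import os
import boto3

# boto3 reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment
s3 = boto3.client('s3')

signed_url = s3.generate_presigned_url(
    ClientMethod='put_object',
    Params={
        'Bucket': os.environ['S3_BUCKET'],
        'Key': 'object_name.png',    # hypothetical object name
        'ContentType': 'image/png',  # hypothetical mime type
        'ACL': 'public-read',
    },
    ExpiresIn=10,  # seconds, mirroring the 10-second expiry above
)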
Finally, in application.py, create the view responsible for receiving the account information after the user has uploaded an image, filled in the form, and clicked submit. Since this will be a POST request, this will also need to be defined as an ‘allowed access method’. This method will respond to requests to the URL /submit_form/:
@app.route("/submit_form/", methods=["POST"])
def submit_form():
    example = request.form["example"]
    example2 = request.form["example2"]
    image_url = request.form["image_url"]

    update_account(example, example2, image_url)

    return redirect(url_for('profile'))
In this example, an update_account() function has been called, but creation of this method is not covered in this article. In your application, you should provide some functionality, at this stage, to allow the app to store these account details in some form of database and correctly associate the information with the rest of the user’s account details.
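For illustration only, a hypothetical update_account() that just keeps the values in memory; a real application would write them to its database instead:
accounts = {}  # stand-in for a real database table

def update_account(example, example2, image_url):
    # Associate the submitted form values with the user's account record
    accounts['profile'] = {
        'example': example,
        'example2': example2,
        'image_url': image_url,
    }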
In addition, the URL for the profile page has not been defined in this article (or companion code). Ideally, for example, after updating the account, the user would be redirected back to their own profile so that they can see the updated information.
For more information, see http://www.tivix.com/blog/easy-user-uploads-with-direct-s3-uploading/
Related
https://github.com/haricot/django-cookie-consent
https://django-cookie-consent.readthedocs.io/en/latest/index.html
I found a fork of the django-cookie-consent GitHub project for managing cookies on your website, and I got it to work most of the time, but it is not 100% perfect.
Here is how I got it to run (either install via pip from that fork link, or do the following):
Do not use pip3 install django-cookie-consent from the default PyPI. Download the zip file from GitHub and copy the cookie_consent folder to your site-packages folder. For example, for me it was /home/user/.local/share/virtualenvs/project_name/lib/python3.7/site-packages/cookie_consent. Then pip3 install django-appconf. Then follow the documentation instructions.
Links:
http://127.0.0.1:8000/cookies/
http://127.0.0.1:8000/cookies/accept/
http://127.0.0.1:8000/cookies/accept/variable_name/
http://127.0.0.1:8000/cookies/decline/
http://127.0.0.1:8000/cookies/decline/variable_name/
I found some code for the consent banner at https://github.com/haricot/django-cookie-consent/tree/master/tests/core/templates but was having problems with it. I copied the test_page.html template code into my own project's base.html, but this script tag did not work for me: <script type="{% cc_receipts "social" %}" data-varname="social">. I got django.template.exceptions.TemplateSyntaxError: 'cc_receipts' did not receive value(s) for the argument(s): 'request'. Copying the rest of the code from that file, minus that one script tag, did make the banner show up in my project's base.html.
Accepting a cookie by clicking accept on the banner (using the code found in the tests directory) just redirects me to a blank /cookies/accept/social/ page. This acceptance does not get logged either.
Accepting a cookie from /cookies/ does get logged, but it gave me this error:
TypeError: cannot use a string pattern on a bytes-like object
[20/Jan/2020 16:00:43] "POST /cookies/accept/social/ HTTP/1.1" 500 121416
Method Not Allowed (GET): /cookies/accept/social/
Method Not Allowed: /cookies/accept/social/
[20/Jan/2020 16:00:44] "GET /cookies/accept/social/ HTTP/1.1" 405 0
Is this error a possible python3 incompatibility issue?
How would I configure, for example, a cookie group with the variable name social containing a cookie named 1P_JAR (an example of a reCAPTCHA v3 cookie on my site)?
I noticed that neither the username nor the user's IP address is being logged. It would be nice to include these once the user accepts or declines.
I am not sure if this fork automatically blocks cookies until the user accepts. Can someone verify this? If this feature is or is not included, how do you implement it?
When accepting cookies or declining cookies, an actual cookie called cookie_consent gets created in your browser and it tells you which cookies are accepted or declined.
Can someone please help me get this to work? It seems very close to being GDPR compliant.
Check your runserver log. You have to set the COOKIE_CONSENT_NAME setting, because there is no default value for it.
Then you have to go to the Django admin panel and create the cookies with their respective names and domains, which you can find in the browser inspector.
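For example (the value is your choice):
# settings.py
COOKIE_CONSENT_NAME = "cookie_consent"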
How it works: the package creates a cookie labelled cookie_consent, which stores all the data necessary for the package to work. To make it work properly, several tweaks are required:
1) In settings.py you must set COOKIE_CONSENT_NAME = "cookie_consent" (another name probably works too).
2) Add 'django.template.context_processors.request' to TEMPLATE_CONTEXT_PROCESSORS, or add it to the context_processors list in TEMPLATES:
TEMPLATES = [
    {
        ...
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.request',
                ...
3) [Not mandatory but helpful; see the instructions in the Django documentation] I have also set COOKIE_CONSENT_CACHE_BACKEND = "default" in settings, and set the whole website's cache to django.core.cache.backends.db.DatabaseCache.
Then, in the admin, first create a cookie group in the Cookie groups window, WITHOUT checking the 'Is required' checkbox (otherwise you cannot manage the cookie group, because that checkbox means the cookies are always on, so the user cannot choose to decline them). You must also add at least one cookie per cookie group
(otherwise get_version will not work for the cookie group, and it is mandatory for correct use of the library).
Consent is gathered per group, not per cookie (as GDPR suggests), meaning that accept_cookies(request, response, varname="your_cookie_group") will only work if you use cookie groups.
Here is an example (not perfect) of the functions that tell the view to accept the cookie. It obviously requires two refreshes to work: the first to set the cookie, the second to see it:
--- views.py

# (assumes: from django.shortcuts import render, plus the accept_cookies
# and get_cookie_value_from_request helpers from cookie_consent.util)

def homepage(request):
    # Render the page
    response = render(request=request,
                      ...)
    # Accept a cookie group (probably only for cookie groups)
    accept_cookies(request, response, varname="cookie_group")

    # Check the cookie group
    cc = get_cookie_value_from_request(request, varname='cookie_group')
    print("cookie value from request: ", cc)
    if cc == True:
        print("Consent given", cc)
    elif cc == False:
        print("Consent not given", cc)
    else:
        # Problem with the cookie group
        print("probable error in getting cookie value from request: ", cc)
    return response
--- urls.py

path('', views.homepage, name="home")
I have a Python script that is running periodically on an AWS EC2 Ubuntu machine.
This script reads data from some files and sometimes changes data in them.
I want to download these files from OneDrive, do my own thing with them, and upload them back to OneDrive.
I want this to be done automatically, without the need for a user to approve any login or credentials. I'm ok with doing it once (i.e. approving the login on the first run) but the rest has to run automatically, without asking ever again for approvals (unless the permissions change, of course).
What is the best way to do this?
I've been reading the documentation on the Microsoft Graph API, but I'm struggling with the authentication part. I've created an application in Azure AD, gave it the sample permissions (to test), and created a secret credential.
I managed to do it. I'm not sure if it's the best way but it is working now. It's running automatically every hour and I don't need to touch it.
I followed the information on https://learn.microsoft.com/en-gb/azure/active-directory/develop/v2-oauth2-auth-code-flow
This is what I did.
Azure Portal
Create an application. Azure Active Directory -> App Registrations -> Applications from personal account
In Supported account types, choose the one that has personal Microsoft accounts.
In Redirect URI, choose Public client/native. We'll add the specific URI later.
In the application details, in the section Overview, take note of the Application (client) ID. We'll need this later.
In the section Authentication, click Add a Platform and choose Desktop + devices. You can use your own, I chose one of the suggested: https://login.microsoftonline.com/common/oauth2/nativeclient
In the section API permissions, you have to add all the permissions that your app will use. I added User.Read, Files.ReadWrite and offline_access. The offline_access is to be able to get the refresh token, which will be crucial to keep the app running without asking the user to login.
I did not create any Certificate or Secret.
Web
It looks like, to get a token for the first time, we have to use a browser or emulate something like that.
There must be a programmatic way to do this, but I had no idea how. I also thought about using Selenium, but since it's only done once and my app will request tokens every hour (keeping the tokens fresh), I dropped that idea.
If we add new permissions, the tokens that we have will become invalid and we have to do this manual part again.
Open a browser and go to the URL below. Use the Scopes and the Redirect URI that you set up in Azure Portal.
https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=your_app_client_id&response_type=code&redirect_uri=https%3A%2F%2Flogin.microsoftonline.com%2Fcommon%2Foauth2%2Fnativeclient&response_mode=query&scope=User.Read%20offline_access%20Files.ReadWrite
That URL will redirect you to the Redirect URI that you set up and with a code=something in the URL. Copy that something.
Do a POST request with type FORM URL Encoded. I used https://reqbin.com/ for this.
Endpoint: https://login.microsoftonline.com/common/oauth2/v2.0/token
Form URL: grant_type=authorization_code&client_id=your_app_client_id&code=use_the_code_returned_on_previous_step
This will return an Access Token and a Refresh Token. Store the Refresh Token somewhere. I'm saving it in a file.
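If you would rather do this one-time exchange from Python instead of reqbin, a sketch using the same endpoint and form fields as above:
import requests

params = {
    'grant_type': 'authorization_code',
    'client_id': 'your_app_client_id',
    'code': 'use_the_code_returned_on_previous_step',
    'redirect_uri': 'https://login.microsoftonline.com/common/oauth2/nativeclient',
}
response = requests.post('https://login.microsoftonline.com/common/oauth2/v2.0/token', data=params)
tokens = response.json()
# tokens['access_token'] and tokens['refresh_token'] are what you need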
Python
import requests

# Build the POST parameters
params = {
    'grant_type': 'refresh_token',
    'client_id': your_app_client_id,
    'refresh_token': refresh_token_that_you_got_in_the_previous_step
}

response = requests.post('https://login.microsoftonline.com/common/oauth2/v2.0/token', data=params)

access_token = response.json()['access_token']
new_refresh_token = response.json()['refresh_token']
# ^ Save the new refresh token somewhere.
# I just overwrite the file with the new one.
# This new one will be used next time.

header = {'Authorization': 'Bearer ' + access_token}

# Download the file
response = requests.get('https://graph.microsoft.com/v1.0/me/drive/root:' +
                        PATH_TO_FILE + '/' + FILE_NAME + ':/content', headers=header)

# Save the file to disk
with open(FILE_NAME, 'wb') as file:
    file.write(response.content)
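The snippet above only shows the download. For completeness, a sketch of the matching upload call: for files under roughly 4 MB the Graph API accepts a simple PUT to the item's /content endpoint (larger files need an upload session), so something like this should work with the same header:
# Upload the (possibly modified) file back to OneDrive
with open(FILE_NAME, 'rb') as file:
    response = requests.put('https://graph.microsoft.com/v1.0/me/drive/root:' +
                            PATH_TO_FILE + '/' + FILE_NAME + ':/content',
                            headers=header, data=file)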
So basically, I have the Refresh Token always updated.
I call the Token endpoint using that Refresh Token, and the API gives me an Access Token to use during the current session and a new Refresh Token.
I use this new Refresh Token the next time I run the program, and so on.
I've just published a repo which does this. Contributions and pull requests welcome:
https://github.com/stevemurch/onedrive-download
I'm working with a Google API lately and using a simple Flask method to retrieve an id_token.
Here is my code, with explanations in the comments:
@app.route('/afterlogin/<id>')
def afterlogin(id):  # get the id
    print(id)  # print it
    return render_template('creds_view.html', data=id)  # and render the template with 'id' in it (for test purposes)
So what happens is that after the user logs in, the API redirects to http://localhost:8000/afterlogin/#id_token=some_id_token with the id_token in the URL.
But for some reason it shows a 404 error.
I think it is because of the '#' in the URL; I want the id_token. I know that '#' in HTML is for anchor linking or routing in 'href'.
So I tried:
@app.route('/afterlogin/<path:id>')
but the error still persists.
Any guesses?
Everything after # is processed locally by the browser; it's not sent to the server, so you can't use it in routing. Leave out the #:
http://localhost:8000/afterlogin/some_id_token
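If you can change the redirect to put the token in the query string instead, a minimal Flask sketch (the route name and template come from the question; the query-parameter form is my assumption):
from flask import Flask, render_template, request

app = Flask(__name__)

# e.g. http://localhost:8000/afterlogin?id_token=some_id_token
@app.route('/afterlogin')
def afterlogin():
    id_token = request.args.get('id_token')
    print(id_token)
    return render_template('creds_view.html', data=id_token)

If the identity provider insists on returning the token in a fragment (everything after #), only client-side script on the landing page can read it and forward it to the server.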
I'm having some trouble understanding and implementing the Google Directory API's users watch function and push notification system (https://developers.google.com/admin-sdk/reports/v1/guides/push#creating-notification-channels) in my Python GAE app. What I'm trying to achieve is that any user (admin) who uses my app would be able to watch user changes within his own domain.
I've verified the domain I want to use for notifications and implemented the watch request as follows:
directoryauthdecorator = OAuth2Decorator(
    approval_prompt='force',
    client_id='my_client_id',
    client_secret='my_client_secret',
    callback_path='/oauth2callback',
    scope=['https://www.googleapis.com/auth/admin.directory.user'])

class PushNotifications(webapp.RequestHandler):
    @directoryauthdecorator.oauth_required
    def get(self):
        auth_http = directoryauthdecorator.http()
        service = build("admin", "directory_v1", http=auth_http)
        uu_id = str(uuid.uuid4())

        param = {}
        param['customer'] = 'my_customer'
        param['event'] = 'add'
        param['body'] = {'type': 'web_hook', 'id': uu_id, 'address': 'https://my-domain.com/pushNotifications'}

        watchUsers = service.users().watch(**param).execute()

application = webapp.WSGIApplication(
    [
        ('/pushNotifications', PushNotifications),
        (directoryauthdecorator.callback_path, directoryauthdecorator.callback_handler())],
    debug=True)
Now, the receiving part is what I don't understand. When I add a user on my domain and check the app's request logs I see some activity, but there's no usable data. How should I approach this part?
Any help would be appreciated. Thanks.
The problem
It seems like there's been some confusion in implementing the handler. Your handler actually sets up the notifications channel by sending a POST request to the Reports API endpoint. As the docs say:
To set up a notification channel for messages about changes to a particular resource, send a POST request to the watch method for the resource.
source
You should only need to send this request one time to set up the channel, and the "address" parameter should be the URL on your app that will receive the notifications.
Also, it's not clear what is happening with the following code:
param={}
param['customer']='my_customer'
param['event']='add'
Are you just breaking the code in order to post it here? Or is it actually written that way in the file? You should actually preserve, as much as possible, the code that your app is running so that we can reason about it.
The solution
It seems, from the 'Receiving Notifications' section of the docs you linked, that the code at the 'address' you specified should inspect the body and headers of the notification POST request, and then do something with that data (like store it in BigQuery, or send an email to the admin, etc.).
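For illustration, a minimal receiver in the same webapp style as the question's code (the handler body is a hypothetical sketch; the X-Goog-* headers are where Google's push notifications carry the channel metadata):
import logging
from google.appengine.ext import webapp

class NotificationReceiver(webapp.RequestHandler):
    def post(self):
        # Channel metadata arrives in the request headers
        channel_id = self.request.headers.get('X-Goog-Channel-ID')
        resource_state = self.request.headers.get('X-Goog-Resource-State')
        # The notification payload (if any) is in the request body
        logging.info('channel=%s state=%s body=%s',
                     channel_id, resource_state, self.request.body)
        # Respond 200 so Google does not retry the notification
        self.response.set_status(200)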
Managed to figure it out. In the App Engine logs I noticed that each time I make a change that is being 'watched' on my domain, I get a POST request from Google's API, but with a 302 code. I discovered that this was because I had login: required configured in my app.yaml for the script handling the requests, so the POST request was being redirected to the login page instead of the processing script.
I'm trying to upload a video to my YouTube account using the API, but I can't find a way to do it easily. All the methods I've seen require me to authenticate with OAuth in a browser.
I simply want to upload a video from a script to one account using a username and password, or a dev key, or similar, without going through the crazy, overly complex authentication methods. The script will run in a private environment, so security is not a concern.
Try:
• youtube-upload
• django-youtube (if you use Django)
• Uploading videos
OAuth2 authorization lets you get a refresh token once the user authorizes the upload.
So you can get that token from the OAuth2 Playground manually for the "https://www.googleapis.com/auth/youtube.upload" scope, save it, and have a script that gets an access token periodically. Then you can plug that access token into the upload.
To sum up, browser interaction is required once; you can do that through the Playground and save the token manually.
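A sketch of that periodic access-token request (the client ID and secret come from your Google API project; the refresh token is the one saved from the Playground):
import requests

params = {
    'grant_type': 'refresh_token',
    'client_id': 'your_client_id',
    'client_secret': 'your_client_secret',
    'refresh_token': 'refresh_token_saved_from_the_playground',
}
response = requests.post('https://oauth2.googleapis.com/token', data=params)
access_token = response.json()['access_token']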
Try YouTube Upload Direct Lite. It is really easy to set up. https://code.google.com/p/youtube-direct-lite/
"Adding YouTube Direct Lite is as simple as adding an iframe HTML tag to your existing web pages. There is no server-side code that needs to be configured or deployed, though we do recommend that you check out your own copy of the YouTube Direct Lite HTML/CSS/JavaScript and host it on your existing web server. "
youtube-upload is a really nice tool which you can make heavy use of. This video shows you how to upload videos to your YouTube channel using youtube-upload.
Using Zend there is a method, but it has been deprecated by Google: ClientLogin.
Even though you tagged your question with python, I think this PHP example can help give you an idea:
<?php
/* First things first: start the session and load the Zend library.
   Remember that the ZEND path must be in your include_path directory */
session_start();
require_once 'Zend/Loader.php';
Zend_Loader::loadClass('Zend_Gdata_YouTube');
Zend_Loader::loadClass('Zend_Gdata_ClientLogin');

$authenticationURL = 'https://accounts.google.com/ClientLogin';
$httpClient =
    Zend_Gdata_ClientLogin::getHttpClient(
        $username = 'myuser@gmail.com',
        $password = 'mypassword',
        $service = 'youtube',
        $client = null,
        $source = 'My super duper application',
        $loginToken = null,
        $loginCaptcha = null,
        $authenticationURL);

// Now create a Zend YouTube object
$yt = new Zend_Gdata_YouTube($httpClient, $applicationId, $clientId, $developerKey);

// Create a new video object
$video = new Zend_Gdata_YouTube_VideoEntry();
$video->setVideoTitle('Test video');
$video->setVideoDescription('This is a test video');
$video->setVideoCategory('News'); // The category must be a valid YouTube category

// This will get a one-time upload URL and a one-time token
$tokenHandlerUrl = 'http://gdata.youtube.com/action/GetUploadToken';
$tokenArray = $yt->getFormUploadToken($video, $tokenHandlerUrl);
$tokenValue = $tokenArray['token']; // Very important token; it is sent in a hidden input in your form
$postUrl = $tokenArray['url'];      // This is a very important URL

// Place to redirect the user after upload
$nextUrl = 'http://your-site.com/after-upload-page'; // This address must be registered with your Google dev account

// Build the form using $postUrl and $tokenValue
echo '<form action="'. $postUrl .'?nexturl='. $nextUrl .
    '" method="post" enctype="multipart/form-data">'.
    '<input name="file" type="file"/>'.
    '<input name="token" type="hidden" value="'. $tokenValue .'"/>'.
    '<input value="Upload Video File" type="submit" />'.
    '</form>';
?>
I really hope this is helpful.
Have a great day!