I want to upload a document/file to Google Docs using Google App Engine (Python).
Any code or link would be appreciated.
See the documentation, but you might try something like:
ms = gdata.MediaSource(file_path='/path/to/your/test.doc',
                       content_type=gdata.docs.service.SUPPORTED_FILETYPES['DOC'])
entry = gd_client.Upload(ms, 'MyDocTitle')
print 'Document now accessible online at:', entry.GetAlternateLink().href
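The snippet above assumes an already authenticated gd_client. A minimal sketch of how one might be created with the legacy gdata ClientLogin flow (the credentials and source name are placeholders, not part of the original answer):

import gdata.docs.service

# Minimal sketch, assuming the legacy ClientLogin flow; replace the credentials.
gd_client = gdata.docs.service.DocsService()
gd_client.ClientLogin('user@example.com', 'password', source='my-upload-sample')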
The solution is to upload the file's data directly. You need to read the data in Python as shown below.
A function to read the file size:

def getSize(self, fileobject):
    fileobject.seek(0, 2)  # move the cursor to the end of the file
    size = fileobject.tell()
    return size
f = self.request.POST.get('fname').file
media = gdata.data.MediaSource(
    file_handle=f.read(),
    content_type=gdata.docs.service.SUPPORTED_FILETYPES[ext],
    content_length=self.getSize(self.request.POST.get('fname').file))
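For context, a minimal sketch of how these pieces might sit inside a webapp2 upload handler on App Engine; the 'fname' field name comes from the snippet above, while the handler class and the ext lookup are assumptions:

import os
import gdata.data
import gdata.docs.service
import webapp2

class UploadHandler(webapp2.RequestHandler):
    def getSize(self, fileobject):
        fileobject.seek(0, 2)  # move the cursor to the end of the file
        return fileobject.tell()

    def post(self):
        upload = self.request.POST.get('fname')
        f = upload.file
        # Derive the SUPPORTED_FILETYPES key from the uploaded filename, e.g. 'DOC'
        ext = os.path.splitext(upload.filename)[1].lstrip('.').upper()
        media = gdata.data.MediaSource(
            file_handle=f.read(),
            content_type=gdata.docs.service.SUPPORTED_FILETYPES[ext],
            content_length=self.getSize(f))
        # ... pass `media` to the (patched) gdata docs client upload call ...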
You also need to modify Google's gdata Python library to achieve this:
In client.py, inside def upload_file, replace:
while not entry:
    entry = self.upload_chunk(start_byte, self.file_handle.read(self.chunk_size))
    start_byte += self.chunk_size
With:
while not entry:
    entry = self.upload_chunk(start_byte, self.file_handle)
    start_byte += self.chunk_size
Then you can upload the file directly to Google Docs.
Related
I'm working with the Python hug API and would like to create a GET endpoint for the frontend. The frontend should be able to download a generated Word document file, e.g. via a download button. However, after going through the documentation, I still cannot figure out a way to do it.
Here is my working script so far:
import os
import hug
from docx import Document

@hug.get("/download_submission_document")
def download_submission_document():
    file_name = 'example.docx'
    document = Document()
    document.add_heading('Test header', level=2)
    document.add_paragraph('Test paragraph')
    document.save(file_name)
    # TODO: send the created file to the frontend
I'm not sure if we can send the object right away or if we have to save it somewhere first before sending it to the frontend. (Requirements: hug, python-docx.)
I'm trying to use something like
@hug.get("/download_submission_document", output=hug.output_format.file)
but not sure how to return a file.
Alright, I found a solution which is easier than I thought. Just do the following:
@hug.get("/download_submission_document", output=hug.output_format.file)
def download_submission_document():
    file_name = 'example.docx'
    document = Document()
    document.add_heading('Test header', level=2)
    document.add_paragraph('Test paragraph')
    document.save(file_name)
    return file_name
Returning file_name already downloads the docx.
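As a quick sanity check, a small client-side sketch (assuming the hug app is served locally on port 8000; the URL is a placeholder) can fetch and save the generated file:

import requests

# Hypothetical local URL; adjust host/port to wherever the hug app runs.
resp = requests.get('http://localhost:8000/download_submission_document')
with open('downloaded.docx', 'wb') as out:
    out.write(resp.content)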
I have a project in hand to back up a website for some reasons. I use Requests in Python to crawl the contents and image URLs. The problem is: how can I save an image to a cloud service (Google Drive, Dropbox, etc.) using only the URL of that image?
I know there is a way to first save the image locally and then upload the local file to the cloud. But I'm wondering if there are APIs that support uploading images by URL rather than from a local file.
It seems like Dropbox has a feature called /save_url that
lets app developers upload files to Dropbox by just providing a URL, without having to download the file first.
https://www.dropbox.com/developers-v1/core/docs#save-url
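That link is for the old v1 Core API; the current v2 Python SDK exposes an equivalent files_save_url call. A minimal sketch (the access token and paths are placeholders):

import dropbox

# Placeholder token; create one in the Dropbox App Console.
dbx = dropbox.Dropbox('YOUR_ACCESS_TOKEN')
# Ask Dropbox to fetch the remote URL server-side and store it at /backup/image.jpg.
job = dbx.files_save_url('/backup/image.jpg', 'https://example.com/image.jpg')
print(job)  # an async job handle, which can be polled with files_save_url_check_job_status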
If you don't mind paying for the storage, you can save it to your own cloud storage. I occasionally have to do a similar action, and handle it as such:
from google.appengine.api import urlfetch, images

def on_upload_image(self):
    # Handler method: fetch the remote image and store it in the app's GCS bucket.
    url = self.request.get('url')
    result = urlfetch.fetch(url)
    binary = result.content
    blob_key = functions.get_blob_key_by_data(binary)
    self.url = images.get_serving_url(blob_key, secure_url=True)
    self.json()
import hashlib
import cloudstorage as gcs
from google.appengine.api import app_identity
from google.appengine.ext import blobstore

def get_blob_key_by_data(data):
    bucket = app_identity.get_default_gcs_bucket_name()
    filename = hashlib.sha256(data).hexdigest()
    mime_type = get_mime_type(data)  # helper that sniffs the MIME type from the bytes
    if not mime_type:
        return None
    gcs_filename = '/%s/image_%s' % (bucket, filename)
    with gcs.open(gcs_filename, 'w', content_type=mime_type) as f:
        f.write(data)
    blob_key = blobstore.create_gs_key("/gs" + gcs_filename)
    return blob_key
In my Flask application, I am using a function to upload files to Amazon S3, using Boto.
It's working fine in most cases, but sometimes it uploads files as zero-byte files with no extension.
Why does it fail sometimes?
I am validating the user's image file in the form.
FileField('Your photo',validators=[FileAllowed(['jpg', 'png'], 'Images only!')])
My image upload function.
def upload_image_to_s3(image_from_form):
    # Upload pic to Amazon S3
    source_file_name_photo = secure_filename(image_from_form.filename)
    source_extension = os.path.splitext(source_file_name_photo)[1]
    destination_file_name_photo = uuid4().hex + source_extension
    s3_file_name = destination_file_name_photo
    # Connect to S3 and upload file.
    conn = boto.connect_s3('ASJHjgjkhSDJJHKJKLSDH', 'GKLJHASDJGFAKSJDGJHASDKJKJHbbvhjcKJHSD')
    b = conn.get_bucket('mybucket')
    sml = b.new_key("/".join(["myfolder", destination_file_name_photo]))
    sml.set_contents_from_string(image_from_form.read())
    acl = 'public-read'
    sml.set_acl(acl)
    return s3_file_name
How large are your assets? If the upload is too large, you may have to multipart/chunk it, otherwise it will time out.
bucketObject.initiate_multipart_upload('/local/object/as/file.ext')
That means you will not be using set_contents_from_string but rather storing and uploading in parts. You may need something to chunk the file, like FileChunkIO.
An example is here if this applies to you: http://www.bogotobogo.com/DevOps/AWS/aws_S3_uploading_large_file.php
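A rough sketch of what the multipart path could look like with boto 2 and FileChunkIO (the bucket name, key name, and part size are placeholders, not taken from the question):

import math
import os
import boto
from filechunkio import FileChunkIO

def upload_large_file_to_s3(local_path, key_name, bucket_name='mybucket'):
    # Chunked upload for files too large for a single set_contents_from_* call.
    conn = boto.connect_s3()  # credentials from environment / boto config
    bucket = conn.get_bucket(bucket_name)
    file_size = os.stat(local_path).st_size
    chunk_size = 50 * 1024 * 1024  # 50 MB parts
    chunk_count = int(math.ceil(file_size / float(chunk_size)))

    mp = bucket.initiate_multipart_upload(key_name)
    for i in range(chunk_count):
        offset = i * chunk_size
        nbytes = min(chunk_size, file_size - offset)
        with FileChunkIO(local_path, 'r', offset=offset, bytes=nbytes) as fp:
            mp.upload_part_from_file(fp, part_num=i + 1)
    mp.complete_upload()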
Also, you may want to edit your post above and alter your AWS keys.
I have mp3 files stored in Amazon S3 and a MySQL database with a table called Songs. I want to run a Python script that updates my database by going to Amazon S3, retrieving details of the mp3 files (using ID3 tags, for example), and then filling the Songs table in my database. I'm using Django. Is there any way to run this script with a simple click on an "update library" button, for example through the Django admin panel? Also, is it possible to run it on a schedule?
P.S. I'm new to both Django and Amazon S3.
EDIT:
I wrote a small script that grabs meta tags from mp3 files on my local machine. Here is the code for it:
import eyeD3
import sys
import urllib
import os

class Track():
    def __init__(self, audioFile):
        self.title = audioFile.getTag().getTitle()
        self.artist = audioFile.getTag().getArtist()
        self.year = audioFile.getTag().getYear()
        self.genre = audioFile.getTag().getGenre()
        self.length = audioFile.getPlayTimeString()
        self.album = audioFile.getTag().getAlbum()

def main():
    for root, dirs, files in os.walk('.'):
        for f in files:
            if eyeD3.isMp3File(f):
                audioFile = eyeD3.Mp3AudioFile(root + '/' + f)
                t = Track(audioFile)
                print t.artist, " ", t.title, " ", t.length, " ", t.album, " ", t.genre

if __name__ == '__main__':
    main()
I would like to find a way to run this script from Django, even if it's locally. I hope my point is clearer.
Thanks in advance!
You need to have a look at Boto and also django-storages for ideas on how to do what you'd like. django-storages makes it dead simple to replace Django's FileStorage mechanism so you can upload images/files directly to your bucket(s) at S3.
Reading from S3 and updating your database objects is just the opposite workflow, but Boto makes it simple to connect to the bucket(s) and read information.
Hope that helps you out.
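As a sketch of that reverse direction: a custom Django management command could walk the bucket with boto, pull the ID3 tags, and update the Songs table. The model fields, bucket name, and s3_key field below are assumptions for illustration, not part of Boto or django-storages:

# myapp/management/commands/update_library.py
import os
import tempfile

import boto
import eyeD3
from django.core.management.base import BaseCommand

from myapp.models import Song  # hypothetical model with s3_key/title/artist/album fields

class Command(BaseCommand):
    help = "Scan the S3 bucket and refresh the Songs table from ID3 tags."

    def handle(self, *args, **options):
        conn = boto.connect_s3()  # credentials from environment / boto config
        bucket = conn.get_bucket('my-music-bucket')  # placeholder bucket name
        for key in bucket.list():
            if not key.name.endswith('.mp3'):
                continue
            # Download to a temp file so eyeD3 can read the tags.
            tmp = tempfile.NamedTemporaryFile(suffix='.mp3', delete=False)
            key.get_contents_to_file(tmp)
            tmp.close()
            audio = eyeD3.Mp3AudioFile(tmp.name)
            tag = audio.getTag()
            Song.objects.update_or_create(
                s3_key=key.name,
                defaults={'title': tag.getTitle(),
                          'artist': tag.getArtist(),
                          'album': tag.getAlbum()})
            os.remove(tmp.name)

Such a command can then be run with python manage.py update_library, wired to a button in a small custom admin view, or scheduled with cron or Celery beat.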
I've got the following implementation for uploading a pdf file to google docs (taken from the gdata API samples):
def UploadResourceSample():
    """Upload a document, and convert to Google Docs."""
    client = CreateClient()
    doc = gdata.docs.data.Resource(type='document', title='My Sample Doc')
    # This is a convenient MS Word doc that we know exists
    path = _GetDataFilePath('test.0.doc')
    print 'Selected file at: %s' % path
    # Create a MediaSource, pointing to the file
    media = gdata.data.MediaSource()
    media.SetFileHandle(path, 'application/msword')
    # Pass the MediaSource when creating the new Resource
    doc = client.CreateResource(doc, media=media)
    print 'Created, and uploaded:', doc.title.text, doc.resource_id.text
Now I would like to perform OCR text recognition on the uploaded file, but I'm not sure how to enable OCR in the gdata docs Python API. So my question is:
Is there a way to enable OCR on a PDF file using the gdata Python v3.0 API?
I've managed to get my pdf document OCR'ed using the following code:
def UploadResourceSample(filename, filepath, fullpath):
    """Upload a document, and convert to Google Docs."""
    client = CreateClient()
    doc = gdata.docs.data.Resource(type='document', title=filename)
    path = fullpath
    print 'Selected file at: %s' % path
    # Create a MediaSource, pointing to the file
    media = gdata.data.MediaSource()
    media.SetFileHandle(path, 'application/pdf')
    # Pass the MediaSource when creating the new Resource, with OCR enabled
    create_uri = gdata.docs.client.RESOURCE_UPLOAD_URI + '?ocr=true&ocr-language=de'
    doc = client.CreateResource(doc, create_uri=create_uri, media=media)
    print 'Created, and uploaded:', doc.title.text, doc.resource_id.text
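Both snippets rely on a CreateClient() helper from the gdata samples; a minimal sketch of such a helper (legacy ClientLogin flow, placeholder credentials and source name) might look like:

import gdata.docs.client

def CreateClient():
    # Minimal sketch, assuming the legacy ClientLogin flow; replace the credentials.
    client = gdata.docs.client.DocsClient(source='ocr-upload-sample')
    client.ssl = True
    client.ClientLogin('user@example.com', 'password', client.source)
    return client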