I am trying to upload a file into a folder in Box using the code below:
folder_id = '22222'
new_file = client.folder(folder_id).upload('/home/me/document.pdf')
print('File "{0}" uploaded to Box with file ID {1}'.format(new_file.name, new_file.id))
This code is not replacing the existing document.pdf in the Box folder; instead, it keeps the older version of the file. I would like to remove the file in the target folder and keep only the latest file. How can I achieve this?
Since your goal is to replace the original file, you can overwrite its existing content instead. Here is an example. You will need to check whether the filename is already present in the Box folder first, though:
folder_id = '22222'
file_path = '/home/me/document.pdf'
results = client.search().query(query='document', limit=1, ancestor_folder_ids=[folder_id], type='file', file_extensions=['pdf'])
file_id = None
for item in results:
    file_id = item.id
if file_id:
    updated_file = client.file(file_id).update_contents(file_path)
    print('File "{0}" has been updated'.format(updated_file.name))
else:
    new_file = client.folder(folder_id).upload(file_path)
    print('File "{0}" uploaded to Box with file ID {1}'.format(new_file.name, new_file.id))
It's not replacing the file because every time you upload a new file, Box assigns it a new ID, so the old file will never be replaced.
This is what I found in the official docs.
Try giving it an explicit name and see if that helps.
upload
Upload a file to the folder. The contents are taken from the given file path, and it will have the given name. If file_name is not specified, the uploaded file will take its name from file_path.
Parameters:
file_path (unicode) – The file path of the file to upload to Box.
file_name (unicode) – The name to give the file on Box. If None, then use the leaf name of file_path
preflight_check (bool) – If specified, preflight check will be performed before actually uploading the file.
preflight_expected_size (int) – The size of the file to be uploaded in bytes, which is used for preflight check. The default value is ‘0’, which means the file size is unknown.
upload_using_accelerator (bool) –
If specified, the upload will try to use Box Accelerator to speed up uploads of big files. It will make an extra API call before the actual upload to get the Accelerator upload URL, and then make a POST request to that URL instead of the default Box upload URL. It falls back to the normal upload endpoint if it cannot get the Accelerator upload URL.
Please note that this is a premium feature, which might not be available to your app.
Returns:
The newly uploaded file.
Return type:
File
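For example, a minimal sketch of passing file_name explicitly. Note that this alone will not replace an existing file: if a file with the same name already exists in the folder, Box rejects the upload with a name-conflict error, which is why the accepted answer above falls back to update_contents() on the existing file ID.

folder_id = '22222'
file_path = '/home/me/document.pdf'

# Upload under an explicit name instead of the leaf name of file_path.
new_file = client.folder(folder_id).upload(file_path, file_name='document.pdf')
print('File "{0}" uploaded to Box with file ID {1}'.format(new_file.name, new_file.id))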
I was trying to open a file/image in Python/Django and upload it to S3, but I get different errors depending on what I try. I can get it to work when I send the image using the front-end HTML form, but not when opening the file on the back end. I get errors such as "'bytes' object has no attribute 'file'". Any ideas how to open an image and upload it to S3? I wasn't sure whether I was using the correct upload function, but it worked when I received the file from an HTML form instead of opening it directly.
image = open(fileURL, encoding="utf-8")
S3_BUCKET = settings.AWS_BUCKET
session = boto3.Session(
    aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
    aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
)
s3 = session.resource('s3')
s3.Bucket(S3_BUCKET).put_object(Key='folder/%s' % fileName, Body=image)
Thanks.
The open command returns a file object, so Body=image does not contain the actual contents of the object.
Since you want to upload an existing object, you could use:
Key = 'folder/' + fileName
s3.Object(S3_BUCKET, Key).upload_file(fileURL)
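Putting it together with the session from the question (a minimal sketch; upload_file opens and streams the file in binary mode itself, so the open() call with a text encoding is not needed; the settings names are assumed from the question):

import boto3
from django.conf import settings  # AWS_* settings names assumed from the question

session = boto3.Session(
    aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
    aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
)
s3 = session.resource('s3')

# fileName and fileURL as defined in the question
key = 'folder/%s' % fileName
s3.Object(settings.AWS_BUCKET, key).upload_file(fileURL)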
I have read through several articles/docs/questions now, and each solves part of my problem, but I just can't seem to put them all together. I am using Flask/Python, the Google Drive API v3 and a service account, and want to run the site on Heroku, which is set up already. Since Heroku doesn't have persistent file storage, though, I need to upload the files directly into Google Drive, which would have to be done sooner or later anyway.
Basically what I want to do is:
People type in their data on a website and upload a file.
That file then gets renamed according to the user's data (needs to support Chinese characters/UTF-8)
then the renamed file gets uploaded onto Google Drive using a service account
I was able to rename the file with UTF-8 characters and save it locally, but I don't know how to upload it to Google. I tried an approach earlier, but I don't know whether it just created an empty file or uploaded the file properly; I did see files and folders when listing them (that doesn't work now, though, as I think my current attempt is exactly the same). My code:
def upload_service(file, filename):
    file_metadata = {
        'name': filename,
        'parents': 'Upload'
    }
    mime = mimetypes.guess_type(file.filename)[0]
    media = MediaFileUpload(filename, mimetype=mime)
    cloudFile = service.files().create(body=file_metadata, media_body=media).execute()
    result = service.files().list().execute()
Here, file is the actual uploaded file and filename is the input the user typed in.
Everything works fine, except that for media = MediaFileUpload(filename, mimetype=mime) I get FileNotFoundError: [Errno 2] No such file or directory: 'gtmuCfr.jpg', stating that the file named filename (or file) is not there. This happens with both the original name of the file and the new name.
Ok, I finally found the issue and was able to fix it (for the most part).
with open(new_name.encode("utf-8"), "wb") as save_file:
    file = save_file.write(f.read())
upload_service(file, new_name)
Doing it like this solves the issue, as file is just the int that write() returns (I guess), and since it doesn't have a filename attribute, I need to use the path/new_name inside the function.
The only difference inside the function is:
file_metadata = {
    'name': path,
    'parents': 'Upload'
}
mime = mimetypes.guess_type(path)[0]
media = MediaFileUpload(path, mimetype=mime)
where I need to use the path/new_name that I sent as a second parameter.
Now the only issue left is with the Upload folder/parent not being selected for the upload, but I can see the files (after giving permission) in my Drive and they have the right content.
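The remaining folder issue is likely because the Drive v3 API expects parents to be a list of folder IDs rather than a folder name. A sketch of the metadata under that assumption (folder_id is a placeholder for the actual ID of the Upload folder):

# Look up the folder ID once, e.g. via a files().list() query on the folder name.
file_metadata = {
    'name': path,
    'parents': [folder_id]  # a list of parent folder IDs, not the name 'Upload'
}
media = MediaFileUpload(path, mimetype=mime)
created = service.files().create(body=file_metadata, media_body=media, fields='id').execute()
print('Created file with ID: {0}'.format(created.get('id')))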
I am using Python 2.7 and ReportLab to create .pdf files for display/print in my App Engine system. I am using ndb.Model to store the data, if that matters.
I am able to produce the equivalent of a bank statement for a single client on-line. That is, the user clicks the on-screen 'pdf' button and the .pdf statement appears on screen in a new tab, exactly as it should.
I am using the following code to save .pdf files to Google Cloud Storage successfully:
buffer = StringIO.StringIO()
self.p = canvas.Canvas(buffer, pagesize=portrait(A4))
self.p.setLineWidth(0.5)
try:
    # create .pdf of .csv data here
    pass
finally:
    self.p.save()
pdfout = buffer.getvalue()
buffer.close()
filename = getgcsbucket() + '/InvestorStatement.pdf'
write_retry_params = gcs.RetryParams(backoff_factor=1.1)
try:
    gcs_file = gcs.open(filename,
                        'w',
                        content_type='application/pdf',
                        retry_params=write_retry_params)
    gcs_file.write(pdfout)
except:
    logging.error(traceback.format_exc())
finally:
    gcs_file.close()
I am using the following code to create a list of all files for display on-screen; it shows all the files stored above:
allfiles = []
bucket_name = getgcsbucket()
rfiles = gcs.listbucket(bucket_name)
for rfile in rfiles:
    allfiles.append(rfile.filename)
return allfiles
My screen (HTML) shows rows of ([Delete] and Filename). When the user clicks the [Delete] button, the following delete code snippet works (filename is the complete /bucket/filename):
filename = self.request.get('filename')
try:
    gcs.delete(filename)
except gcs.NotFoundError:
    pass
My question: given I have a list of files on-screen, I want the user to click on a filename and have that file downloaded to their computer. In Google's Chrome browser, this would result in the file being downloaded, with its name displayed at the bottom left of the screen.
One other point: the above example is for .pdf files. I will also have to show .csv files in the list and would like them to be downloaded as well. I only want the files to be downloaded; no display is required.
So, I would like a snippet like ...
filename = self.request.get('filename')
try:
    gcs.downloadtousercomputer(filename)  # ???
except gcs.NotFoundError:
    pass
I think I have tried everything I can find both here and elsewhere. Sorry I have been so long-winded. Any hints for me?
To download a file instead of showing it in the browser, you need to add a header to your response:
self.response.headers["Content-Disposition"] = 'attachment; filename="%s"' % filename
You can specify the filename as shown above and it works for any file type.
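Since the list contains both .pdf and .csv files, you could also set the Content-Type to match (a small sketch; using mimetypes.guess_type here is my assumption, not part of the original answer):

import mimetypes

content_type = mimetypes.guess_type(filename)[0] or 'application/octet-stream'
self.response.headers["Content-Type"] = content_type
self.response.headers["Content-Disposition"] = 'attachment; filename="%s"' % filename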
One solution you can try is to read the file from the bucket and print the content as the response with the correct header:
import cloudstorage
...
def read_file(self, filename):
    bucket_name = "/your_bucket_name"
    file = bucket_name + '/' + filename
    with cloudstorage.open(file) as cloudstorage_file:
        self.response.headers["Content-Disposition"] = str('attachment;filename=' + filename)
        contents = cloudstorage_file.read()
    self.response.write(contents)
Here filename could be something you send as a GET parameter, and it needs to be a file that exists in your bucket or you will raise an exception.
Here you will find a sample. [1]
[1] https://cloud.google.com/appengine/docs/standard/python/googlecloudstorageclient/read-write-to-cloud-storage
I want to read the size of a file before uploading it to S3, in order to check whether there is enough storage left. The following code works; however, the file is empty when it is uploaded to S3. If I delete the part that checks the size of the file, it is uploaded properly. Is there another way to get the file size? The file comes from an upload form of an HTML page, and I'm uploading it directly to S3 without saving it to the server first.
availablestorage = getavailablestorage() #gets the available storage in bytes
latestfile = request.files['filetoupload'] #get the file from the HTML form
latestfile.seek(0,2)
latestsize = latestfile.tell() #this gets the size of the file
if availablestorage < latestsize:
    return "No space available. Delete files."
bucketname = request.form.get('spaceforupload')
conn = boto3.client('s3')
conn.upload_fileobj(latestfile, bucketname, latestfile.filename)
return redirect(url_for('showspace', spacename=bucketname))
Of course: you just seeked to the end to get the size, so the latestfile handle's current position is now end-of-file.
Just do:
latestfile.seek(0)
before running conn.upload_fileobj. That should work.
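In context, a minimal sketch based on the question's code:

latestfile.seek(0, 2)            # jump to the end of the stream
latestsize = latestfile.tell()   # current position == size in bytes
latestfile.seek(0)               # rewind so upload_fileobj reads from the start

if availablestorage < latestsize:
    return "No space available. Delete files."

conn = boto3.client('s3')
conn.upload_fileobj(latestfile, bucketname, latestfile.filename)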
I am using the web2py framework.
I have uploaded a txt file via SQLFORM and the file is stored in the "upload" folder. Now I need to read this txt file from the controller; what file path should I use in the function defined in default.py?
def readthefile(uploaded_file):
    file = open(uploaded_file, "rb")
    file.read()
    ....
You can join the application directory and the upload folder to build the path to the file.
Do something like this:
import os
filepath = os.path.join(request.folder, 'uploads', uploaded_file_name)
file = open(filepath, "rb")
request.folder: the application directory. For example if the
application is "welcome", request.folder is set to the absolute path
"/path/to/welcome". In your programs, you should always use this
variable and the os.path.join function to build paths to the files you
need to access.
Read request.folder
The transformed name of the uploaded file is stored in the upload field of your database table, so you need a way to query the specific record that was inserted via the SQLFORM submission in order to get the name of the stored file. Here is how it would look assuming you know the record ID:
stored_filename = db.mytable(record_id).my_upload_field
original_filename, stream = db.mytable.my_upload_field.retrieve(stored_filename)
stream.read()
When you pass a filename to the .retrieve method of an upload field, it will return a tuple containing the original filename as well as the open file object (called stream in the code above).
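Putting it together as a controller function (a sketch; mytable, my_upload_field, and taking the record ID from request.args are assumptions for illustration):

def readthefile():
    record_id = request.args(0)  # e.g. /app/default/readthefile/123
    row = db.mytable(record_id) or redirect(URL('index'))
    original_filename, stream = db.mytable.my_upload_field.retrieve(row.my_upload_field)
    contents = stream.read()
    stream.close()
    return dict(filename=original_filename, contents=contents)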