I am using the web2py framework.
I have uploaded a txt file via SQLFORM, and the file is stored in the "uploads" folder. Now I need to read this txt file from the controller. What file path should I use in the function defined in default.py?
def readthefile(uploaded_file):
    file = open(uploaded_file, "rb")
    file.read()
    ...
You can join the application directory and the upload folder to build the path to the file.
Do something like this:
import os
filepath = os.path.join(request.folder, 'uploads', uploaded_file_name)
file = open(filepath, "rb")
request.folder: the application directory. For example if the
application is "welcome", request.folder is set to the absolute path
"/path/to/welcome". In your programs, you should always use this
variable and the os.path.join function to build paths to the files you
need to access.
Read request.folder
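Outside web2py, the join itself is plain os.path; here is a quick sketch with made-up values standing in for request.folder and the stored filename:

```python
import os

# Made-up values standing in for request.folder and the stored upload name.
app_folder = "/path/to/welcome"
stored_name = "mytable.my_upload_field.abc123.txt"

filepath = os.path.join(app_folder, "uploads", stored_name)
print(filepath)  # e.g. /path/to/welcome/uploads/mytable.my_upload_field.abc123.txt
```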
The transformed name of the uploaded file is stored in the upload field of your database table, so you need a way to query the specific record that was inserted via the SQLFORM submission in order to get the name of the stored file. Here is how it would look assuming you know the record ID:
stored_filename = db.mytable(record_id).my_upload_field
original_filename, stream = db.mytable.my_upload_field.retrieve(stored_filename)
stream.read()
When you pass a filename to the .retrieve method of an upload field, it will return a tuple containing the original filename as well as the open file object (called stream in the code above).
Related
I'm trying to create a download function for my Streamlit app. What I currently have lets me download a zip file via a button in the app, but unfortunately it also saves the zip to my local folder, which I don't want. The problem is how I initialize the file_zip object. I want the zip file to have a specific name, ideally the name of the file the user uploaded plus a '.zip' extension (i.e. datafile, which carries the file name, is a parameter of the function). But every time I do that, it saves the zip file to my local folder. Is there an alternative? BTW, I'm trying to save a list of pandas DataFrames into one zip file.
def downloader(list_df, datafile, file_type):
    file = datafile.name.split(".")[0]
    # create zip file
    with zipfile.ZipFile("{}.zip".format(file), 'w', zipfile.ZIP_DEFLATED) as file_zip:
        for i in range(len(list_df)):
            file_zip.writestr(file + "_group_{}".format(i) + ".csv", pd.DataFrame(list_df[i]).to_csv())
    # pass it to front end for download
    zip_name = "{}.zip".format(file)
    with open(zip_name, "rb") as f:
        bytes = f.read()
        b64 = base64.b64encode(bytes).decode()
        href = f'<a href="data:application/zip;base64,{b64}" download="{zip_name}">Click Here To Download</a>'
        st.markdown(href, unsafe_allow_html=True)
It sounds like you want to create the zip file in memory and use it later to build a base64 encoding. You can use an io.BytesIO() object with ZipFile, rewind it, and read the data back for base64 encoding.
import io

def downloader(list_df, datafile, file_type):
    file = datafile.name.split(".")[0]
    # create the zip file in memory instead of on disk
    zip_buf = io.BytesIO()
    with zipfile.ZipFile(zip_buf, 'w', zipfile.ZIP_DEFLATED) as file_zip:
        for i in range(len(list_df)):
            file_zip.writestr(file + "_group_{}".format(i) + ".csv", pd.DataFrame(list_df[i]).to_csv())
    zip_buf.seek(0)  # rewind the buffer before reading it back
    # pass it to the front end for download
    zip_name = "{}.zip".format(file)
    b64 = base64.b64encode(zip_buf.read()).decode()
    href = f'<a href="data:application/zip;base64,{b64}" download="{zip_name}">Click Here To Download</a>'
    st.markdown(href, unsafe_allow_html=True)
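The key idea (in-memory zip, then base64) can be verified in isolation with nothing but the standard library; the CSV content below is made up:

```python
import base64
import io
import zipfile

# Build a zip archive entirely in memory.
zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, 'w', zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("data_group_0.csv", "a,b\n1,2\n")

# Rewind, then base64-encode the raw zip bytes for a data: URL.
zip_buf.seek(0)
b64 = base64.b64encode(zip_buf.read()).decode()

# Decoding and reopening the archive shows nothing touched the local disk.
restored = zipfile.ZipFile(io.BytesIO(base64.b64decode(b64)))
print(restored.namelist())  # ['data_group_0.csv']
```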
I have read through several articles/docs/questions now, and each solves a part of my problem, but I just can't seem to put them all together. I am using Flask/Python, the Google Drive API v3, and a service account, and I want to run the site on Heroku, which is already set up. Since Heroku doesn't have persistent file storage, though, I need to upload the files directly into Google Drive, which would need to be done sooner or later anyway.
Basically what I want to do is:
People type in their data on a website and upload a file.
That file then gets renamed according to the user's data (needs to support Chinese characters/utf8)
then the renamed file gets uploaded onto Google Drive using a service account
I was able to rename the file with utf-8 characters and save it locally, but I don't know how to upload it to Google. I tried earlier, but I couldn't tell whether a file had just been created with no content or the file had been uploaded properly. I did see files and folders when listing them (that doesn't work now though, as I think my current attempt is exactly the same). My code:
def upload_service(file, filename):
    file_metadata = {
        'name': filename,
        'parents': 'Upload'
    }
    mime = mimetypes.guess_type(file.filename)[0]
    media = MediaFileUpload(filename, mimetype=mime)
    cloudFile = service.files().create(body=file_metadata, media_body=media).execute()
    result = service.files().list().execute()
In here, file is the actual file uploaded with the filename being the input the user typed in.
All works fine, but for media = MediaFileUpload(filename, mimetype=mime) I get a FileNotFoundError: [Errno 2] No such file or directory: 'gtmuCfr.jpg', stating that no file with the name in filename (or file) exists. This happens for both the original name of the file and the new name.
Ok, I finally found the issue and was able to fix it (for the most part).
with open(new_name.encode("utf-8"), "wb") as save_file:
    file = save_file.write(f.read())

upload_service(file, new_name)
Doing it like this solves the issue. Note that file here is actually the int returned by save_file.write() (the number of bytes written), so it doesn't have a filename attribute, and I need to use the path/new_name inside the function instead.
The only difference inside the function is:
file_metadata = {
    'name': path,
    'parents': 'Upload'
}
mime = mimetypes.guess_type(path)[0]
media = MediaFileUpload(path, mimetype=mime)
where I need to use the path/new_name that I sent as a second parameter.
Now the only issue left is with the Upload folder/parent not being selected for the upload, but I can see the files (after giving permission) in my Drive and they have the right content.
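For the remaining folder issue: in the Drive v3 API, 'parents' is expected to be a list of parent folder IDs, not a folder name, so 'Upload' would first have to be resolved to its folder ID. A minimal sketch of the metadata shape, with a made-up folder ID and filename:

```python
# Hypothetical ID of the "Upload" folder in Drive (look it up via files().list()
# or copy it from the folder's URL in the Drive web UI).
upload_folder_id = "1a2b3c4d5e"

file_metadata = {
    'name': 'renamed_file.jpg',     # made-up name for illustration
    'parents': [upload_folder_id],  # v3 expects a LIST of folder IDs
}

print(file_metadata['parents'])  # ['1a2b3c4d5e']
```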
I am trying to upload a file into a folder in Box using the code below:
folder_id = '22222'
new_file = client.folder(folder_id).upload('/home/me/document.pdf')
print('File "{0}" uploaded to Box with file ID {1}'.format(new_file.name, new_file.id))
This code is not replacing the existing document.pdf in the Box folder; rather, it keeps the older version of the file. I would like to remove the file in the target and keep the latest file. How can I achieve this?
Since your goal is to replace the original file, you can overwrite its existing content. Here is an example. You will need to check whether the filename is already present in the Box folder, though:
folder_id = '22222'
file_path = '/home/me/document.pdf'

results = client.search().query(query='document', limit=1, ancestor_folder_ids=[folder_id], type='file', file_extensions=['pdf'])
file_id = None
for item in results:
    file_id = item.id

if file_id:
    updated_file = client.file(file_id).update_contents(file_path)
    print('File "{0}" has been updated'.format(updated_file.name))
else:
    new_file = client.folder(folder_id).upload(file_path)
    print('File "{0}" uploaded to Box with file ID {1}'.format(new_file.name, new_file.id))
It's not replacing the file because every time you upload a new file, Box assigns it a new ID, so the old file is never replaced.
This is what I found in the official docs.
Try giving the upload an explicit file_name and see if that helps.
upload [source]
Upload a file to the folder. The contents are taken from the given file path, and it will have the given name. If file_name is not specified, the uploaded file will take its name from file_path.
Parameters:
- file_path (unicode) – The file path of the file to upload to Box.
- file_name (unicode) – The name to give the file on Box. If None, then use the leaf name of file_path.
- preflight_check (bool) – If specified, a preflight check will be performed before actually uploading the file.
- preflight_expected_size (int) – The size of the file to be uploaded in bytes, which is used for the preflight check. The default value is '0', which means the file size is unknown.
- upload_using_accelerator (bool) – If specified, the upload will try to use Box Accelerator to speed up uploads of big files. It will make an extra API call before the actual upload to get the Accelerator upload URL, and then make a POST request to that URL instead of the default Box upload URL. It falls back to the normal upload endpoint if it cannot get the Accelerator upload URL. Please note that this is a premium feature, which might not be available to your app.
Returns:
The newly uploaded file.
Return type:
File
I'm working on a small Google App Engine project in which I need to fetch a remote zip file from a URL and then list the files contained in the zip archive.
I'm using the zipfile module.
Here's what I've come up with so far:
# fetch the zip file from its remote URL
result = urlfetch.fetch(zip_url)
# store the contents in a stream
file_stream = StringIO.StringIO(result.content)
# create the ZipFile object
zip_file = zipfile.ZipFile(file_stream, 'w')
# read the files by name
archive_files = zip_file.namelist()
Unfortunately the archive_files list is always of length 0.
Any ideas what I'm doing wrong?
You are opening the file in 'w' mode, which truncates it. Change it to 'r' mode for reading:
zip_file = zipfile.ZipFile(file_stream, 'r')
Reference: http://docs.python.org/library/zipfile.html#zipfile-objects
You're opening the ZipFile for writing. Try reading instead.
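The difference between the two modes is easy to demonstrate with an in-memory archive standing in for the fetched bytes:

```python
import io
import zipfile

# Build some zip bytes to stand in for result.content from the URL fetch.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr("hello.txt", "hi")
zip_bytes = buf.getvalue()

# 'r' mode reads the existing archive...
zip_file = zipfile.ZipFile(io.BytesIO(zip_bytes), 'r')
print(zip_file.namelist())  # ['hello.txt']

# ...while 'w' mode starts a brand-new, empty archive, hence the empty list.
empty = zipfile.ZipFile(io.BytesIO(), 'w')
print(empty.namelist())  # []
```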
I am working with Python and Biopython right now. I have a file upload form, and whatever file is uploaded, suppose (abc.fasta), I want to pass the same name to the execute() function parameter (abc.fasta) and the display() function parameter (abc.aln). Right now I am changing the file name manually, but I want it to happen automatically.
Workflow goes like this.
--- If submit is not true, display only the header and form part
--- If submit is true, call execute() and get the file name from the form input
--- Then display the result file, whose name is the same as the executed file's name with only the extension changed
My raw code is here -- http://pastebin.com/FPUgZSSe
Any suggestions, changes and algorithm is appreciated
Thanks
You need to read the uploaded file out of the cgi.FieldStorage() and save it on the server. Usually a temp directory (/tmp on Linux) is used for this. You should remove these files after processing, or on some schedule, to clean up the drive.
def main():
    import cgi
    import cgitb; cgitb.enable()
    import os

    f1 = cgi.FieldStorage()
    if "dfile" in f1:
        fileitem = f1["dfile"]
        pathtoTmpFile = os.path.join("path/to/temp/directory", fileitem.filename)
        with open(pathtoTmpFile, 'wb') as fout:
            # copy the upload in chunks so large files don't exhaust memory
            while 1:
                chunk = fileitem.file.read(100000)
                if not chunk:
                    break
                fout.write(chunk)
        execute(pathtoTmpFile)
        os.remove(pathtoTmpFile)
    else:
        header()
        form()
Then modify execute() so it takes the path to the newly saved file:
cline = ClustalwCommandline("clustalw", infile=pathToFile)
For the result file, you could also stream it back so the user gets a "Save as..." dialog. That might be a little more usable than displaying it in HTML.
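To derive the result file name automatically (the .aln next to the uploaded .fasta), os.path.splitext can swap the extension; the file names below are just examples:

```python
import os

def result_name(upload_path, new_ext=".aln"):
    """Replace the uploaded file's extension with the result extension."""
    base, _ = os.path.splitext(upload_path)
    return base + new_ext

print(result_name("abc.fasta"))       # abc.aln
print(result_name("/tmp/abc.fasta"))  # /tmp/abc.aln
```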