I have a Flask REST API for a simple file upload, written with reference to the Flask documentation. The form only has a file input and a submit button.
While uploading a file, during the send request, the request header shows Content-Length: 93,348,851, while
the original file size is 93,348,466 bytes, and
after the file is uploaded to the server, the file size is 93,348,466 bytes.
Why is there a difference of 385 bytes between the request header and the file size? I was passing the Content-Length to my upload() function to compare against the file size after uploading, to confirm that all file contents were uploaded, but I am getting this mismatch.
Is it possible to extract the file size from the request header?
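For context, the Content-Length header covers the entire multipart/form-data body, including the boundary lines and per-part headers around the file, which is where extra bytes like these come from. A minimal sketch of measuring the file itself on the server side, assuming a hypothetical /upload route and a form field named file:

import os
from flask import Flask, request

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    f = request.files["file"]  # the field name "file" is an assumption

    # Content-Length is the size of the whole multipart body (boundaries
    # and part headers included), so it is a bit larger than the file.
    body_size = request.content_length

    # Measure the file itself by seeking to the end of its stream.
    f.stream.seek(0, os.SEEK_END)
    file_size = f.stream.tell()
    f.stream.seek(0)  # rewind so the file can still be saved

    f.save(f"/tmp/{f.filename}")
    return {"content_length": body_size, "file_size": file_size}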
When should I start displaying the progress bar in the following (case b)?
After selecting a file and clicking the submit button, I observed:
a) For a small file, the request is sent very fast (it seems attaching the file to the request body is fast) and my upload button gets disabled quickly.
b) For a large file (GBs), the request seems quite slow. By that I mean: after clicking the upload button, I saw a spinner on the browser's tab, and the upload button only got disabled after 30-40 seconds [I assume the file was being attached to the request body and the request had not been sent yet].
So for case b, is it good to show the progress bar to cover these 30-40 seconds? The way I have currently coded it, the progress bar only comes into the picture once the request reaches the server and the actual upload starts. Is my understanding of uploading correct?
Please suggest.
I am trying to upload a binary file of size 10 GB using st.file_uploader; however, I get the following error message. In fact, I get the same error message for pretty much any file above 2 GB, i.e., I didn't get this error message when I uploaded a 1.2 GB file.
I have set my file capacity to 10 GB in the config file.
As a matter of fact, the file loading shows as it should, and the file appears to be uploaded for a very short time (maybe a second); however, the attachment then disappears along with the file_uploader widget itself, and the following message pops up.
By default, uploaded files are limited to 200 MB. You can configure the server.maxUploadSize config option as follows.
Set the config option in .streamlit/config.toml:
[server]
maxUploadSize=10000
https://github.com/streamlit/streamlit/issues/5938
The bug has been raised in the Streamlit repo and is yet to be fixed.
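For reference, a minimal sketch of the widget in question; the label and the size readout are illustrative only:

import streamlit as st

# server.maxUploadSize in .streamlit/config.toml must be large enough
# for the files you expect (the value is in megabytes).
uploaded = st.file_uploader("Upload a binary file")

if uploaded is not None:
    data = uploaded.getvalue()  # UploadedFile behaves like a BytesIO
    st.write(f"Received {uploaded.name}: {len(data)} bytes")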
I'm trying to forward a file upload POST request my Python Flask webservice receives to another API. I already receive the request in a format that the API would accept, however, I was not able to find any way to take the request I received and directly forward it to the API. I had imagined that maybe something like this could exist:
requests.post("url_of_api", request)
Where request is Flask's representation of the received request. Is such a thing possible?
Alternatively, I'm trying to recreate a request with the same data. The problem is that the request contains a file upload (wav specifically). I've tried to create a request with the file data as follows:
requests.post("url_of_api",files={"audio_data":("test.wav",request.files["audio_data"].stream.read())})
I can see in Wireshark that the requests are almost identical with respect to their file representation; however, there are slight differences at the start and end of the byte stream, and the file in the second request has 46 fewer bytes of data. In addition, it loses its MIME type. Is there any way to exactly replicate the byte representation of the uploaded file in a new request?
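The differing bytes at the start and end of the stream are most likely the multipart boundary strings, which requests generates freshly for each request, and the missing bytes plus the lost MIME type suggest the part's Content-Type header was dropped. One way to keep the filename and MIME type is to pass the incoming FileStorage's own metadata in the files tuple; a minimal sketch, assuming a hypothetical /forward route and keeping the question's placeholder target URL:

import requests
from flask import Flask, request

app = Flask(__name__)

@app.route("/forward", methods=["POST"])
def forward():
    f = request.files["audio_data"]
    # A (filename, file object, content type) tuple makes requests write
    # the same part headers, so the name and MIME type are preserved.
    resp = requests.post(
        "url_of_api",  # placeholder from the question
        files={"audio_data": (f.filename, f.stream, f.mimetype)},
    )
    return resp.text, resp.status_code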
I'm trying to export a CSV from this page via a Python script. The complicated part is that the page opens after clicking the export button on this page, begins the download, and closes again, rather than just hosting the file somewhere static. I've tried using the Requests library, among other things, but the file it returns is empty.
Here's what I've done:
from requests import get

url = 'http://aws.state.ak.us/ApocReports/CampaignDisclosure/CDExpenditures.aspx?exportAll=True&%3bexportFormat=CSV&%3bisExport=True%22+id%3d%22M_C_sCDTransactions_csfFilter_ExportDialog_hlAllCSV?exportAll=True&exportFormat=CSV&isExport=True'

with open('CD_Transactions_02-27-2017.CSV', "wb") as file:
    # send the GET request
    response = get(url)
    # write the response body to the file
    file.write(response.content)
I'm sure I'm missing something obvious, but I'm pulling my hair out.
It looks like the file is being generated on demand, and the URL stays valid only as long as the session lasts.
There are multiple requests from the browser to the webserver (including POST requests).
So to get those files via code, you would have to simulate the browser, possibly including session state etc. (and in this case also __VIEWSTATE).
To see the whole communication, you can use the developer tools in the browser (usually F12, then the Network tab to see the traffic), or use something like Wireshark.
In other words, this won't be an easy task.
If this is open government data, it might be better to just ask that government for the data, or ask for possible direct links to the (unfiltered) files (sometimes there is a public FTP server, for example) - or sometimes there is an API available.
The file is created on demand, but you can download it anyway. Essentially you have to:
1) Establish a session to save cookies and the viewstate
2) Submit a form in order to click the export button
3) Grab the link behind the popped-up CSV button
4) Follow that link and download the file
You can find working code here (if you don't mind that it's written in R): Save response from web-scraping as csv file
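For a Python version of the same steps, here is a rough sketch using requests and BeautifulSoup. The standard ASP.NET WebForms hidden fields (__VIEWSTATE and friends) are carried over from the page; the export button's control name is an assumption and should be read from the real page with the browser's developer tools:

import requests
from bs4 import BeautifulSoup

PAGE_URL = "http://aws.state.ak.us/ApocReports/CampaignDisclosure/CDExpenditures.aspx"

# 1) Establish a session so cookies and the viewstate survive between requests.
session = requests.Session()
soup = BeautifulSoup(session.get(PAGE_URL).text, "html.parser")

# 2) Rebuild the WebForms postback, carrying over all hidden state fields.
form_data = {
    field["name"]: field.get("value", "")
    for field in soup.select("input[type=hidden]")
}
form_data["__EVENTTARGET"] = "export-button-control-name"  # assumption
result = session.post(PAGE_URL, data=form_data)

# 3) Grab the CSV link from the response.
link = BeautifulSoup(result.text, "html.parser").find(
    "a", href=lambda h: h and "exportFormat=CSV" in h
)

# 4) Follow that link within the same session and save the file.
if link is not None:
    csv = session.get(requests.compat.urljoin(PAGE_URL, link["href"]))
    with open("CD_Transactions.csv", "wb") as f:
        f.write(csv.content)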
I am using Dropbox client for Python (actually a Python 3 version, but I don't think it matters now) to upload some files to my Dropbox. I am also using PyQt4 to have a GUI for this.
Is it possible to specify a callback that gets called during the upload, so I can show the user the upload progress?
You mean, you want to show progress while the file is uploading (on a progressbar or something)?
You probably need get_chunked_uploader()
From the API docs:

DESCRIPTION: Uploads large files to Dropbox in multiple chunks. Also has the ability to resume if the upload is interrupted. This allows for uploads larger than the /files_put maximum of 150 MB.

Typical usage:
1) Send a PUT request to /chunked_upload with the first chunk of the file without setting upload_id, and receive an upload_id in return.
2) Repeatedly PUT subsequent chunks using the upload_id to identify the upload in progress and an offset representing the number of bytes transferred so far.
3) After each chunk has been uploaded, the server returns a new offset representing the total amount transferred.
...
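With the legacy v1 Python SDK that get_chunked_uploader() comes from, one way to get progress is to wrap the file object so every read reports how many bytes have gone out; the uploader only ever calls read() on it. A rough sketch, with the access token as a placeholder and a print standing in for the PyQt progress bar:

import os
import dropbox  # legacy v1 SDK, which provides DropboxClient

class ProgressFile:
    """File-like wrapper that reports how many bytes have been read."""
    def __init__(self, f, total, callback):
        self._f, self._total, self._callback = f, total, callback
        self._read = 0

    def read(self, size=-1):
        data = self._f.read(size)
        self._read += len(data)
        self._callback(self._read, self._total)
        return data

def report(done, total):
    print(f"uploaded {done * 100 // total}%")  # hook a progress bar here

client = dropbox.client.DropboxClient("ACCESS_TOKEN")  # placeholder token

def upload_with_progress(local_path, remote_path):
    size = os.path.getsize(local_path)
    with open(local_path, "rb") as f:
        uploader = client.get_chunked_uploader(ProgressFile(f, size, report), size)
        while uploader.offset < size:
            uploader.upload_chunked()  # uploads in chunks; call again to resume
        uploader.finish(remote_path)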
I do a URL fetch to get info from an online txt file. It's a big file (about 2 MB and growing) that gets modified all the time, automatically.
I'm using memcache from Google App Engine to keep the data for a while. But each new request adds to the incoming bandwidth, and I started to get an Over Quota error.
I need a way to do a partial download of this file, downloading only what has changed instead of the whole file.
Any ideas? :)
Only if you know what part of the file has been changed.
For example, if you know that the file is only appended to, then you could use an HTTP Range request to request only the end of the file.
If you have no way of knowing where the file has been changed, then it would work only if the server sent you a patch or delta to a previous version.
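A minimal sketch of the append-only case, using the requests library for illustration (the same Range header works with App Engine's urlfetch); it assumes the server honors Range requests and that you remember how many bytes you already have:

import requests

def fetch_new_bytes(url, bytes_already_have):
    # Ask only for the bytes past what we already downloaded.
    headers = {"Range": f"bytes={bytes_already_have}-"}
    response = requests.get(url, headers=headers)
    if response.status_code == 206:  # Partial Content: range honored
        return response.content     # just the appended tail
    if response.status_code == 416:  # Range Not Satisfiable: nothing new
        return b""
    # 200 means the server ignored Range and sent the whole file.
    return response.content[bytes_already_have:]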