FastAPI: Transferring large files (200MB images) fast [duplicate] - python
I am trying to upload a large file (≥3GB) to my FastAPI server, without loading the entire file into memory, as my server has only 2GB of free memory.
Server side:
async def uploadfiles(upload_file: UploadFile = File(...)):
Client side:
m = MultipartEncoder(fields={"upload_file": open(file_name, 'rb')})
prefix = "http://xxx:5000"
url = "{}/v1/uploadfiles".format(prefix)
try:
    req = requests.post(
        url,
        data=m,
        verify=False,
    )
which returns:
HTTP 422 {"detail":[{"loc":["body","upload_file"],"msg":"field required","type":"value_error.missing"}]}
I am not sure what MultipartEncoder actually sends to the server, and hence why the request does not match what the endpoint expects. Any ideas?
With the requests-toolbelt library, you have to pass the filename as well when declaring the field for upload_file, and you also have to set the Content-Type header—which is the main reason for the error you get, as you are sending the request without setting the Content-Type header to multipart/form-data, followed by the necessary boundary string—as shown in the documentation. Example:
filename = 'my_file.txt'
m = MultipartEncoder(fields={'upload_file': (filename, open(filename, 'rb'))})
r = requests.post(url, data=m, headers={'Content-Type': m.content_type})
print(r.request.headers) # confirm that the 'Content-Type' header has been set
However, I wouldn't recommend using a library (i.e., requests-toolbelt) that hasn't provided a new release for over three years now. I would suggest using Python requests instead, as demonstrated in this answer and that answer (also see Streaming Uploads and Chunk-Encoded Requests), or, preferably, using the HTTPX library, which supports async requests (if you need to send multiple requests simultaneously), as well as streaming File uploads by default, meaning that only one chunk at a time will be loaded into memory (see the documentation). Examples are given below.
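For completeness, here is a minimal sketch of a chunk-encoded streaming upload with plain requests. Note that this sends the raw bytes as the request body, not multipart/form-data, so it would need a server endpoint that reads the stream directly (such as the .stream()-based one shown later); the URL below is simply the one from the question.

import requests

def read_in_chunks(path, chunk_size=1024 * 1024):
    # generator yielding the file in 1 MB chunks, so the whole file is never held in memory
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

# passing a generator makes requests send a chunk-encoded request body (sketch; raw body, not multipart)
r = requests.post('http://xxx:5000/v1/uploadfiles', data=read_in_chunks('bigFile.zip'))
print(r.status_code)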
Option 1 (Fast) - Upload File and Form data using .stream()
As previously explained in detail in this answer, when you declare an UploadFile object, FastAPI/Starlette, under the hood, uses a SpooledTemporaryFile with the max_size attribute set to 1MB, meaning that the file data is spooled in memory until the file size exceeds max_size, at which point the contents are written to disk; more specifically, to a temporary file in your OS's temporary directory—see this answer on how to find/change the default temporary directory—from which you later need to read the data, using the .read() method. Hence, this whole process makes uploading files quite slow, especially for large files (as you'll see in Option 2 further below).
To avoid that and speed up the process, as the linked answer above suggested, one can access the request body as a stream. As per the Starlette documentation, if you use the .stream() method, the (request) byte chunks are provided without storing the entire body in memory (and later in a temporary file, if the body size exceeds 1MB). This method allows you to read and process the byte chunks as they arrive. The below takes the suggested solution a step further, by using the streaming-form-data library, which provides a Python parser for parsing streaming multipart/form-data input chunks. This means that not only can you upload Form data along with File(s), but you also don't have to wait for the entire request body to be received in order to start parsing the data. The way it's done is that you initialise the main parser class (passing the HTTP request headers that help determine the input Content-Type, and hence the boundary string used to separate each body part in the multipart payload, etc.), and associate one of the Target classes to define what should be done with a field when it has been extracted out of the request body. For instance, FileTarget would stream the data to a file on disk, whereas ValueTarget would hold the data in memory (this class can be used for either Form or File data, if you don't need the file(s) saved to disk). It is also possible to define your own custom Target classes. I have to mention that the streaming-form-data library does not currently support async calls to I/O operations, meaning that the writing of chunks happens synchronously (within a def function). Though, as the endpoint below uses .stream() (which is an async function), it will give up control for other tasks/requests to run on the event loop while waiting for data to become available from the stream. You could also run the function for parsing the received data in a separate thread and await it, using Starlette's run_in_threadpool()—e.g., await run_in_threadpool(parser.data_received, chunk)—which FastAPI uses internally when you call the async methods of UploadFile, as shown here. For more details on def vs async def, please have a look at this answer.
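As a rough illustration of that last possibility, a custom Target might look something like the sketch below. It assumes the library's BaseTarget base class and its on_data_received() hook, as described in its documentation; the class itself is made up for this example.

import hashlib

from streaming_form_data.targets import BaseTarget

class ChecksumTarget(BaseTarget):
    # made-up example class: computes a checksum of the uploaded bytes as they
    # arrive, without keeping the file contents in memory
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._hash = hashlib.sha256()

    def on_data_received(self, chunk: bytes):
        self._hash.update(chunk)

    @property
    def value(self):
        return self._hash.hexdigest()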
You can also perform certain validation tasks, e.g., ensuring that the input size does not exceed a certain value. This can be done using MaxSizeValidator. However, as this would only be applied to the fields you defined—and hence, it wouldn't prevent a malicious user from sending an extremely large request body, which could result in consuming server resources to the point that the application ends up crashing—the example below incorporates a custom MaxBodySizeValidator class that is used to make sure that the request body size does not exceed a pre-defined value. Both validators described above solve the problem of limiting the upload file size (as well as the entire request body size) in a likely better way than the one described here, which uses UploadFile, and hence the file needs to be entirely received and saved to the temporary directory before the check is performed (not to mention that that approach does not take the request body size into account at all)—using an ASGI middleware such as this would be an alternative solution for limiting the request body. Also, in case you are using Gunicorn with Uvicorn, you can also define limits with regard to, for example, the number of HTTP header fields in a request, the size of an HTTP request header field, and so on (see the documentation). Similar limits can be applied when using reverse proxy servers, such as Nginx (which also allows you to set the maximum request body size using the client_max_body_size directive).
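As a rough idea of what such an ASGI middleware (mentioned above) could look like, here is a minimal sketch. The class name is made up; it only rejects requests that declare a too-large Content-Length up front, so it complements—rather than replaces—the streaming validator used below.

from starlette.responses import PlainTextResponse
from starlette.types import ASGIApp, Receive, Scope, Send

class ContentLengthLimitMiddleware:
    # made-up class name; a minimal sketch, not a drop-in replacement for the validators below
    def __init__(self, app: ASGIApp, max_body_size: int = 4 * 1024 * 1024 * 1024):
        self.app = app
        self.max_body_size = max_body_size

    async def __call__(self, scope: Scope, receive: Receive, send: Send):
        if scope['type'] == 'http':
            headers = dict(scope['headers'])
            content_length = headers.get(b'content-length')
            if content_length is not None and int(content_length) > self.max_body_size:
                # reject before reading any of the body
                response = PlainTextResponse('Request body too large', status_code=413)
                await response(scope, receive, send)
                return
        await self.app(scope, receive, send)

# usage (sketch): app.add_middleware(ContentLengthLimitMiddleware, max_body_size=MAX_REQUEST_BODY_SIZE)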
A few notes for the example below. Since it uses the Request object directly, and not UploadFile and Form objects, the endpoint won't be properly documented in the auto-generated docs at /docs (if that's important for your app at all). This also means that you have to perform some checks yourself, such as whether the required fields for the endpoint were received, and whether they were in the expected format. For instance, for the data field, you could check whether data.value is empty or not (empty would mean that the user has either not included that field in the multipart/form-data, or sent an empty value), as well as whether isinstance(data.value, str). As for the file(s), you can check whether file_.multipart_filename is not empty; however, since some user might not include a filename in the Content-Disposition header, you may also want to check whether the file exists in the filesystem, using os.path.isfile(filepath) (Note: you need to make sure there is no pre-existing file with the same name in that specified location; otherwise, the aforementioned function would always return True, even when the user did not send the file).
Regarding the applied size limits, the MAX_REQUEST_BODY_SIZE below must be larger than the MAX_FILE_SIZE (plus the size of all the Form values) you expect to receive, as the raw request body (that you get from using the .stream() method) includes a few more bytes for the --boundary and Content-Disposition header of each field in the body. Hence, you should add a few more bytes, depending on the Form values and the number of files you expect to receive (hence the MAX_FILE_SIZE + 1024 below).
app.py
from fastapi import FastAPI, Request, HTTPException, status
from streaming_form_data import StreamingFormDataParser
from streaming_form_data.targets import FileTarget, ValueTarget
from streaming_form_data.validators import MaxSizeValidator
import streaming_form_data
from starlette.requests import ClientDisconnect
import os
MAX_FILE_SIZE = 1024 * 1024 * 1024 * 4 # = 4GB
MAX_REQUEST_BODY_SIZE = MAX_FILE_SIZE + 1024
app = FastAPI()
class MaxBodySizeException(Exception):
    def __init__(self, body_len: int):
        self.body_len = body_len

class MaxBodySizeValidator:
    def __init__(self, max_size: int):
        self.body_len = 0
        self.max_size = max_size

    def __call__(self, chunk: bytes):
        self.body_len += len(chunk)
        if self.body_len > self.max_size:
            raise MaxBodySizeException(body_len=self.body_len)
@app.post('/upload')
async def upload(request: Request):
    body_validator = MaxBodySizeValidator(MAX_REQUEST_BODY_SIZE)
    filename = request.headers.get('Filename')
    if not filename:
        raise HTTPException(status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
                            detail='Filename header is missing')
    try:
        filepath = os.path.join('./', os.path.basename(filename))
        file_ = FileTarget(filepath, validator=MaxSizeValidator(MAX_FILE_SIZE))
        data = ValueTarget()
        parser = StreamingFormDataParser(headers=request.headers)
        parser.register('file', file_)
        parser.register('data', data)

        async for chunk in request.stream():
            body_validator(chunk)
            parser.data_received(chunk)
    except ClientDisconnect:
        print("Client Disconnected")
    except MaxBodySizeException as e:
        raise HTTPException(status_code=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE,
                            detail=f'Maximum request body size limit ({MAX_REQUEST_BODY_SIZE} bytes) exceeded ({e.body_len} bytes read)')
    except streaming_form_data.validators.ValidationError:
        raise HTTPException(status_code=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE,
                            detail=f'Maximum file size limit ({MAX_FILE_SIZE} bytes) exceeded')
    except Exception:
        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                            detail='There was an error uploading the file')

    if not file_.multipart_filename:
        raise HTTPException(status_code=status.HTTP_422_UNPROCESSABLE_ENTITY, detail='File is missing')

    print(data.value.decode())
    print(file_.multipart_filename)

    return {"message": f"Successfully uploaded {filename}"}
As mentioned earlier, to upload the data (on client side), you can use the HTTPX library, which supports streaming file uploads by default, and thus allows you to send large streams/files without loading them entirely into memory. You can pass additional Form data as well, using the data argument. Below, a custom header, i.e., Filename, is used to pass the filename to the server, so that the server instantiates the FileTarget class with that name (you could use the X- prefix for custom headers, if you wish; however, it is not officially recommended anymore).
To upload multiple files, use a header for each file (or use random names on the server side and, once a file has been fully uploaded, optionally rename it using the file_.multipart_filename attribute), pass a list of files, as described in the documentation—making sure to use a different field name for each file, so that they won't overlap when parsing them on the server side, e.g., files = [('file', open('bigFile.zip', 'rb')), ('file_2', open('bigFile2.zip', 'rb'))]—and, finally, define the Target classes on the server side accordingly; a rough sketch of this is given right below, followed by the full single-file example.
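A minimal sketch of the two-file case (the field names and filenames here are just placeholders; in practice you would derive the filenames from headers or use random names, as described above):

# client side (sketch): one field name per file
files = [('file', open('bigFile.zip', 'rb')), ('file_2', open('bigFile2.zip', 'rb'))]

# server side (sketch): one FileTarget per field, registered on the same parser
file_ = FileTarget('./bigFile.zip', validator=MaxSizeValidator(MAX_FILE_SIZE))
file_2 = FileTarget('./bigFile2.zip', validator=MaxSizeValidator(MAX_FILE_SIZE))
parser.register('file', file_)
parser.register('file_2', file_2)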
test.py
import httpx
import time
url ='http://127.0.0.1:8000/upload'
files = {'file': open('bigFile.zip', 'rb')}
headers={'Filename': 'bigFile.zip'}
data = {'data': 'Hello World!'}
with httpx.Client() as client:
    start = time.time()
    r = client.post(url, data=data, files=files, headers=headers)
    end = time.time()
    print(f'Time elapsed: {end - start}s')
    print(r.status_code, r.json(), sep=' ')
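If you were sending the request from async code instead (as mentioned earlier, HTTPX supports async requests), a rough equivalent of the client above—assuming the same /upload endpoint—could look like this:

import asyncio
import httpx

async def upload():
    # same endpoint and fields as the synchronous client above (sketch)
    files = {'file': open('bigFile.zip', 'rb')}
    headers = {'Filename': 'bigFile.zip'}
    data = {'data': 'Hello World!'}
    async with httpx.AsyncClient() as client:
        r = await client.post('http://127.0.0.1:8000/upload', data=data, files=files, headers=headers)
        print(r.status_code, r.json())

asyncio.run(upload())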
Upload both File and JSON body
In case you would like to upload both file(s) and JSON instead of Form data, you can use the approach described in Method 3 of this answer, thus also saving you from performing manual checks on the received Form fields, as explained earlier (see the linked answer for more details). To do that, make the following changes in the code above.
app.py
#...
from fastapi import Form
from pydantic import BaseModel, ValidationError
from typing import Optional
from fastapi.encoders import jsonable_encoder
class Base(BaseModel):
    name: str
    point: Optional[float] = None
    is_accepted: Optional[bool] = False

def checker(data: str = Form(...)):
    try:
        model = Base.parse_raw(data)
    except ValidationError as e:
        raise HTTPException(detail=jsonable_encoder(e.errors()), status_code=status.HTTP_422_UNPROCESSABLE_ENTITY)
    return model

#...

@app.post('/upload')
async def upload(request: Request):
    #...
    # place this after the try-except block
    model = checker(data.value.decode())
    print(model.dict())
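Note that parse_raw() and .dict() are the Pydantic v1 method names; if you happen to be on Pydantic v2, the roughly equivalent calls would be as follows (only the changed lines are sketched here):

# Pydantic v2 variant of the checker above (sketch)
def checker(data: str = Form(...)):
    try:
        model = Base.model_validate_json(data)  # replaces parse_raw()
    except ValidationError as e:
        raise HTTPException(detail=jsonable_encoder(e.errors()), status_code=status.HTTP_422_UNPROCESSABLE_ENTITY)
    return model

# and later, inside the endpoint:
print(model.model_dump())  # replaces .dict()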
test.py
#...
import json
data = {'data': json.dumps({"name": "foo", "point": 0.13, "is_accepted": False})}
#...
Option 2 (Slow) - Upload File and Form data using UploadFile and Form
If you would like to use a normal def endpoint instead, see this answer.
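A rough sketch of what such a def endpoint might look like (this is only an assumption of what the linked answer describes—copying from the underlying file object with shutil, which handles the chunking for you):

from fastapi import FastAPI, File, UploadFile, HTTPException, status
import shutil
import os

app = FastAPI()

@app.post("/upload")
def upload(file: UploadFile = File(...)):
    try:
        filepath = os.path.join('./', os.path.basename(file.filename))
        with open(filepath, 'wb') as f:
            # copies the SpooledTemporaryFile contents to disk in chunks
            shutil.copyfileobj(file.file, f)
    except Exception:
        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                            detail='There was an error uploading the file')
    finally:
        file.file.close()
    return {"message": f"Successfully uploaded {file.filename}"}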
app.py
from fastapi import FastAPI, File, UploadFile, Form, HTTPException, status
import aiofiles
import os
CHUNK_SIZE = 1024 * 1024 # adjust the chunk size as desired
app = FastAPI()
#app.post("/upload")
async def upload(file: UploadFile = File(...), data: str = Form(...)):
try:
filepath = os.path.join('./', os.path.basename(file.filename))
async with aiofiles.open(filepath, 'wb') as f:
while chunk := await file.read(CHUNK_SIZE):
await f.write(chunk)
except Exception:
raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail='There was an error uploading the file')
finally:
await file.close()
return {"message": f"Successfuly uploaded {file.filename}"}
As mentioned earlier, using this option would take longer for the file upload to complete, and as HTTPX uses a default timeout of 5 seconds, you will most likely get a ReadTimeout exception (as the server will need some time to read the SpooledTemporaryFile in chunks and write the contents to a permanent location on the disk). Thus, you can configure the timeout (see the Timeout class in the source code too), and more specifically, the read timeout, which "specifies the maximum duration to wait for a chunk of data to be received (for example, a chunk of the response body)". If set to None instead of some positive numerical value, there will be no timeout on read.
test.py
import httpx
import time
url ='http://127.0.0.1:8000/upload'
files = {'file': open('bigFile.zip', 'rb')}
headers={'Filename': 'bigFile.zip'}
data = {'data': 'Hello World!'}
timeout = httpx.Timeout(None, read=180.0)
with httpx.Client(timeout=timeout) as client:
    start = time.time()
    r = client.post(url, data=data, files=files, headers=headers)
    end = time.time()
    print(f'Time elapsed: {end - start}s')
    print(r.status_code, r.json(), sep=' ')
Related
How to send a Starlette FormData data structure to a FastAPI endpoint via python request library
My system architecture currently sends a form data blob from the frontend to the backend, both hosted on localhost on different ports. The form data is received in the backend via the FastAPI library as shown:

@app.post('/avatar/request')
async def get_avatar_request(request: Request, Authorize: AuthJWT = Depends()):
    form = await request.form()
    return run_function_in_jwt_wrapper(get_avatar_output, form, Authorize, False)

Currently, I am trying to relay the form data unmodified to another FastAPI endpoint from the backend using the requests library, as follows:

response = requests.post(models_config["avatar_api"], data=form_data, headers={"response-type": "blob"})

While the destination endpoint does receive the Form Data, it seems to not have parsed the UploadFile component properly. Instead of getting the corresponding Starlette UploadFile data structure, I instead receive the string of the class name, as shown in this error message:

FormData([('avatarFile', '<starlette.datastructures.UploadFile object at 0x7f8d25468550>'), ('avatarFileType', 'jpeg'), ('background', 'From Image'), ('voice', 'en-US-Wavenet-B'), ('transcriptKind', 'text'), ('translateToLanguage', 'No translation'), ('transcriptText', 'do')])

How should I handle this problem?
FileUpload is a Python object, so you'd need to serialize it somehow before using requests.post(), and then deserialize it before actually getting the content out of it via content = await form["upload-file"].read(). I don't think you'd want to serialize a FileUpload object though (if that is even possible); rather, you'd read the content of the form data and then post that. Even better, if your other FastAPI endpoint is part of the same service, you might consider just calling a function instead and avoiding requests altogether (maybe use a controller function that the route function calls, in case you also need this endpoint to be callable from outside the service; then just call the controller function directly, avoiding the route and the need for requests). This way you can pass whatever you want without needing to serialize it. If you must use requests, then I'd read the content of the form and create a new post with that form data, e.g.:

form = await request.form()  # starlette.datastructures.FormData
upload_file = form["upload_file"]  # starlette.datastructures.UploadFile - not used here, but just for illustrative purposes
filename = form["upload_file"].filename  # str
contents = await form["upload_file"].read()  # bytes
content_type = form["upload_file"].content_type  # str
...

data = {k: v for k, v in form.items() if k != "upload-file"}  # the form data except the file
files = {"upload-file": (filename, contents, content_type)}  # the file

requests.post(models_config["avatar_api"], files=files, data=data, headers={"response-type": "blob"})
Json in PUT request to Python Flask app fails to decode
I'm working on building an SQLite3 database to store aperture flux measurements of stars in any given astronomical image. I have one file (star_database.py) containing a Flask app running with two routes that handle selecting from and inserting into that database. There is a separate script (aperture_photometry.py) that will call those routes when incoming images need photometric processing. The crux of my problem is in the interaction between the function to insert data into the SQLite database and the aperture photometry script tasked with passing data to the Flask app. Here are the relevant functions:

# Function in aperture_photometry.py that turns Star object data into a dictionary
# and passes it to the Flask app
from astropy.io import fits
import requests
import json

def measure_photometry(image, bandpass):
    df = fits.getdata(image)
    hf = fits.getheader(image, ext=1)
    date_obs = hf['DATE-OBS']
    ra, dec = get_center_coords(hf)
    response = requests.get(f'http://127.0.0.1:5000/select/{ra}/{dec}/').content
    star_json = json.loads(response)
    if star_json is not None:
        stars = json_to_stars(star_json)
        get_raw_flux(df, df*0.01, hf, stars)  # Error array will be changed
        star_json = []
        # Critical section
        for star in stars:
            star_json.append({"star_id": star.star_id, "source_id": star.source_id,
                              "flux": star.flux, "flux_err": star.flux_err})
        response = requests.put('http://127.0.0.1:5000/insert/',
                                data={'stars': star_json, 'bandpass': bandpass, 'dateobs': date_obs})
        print(response.content)
    else:
        print("ERROR: Could not get star objects from database.")

# Function in star_database.py that handles incoming flux measurements from the
# aperture photometry script, and then inserts data into the SQLite database
from flask import Flask, request

@app.route('/insert/', methods=['PUT'])
def insert_flux_rows():
    conn = create_connection(database)
    if conn is not None:
        c = conn.cursor()
        body = request.get_json(force=True)
        print(body)
        # More comes after this, but it is not relevant to the question

After running the Flask app and calling aperture_photometry.py, the PUT request response.content line prints a 400 Bad Request error with the message, Failed to decode JSON object: Expecting value: line 1 column 1 (char 0). I think the problem is either in the way I have tried to format the star object data as it is being passed into the PUT request in measure_photometry, or, if not, there is something wrong with doing body = request.get_json(force=True). It is also worth mentioning that the statement print(body) in insert_flux_rows does not print anything to stdout. For all intents and purposes the two scripts must remain separate, i.e. I cannot combine them and remove the requests dependency. I would really appreciate some help with this, as I have been trying to fix it all day.
Based on the top answer from this question, it seems like your data variable in the measure_photometry function may not be properly convertible to JSON. You should try to test it out (maybe run json.dumps on it) to see if a more detailed error message is provided. There's also the jsonschema package.
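For what it's worth, one thing worth trying (a sketch based on the code in the question; not tested) is letting requests serialise the payload itself via the json parameter, since data={...} form-encodes the dictionary rather than sending a JSON body:

import requests

# star_json, bandpass and date_obs are the variables built in measure_photometry() above
payload = {'stars': star_json, 'bandpass': bandpass, 'dateobs': date_obs}

# requests serialises the dict to JSON and sets the Content-Type header accordingly
response = requests.put('http://127.0.0.1:5000/insert/', json=payload)
print(response.content)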
Export spreadsheet as text/csv using Drive v3 gives 500 Internal Error
I was trying to export a Google Spreadsheet in csv format using the Google client library for Python:

# OAuth and setups...
req = g['service'].files().export_media(fileId=fileid, mimeType=MIMEType)
fh = io.BytesIO()
downloader = http.MediaIoBaseDownload(fh, req)
# Other file IO handling...

This works for MIMEType: application/pdf, MS Excel, etc. According to Google's documentation, text/csv is supported. But when I try to make a request, the server gives a 500 Internal Error. Even using Google's Drive API playground, it gives the same error. Tried: like in v2, I added a field gid = 0 to the request to specify the worksheet, but then it's a bad request.
This is a known bug in Google's code. https://code.google.com/a/google.com/p/apps-api-issues/issues/detail?id=4289 However, if you manually build your own request, you can download the whole file in bytes (the media management stuff won't work). With file as the file ID, and http as the http object that you've authorized against, you can download a file with:

from apiclient.http import HttpRequest

def postproc(*args):
    return args[1]

data = HttpRequest(http=http,
                   postproc=postproc,
                   uri='https://docs.google.com/feeds/download/spreadsheets/Export?key=%s&exportFormat=csv' % file,
                   headers={}).execute()

data here is a bytes object that contains your CSV. You can open it with something like:

import io
lines = io.TextIOWrapper(io.BytesIO(data), encoding='utf-8', errors='replace')
for line in lines:
    # Do whatever
You just need to implement exponential backoff. Looking at this documentation of ExponentialBackOffPolicy, the idea is that the servers are only temporarily unavailable, and they should not be overwhelmed while they are trying to get back up. The default implementation requires back off for 500 and 503 status codes. Subclasses may override if different status codes are required. Here is a snippet of an implementation of exponential backoff from the first link:

ExponentialBackOff backoff = ExponentialBackOff.builder()
    .setInitialIntervalMillis(500)
    .setMaxElapsedTimeMillis(900000)
    .setMaxIntervalMillis(6000)
    .setMultiplier(1.5)
    .setRandomizationFactor(0.5)
    .build();
request.setUnsuccessfulResponseHandler(new HttpBackOffUnsuccessfulResponseHandler(backoff));

You may want to look at this documentation for a summary of the ExponentialBackoff implementation.
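Since the rest of this page is Python, here is a rough, library-agnostic sketch of the same retry-with-exponential-backoff idea (the helper below is made up, not part of the Google client library):

import random
import time

def call_with_backoff(request_fn, max_retries=5, initial_delay=0.5, multiplier=1.5):
    # retries the callable while the server returns 500/503, sleeping a
    # randomised, exponentially growing delay between attempts
    delay = initial_delay
    for _ in range(max_retries):
        response = request_fn()
        if response.status_code not in (500, 503):
            return response
        time.sleep(delay * (0.5 + random.random()))
        delay *= multiplier
    return response

# usage (sketch): call_with_backoff(lambda: requests.get(export_url))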
Streaming file upload using bottle (or flask or similar)
I have a REST frontend written using Python/Bottle which handles file uploads, usually large ones. The API is written in such a way that:

The client sends PUT with the file as a payload. Among other things, it sends Date and Authorization headers. This is a security measure against replay attacks -- the request is signed with a temporary key, using the target url, the date and several other things.

Now the problem. The server accepts the request if the supplied date is in a given datetime window of 15 minutes. If the upload takes long enough, it will be longer than the allowed time delta. Now, the request authorization handling is done using a decorator on the bottle view method. However, bottle won't start the dispatch process unless the upload is finished, so the validation fails on longer uploads.

My question is: is there a way to tell bottle or WSGI to handle the request immediately and stream the upload as it goes? This would be useful for me for other reasons as well. Or any other solutions? As I am writing this, WSGI middleware comes to mind, but still, I'd like external insight. I would be willing to switch to Flask, or even other Python frameworks, as the REST frontend is quite lightweight. Thank you
I recommend splitting the incoming file into smaller-sized chunks on the frontend. I'm doing this to implement a pause/resume function for large file uploads in a Flask application. Using Sebastian Tschan's jquery plugin, you can implement chunking by specifying a maxChunkSize when initializing the plugin, as in:

$('#file-select').fileupload({
    url: '/uploads/',
    sequentialUploads: true,
    done: function (e, data) {
        console.log("uploaded: " + data.files[0].name)
    },
    maxChunkSize: 1000000 // 1 MB
});

Now the client will send multiple requests when uploading large files. And your server-side code can use the Content-Range header to patch the original large file back together. For a Flask application, the view might look something like:

# Upload files
@app.route('/uploads/', methods=['POST'])
def results():
    files = request.files
    # assuming only one file is passed in the request
    key = files.keys()[0]
    value = files[key]  # this is a Werkzeug FileStorage object
    filename = value.filename

    if 'Content-Range' in request.headers:
        # extract starting byte from Content-Range header string
        range_str = request.headers['Content-Range']
        start_bytes = int(range_str.split(' ')[1].split('-')[0])

        # append chunk to the file on disk, or create new
        with open(filename, 'a') as f:
            f.seek(start_bytes)
            f.write(value.stream.read())
    else:
        # this is not a chunked request, so just save the whole file
        value.save(filename)

    # send response with appropriate mime type header
    return jsonify({"name": value.filename,
                    "size": os.path.getsize(filename),
                    "url": 'uploads/' + value.filename,
                    "thumbnail_url": None,
                    "delete_url": None,
                    "delete_type": None,})

For your particular application, you will just have to make sure that the correct auth headers are still sent with each request. Hope this helps! I was struggling with this problem for a while ;)
When using plupload, the solution might look like this one:

$("#uploader").plupload({
    // General settings
    runtimes : 'html5,flash,silverlight,html4',
    url : "/uploads/",

    // Maximum file size
    max_file_size : '20mb',

    chunk_size: '128kb',

    // Specify what files to browse for
    filters : [
        {title : "Image files", extensions : "jpg,gif,png"},
    ],

    // Enable ability to drag'n'drop files onto the widget (currently only HTML5 supports that)
    dragdrop: true,

    // Views to activate
    views: {
        list: true,
        thumbs: true, // Show thumbs
        active: 'thumbs'
    },

    // Flash settings
    flash_swf_url : '/static/js/plupload-2.1.2/js/plupload/js/Moxie.swf',

    // Silverlight settings
    silverlight_xap_url : '/static/js/plupload-2.1.2/js/plupload/js/Moxie.xap'
});

And your flask-python code in such a case would be similar to this:

from werkzeug import secure_filename

# Upload files
@app.route('/uploads/', methods=['POST'])
def results():
    content = request.files['file'].read()
    filename = secure_filename(request.values['name'])

    with open(filename, 'ab+') as fp:
        fp.write(content)

    # send response with appropriate mime type header
    return jsonify({
        "name": filename,
        "size": os.path.getsize(filename),
        "url": 'uploads/' + filename,})

Plupload always sends chunks in exactly the same order, from first to last, so you do not have to bother with seek or anything like that.
GAE - how to use blobstore stub in testbed?
My code goes like this:

self.testbed.init_blobstore_stub()
upload_url = blobstore.create_upload_url('/image')
upload_url = re.sub('^http://testbed\.example\.com', '', upload_url)
response = self.testapp.post(upload_url, params={
    'shopid': id,
    'description': 'JLo',
}, upload_files=[('file', imgPath)])
self.assertEqual(response.status_int, 200)

How come it shows a 404 error? For some reason, the upload path does not seem to exist at all.
You can't do this. I think the problem is that webtest (which I assume is where self.testapp came from) doesn't work well with testbed blobstore functionality. You can find some info at this question. My solution was to override unittest.TestCase and add the following methods:

def create_blob(self, contents, mime_type):
    "Since uploading blobs doesn't work in testing, create them this way."
    fn = files.blobstore.create(mime_type=mime_type,
                                _blobinfo_uploaded_filename="foo.blt")
    with files.open(fn, 'a') as f:
        f.write(contents)
    files.finalize(fn)
    return files.blobstore.get_blob_key(fn)

def get_blob(self, key):
    return self.blobstore_stub.storage.OpenBlob(key).read()

You will also need the solution here. For my tests where I would normally do a get or post to a blobstore handler, I instead call one of the two methods above. It is a bit hacky but it works. Another solution I am considering is to use Selenium's HtmlUnit driver. This would require the dev server to be running but should allow full testing of blobstore and also javascript (as a side benefit).
I think Kekito is right, you cannot POST to the upload_url directly. But if you want to test the BlobstoreUploadHandler, you can fake the POST request it would normally receive from the blobstore in the following way. Assuming your handler is at /handler:

import email
...

def test_upload(self):
    blob_key = 'abcd'
    # The blobstore upload handler receives a multipart form request
    # containing uploaded files. But instead of containing the actual
    # content, the files contain an 'email' message that has some meta
    # information about the file. They also contain a blob-key that is
    # the key to get the blob from the blobstore
    # see blobstore._get_upload_content
    m = email.message.Message()
    m.add_header('Content-Type', 'image/png')
    m.add_header('Content-Length', '100')
    m.add_header('X-AppEngine-Upload-Creation', '2014-03-02 23:04:05.123456')
    # This needs to be valid base64 encoded
    m.add_header('content-md5', 'd74682ee47c3fffd5dcd749f840fcdd4')
    payload = m.as_string()
    # The blob-key in the Content-type is important
    params = [('file', webtest.forms.Upload('test.png', payload,
                                            'image/png; blob-key=' + blob_key))]
    self.testapp.post('/handler', params, content_type='blob-key')

I figured that out by digging into the blobstore code. The important bit is that the POST request that the blobstore sends to the UploadHandler doesn't contain the file content. Instead, it contains an "email message" (well, information encoded like in an email) with metadata about the file (content-type, content-length, upload time and md5). It also contains a blob-key that can be used to retrieve the file from the blobstore.