python-vuforia bad http request - python

I'm currently working on an augmented reality app using Django and the Vuforia SDK.
Vuforia provides an API to manage target images in the Vuforia cloud database.
I stumbled across a Python script to communicate with Vuforia's REST API: https://github.com/dadoeyad/python-vuforia
The functions that fetch data from the database work nicely,
but I can't figure out how to use the function that adds data to the database.
from augmented import vuforia

upload = vuforia.Vuforia()
data = '{"name":"tarmac","width":"265.0","image":"/9j/4AAQSkZJR..."}'
upload.add_target(data)
This gives me an error: Bad Http Request.
Does anyone know how the data needs to be formatted?
The docs also seem to have typos:
https://developer.vuforia.com/resources/dev-guide/adding-target-cloud-database-api

In the library there is an example of how to add a target.
v = Vuforia(server_access, server_secret)

# The image and the metadata must be base64-encoded strings
image_file = open('PATH_TO_IMAGE_FILE', 'rb')
image = base64.b64encode(image_file.read())
meta = "this is the metadata"
metadata = base64.b64encode(meta)

# add_target() expects a dict, not a JSON-encoded string
print v.add_target({"name": "zxczxc", "width": "550", "image": image,
                    "application_metadata": metadata, "active_flag": 1})
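Building on that example, here is a hedged sketch of how the snippet from the question could be adjusted; the keys, file path and width are placeholders, and the assumption is that the library expects a plain dict with a base64-encoded image rather than a JSON string:

import base64
from augmented import vuforia

# Cloud database access keys (placeholders)
server_access = 'YOUR_ACCESS_KEY'
server_secret = 'YOUR_SECRET_KEY'
upload = vuforia.Vuforia(server_access, server_secret)

# Base64-encode the raw JPEG bytes instead of pasting them into a JSON string
with open('tarmac.jpg', 'rb') as f:
    image = base64.b64encode(f.read())

# Pass a dict, not a JSON-encoded string
upload.add_target({"name": "tarmac", "width": "265.0", "image": image})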

Related

Moving a file using a python web service [duplicate]

This is a two-part question: I have seen the individual pieces discussed, but can't seem to get the recommended suggestions to work together. I want to create a web service that stores images and their metadata passed from a caller, and run a test call from Postman to make sure it is working. To pass an image (Drew16.jpg) to the web service via Postman, it appears I need something like this:
For the web service, I have some python/flask code to read the request (one of many variations I have tried):
from flask import Flask, jsonify, request, render_template
from flask_restful import Resource, Api, reqparse
...
    def post(self, name):
        request_data = request.get_json()
        userId = request_data['UserId']
        type = request_data['ImageType']
        image = request.files['Image']
I had no problem with the data portion as straight JSON, but adding the image has been a bugger. Where am I going wrong in my Postman config? What is the actual set of Python commands for reading the metadata and the file from the POST? TIA
Pardon the almost blog post. I am posting this because while you can find partial answers in various places, I haven't run across a complete post anywhere, which would have saved me a ton of time. The problem is you need both sides to the story in order to verify either.
So I want to send a request using Postman to a Python/Flask web service. It has to have an image along with some metadata.
Here are the settings for Postman (URL, Headers):
And Body:
Now on to the web service. Here is a bare bones service which will take the request, print the metadata and save the file:
from flask import Flask, request

app = Flask(__name__)

# POST - just get the image and metadata
@app.route('/RequestImageWithMetadata', methods=['POST'])
def post():
    request_data = request.form['some_text']
    print(request_data)
    imagefile = request.files.get('imagefile', '')
    imagefile.save('D:/temp/test_image.jpg')
    return "OK", 200

app.run(port=5000)
Enjoy!
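If you would rather verify the service without Postman, here is a minimal client-side sketch using the requests library; the URL, field names ('some_text', 'imagefile') and file path mirror the service above and are otherwise assumptions:

import requests

# Multipart POST: regular form fields go in `data`, the file goes in `files`
with open('Drew16.jpg', 'rb') as f:
    resp = requests.post(
        'http://localhost:5000/RequestImageWithMetadata',
        data={'some_text': 'some metadata about Drew16'},
        files={'imagefile': f},
    )
print(resp.status_code, resp.text)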
Make sure `request.files['Image']` contains the image you are sending, and follow http://flask.pocoo.org/docs/1.0/patterns/fileuploads/ to save the file to your file system. Something like
file = request.files['Image']
file.save('./test_image.jpg')
might do what you want, while you will have to work out the details of how the file should be named and where it should be placed.
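For the naming and placement details, the Flask upload pattern linked above relies on werkzeug's secure_filename; here is a hedged sketch, with the upload folder as an assumption:

import os
from werkzeug.utils import secure_filename

UPLOAD_FOLDER = './uploads'  # assumed destination directory

file = request.files['Image']
# Sanitize the client-supplied filename before writing it to disk
filename = secure_filename(file.filename)
file.save(os.path.join(UPLOAD_FOLDER, filename))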

Export spreadsheet as text/csv using Drive v3 gives 500 Internal Error

I was trying to export a Google Spreadsheet in csv format using the Google client library for Python:
# OAuth and setups...
req = g['service'].files().export_media(fileId=fileid, mimeType=MIMEType)
fh = io.BytesIO()
downloader = http.MediaIoBaseDownload(fh, req)
# Other file IO handling...
This works for MIMEType: application/pdf, MS Excel, etc.
According to Google's documentation, text/csv is supported. But when I try to make a request, the server gives a 500 Internal Error.
Even using Google's Drive API playground, it gives the same error.
What I tried: like in the v2 API, I added a field
gid = 0
to the request to specify the worksheet, but then it's a bad request.
This is a known bug in Google's code. https://code.google.com/a/google.com/p/apps-api-issues/issues/detail?id=4289
However, if you manually build your own request, you can download the whole file in bytes (the media management stuff won't work).
With file as the file ID and http as the http object that you've authorized against, you can download the file with:
from apiclient.http import HttpRequest

def postproc(*args):
    return args[1]

data = HttpRequest(http=http,
                   postproc=postproc,
                   uri='https://docs.google.com/feeds/download/spreadsheets/Export?key=%s&exportFormat=csv' % file,
                   headers={}).execute()
data here is a bytes object that contains your CSV. You can open it with something like:
import io

lines = io.TextIOWrapper(io.BytesIO(data), encoding='utf-8', errors='replace')
for line in lines:
    # Do whatever you need with each line
    pass
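If you want parsed rows rather than raw lines, the standard csv module can wrap the same text stream; a small sketch:

import csv
import io

reader = csv.reader(io.TextIOWrapper(io.BytesIO(data), encoding='utf-8', errors='replace'))
for row in reader:
    print(row)  # each row is a list of column values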
You just need to implement an Exponential Backoff.
Take a look at this documentation of ExponentialBackOffPolicy.
The idea is that the servers are only temporarily unavailable, and they should not be overwhelmed when they are trying to get back up.
The default implementation requires back off for 500 and 503 status codes. Subclasses may override if different status codes are required.
Here is a snippet of an Exponential Backoff implementation from the first link:
ExponentialBackOff backoff = ExponentialBackOff.builder()
    .setInitialIntervalMillis(500)
    .setMaxElapsedTimeMillis(900000)
    .setMaxIntervalMillis(6000)
    .setMultiplier(1.5)
    .setRandomizationFactor(0.5)
    .build();
request.setUnsuccessfulResponseHandler(new HttpBackOffUnsuccessfulResponseHandler(backoff));
You may want to look at this documentation for the summary of the ExponentialBackoff implementation.
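Since the question is about the Python client, here is a rough Python sketch of the same retry-with-backoff idea; the retry limits are arbitrary and this is an approximation of the pattern, not a library API:

import random
import time

from apiclient.errors import HttpError

def execute_with_backoff(request, max_retries=5):
    """Retry an API request on 500/503 responses with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return request.execute()
        except HttpError as err:
            if err.resp.status not in (500, 503):
                raise
            # Sleep 1, 2, 4, ... seconds plus random jitter before retrying
            time.sleep(2 ** attempt + random.random())
    return request.execute()  # final attempt, let any error propagate

# Usage (assuming `req` is the export_media request from the question):
# data = execute_with_backoff(req)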

How to use Bigquery streaming insertall on app engine & python

I would like to develop an App Engine application that directly streams data into a BigQuery table.
According to Google's documentation there is a simple way to stream data into BigQuery:
http://googlecloudplatform.blogspot.co.il/2013/09/google-bigquery-goes-real-time-with-streaming-inserts-time-based-queries-and-more.html
https://developers.google.com/bigquery/streaming-data-into-bigquery#streaminginsertexamples
(Note: in the above link you should select the Python tab, not the Java one.)
Here is the sample code snippet showing how a streaming insert should be coded:
body = {"rows": [
    {"json": {"column_name": 7.7}}
]}
response = bigquery.tabledata().insertAll(
    projectId=PROJECT_ID,
    datasetId=DATASET_ID,
    tableId=TABLE_ID,
    body=body).execute()
Although I've downloaded the client API, I couldn't find any reference to the "bigquery" module/object referenced in Google's example above.
Where should the bigquery object (from the snippet) come from?
Can anyone show a more complete way to use this snippet (with the right imports)?
I've been searching for this a lot and found the documentation confusing and partial.
Minimal working example (as long as you fill in the right IDs for your project):
import httplib2
from apiclient import discovery
from oauth2client import appengine

_SCOPE = 'https://www.googleapis.com/auth/bigquery'

# Change the following 3 values:
PROJECT_ID = 'your_project'
DATASET_ID = 'your_dataset'
TABLE_ID = 'TestTable'

body = {"rows": [
    {"json": {"Col1": 7}}
]}

credentials = appengine.AppAssertionCredentials(scope=_SCOPE)
http = credentials.authorize(httplib2.Http())
bigquery = discovery.build('bigquery', 'v2', http=http)

response = bigquery.tabledata().insertAll(
    projectId=PROJECT_ID,
    datasetId=DATASET_ID,
    tableId=TABLE_ID,
    body=body).execute()

print response
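Note that insertAll reports per-row failures in the response body rather than raising an exception; a short hedged check you may want to add after the call:

# The response contains an 'insertErrors' list when some rows were rejected
if 'insertErrors' in response:
    for error in response['insertErrors']:
        print 'Row %s failed: %s' % (error.get('index'), error.get('errors'))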
As Jordan says: "Note that this uses the appengine robot to authenticate with BigQuery, so you'll need to add the robot account to the ACL of the dataset. Note that if you also want to use the robot to run queries, not just stream, you need the robot to be a member of the project 'team' so that it is authorized to run jobs."
Here is a working code example from an appengine app that streams records to a BigQuery table. It is open source at code.google.com:
http://code.google.com/p/bigquery-e2e/source/browse/sensors/cloud/src/main.py#124
To find out where the bigquery object comes from, see
http://code.google.com/p/bigquery-e2e/source/browse/sensors/cloud/src/config.py
Note that this uses the appengine robot to authenticate with BigQuery, so you'll need to add the robot account to the ACL of the dataset.
Note that if you also want to use the robot to run queries, not just stream, you need the robot to be a member of the project 'team' so that it is authorized to run jobs.

GAE - how to use blobstore stub in testbed?

My code goes like this:
self.testbed.init_blobstore_stub()
upload_url = blobstore.create_upload_url('/image')
upload_url = re.sub('^http://testbed\.example\.com', '', upload_url)
response = self.testapp.post(upload_url, params={
    'shopid': id,
    'description': 'JLo',
}, upload_files=[('file', imgPath)])
self.assertEqual(response.status_int, 200)
How come it shows a 404 error? For some reason the upload path does not seem to exist at all.
You can't do this. I think the problem is that webtest (which I assume is where self.testapp came from) doesn't work well with testbed blobstore functionality. You can find some info at this question.
My solution was to subclass unittest.TestCase and add the following methods:
def create_blob(self, contents, mime_type):
    """Since uploading blobs doesn't work in testing, create them this way."""
    fn = files.blobstore.create(mime_type=mime_type,
                                _blobinfo_uploaded_filename="foo.blt")
    with files.open(fn, 'a') as f:
        f.write(contents)
    files.finalize(fn)
    return files.blobstore.get_blob_key(fn)

def get_blob(self, key):
    return self.blobstore_stub.storage.OpenBlob(key).read()
You will also need the solution here.
For my tests where I would normally do a get or post to a blobstore handler, I instead call one of the two methods above. It is a bit hacky but it works.
Another solution I am considering is to use Selenium's HtmlUnit driver. This would require the dev server to be running but should allow full testing of blobstore and also javascript (as a side benefit).
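For illustration, here is a hedged sketch of how a test might use the create_blob/get_blob helpers above; the handler path, parameter name and payload are assumptions:

def test_process_image(self):
    # Create the blob directly instead of uploading it through a handler
    blob_key = self.create_blob('fake image bytes', 'image/jpeg')
    # Exercise the handler that normally reads the uploaded blob
    response = self.testapp.post('/image', params={'blob_key': str(blob_key)})
    self.assertEqual(response.status_int, 200)
    self.assertEqual(self.get_blob(blob_key), 'fake image bytes')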
I think Kekito is right, you cannot POST to the upload_url directly.
But if you want to test the BlobstoreUploadHandler, you can fake the POST request it would normally receive from the blobstore in the following way. Assuming your handler is at /handler:
import email
import webtest
...

def test_upload(self):
    blob_key = 'abcd'
    # The blobstore upload handler receives a multipart form request
    # containing uploaded files. But instead of containing the actual
    # content, the files contain an 'email' message that has some meta
    # information about the file. They also contain a blob-key that is
    # the key to get the blob from the blobstore
    # (see blobstore._get_upload_content).
    m = email.message.Message()
    m.add_header('Content-Type', 'image/png')
    m.add_header('Content-Length', '100')
    m.add_header('X-AppEngine-Upload-Creation', '2014-03-02 23:04:05.123456')
    # This needs to be valid base64-encoded content
    m.add_header('content-md5', 'd74682ee47c3fffd5dcd749f840fcdd4')
    payload = m.as_string()
    # The blob-key in the Content-Type is important
    params = [('file', webtest.forms.Upload('test.png', payload,
                                            'image/png; blob-key=' + blob_key))]
    self.testapp.post('/handler', params, content_type='blob-key')
I figured that out by digging into the blobstore code. The important bit is that the POST request the blobstore sends to the UploadHandler doesn't contain the file content. Instead, it contains an "email message" (well, information encoded like in an email) with metadata about the file (content-type, content-length, upload time and md5). It also contains a blob-key that can be used to retrieve the file from the blobstore.
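For reference, the handler at /handler that such a test would exercise typically looks something like the following sketch (webapp2 on GAE; the route and the 'file' field name are assumptions):

from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers

class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        # get_uploads() returns the BlobInfo objects parsed from the request
        upload = self.get_uploads('file')[0]
        self.response.write(str(upload.key()))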

Getting HTTP GET variables using Tipfy

I'm currently playing around with tipfy on Google's App Engine and just recently ran into a problem: I can't for the life of me find any documentation on how to use GET variables in my application. I've tried sifting through both tipfy's and Werkzeug's documentation with no success. I know that I can use request.form.get('variable') to get POST variables and **kwargs in my handlers for URL variables, but that's as much as the documentation will tell me. Any ideas?
request.args.get('variable') should work for what I think you mean by "GET data".
Source: http://www.tipfy.org/wiki/guide/request/
The Request object contains all the information transmitted by the client of the application. You will retrieve from it GET and POST values, uploaded files, cookies and header information and more. All these things are so common that you will be very used to it.
To access the Request object, simply import the request variable from tipfy:
from tipfy import request

# GET
request.args.get('foo')

# POST
request.form.get('bar')

# FILES
image = request.files.get('image_upload')
if image:
    # User uploaded a file. Process it.
    # This is the filename as uploaded by the user.
    filename = image.filename
    # This is the file data to process and/or save.
    filedata = image.read()
else:
    # User didn't select any file. Show an error if it is required.
    pass
This works for me (tipfy 0.6):
from tipfy import RequestHandler, Response
from tipfy.ext.session import SessionMiddleware, SessionMixin
from tipfy.ext.jinja2 import render_response
from tipfy import Tipfy

class I18nHandler(RequestHandler, SessionMixin):
    middleware = [SessionMiddleware]

    def get(self):
        language = Tipfy.request.args.get('lang')
        return render_response('hello_world.html', message=language)
