I just worked through the Bottle tutorial and found the helpful table below showing where each type of request attribute can be accessed (I hope I get the format right):
Attribute            GET Form fields   POST Form fields   File Uploads
BaseRequest.query    yes               no                 no
BaseRequest.forms    no                yes                no
BaseRequest.files    no                no                 yes
BaseRequest.params   yes               yes                no
BaseRequest.GET      yes               no                 no
BaseRequest.POST     no                yes                yes
Of course I wanted to try it out myself. Because Bottle's data structures are special thread-safe versions, and I wanted to use json to print them in a sensible format, I wrote the following (working) test program:
from bottle import request, Bottle
import json

ondersoek = Bottle()

@ondersoek.get('/x')
@ondersoek.post('/x')
def show_everything():
    PythonDict = {}
    PythonDict['forms'] = {}
    for item in request.forms:
        PythonDict['forms'][item] = request.forms.get(item)
    PythonDict['query'] = {}
    for item in request.query:
        PythonDict['query'][item] = request.query.get(item)
    # FileUpload objects don't serialize to JSON, so store just the file names
    PythonDict['files'] = {}
    for item in request.files:
        PythonDict['files'][item] = request.files.get(item).filename
    PythonDict['GET'] = {}
    for item in request.GET:
        PythonDict['GET'][item] = request.GET.get(item)
    # note: with multipart uploads, request.POST would contain FileUpload objects too
    PythonDict['POST'] = {}
    for item in request.POST:
        PythonDict['POST'][item] = request.POST.get(item)
    PythonDict['params'] = {}
    for item in request.params:
        PythonDict['params'][item] = request.params.get(item)
    return json.dumps(PythonDict, indent=3) + "\n"

ondersoek.run(host='localhost', port=8080, reloader=True)
This works, I get:
tahaan@Komputer:~/Projects$ curl -G -d dd=dddd http://localhost:8080/x?q=qqq
{
   "forms": {},
   "query": {
      "q": "qqq",
      "dd": "dddd"
   },
   "files": {},
   "GET": {
      "q": "qqq",
      "dd": "dddd"
   },
   "POST": {},
   "params": {
      "q": "qqq",
      "dd": "dddd"
   }
}
And
tahaan@Komputer:~/Projects$ curl -X POST -d dd=dddd http://localhost:8080/x?q=qqq
{
   "forms": {
      "dd": "dddd"
   },
   "query": {
      "q": "qqq"
   },
   "files": {},
   "GET": {
      "q": "qqq"
   },
   "POST": {
      "dd": "dddd"
   },
   "params": {
      "q": "qqq",
      "dd": "dddd"
   }
}
I'm quite sure that this is not thread safe, because I'm copying the data one item at a time from the Bottle data structures into native Python data structures. Right now I'm still using the default non-threaded server, but for performance reasons I will want to use a threaded server like CherryPy at some point. The question therefore is: how do I get data out of Bottle, or any other similar thread-safe dict, into something that can easily be converted to JSON? Does Bottle by any chance expose a FormsDict-to-JSON function somewhere?
Your code is thread safe. I.e., if you ran it in a multithreaded server, it'd work just fine.
This is because a multithreaded server still handles each request within a single thread. You have no global data; all the data in your code is contained within a single request, which means it's confined to a single thread.
For example, the Bottle docs for the Request object say (emphasis mine):
A thread-local subclass of BaseRequest with a different set of attributes for each thread. There is usually only one global instance of this class (request). If accessed during a request/response cycle, this instance always refers to the current request (even on a multithreaded server).
In other words, every time you access request in your code, Bottle does a bit of "magic" to give you a thread-local Request object. This object is not global; it is distinct from all other Request objects that may exist concurrently, e.g. in other threads. As such, it is thread safe.
Edit in response to your question about PythonDict in particular: This line makes your code thread-safe:
PythonDict={}
It's safe because you're creating a new dict every time a thread hits that line of code, and each dict you create is local to the thread that created it. (In somewhat more technical terms: the name PythonDict lives in the function's local scope, so each thread's stack frame gets its own dict.)
This is in contrast to the case where your threads were sharing a global dict; in that case, your suspicion would be right: it would not be thread-safe. But in your code the dict is local, so no thread-safety issues apply.
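As for getting a plain dict that json.dumps accepts: there's no dedicated FormsDict-to-JSON helper that I know of, but a dict() copy works, since FormsDict behaves like a mapping. A minimal sketch (the /dump route is just for illustration):

from bottle import Bottle, request
import json

app = Bottle()

@app.route('/dump', method=['GET', 'POST'])
def dump():
    # dict() copies the thread-local FormsDict into a plain dict; it keeps
    # only one value per key, so use request.forms.getall(key) if you need
    # every value for a repeated key.
    plain = {
        'query': dict(request.query),
        'forms': dict(request.forms),
        'params': dict(request.params),
    }
    return json.dumps(plain, indent=3)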
Hope that helps!
As far as I can see there's no reason to believe that there's a problem with threads, because your request is being served by Bottle in a single thread. Also there are no asynchronous calls in your own code that could spawn new threads that access shared variables.
I'm working on some Dockerized code in Python (3.5) and Flask (1.1.1) against a CouchDB database (2.3.1), using the cloudant Python library (2.12.0), which seems to be the most up-to-date library for working with CouchDB.
I'm trying to fetch and use a view from the database, but it is not working. I can fetch documents, and work with the database normally, but I can't use the view.
I've added a print statement for the object that should hold the design document at the program start, and I see that the document shows as having no views (or anything at all) AND the CouchDB log shows NO requests for the design document being made.
I also tried to both get the design document and use the view via curl using the same URL and username/password, and both actions work successfully.
Here's sample code that fails:
from flask import Flask, render_template, request, g
from cloudant.client import CouchDB
from cloudant.view import View
from cloudant.design_document import DesignDocument
import requests

application = Flask(__name__)
application.config.from_pyfile("config.py")

couch = CouchDB(application.config['COUCHDB_USER'],
                application.config['COUCHDB_PASSWORD'],
                url=application.config['COUCHDB_SERVER'],
                connect=True, auto_renew=True)
database = couch[application.config['COUCHDB_DATABASE']]

views = DesignDocument(database, '_design/vistas')
print(views)
print(views.list_views())

@application.route("/", methods=['GET', 'POST'])
def index():
    titulos = []
    for pelicula in View(views, 'titulos_peliculas'):
        titulos.append({"id": pelicula['id'], "titulo": pelicula['key']})
    return render_template('menu.html', titulos=titulos)
In that code, the print of the design document (views) returns:
{'lists': {}, 'indexes': {}, 'views': {}, 'shows': {}, '_id': '_design/vistas'}
With empty views as shown... And the CouchDB log only shows the login to the database and a request for the DB info:
couchdb:5984 172.23.0.4 undefined POST /_session 200 ok 69
couchdb:5984 172.23.0.4 vmb_web HEAD //peliculas 200 ok 232
No other queries at all.
No errors in the app log either, even when I call the route that uses the views:
[pid: 21|app: 0|req: 1/1] 172.23.0.1 () {52 vars in 1225 bytes} [Mon Aug 5 15:03:24 2019] POST / => generated 1148 bytes in 56 msecs (HTTP/1.1 200) 2 headers in 81 bytes (1 switches on core 0)
And, as I said, I can get and use the document:
curl http://vmb_web:password@127.0.0.1:999/peliculas/_design/vistas
{"_id":"_design/vistas","_rev":"1-e8108d41a6627ea61b9a89a637f574eb","language":"javascript","views":{"peliculas":{"map":"function(doc) { if (doc.schema == 'pelicula') { emit(doc.titulo, null); for(i=0;i<doc.titulos_alt.length;i++) { emit(doc.titulos_alt[i],null); } for(i=0;i<doc.directores.length;i++) { emit(doc.directores[i].nombre,null); } for(i=0;i<doc.actores.length;i++) { emit(doc.actores[i].nombre,null); } for(i=0;i<doc.escritores.length;i++) { emit(doc.escritores[i].nombre,null); } for(i=0;i<doc.etiquetas.length;i++) { emit(doc.etiquetas[i],null); } } }"},"titulos_peliculas":{"map":"function(doc) { if ((doc.schema == 'pelicula') && (doc.titulo)) { emit(doc.titulo, null); } }"},"archivos_peliculas":{"map":"function(doc) { if ((doc.schema == 'pelicula') && (doc.titulo)) { emit(doc.titulo, doc.archivo); } }"},"titulo_rev":{"map":"function(doc) { if ((doc.schema == 'pelicula') && (doc.titulo)) { emit(doc.titulo, doc._rev); } }"}}}
I'm answering my own question, in case someone stumbles upon this in the future. I got the answer from Esteban Laver in the python-cloudant GitHub issue tracker, and it is what @chrisinmtown mentions in their answer here as well.
I was failing to call fetch() on the design document before using it.
Another good suggestion was to use the get_view_result helper method for the database object which takes care of fetching the design document and instantiating the View object from the selected view all at once.
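A minimal sketch of both fixes, reusing the names from the question (database as configured above; rows are read through the view's result property):

from cloudant.view import View
from cloudant.design_document import DesignDocument

# Fix 1: fetch() actually loads the design document from CouchDB --
# the constructor alone only builds an empty local object.
views = DesignDocument(database, '_design/vistas')
views.fetch()
view = View(views, 'titulos_peliculas')
for row in view.result:
    print(row['id'], row['key'])

# Fix 2: or let the database object fetch the design document and
# build the view result in one call.
result = database.get_view_result('_design/vistas', 'titulos_peliculas')
for row in result:
    print(row['id'], row['key'])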
I believe the code posted above creates a new DesignDocument object, and does not search for an existing DesignDocument. After creating that object, it looks like you need to call its fetch() method and then check its views property. HTH.
p.s. promoting my comment to an answer, hope that's cool in SO land these days :)
I am trying to write some test cases for code I've developed using Elasticsearch and Django. The concept is straightforward: I just want to test a GET request, which will be an Elasticsearch query. However, I am constructing the query as a nested dict. When I pass the nested dict to the Client object in the test script, it gets passed through Django until it ends up at the urlencode function, which doesn't look like it can handle nested dicts, only MultiValueDicts. Any suggestions or solutions? I don't want to use any additional packages, as I don't want to depend on potentially unsupported packages for this application.
Generic Code:
import elasticsearch
from django.test import Client, TestCase


class MyViewTest(TestCase):
    es_connection = elasticsearch.Elasticsearch("localhost:9200")

    def test_es_query(self):
        client = Client()
        query = {
            "query": {
                "term": {
                    "city": "some city"
                }
            }
        }
        response = client.get("", query)
        print(response)
Link to the urlencode function: django.utils.http.urlencode
The issue is clearly at the conditional statement where the urlencode function checks whether the dictionary value is a str or bytes object. If it isn't, it creates a generator object which can never access the nested portions of the dictionary.
EDIT: 07/25/2018
So I was able to come up with a temporary workaround to at least run the test. However, it is ugly and I feel like there must be a better way. The first thing I tried was specifying the content_type and converting the dict to a JSON string first. However, Django still kicked back an error in the urlencode function:
class MyViewTest(TestCase):
    es_connection = elasticsearch.Elasticsearch("localhost:9200")

    def test_es_query(self):
        client = Client()
        query = {
            "query": {
                "term": {
                    "city": "some city"
                }
            }
        }
        response = client.get("", data=json.dumps(query), content_type="application/json")
        print(response)
So instead I had to do:
class MyViewTest(TestCase):
    es_connection = elasticsearch.Elasticsearch("localhost:9200")

    def test_es_query(self):
        client = Client()
        query = {
            "query": {
                "term": {
                    "city": "some city"
                }
            }
        }
        query = json.dumps(query)
        response = client.get("", data={"q": query}, content_type="application/json")
        print(response)
This let me send the HttpRequest to my View and parse it back out using:
json.loads(request.GET["q"])
Then I was able to successfully get the requested data from Elasticsearch and return it as an HttpResponse. I feel like there has to be a way in Django to just pass a JSON-formatted string directly to the Client object's get function. I thought specifying the content_type as application/json would work, but it still calls the urlencode function. Any ideas? I really don't want to put this current workaround into production.
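One possible cleaner route (a sketch, untested against this setup): django.test.Client inherits RequestFactory's generic() method, which attaches a raw body to any HTTP method and skips urlencode entirely, assuming the view reads request.body rather than request.GET:

import json

from django.test import Client

client = Client()
query = {"query": {"term": {"city": "some city"}}}

# generic() takes the method name, path, raw body, and content type,
# so the nested dict never goes through urlencode.
response = client.generic(
    "GET", "",
    data=json.dumps(query),
    content_type="application/json",
)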
I am building an Alexa skill for my application. When the user asks 'what is my account status?', this intent returns a sequence of statements related to the user's account. The API gives the following response:
response = [{
..
text: 'Total orders are 41, Delivered 28'
..
},
{
..
text: 'Today orders are 12, Delivered 2'
..
},
{}]
How do I build a response sequence based on the API response?
With this intent, I get a response from the API with a set of statements, and Alexa should prompt each statement one by one. If the user says 'next' in between statements while Alexa is prompting, it should go to the next statement in the response array.
First, when the user says "what is my account status?", your intent will be called and you will get the response as a list; on the first call you will read the 0th item.
API Result:
response = [{
..
text: 'Total orders are 41, Delivered 28'
..
},
{
..
text: 'Today orders are 12, Delivered 2'
..
},
{}]
You need to store information in session attributes: the intent name, the index you last read (0 in the case of the first call), and so on.
Now you need to set up one more intent which will be triggered on keywords like 'next'. In its code you will check the values of the session attributes and build your response according to them. For example, you would check the previous intent name and the previous index. If all is fine, you modify the session attributes and respond to the user.
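For illustration, a rough sketch of building such a response by hand (field names follow the Alexa Skills Kit response format; 'statements' holds the API result and 'StatusIntent' is a placeholder intent name):

def build_response(statements, index):
    # Speak the current statement and remember the position so the
    # 'next' intent can continue from sessionAttributes.
    return {
        "version": "1.0",
        "sessionAttributes": {
            "lastIntent": "StatusIntent",
            "index": index,
        },
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": statements[index]["text"],
            },
            # Keep the session open unless this was the last statement.
            "shouldEndSession": index == len(statements) - 1,
        },
    }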
Hope it helps.
Since you mentioned Python, I would suggest taking a look at Flask-Ask, which provides two main response types: statement and question.
As sid8491 mentioned, you will need to store info in a session to keep track of which response (from the JSON) needs to be returned next. You can use Redis for this purpose via its Python client library.
Assuming the json response is stored in db (or somewhere), and can be accessed in a list, let's say your interaction model looks something like this:
{
  "languageModel": {
    "intents": [
      {
        "name": "NextIntent",
        "samples": ["next", "tell me more"]
      },
      {
        "name": "StopIntent",
        "samples": ["stop"]
      },
      {
        "name": "StatusIntent",
        "samples": ["what is my account status"]
      }
    ],
    "invocationName": "orders"
  }
}
You can use the following steps (using Redis and Flask-Ask for this example):
on 'StatusIntent', store the session and return the first response:

redis.set("session_key", 0)
return statement(response[0])  # assuming responses are stored in a list

on 'NextIntent', get the value stored in the session; if present, return the next response:

value = redis.get("session_key")
if not value:  # session is expired
    return statement("I don't understand")
redis.set("session_key", int(value) + 1)
return statement(response[int(value) + 1])

on 'StopIntent', remove "session_key" from redis:

redis.delete("session_key")
return statement("Ok. I am here if you need me.")
It's not the actual code but simply intended to give you an idea; a fuller end-to-end sketch follows below. Hope it helps.
:)
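Fleshing out those steps, a hedged sketch with Flask-Ask and the redis client (intent names from the interaction model above; the 'response' list stands in for whatever your API returns):

import redis
from flask import Flask
from flask_ask import Ask, statement

app = Flask(__name__)
ask = Ask(app, "/")
store = redis.StrictRedis()

# Stand-in for the list fetched from your API.
response = [
    {"text": "Total orders are 41, Delivered 28"},
    {"text": "Today orders are 12, Delivered 2"},
]

@ask.intent("StatusIntent")
def account_status():
    store.set("session_key", 0)
    return statement(response[0]["text"])

@ask.intent("NextIntent")
def next_statement():
    value = store.get("session_key")
    if value is None:  # session expired or never started
        return statement("I don't understand")
    index = int(value) + 1
    if index >= len(response):
        return statement("That is everything I have.")
    store.set("session_key", index)
    return statement(response[index]["text"])

@ask.intent("StopIntent")
def stop():
    store.delete("session_key")
    return statement("Ok. I am here if you need me.")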
I am creating an imposter process using Mountebank and want to record the requests and responses. To create an HTTP imposter I used the following curl command, as described in the documentation:
curl -i -X POST -H 'Content-Type: application/json' http://127.0.0.1:2525/imposters --data '{
  "port": 6568,
  "protocol": "http",
  "name": "proxyAlways",
  "stubs": [
    {
      "responses": [
        {
          "proxy": {
            "to": "http://localhost:8000",
            "mode": "proxyAlways",
            "predicateGenerators": [
              {
                "matches": {
                  "method": true,
                  "path": true,
                  "query": true
                }
              }
            ]
          }
        }
      ]
    }
  ]
}'
I have another server running at http://localhost:8000, which receives all the requests coming in on port 6568.
Output of my server now:
mb
info: [mb:2525] mountebank v1.6.0-beta.1102 now taking orders - point your browser to http://localhost:2525 for help
info: [mb:2525] POST /imposters
info: [http:6568 proxyAlways] Open for business...
info: [http:6568 proxyAlways] ::ffff:127.0.0.1:55488 => GET /
I want to record all the requests and responses going through, but I'm unable to do so right now. When I enter curl -i -X GET -H 'Content-Type: application/json' http://127.0.0.1:6568/ it gives me a response, but how do I store it?
Also, can anyone explain to me the meaning of
save off the response in a new stub in front of the proxy response:
(from this Mountebank documentation)
How to store proxy results
The short answer is that mountebank is already storing it. You can verify that by looking at the output of curl http://localhost:2525/imposters/6568. The real question is: how do you replay the stored responses?
The common usage scenario with mountebank proxies is that you record the proxy responses on one running instance of mb, save off the results, and then start the next instance of mb with those saved responses. The way you would do that is to have the system under test talk to service you're trying to stub out via the mountebank proxy under whatever conditions you need it to, and then save off the responses (and their request predicates) by sending an HTTP GET or DELETE to http://localhost:2525/imposters/6568?removeProxies=true&replayable=true. You feed the JSON body of that response into the next mb instance, either through the REST API, or by saving it on disk and starting mountebank with a command of something like mb --configfile savedProxyResults.json. At that point, mountebank is providing the exact same responses to the requests without connecting to the downstream service.
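As a sketch of that save step using Python's requests library (port and filename from the discussion above):

import requests

# Ask mountebank for a replayable copy of the imposter, with the
# proxies stripped out (the URL described above).
resp = requests.get(
    "http://localhost:2525/imposters/6568",
    params={"removeProxies": "true", "replayable": "true"},
)

# Persist the body; it can be fed to the next mb instance through the
# REST API, or via a config file as described above.
with open("savedProxyResults.json", "w") as f:
    f.write(resp.text)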
Proxies create new stubs
Your last question revolves around understanding how the proxyAlways mode works. The default proxyOnce mode means that the first time a mountebank proxy sees a request that uniquely satisfies a predicate, it queries the downstream service and saves the response. The next time it sees a request that satisfies the exact same predicates, it avoids the downstream call and simply returns the saved result. It only proxies downstream once for the same request. The proxyAlways mode, on the other hand, always sends the requests downstream, and saves a list of responses for the same request.
To make this clear, in the example you copied we care about the method, path, and query fields on the request, so if we see two requests with exactly the same combination of those three fields, we need to know whether we should send the saved response back or continue to proxy. Imagine we first sent:
GET /test?q=elephants
The method is GET, the path is /test, and the query is q=elephants. Since this is the first request, we send it to the downstream server, which returns a body of:
No results
That will be true regardless of which proxy mode you set mountebank to, since it has to query downstream at least once. Now suppose, while we're thinking about it, the downstream service added an elephant, and then our system under test makes the same call:
GET /test?q=elephants
If we're in proxyOnce mode, the fact that the elephant was added to the real service simply won't matter, we'll continue to return our saved response:
No results
You'd see the same behavior if you shut the mountebank process down and restarted it as described above. In the config file you saved, you'd see something like this (simplifying a bit):
"stubs": [
{
"predicates": [{
"deepEquals': {
"method": "GET",
"path": "/test",
"query": { "q": "elephants" }
}
}],
"responses": [
{
"is": {
"body": "No results"
}
}
]
}
]
There's only the one stub. If, on the other hand, we use proxyAlways, then the second call to the GET /test?q=elephants would yield the new elephant:
1. Jumbo reporting for duty!
This is important, because if we shut down the mountebank process and restart it, now our tests can rely on the fact that we'll cycle through both responses:
"stubs": [
{
"predicates": [{
"deepEquals': {
"method": "GET",
"path": "/test",
"query": { "q": "elephants" }
}
}],
"responses": [
{
"is": {
"body": "No results"
}
},
{
"is": {
"body": "1. Jumbo reporting for duty!"
}
}
]
}
]
I have some custom Flask methods in an Eve app that need to communicate with a telnet device and return a result, but I also want to pre-populate data into some resources after retrieving data from this telnet device, like so:
@app.route("/get_vlan_description", methods=['POST'])
def get_vlan_description():
    switch = prepare_switch(request)
    result = dispatch_switch_command(switch, 'get_vlan_description')
    # TODO: populate vlans resource with result data and return status
My settings.py looks like this:
SERVER_NAME = '127.0.0.1:5000'

DOMAIN = {
    'vlans': {
        'id': {
            'type': 'integer',
            'required': True,
            'unique': True
        },
        'subnet': {
            'type': 'string',
            'required': True
        },
        'description': {
            'type': 'boolean',
            'default': False
        }
    }
}
I'm having trouble finding docs or source code for how to access a mongo resource directly and insert this data.
Have you looked into the on_insert hook? From the documentation:
When documents are about to be stored in the database, both on_insert(resource, documents) and on_insert_<resource>(documents) events are raised. Callback functions could hook into these events to arbitrarily add new fields, or edit existing ones. on_insert is raised on every resource being updated while on_insert_<resource> is raised when the <resource> endpoint has been hit with a POST request. In both circumstances, the event will be raised only if at least one document passed validation and is going to be inserted. documents is a list and only contains documents ready for insertion (payload documents that did not pass validation are not included).
So, if I get what you want to achieve, you could have something like this:
from eve import Eve

def telnet_service(resource, documents):
    """
    Fetch data from the telnet device and
    update 'documents' accordingly.
    """
    pass

app = Eve()
app.on_insert += telnet_service

if __name__ == "__main__":
    app.run()
Note that this way you don't have to mess with the database directly as Eve will take care of that.
If you don't want to store the telnet data but only send it back along with the fetched documents, you can hook to on_fetch instead.
Lastly, if you really want to use the data layer directly, you can use app.data.driver, as seen in this example snippet.
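For that last option, a rough sketch of the question's route (assuming the default MongoDB data layer, where app.data.driver is a PyMongo handle, and that result is a list of dicts matching the 'vlans' schema):

from flask import request

@app.route("/get_vlan_description", methods=['POST'])
def get_vlan_description():
    switch = prepare_switch(request)
    result = dispatch_switch_command(switch, 'get_vlan_description')
    # Bypass Eve's validation and insert straight into the 'vlans' collection.
    app.data.driver.db['vlans'].insert_many(result)
    return "ok"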
Use post_internal.
Usage example:
from run import app
from eve.methods.post import post_internal

payload = {
    "firstname": "Ray",
    "lastname": "LaMontagne",
    "role": ["contributor"]
}

with app.test_request_context():
    x = post_internal('people', payload)
    print(x)