Attach a custom view to spyne - python

I am using Spyne to provide a SOAP interface to my backend, but I need one or more custom views accessible on certain URLs, for example to show some statistics in HTML.
How do I do that?

Please have a look at the multiple protocols example here: https://github.com/arskom/spyne/tree/master/examples/multiple_protocols
If you want to return raw data via HTTP, you must set your output protocol to HttpRpc, set your service's return type to ByteArray or String, and set ctx.transport.mime_type to whatever type you're returning.
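A minimal sketch of such a view along those lines (StatsService, the /stats URL, and the HTML body are illustrative assumptions, not from the linked example):

from wsgiref.simple_server import make_server
from spyne import Application, rpc, ServiceBase, Unicode
from spyne.protocol.http import HttpRpc
from spyne.server.wsgi import WsgiApplication

class StatsService(ServiceBase):
    @rpc(_returns=Unicode)
    def stats(ctx):
        # Raw output: HttpRpc out_protocol plus a String/Unicode return type.
        ctx.transport.mime_type = 'text/html'
        return u'<html><body><h1>Statistics</h1></body></html>'

app = Application([StatsService], tns='example.stats',
                  in_protocol=HttpRpc(), out_protocol=HttpRpc())

if __name__ == '__main__':
    # GET http://localhost:8000/stats would render the HTML view; the SOAP
    # application can be mounted alongside it under a different URL prefix.
    make_server('0.0.0.0', 8000, WsgiApplication(app)).serve_forever()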

Related

I want to get a stream object from Azure Inheritance Iterator ItemPaged - ItemPaged[TableEntity] to Stream (Python). Is it possible?
https://learn.microsoft.com/en-us/python/api/azure-core/azure.core.paging.itempaged?view=azure-python
Updated 11.08.2021
I have a working process to back up Azure Tables to Azure Blob (Current process to backup Azure Tables), but I want to improve it and am considering different options. I am trying to get a stream from Azure Tables so I can use create_blob_from_stream.
I assume you want to stream bytes from the HTTP response, and not use the iterator of objects you receive.
Each API in the SDK supports a keyword argument called raw_response_hook that gives you access to the HTTP response object and lets you use a stream-download API if you want to. Note that since the payload is considered to represent objects, it will be pre-loaded in memory no matter what, but you can still use a streaming syntax nonetheless.
The callback takes a single parameter:
def response_callback(response):
    # Do something with the response
    requests_response = response.internal_response
    # Use the "requests" API now
    for chunk in requests_response.iter_content():
        work_with_chunk(chunk)
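For instance, the hook could be passed to a table query. This is a hedged sketch where the connection string and table name are placeholders; the answer above only states that every SDK API accepts raw_response_hook:

from azure.data.tables import TableClient

table_client = TableClient.from_connection_string(
    '<connection-string>', table_name='mytable')
for entity in table_client.list_entities(raw_response_hook=response_callback):
    pass  # entities are still pre-loaded; the hook sees the raw HTTP response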
Note that this is pretty advanced, you may encounter difficulties, and this might not fit what you want precisely. We are working on a new pattern in the SDK to simplify complex scenarios like this, but it's not shipped yet. You would be able to send and receive raw requests using a send_request method, which gives you absolute control over every aspect of the request, such as specifying that you just want to stream (no pre-loading in memory) or disabling deserialization by default.
Feel free to open an issue on the Azure SDK for Python repo if you have additional questions or clarification: https://github.com/Azure/azure-sdk-for-python/issues
Edit with new suggestions: TableEntity is a dict-like class, so you can json.dumps it to a string, or json.dump it to a stream, while iterating the ItemPaged[TableEntity]. If json.dumps raises an exception, you can try our JSON encoder in azure.core.serialization.AzureJSONEncoder: https://github.com/Azure/azure-sdk-for-python/blob/1ffb583d57347257159638ae5f71fa85d14c2366/sdk/core/azure-core/tests/test_serialization.py#L83
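A small sketch of that suggestion, assuming a table_client as above and an output file name chosen for illustration (the entities are still materialized in memory, as explained earlier):

import json
from azure.core.serialization import AzureJSONEncoder

with open('table_backup.json', 'w') as stream:
    # Each TableEntity is dict-like, so it can be dumped directly.
    json.dump([dict(entity) for entity in table_client.list_entities()],
              stream, cls=AzureJSONEncoder)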
(I work at MS in the Azure SDK for Python team.)
Ref:
https://docs.python-requests.org/en/master/api/#requests.Response.iter_content
https://azuresdkdocs.blob.core.windows.net/$web/python/azure-core/1.17.0/azure.core.pipeline.policies.html#azure.core.pipeline.policies.CustomHookPolicy

How can I get a Custom Payload as a string

I have a Dialogflow CX agent that returns a Custom payload. My client is a Python application using the dialogflowcx_v3beta1 SDK to call DetectIntent. The application needs to forward the custom payload in JSON format to another application, but I have been unable to find a way to convert the structured payload to JSON. There is no schema associated with the custom payload, which could be literally any valid JSON, and because it will simply be forwarded to another component, the application has no reason to interpret the content in any way.
Is there a way to serialize the custom payload to JSON?
Unless you're asking a Python question, the "CX solution" could be to use the Fulfillment text instead of the Custom Payload feature, and include the serialized JSON there.
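If the payload does need to be serialized on the Python side, one possible approach (an assumption here, not part of the answer above) relies on the payload surfacing as a protobuf Struct, which google.protobuf.json_format can render as JSON; whether the proto-plus wrapper in dialogflowcx_v3beta1 exposes it exactly that way is worth verifying against your SDK version:

from google.protobuf.json_format import MessageToJson

for message in response.query_result.response_messages:
    if message.payload:
        # MessageToJson handles arbitrary Struct content; response and the
        # forwarding step are placeholders for the client application's code.
        payload_json = MessageToJson(message.payload)
        forward_to_other_app(payload_json)  # hypothetical helper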

Suggestion on documenting endpoints for a Python Bottle web service

I have a portion of my API that I am exposing using Bottle (http://bottlepy.org/docs/dev/index.html).
I am now looking to document these endpoints for the end-user clients and am looking for a good solution. I am looking for something that is tightly integrated with my "routes" defined in the Bottle app so that any changes in the future stay in sync. The key areas I want to document are the HTTP method types that are accepted and the necessary query parameters.
I have included an example route below which queries whether an instance defined in the underlying API is online. As you can see, the route only accepts GET requests, and the "check_valid_instance" function expects to find a query parameter. Looking at this route definition, there is no indication that a query param is needed, and that is what I am trying to add here, both in the source code and externally on some kind of help page.
@app.route("/application/app_instance/is_instance_online", method="GET")
def is_instance_online():
    _check_valid_instance()
    function = eval("app_instance.is_instance_online")
    return _process_request_for_function(function)
The above route would be called as follows:
http://IP:Port/application/app_instance/is_instance_online?instance=instance_name
Any suggestions welcome!
Thanks!
For additional params you can create a structure similar to this:
COMMANDS = {'is_instance_online': {'mandatory_params': 'instance_name',
                                   'description': 'command description'}}
self.bottle.route('/<command>', method='GET', commands=COMMANDS)(self.command_execute)
Then you should be able to generate a JSON description of the whole API, as shown here:
Automatic Generation of REST API description with json
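A hedged sketch of how that might look in practice, using the fact that extra keyword arguments to route() land in the route's config and that app.routes exposes the full route table (the /api/docs endpoint and the dispatcher body are illustrative):

from bottle import Bottle

app = Bottle()

COMMANDS = {'is_instance_online': {'mandatory_params': 'instance_name',
                                   'description': 'command description'}}

def command_execute(command):
    # Placeholder dispatcher; the real app would call the underlying API here.
    return {'command': command, 'status': 'ok'}

app.route('/<command>', method='GET', commands=COMMANDS)(command_execute)

@app.route('/api/docs', method='GET')
def api_docs():
    # Walk Bottle's route table so the description stays in sync with the app.
    return {r.rule: {'method': r.method,
                     'commands': dict(r.config.get('commands', {}))}
            for r in app.routes}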

Python server exposing both an XML-RPC and a JSON-RPC interface

As the title says, is there a way to expose a function to both the JSON-RPC and the XML-RPC interface? Preferably one server running on a single port would answer to both types of requests.
Thanks!
One of the usual ways of returning different formats is to specify the type you want in the URL in some way. The most common are:
http://example.com/some/page.<format>
or
http://example.com/some/page?output=<format>
And then the returned object should be transformed into the desired format:
# somewhere at the end of the method handling the request
return Formatter(format_param).format(response_object)
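A minimal sketch of such a formatter, assuming the response object is a flat dictionary (the class name and the XML shape are illustrative, not a standard API):

import json
from xml.sax.saxutils import escape

class Formatter:
    def __init__(self, fmt):
        self.fmt = fmt

    def format(self, obj):
        # e.g. Formatter('json').format({'status': 'ok'}) -> '{"status": "ok"}'
        if self.fmt == 'json':
            return json.dumps(obj)
        if self.fmt == 'xml':
            body = ''.join('<%s>%s</%s>' % (k, escape(str(v)), k)
                           for k, v in obj.items())
            return '<response>%s</response>' % body
        raise ValueError('unsupported format: %s' % self.fmt)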

Python: Sending a large dictionary to a server

I have an application that should communicate status information to a server. This information is effectively a large dictionary with string keys.
The server will run a web application based on Turbogears, so the server-side method called accepts an arbitrary number of keyword arguments.
In addition to the actual data, some data related to authentication (id, password..) should be transmitted. One approach would be to simply urlencode a large dictionary containing all this and send it in a request to the server.
urllib.urlencode(dataPlusId)  # note: urlencode lives in urllib, not urllib2
But actually, the method doing the authentication and accepting the data set does not have to know much about the data. The data could be transmitted and accepted transparently and handed over to another method working with the data.
So my question is: What is the best way to transmit a large dictionary of data to a server in general? And, in this specific case, what is the best way to deal with authentication here?
I agree with all the answers about avoiding pickle, if safety is a concern (it might not be if the sender gets authenticated before the data's unpickled -- but, when security's at issue, two levels of defense may be better than one); JSON is often of help in such cases (or, XML, if nothing else will do...!-).
Authentication should ideally be left to the webserver, as SpliFF recommends, and SSL (i.e. HTTPS) is generally good for that. If that's unfeasible, but it's feasible to let client and server share a "secret", then sending the serialized string in encrypted form may be best.
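A minimal sketch of that JSON-over-HTTPS approach using only the standard library (Python 3 here for runnability; the URL, credentials, and function name are placeholders):

import json
import urllib.request

def send_status(url, data, username, password):
    body = json.dumps(data).encode('utf-8')
    request = urllib.request.Request(
        url, data=body, headers={'Content-Type': 'application/json'})
    # Basic auth via the stdlib opener; use an https:// URL so the
    # credentials and the payload travel encrypted.
    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, url, username, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(password_mgr))
    return opener.open(request)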
I think the best way is to encode your data in an appropriate transfer format (you should not use pickle, as it's not safe, but it can be binary) and transfer it as a multipart POST request.
What I do not know is whether you can make it work with repoze.who. If it does not support signing in and calling a function in one step, you'll perhaps have to verify the credentials yourself.
If you can wrap your data in XML, you could also use XML-RPC.
Why don't you serialize the dictionary to a file and upload the file? This way, the server can read the object back into a dictionary.
Do a POST of your Python data (use binary as suggested in other answers) and handle security using your webserver. Apache and Microsoft servers can both do authentication using a wide variety of methods (SSL client certs, passwords, system accounts, etc.).
Serialising/deserialising to text or XML is probably overkill if you're just going to turn it back into a dictionary again.
I'd personally use SimpleJSON at both ends and just post the "file" (it would really just be a stream) over as multipart data.
But that's me. There are other options.
Have you tried using pickle on the data?
