I have a Swagger JSON definition file. I would like to generate a Python client binding for it, as part of my development cycle, i.e., every time the definition changes I can run a script locally to regenerate my client.
I am aware of Swagger Editor and the Swagger Online Generators, but they are not satisfying my needs:
both methods rely on an external service I need to call
as a result I get a .zip file I need to unzip, which makes the whole process more complex.
I remember times when I could generate a Java client binding for a SOAP service by running Apache CXF locally. Is there something like that for Swagger? I've seen there's an option to clone the whole swagger-codegen project, for all possible programming languages, but that seems like overkill. Is there another option?
How are companies handling issues like mine?
I've used https://github.com/OpenAPITools/openapi-generator
You don't need to clone the whole repository, just download the JAR file and use it to generate the client/server.
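For the local-regeneration workflow described in the question, a small wrapper script is enough. Here is a minimal sketch, assuming you have downloaded openapi-generator-cli.jar next to it; the spec file name and output directory are illustrative:

# regenerate_client.py -- rerun whenever the definition changes
import subprocess

subprocess.check_call([
    "java", "-jar", "openapi-generator-cli.jar",
    "generate",
    "-i", "swagger.json",       # your Swagger/OpenAPI definition
    "-g", "python",             # generator to use
    "-o", "generated-client",   # output directory
])

Because the generator writes straight into a directory, there is no zip file to unpack and no external service to call.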
I am new to Google Cloud and to Dataflow. What I want to do is create a tool which detects and recovers errors in (large) CSV files. However, at design time not every error that should be dealt with is already known. Therefore, I need an easy way to add new functions that handle a specific error.
The tool should be something like a framework for automatically creating a dataflow template based on a user selection. I already thought of a workflow which could work, but as mentioned before I am totally new to this, so please feel free to suggest a better solution:
The user selects which error correction methods should be used in the frontend
A yaml file is created which specifies the selected transformations
A python script parses the yaml file and uses error handling functions to build a dataflow job that executes these functions as specified in the yaml file
The dataflow job is stored as a template and run via a REST API call for a file stored on GCP
To achieve the extensibility, new functions which implement the error corrections should easily be added. What I thought of was:
A developer writes the required function and uploads it to a specified folder
The new function is manually added to the frontend/or to a database etc. and can be selected to check for/deal with an error
The user can now select the newly added error handling function and the dataflow template that is being created uses this function without the need of editing the code that builds the dataflow template
However, my problem is that I am not sure whether this is possible or a "good" solution for this problem. Furthermore, I don't know how to create a Python script that uses functions which are not known at design time.
(I thought of using something like a strategy pattern, but as far as I know you still need to have the functions implemented at design time already, even though the decision which function to use is made at run time.)
Any help would be greatly appreciated!
What you can use in your architecture is Cloud Functions together with Cloud Composer (a hosted solution for Airflow). Apache Airflow is designed to run DAGs on a regular schedule, but you can also trigger DAGs in response to events, such as a change in a Cloud Storage bucket (where the CSV files can be stored). You can configure this with your frontend, and every time a new file arrives in the bucket, the DAG containing the step-by-step process is triggered.
Please have a look at the official documentation, which describes the process of launching Dataflow pipelines with Cloud Composer using DataflowTemplateOperator.
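Regarding the part of the question about functions that are not known at design time: in Python you can resolve them at run time from the names listed in the YAML file, so the script that builds the pipeline never has to change. A minimal sketch of that idea, where the handlers module and the function names are assumptions for illustration:

import importlib

import yaml  # PyYAML

config = yaml.safe_load("""
transforms:
  - handlers.fix_encoding
  - handlers.drop_empty_rows
""")

def resolve(dotted_path):
    # "package.module.function" -> the function object itself
    module_path, func_name = dotted_path.rsplit(".", 1)
    return getattr(importlib.import_module(module_path), func_name)

handlers = [resolve(name) for name in config["transforms"]]

def process(row):
    # chain the selected corrections over each CSV row
    for handler in handlers:
        row = handler(row)
    return row

A new correction then only requires dropping a function into the handlers module and listing it in the YAML; this is essentially the strategy pattern with the lookup deferred to run time.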
Swagger and then Flask, or the opposite?
Hello,
I'm starting a new project which needs an API.
I'm going with Flask and Swagger.
A simple question: should I start by defining the API with Swagger and then generate the Python code, or the opposite?
Thanks for your help.
There are solutions for both approaches:
If you end up preferring to define the API first, I would recommend connexion. You define OpenAPI (the new name for Swagger) specification files, and then you write the Python code accordingly. I would say that this is the best approach, as you can guarantee that whatever code you write afterwards will conform to the specification, which you can provide to whoever wants it. Another advantage is not mixing up core concepts of your logic with the API specification.
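For a feel of the spec-first approach, here is a minimal connexion sketch; the file names and the greeting operation are made up for illustration:

# app.py -- assumes an openapi.yaml next to it whose operation declares
# operationId: app.get_greeting, which connexion routes to this function
import connexion

def get_greeting(name):
    return {"message": "Hello, " + name}

app = connexion.App(__name__, specification_dir=".")
app.add_api("openapi.yaml")

if __name__ == "__main__":
    app.run(port=8080)

Routing, validation, and serialization all follow from the specification file, so the spec stays the single source of truth.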
Otherwise, flask-restplus does the trick. This is the most common (but not necessarily the better) approach: you write your Python code and the specification is then generated from it. This is the approach I usually follow in simple use cases.
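And a comparable code-first sketch with flask-restplus, where the Swagger specification and UI are generated from the decorated resources (the endpoint and names are made up):

from flask import Flask
from flask_restplus import Api, Resource

app = Flask(__name__)
api = Api(app, title="Demo API")  # serves the generated Swagger UI

@api.route("/greeting/<string:name>")
class Greeting(Resource):
    def get(self, name):
        return {"message": "Hello, " + name}

if __name__ == "__main__":
    app.run(port=8080)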
For information, I finally chose the openapi-generator tool, which allowed me to generate a Python server stub on the Flask/connexion stack and my client library for Android on top of okhttp.
https://github.com/OpenAPITools/openapi-generator
I was rather satisfied with this choice during my project.
To merge the latest version of your server stub with an older one, you can dedicate a git branch to raw server stub generation and another one which you fill with your own code. That way you won't lose your previous version with the code you added to the server stub.
I have written a Python module for extracting archives and processing some data from the extracted files, then optionally deleting the extracted files. Currently I am using it by writing user code in a separate Python script, e.g.:
import my_module
with my_module.archive("data.rar") as a:
    a.extract()                         # unpack the archive
    a.convert("data.csv", "data.xlsx")  # process the extracted data
    a.delete(keep="data.xlsx")          # remove everything except the result
But for non-programmers to use it, I am trying to create a webapp user interface for it with Flask.
What are the best practices for implementing some communication between the user and the module through a webapp? For example
how could my module communicate to the webapp user that the extraction or the data processing has finished?
how should it display the list of extracted files?
how could the user select a file to be processed or to be deleted?
Basically I need to interleave my library code with some user code and I would like the webapp to generate and execute the user-code and to display the output of the processing module.
(I sense a close flag coming since this is a broad topic and it's been a while since "best practices" topics were allowed on SO. )
Based on my own faults, I'd recommend to:
implement the backend first; don't waste time on web design if the backend functionality is not crystal clear yet. Read about Flask blueprints and create a blueprint for all public calls like list directory, select file, unzip file, etc. (see the sketch after this list). Test it out: use requests, post stuff, check responses, iterate.
if you are satisfied with the basic functionality, start implementing the frontend. For interactivity you can use the very same API calls you already tested, via JavaScript: use XMLHttpRequest (or jQuery, though I am not a big fan of that) for posting. The catch here is that when you post from the browser, you can define a callback, so you can use the Flask response to update the interface.
add some CSS and event listeners to your templates to make it pretty and comfy.
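A minimal sketch of that backend-first blueprint, using the my_module API from the question; the endpoint names and the extraction directory are assumptions:

import os

from flask import Blueprint, Flask, jsonify, request

import my_module  # the archive-processing module from the question

api = Blueprint("api", __name__, url_prefix="/api")
EXTRACT_DIR = "extracted"  # hypothetical location of the extracted files

@api.route("/extract", methods=["POST"])
def extract():
    # the frontend posts the archive path; extract and report the files
    path = request.get_json()["path"]
    with my_module.archive(path) as a:
        a.extract()
    return jsonify({"status": "done", "files": os.listdir(EXTRACT_DIR)})

app = Flask(__name__)
app.register_blueprint(api)

if __name__ == "__main__":
    app.run(debug=True)

Once endpoints like this behave correctly under requests/curl, the JavaScript frontend only has to call them and render the JSON.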
Get ideas from here and here and here and here (just googled for some random pages, some seemed on topic for you).
You probably want to read this too.
I have a Plone instance (4.2.2) running and need to automate the creation of user accounts. I'd really like to do this using an external Python script or some other Linux-based command line utility (such as "curl").
If I use curl, I can authenticate to gain access to the "##new_user" page, but I can't seem to get the right POST setup in the headers.
If I don't use curl and use a Python script instead, are there any utilities or libraries that can do this? I've tried using libraries such as Products.CMFCore.utils.getToolByName and getting the "portal_registration" tool, but I can't seem to get that to work in a regular script (one that has no request/context).
This script needs to run once every N minutes on the server (the user information is grabbed from an external database). I also need there to be no password set, with the option selected to email the user so they can set their own password, and I need to add the user to a pre-defined group.
Are there any suggestions, perhaps another utility or built-in library, that would better suit these requirements?
There are many scripts floating around for batch imports of users. A good one is Tom Lazar's CSV import: http://plone.org/documentation/kb/batch-adding-users . Tom's script could be very easily adapted to run as a Zope "run" script if you need to run it from the command line, say as a cron job. "Run" scripts are scripts processed via a command like "bin/client5 run my_script.py", using a Zope or ZEO client instance.
However, a strategy like this is one-way. It sounds from the conversation you had with Martijn like you might be better off creating a Pluggable Authentication Service (PAS) plugin to provide a connection to your external database source. The great thing about this strategy is that you can very precisely determine which user properties have which source (the external DB or Plone's membership properties manager). Docs (as MJ indicated) are at http://plone.org/documentation/manual/developer-manual/users-and-security/pluggable-authentication-service/referencemanual-all-pages .
Make sure to look and see if there is already a plugin written for your external data source. There are already plugins for many auth schemes and dbs (like LDAP and SQLAlchemy).
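For the cron-driven variant, here is a hedged sketch of such a Zope "run" script; the site id, the admin user, the group name, and the inline user list standing in for your external database query are all assumptions:

# add_users.py -- run with: bin/client5 run add_users.py
import transaction
from AccessControl.SecurityManagement import newSecurityManager
from Products.CMFCore.utils import getToolByName
from Testing.makerequest import makerequest

app = makerequest(app)  # 'app' is provided by the run-script environment
admin = app.acl_users.getUser("admin")  # assumes a Zope user named 'admin'
newSecurityManager(None, admin.__of__(app.acl_users))

portal = app["Plone"]  # assumed site id
regtool = getToolByName(portal, "portal_registration")
gtool = getToolByName(portal, "portal_groups")

for user_id, email in [("jdoe", "jdoe@example.com")]:  # stand-in for your DB query
    regtool.addMember(user_id, regtool.generatePassword(),
                      properties={"username": user_id, "email": email})
    regtool.registeredNotify(user_id)  # mails the user a set-your-password link
    gtool.addPrincipalToGroup(user_id, "imported-users")  # pre-defined group

transaction.commit()

Note that registeredNotify needs a working MailHost configuration to actually send the password email.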
I'm looking to optimize our translation workflow for a Django/Python based project.
Currently we run a command to export our gettext files, send them to the translators, and receive them back. Well, you get the drill.
What in your opinion is the best way to optimize this workflow. Tools which integrate nicely and allow translations to be pushed and pulled from and to the system?
Options I've seen so far:
http://trac.transifex.org/ (supported in Django 1.3)
Transifex was designed for pretty much this. It doesn't pull the strings from the project/app automatically yet, but it can be extended to do so if desired.
Transifex has two ways to automate this: if your POT file is on a public server, you can set up a resource to auto-fetch the POT file frequently and update your resource.
The second option is to use the client app and run it every time you build/deploy/commit.
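In the second case, the client can be wrapped in the same script you already use for the gettext export. A hedged sketch, assuming the transifex-client ("tx") is installed and a .tx/config already maps your resources:

# sync_translations.py -- run at build/deploy/commit time
import subprocess

def sync_translations():
    # regenerate the gettext catalogs, push sources, pull translations
    subprocess.check_call(["python", "manage.py", "makemessages", "--all"])
    subprocess.check_call(["tx", "push", "--source"])
    subprocess.check_call(["tx", "pull", "--all"])
    subprocess.check_call(["python", "manage.py", "compilemessages"])

if __name__ == "__main__":
    sync_translations()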