I'm trying to set up a Flask app on Google App Engine that will be something of a frontend management console for Google Container Engine. Google has published working APIs to spin up a container cluster, but it does not look like they have published (Python) APIs to administer Kubernetes. That is, everything needed to implement services, pods, replication controllers, etc. seems to be set up to run through bash scripting, which is not compatible with the restrictions of Google's App Engine.
Is there a commonly accepted solution/package for this? Would it make more sense to abandon App Engine in favor of a managed VM (not ideal)?
Thanks
As I mentioned in Submit jobs using API Client Library for Python?, the Kubernetes API uses a standard Swagger specification, so it should be possible to generate a Python client library from it. There is also pykube if you want to experiment with an existing client library.
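To give a feel for it, here is a minimal sketch of listing pods with pykube; it assumes a kubeconfig file with valid credentials for a reachable cluster:

import pykube

# Build a client from an existing kubeconfig (assumed to exist).
api = pykube.HTTPClient(pykube.KubeConfig.from_file("~/.kube/config"))

# List pods in the "default" namespace.
for pod in pykube.Pod.objects(api).filter(namespace="default"):
    print(pod.name)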
I have a Django application deployed on the Google App Engine standard environment. I am interested in server-side rendering of my JS frontend. Can I use Node.js alongside Django on the same GAE project, maybe as a microservice?
What you can do is deploy each of your apps as a separate service in App Engine, and they will work independently as microservices. To do so, make sure to set a service name in the app.yaml file of each app:
service: service-name
Afterwards, you can communicate between your services through an HTTP invocation, such as a user request or a RESTful API call. Code in one service can't directly call code in another service.
Refer to this link for additional information about communicating between your services.
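As an illustration, one service could call another with an ordinary HTTP request; the service and project names below are hypothetical:

import requests

# Services in the same GAE app are addressable at
# https://SERVICE-dot-PROJECT.REGION.r.appspot.com
resp = requests.post(
    "https://render-service-dot-my-project.uc.r.appspot.com/render",
    json={"component": "home-page"},
)
print(resp.status_code, resp.text)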
I have come across articles that talk about integrating python and Node but I personally haven't done it or seen it done on GAE.
If I were to take a stab, I think you would be looking at something like
Have the Python app as a service (say it's available on python_service.myapp.appspot.com).
Have your Node.js as your default service available on myapp.appspot.com
Your Node.js app will have a route; when this route is invoked, it makes an HTTP request to the Python service, waits for the response, and then returns that response. The Python side might look like the sketch below.
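A minimal sketch of that Python service with Flask (the /render route and response shape are illustrative):

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/render", methods=["POST"])
def render():
    # Receive the payload forwarded by the Node.js route.
    payload = request.get_json(silent=True) or {}
    # ...do the Python-side work here...
    return jsonify({"result": "processed", "input": payload})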
Our app, https://nocommandline.com, is an Electron app (a combo of Node.js & Vue.js). If you purchase a license and try to validate it, we make a call to our server side, which is Python-based. It's not exactly the same thing you're looking at (since our app is not web-based), but it gives you an idea of what I was trying to describe.
I am a newbie who wants to deploy his Flask app using Google Cloud Functions. When I search online, people tell me to deploy it as a Flask app. I want to ask if there is any difference between the two:
deploying the Flask app on a cloud instance (a Google Cloud VM) vs. a serverless Cloud Function.
As described by John and Kolban, a Cloud Function is a single-purpose endpoint: you want to perform one thing, you deploy one function.
However, if you want a set of related endpoints, like a microservice, you will have to deploy several endpoints that together let you perform CRUD operations on the same data object. In that case you should prefer to deploy those (CRUD) endpoints together and have the ability to easily reuse class and object definitions and business logic. For this, a Flask web server is what I recommend (and what I prefer; I wrote an article on this).
Packaging it for Cloud Run is the best option for a serverless platform with a pay-per-use pricing model (plus automatic scaling and more).
There is an additional great thing: the Cloud Functions request object is based on the Flask request object. As I also show in my article, this makes it easy to switch from one platform to the other. You only have to choose according to your requirements and your skills. I also wrote another article on this.
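To make that concrete: an HTTP Cloud Function receives a Flask Request object directly, so Flask idioms carry over. A minimal sketch:

# `request` is a Flask Request object in an HTTP Cloud Function.
def hello_http(request):
    name = request.args.get("name", "world")
    return f"Hello, {name}!"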
If you deploy your Flask app as an application in a Compute Engine VM instance, you are basically configuring a computer and application to run your code. The notion of Cloud Functions relieves you from the chore and toil of having to create and manage the environment in which your program runs. A marketing mantra is "You bring the code, we bring the environment". When using Cloud Functions all you need do is code your application logic. The maintenance of the server, scaling up as load increases, making sure the server is available and much more is taken care of for you. When you run your code in your own VM instance, it is your responsibility to manage the whole environment.
References:
HTTP Functions
Deploying a Python serverless function in minutes with GCP
I'm getting my feet wet with GCP and GAE, also nodejs and python and networking (I know).
[+] What I have:
Basically I have some Node.js code that takes in some input and is supposed to then send that input to some Python code that will do more stuff to it. My first idea was to deploy the Node.js code via GAE, then host the Python code on a separate Python server, then make POST requests from the Node.js frontend to the Python backend.
[+] What I would like to be able to do:
just deploy both my Node.js code and my Python code in the same project and instance of GAE, so that Node.js is the frontend that people see, but the Python server is also running in the same environment and can communicate with the Node.js code without sending anything over the public internet.
[+] What I have read
https://www.netguru.co/blog/use-node-js-backend
Google App Engine - Front and Backend Web Development
and countless other google searches for this type of setup but to no avail.
If anyone can point me in the right direction I would really appreciate it.
You can't have both Python and Node.js running in the same instance, but they can run as separate services, each with their own instance(s), inside the same GAE app/project. See Service isolation and maybe Deploying different languages services to the same Application [Google App Engine].
Using POST requests can work pretty well, but it will likely take some effort to ensure no outside access.
Since you intend to use the Node.js service as the frontend, you're limited to the flexible environment for it, which limits the inter-service communication options: you can't use push queues (properly supported only in the standard environment), which IMHO would be a better/more secure solution than POST requests.
Another secure communication option would be for the Node.js service to place the data into the Datastore and have the Python service pick it up from there; the Datastore is shared by all instances/versions/services inside the same GAE app. This is also more loosely coupled, IMHO: each service can function (at least for a while) without the other being alive, which is not possible with POST requests. A sketch of the Python side is below.
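A sketch of what the Python consumer could look like with the Cloud Datastore client library; the entity kind, property names, and handle() function are all hypothetical:

from google.cloud import datastore

client = datastore.Client()

# Fetch work items the Node.js service wrote but nobody processed yet.
query = client.query(kind="WorkItem")
query.add_filter("processed", "=", False)
for entity in query.fetch(limit=10):
    handle(entity)                 # hypothetical processing function
    entity["processed"] = True
    client.put(entity)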
Maybe of interest: How to tell if a Google App Engine documentation page applies to the standard or the flexible environment
UPDATE:
Node.js is currently available in the standard environment as well, so you can use those features; see:
Now, you can deploy your Node.js app to App Engine standard environment
Google App Engine Node.js Standard Environment Documentation
I'm looking for ways to expose business logic that I've implemented in a Google App Engine application as an RPC service for mobile clients to call; there seem to be two ways of doing this:
Google Protocol RPC Library for App Engine. The examples there had me create a WSGI service using protorpc.wsgi.service.service_mapping exposing service classes whose methods are decorated with protorpc.remote.method.
Cloud Endpoints Framework for App Engine. The examples there had me create a Cloud Endpoint using endpoints.api_server exposing service classes whose methods are decorated with endpoints.method.
I started with the Google Protocol RPC Library because that was the main thing linked from the App Engine documentation, and I got the RPC working, which I verified with a simple curl command. But I could not find out how to generate Android/iOS client libraries.
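For context, a minimal Protocol RPC service following the documented pattern looks roughly like this (message and method names are illustrative):

from protorpc import messages, remote
from protorpc.wsgi import service

class HelloRequest(messages.Message):
    my_name = messages.StringField(1, required=True)

class HelloResponse(messages.Message):
    hello = messages.StringField(1, required=True)

class HelloService(remote.Service):
    @remote.method(HelloRequest, HelloResponse)
    def hello(self, request):
        return HelloResponse(hello='Hello there, %s!' % request.my_name)

# Expose the service as a WSGI app at /hello.
app = service.service_mapping(HelloService, '/hello')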
After searching longer, I found Cloud Endpoints Framework for App Engine, which seems to be offering the option to generate Android / iOS libraries, but I'm confused by the similarities between the two options (e.g. it seems that Cloud Endpoints Framework relies at least partly on protorpc as well, but the decorators and handlers are different) and am unsure which one I should be choosing.
(NB: I understand that Cloud Endpoints are available for more environments, such as Compute Engine, but I want to stay in App Engine standard as much as possible.)
My questions:
Can someone articulate the differences between these two seemingly very similar solutions? What are some situations in which I might prefer using the Google Protocol RPC library over the Cloud Endpoints Framework for App Engine? What are some of the other situations in which Cloud Endpoints Framework is preferable?
Is there a way to generate client libraries from API definitions written for the Google Protocol RPC library? Now that I've gotten the simple Google Protocol RPC library version working, I'm not sure I should do the work to migrate over to Cloud Endpoints Framework for App Engine just for the benefit of client code generation.
In a Python script, mylibrary.py, I use Protocol Buffers to model data using the following approach:
Define message formats in a .proto file.
Compile them with the protocol buffer compiler (protoc).
Use the Python protocol buffer API to write and read messages in the .py module, as sketched below.
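For step 3, the generated module exposes message classes with SerializeToString/ParseFromString; the module and message names below are hypothetical:

import example_pb2  # assumed to be generated by protoc from example.proto

msg = example_pb2.MyMessage()
msg.content = "hello"            # hypothetical string field
data = msg.SerializeToString()   # write

parsed = example_pb2.MyMessage()
parsed.ParseFromString(data)     # read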
I want to implement Cloud Endpoints Framework on App Engine that imports and uses the aforementioned Python script, however Cloud Endpoints uses ProtoRPC, not 'standard' Protocol Buffers.
My App Engine Python module, main.py, imports from protorpc rather than using the 'offline' protoc compiler to generate serialization and deserialization code:
from protorpc import messages
from protorpc import remote
Messages are not defined using .proto files. Instead, classes are defined, inheriting from protorpc.messages.Message:
class MyMessageDefinition(messages.Message):
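Filled out with a hypothetical field, for example:

class MyMessageDefinition(messages.Message):
    content = messages.StringField(1)  # illustrative field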
Can Protocol Buffers definitions be converted to ProtoRPC equivalents? I don't really want to change mylibrary.py to use ProtoRPC, since it's less generic than Protocol Buffers.
After eight months and lots of experimentation, I'll add my opinion. I hope it saves someone some time.
Choose Your Framework First
There are different Cloud Endpoints offerings from Google Cloud. All can be used for JSON/REST APIs. This wasn't immediately clear to me. Cloud Endpoints is a very high-level phrase covering the development, deployment and management of APIs on multiple Google Cloud backends.
The point here is that after deciding to use Cloud Endpoints, you must still decide on backend technologies to serve your API. The documentation feels a little hidden away, but I strongly recommend starting with the Google Cloud Endpoints doc.
You can choose between:
OpenAPI Specification
Endpoints Frameworks
gRPC
Choose Your Implementation Second
Within each API Framework there’s a choice of Cloud implementations upon which your API (service) can run:
OpenAPI Specification - for JSON/REST APIs implemented on:
- Google App Engine flexible environment
- Google Compute Engine
- Google Container Engine
- Kubernetes
Endpoints Frameworks - for JSON/REST APIs implemented on:
- Google App Engine standard environment with Java
- Google App Engine standard environment with Python
gRPC - for gRPC APIs implemented on:
- Google Compute Engine
- Google Container Engine
- Kubernetes
When posting this question, I was using Endpoints Frameworks running on the Google App Engine standard environment with Python. I later migrated my API (service) to gRPC on Google Compute Engine.
The observant among you may notice both the OpenAPI Specification and Endpoints Frameworks can be used for JSON/REST APIs, while gRPC only exposes a gRPC API. So how did I port my REST API from Endpoints Frameworks to gRPC? The answer is Transcoding HTTP/JSON to gRPC (which I learnt along the way, and was not immediately clear to me). So, don't rule out gRPC just because you want REST/HTTP.
The Answer
So how does this relate to my original question?
The fact that I was trying to convert between .proto files and gRPC annotations at all meant I had taken a wrong turn along the way.
If you want to write an application using plain .proto files, then choose gRPC on Compute Engine. If you need it to be a REST API, that can be done, but you'll need to add an ESP (Extensible Service Proxy) to your backend configuration. It's pretty much an NGINX server set up as a reverse proxy. The only downside here is that you'll need some Docker knowledge to ensure the ESP (proxy) and your gRPC server can communicate (Docker networking).
If your code is already on an App Engine, and you want to expose it as a REST API with minimum effort and still get good API management features, choose Endpoints Frameworks. Warning: I moved away from this because it was prohibitively expensive (I was getting billed in the region of $100 USD monthly).
If you want to avoid .protos altogether, then go with OpenAPI Specification.
Lastly, if you want to offer programmatic integration, client libraries, or a microservice, then really do consider gRPC. It's easy to remove the ESP (proxy) and run a gRPC server on nearly any machine (as long as the Protocol Buffer runtime is installed).
Ultimately I settled on gRPC on Compute Engine with Docker. I also have an ESP to provide HTTP-to-gRPC transcoding and vice versa. I like this approach for a few reasons:
You learn a lot: Docker, Docker Networking, NGINX configuration, Protocol Buffers, ESP (Cloud Proxy), gRPC servers.
The service (core business) logic can be written with plain-old gRPC. This allows the service to run on any machine without a web server. Your business logic is the server :) (see the sketch after this list).
Protocol Buffers / gRPC are excellent for isolating business logic as a service...or microservice. They're also good for providing well-defined interfaces and libraries.
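To illustrate the "your business logic is the server" point, here's a minimal gRPC server sketch; my_service_pb2 and my_service_pb2_grpc are assumed to have been generated from a hypothetical .proto:

from concurrent import futures
import grpc
import my_service_pb2        # assumed protoc-generated module
import my_service_pb2_grpc   # assumed protoc-generated module

class MyService(my_service_pb2_grpc.MyServiceServicer):
    def DoWork(self, request, context):
        # Business logic lives here; no web server involved.
        return my_service_pb2.WorkReply(result="done")

server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
my_service_pb2_grpc.add_MyServiceServicer_to_server(MyService(), server)
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()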
Avoid These Mistakes
Implementing the first framework/architecture you find. If I could start again, I would not choose Endpoints Frameworks. It's expensive and uses annotations rather than .proto files, which, IMO, makes the code harder to port.
Read Always Free Usage Limits before deciding on a framework and implementation. Endpoints Frameworks uses backend App Engine instances, which have almost no free quota. Confusingly, frontend App Engine instances have a very generous free quota.
Consider Local Development. Cloud Endpoints local development servers are not officially supported (at least they weren't at the time of my question). Conversely, there's a whole page on Running a Local Extensible Service Proxy.
I found a project called pyprotobuf (http://pyprotobuf.readthedocs.io) that can generate a module with protorpc classes from a .proto file.
According to the documentation (http://pyprotobuf.readthedocs.io/topics/languages/protorpc.html) you need to execute:
protopy --format python example.proto