We are trying to deploy a service using Knative with the Kubernetes Python client library. We are using the following YAML file:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: test-{{ test_id }}
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: test-deployment:latest
          resources:
            limits:
              cpu: 50m
              memory: 128Mi
            requests:
              cpu: 50m
              memory: 128Mi
      containerConcurrency: 1
If we deploy using the Kubernetes command-line tool, it works fine:
kubectl create -f test.yaml
With the python client library, we are doing:
import kubernetes
import yaml
import uuid
from jinja2 import Template
from urllib3 import exceptions as urllib_exceptions

# load_kube_config() configures the default client in place and returns None,
# so it cannot be passed as api_client.
kubernetes.config.load_kube_config(context=cluster)
api = kubernetes.client.CoreV1Api()

with open(deployment_yaml_path, 'r') as file_reader:
    file_content = file_reader.read()

deployment_template = Template(file_content)
deployment_template = yaml.safe_load(deployment_template.render({
    'test_id': str(uuid.uuid4())
}))

deployment = kubernetes.client.V1Service(
    api_version=deployment_template['apiVersion'],
    kind="Service",
    metadata=deployment_template['metadata'],
    spec=deployment_template['spec']
)

try:
    response = api.create_namespaced_service(body=deployment, namespace='default')
except (kubernetes.client.rest.ApiException, urllib_exceptions.HTTPError):
    raise TestError
However, we are getting this error:
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'a1968276-e16b-44f4-a40d-5eb5eaee9d47', 'Content-Type': 'application/json', 'Date': 'Thu, 23 Apr 2020 08:29:36 GMT', 'Content-Length': '347'})
HTTP response body: {
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Service in version \"v1\" cannot be handled as a Service: no kind \"Service\" is registered for version \"serving.knative.dev/v1\" in scheme \"k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30\"",
  "reason": "BadRequest",
  "code": 400
}
Is there a way to deploy a service with Knative? As far as I understand, a Knative Service is different from a normal Kubernetes Service. I don't know whether the problem is that I'm deploying the service the wrong way, or whether the Kubernetes Python client library doesn't support this kind of deployment yet.
Edit:
Python Client Library: kubernetes==11.0.0
Kubernetes:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.4", GitCommit:"67d2fcf276fcd9cf743ad4be9a9ef5828adc082f", GitTreeState:"clean", BuildDate:"2019-09-18T14:51:13Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.11-gke.5", GitCommit:"a5bf731ea129336a3cf32c3375317b3a626919d7", GitTreeState:"clean", BuildDate:"2020-03-31T02:49:49Z", GoVersion:"go1.12.17b4", Compiler:"gc", Platform:"linux/amd64"}
kubernetes.client.V1Service refers to the Kubernetes "Service" concept (a selector across pods that appears as a network endpoint), not the Knative "Service" concept (the entire application that provides functionality over the network).
Based on this example from the kubernetes-client/python repo, you need to do something like this to get and use a client for Knative services:
api = kubernetes.client.CustomObjectsApi()

try:
    resource = api.create_namespaced_custom_object(
        group="serving.knative.dev",
        version="v1",
        plural="services",
        namespace="default",
        body=deployment_template)
except (kubernetes.client.rest.ApiException, urllib_exceptions.HTTPError):
    raise TestError
If you're going to be doing this a lot, you might want to make a helper that takes arguments similar to create_namespaced_service, and possibly also a wrapper object similar to kubernetes.client.V1Service to simplify creating Knative Services.
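For illustration, here is a minimal sketch of such a helper; the name `create_knative_service` and its defaults are invented here, not part of the client library:

```python
def create_knative_service(api, body, namespace="default"):
    """Create a Knative Service through a CustomObjectsApi client.

    Hypothetical helper: mirrors the shape of create_namespaced_service,
    but targets the custom-objects endpoint where Knative resources live.
    `api` is expected to be a kubernetes.client.CustomObjectsApi instance.
    """
    return api.create_namespaced_custom_object(
        group="serving.knative.dev",
        version="v1",
        plural="services",
        namespace=namespace,
        body=body,
    )
```

Called as `create_knative_service(kubernetes.client.CustomObjectsApi(), deployment_template)`, it keeps call sites close to the shape of the built-in typed API.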
Try using create_namespaced_custom_object
Refer: https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CustomObjectsApi.md#create_namespaced_custom_object
Here service is a custom resource specific to Knative.
Related
I'm building a Kubernetes application that Dockerizes code and runs it on the cluster. In order for users to be able to invoke their Dockerized code, I need to modify the Istio configuration to expose the service they've created.
I'm trying to create Istio virtual services using the Python API. I'm able to list existing Istio resources:
from kubernetes import client, config

group = 'networking.istio.io'
version = 'v1alpha3'
plural = 'destinationrules'

config.load_kube_config()
myclient = client.CustomObjectsApi()
api_response = myclient.list_cluster_custom_object(group, version, plural)
but when I use the same parameters to create, I get a 404 not found error.
with open('destination-rule.yaml', 'r') as file_reader:
    file_content = file_reader.read()

deployment_template = yaml.safe_load(file_content)
api_response = myclient.create_cluster_custom_object(
    group=group, version=version, plural=plural, body=deployment_template)
The destination-rule.yaml file looks like:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: test
spec:
  host: test
  subsets:
  - name: v1
    labels:
      run: test
What am I doing wrong here?
My problem was that I was doing create_cluster_custom_object instead of create_namespaced_custom_object. When I switched over, it started working.
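A sketch of the corrected call (the namespace value is an assumption; DestinationRule is a namespaced resource, so the cluster-scoped endpoint answers 404):

```python
import yaml

# Parse the same manifest shown above.
manifest = yaml.safe_load("""
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: test
spec:
  host: test
  subsets:
  - name: v1
    labels:
      run: test
""")

# Requires a configured cluster, so the call is shown commented out:
# myclient.create_namespaced_custom_object(
#     group='networking.istio.io', version='v1alpha3',
#     namespace='default', plural='destinationrules', body=manifest)
```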
I have an API Gateway defined in the python cdk that will accept CURL Restful requests to upload / read / delete files from an S3 bucket:
api = api_gw.RestApi(self, "file-api",
                     rest_api_name="File REST Service")

file = api.root.add_resource("{id}")

get_files_integration = api_gw.LambdaIntegration(handler,
    request_templates={"application/json": '{ "statusCode": "200" }'})
post_file_integration = api_gw.LambdaIntegration(handler)
get_file_integration = api_gw.LambdaIntegration(handler)
delete_file_integration = api_gw.LambdaIntegration(handler)

api.root.add_method("GET", get_files_integration,
                    authorization_type=api_gw.AuthorizationType.COGNITO,
                    authorizer=auth)
file.add_method("POST", post_file_integration)      # POST /{id}
file.add_method("GET", get_file_integration)        # GET /{id}
file.add_method("DELETE", delete_file_integration)  # DELETE /{id}
Is it possible to enable CORS on the API Gateway so that it will perform pre-flight checks and allow external access from a localhost on another machine?
I have attempted to use the add_cors_preflight() method defined in the documentation I can find, but I believe it may no longer be valid as of CDK 2.0.
Yes, IResource.add_cors_preflight() does exactly this.
You can also specify default CORS config with the default_cors_preflight_options attribute of RestApi.
Here are the examples from the docs. They're in Typescript, but it will work the same in Python.
The following example will enable CORS for all methods and all origins on all resources of the API:
new apigateway.RestApi(this, 'api', {
  defaultCorsPreflightOptions: {
    allowOrigins: apigateway.Cors.ALL_ORIGINS,
    allowMethods: apigateway.Cors.ALL_METHODS // this is also the default
  }
})
The following example will add an OPTIONS method to the myResource API resource, which only allows GET and PUT HTTP requests from the origin https://amazon.com.
declare const myResource: apigateway.Resource;

myResource.addCorsPreflight({
  allowOrigins: [ 'https://amazon.com' ],
  allowMethods: [ 'GET', 'PUT' ]
});
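A rough Python equivalent of the first example, assuming CDK v2 (the App/Stack scaffolding here is invented for illustration):

```python
from aws_cdk import App, Stack
from aws_cdk import aws_apigateway as apigw

app = App()
stack = Stack(app, "CorsStack")

# Enable CORS for all methods and all origins on every resource of the API.
api = apigw.RestApi(stack, "api",
    default_cors_preflight_options=apigw.CorsOptions(
        allow_origins=apigw.Cors.ALL_ORIGINS,
        allow_methods=apigw.Cors.ALL_METHODS,  # this is also the default
    ),
)
```

The per-resource variant becomes `my_resource.add_cors_preflight(allow_origins=[...], allow_methods=[...])` in Python.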
The Problem:
The library flask-oidc includes the scope parameter into the authorization-code/access-token exchange request, which unsurprisingly throws the following error:
oauth2client.client.FlowExchangeError: invalid_request Scope parameter is not supported on an authorization code access_token exchange request. Scope parameter should be supplied to the authorized request.
The Question:
Is this a configuration problem or a library problem?
My Configurations:
Flask Application:
app.config.update({
    'DEBUG': True,
    'TESTING': True,
    'SECRET_KEY': 'secret',
    'SERVER_NAME': 'flask.example.com:8000',
    'OIDC_COOKIE_SECURE': False,
    'OIDC_REQUIRE_VERIFIED_EMAIL': False,
    'OIDC_CALLBACK_ROUTE': '/oidc/callback',
    'OIDC_CLIENT_SECRETS': 'client_secrets.json'
})
oidc = OpenIDConnect(app)
client_secrets.json
{
  "web": {
    "auth_uri": "http://openam.example.com:8080/openam/oauth2/realms/root/authorize",
    "issuer": "http://openam.example.com:8080/openam/oauth2/realms/root/",
    "userinfo_uri": "http://openam.example.com:8080/openam/oauth2/realms/root/userinfo",
    "client_id": "MyClientID",
    "client_secret": "password",
    "redirect_uris": [
      "http://flask.example.com:8000/oidc/callback"
    ],
    "token_uri": "http://openam.example.com:8080/openam/oauth2/realms/root/token",
    "token_introspection_uri": "http://openam.example.com:8080/openam/oauth2/realms/root/introspect"
  }
}
Access Manager
For the access manager I use OpenAM. I configured an OpenAM client agent as follows:
Client ID = MyClientID
Client Secret = password
Response Type = code
Token Endpoint Authentication Method = client_secret_post
Redirect URI = http://flask.example.com:8000/oidc/callback
Context:
I use flask-oidc for the logic on the application side and OpenAM for the identity and access management - both applications run in docker containers. When using simple curl commands I can retrieve an authorization grant as well as an authentication token (grant type: Authorization Code Grant). However, using the mentioned library, after logging in to OpenAM and granting authorization to the application (endpoint 'oauth2/authorize'), flask-oidc sends the following GET request:
GET /oidc/callback?code=<some code> \
&scope=openid%20email \
&iss=http%3A%2F%2Fopenam.example.com%3A8080%2Fopenam%2Foauth2 \
&state=<some state> \
&client_id=MyClientID
Which leads to the error mentioned above.
While this does not directly answer the question, the best answer I could find was to use pyJWT or oauthlib instead of using flask-oidc. I found pyjwt was very straightforward in most respects, and there is an excellent tutorial here:
SSO Using Flask Request Oauthlib and pyjwt
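As a small, hedged sketch of the PyJWT workflow (the secret and claims below are made up; a real OIDC flow verifies the provider's signed ID token against its published JWKS keys rather than a shared secret):

```python
import jwt  # PyJWT

secret = "demo-secret"  # illustration only, not a real OIDC setup

# Mint a token locally so the round trip is self-contained.
token = jwt.encode({"sub": "user1", "email": "user@example.com"},
                   secret, algorithm="HS256")

# Decode and validate the signature, recovering the claims.
claims = jwt.decode(token, secret, algorithms=["HS256"])
print(claims["sub"])
```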
I am not sure of this, but because the error is generated by oauth2client, not flask-oidc, it is possible the error is actually just related to the deprecated oauth2client library.
There was a detailed request to mark the entire flask-oidc project as deprecated, but that request was made several years after the flask-oidc project stopped being maintained. I hope one day Flask will remove this link from their site, because it is misleading to think that flask-oidc is a core part of Flask.
I've built an App engine API in Python that's fetched by a Node application. The API works as expected for (1) get and post requests in production and (2) get requests in development. It fails on post requests in development and I could use some help figuring out why.
Error messages
In my node environment I see the error:
No 'Access-Control-Allow-Origin' header is present on the requested
resource. Origin 'http://localhost:4444' is therefore not allowed
access. The response had HTTP status code 500. If an opaque response
serves your needs, set the request's mode to 'no-cors' to fetch the
resource with CORS disabled.
But I am already using the flask_cors package inside my app so I wonder if this is really a CORS issue.
My activated virtual python environment logs:
File "/myproject/googleAdsApi/env/lib/python2.7/site-packages/urllib3/contrib/appengine.py", line 103, in __init__
    "URLFetch is not available in this environment.")
So perhaps I should use an alternative to URLFetch within my virtual environment?
My current implementation
Fetching:
fetch('http://localhost:8080/api/get_accounts', {
  method: "POST",
  mode: "cors",
  cache: "no-cache",
  credentials: "same-origin",
  headers: {
    "Content-Type": "application/json; charset=utf-8",
  },
  redirect: "follow",
  referrer: "no-referrer",
  body: JSON.stringify(credentials)
})
  .then(response => response.json())
  .then(result => console.log(result));
flask_cors:
app = Flask(__name__)
cors = CORS(app, resources={r"/api/*": {"origins": "*"}})
Always use dev_appserver.py to run your local development environment for GAE apps. GAE has a lot of peculiarities that are hard to reproduce manually in a local virtualenv, and you also get useful tools for monitoring various services (task queues, Memcache, Storage, etc.). dev_appserver.py also automatically loads many GAE-native APIs for you to use, and these often ship their own versions of popular libs adapted to the serverless environment (URLFetch is one of them).
Official Docs
https://cloud.google.com/appengine/docs/standard/python/tools/using-local-server
I am writing a backend for an Android app in Python using Google Cloud Endpoints. When I try to run Google's API Explorer to test my API on the development server (localhost), it gives me an error about SSL:
403 Forbidden
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "sslRequired",
        "message": "SSL is required to perform this operation."
      }
    ],
    "code": 403,
    "message": "SSL is required to perform this operation."
  }
}
Google's documentation supports this:
Endpoints requires SSL.
(https://cloud.google.com/appengine/docs/python/endpoints/ )
"The development web server does not support HTTPS connections"
cloud.google.com/appengine/docs/python/config/appconfig#Python_app_yaml_Secure_URLs
I have two inconvenient workarounds: use curl to send commands to the development server (as the site below suggests) or test only deployed versions. The API Explorer was just so convenient, and it worked whenever I used it for the last couple of years, most recently in August 2014.
Does anyone know if requiring SSL for the API Explorer was a recent change? Is there any way to use the API Explorer on the development server, as it says here ( https://cloud.google.com/appengine/docs/python/endpoints/test_deploy#running_and_testing_api_backends_locally)?
Thanks.
Work around found by Tyler Rockwood...
If you remove the hostname field from the @endpoints.api annotation it works again:
Won't work...
@endpoints.api(name="blah", version="v1", description="Blah", hostname="causesfailure.appspot.com")
Will work...
@endpoints.api(name="blah", version="v1", description="Blah")
or (even lamer) you can set the hostname to localhost while testing
@endpoints.api(name="blah", version="v1", description="Blah", hostname="localhost:8080")
Don't add hostname in the @endpoints.api annotation.