I first ssh into the Master Node.
When I run kubectl get svc
I get the following output:
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
python-app-service   LoadBalancer   10.110.157.42   <pending>     5000:30008/TCP   68m
I then run curl 10.110.157.42:5000
and I get the following message:
curl: (7) Failed connect to 10.110.157.42:5000; Connection refused
Below I have posted my Dockerfile, deployment file, service file, and Python application file. When I run the Docker image on its own, it works fine. However, when I try to expose the pod with a Kubernetes service, I am unable to make calls to it. What am I doing wrong? Also, please let me know if I left out any necessary information. Thank you!
The Kubernetes cluster was created with kubeadm, using the Flannel CNI.
Deployment yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-api
  labels:
    app: my-python-app
    type: back-end
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-python-app
      type: backend
  template:
    metadata:
      name: python-api-pod
      labels:
        app: my-python-app
        type: backend
    spec:
      containers:
        - name: restful-python-example
          image: mydockerhub/restful-python-example
          ports:
            - containerPort: 5000
Service yaml file:
apiVersion: v1
kind: Service
metadata:
  name: python-app-service
spec:
  type: LoadBalancer
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 30008
  selector:
    app: my-python-app
    type: backend
Python application source - restful.py:
#!/usr/bin/python3
from flask import Flask, jsonify, request, abort
from flask_restful import Api, Resource
import jsonpickle

app = Flask(__name__)
api = Api(app)

# Creating an empty dictionary and initializing user id to 0.. will increment every time a person makes a POST request.
# This is bad practice but only using it for the example. Most likely you will be pulling this information from a
# database.
user_dict = {}
user_id = 0


# Define a class and pass it a Resource. These methods require an ID
class User(Resource):
    @staticmethod
    def get(path_user_id):
        if path_user_id not in user_dict:
            abort(400)
        return jsonify(jsonpickle.encode(user_dict.get(path_user_id, "This user does not exist")))

    @staticmethod
    def put(path_user_id):
        update_and_add_user_helper(path_user_id, request.get_json())

    @staticmethod
    def delete(path_user_id):
        user_dict.pop(path_user_id, None)


# Get all users and add new users
class UserList(Resource):
    @staticmethod
    def get():
        return jsonify(jsonpickle.encode(user_dict))

    @staticmethod
    def post():
        global user_id
        user_id = user_id + 1
        update_and_add_user_helper(user_id, request.get_json())


# Since post and put are doing pretty much the same thing, I extracted the logic from both and put it in a separate
# method to follow DRY principles.
def update_and_add_user_helper(u_id, request_payload):
    name = request_payload["name"]
    age = request_payload["age"]
    address = request_payload["address"]
    city = request_payload["city"]
    state = request_payload["state"]
    zip_code = request_payload["zip"]
    user_dict[u_id] = Person(name, age, address, city, state, zip_code)


# Represents a user's information
class Person:
    def __init__(self, name, age, address, city, state, zip_code):
        self.name = name
        self.age = age
        self.address = address
        self.city = city
        self.state = state
        self.zip_code = zip_code


# Add a resource to the api. You need to give the class name and the URI.
api.add_resource(User, "/users/<int:path_user_id>")
api.add_resource(UserList, "/users")

if __name__ == "__main__":
    app.run()
Dockerfile:
FROM python:3
WORKDIR /usr/src/app
RUN pip install flask
RUN pip install flask_restful
RUN pip install jsonpickle
COPY . .
CMD python restful.py
kubectl describe svc python-app-service
Name: python-app-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=my-python-app,type=backend
Type: LoadBalancer
IP: 10.110.157.42
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
NodePort: <unset> 30008/TCP
Endpoints: 10.244.3.24:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
So the reason I was unable to connect was that I never exposed the port in my Dockerfile.
My Dockerfile should have been:
FROM python:3
WORKDIR /usr/src/app
RUN pip install flask
RUN pip install flask_restful
RUN pip install jsonpickle
COPY . .
EXPOSE 5000
CMD python restful.py
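As a side note, once the image is rebuilt and the deployment restarted, one way to check reachability from outside the cluster is to hit the NodePort directly. Below is a minimal sketch, assuming the requests package is installed and using a placeholder node IP (get a real one from kubectl get nodes -o wide); the /users route comes from restful.py above.
import requests

NODE_IP = "192.168.1.10"   # hypothetical node IP, replace with a real node address
NODE_PORT = 30008          # nodePort from the Service definition above

resp = requests.get(f"http://{NODE_IP}:{NODE_PORT}/users", timeout=5)
print(resp.status_code, resp.text)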
Related
I am very new to working with OCP. I have a task to schedule a curl call via a cron mechanism, but I'm unable to figure out where to pass the curl statement.
I'm not sure how to even start. I looked up some examples but did not see anything that matches my requirement.
OCP is based on Kubernetes. In Kubernetes you have the CronJob resource, which seems to be what you're looking for: it allows you to run a job on a specific schedule.
As you need to use curl, you can use the curlimages/curl image, which has the curl binary included:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-curl-job
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: curl-job
              image: curlimages/curl
              imagePullPolicy: IfNotPresent
              args:
                - "http://your.url.you.want.to.curl"
          restartPolicy: Never
I have a Flask app that I am running through Docker, and when I try to access the application on localhost:8000 I get the error message in the subject line. I believe the issue is that the Flask application is not recognizing my application's SECRET_KEY, but I'm not sure how to fix it.
Here is my app structure (condensed for clarity):
config/
-- settings.py
instance/
-- settings.py
myapp/
-- app.py
blueprints/
user/
-- models.py
.env
docker-compose
Dockerfile
My app-factory function looks like this in app.py:
def create_app(settings_override=None):
    """
    Create a Flask application using the app factory pattern.

    :param settings_override: Override settings
    :return: Flask app
    """
    app = Flask(__name__, instance_relative_config=True)

    app.config.from_object('config.settings')
    app.config.from_pyfile('settings.py', silent=True)

    if settings_override:
        app.config.update(settings_override)

    app.register_blueprint(admin)
    app.register_blueprint(page)
    app.register_blueprint(contact)
    app.register_blueprint(user)
    extensions(app)
    authentication(app, User)

    return app
The error is being triggered in the function called authentication:
def authentication(app, user_model):
    """
    Initialize the Flask-Login extension (mutates the app passed in).

    :param app: Flask application instance
    :param user_model: Model that contains the authentication information
    :type user_model: SQLAlchemy model
    :return: None
    """
    login_manager.login_view = 'user.login'

    @login_manager.user_loader
    def load_user(uid):
        return user_model.query.get(uid)

    @login_manager.token_loader
    def load_token(token):
        duration = app.config['REMEMBER_COOKIE_DURATION'].total_seconds()
        serializer = URLSafeTimedSerializer(app.secret_key)
        data = serializer.loads(token, max_age=duration)
        user_uid = data[0]
        return user_model.query.get(user_uid)
The error is triggered on the line that says data = serializer.loads(token, max_age=duration).
The token is usually generated from the secret_key of the application.
Here are some examples from my User class where a token is generated:
def serialize_token(self, expiration=3600):
    """
    Sign and create a token that can be used for things such as resetting
    a password or other tasks that involve a one off token.

    :param expiration: Seconds until it expires, defaults to 1 hour
    :type expiration: int
    :return: JSON
    """
    private_key = current_app.config['SECRET_KEY']
    serializer = TimedJSONWebSignatureSerializer(private_key, expiration)
    return serializer.dumps({'user_email': self.email}).decode('utf-8')
The SECRET_KEY variable is being set from my settings.py file from my config folder. Here is its contents:
from datetime import timedelta
DEBUG = True
SERVER_NAME = 'localhost:8000'
SECRET_KEY = 'insecurekeyfordev'
# Flask-Mail.
MAIL_DEFAULT_SENDER = 'contact@local.host'
MAIL_SERVER = 'smtp.gmail.com'
MAIL_PORT = 587
MAIL_USE_TLS = True
MAIL_USE_SSL = False
MAIL_USERNAME = 'you@gmail.com'
MAIL_PASSWORD = 'awesomepassword'
# Celery.
CELERY_BROKER_URL = 'redis://:devpassword@redis:6379/0'
CELERY_RESULT_BACKEND = 'redis://:devpassword@redis:6379/0'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_REDIS_MAX_CONNECTIONS = 5
# SQLAlchemy.
db_uri = 'postgresql://snakeeyes:devpassword@postgres:5432/snakeeyes'
SQLALCHEMY_DATABASE_URI = db_uri
SQLALCHEMY_TRACK_MODIFICATIONS = False
# User.
SEED_ADMIN_EMAIL = 'dev#local.host'
SEED_ADMIN_PASSWORD = 'devpassword'
REMEMBER_COOKIE_DURATION = timedelta(days=90)
I don't know why this information isn't loading correctly in the app, but when I run docker-compose up --build I get the error message in the title.
If it's at all useful here are the contents of my docker files.
docker-compose.yml
version: '2'

services:
  postgres:
    image: 'postgres:9.5'
    env_file:
      - '.env'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    ports:
      - '5432:5432'

  redis:
    image: 'redis:3.0-alpine'
    command: redis-server --requirepass devpassword
    volumes:
      - 'redis:/var/lib/redis/data'
    ports:
      - '6379:6379'

  website:
    build: .
    command: >
      gunicorn -b 0.0.0.0:8000
        --access-logfile -
        --reload
        "snakeeyes.app:create_app()"
    env_file:
      - '.env'
    volumes:
      - '.:/snakeeyes'
    ports:
      - '8000:8000'

  celery:
    build: .
    command: celery worker -l info -A snakeeyes.blueprints.contact.tasks
    env_file:
      - '.env'
    volumes:
      - '.:/snakeeyes'

volumes:
  postgres:
  redis:
And my DOCKERFILE:
FROM python:3.7.5-slim-buster
MAINTAINER My Name <myname@gmail.com>
RUN apt-get update && apt-get install -qq -y \
build-essential libpq-dev --no-install-recommends
ENV INSTALL_PATH /myapp
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
RUN pip install --editable .
CMD gunicorn -b 0.0.0.0:8000 --access-logfile - "myapp.app:create_app()"
I'm using Docker, Selenium, and Django.
I just realised I was running my tests against my production database, while I wanted to test against the database that StaticLiveServerTestCase generates itself.
I tried to follow that tutorial:
@override_settings(ALLOWED_HOSTS=['*'])
class BaseTestCase(StaticLiveServerTestCase):
    host = '0.0.0.0'

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.host = socket.gethostbyname(socket.gethostname())
        cls.selenium = webdriver.Remote(
            command_executor='http://hub:4444/wd/hub',
            desired_capabilities=DesiredCapabilities.CHROME,
        )
        cls.selenium.implicitly_wait(5)

    @classmethod
    def tearDownClass(cls):
        cls.selenium.quit()
        super().tearDownClass()


class MyTest(BaseTestCase):
    def test_simple(self):
        self.selenium.get(self.live_server_url)
I get no error when connecting to the chrome hub, but when I print my page_source I'm not on my Django app but on a Chrome error page. Here is a part of it:
<div class="error-code" jscontent="errorCode" jstcache="7">ERR_CONNECTION_REFUSED</div>
I'm using docker-compose. Here is my selenium.yml:
chrome:
  image: selenium/node-chrome:3.11.0-dysprosium
  volumes:
    - /dev/shm:/dev/shm
  links:
    - hub
  environment:
    HUB_HOST: hub
    HUB_PORT: '4444'

hub:
  image: selenium/hub:3.11.0-dysprosium
  ports:
    - "4444:4444"
  expose:
    - "4444"

app:
  links:
    - hub
I guess I did something wrong in my docker-compose file, but I can't manage to figure out what.
Thanks in advance!
PS: live_server_url = http://localhost:8081
You need to put the container_name of the container that is running Django/the tests as the host when using docker-compose, i.e.
host = 'app'
For a more detailed discussion see this question
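For reference, a minimal sketch of how the base class from the question might look after that change, assuming the Django/test container is the compose service named app and the hub service is named hub as in the compose file above:
from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from django.test import override_settings
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities


@override_settings(ALLOWED_HOSTS=['*'])
class BaseTestCase(StaticLiveServerTestCase):
    host = 'app'  # the compose service / container_name running the tests

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.selenium = webdriver.Remote(
            command_executor='http://hub:4444/wd/hub',
            desired_capabilities=DesiredCapabilities.CHROME,
        )
        cls.selenium.implicitly_wait(5)

    @classmethod
    def tearDownClass(cls):
        cls.selenium.quit()
        super().tearDownClass()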
I have two dockerized applications which need to run in Kubernetes.
Here is the scenario which needs to be achieved.
Docker-1 is a Flask application.
Docker-2 is a Python script that will take input from Docker-1, execute, and write some files to a shared volume of the Docker-1 container.
Here is the Flask web-app code.
from flask import Flask, request, Response, jsonify

app = Flask(__name__)


@app.route('/')
def root():
    return "The API is working fine"


@app.route('/run-docker')
def run_docker_2():
    args = "input_combo"
    query = <sql query>
    <initiate the docker run and pass params>
    exit
    # No return message, need to run as async


if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=8080, threaded=True)
Dockerfile:
FROM ubuntu:latest
MAINTAINER Abhilash KK "abhilash.kk@searshc.com"
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential python-tk
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["/usr/bin/python"]
CMD ["app.py"]
requirements.txt
flask
Python script for the second Docker image, start_docker.py:
import sys

input_combo = sys.argv[1]
query = sys.argv[2]


def function_to_run(input_combination, query):
    # starting the model final creating file


function_to_run(input_combo, query)
Dockerfile 2:
FROM python
COPY . /script
WORKDIR /script
CMD ["python", "start_docker.py"]
Please help me connect the Docker images, or let me know any other way to achieve this. The basic requirement is to add a message to some queue, have that queue be polled at a time interval, and start the process in FIFO manner.
Any other approach in a GCP service to initiate an async job that takes input from the client and creates a file accessible from the web-app Python would also work.
First, create a Pod running the "Docker-1" application. Then use the Kubernetes Python client to spawn a second pod with "Docker-2".
You can share a volume between your pods in order to return the data to Docker-1. In my code sample I'm using a hostPath volume, but you need to ensure that both pods run on the same node. I added that code for readability.
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: docker1
  labels:
    app: docker1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker1
  template:
    metadata:
      labels:
        app: docker1
    spec:
      containers:
        - name: docker1
          image: abhilash/docker1
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /shared
              name: shared-volume
      volumes:
        - name: shared-volume
          hostPath:
            path: /shared
The code of run_docker_2 handler:
from kubernetes import client, config
...
args = "input_combo"
config.load_incluster_config()
v1 = client.CoreV1Api()  # API client used for create_namespaced_pod below
pod = client.V1Pod()
pod.metadata = client.V1ObjectMeta(name="docker2")
container = client.V1Container(name="docker2")
container.image = "abhilash/docker2"
container.args = [args]
volumeMount = client.V1VolumeMount(name="shared", mount_path="/shared")
container.volume_mounts = [volumeMount]
hostpath = client.V1HostPathVolumeSource(path="/shared")
volume = client.V1Volume(name="shared")
volume.host_path = hostpath
spec = client.V1PodSpec(containers=[container])
spec.volumes = [volume]
pod.spec = spec
v1.create_namespaced_pod(namespace="default", body=pod)
return "OK"
A handler to read the returned results:
@app.route('/read-results')
def run_read():
    with open("/shared/results.data") as file:
        return file.read()
Note that it could be useful to add a watcher to wait for the pod to finish the job and then do some cleanup (delete the pod for instance)
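A rough sketch of such a watcher, assuming the spawned pod is named docker2 in the default namespace as in the handler above:
from kubernetes import client, config, watch

config.load_incluster_config()
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod,
                      namespace="default",
                      field_selector="metadata.name=docker2"):
    phase = event['object'].status.phase
    if phase in ("Succeeded", "Failed"):
        w.stop()
        # clean up the finished pod so it can be recreated next time
        v1.delete_namespaced_pod(name="docker2", namespace="default",
                                 body=client.V1DeleteOptions())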
From what I can understand, you'd want the so-called "sidecar pattern": you can run multiple containers in one pod and share a volume, e.g.:
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - mountPath: /srv/www
          name: www-data
          readOnly: true
    - name: git-monitor
      image: kubernetes/git-monitor
      env:
        - name: GIT_REPO
          value: http://github.com/some/repo.git
      volumeMounts:
        - mountPath: /data
          name: www-data
  volumes:
    - name: www-data
      emptyDir: {}
You could also benefit from getting to know the basics of how Kubernetes works: Kubernetes Basics
So I have just started using the Kubernetes API server and I tried this example:
from kubernetes import client, config


def main():
    # Configs can be set in Configuration class directly or using helper
    # utility. If no argument provided, the config will be loaded from
    # default location.
    config.load_kube_config()

    v1 = client.CoreV1Api()
    print("Listing pods with their IPs:")
    ret = v1.list_pod_for_all_namespaces(watch=False)
    for i in ret.items:
        print("%s\t%s\t%s" %
              (i.status.pod_ip, i.metadata.namespace, i.metadata.name))


if __name__ == '__main__':
    main()
This worked, but it returned the pods running on my local Minikube. I want to get the pods that are on the Kubernetes server here:
http://192.168.237.115:8080
How do I do that?
When I run kubectl config view, I get this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/piyush/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/piyush/.minikube/apiserver.crt
    client-key: /home/piyush/.minikube/apiserver.key
I know this is for the local cluster I set up. I want to know how to modify this to make API requests to the Kubernetes server at http://192.168.237.115:8080
You can actually create a simple API wrapper. This way you can pass in different YAML configuration files, which I imagine may have different hosts:
import yaml
from kubernetes import client
from kubernetes.client import Configuration
from kubernetes.config import kube_config


class K8s(object):
    def __init__(self, configuration_yaml):
        self.configuration_yaml = configuration_yaml
        self._configuration_yaml = None

    @property
    def config(self):
        with open(self.configuration_yaml, 'r') as f:
            if self._configuration_yaml is None:
                self._configuration_yaml = yaml.load(f)
        return self._configuration_yaml

    @property
    def client(self):
        k8_loader = kube_config.KubeConfigLoader(self.config)
        call_config = type.__call__(Configuration)
        k8_loader.load_and_set(call_config)
        Configuration.set_default(call_config)
        return client.CoreV1Api()


# Instantiate your kubernetes class and pass in config
kube_one = K8s(configuration_yaml='~/.kube/config1')
kube_one.client.list_pod_for_all_namespaces(watch=False)

kube_two = K8s(configuration_yaml='~/.kube/config2')
kube_two.client.list_pod_for_all_namespaces(watch=False)
There is also another neat reference in libcloud: https://github.com/apache/libcloud/blob/trunk/libcloud/container/drivers/kubernetes.py
Good luck! Hope this helps! :)
I have two solutions for you:
[preferred] Configure your kubectl (i.e. ~/.kube/config) file. Once kubectl works with your cluster, the Python client should automatically work with load_kube_config. See here for configuring kubectl: https://kubernetes.io/docs/tasks/administer-cluster/share-configuration/
You can configure the Python client directly. For a complete list of configurations, look at: https://github.com/kubernetes-client/python-base/blob/8704ce39c241f3f184d01833dcbaf1d1fb14a2dc/configuration.py#L48
You may need to set some of those configurations for your client to connect to your cluster. For example, if you don't have any certificate or SSL enabled:
from kubernetes import client, configuration

def main():
    configuration.host = "http://192.168.237.115:8080"
    configuration.api_key_prefix['authorization'] = "Bearer"
    configuration.api_key['authorization'] = "YOUR_TOKEN"

    v1 = client.CoreV1Api()
    ...
You may need to set other configurations such as username, api_key, etc. That's why I think it would be easier if you follow the first solution.
config.load_kube_config() takes context as a parameter. If passed None (the default) then the current context will be used. Your current context is probably your minikube.
See here:
https://github.com/kubernetes-incubator/client-python/blob/436351b027df2673869ee00e0ff5589e6b3e2b7d/kubernetes/config/kube_config.py#L283
config.load_kube_config(context='some context')
If you are not familiar with Kubernetes contexts: Kubernetes stores your configuration under ~/.kube/config (the default location). In it you will find a context definition for every cluster you may have access to. A field called current-context defines your current context.
You can issue the following commands:
kubectl config current-context to see the current context
kubectl config view to view all the configuration
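If it helps, here is a small sketch that lists the contexts found in ~/.kube/config and then loads a specific one with the Python client (the context name my-remote-cluster is hypothetical):
from kubernetes import client, config

contexts, active_context = config.list_kube_config_contexts()
print([c['name'] for c in contexts], "active:", active_context['name'])

config.load_kube_config(context='my-remote-cluster')  # pick a non-minikube context
v1 = client.CoreV1Api()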
Can you show me the file ~/.kube/config?
If you update the API server in it, the kubernetes Python module will automatically pick up the new API server you nominated.
- cluster:
    certificate-authority: [Update real ca.crt here]
    server: http://192.168.237.115:8080
There are other changes needed in ~/.kube/config as well; you'd be better off getting the config from the remote Kubernetes server directly.
After successfully configuring access to the remote Kubernetes API server, you should be able to run kubectl and get the deployments, daemons, etc.
Then you should be fine running the Python kubernetes SDK.
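As a final sanity check, here is a minimal sketch that lists deployments through the Python SDK once kubectl itself works against the remote API server (the default namespace is just an example):
from kubernetes import client, config

config.load_kube_config()  # picks up the updated ~/.kube/config
apps_v1 = client.AppsV1Api()
for dep in apps_v1.list_namespaced_deployment(namespace="default").items:
    print(dep.metadata.name, dep.status.ready_replicas)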