I have a Flask app that I am running through Docker, and when I try to access the application on localhost:8000 I get the error message in the subject line. I believe the issue is that the Flask application is not recognizing my application's SECRET_KEY, but I'm not sure how to fix it.
Here is my app structure (condensed for clarity):
config/
-- settings.py
instance/
-- settings.py
myapp/
-- app.py
-- blueprints/
---- user/
------ models.py
.env
docker-compose.yml
Dockerfile
My app-factory function looks like this in app.py:
def create_app(settings_override=None):
    """
    Create a Flask application using the app factory pattern.

    :param settings_override: Override settings
    :return: Flask app
    """
    app = Flask(__name__, instance_relative_config=True)

    app.config.from_object('config.settings')
    app.config.from_pyfile('settings.py', silent=True)

    if settings_override:
        app.config.update(settings_override)

    app.register_blueprint(admin)
    app.register_blueprint(page)
    app.register_blueprint(contact)
    app.register_blueprint(user)

    extensions(app)
    authentication(app, User)

    return app
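Since the factory loads config/settings.py first and then instance/settings.py (silently, if it exists), a value in the instance file overrides the one in config/. A quick way to see which value actually wins is something like this (purely a debugging sketch, assuming create_app is importable from the layout above; this is not part of my real code):

from myapp.app import create_app   # assumed import path based on myapp/app.py

app = create_app()
print('SECRET_KEY:', app.config.get('SECRET_KEY'))
print('REMEMBER_COOKIE_DURATION:', app.config.get('REMEMBER_COOKIE_DURATION'))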
The error is being triggered in the function called authentication:
def authentication(app, user_model):
    """
    Initialize the Flask-Login extension (mutates the app passed in).

    :param app: Flask application instance
    :param user_model: Model that contains the authentication information
    :type user_model: SQLAlchemy model
    :return: None
    """
    login_manager.login_view = 'user.login'

    @login_manager.user_loader
    def load_user(uid):
        return user_model.query.get(uid)

    @login_manager.token_loader
    def load_token(token):
        duration = app.config['REMEMBER_COOKIE_DURATION'].total_seconds()
        serializer = URLSafeTimedSerializer(app.secret_key)

        data = serializer.loads(token, max_age=duration)
        user_uid = data[0]

        return user_model.query.get(user_uid)
The error is raised on the line data = serializer.loads(token, max_age=duration). The token is normally generated from the application's secret_key.
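As I understand it, itsdangerous will only successfully load a token that was signed with the same key. A minimal standalone sketch of that round trip (the 'insecurekeyfordev' value is just my dev key from settings.py, and the payload is made up):

from itsdangerous import URLSafeTimedSerializer, BadSignature, SignatureExpired

serializer = URLSafeTimedSerializer('insecurekeyfordev')
token = serializer.dumps([42])   # sign a payload (e.g. a user id)

try:
    data = serializer.loads(token, max_age=3600)   # works: same key, not expired
except SignatureExpired:
    print('token expired')
except BadSignature:
    print('token was signed with a different SECRET_KEY')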
Here are some examples from my User class where a token is generated:
def serialize_token(self, expiration=3600):
    """
    Sign and create a token that can be used for things such as resetting
    a password or other tasks that involve a one off token.

    :param expiration: Seconds until it expires, defaults to 1 hour
    :type expiration: int
    :return: JSON
    """
    private_key = current_app.config['SECRET_KEY']

    serializer = TimedJSONWebSignatureSerializer(private_key, expiration)
    return serializer.dumps({'user_email': self.email}).decode('utf-8')
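For completeness, a token like this can only be read back with the same SECRET_KEY. A minimal sketch of the matching verification step (deserialize_token is a hypothetical name, not something from my actual codebase; it assumes the same imports as above):

def deserialize_token(token):
    # Hypothetical counterpart to serialize_token(): only succeeds if the token
    # was signed with this app's SECRET_KEY and has not expired.
    private_key = current_app.config['SECRET_KEY']
    serializer = TimedJSONWebSignatureSerializer(private_key)

    try:
        return serializer.loads(token)   # e.g. {'user_email': '...'}
    except Exception:                    # BadSignature / SignatureExpired
        return None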
The SECRET_KEY variable is being set in the settings.py file in my config folder. Here are its contents:
from datetime import timedelta

DEBUG = True

SERVER_NAME = 'localhost:8000'
SECRET_KEY = 'insecurekeyfordev'

# Flask-Mail.
MAIL_DEFAULT_SENDER = 'contact@local.host'
MAIL_SERVER = 'smtp.gmail.com'
MAIL_PORT = 587
MAIL_USE_TLS = True
MAIL_USE_SSL = False
MAIL_USERNAME = 'you@gmail.com'
MAIL_PASSWORD = 'awesomepassword'

# Celery.
CELERY_BROKER_URL = 'redis://:devpassword@redis:6379/0'
CELERY_RESULT_BACKEND = 'redis://:devpassword@redis:6379/0'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_REDIS_MAX_CONNECTIONS = 5

# SQLAlchemy.
db_uri = 'postgresql://snakeeyes:devpassword@postgres:5432/snakeeyes'
SQLALCHEMY_DATABASE_URI = db_uri
SQLALCHEMY_TRACK_MODIFICATIONS = False

# User.
SEED_ADMIN_EMAIL = 'dev@local.host'
SEED_ADMIN_PASSWORD = 'devpassword'
REMEMBER_COOKIE_DURATION = timedelta(days=90)
I don't know why this information isn't loading correctly in the app, but when I run docker-compose up --build I get the error message in the title.
In case it's useful, here are the contents of my Docker files.
docker-compose.yml
version: '2'

services:
  postgres:
    image: 'postgres:9.5'
    env_file:
      - '.env'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    ports:
      - '5432:5432'

  redis:
    image: 'redis:3.0-alpine'
    command: redis-server --requirepass devpassword
    volumes:
      - 'redis:/var/lib/redis/data'
    ports:
      - '6379:6379'

  website:
    build: .
    command: >
      gunicorn -b 0.0.0.0:8000
        --access-logfile -
        --reload
        "snakeeyes.app:create_app()"
    env_file:
      - '.env'
    volumes:
      - '.:/snakeeyes'
    ports:
      - '8000:8000'

  celery:
    build: .
    command: celery worker -l info -A snakeeyes.blueprints.contact.tasks
    env_file:
      - '.env'
    volumes:
      - '.:/snakeeyes'

volumes:
  postgres:
  redis:
And my Dockerfile:
FROM python:3.7.5-slim-buster
MAINTAINER My Name <myname@gmail.com>
RUN apt-get update && apt-get install -qq -y \
build-essential libpq-dev --no-install-recommends
ENV INSTALL_PATH /myapp
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
RUN pip install --editable .
CMD gunicorn -b 0.0.0.0:8000 --access-logfile - "myapp.app:create_app()"
Related
I'm trying to deploy my Flask app in a Docker container. When I run the app on my local machine, the log file is written and updated normally. After I deployed it in Docker, the app runs normally but the log file doesn't appear.
This is my Flask code:
Python version: 3.7, Flask version: 1.1.2
from elasticapm.contrib.flask import ElasticAPM
from elasticapm.handlers.logging import LoggingHandler
from flask import Flask, jsonify, request
import logging
from logging.handlers import TimedRotatingFileHandler

formatter = logging.Formatter("[%(asctime)s - %(levelname)s] %(message)s")
handler = TimedRotatingFileHandler('./log/apm-test.log', when='midnight', interval=1, encoding='utf8')
handler.suffix = '%Y_%m_%d'
handler.setFormatter(formatter)

logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.addHandler(handler)

server_domain = '0.0.0.0'
app = Flask(__name__)

app.config['ELASTIC_APM'] = {
    # Set required service name. Allowed characters:
    # a-z, A-Z, 0-9, -, _, and space
    'SERVICE_NAME': 'Master Node',
    # Use if APM Server requires a token
    'SECRET_TOKEN': '',
    # Set custom APM Server URL (default: http://localhost:8200)
    'ELASTIC_APM_SERVER_URL': 'http://elastic-agent:8200',
    'ENVIRONMENT': 'production'
}
apm = ElasticAPM(app)

@app.route('/', methods=['post'])
def index():
    app.logger.info("info log -- POST / 200")
    return 'Hello!'

@app.route('/route1', methods=['get'])
def passdata():
    a = {'test01': 1, 'test02': 2}
    app.logger.info("info log -- GET /route1 200")
    return jsonify(a)

@app.route('/route2', methods=['get'])
def postdata():
    b = {'test003': 3, 'test004': 4}
    app.logger.info('info: "get /route2 HTTP1.1" --200')
    return jsonify(b)

if __name__ == '__main__':
    apm_handler = LoggingHandler(client=apm.client)
    apm_handler.setLevel(logging.INFO)
    app.logger.addHandler(apm_handler)
    app.run(host=server_domain, port=5000)
This is my Dockerfile and command:
Docker Desktop (Windows) version: 4.13
Dockerfile
FROM python:3.7-slim
COPY ./apm-test.py /usr/share/apm-test.py
RUN pip install elastic-apm flask blinker
ENV ELASTIC_APM_SERVER_URL='http://elastic-agent:8200'
EXPOSE 5000
CMD ["python", "./apm-test.py"]
Docker command
docker run -d --name apm-client00 apm-client00
I tried to mount the log folder and the exact log file from my local machine into the same relative container path that the script writes to locally, but the result is the same.
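One detail that may matter: TimedRotatingFileHandler resolves a relative path like './log/apm-test.log' against the process's current working directory, which inside the container is not necessarily where I expect it, and the directory has to exist before the handler is created. A minimal sketch that makes the location explicit and configurable (LOG_DIR is an assumed variable name, not something from my original code):

import os
import logging
from logging.handlers import TimedRotatingFileHandler

# Resolve the log directory explicitly instead of relying on the container's CWD.
log_dir = os.environ.get('LOG_DIR', '/usr/share/log')
os.makedirs(log_dir, exist_ok=True)

handler = TimedRotatingFileHandler(
    os.path.join(log_dir, 'apm-test.log'),
    when='midnight', interval=1, encoding='utf8'
)

That directory could then be bind-mounted when starting the container (for example with docker run -v, mapping a host folder onto LOG_DIR) so the rotated files show up on the host.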
I am having trouble getting my docker-compose.yml file to run. I am running the command docker-compose -f docker-compose.yml and am getting the error "TypeError: You must specify a directory to build in path
[7264] Failed to execute script docker-compose"
The directory all of this lives in is called Labsetup, and inside it I have docker-compose.yml, DockerClient.yml, DockerServer.yml, and a folder called volumes that contains Server.py and Client.py.
So far I have edited my docker-compose.yml file and tried running "docker-compose -f docker-compose.yml up". Below is my docker-compose.yml:
version: "3"
services:
# new code to execute python scripts in /volumes
server:
image: 85345aa61801 # from DockerServer.yml
container_name: server
build: DockerServer.yml
command: python3 ./volumes/Server.py
ports:
- 8080:8080
client:
image: 700c22b0d9d6 # from DockerClient.yml
container_name: client
build: DockerClient.yml
command: python3 ./volumes/Client.py
network_mode: host
depends_on:
- server
# below is original code for containers
malicious-router:
image: handsonsecurity/seed-ubuntu:large
container_name: malicious-router-10.9.0.111
tty: true
cap_add:
- ALL
sysctls:
- net.ipv4.ip_forward=1
- net.ipv4.conf.all.send_redirects=0
- net.ipv4.conf.default.send_redirects=0
- net.ipv4.conf.eth0.send_redirects=0
privileged: true
volumes:
- ./volumes:/volumes
networks:
net-10.9.0.0:
ipv4_address: 10.9.0.111
command: bash -c "
ip route add 192.168.60.0/24 via 10.9.0.11 &&
tail -f /dev/null "
HostB1:
image: handsonsecurity/seed-ubuntu:large
container_name: host-192.168.60.5
tty: true
cap_add:
- ALL
volumes:
- ./volumes:/volumes
networks:
net-192.168.60.0:
ipv4_address: 192.168.60.5
command: bash -c "python3 Server.py"
HostB2:
image: handsonsecurity/seed-ubuntu:large
container_name: host-192.168.60.6
tty: true
cap_add:
- ALL
volumes:
- ./volumes:/volumes
networks:
net-192.168.60.0:
ipv4_address: 192.168.60.6
command: bash -c "python3 Client.py"
HostB3:
image: handsonsecurity/seed-ubuntu:large
container_name: host-192.168.60.7
tty: true
cap_add:
- ALL
volumes:
- ./volumes:/volumes
networks:
net-192.168.60.0:
ipv4_address: 192.168.60.7
command: bash -c "python3 Client.py 8080"
Router:
image: handsonsecurity/seed-ubuntu:large
container_name: router
tty: true
cap_add:
- ALL
sysctls:
- net.ipv4.ip_forward=1
volumes:
- ./volumes:/volumes
networks:
net-10.9.0.0:
ipv4_address: 10.9.0.11
net-192.168.60.0:
ipv4_address: 192.168.60.11
command: bash -c "
ip route del default &&
ip route add default via 10.9.0.1 &&
tail -f /dev/null "
networks:
net-192.168.60.0:
name: net-192.168.60.0
ipam:
config:
- subnet: 192.168.60.0/24
net-10.9.0.0:
name: net-10.9.0.0
ipam:
config:
- subnet: 10.9.0.0/24
My DockerServer.yml
# Dockerfile.
# FROM ubuntu:20.04 # FROM python:latest
# ... most recent "image" created since DockerServer build = f48ea80eae5a <-- needed in FROM
# after running command: docker build -f DockerServer.yml .
# ... it output this upon completion ...
# ---> Running in 533f2cbd3135
# Removing intermediate container 533f2cbd3135 ---> 85345aa61801 Successfully built 85345aa61801
# FROM requires this syntax:
# FROM [--platform=<platform>] <image> [AS <name>]
FROM python3:3.8.5 85345aa61801 AS serverImg
ADD Server.py /volumes/
WORKDIR /volumes/
My DockerClient.yml
# Dockerfile.
# FROM ubuntu:20.04 server # python:latest #FROM ubuntu:latest ?
# step1/3 image --> f48ea80eae5a ... Successfully built 700c22b0d9d6
# image needed for 2nd arg
# after running command: docker build -f DockerClient.yml
# ... it output this upon completion ...
# Removing intermediate container 29037f3d23fb ---> 700c22b0d9d6 Successfully built 700c22b0d9d6
# FROM requires this syntax:
# FROM [--platform=<platform>] <image> [AS <name>]
FROM python3:3.8.5 700c22b0d9d6 AS clientImg
ADD Client.py /volumes/
WORKDIR /volumes/
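For context on the error message itself: as far as I can tell, the build: key in Compose expects a directory (the build context), not a file, so pointing it at DockerServer.yml is what triggers "You must specify a directory to build in path". A hedged sketch of the long form, where the dockerfile: key names the file inside that context (these values just mirror my layout and are not tested):

services:
  server:
    build:
      context: .                    # directory to use as the build context
      dockerfile: DockerServer.yml  # Dockerfile to use within that context
    command: python3 ./volumes/Server.py

  client:
    build:
      context: .
      dockerfile: DockerClient.yml
    command: python3 ./volumes/Client.py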
I am creating a project for a networking class that lets users chat in a terminal. I have two Python scripts: Server.py, which starts the server and reports who is in the chat room, and Client.py, which asks the user to enter a name so they can chat. I am running all of this on a VirtualBox Ubuntu 20.04 VM.
Code for Server.py
import socket
import _thread
import threading
from datetime import datetime


class Server():
    def __init__(self):
        # For remembering users
        self.users_table = {}

        # Create a TCP/IP socket and bind it to the port
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server_address = ('localhost', 8080)
        self.socket.bind(self.server_address)
        self.socket.setblocking(1)
        self.socket.listen(10)
        print('Starting up on {} port {}'.format(*self.server_address))

        self._wait_for_new_connections()

    def _wait_for_new_connections(self):
        while True:
            connection, _ = self.socket.accept()
            _thread.start_new_thread(self._on_new_client, (connection,))

    def _on_new_client(self, connection):
        try:
            # Declare the client's name
            client_name = connection.recv(64).decode('utf-8')
            self.users_table[connection] = client_name
            print(f'{self._get_current_time()} {client_name} joined the room !!')

            while True:
                data = connection.recv(64).decode('utf-8')
                if data != '':
                    self.multicast(data, owner=connection)
                else:
                    return
        except:
            print(f'{self._get_current_time()} {client_name} left the room !!')
            self.users_table.pop(connection)
            connection.close()

    def _get_current_time(self):
        return datetime.now().strftime("%H:%M:%S")

    def multicast(self, message, owner=None):
        for conn in self.users_table:
            data = f'{self._get_current_time()} {self.users_table[owner]}: {message}'
            conn.sendall(bytes(data, encoding='utf-8'))


if __name__ == "__main__":
    Server()
Code for Client.py
import sys
import socket
import threading


class Client():
    def __init__(self, client_name):
        client_name = input("Choose Your Nickname:")

        # Create a TCP/IP socket and connect the socket to the port
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server_address = ('localhost', 8080)
        self.socket.connect(self.server_address)
        self.socket.setblocking(1)
        print("{} is connected to the server, you may now chat.".format(client_name))
        self.client_name = client_name

        send = threading.Thread(target=self._client_send)
        send.start()
        receive = threading.Thread(target=self._client_receive)
        receive.start()

    def _client_send(self):
        self.socket.send(bytes(self.client_name, encoding='utf-8'))
        while True:
            try:
                c = input()
                sys.stdout.write("\x1b[1A\x1b[2K")  # Delete previous line
                self.socket.send(bytes(c, encoding='utf-8'))
            except:
                self.socket.close()
                return

    def _client_receive(self):
        while True:
            try:
                print(self.socket.recv(1024).decode("utf-8"))
            except:
                self.socket.close()
                return


if __name__ == "__main__":
    client_name = sys.argv[1]
    Client(client_name)
Edit: I found an example online, followed it, and changed a few things.
[11/26/21]seed@VM:~/.../Labsetup$ docker build -f DockerServer.yml .
Sending build context to Docker daemon 33.28kB
Step 1/3 : FROM python:latest
latest: Pulling from library/python
647acf3d48c2: Pull complete
b02967ef0034: Pull complete
e1ad2231829e: Pull complete
5576ce26bf1d: Pull complete
a66b7f31b095: Pull complete
05189b5b2762: Pull complete
af08e8fda0d6: Pull complete
287d56f7527b: Pull complete
dc0580965fb6: Pull complete
Digest: sha256:f44726de10d15558e465238b02966a8f83971fd85a4c4b95c263704e6a6012e9
Status: Downloaded newer image for python:latest
---> f48ea80eae5a
Step 2/3 : ADD Server.py /volumes/
---> df60dab46969
Step 3/3 : WORKDIR /volumes/
---> Running in 533f2cbd3135
Removing intermediate container 533f2cbd3135
---> 85345aa61801
Successfully built 85345aa61801
[11/26/21]seed@VM:~/.../Labsetup$ docker build -f DockerClient.yml .
Sending build context to Docker daemon 33.28kB
Step 1/3 : FROM python:latest
---> f48ea80eae5a
Step 2/3 : ADD Client.py /volumes/
---> 76af351c1c7a
Step 3/3 : WORKDIR /volumes/
---> Running in 29037f3d23fb
Removing intermediate container 29037f3d23fb
---> 700c22b0d9d6
Successfully built 700c22b0d9d6
I am trying to run a docker-compose app that has two services: one builds a web server and the other runs the tests against it.
docker-compose.yml
version: "3.7"
services:
web:
build: .
ports:
- "127.0.0.1:5000:5000"
expose:
- 5000
test:
# expose:
# - 5000
depends_on:
- web
build: test_python/.
./Dockerfile
FROM python:buster
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
# Add .cargo/bin to PATH
ENV PATH="/root/.cargo/bin:${PATH}"
# Check cargo is visible
RUN cargo --help
WORKDIR /code
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
EXPOSE 5000
COPY test_python .
CMD [ "python3", "base_routes.py" ]
test_python/Dockerfile
FROM python:buster
RUN pip3 install pytest requests
COPY . .
base_routes.py
from robyn import Robyn, static_file, jsonify
import asyncio

app = Robyn(__file__)
callCount = 0

@app.get("/")
async def h(request):
    print(request)
    global callCount
    callCount += 1
    message = "Called " + str(callCount) + " times"
    return message

@app.get("/test")
async def test(request):
    import os
    path = os.path.abspath(os.path.join(os.path.dirname(os.path.realpath(__file__)), "test_python/index.html"))
    return static_file(path)

@app.get("/jsonify")
async def json_get(request):
    return jsonify({"hello": "world"})

@app.post("/jsonify")
async def json(request):
    print(request)
    return jsonify({"hello": "world"})

@app.post("/post")
async def postreq(request):
    return bytearray(request["body"]).decode("utf-8")

@app.put("/put")
async def putreq(request):
    return bytearray(request["body"]).decode("utf-8")

@app.delete("/delete")
async def deletereq(request):
    return bytearray(request["body"]).decode("utf-8")

@app.patch("/patch")
async def patchreq(request):
    return bytearray(request["body"]).decode("utf-8")

@app.get("/sleep")
async def sleeper():
    await asyncio.sleep(5)
    return "sleep function"

@app.get("/blocker")
def blocker():
    import time
    time.sleep(10)
    return "blocker function"

if __name__ == "__main__":
    app.add_header("server", "robyn")
    app.add_directory(route="/test_dir", directory_path="./test_dir/build", index_file="index.html")
    app.start(port=5000)
These are the files I use in my project. When I try to open 127.0.0.1:5000 from my machine, it shows nothing. However, when I log in to the web container and run curl http://localhost:5000/, it gives the right response.
How can I access it from the host machine?
I had to make the Python server listen on '0.0.0.0'.
I added the following line to my codebase:
app.start(port=5000, url='0.0.0.0')
I currently have a small API written in Flask for interacting with a CNN. I'm setting up the configuration for running it in Docker and everything runs fine; this is my current configuration:
Dockerfile:
FROM python:2.7.15-jessie
RUN mkdir -p usr/src/app
COPY . usr/src/app
WORKDIR usr/src/app
RUN which python
RUN apt-get update && apt-get install -y
RUN pip install flask flask_uploads Werkzeug opencv-python numpy tensorflow
ENV PORT 5000
EXPOSE 5000
CMD ["flask", "run"]
Docker-compose:
version: "2.2"
services:
api:
container_name: "pyserver"
build: .
volumes:
- ".:/usr/src/app"
environment:
FLASK_APP: server.py
ports:
- "5000:5000"
command: ["flask", "run"]
server.py
import os
from base64 import b64encode, b64decode
from flask import Flask, redirect, request, url_for, json, send_file
from flask_uploads import UploadSet, configure_uploads
from werkzeug.utils import secure_filename
from GW_predict import predict

UPLOAD_FOLDER = 'CNN/Files'
ALLOWED_EXTENSIONS = set(['txt', 'csv', 'png', 'jpg', 'jpeg'])

app = Flask(__name__)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
app.config["DEBUG"] = True

def allowed_file(filename):
    ext = '.' in filename and \
          filename.rsplit('.', 1)[1].lower()
    return ext, ext in ALLOWED_EXTENSIONS

@app.route('/status', methods=['GET'])
def status():
    return create_response({'online': True, 'message': 'UP AND RUNNING @ 5000'}, 200)

@app.route('/uploadFile', methods=['POST'])
def upload_file():
    if 'h1' not in request.files or 'l1' not in request.files:
        return create_response({'result': False, 'message': 'File missing'}, 422)

    h1_file = request.files['h1']
    l1_file = request.files['l1']

    if h1_file.filename == '' or l1_file.filename == '':
        return create_response({'result': False, 'message': 'File missing'}, 422)

    h1_ext, h1res = allowed_file(h1_file.filename)
    l1_ext, l1res = allowed_file(l1_file.filename)

    if h1_file and l1_file and h1res and l1res:
        h1_filename = secure_filename(h1_file.filename)
        l1_filename = secure_filename(l1_file.filename)
        h1 = os.path.join(app.config['UPLOAD_FOLDER'], h1_filename)
        l1 = os.path.join(app.config['UPLOAD_FOLDER'], l1_filename)
        h1_file.save(h1)
        l1_file.save(l1)
        # img = b64encode((open(img, "rb").read()))
        if h1_ext == 'png' and l1_ext == 'png':
            result = predict(h1, l1)
            return create_response({'result': True, 'prediction': result}, 200)
        return create_response({'result': False, 'message': 'Images format must be png'}, 422)

    return create_response({'result': False, 'message': 'No allowed file'}, 422)

def create_response(message, status):
    response = app.response_class(
        response=json.dumps(message),
        status=status,
        mimetype='application/json'
    )
    return response

if __name__ == '__main__':
    app.run()
The problem is with the "ports" mapping in my docker-compose file: I expose port 5000 on my host and forward it to the same port in the container, but it does not work.
Inside the container I can hit the endpoint with curl, but from outside the container I cannot. What could be wrong?
Maybe you need to specify the --host=0.0.0.0 flag? (from here).
Can you try to override your command in docker-compose?
version: "2.2"
services:
api:
container_name: "pyserver"
build: .
command: flask run --host=0.0.0.0
volumes:
- ".:/usr/src/app"
environment:
FLASK_APP: server.py
ports:
- "5000:5000"
command: ["flask", "run"]
By the way, I'm not sure the EXPOSE in the Dockerfile is necessary if you only communicate from your host machine (rather than from another container).
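Alternatively, if the container were started with python server.py instead of flask run, the equivalent fix would be to pass the host to app.run() (a minimal sketch, assuming the existing if __name__ == '__main__' block in server.py):

if __name__ == '__main__':
    # Bind to all interfaces so the host's port mapping can reach Flask.
    app.run(host='0.0.0.0', port=5000)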
I have two dockerized applications that need to run in Kubernetes.
Here is the scenario I need to achieve.
Docker-1 is a Flask application.
Docker-2 is a Python script that takes input from Docker-1, executes, and needs to write some files to a volume shared with the Docker-1 container.
Here is the Flask web-app code.
from flask import Flask, request, Response, jsonify

app = Flask(__name__)

@app.route('/')
def root():
    return "The API is working fine"

@app.route('/run-docker')
def run_docker_2():
    args = "input_combo"
    query = <sql query>
    <initiate the docker run and pass params>
    exit
    # No return message, needs to run async

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=8080, threaded=True)
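For illustration only, the "<initiate the docker run and pass params>" placeholder above could look roughly like this with the Docker SDK for Python (the image name docker2 and the /shared mount are assumptions, not names from my real setup); in Kubernetes this is exactly the part I want to replace:

import docker

def run_docker_2_container(input_combo, query):
    # Fire-and-forget: start the Docker-2 container with the two arguments
    # and a shared volume it can write its output file to.
    client = docker.from_env()
    client.containers.run(
        "docker2",                                           # assumed image name for Docker-2
        ["python", "start_docker.py", input_combo, query],   # overrides CMD, passing both args
        volumes={"/shared": {"bind": "/shared", "mode": "rw"}},
        detach=True,
    )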
Dockerfile
FROM ubuntu:latest
MAINTAINER Abhilash KK "abhilash.kk@searshc.com"
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential python-tk
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["/usr/bin/python"]
CMD ["app.py"]
requirements.txt
flask
Python script for the second Docker image, start_docker.py:
import sys

input_combo = sys.argv[1]
query = sys.argv[2]

def function_to_run(input_combination, query):
    # starting the model, finally creating the file
    pass

function_to_run(input_combo, query)
Dockerfile 2
FROM python
COPY . /script
WORKDIR /script
CMD ["python", "start_docker.py"]
Please help me connect the Docker images, or let me know any other way to achieve this. The basic requirement is to add a message to some queue, have that queue be polled at an interval, and start the processing in FIFO order.
Any other approach using a GCP service to initiate an async job that takes input from the client and creates a file accessible from the web-app Python code would also work.
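To make the queue requirement concrete, here is a minimal in-process sketch of the pattern I mean (names like job_queue and worker are just for illustration; a real deployment would use an external message queue rather than an in-process one):

import queue
import threading

job_queue = queue.Queue()  # FIFO by default

def worker():
    # Poll the queue at an interval and process jobs in FIFO order.
    while True:
        try:
            input_combo, query = job_queue.get(timeout=5)  # wait a few seconds per poll
        except queue.Empty:
            continue
        # ... start the Docker-2 / model run here ...
        job_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# The Flask handler would just enqueue and return immediately:
job_queue.put(("input_combo", "<sql query>"))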
First, create a Pod running the "Docker-1" application, then use the Kubernetes Python client to spawn a second pod with "Docker-2".
You can share a volume between your pods in order to return the data to Docker-1. In my code sample I'm using a hostPath volume, but then you need to ensure that both pods run on the same node. I added that code for readability.
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: docker1
  labels:
    app: docker1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker1
  template:
    metadata:
      labels:
        app: docker1
    spec:
      containers:
      - name: docker1
        image: abhilash/docker1
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /shared
          name: shared-volume
      volumes:
      - name: shared-volume
        hostPath:
          path: /shared
The code of the run_docker_2 handler:
from kubernetes import client, config
...
args = "input_combo"

config.load_incluster_config()
v1 = client.CoreV1Api()  # API client used below for create_namespaced_pod

pod = client.V1Pod()
pod.metadata = client.V1ObjectMeta(name="docker2")

container = client.V1Container(name="docker2")
container.image = "abhilash/docker2"
container.args = [args]

volumeMount = client.V1VolumeMount(name="shared", mount_path="/shared")
container.volume_mounts = [volumeMount]

hostpath = client.V1HostPathVolumeSource(path="/shared")
volume = client.V1Volume(name="shared")
volume.host_path = hostpath

spec = client.V1PodSpec(containers=[container])
spec.volumes = [volume]
pod.spec = spec

v1.create_namespaced_pod(namespace="default", body=pod)
return "OK"
A handler to read the returned results:
@app.route('/read-results')
def run_read():
    with open("/shared/results.data") as file:
        return file.read()
Note that it could be useful to add a watcher to wait for the pod to finish the job and then do some cleanup (delete the pod, for instance).
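A minimal sketch of what such a watcher could look like with the same Python client (it assumes the pod name docker2 and the default namespace from the snippet above; adapt as needed):

from kubernetes import client, config, watch

config.load_incluster_config()
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default"):
    pod = event['object']
    if pod.metadata.name == "docker2" and pod.status.phase in ("Succeeded", "Failed"):
        # The job is done: read /shared/results.data, then clean up the pod.
        v1.delete_namespaced_pod(name="docker2", namespace="default")
        w.stop()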
From what I can understand, you'd want the so-called "sidecar pattern": you can run multiple containers in one pod and share a volume, e.g.:
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /srv/www
      name: www-data
      readOnly: true
  - name: git-monitor
    image: kubernetes/git-monitor
    env:
    - name: GIT_REPO
      value: http://github.com/some/repo.git
    volumeMounts:
    - mountPath: /data
      name: www-data
  volumes:
  - name: www-data
    emptyDir: {}
You could also benefit from getting to know the basics of how Kubernetes works: Kubernetes Basics