How to properly run Python using Docker/Kubernetes? - python

I have a simple Python program, but I can't get it running on Docker or Kubernetes.
I'm building, tagging and pushing my image and then running:
docker build -t test . --no-cache
docker tag test tcnl/test:0.2
docker push tcnl/test
docker run tcnl/test:0.2
#!/usr/bin/env python
# WS server example
print("Listening...")
import socket
print("Listening...")
import sys
print("Listening...")
s = socket.socket()
print("Listening...")
host = "0.0.0.0"
port = 8765
s.bind((host, port))
print("Listening...")
s.listen(5)
print("B4 while")
try:
    c, addr = s.accept()
    while True:
        print("INSIDE TRUE")
        print('Got connection from', addr)
        c.send(("Connection accepted!").encode('utf-8'))
        print(str(c.recv(4096).decode('utf-8')))
except:
    print("Oops!", sys.exc_info(), "occurred.")
    input()
s.close()
It's meant to be just a websocket server, waiting for a connection and then sending messages. But not even that is working...
Now my Dockerfile
FROM python:3.6
MAINTAINER tcnl
# Creating Application Source Code Directory
RUN mkdir -p /test/src
# Setting Home Directory for containers
WORKDIR /test/src
# Installing python dependencies
COPY ./requirements.txt /test/src/requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Copying src code to Container
COPY ./test-server.py /test/src/test.py
# Application Environment variables
ENV APP_ENV development
# Exposing Ports
EXPOSE 8765
# Setting Persistent data
VOLUME ["/test-data"]
# Running Python Application
CMD python /test/src/test.py
When I run it on Docker, the program doesn't even print. What am I doing wrong?

Just run docker run -it tcnl/test:0.2 — this binds the container's stdout to your terminal and you will see the output:
Docker
# docker run -it tcnl/test:0.2
Listening...
Listening...
Listening...
Listening...
Listening...
B4 while
Kubernetes
kubectl run -it test-server --restart=Never --image=tcnl/test:0.2
To get output:
kubectl logs test-server
Listening...
Listening...
Listening...
Listening...
Listening...
B4 while
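Seeing the output is only half the problem: the question's docker run command never publishes port 8765, so no client on the host can reach the server. The sketch below (an assumption, not part of the original answer) shows what a host-side client would do once the container is started with docker run -p 8765:8765 tcnl/test:0.2; a stand-in server thread takes the place of the container so the example is self-contained:

```python
import socket
import threading

ready = threading.Event()

def mini_server(port):
    # Stand-in for the containerized server above: accept one client
    # and send the same greeting that test.py sends.
    with socket.socket() as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("127.0.0.1", port))
        s.listen(1)
        ready.set()
        conn, _ = s.accept()
        with conn:
            conn.send(b"Connection accepted!")

def probe(host, port):
    # What a client on the host would do once the container port is
    # published, e.g. docker run -p 8765:8765 tcnl/test:0.2
    with socket.socket() as c:
        c.connect((host, port))
        return c.recv(4096).decode("utf-8")

threading.Thread(target=mini_server, args=(8765,), daemon=True).start()
ready.wait(timeout=5)
greeting = probe("127.0.0.1", 8765)
print(greeting)
```

For Kubernetes, the equivalent of -p is exposing the pod through a Service that targets port 8765.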

Related

microservices: client service and server service (FastAPI) running as Docker

I need to build a small program with a microservice architecture:
server service (Python FastAPI framework). I run it with the Dockerfile command:
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
client service: a simple Python CLI that reads a username from the terminal and connects to the server with GET/POST HTTP requests:
username = input("Please insert your username: ")
log.info(f"{username}")
I run it with the Dockerfile command:
CMD ["python", "./main.py"]
I am not sure how to run my client with Docker so that main.py runs without exiting.
When I run the client and the server with venv from two different terminals, everything works as expected and they connect (because both are on my machine).
With Docker, I get an error related to the username I try to input:
EOFError: EOF when reading a line
Even if I delete the input I still get an error (conn = connection.create_connection ... Failed to establish a new connection), as if the client fails to connect to my server when it is in an isolated container.
To enable input from the terminal you need to add the following to docker-compose:
tty: true         # docker run -t
stdin_open: true  # docker run -i
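In compose context, those two options go on the client service. A full sketch (service and folder names are assumptions, not from the question) also addresses the connection error: each container has its own localhost, so the client must reach the server by its compose service name, a standard feature of compose networking.

```yaml
services:
  server:
    build: ./server          # assumed layout
    ports:
      - "8000:8000"
  client:
    build: ./client          # assumed layout
    tty: true                # docker run -t
    stdin_open: true         # docker run -i
    depends_on:
      - server
```

Inside the client container, request http://server:8000 rather than http://localhost:8000.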

docker client and server containers connection refused

I want to connect a client image and a server image. I tried --link and docker-compose.yml, but failed. However, when I connected local client code to the server container, it succeeded. I think it may be a problem with the Dockerfile, but I can't fix it.
Here is my code:
---server
import socket
HOST = socket.gethostbyname('localhost')
PORT = 65456
print('> echo-server is activated')
#print(HOST,PORT)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as serverSocket:
    serverSocket.bind(('', PORT))
    serverSocket.listen()
    clientSocket, clientAddress = serverSocket.accept()
    with clientSocket:
        print('> client connected by IP address {0} with Port number {1}'.format(clientAddress[0], clientAddress[1]))
        while True:
            # [=start=]
            RecvData = clientSocket.recv(1024)
            print('> echoed:', RecvData.decode('utf-8'))
            clientSocket.sendall(RecvData)
            if RecvData.decode('utf-8') == 'quit':
                break
            # [==end==]
print('> echo-server is de-activated')
---client
import socket
HOST = socket.gethostbyname('localhost')
PORT = 65456
print('> echo-client is activated')
#print(HOST,PORT)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as clientSocket:
    #print(HOST,PORT)
    clientSocket.connect((HOST, PORT))
    while True:
        sendMsg = input("> ")
        clientSocket.sendall(bytes(sendMsg, 'utf-8'))
        recvData = clientSocket.recv(1024)
        print('> received:', recvData.decode('utf-8'))
        if sendMsg == "quit":
            break
print('> echo-client is de-activated')
---server Dockerfile
FROM python:latest
COPY . /me
RUN apt-get update
RUN mkdir -p /me
CMD ["python", "/me/server.py"]
EXPOSE 65456
---client Dockerfile
FROM python:latest
COPY . /you
RUN apt-get update
RUN mkdir -p /you
CMD ["python", "/you/client.py"]
EXPOSE 65456
This is echo program.
Your code works for me.
You don't include the commands that you tried; these would be a helpful addition to your question.
Build both containers. I'm using different Dockerfile names to disambiguate between the two.
Q="74282751"
docker build \
--tag=${Q}:server \
--file=./Dockerfile.server \
${PWD}
docker build \
--tag=${Q}:client \
--file=./Dockerfile.client \
${PWD}
Then in one shell, run the server container publishing the container's port 65456 on the host's port (65456):
docker run \
--interactive --tty --rm \
--publish=65456:65456 \
${Q}:server
And, in another shell, run the client container, binding the container to the host's network. This is an easy way to provide access to the host's 65456 port which now maps to the server container port 65456:
docker run \
--interactive --tty --rm \
--net=host \
${Q}:client
Feedback
When you FROM python, you're implicitly referencing Docker's container registry. Be explicit and FROM docker.io/python.
Try never to use :latest. In this case, you'd want to pin the actual latest version of the Python image (i.e. 3.9.15).
Conventionally, WORKDIR is used to define a working directory in a container. It creates and changes to the folder, which saves a RUN mkdir and avoids repeatedly referencing the destination folder.
Try to be explicit about what is COPY'd. For the server, it's only server.py rather than . (the entire directory).
EXPOSE is for documentation only; it has no runtime effect. It only applies to the server, since the client doesn't expose any ports.
Rather than hard-coding constants (HOST, PORT) in your code, it's good practice to read them from environment variables (i.e. HOST = os.getenv("HOST")). This makes your code more flexible. Generally (!) for code that's destined to be containerized, you'll be able to default HOST to localhost (or 127.0.0.1, sometimes 0.0.0.0).
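The environment-variable point can be sketched like this (the variable names and defaults are illustrative, not from the question's code):

```python
import os

# Read the endpoint from the environment, falling back to
# localhost defaults for plain local runs outside Docker.
HOST = os.getenv("HOST", "127.0.0.1")
PORT = int(os.getenv("PORT", "65456"))

print("echo endpoint:", HOST, PORT)
```

With compose, these would be set per service under environment:, so the same image works both locally and in a container network.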
FROM docker.io/python:3.9.15
RUN apt-get update
WORKDIR /me
COPY server.py .
EXPOSE 65456
CMD ["python", "/me/server.py"]
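A matching client Dockerfile following the same feedback might look like this (a sketch; EXPOSE is omitted since the client publishes no ports):

```dockerfile
FROM docker.io/python:3.9.15
RUN apt-get update
WORKDIR /you
COPY client.py .
CMD ["python", "/you/client.py"]
```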

Why cannot access a running Docker container in browser?

Here is my dockerfile:
FROM python:3.8
WORKDIR /locust
RUN pip3 install locust
COPY ./ /locust/
EXPOSE 8089
CMD ["locust", "-f", "locustfile.py"]
Here is the response:
Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
Starting Locust 1.2.3
But when I try to access it in the browser - it doesn't load. I feel like I might be missing something simple, but cannot find it.
This statement,
EXPOSE 8089
will only expose your port for inter-container communication, not to the host.
To allow the host to communicate on the container port, you need to bind the host port and the container port in the docker run command as follows:
docker run -p <HOST_PORT>:<CONTAINER_PORT> IMAGE_NAME
which in your case will be:
docker run -p 8089:8089 IMAGE_NAME
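Once the port is published, reachability from the host can be checked before blaming the browser. A small helper like the following (an illustrative sketch, not part of the original answer) attempts a TCP connection to the mapped port:

```python
import socket

def port_open(host, port, timeout=2.0):
    # Returns True if a TCP connection to host:port succeeds,
    # i.e. the published container port is reachable from here.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("127.0.0.1", 8089))
```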

Unable to connect multiple python socket.io inside docker

I built a docker-compose setup with a simple Python 3.6 container exposing port 5000. This container runs a Python server-side script waiting for clients to connect. Here are the files:
Dockerfile:
FROM python:3.6-alpine
WORKDIR /app
CMD ["python","serveur.py"]
Docker-compose:
version: '2'
services:
  serveur:
    build:
      context: .
      dockerfile: Serveur
    ports:
      - "127.0.0.1:5000:5000"
    volumes:
      - "./app:/app"
serveur.py:
#!/usr/bin/python3
import socket
import threading
print("debut du programme")
socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = "0.0.0.0"
port = 5000
socket.bind((host, port))
socket.listen(5)
for i in range(2):
    print("ready to connect")
    a, b = socket.accept()
    print("Client connected")
socket.close()
Here is the issue: if I run docker-compose, my client can't connect to the server; the code seems to block. Moreover, none of the prints show up in the Docker logs. If I take the socket.accept() out of the loop, one client can connect and I see all the prints in the logs. If I take the loop out of the code and just line up multiple socket.accept() calls, the code blocks.
I know the issue is with my Docker settings, because if I run this script outside of Docker, serveur.py works perfectly.
Thanks guys for your time.
It turns out that the docker logs are delayed until the Python program stops. So I never saw the prints because the program, well, never stops. The solution is to put this environment variable in the docker-compose file:
version: '2'
services:
  serveur:
    build:
      context: .
      dockerfile: Serveur
    environment:
      - "PYTHONUNBUFFERED=1"
    ports:
      - "127.0.0.1:5000:5000"
    volumes:
      - "./app:/app"
Now I can see the prints that confirm the connection.
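An alternative to PYTHONUNBUFFERED=1 is to run python -u, or to flush explicitly in the code; CPython buffers stdout when it is not attached to a TTY, which is exactly the situation inside a container. A minimal sketch:

```python
def announce(msg):
    # flush=True pushes the line to stdout immediately, so
    # `docker logs` shows it while the program is still running.
    print(msg, flush=True)

announce("ready to connect")
```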

Running a Self-Developed Python Server on Docker Container

I have this python server:
import SocketServer

class TCPHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        print 'client is connected'
        data = self.request.recv(1024)
        print data
        self.request.sendall('Message received!')

HOST, PORT = '0.0.0.0', 5004
server = SocketServer.TCPServer((HOST, PORT), TCPHandler)
print 'Listening on port {}'.format(PORT)
server.serve_forever()
and this client:
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('192.168.56.101', 5004))
s.sendall('Hello my name is John')
response = s.recv(1024)
print 'Response: {}'.format(response)
My OS is macOS and I have Ubuntu 14.04 installed on a virtual machine using VirtualBox. In VirtualBox, I setup a NAT network and I gave Ubuntu this IP address: 192.168.56.101. I put the server program on Ubuntu and added a rule in IPTables to allow incoming connections from port 5004. I started the server on Ubuntu and I tried to connect to the server using the client above on my macOS. The connection went through and the data exchange finished successfully.
Now to the problem. I installed Docker on my virtualized Ubuntu. Docker itself uses another version of Ubuntu 14.04. What I want is to run the server inside the Dockrized version of Ubuntu, so I wrote this Dockerfile:
FROM bamos/ubuntu-opencv-dlib-torch:ubuntu_14.04-opencv_2.4.11-dlib_19.0-torch_2016.07.12
ADD . /root
EXPOSE 5004
CMD ["python2", "/root/server.py"]
I built it using this command: sudo docker build -t boring91/mock and it was built successfully. I ran the Docker container using this command: sudo docker run -p 5004:5004 -t boring91/mock and it showed that it started listening on port 5004. When I tried to connect using my client on macOS, the socket connected but no data exchange happened. The same thing happens when I run the client on the virtualized Ubuntu. Can anyone tell me what's the issue here?
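The server above is Python 2 (SocketServer module, print statements). For reference, here is a Python 3 sketch of the same handler, wired to a one-shot in-process client on an ephemeral port so it is runnable as-is; in the container you would bind ('0.0.0.0', 5004) and call serve_forever() instead:

```python
import socket
import socketserver
import threading

class TCPHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Python 3 sockets speak bytes, hence decode/encode and b'' literals.
        data = self.request.recv(1024)
        print(data.decode('utf-8'))
        self.request.sendall(b'Message received!')

# Bind to an ephemeral loopback port to keep the example self-contained;
# the original uses ('0.0.0.0', 5004) and server.serve_forever().
server = socketserver.TCPServer(('127.0.0.1', 0), TCPHandler)
port = server.server_address[1]
threading.Thread(target=server.handle_request, daemon=True).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(('127.0.0.1', port))
    s.sendall(b'Hello my name is John')
    response = s.recv(1024)
print('Response: {}'.format(response.decode('utf-8')))
server.server_close()
```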

Categories

Resources