I want to connect a client image and a server image. I tried using --link and docker-compose.yml, but failed. However, when I connected local client code to the server container, it succeeded. I think it may be a problem with my Dockerfiles, but I can't fix it.
Here is my code:
---server
import socket
HOST = socket.gethostbyname('localhost')
PORT = 65456
print('> echo-server is activated')
#print(HOST,PORT)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as serverSocket:
    serverSocket.bind(('', PORT))
    serverSocket.listen()
    clientSocket, clientAddress = serverSocket.accept()
    with clientSocket:
        print('> client connected by IP address {0} with Port number {1}'.format(clientAddress[0], clientAddress[1]))
        while True:
            # [=start=]
            RecvData = clientSocket.recv(1024)
            print('> echoed:', RecvData.decode('utf-8'))
            clientSocket.sendall(RecvData)
            if RecvData.decode('utf-8') == 'quit':
                break
            # [==end==]
print('> echo-server is de-activated')
---client
import socket
HOST = socket.gethostbyname('localhost')
PORT = 65456
print('> echo-client is activated')
#print(HOST,PORT)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as clientSocket:
    #print(HOST,PORT)
    clientSocket.connect((HOST, PORT))
    while True:
        sendMsg = input("> ")
        clientSocket.sendall(bytes(sendMsg, 'utf-8'))
        recvData = clientSocket.recv(1024)
        print('> received:', recvData.decode('utf-8'))
        if sendMsg == "quit":
            break
print('> echo-client is de-activated')
---server Dockerfile
FROM python:latest
COPY . /me
RUN apt-get update
RUN mkdir -p /me
CMD ["python", "/me/server.py"]
EXPOSE 65456
---client Dockerfile
FROM python:latest
COPY . /you
RUN apt-get update
RUN mkdir -p /you
CMD ["python", "/you/client.py"]
EXPOSE 65456
This is an echo program.
Your code works for me.
You don't include the commands that you tried; these would be helpful additions to your question.
Build both images. I'm using different Dockerfile names to disambiguate between the two.
Q="74282751"
docker build \
--tag=${Q}:server \
--file=./Dockerfile.server \
${PWD}
docker build \
--tag=${Q}:client \
--file=./Dockerfile.client \
${PWD}
Then in one shell, run the server container, publishing the container's port 65456 on the host's port 65456:
docker run \
--interactive --tty --rm \
--publish=65456:65456 \
${Q}:server
And, in another shell, run the client container, binding the container to the host's network. This is an easy way to provide access to the host's port 65456, which now maps to the server container's port 65456:
docker run \
--interactive --tty --rm \
--net=host \
${Q}:client
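Since you mentioned docker-compose, here's a minimal sketch of a docker-compose.yml that puts both containers on a shared network where the service name resolves as a hostname. The service names and the HOST environment variable are illustrative, and it assumes the client reads its target host from the environment rather than hard-coding localhost:

```yaml
# a minimal sketch; service names and the HOST variable are illustrative
services:
  server:
    build:
      context: .
      dockerfile: Dockerfile.server
  client:
    build:
      context: .
      dockerfile: Dockerfile.client
    depends_on:
      - server
    environment:
      HOST: server   # on the compose network, the service name resolves as a hostname
    stdin_open: true
    tty: true
```

This is also why your --link attempts failed: both programs resolve 'localhost', which inside a container refers to the container itself, not its peer.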
Feedback
When you FROM python, you're implicitly referencing Docker's container registry. Always be explicit and use FROM docker.io/python.
Try to never use :latest. In this case, you'd want to pin the actual latest version of the Python image (i.e. 3.9.15).
Conventionally, WORKDIR is used to define a working directory in a container. It creates and changes to the folder, saving a RUN mkdir and repeated references to the destination folder.
Try to be explicit about what is COPY'd. For the server, that's only server.py rather than the entire directory (.).
EXPOSE is for documentation only; it has no effect. It only applies to the server, since the client doesn't expose any ports.
Rather than hard-coding constants (HOST, PORT) in your code, it's good practice to read them from environment variables (i.e. HOST = os.getenv("HOST")). This makes your code more flexible. Generally (!) for code that's destined to be containerized, you'll be able to default HOST to localhost (or 127.0.0.1, sometimes 0.0.0.0).
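For example, a minimal sketch of reading both values from the environment with fallback defaults:

```python
import os

# fall back to localhost / 65456 when the variables aren't set;
# in a compose file you'd set HOST to the server's service name
HOST = os.getenv("HOST", "127.0.0.1")
PORT = int(os.getenv("PORT", "65456"))

print(HOST, PORT)
```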
FROM docker.io/python:3.9.15
RUN apt-get update
WORKDIR /me
COPY server.py .
EXPOSE 65456
CMD ["python", "/me/server.py"]
Related
Here is my dockerfile:
FROM python:3.8
WORKDIR /locust
RUN pip3 install locust
COPY ./ /locust/
EXPOSE 8089
CMD ["locust", "-f", "locustfile.py"]
Here is the response:
Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
Starting Locust 1.2.3
But when I try to access it in the browser - it doesn't load. I feel like I might be missing something simple, but cannot find it.
This statement,
EXPOSE 8089
will only expose your port for inter-container communication, not to the host.
To allow the host to communicate with the container port, you need to bind the host and container ports in the docker run command as follows:
docker run -p <HOST_PORT>:<CONTAINER_PORT> IMAGE_NAME
which in your case will be
docker run -p 8089:8089 IMAGE_NAME
So I have the following code, a Flask app that runs on a Raspberry Pi inside a Docker container with NginX and uWSGI.
I need to be able to connect to the local SSH and also to get the correct MAC address of the Raspberry Pi.
import socket
import time
import pwd
import os
from ssh2.session import Session
from flask import Flask
from flask import request
from getmac import get_mac_address as gma  # gma comes from the getmac package

app = Flask(__name__)

usernameSSH = pwd.getpwuid(os.getuid()).pw_name
passwordSSH = "password"
host = 'localhost'  # the specified host for the SSH

@app.route("/")
def hello():
    return gma()  # get mac address of the device

@app.route("/ssh")  # This is the connection to ssh
def welcome():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, 22))
    session = Session()
    session.handshake(sock)
    session.userauth_password(usernameSSH, passwordSSH)
    channel = session.open_session()
    channel.shell()
    channel.write("ifconfig\n")
    time.sleep(2)
    size, data = channel.read()
    channel.close()
    return data.decode("utf-8")

if __name__ == "__main__":
    app.run(threaded=True)
So, if I run my Docker image like this:
docker run -p 3134:80 dockapp
I'm able to connect with the external IP and port. But the MAC address that I get on external IP:port is wrong: it's the MAC address of the container. Also, I get an auth error when trying to connect to SSH.
If I run my image like this:
sudo docker run -p 3134:80 --network=host dockapp
I'm able to reach the API only by typing the local address of the Raspberry Pi, and I do get the correct MAC address, but I'm still not able to connect to the local SSH and I get an auth error. The program runs fine without Docker. Any solutions?
Also, adding host="0.0.0.0" to the Flask app hasn't helped at all.
Dockerfile:
FROM cseelye/rpi-nginx-uwsgi-flask
MAINTAINER Carl Seelye <cseelye@gmail.com>
RUN apt-get update && apt-get -y install cmake libssl-dev
COPY exampleapp /app
RUN pip install -U -r /app/requirements.txt
I have a simple Python program, but I can't get it running on Docker or Kubernetes.
I'm building, tagging and pushing my image and then running:
docker build -t test . --no-cache
docker tag test tcnl/test:0.2
docker push tcnl/test
docker run tcnl/test:0.2
#!/usr/bin/env python
# WS server example
print("Listening...")
import socket
print("Listening...")
import sys
print("Listening...")
s = socket.socket()
print("Listening...")
host = "0.0.0.0"
port = 8765
s.bind((host, port))
print("Listening...")
s.listen(5)
print("B4 while")
try:
    c, addr = s.accept()
    while True:
        print("INSIDE TRUE")
        print('Got connection from', addr)
        c.send(("Connection accepted!").encode('utf-8'))
        print(str(c.recv(4096).decode('utf-8')))
except:
    print("Oops!", sys.exc_info(), "occured.")
    input()
s.close()
It's meant to be just a websocket server, waiting for a connection and then sending messages. But not even that is working...
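To sanity-check the accept/send logic outside Docker first, here's a self-contained sketch that runs the same handshake against a throwaway server thread. The server stand-in here is a simplified version of the code above (one accept, one echo), and it binds to an ephemeral port rather than the fixed 8765:

```python
import socket
import threading

# throwaway stand-in for the server above: accept one client,
# greet it, then echo one message back
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def handle():
    c, addr = srv.accept()
    with c:
        c.send("Connection accepted!".encode("utf-8"))
        c.send(c.recv(4096))  # echo back whatever the client sent

threading.Thread(target=handle, daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    greeting = client.recv(4096).decode("utf-8")
    client.sendall(b"hello")
    echoed = client.recv(4096).decode("utf-8")

srv.close()
print(greeting)
print(echoed)
```

If this works locally but not in the container, the problem is the run command (attachment, port publishing), not the socket code.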
Now my Dockerfile
FROM python:3.6
MAINTAINER tcnl
# Creating Application Source Code Directory
RUN mkdir -p /test/src
# Setting Home Directory for containers
WORKDIR /test/src
# Installing python dependencies
COPY ./requirements.txt /test/src/requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Copying src code to Container
COPY ./test-server.py /test/src/test.py
# Application Environment variables
ENV APP_ENV development
# Exposing Ports
EXPOSE 8765
# Setting Persistent data
VOLUME ["/test-data"]
# Running Python Application
CMD python /test/src/test.py
When I run it on Docker the program doesn't even print. What am I doing wrong?
Just run docker run -it tcnl/test:0.2; it binds the container's stdout to your terminal and you will see the output:
Docker
# docker run -it tcnl/test:0.2
Listening...
Listening...
Listening...
Listening...
Listening...
B4 while
Kubernetes
kubectl run -it test-server --restart=Never --image=tcnl/test:0.2
To get output:
kubectl logs test-server
Listening...
Listening...
Listening...
Listening...
Listening...
B4 while
I'm running a Python container and I want to connect to PostgreSQL on localhost. I've tried several methods, but none of them worked. How can I do this? Thanks.
I have PostgreSQL running on port 5432, and have created a database and granted a user.
I run this docker command:
docker run --name=python3 -v ${pwd}:/code -w /code python
Python code:
import psycopg2

def main():
    # Define our connection string
    conn_string = "host='localhost' dbname='testDB' user='test' password='test'"
    # print the connection string we will use to connect
    print("Connecting to database\n ->{}".format(conn_string))
    # get a connection; if a connection cannot be made an exception will be raised here
    conn = psycopg2.connect(conn_string)
    # conn.cursor will return a cursor object, you can use this cursor to perform queries
    cursor = conn.cursor()
    print("Connected!\n")

if __name__ == "__main__":
    main()
Error message:
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
It depends on your host OS and your docker version.
I assume here your database is not running in a container itself, but rather on the host itself (localhost).
For instance, as mentioned in "From inside of a Docker container, how do I connect to the localhost of the machine?", with Docker for Mac v 17.06 and above (June 2017), you can connect to the special Mac-only DNS name docker.for.mac.localhost which will resolve to the internal IP address used by the host.
On Linux directly, you would use the host mode, with an image having ifconfig installed:
docker run --network="host" -id <Docker image ID>
Reading that you're on Windows 10 and running PostgreSQL on the host, I advise you to run PostgreSQL in a container. It makes things way easier.
To connect the python container to the postgres container you'll need a docker network though. Let's call it postgres_backend.
docker network create postgres_backend
You can create the postgresql container with the following command. Just change the /path/to/store/data to a local directory in which you'd like to store the postgres data:
docker run --name postgres \
-e POSTGRES_PASSWORD=test \
-e POSTGRES_USER=test \
-d --restart always \
-v /path/to/store/data:/var/lib/postgresql/data \
--net postgres_backend \
postgres:9.6
Now your postgresql container should be up and running :)
To connect your Python container to it, you'll have to add --net postgres_backend to your docker run command and change the host in your script to "postgres" (the name we gave the postgres container with --name).
If the Python container can't find the host "postgres", try the IP shown by the command docker exec -ti postgres ip addr show.
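Following that advice, the connection string in the script above can pick the host up from the environment instead of hard-coding localhost. A sketch, where POSTGRES_HOST is an illustrative variable name and the "postgres" default matches the container name used above:

```python
import os

# "postgres" is the --name of the database container on the shared network;
# override with POSTGRES_HOST when running outside that network
host = os.getenv("POSTGRES_HOST", "postgres")
conn_string = "host='{}' dbname='testDB' user='test' password='test'".format(host)
print(conn_string)
```

The same script then works unchanged on the host (POSTGRES_HOST=localhost) and inside the compose/network setup.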
To fix this bug:
First, it is normal that it does not work, because PostgreSQL does not run in the same container as the application, so the host localhost:5432 does not exist.
To fix it:
In the properties file, instead of localhost:5432, use your IP address, like IP:5432, and in pg_hba.conf add this:
host all all 0.0.0.0/0 md5
I currently have supervisor serving my Django Application which I then expose on port 8002 in my Docker file. This all works ok...
[program:app]
command=gunicorn app.core.wsgi:application -c /var/projects/app/server/gunicorn.conf
user=webapp
backlog = 2048
chdir = "/var/projects/apps"
bind = "0.0.0.0:8002"
pidfile = "/var/run/webapp/gunicorn.pid"
daemon = False
debug = False
In Docker
# Expose listen ports
EXPOSE 8002
However, I have been told it is better to use a socket over a port, but I'm unsure how to "EXPOSE" a socket in my Dockerfile. This is how far I have got:
New supervisor config....
backlog = 2048
chdir = "/var/projects/apps"
bind = "unix:/var/run/webapp/gunicorn.sock"
pidfile = "/var/run/webapp/gunicorn.pid"
daemon = False
debug = False
Docker
# Expose listen ports
EXPOSE ???? (may be unix:/var/run/webapp/gunicorn.sock fail_timeout=0;???)
How do I expose the socket?
EXPOSE only works with UDP and TCP sockets.
If you want to make a Unix domain socket available outside of your container, you will need to mount a host directory inside the container and then place the socket there. For example, if you were to:
docker run -v /srv/webapp:/var/run/webapp ...
Then /var/run/webapp/gunicorn.sock in your container would be /srv/webapp/gunicorn.sock on your host.
Of course, this assumes that you have something running on your host, or in another container that also has access to /srv/webapp, that is able to consume that socket and use it to provide a service.
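To illustrate, here's a minimal self-contained sketch of one process serving a Unix domain socket at a file path and another consuming it. The path is hypothetical; with the bind mount above it would be /srv/webapp/gunicorn.sock on the host, and the echo behaviour stands in for gunicorn's HTTP handling:

```python
import os
import socket
import tempfile
import threading

# hypothetical path; with the bind mount above this would be
# /srv/webapp/gunicorn.sock on the host
sock_path = os.path.join(tempfile.mkdtemp(), "gunicorn.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)   # bind() creates the socket file on disk
server.listen(1)

def handle():
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo one message and close

threading.Thread(target=handle, daemon=True).start()

# any process that can see sock_path can consume the service,
# which is what the volume mount makes possible across containers
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(sock_path)
client.sendall(b"ping")
reply = client.recv(1024).decode("utf-8")
client.close()
server.close()
print(reply)
```

This is exactly the pattern nginx uses when configured with a `proxy_pass unix:...` upstream pointing at the shared socket file.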