I have the following code, a Flask app that runs on a Raspberry Pi inside a Docker container with NGINX and uWSGI.
I need to be able to connect to the local SSH server and also to get the correct MAC address of the Raspberry Pi.
import socket
import time
import pwd
import os
from ssh2.session import Session
from flask import Flask
from flask import request
from getmac import get_mac_address as gma  # pip install getmac; provides gma() below

app = Flask(__name__)

usernameSSH = pwd.getpwuid(os.getuid()).pw_name
passwordSSH = "password"
host = 'localhost'  # the specified host for the SSH

@app.route("/")
def hello():
    return gma()  # get the MAC address of the device

@app.route("/ssh")  # this is the connection to SSH
def welcome():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, 22))
    session = Session()
    session.handshake(sock)
    session.userauth_password(usernameSSH, passwordSSH)
    channel = session.open_session()
    channel.shell()
    channel.write("ifconfig\n")
    time.sleep(2)
    size, data = channel.read()
    channel.close()
    return data.decode("utf-8")

if __name__ == "__main__":
    app.run(threaded=True)
So, if I run my Docker image like this:
docker run -p 3134:80 dockapp
I'm able to connect with the external IP and port, but the MAC address that I get at external-IP:port/ is wrong: it's the MAC address of the container. I also get an auth error when trying to connect to SSH.
If I run my image like this:
sudo docker run -p 3134:80 --network=host dockapp
I can reach the API only by typing the local address of the Raspberry Pi, and I do get the correct MAC address, but I'm still not able to connect to the local SSH and I get an auth error. The program runs fine without Docker. Any solutions?
Also, adding host="0.0.0.0" to the Flask App hasn't helped at all
DockerFile:
FROM cseelye/rpi-nginx-uwsgi-flask
MAINTAINER Carl Seelye <cseelye@gmail.com>
RUN apt-get update && apt-get -y install cmake libssl-dev
COPY exampleapp /app
RUN pip install -U -r /app/requirements.txt
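For what it's worth, on Linux the kernel exposes each interface's MAC address under /sys/class/net/<iface>/address, so it can be read directly without shelling out to ifconfig. This is only a sketch: the interface name eth0 is an assumption (adjust it to the Pi's actual interface), and inside a container it still reflects the container's network namespace unless --network=host is used.

```python
def get_mac(iface="eth0"):
    """Read the MAC address of `iface` from sysfs (Linux only)."""
    # Inside a container this sees the container's own interfaces
    # unless the container runs with --network=host.
    with open("/sys/class/net/{}/address".format(iface)) as f:
        return f.read().strip()
```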
Related
I want to connect a client image and a server image. I tried to use --link and docker-compose.yml, but I failed. However, when I tried to connect local client code to the server container, it succeeded. I think it may be a problem with my Dockerfiles, but I can't fix it.
Here is my code:
---server
import socket

HOST = socket.gethostbyname('localhost')
PORT = 65456

print('> echo-server is activated')
#print(HOST, PORT)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as serverSocket:
    serverSocket.bind(('', PORT))
    serverSocket.listen()
    clientSocket, clientAddress = serverSocket.accept()
    with clientSocket:
        print('> client connected by IP address {0} with Port number {1}'.format(clientAddress[0], clientAddress[1]))
        while True:
            # [=start=]
            RecvData = clientSocket.recv(1024)
            print('> echoed:', RecvData.decode('utf-8'))
            clientSocket.sendall(RecvData)
            if RecvData.decode('utf-8') == 'quit':
                break
            # [==end==]
print('> echo-server is de-activated')
---client
import socket

HOST = socket.gethostbyname('localhost')
PORT = 65456

print('> echo-client is activated')
#print(HOST, PORT)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as clientSocket:
    #print(HOST, PORT)
    clientSocket.connect((HOST, PORT))
    while True:
        sendMsg = input("> ")
        clientSocket.sendall(bytes(sendMsg, 'utf-8'))
        recvData = clientSocket.recv(1024)
        print('> received:', recvData.decode('utf-8'))
        if sendMsg == "quit":
            break
print('> echo-client is de-activated')
---server Dockerfile
FROM python:latest
COPY . /me
RUN apt-get update
RUN mkdir -p /me
CMD ["python", "/me/server.py"]
EXPOSE 65456
---client Dockerfile
FROM python:latest
COPY . /you
RUN apt-get update
RUN mkdir -p /you
CMD ["python", "/you/client.py"]
EXPOSE 65456
This is an echo program.
Your code works for me.
You don't include the commands that you tried; these would be helpful additions to your question.
Build both containers. I'm using different Dockerfile names to disambiguate between the two:
Q="74282751"
docker build \
--tag=${Q}:server \
--file=./Dockerfile.server \
${PWD}
docker build \
--tag=${Q}:client \
--file=./Dockerfile.client \
${PWD}
Then in one shell, run the server container publishing the container's port 65456 on the host's port (65456):
docker run \
--interactive --tty --rm \
--publish=65456:65456 \
${Q}:server
And, in another shell, run the client container, binding the container to the host's network. This is an easy way to provide access to the host's 65456 port which now maps to the server container port 65456:
docker run \
--interactive --tty --rm \
--net=host \
${Q}:client
Feedback
When you FROM python, you're implicitly referencing Docker's container registry (Docker Hub). Always be explicit and FROM docker.io/python.
Try to never use :latest. In this case, you'd want to consider pinning the actual latest version of the Python image (i.e. 3.9.15).
Conventionally, WORKDIR is used to define a working directory in a container. It creates the folder and changes into it, saving you the RUN mkdir and the repeated references to the destination folder.
Try to be explicit about what is being COPY'd. For the server, that's only server.py rather than . (the entire directory).
EXPOSE is for documentation only; it has no effect. It applies only to the server, since the client doesn't expose any ports.
Rather than hard-coding constants (HOST, PORT) in your code, it's good practice to read them from environment variables (i.e. HOST = os.getenv("HOST")). This makes your code more flexible. Generally (!), for code that's destined to be containerized, you'll be able to default HOST to localhost (or 127.0.0.1, sometimes 0.0.0.0).
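A minimal sketch of that last point (the variable names and defaults here are illustrative, matching the echo example above):

```python
import os

# Fall back to sensible defaults when the variables are not set;
# override at run time with e.g. `docker run -e PORT=65456 ...`.
HOST = os.getenv("HOST", "localhost")
PORT = int(os.getenv("PORT", "65456"))
```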
FROM docker.io/python:3.9.15
RUN apt-get update
WORKDIR /me
COPY server.py .
EXPOSE 65456
CMD ["python", "/me/server.py"]
I have Docker running on my Windows 10 OS and I have a minimal Flask app:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5001, debug=True)
And I am dockerizing it using the following file
FROM python:alpine3.7
COPY . /opt
WORKDIR /opt
RUN pip install -r requirements.txt
EXPOSE 5001
ENTRYPOINT [ "python" ]
CMD ["app.py", "run", "--host", "0.0.0.0"]
From what I'm seeing in other posts and Flask tutorials, using 0.0.0.0 should let me connect from the Windows Firefox browser when I type 0.0.0.0:5001, but it won't connect; I keep getting an 'unable to connect' message. I remember using 0.0.0.0:port to connect on localhost on a Linux Ubuntu machine, but for whatever reason it won't let me connect on Windows. Is there a special setting to connect on Windows?
Inside the Docker container, the private port is 5001. This private port then needs to be mapped to a public port when running the container. For example, to set the public port to 8000, you could run:
$ docker run --publish 8000:5001 --name <docker-container> <docker-container>:<version-tag>
The Flask app would then be accessible at URL: http://127.0.0.1:8000
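A quick way to smoke-test the mapping from the host is a small helper like this (a sketch; urllib is used here just to avoid extra dependencies):

```python
import urllib.request

def http_get(url, timeout=5):
    """Fetch `url` and return (status code, decoded body)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status, resp.read().decode()

# e.g. http_get("http://127.0.0.1:8000/") once the container is running
```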
Dockerfile
In addition, since app.py already sets the host and port, there is no need to repeat them in the Dockerfile CMD. Note that EXPOSE documents the port the container itself listens on (5001 here); the public port (8000 in this example) is chosen at run time with --publish.
It also looks like the COPY command is placing everything under an /opt directory, so that needs to be included in the app path when launching the Flask app within Docker.
FROM python:alpine3.7
COPY . /opt
WORKDIR /opt
RUN pip install -r requirements.txt
EXPOSE 5001
CMD ["python", "/opt/app.py"]
Docker-Flask Example
For a complete Flask-Docker example, including using Gunicorn, see:
Docker Python Flask Example using the Gunicorn WSGI HTTP Server
I have an image which is a Flask app.
I am trying to run it like so:
import docker
import requests

client = docker.from_env()

if __name__ == "__main__":
    client.containers.run(IMAGE_NAME, detach=True, ports={"1000/tcp": "1000"})
    res = requests.get("http://localhost:1000/health")
    print(res.status_code)
The Flask app:
@app.route("/health")
def health_check():
    return "healthy"

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True, port=int(os.environ.get("PORT", 5000)))
However I get an error:
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
Presumably the connection dies out after run ends?
How do I make it so that the app keeps running in the background?
My dockerfile:
FROM python:3
# set a directory for the app
WORKDIR /usr/src/app
# copy all the files to the container
COPY . .
# install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# tell the port number the container should expose
EXPOSE 5000
# run the command
CMD ["python", "app.py"]
Running the container using the following should solve the issue:
client.containers.run(IMAGE_NAME, detach=True, ports={"5000/tcp": "5000"})
This binds port 5000 of the container to port 5000 of the host machine.
Ports 0 to 1023 are reserved for well-known services, so it's always advised to use ports greater than 1024 on the host machine. Using a port below 1024 requires root access, which is not advisable.
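As an aside, binding to port 0 is an easy way to let the OS pick a free, unprivileged port (a generic sketch, not specific to Docker):

```python
import socket

# Ask the OS for any free TCP port; the kernel picks one from the
# ephemeral range, which is always above 1024.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind(("127.0.0.1", 0))
    port = s.getsockname()[1]

print(port)
```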
Adding a few sleeps, in conjunction with @Saiprasad's answer, solved this issue:
if __name__ == "__main__":
    container = client.containers.run(IMAGE_NAME, detach=True, ports={"5000/tcp": "5000"})
    import time
    time.sleep(1)
    res = requests.get("http://localhost:5000/health")
    print(res.status_code)
    print(res.text)
    container.kill()
    time.sleep(1)
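Fixed sleeps are fragile; a tiny polling helper (a sketch, with names and timings that are only illustrative) waits exactly as long as the container actually needs to come up:

```python
import time

def wait_until(check, timeout=10.0, interval=0.2):
    """Poll `check()` until it returns truthy or `timeout` seconds pass."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# e.g. wait_until(lambda: requests.get("http://localhost:5000/health").ok)
# before asserting on the response
```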
I am trying to run a Flask app through Docker.
My dockerfile:
FROM python:3
# set a directory for the app
WORKDIR /usr/src/app
# copy all the files to the container
COPY . .
# install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# tell the port number the container should expose
EXPOSE 3000
# run the command
ENTRYPOINT ["python", "app.py"]
My app.py
import os
from flask import Flask, request, jsonify
from submission import SubmissionResult

app = Flask(__name__)

@app.route("/health")
def health_check():
    return "healthy"

if __name__ == "__main__":
    app.run(debug=True)
I ran docker run <my_image>.
When I hit "/health" in Postman, I get an error saying:
Could not get any response
There was an error connecting to .
Why this might have happened:
The server couldn't send a response:
Ensure that the backend is working properly
Self-signed SSL certificates are being blocked:
Fix this by turning off 'SSL certificate verification' in Settings > General
Proxy configured incorrectly
Ensure that proxy is configured correctly in Settings > Proxy
Request timeout:
Change request timeout in Settings > General
It's weird because my Flask app doesn't show any debug logs, so presumably the request isn't hitting the server. Am I not exposing the port correctly?
How do I fix this?
You are not specifying the port in the Python code, so it falls back to Flask's default, port 5000. Use app.run(debug=True, port=3000) if you want to listen on port 3000. You also have to publish the port on the host by passing -p 3000:3000 before <my_image>; it tells Docker to map port 3000 of the container to port 3000 of the host.
I have this python server:
import SocketServer

class TCPHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        print 'client is connected'
        data = self.request.recv(1024)
        print data
        self.request.sendall('Message received!')

HOST, PORT = '0.0.0.0', 5004
server = SocketServer.TCPServer((HOST, PORT), TCPHandler)
print 'Listening on port {}'.format(PORT)
server.serve_forever()
and this client:
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('192.168.56.101', 5004))
s.sendall('Hello my name is John')
response = s.recv(1024)
print 'Response: {}'.format(response)
My OS is macOS, and I have Ubuntu 14.04 installed in a virtual machine using VirtualBox. In VirtualBox I set up a NAT network and gave Ubuntu the IP address 192.168.56.101. I put the server program on Ubuntu and added a rule to iptables to allow incoming connections on port 5004. I started the server on Ubuntu and tried to connect to it using the client above from macOS. The connection went through and the data exchange finished successfully.
Now to the problem. I installed Docker on my virtualized Ubuntu. The Docker image itself uses another copy of Ubuntu 14.04. What I want is to run the server inside the Dockerized version of Ubuntu, so I wrote this Dockerfile:
FROM bamos/ubuntu-opencv-dlib-torch:ubuntu_14.04-opencv_2.4.11-dlib_19.0-torch_2016.07.12
ADD . /root
EXPOSE 5004
CMD ["python2", "/root/server.py"]
I built it using this command: sudo docker build -t boring91/mock and it was built successfully. I ran the Docker container using this command: sudo docker run -p 5004:5004 -t boring91/mock and it showed that it started listening on port 5004. When I try to connect to it using my client on macOS, the socket connects but no data exchange happens. The same thing happens when I run the client on the virtualized Ubuntu. Can anyone tell me what the issue is here?