Port mapping in Docker - Python

I created a Docker image for a sample Python Pyramid app. My Dockerfile is this:
FROM ubuntu:16.04
RUN apt-get update -y && \
apt-get install -y python-pip python-dev curl && \
pip install --upgrade pip setuptools
WORKDIR /app
COPY . /app
EXPOSE 6543
RUN pip install -e .
ENTRYPOINT [ "pserve" ]
CMD [ "development.ini" ]
My build command is this:
docker build -t pyramid_app:latest .
My run command is this:
docker run -d -p 6543:6543 pyramid_app
When I try to access http://localhost:6543 I get an error:
Failed to load resource: net::ERR_SOCKET_NOT_CONNECTED
When I curl from inside the container, it works fine.
It would be great if someone could help me figure out why my port mapping isn't working.
Thanks.

In your pserve config, change
[server:main]
listen = 127.0.0.1:6543
to
[server:main]
listen = *:6543
Otherwise the web server will only accept connections originating from inside the Docker container itself.
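If you want to confirm from the host that the published port is actually reachable after that change, a quick standalone check could look like the sketch below (not part of the app; the host and port are just the values from the question):
import socket

def port_open(host, port, timeout=2.0):
    # Return True if a TCP connection to host:port succeeds.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Prints True once pserve listens on *:6543 and the container runs with -p 6543:6543.
print(port_open("localhost", 6543))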

Related

Can't connect to SQL server from Container

My Python container is throwing the error below when trying to connect to my SQL DB hosted on a server:
mariadb.OperationalError: Can't connect to server on 'xxxxxxxxxxx.jcloud-ver-jpc.ik-server.com' (115)
I am trying to run my container from my server as well. If I run the exact same container from my machine, I can connect to the SQL DB.
I am new to Docker, so just for info, here is my Dockerfile:
FROM python:3.10-slim-buster
WORKDIR /app
COPY alpha_gen alpha_gen
COPY poetry.lock .
COPY pyproject.toml .
# install basic utils
RUN apt-get update
RUN apt-get install curl -y
RUN apt-get install gcc -y
# install MariaDB connector
RUN apt install wget -y
RUN wget https://downloads.mariadb.com/MariaDB/mariadb_repo_setup
RUN chmod +x mariadb_repo_setup
RUN ./mariadb_repo_setup --mariadb-server-version="mariadb-10.6"
RUN apt install libmariadb3 libmariadb-dev -y
# install poetry
RUN curl -sSL https://install.python-poetry.org | python3 -
ENV PATH="${PATH}:/root/.local/bin"
RUN poetry config virtualenvs.in-project true
# install dependencies
RUN poetry install
CMD poetry run python alpha_gen/main.py --load_pre_process
Any ideas ?
Problem solved. Apparently there is a private port to use for connections made from the server itself and a public port for connections from outside the server. I was using the public one, so it was not working.
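For anyone hitting the same thing, a sketch of what the connection could look like with the port given explicitly (the port number and credentials below are placeholders; substitute your provider's private port):
import mariadb

conn = mariadb.connect(
    host="xxxxxxxxxxx.jcloud-ver-jpc.ik-server.com",  # same hostname as in the error
    port=3306,              # placeholder: use the provider's private port here
    user="my_user",         # placeholder credentials
    password="my_password",
    database="my_db",
)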

How can I deploy a Flask app behind nginx to AWS using Docker Compose?

I am trying to deploy a Flask app behind an nginx image by following this tutorial: https://www.docker.com/blog/docker-compose-from-local-to-amazon-ecs/
Here is my docker-compose.yml:
version: "3"
services:
app:
image: faust28100/moone-facerecognition:latest
build:
context: app
ports:
- "5000"
nginx:
image: faust28100/nginx-facereco:latest
volumes:
- mydata:/some/container/path
depends_on:
- app
ports:
- "80:80"
volumes:
mydata:
I am also using the nginx image, with some modifications to have my own config.
The Dockerfile of the faust28100/nginx-facereco:latest image:
FROM nginx:alpine
COPY ./nginx.conf /etc/nginx/nginx.conf
And the nginx.conf:
events {
    worker_connections 1000;
}
http {
    server {
        listen 80;
        location / {
            proxy_pass http://app:5000;
        }
    }
}
The following is the Dockerfile that builds the faust28100/moone-facerecognition:latest image:
FROM python:3.10.4-slim-buster
COPY . /app
WORKDIR /app
RUN python -m pip install --upgrade pip
RUN pip install gevent
RUN apt-get update && apt-get install -y \
curl
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update -y && \
apt-get install build-essential cmake pkg-config -y
RUN pip install dlib==19.23.1
RUN pip install -r requirements.txt
# install google chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
# install chromedriver
RUN apt-get install -yqq unzip
RUN wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip
RUN unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
# set display port to avoid crash
ENV DISPLAY=:99
# EXPOSE 5000
# # CMD ["gunicorn", "-b", "0.0.0.0:5000", "wsgi:app"]
CMD gunicorn --bind 0.0.0.0:5000 --timeout 600 wsgi:app --worker-class gevent
My problem is that locally I do "docker compose up" and the script works perfectly: when I go to localhost it redirects to the Python script, which renders a phrase. But when I connect to my AWS context (same as in the link at the beginning) and do "docker compose up", it says: 504 Gateway Time-out
nginx/1.21.6
At first I thought that the problem came from my nginx configuration, but after reading more online I think the problem is with the other image (faust28100/moone-facerecognition:latest), which seems not to be accessible from outside. I have tried a lot of tweaking but it still doesn't work.
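For anyone debugging a similar 504: a quick way to check whether the gunicorn service itself answers on the Compose network, independently of nginx, is a small probe like the sketch below (a debugging aid, not a fix; run it from a container attached to the same Compose network, for example via docker compose exec app python):
# Probe the app service over Compose's internal DNS name "app".
import urllib.request

try:
    with urllib.request.urlopen("http://app:5000/", timeout=5) as resp:
        print("app responded with HTTP", resp.status)
except Exception as exc:
    print("app not reachable:", exc)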

Flask app not running automatically from Dockerfile

My simple Flask app does not start automatically when I run it in Docker, even though I have added the CMD instruction correctly. I am able to run the app with python3 /app/app.py manually from the container shell, so there is no issue with the code or the command.
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y gcc libffi-dev libssl-dev
RUN apt-get install -y libxml2-dev xmlsec1
RUN apt-get install -y python3-pip python3-dev
RUN pip3 --no-cache-dir install --upgrade pip
RUN rm -rf /var/lib/apt/lists/*
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN pip3 install -r requirements.txt
EXPOSE 5000
CMD ["/usr/bin/python3", "/app/app.py"]
I run the Docker container as
docker run -it okta /bin/bash
When I log in to the container and run "ps -eaf" in its Ubuntu shell, I do not see the Flask process running. So my question is: why did the line below not work in the Dockerfile?
CMD ["/usr/bin/python3", "/app/app.py"]
Running your Docker container and passing the command /bin/bash overrides the CMD ["/usr/bin/python3", "/app/app.py"] in your Dockerfile.
See the Docker documentation for an explanation of CMD vs. ENTRYPOINT.
Try changing the last line of your Dockerfile to
ENTRYPOINT ["/usr/bin/python3", "/app/app.py"]
Don't forget to rebuild your image after changing.
Or... you can omit the /bin/bash from the end of your docker run command and see if your app.py starts up successfully.
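One more thing worth checking once the container does start the app: Flask's development server binds to 127.0.0.1 by default, so app.py must bind to 0.0.0.0 for the exposed port to be reachable from outside the container. A minimal sketch (your real app.py will differ):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from the container"

if __name__ == "__main__":
    # Bind to all interfaces so the published port works.
    app.run(host="0.0.0.0", port=5000)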

Python Docker error: [FreeTDS][SQL Server]Unable to connect to data source (0) (SQLDriverConnect) on SQL Server

I'm trying to make a simple MS SQL Server call from Python by using Docker. The SQL connection can be established if I run the Python script directly, but it won't work from Docker.
My code is below
Dockerfile
FROM python:3
WORKDIR /code
COPY requirements.txt .
RUN apt-get update \
&& apt-get install unixodbc -y \
&& apt-get install unixodbc-dev -y \
&& apt-get install freetds-dev -y \
&& apt-get install freetds-bin -y \
&& apt-get install tdsodbc -y \
&& apt-get install --reinstall build-essential -y
RUN echo "[FreeTDS]\n\
Description = FreeTDS Driver\n\
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so\n\
Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so" >> /etc/odbcinst.ini
#Pip command without proxy setting
RUN pip install -r requirements.txt
COPY src/ .
CMD ["python", "./producer.py"]
producer.py
import pyodbc
connP = pyodbc.connect('driver={FreeTDS};'
                       'server={MYSERV01\SQLEXPRESS};'
                       'database=ABCD;'
                       'uid=****;'
                       'pwd=*****')
requirements.txt
kafka-python
pyodbc==4.0.28
Error message:
[FreeTDS][SQL Server]Unable to connect to data source (0) (SQLDriverConnect)
I referred to this article and followed it. I searched online for resolutions and tried several steps, but nothing helped. I'm pretty new to Docker and have no experience in Python, so any help would be really good. Thanks in advance!
In your pyodbc.connect, try giving the server as '0.0.0.0' instead of any other value.
If you want to debug it from inside the container, comment out the last CMD line of your Dockerfile.
Build your Docker container
docker build -f Dockerfile -t achu-docker-container .
Run your Docker Container
docker run -it achu-docker-container /bin/bash
This will place you inside the container. This is like, ssh to a different machine.
Go to your WORKDIR
cd /code
python ./producer.py
What do you get from the above? (If you install an editor, e.g. with apt-get install vim, you will be able to interactively edit the producer.py file and fix your problem from inside the running container.)
Then you can move your changes to your source Dockerfile and build a new image and container with it.
I was trying to connect to the local SQL Server database. I referred to many articles and figured out what works:
The server value should be host.docker.internal,<port_no> -- this was the catch. With a dedicated database, where SQL Server runs on a different machine than the Docker image, the server name can be provided directly; but when the image and the database are on the same machine, host.docker.internal with the port is what works. Check the port number in the TCP/IP properties (IP Addresses tab, IPAll section) of SQL Server Configuration Manager.
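A sketch of what that connection string can look like, mirroring the description above (1433 is a placeholder for the port taken from the TCP/IP settings; credentials are masked as in the question):
import pyodbc

connP = pyodbc.connect('driver={FreeTDS};'
                       'server=host.docker.internal,1433;'  # placeholder port
                       'database=ABCD;'
                       'uid=****;'
                       'pwd=*****')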

Unable to invoke Flask server on Docker

Dockerfile
FROM python:3.6.8
COPY . /app
WORKDIR /app
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update -y
RUN apt install libgl1-mesa-glx -y
RUN apt-get install 'ffmpeg'\
'libsm6'\
'libxext6' -y
RUN pip3 install --upgrade pip
RUN pip3 install opencv-python==4.3.0.38
RUN pip3 install -r requirements.txt
EXPOSE 80
CMD ["python3", "server.py"]
requirements.txt
Flask==0.12
Werkzeug==0.16.1
boto3==1.14.40
torch
torchvision==0.7.0
numpy==1.15.4
sklearn==0.0
scipy==1.2.1
scikit-image==0.14.2
pandas==0.24.2
server.py (Flask Server)
from flask import Flask

app = Flask(__name__)

@app.route('/invoke', methods=['POST'])
def handlePostRequest():
    # insert some log statement here
    return ''

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=80)
Commands which I run
docker build -t test .
docker run -p 5000:5000 test
Then I invoked the Docker container with a POST request in Postman to 0.0.0.0/invoke, but I am getting Error: connect ECONNREFUSED 0.0.0.0:80
Please let me know what I did wrong here.
It looks like you are binding to all interfaces with app.run(host="0.0.0.0", port=80)
And it looks like you are mapping host port 5000 to container port 5000
-p 5000:5000
If your process is listening on port 80, you should change the mapping to be
-p 5000:80
And then get the IP address of the host where your container is running, and you should be able to:
curl <ip>:5000
And that will send traffic to your process in your container listening on port 80
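Once the container is restarted with -p 5000:80, a quick way to exercise the endpoint from the host is a sketch like this (assumes the requests package is installed on the host; replace localhost with the host's IP if calling from another machine):
import requests

resp = requests.post("http://localhost:5000/invoke", json={})
print(resp.status_code, resp.text)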
