gunicorn: command not found with Docker and Dash Python

I've recently developed a Dash (Python) dashboard web app using Docker and I want to deploy it (it works perfectly in development).
Since the Flask development server is not meant for production, I decided to use gunicorn instead.
I've added gunicorn in the requirements.txt.
I've replaced python app.py with gunicorn app:server in the startup script, and I've rebuilt the images with docker-compose so the new requirement gets installed.
But I get the error gunicorn: command not found.
It seems that there is an issue with the PATH for gunicorn, but I don't know how to solve it.
Here is the Dockerfile of the container named container_api:
FROM archlinux:latest
COPY api/requirements.txt ./
RUN pacman-db-upgrade \
&& pacman -Syyu --noconfirm \
&& pacman -S python --noconfirm \
&& pacman -S python-pip --noconfirm \
&& pip install --no-cache-dir -r requirements.txt
WORKDIR /app
CMD chmod a+x entrypoint.sh && ./entrypoint.sh
Here is the entrypoint.sh:
#!/bin/bash
gunicorn app:server
Note that there is a volume named app shared between the host and the container, so entrypoint.sh is accessible inside the container.
The log of the container is displaying:
container_api | ./entrypoint.sh: line 3: gunicorn: command not found
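To narrow down whether gunicorn is missing entirely or just not on the PATH, one quick check is to run a throwaway container (a rough sketch; api is the service name from the compose file below):
# prints the binary location (if the shell can find it) and confirms the package itself imports
docker-compose run --rm api sh -c 'command -v gunicorn; python -c "import gunicorn; print(gunicorn.__version__)"'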
I'm also adding the docker-compose file to show how the containers are built:
version: "3"
services:
  worker:
    build:
      dockerfile: ./worker/Dockerfile
    container_name: container_worker
    environment:
      - PYTHONUNBUFFERED=1
    volumes:
      - ./api:/app/
      - ./worker:/app2/
  api:
    build:
      dockerfile: ./api/Dockerfile
    container_name: container_api
    volumes:
      - ./api:/app/
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "${API_PORT}:8050"
    depends_on:
      - worker
What is weird is that when I was using Flask as the development server, I had no problem using the packages installed via requirements.txt in my Dash app. It seems that using a package outside the Dash app (in the entrypoint script) is what causes the problem. Do you know why?
I hope my explanations were clear. Thank you for your help.

OK, it seems I discovered why I have this problem.
My development machine is a remote server and I'm developing on it from my local machine via the VS Code Remote-SSH extension. I checked the PATH and it actually contains something weird, with some VS Code elements in it.
So I tried launching Docker without VS Code and it works. I don't know yet how to fix this issue, but I will look for the answer in VS Code-related posts.
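For anyone who wants to check for the same kind of interference, comparing the PATH of the shell that launches docker-compose with the PATH the container actually receives is one way to see whether anything from the VS Code environment leaks in (a rough sketch, using the api service from the compose file above):
# run once from a plain SSH shell and once from the VS Code terminal, then compare
echo "$PATH"                               # the PATH of the launching shell
docker-compose run --rm api printenv PATH  # the PATH inside the container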
Thank you very much for your help

Related

Docker Max retries exceeded with url: /assets

I'm a bit new to docker and I'm messing around with it. I currently have a server being ran on port 5000 in another container. This server is being ran with express and uses JavaScript. I'm trying to send requests to that server with python. I tried using both localhost:5000 and 127.0.0.1:5000 but neither of these seems to work. What can I do? I noticed if I run the python code without docker it works perfectly fine.
Python Docker File:
FROM ubuntu:latest
RUN apt update
RUN apt install python3 -y
RUN apt-get install -y python3-pip
WORKDIR /usr/app/src
COPY . .
RUN pip install discum
RUN pip install python-dotenv
EXPOSE 5000
CMD ["python3", "./src/index.py"]
JavaScript Docker File:
FROM node:latest
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
You could create a network between the two containers using --net; have a look at this answer: How to get Docker containers to talk to each other while running on my local host?
Another way, and my preferred way, is to use docker-compose, which creates a network between your containers for you.
Using the service name and port is always the best approach.
So if you have a compose file like the one below, you can reach the container at the URL http://client:5000
version: "3.8"
services:
  client:
    image: blah
    ports:
      - 5000:5000
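On the Python side, assuming the requests library is available in the image (it is not installed in the Dockerfile above), the call against the service name would then look roughly like this:
import requests

# "client" is the compose service name; Docker's embedded DNS resolves it
# to the container's address on the shared compose network
response = requests.get("http://client:5000/")
print(response.status_code)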

Inside Docker Container - python: can't open file './services/web/manage.py': [Errno 2] No such file or directory

I am trying to create two Docker containers:
one for my web API
one for the PostgreSQL DB
I am using docker-compose to build these containers. Even though I can successfully build them with the docker-compose build command, whenever I inspect the logs with docker-compose logs -f, I get the following error message:
...
db_1 | 2020-08-19 12:39:07.681 UTC [45] LOG: database system was shut down at 2020-08-19 12:39:07 UTC
db_1 | 2020-08-19 12:39:07.686 UTC [1] LOG: database system is ready to accept connections
web_1 | python: can't open file 'manage.py': [Errno 2] No such file or directory
nlp-influencertextanalysis_web_1 exited with code 2
Everything seems fine with the db container, but for some reason, inside the web container, Python cannot locate the manage.py file. Here is my file structure:
And here is code for my docker-compose.yml:
version: '3.7'
services:
  web:
    build: ./services/web
    command: python manage.py run -h 0.0.0.0
    volumes:
      - ./services/web/:/usr/src/app/
    ports:
      - 5000:5000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:12-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=user1
      - POSTGRES_PASSWORD=test123
      - POSTGRES_DB=influencer_analysis
volumes:
  postgres_data:
And here is my code for Dockerfile:
FROM python:3.8.1-slim-buster AS training
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install system dependencies
RUN apt-get update && apt-get install -y netcat
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# install NLTK dependencies
RUN python -c "import nltk; nltk.download('punkt')"
# copy project
COPY . /usr/src/app/
WORKDIR /usr/src/app/experiments
RUN python train.py --data data/HaInstagramPostDetails.xlsx --update 1
I should note that I've printed out all the files located in /usr/src/app when train.py is executed with the RUN command in the Dockerfile, and manage.py is there.
I believe the problem is that you have changed the working directory at the end of your Dockerfile.
You can either give the exact path to your manage.py file, or change the working directory at the end of the Dockerfile back so it points to the app directory.
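For example, a minimal sketch of that second option, assuming manage.py sits in the project root that the compose volume is mounted over:
# ... earlier build steps unchanged ...
WORKDIR /usr/src/app/experiments
RUN python train.py --data data/HaInstagramPostDetails.xlsx --update 1

# switch back so the compose command (python manage.py run -h 0.0.0.0)
# runs from the directory that actually contains manage.py
WORKDIR /usr/src/app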
I think there is a problem with how the working directory is changed. Based on the folder structure, it should have been:
WORKDIR /usr/src/app/web/experiments

Copying file to Docker Container during Build but Process claims File Not Found

I am using Docker Toolbox to run a Python API. My Docker-Compose for the Python API can be seen below:
flask-api:
  container_name: flask_api
  restart: always
  build:
    context: ./api/
    dockerfile: Dockerfile
  ports:
    - "5000:80"
  volumes:
    - ./api:/usr/src/app
The Dockerfile for this flask-api is found in the ./api folder. The Dockerfile is:
FROM python:3-onbuild
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app
RUN pip install -r requirements.txt
COPY . /usr/src/app
CMD ["python", "app.py"]
The app.py file can be found in the ./api folder. However, given this Dockerfile and docker-compose setup, running docker-compose ends with the flask-api container crashing, claiming that "app.py is not found".
Some things I have tried:
I converted the Python image to an Ubuntu image with the same Dockerfile, ran it in interactive mode, and found that the files were indeed copied over.
I ran the Python image in interactive mode as well, used os.listdir() to list the files in the current directory, and once again found that the files were indeed copied over (roughly as sketched below).
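In case it helps, that second check was essentially the following one-liner (flask-api being the service name from the compose file above):
# lists the contents of the working directory inside a throwaway container
docker-compose run --rm flask-api python -c "import os; print(os.listdir('.'))"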
Any ideas on the cause of this issue? Please let me know if there is any other information I can report.

pserve not working inside docker container

I am trying to develop a Pyramid application using a Docker container. I built a Docker image with the Dockerfile below.
FROM ubuntu
RUN apt-get -y update
RUN apt-get -y install python3.6 python3.6-dev libssl-dev wget git python3-pip libmysqlclient-dev
WORKDIR /application
COPY . /application
RUN pip3 install -e .
EXPOSE 6543
This is my docker-compose file:
version: '3'
services:
  webserver:
    ports:
      - 6543:6543
    build:
      context: .
      dockerfile: Dockerfile-development
    volumes:
      - .:/application
    command: pserve development.ini --reload
The Docker image is created successfully, but when I run docker-compose up and browse to localhost:6543 it shows "The site can't be reached". When I run it locally with pserve development.ini it works fine. I connected to the container interactively and ran pserve development.ini, and it shows:
Starting server in PID 18.
Serving on http://localhost:6543
But when I browse the URL from Chrome, it is not working.
You need to listen on all network interfaces. In your development.ini file, use:
listen = *:6543
You should get a log which says:
Serving on http://0.0.0.0:6543
Then try to access it from your host machine using localhost:6543.
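For reference, the relevant section of development.ini would look roughly like this (a sketch assuming the standard waitress-based [server:main] block that the Pyramid cookiecutter generates):
[server:main]
use = egg:waitress#main
# listen on every interface, not just localhost, so the published port works
listen = *:6543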

Docker compose installing requirements.txt

In my Docker image I clone the git master branch to retrieve the code. I am using docker-compose for the development environment and running my containers with volumes. I ran into an issue when installing new project requirements from my Python requirements.txt file: in the development environment, newly added requirements are never installed, because they are only installed when the image is rebuilt, and rebuilding pulls the latest code from GitHub rather than my local changes.
Below is an example of my dockerfile:
FROM base
# Clone application
RUN git clone repo-url
# Install application requirements
RUN pip3 install -r app/requirements.txt
# ....
Here is my compose file:
myapp:
  image: development
  env_file: .env
  ports:
    - "8000:80"
  volumes:
    - .:/home/app
  command: python3 manage.py runserver 0.0.0.0:8000
Is there any way to install newly added requirements after the build, in the development environment?
There are two ways you can do this.
By hand
You can enter the container and do it yourself. Downside: not automated.
$ docker-compose exec myapp bash
2912d2cd9eab# pip3 install -r /home/app/requirements.txt
Using an entrypoint script
You can use an entrypoint script that runs prep work, then runs the command.
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
# ... probably other stuff in here ...
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
cd /home/app
pip3 install -r requirements.txt
# May as well do this too, while we're here.
python3 manage.py migrate
exec "$@"
The entrypoint is run like this at container startup:
/entrypoint.sh $CMD
Which expands to:
/entrypoint.sh python3 manage.py runserver 0.0.0.0:8000
The prep work is run first; then, at the end of the entrypoint script, the passed-in arguments are exec'd. That's your command, so the entrypoint script's process is replaced by your Django app server.
UPDATE:
After taking comments to chat, it came up that it is important to use exec to run the command, instead of running it at the end of the entrypoint script like this:
python3 manage.py runserver 0.0.0.0:8000
I can't exactly recall why it matters, but I ran into this previously as well; most likely it is because exec makes your command the container's main process (PID 1), so it receives signals such as the SIGTERM sent by docker stop directly. You need to exec the command or it will not work properly.
The way I solved this is by running two services:
server: runs the server and depends on requirements
requirements: installs the requirements before the server runs
And this is what the docker-compose.yml file looks like:
version: '3'
services:
  django:
    image: python:3.7-alpine
    volumes:
      - pip37:/usr/local/lib/python3.7/site-packages
      - .:/project
    ports:
      - 8000:8000
    working_dir: /project
    command: python manage.py runserver
    depends_on:
      - requirements
  requirements:
    image: python:3.7-alpine
    volumes:
      - pip37:/usr/local/lib/python3.7/site-packages
      - .:/project
    working_dir: /project
    command: pip install -r requirements.txt
volumes:
  pip37:
    external: true
PS: I created a named volume for the pip modules so I can preserve them across different projects. Because the volume is declared as external, you have to create it yourself first, with a name that matches the one in the compose file:
docker volume create pip37
