My docker-compose looks like this:
version: '3.7'
services:
  app:
    container_name: container1
    #restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    labels:
      ofelia.enabled: "true"
      ofelia.job-exec.app.schedule: "@every 30m"
      ofelia.job-exec.app.command: "/app/Final_DM_For_All_Clients.py"
  ofelia:
    image: mcuadros/ofelia:latest
    #restart: unless-stopped
    depends_on:
      - app
    command: daemon --docker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
After the first run, the second job does not run 30 minutes later, and I get the following error when the scheduler attempts it:
container1-ofelia-1 | 2022-05-24T22:59:02.037Z common.go:121 ▶ ERROR [Job "app" (3d8aab0fe293)] Finished in "3.322350449s", failed: true, skipped: false, error: error creating exec: API error (409): Container 65f90ee04e25d1164d29ab911197d239873da3eafbc34823873d3ba2d791a0ad is not running
My docker file looks like this:
FROM python:3.8
#ADD Final_DM_For_All_Clients.py /
RUN /usr/local/bin/python -m pip install --upgrade pip
RUN pip install pandas
RUN pip install numpy
RUN pip install datetime
RUN pip install uuid
RUN apt-get update && apt-get install -y gcc unixodbc-dev
RUN pip install pyodbc
# install SQL Server drivers
#RUN apt-get update \
# && curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
# && curl https://packages.microsoft.com/config/debian/9/prod.list \
# > /etc/apt/sources.list.d/mssql-release.list \
# msodbcsql17
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/11/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update
RUN ACCEPT_EULA=Y apt-get install -y msodbcsql17
RUN ACCEPT_EULA=Y apt-get install -y mssql-tools
RUN curl -L "https://github.com/docker/compose/releases/download/1.29.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
RUN apt-get update && apt-get install -y cron && apt-get install vim -y
COPY . /app
WORKDIR /app
ADD df_curr.csv .
ADD df_prev.csv .
ADD df_roster.csv .
ADD df.csv .
ADD manager_name.csv .
ADD name_data.csv .
EXPOSE 8080
CMD python Final_DM_For_All_Clients.py
After the first run the container exits with code 0, indicating a successful run, but because the container is then stopped, the scheduler cannot run the script at the next scheduled run.
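For reference, one quick way to see what state the container is in between runs is to list it with docker ps; a small sketch using the container name from the compose file above:
# Shows the container's status; once the script has finished it will read
# something like "Exited (0) ...", which is why a later exec has nothing to attach to.
docker ps -a --filter "name=container1"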
I think the issue is that you're using ofelia's exec job, which tries to docker exec into a running container; that fails here because your container has already exited.
What you are looking for is the run job, which, according to the docs, either runs a new container or starts an existing one. So I believe you should change your labels definition like this:
ofelia:
  ... # Your configs
  labels:
    ofelia.enabled: "true"
    ofelia.job-run.app.schedule: "@every 30m"
    ofelia.job-run.app.container: "container1"
... # Your configs
The docs say you have to pass the container parameter when you want a docker start. I am not sure how this plays out on the very first run, but I assume app runs on its own the first time and on later runs ofelia starts the container again.
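For context, here is a minimal sketch of how the whole compose file might look with the run job; the service and container names are taken from the question, and anything omitted (restart policy, build arguments, and so on) stays as you already have it:
version: '3.7'
services:
  app:
    container_name: container1
    build:
      context: .
      dockerfile: Dockerfile
  ofelia:
    image: mcuadros/ofelia:latest
    depends_on:
      - app
    command: daemon --docker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    labels:
      ofelia.enabled: "true"
      ofelia.job-run.app.schedule: "@every 30m"
      ofelia.job-run.app.container: "container1"
Note that with job-run the labels sit on the ofelia service rather than on app, matching the snippet above.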
I have a django application and I'm trying to run it using docker. But when I run docker-compose build the following error shows up:
> [ 1/15] FROM docker.io/library/python:3.9@sha256:d084f55e2bfeb86ae8e1f3fbac55aad813c7c343c7cbacc91ee11a2d07c32d25:
------
failed to solve: rpc error: code = Unknown desc = failed commit on ref "layer-sha256:bf48494000001a037b72870d2a6a2536f9da8bc5d1ceddd72d79f4a51fe7a60e": "layer-sha256:bf48494000001a037b72870d2a6a2536f9da8bc5d1ceddd72d79f4a51fe7a60e" failed size validation: 2623870 != 10876505: failed precondition
here is my Dockerfile:
FROM python:3.9
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN apt update && apt install -y lsb-release && apt clean all
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
RUN apt -y update && apt -y install gettext
RUN wget -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc
RUN sudo apt-key add ACCC4CF8.asc
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg-testing main 13" | tee /etc/apt/sources.list.d/pgdg-testing.list
RUN apt -y update && apt -y install postgresql-client-13
COPY . /code/
RUN chmod +x /code/entrypoint.sh
ENTRYPOINT ["sh", "/code/entrypoint.sh"]
here is my docker-compose.yml:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_PORT=${POSTGRES_PORT}
    volumes:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
      - ./static:/static
      - ./media:/media
    ports:
      - ${PORT}:8000
    depends_on:
      - db
volumes:
  postgres-data:
I'm on Windows and my docker-compose version is v2.2.3 and my docker version is 20.10.12, build e91ed57.
Any idea how I can get rid of this error?
Update: I believe it was due to network issues. If you have restricted access to the internet and face this problem, you might want to use a VPN.
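One quick way to check whether it really is a network problem is to pull the base image outside of the build and then retry with a clean build cache; a sketch:
# If this also fails with a size-validation or timeout error,
# the connection is the problem, not the Dockerfile.
docker pull python:3.9
# Drop any partially downloaded layers and retry the build.
docker builder prune -f
docker-compose build --no-cache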
I am trying to upload a flask script online associated with nginx image by using the following tutorial: https://www.docker.com/blog/docker-compose-from-local-to-amazon-ecs/
Here is my docker-compose.yml:
version: "3"
services:
  app:
    image: faust28100/moone-facerecognition:latest
    build:
      context: app
    ports:
      - "5000"
  nginx:
    image: faust28100/nginx-facereco:latest
    volumes:
      - mydata:/some/container/path
    depends_on:
      - app
    ports:
      - "80:80"
volumes:
  mydata:
I am also using the nginx image, with some modifications to have my own config.
The Dockerfile of the faust28100/nginx-facereco:latest image:
FROM nginx:alpine
COPY ./nginx.conf /etc/nginx/nginx.conf
And the nginx.conf:
events {
  worker_connections 1000;
}
http {
  server {
    listen 80;
    location / {
      proxy_pass http://app:5000;
    }
  }
}
The following is the Dockerfile that builds the faust28100/moone-facerecognition:latest image:
FROM python:3.10.4-slim-buster
COPY . /app
WORKDIR /app
RUN python -m pip install --upgrade pip
RUN pip install gevent
RUN apt-get update && apt-get install -y \
    curl
RUN apt-get update \
    && apt-get install -y wget \
    && rm -rf /var/lib/apt/lists/*
RUN apt-get update -y && \
    apt-get install build-essential cmake pkg-config -y
RUN pip install dlib==19.23.1
RUN pip install -r requirements.txt
# install google chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
# install chromedriver
RUN apt-get install -yqq unzip
RUN wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip
RUN unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
# set display port to avoid crash
ENV DISPLAY=:99
# EXPOSE 5000
# # CMD ["gunicorn", "-b", "0.0.0.0:5000", "wsgi:app"]
CMD gunicorn --bind 0.0.0.0:5000 --timeout 600 wsgi:app --worker-class gevent
My problem is that locally I run “docker compose up” and everything works perfectly: when I go to localhost, the request reaches the Python script, which renders a phrase. But when I connect to my AWS context (set up as in the link at the beginning) and run “docker compose up”, I get: 504 Gateway Time-out
nginx/1.21.6
At first I thought the problem came from my nginx configuration, but after reading more online I think the problem is with the other image (faust28100/moone-facerecognition:latest), which does not seem to be accessible from outside. I have tried a lot of tweaking but it still doesn’t work.
A Docker image I am creating and sending to a client is somehow deleting its source code 24-48 hours after the container is started. We can see this by exec-ing onto the running container and taking a look around.
The service is a simple Flask app. The service doesn't go down, since the application itself doesn't hit an error, but the static files it should be serving go missing (along with everything else that was copied in), so we start getting 404s. I can't think of anything that would explain this, especially considering that it takes time for it to occur.
FROM python:3.8-slim-buster
ARG USERNAME=calibrator
ARG USER_UID=1000
ARG USER_GID=$USER_UID
RUN apt-get update \
    && groupadd --gid $USER_GID $USERNAME \
    && useradd -s /bin/bash --uid $USER_UID --gid $USER_GID -m $USERNAME \
    && apt-get install -y sudo \
    && echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \
    && chmod 0440 /etc/sudoers.d/$USERNAME \
    # Install open-cv packages
    && apt-get install -y libsm6 libxext6 libxrender-dev libgtk2.0-dev libgl1-mesa-glx \
    #
    ## Git
    && sudo apt-get install -y git-lfs \
    #
    ## Bespoke setup
    && apt-get -y install unixodbc-dev \
    #
    ## PostgreSQL
    && apt-get -y install libpq-dev
ENV PATH="/home/${USERNAME}/.local/bin:${PATH}"
ARG git_user
ARG git_password
RUN pip install --upgrade pip
RUN python3 -m pip install --user git+https://${git_user}:${git_password}@bitbucket.org/****
WORKDIR /home/calibrator
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY app app
ENV FLASK_APP=app/app.py
EXPOSE 80
STOPSIGNAL SIGTERM
CMD ["uwsgi", "--http", ":80", "--module", "app.app", "--callable", "app", "--processes=1", "--master"]
version: "3.7"
services:
  calibrator:
    container_name: sed-calibrator-ui
    image: sed-metadata-calibrator:2.0.3
    restart: always
    ports:
      - "8081:80"
    environment:
      - STORE_ID=N0001
      - DYNAMO_TABLE=****
      - DYNAMO_REGION=****
      - AWS_DEFAULT_REGION=****
      - AWS_ACCESS_KEY_ID=****
      - AWS_SECRET_ACCESS_KEY=****
The application reads in a single configuration file and connects to a database on startup and then defines the endpoints - none of which touch the filesystem again. How can the source code be deleting itself!?
Creating a new container resolves the issue.
Any suggestions in checking the client's environment would be appreciated because I cannot replicate it.
Client's versions:
Docker Version - 18.09.7
Docker Compose version - 1.24.0
I was able to solve the problem by updating the kernel; it also worked with an older kernel (3.10).
Works:
4.1.12-124.45.6.el7uek.x86_64
Does not work:
4.1.12-124.43.4.el7uek.x86_64
I do not know the root cause; I only know that after updating the kernel the problem went away. I hope your problem is the same one.
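Since you cannot replicate it locally, a short sketch of the commands you could ask the client to run to compare their environment against yours (standard tooling only; the container name and path come from the compose file and Dockerfile above):
# Kernel version, to compare with the working/non-working versions listed above
uname -r
# Docker engine, storage driver and kernel details as Docker reports them
docker info
# Check whether the copied-in application files are still present in the container
docker exec -it sed-calibrator-ui ls -la /home/calibrator/app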
I have an app, ABC, which I want to run in a Docker environment. I built a Dockerfile and got the image abcd1234, which I used in docker-compose.yml.
But whenever I build with docker-compose, everything in requirements.txt is reinstalled. Can it not use the already existing image and save the time spent reinstalling?
I'm new to Docker and trying to understand all the parameters. Also, is the context correct in docker-compose.yml, or should it contain a path inside the image?
PS: my docker-compose.yml is not in the same directory as the project because I'll be using multiple images to expose more ports.
docker-compose.yml:
services:
  app:
    build:
      context: /Users/user/Desktop/ABC/
    ports:
      - "8000:8000"
    image: abcd1234
    command: >
      sh -c "python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    environment:
      - PROJECT_ENV=development
Dockerfile:
FROM python:3.6-slim-buster AS build
MAINTAINER ABC
ENV PYTHONUNBUFFERED 1
RUN python3 -m venv /venv
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y git && \
    apt-get install -y build-essential && \
    apt-get install -y awscli && \
    apt-get install -y unzip && \
    apt-get install -y nano
RUN apt-get install -y libsm6 libxext6 libxrender-dev
COPY . /ABC/
RUN apt-cache search mysql-server
RUN apt-cache search libmysqlclient-dev
RUN apt-get install -y libpq-dev
RUN apt-get install -y postgresql
RUN apt-cache search postgresql-server-dev-9.5
RUN pip install --upgrade awscli==1.14.5 s3cmd==2.0.1 python-magic
RUN pip install -r /ABC/requirements.txt
WORKDIR .
Please guide me on how to tackle these 2 scenarios. Thanks!
The context: directory is the directory on your host system that includes the Dockerfile. It's the same directory you would pass to docker build, and it frequently is just the current directory ..
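Since your compose file is not in the project directory, a relative context also works; in Compose, a relative path is resolved against the directory that contains the docker-compose.yml. A sketch (the ../ABC layout is hypothetical):
services:
  app:
    build:
      # Hypothetical layout: the compose file lives one level above the ABC project.
      context: ../ABC
      dockerfile: Dockerfile
    image: abcd1234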
Within the Dockerfile, Docker can cache individual build steps so that it doesn't repeat them, but only until it reaches the point where something has changed. That "something" can be a changed RUN line, but at the point of your COPY, if any file at all changes in your local source tree that also invalidates the cache for everything after it.
For this reason, a typical Dockerfile has a couple of "phases"; you can repeat this pattern in other languages too. You can restructure your Dockerfile in this order:
# 1. Base information; this almost never changes
FROM python:3.6-slim-buster AS build
MAINTAINER ABC
ENV PYTHONUNBUFFERED 1
WORKDIR /ABC
# 2. Install OS packages. Doesn't depend on your source tree.
# Frequently just one RUN line (but could be more if you need
# packages that aren't in the default OS package repository).
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get upgrade -y && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
      build-essential unzip libxrender-dev libpq-dev
# 3. Copy _only_ the file that declares language-level dependencies.
# Repeat starting from here only if this file changes.
COPY requirements.txt .
RUN pip install -r requirements.txt
# 4. Copy the rest of the application in. In a compiled language
# (Javascript/Webpack, Typescript, Java, Go, ...) build it.
COPY . .
# 5. Explain how to run the application.
EXPOSE 8000
CMD python manage.py migrate && \
    python manage.py runserver 0.0.0.0:8000
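To see the caching in action, build twice and compare the step output; a sketch using the compose setup from the question (the app service name comes from your file):
# First build: every step runs.
docker-compose build app
# Edit only application code (not requirements.txt) and build again:
# the OS-package and pip-install layers are reported as cached,
# and only the steps from COPY . . onward re-run.
docker-compose build app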
I am using Docker to start a MySQL service in a container. After the container starts, I want to insert some data into the database automatically via a Python script. This is my Dockerfile:
FROM mysql:5.7
EXPOSE 3306
ENV MYSQL_ROOT_PASSWORD 123456
WORKDIR /app
ADD . /app
RUN apt-get update \
    && apt-get install -y python3 \
    && apt-get install -y python3-pip
RUN pip3 install --user -r requirements.txt
RUN python3 init.py
The last line runs the script that adds data to the database, but at that point the MySQL service has not started yet, so it fails during docker build. How do I accomplish this? Thanks in advance.
According to the docs, the MySQL entrypoint will automatically execute any .sh, .sql or .sql.gz files found in /docker-entrypoint-initdb.d. So, create a shell script that executes your Python script for you. If you call this file 01-my-script.sh, your Dockerfile will look like this:
FROM mysql:5.7
EXPOSE 3306
ENV MYSQL_ROOT_PASSWORD 123456
WORKDIR /app
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip
# Copy requirements in first, and run them (so cache won't be invalidated)
COPY ./requirements.txt ./requirements.txt
RUN pip3 install --user -r requirements.txt
# Copy SQL Fixture
COPY ./01-my-script.sh /docker-entrypoint-initdb.d/01-my-script.sh
RUN chmod +x /docker-entrypoint-initdb.d/01-my-script.sh
# Copy the rest of your project
COPY . .
And your script will only contain:
#!/bin/sh
python3 /app/init.py
Now, when you bring up your container, your script will execute. Monitor the execution of the running container with docker logs -f <container_name> to make sure your script is running.
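Putting it together, a short sketch of how you might build and run this; the image and container names here are made up for the example:
# Build the image from the Dockerfile above
docker build -t mysql-with-fixture .
# Run it; MYSQL_ROOT_PASSWORD is already set via ENV in the Dockerfile
docker run -d --name mysql-fixture -p 3306:3306 mysql-with-fixture
# Watch the entrypoint run the init scripts, including 01-my-script.sh
docker logs -f mysql-fixture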