When I run docker-compose up, the following error comes out, and I don't quite understand how to fix it!
ERROR: for a1e9335fc0e8_bot Cannot start service tgbot: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: exec: "python3 main.py": executable file not found in $PATH: unknown
My Dockerfile:
FROM python:latest
WORKDIR /src
COPY req.txt /src
RUN pip install -r req.txt
COPY . /src
My docker-compose.yml:
version: "3.1"
services:
db:
container_name: database
image: sameersbn/postgresql:10-2
environment:
PG_PASSWORD: $PGPASSWORD
restart: always
ports:
- 5432:5432
networks:
- botnet
volumes:
- ./pgdata:/var/lib/postgresql
tgbot:
container_name: bot
build:
context: .
command:
- python3 main.py
restart: always
networks:
- botnet
env_file:
- ".env"
depends_on:
- db
networks:
botnet:
driver: bridge
Your command: is in list (exec) form, so Compose treats the whole string python3 main.py as the name of a single executable, and no executable with that name exists in the image.
Change it to the shell form and it will work:
tgbot:
  container_name: bot
  build:
    context: .
  command: python3 main.py
  restart: always
  networks:
    - botnet
  env_file:
    - ".env"
  depends_on:
    - db
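If you prefer to keep the list (exec) form instead, each argument must be its own element; a minimal sketch of the equivalent line:

command: ["python3", "main.py"]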
More info here.
I was hoping to get some insight into what I am missing. I am currently trying to run a docker-compose setup with Python (using walrus as the database wrapper) and a Redis image, but I keep receiving the same error:
redis.exceptions.ConnectionError: Error -2 connecting to redis://redis:6379. Name or service not known.
I have tried different solutions from Stack Overflow to fix this, but nothing has worked so far.
Here is the relevant docker-compose config:
version: '3.3'
services:
  redis:
    image: redis:latest
    container_name: redis
    ports:
      - "6379:6379"
    command: ["redis-server"]
    entrypoint: redis-server --appendonly yes
  consumers:
    build: ./consumers
    container_name: consumers
    environment:
      - REDIS_HOST=redis://redis
    command: "./run.sh"
    depends_on:
      - redis
    restart: always
    tty: true
networks:
  default:
    driver: bridge
Dockerfile:
FROM python:3.10
WORKDIR /consumers
# Copy Dependencies
COPY requirements.txt requirements.txt
COPY run.sh .
# Install Dependencies
RUN pip install -r requirements.txt
COPY . .
ENV REDIS_HOST=redis://redis
RUN chmod a+x run.sh
# Run executable consumer.py
CMD [ "./run.sh"]
And the connection to Redis in Python, using walrus:
rdb = Database(host=os.getenv("REDIS_HOST", "localhost"), port=6379)
Locally, without Docker, the setup works fine. Any direction here would be really appreciated.
Thank you
The following configuration made it work: I removed the entrypoint, created a new custom network, exposed the port, and changed REDIS_HOST to redis, i.e. the container name. All of these together made it work; when applying only one of them at a time, the problem persisted.
version: '3.5'
services:
  redis:
    image: redis:latest
    container_name: redis
    ports:
      - "6379:6379"
    expose:
      - 6379:6379
    command: ["redis-server"]
    networks:
      - connections
  consumers-g1:
    build: ./consumers
    container_name: consumers-g1
    environment:
      - REDIS_HOST=redis
    command: "./run.sh"
    expose:
      - 6379:6379
    networks:
      - connections
    restart: always
    tty: true
networks:
  connections:
    name: connections
    driver: bridge
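For reference, a minimal sketch of the matching Python side, assuming walrus's Database (which wraps redis.Redis) is constructed with host/port keyword arguments as in the question. The key point is that REDIS_HOST is now a plain hostname, not a redis:// URL:

import os
from walrus import Database

# "redis" resolves to the redis service on the shared "connections" network
rdb = Database(host=os.getenv("REDIS_HOST", "localhost"), port=6379, db=0)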
I have a small Python backend and a MariaDB database, separated into Docker services.
The docker-compose looks like this:
version: '3.5'
networks:
  web:
    name: web
    external: true
  wsm:
    name: wsm
    internal: true
volumes:
  wsm-partsfinder-db:
    name: wsm-partsfinder-db
services:
  wsmbackend:
    build:
      context: .
      dockerfile: ./docker/Dockerfile
    container_name: wsm-file-parts-backend
    restart: always
    depends_on:
      - wsmdb
    ports:
      - "8888:8888"
    networks:
      - web
      - wsm
  wsmdb:
    container_name: wsmdb
    image: mariadb:10.7.1
    command: --default-authentication-plugin=mariadb_native_password
    restart: unless-stopped
    environment:
      MARIADB_ROOT_PASSWORD: password
      MARIADB_USER: wsm
      MARIADB_PASSWORD: password
      MARIADB_DATABASE: wsm_parts
    volumes:
      - wsm-partsfinder-db:/var/lib/mysql
    networks:
      - wsm
    ports:
      - "4485:3306"
The Dockerfile used for the wsmbackend service looks like this:
FROM python
RUN apt-get update -y
RUN apt-get upgrade -y
COPY . /wsm
WORKDIR /wsm
RUN pip install -r requirements.txt
RUN yoyo apply --database mysql://wsm:password#wsmdb:4485/wsm_parts ./migrations
EXPOSE 8888
CMD ["/bin/sh", "-c", "python main.py"]
I get an error at the yoyo apply ... step.
What is the issue in this case?
Thanks in advance
You cannot run queries against the database in your build stage, because the database container is not running at that point.
A RUN statement is executed only during the build stage. You need to move that step into the CMD (or an entrypoint script), so it is executed when the container starts and the database is up.
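A minimal sketch of what that could look like in the Dockerfile (my assumptions: the credentials from the compose file, the service hostname wsmdb, and the container-internal port 3306 rather than the published host port 4485):

CMD ["/bin/sh", "-c", "yoyo apply --database mysql://wsm:password@wsmdb:3306/wsm_parts ./migrations && python main.py"]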
Airflow beginner here. I have a question about how to install a custom utility package in a Docker container that will be used in Docker Compose for Airflow. The reason I want to do this is that the package has a lot of reusable code, and I don't want to constantly copy it into new project directories.
The custom utility package would only be needed by the webserver container.
Since I am aiming not to copy the utility code into my docker-compose directory, would I need to install it in a separate container and then reference (through extending?) that container somewhere in my Airflow directory? I hope I'm not overcomplicating this.
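(Not part of the original question, just a sketch of one common approach: if the utility package is pip-installable, for example from its own Git repository or a private index, it can be installed directly into the webserver image instead of copying its source around. The package name and URL below are hypothetical, and this assumes git is available during the build.)

RUN pip install git+https://github.com/your-org/your-utils.git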
My current airflow setup is as follows:
Airflow_ETL
--/airflow
--/scripts
  -/data
  -/resource
  -pull_data.py
--docker-compose.yml
--Dockerfile
--env.list
--requirements.txt
My Dockerfile looks like this:
FROM puckel/docker-airflow:1.10.9
COPY airflow/airflow.cfg ${AIRFLOW_HOME}/airflow.cfg
RUN pip install --upgrade pip
RUN pip install SQLAlchemy==1.3.15
WORKDIR /usr/src/app
COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
Docker Compose uses this Dockerfile to build the rest of the containers.
version: '3.7'
services:
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
  redis:
    image: redis:5.0.5
  flower:
    image: flower:latest
    build:
      context: .
    restart: always
    depends_on:
      - redis
    environment:
      - EXECUTOR=Celery
    ports:
      - "5555:5555"
    command: flower
  webserver:
    image: webserver:latest
    build:
      context: .
    restart: always
    depends_on:
      - postgres
      - redis
    environment:
      - LOAD_EX=n
      - FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - EXECUTOR=Celery
      - PYTHONPATH=/usr/local/airflow
    env_file:
      - env.list
    volumes:
      - ./airflow/dags:/usr/local/airflow/dags
      - ./scripts:/usr/local/airflow/scripts
    ports:
      - "8080:8080"
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 3
  scheduler:
    image: scheduler:latest
    build:
      context: .
    restart: always
    depends_on:
      - webserver
    volumes:
      - ./airflow/dags:/usr/local/airflow/dags
      - ./scripts:/usr/local/airflow/scripts
    environment:
      - LOAD_EX=n
      - FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - EXECUTOR=Celery
      - PYTHONPATH=/usr/local/airflow
    command: scheduler
    env_file:
      - env.list
  worker1:
    image: worker1:latest
    build:
      context: .
    restart: always
    depends_on:
      - scheduler
    volumes:
      - ./airflow/dags:/usr/local/airflow/dags
      - ./scripts:/usr/local/airflow/scripts
    environment:
      - FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - EXECUTOR=Celery
      - PYTHONPATH=/usr/local/airflow
    command: worker
    env_file:
      - env.list
  worker2:
    image: worker2:latest
    build:
      context: .
    restart: always
    depends_on:
      - scheduler
    volumes:
      - ./airflow/dags:/usr/local/airflow/dags
      - ./scripts:/usr/local/airflow/scripts
    environment:
      - FERNET_KEY=46BKJoQYlPPOexq0OhDZnIlNepKFf87WFwLbfzqDDho=
      - EXECUTOR=Celery
      - PYTHONPATH=/usr/local/airflow
    command: worker
    env_file:
      - env.list
Thank you for your time.
I run two different apps in containers:
Django App
Flask App
The Django app runs just fine. I configured my Flask app as follows.
This is my docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: username
      MYSQL_PASSWORD: pwd
      MYSQL_ROOT_PASSWORD: pwd
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
And this is my Dockerfile:
FROM python:3.8
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
COPY . /app
CMD python main.py
Problem: when I run docker-compose up, the following error occurs:
backend_1 | python: can't open file 'manage.py': [Errno 2] No such file or directory
I don't know why it tries to open the manage.py file, since this is a Flask app and not a Django app. I need your help. Thanks in advance.
I'm not sure how this will work for you, but I resolved this by adding a command parameter to my docker-compose.yml, so it looks like this:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'python main.py'   # <-- updated line
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: username
      MYSQL_PASSWORD: pwd
      MYSQL_ROOT_PASSWORD: pwd
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
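(A side note, not from the original answer: the same start command can also be written in exec form in the Dockerfile; when both are present, the command: in docker-compose.yml takes precedence over the image's CMD.)

CMD ["python", "main.py"]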
I started a new Flask app (but this time on a virtualenv) and my problem was fixed :)
I have been going bonkers over this one: the celery service in my docker-compose.yml just does not pick up tasks (sometimes). It works at times, though.
Dockerfile:
FROM python:3.6
RUN apt-get update
RUN mkdir /web_back
WORKDIR /web_back
COPY web/requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY web/ .
docker-compose.yml
(A few services have been taken out for the sake of clarity)
version: '3'
services:
  web_serv:
    restart: always
    build: .
    container_name: web_back_01
    env_file:
      - ./envs/web_back_01.env
    volumes:
      - ./web/:/web_back
    depends_on:
      - web_postgres
    expose:
      - 8282
    extra_hosts:
      - "dockerhost:104.10.4.11"
    command: bash -c "./initiate.sh"
  service_A:
    restart: always
    build: ../../web-service-A/A/
    container_name: web_back_service_a_01
    volumes:
      - ../../web-service-A/A.:/web-service-A
    depends_on:
      - web
    ports:
      - '5100:5100'
    command: bash -c "python server.py"
  service_B:
    restart: always
    build: ../../web-service-B/B/
    container_name: web_back_service_b_01
    volumes:
      - ../../web-service-B/B.:/web-service-B
    depends_on:
      - web
    ports:
      - '5200:5200'
    command: bash -c "python server.py"
  web_postgres:
    restart: always
    build: ./postgres
    container_name: web_postgres_01
    # restart: unless-stopped
    ports:
      - "5433:5432"
    environment: # will be used by the init script
      LC_ALL: C.UTF-8
      POSTGRES_USER: web
      POSTGRES_PASSWORD: web
      POSTGRES_DB: web
    volumes:
      - pgdata:/var/lib/postgresql/data/
  nginx:
    restart: always
    build: ./nginx/
    container_name: web_nginx_01
    volumes:
      - ./nginx/:/etc/nginx/conf.d
      - ./logs/:/code/logs
      - ./web/static/:/static_cdn/
      - ./web/media/:/media_cdn/
    ports:
      - "80:80"
    links:
      - web_serv
  redis:
    restart: always
    container_name: web_redis_01
    ports:
      - "6379:6379"
    links:
      - web_serv
    image: redis
  celery:
    build: .
    volumes:
      - ./web/:/web_back
    container_name: web_celery_01
    command: celery -A web worker -l info
    links:
      - redis
    depends_on:
      - redis
volumes:
  pgdata:
  media:
  static:
settings.py
CELERY_BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
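For context (the project layout is not shown in the question, so this is an assumption): the command celery -A web worker -l info expects a Celery application roughly along these lines, with the CELERY_* settings above picked up via the CELERY namespace:

# web/celery.py (sketch)
import os
from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "web.settings")

app = Celery("web")
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()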
Notice service_A and service_B; those are the two services that at times do not get fired up.
Any help in understanding this odd behavior would be much appreciated! Thanks
So, I think I ran into a similar problem. I was pulling my hair out because I was updating my worker.py, and not only would the autoreload not reflect any changes, but even when I reran docker-compose up my changes would still not be reflected.
Sometimes when I ran docker-compose up --build --force-recreate my changes would be reflected, but not reliably.
I was able to resolve this problem by doing two things:
Removing the __pycache__ directory in my worker's directory.
Running $ find . -name "*.pyc" -exec rm {} \; before doing docker-compose up --build --force-recreate when the caching behavior persists.
I'm not 100% sure what's going on myself, but it's clear that Celery + Docker without autoreload means Docker has a tendency to use a cached version of the compiled task. I have seen a bit of chatter about ways to set up autoreload with Celery + Docker using things like watchdog or modd, but I have yet to set that up for my project.
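(One extra precaution, my own suggestion rather than part of the answer above: telling Python not to write bytecode in the first place keeps stale .pyc files out of the image and the mounted volume.)

# in the worker's Dockerfile
ENV PYTHONDONTWRITEBYTECODE=1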