I am faced with the error below, and I have no idea how to solve it. If anyone knows the solution, please help me...
Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: "django-admin.py": executable file not found in $PATH": unknown
■docker-compose.yml
version: "3"
services:
  nginx:
    image: nginx:1.13
    ports:
      - "8000:8000"
    volumes:
      - ./nginx/conf:/etc/nginx/conf.d
      - ./nginx/uwsgi_params:/etc/nginx/uwsgi_params
      - ./static:/static
    depends_on:
      - python
  db:
    image: mysql:5.7
    # Set MySQL's character set (the default is latin1, which cannot handle Japanese input)
    command: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: todoList
      MYSQL_USER: user
      MYSQL_PASSWORD: mitsuki630
      TZ: "Asia/Tokyo"
    volumes:
      - ./mysql:/var/lib/mysql
      # Mounting ./sql onto /docker-entrypoint-initdb.d makes the SQL files under ./sql run at container startup (DB initialization)
      - ./sql:/docker-entrypoint-initdb.d
  python:
    build: ./python
    # Open port 8001 (an arbitrary port number) with uwsgi; this is needed later to hook nginx up to the app
    # The "app" in app.wsgi is the Django project name
    # --py-autoreload 1 makes uwsgi reload automatically when files change during development
    # --logto /tmp/mylog.log writes the log to a file
    command: uwsgi --socket :8001 --module app.wsgi --py-autoreload 1 --logto /tmp/mylog.log
    volumes:
      - ./src:/code
      - ./static:/static
    expose:
      - "8001"
    depends_on:
      - db
■Dockerfile
# Install Python 3.6
FROM python:3.6
# PYTHONUNBUFFERED apparently disables output buffering
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN /bin/sh -c pip install -r requirements.txt
COPY . /code
■requirements.txt
Django==2.0.4
uwsgi==2.0.17
PyMySQL==0.8.0
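The Dockerfile above is the likely culprit: RUN /bin/sh -c pip install -r requirements.txt only ever executes pip with no arguments, because sh -c takes just the first word (pip) as its command string and treats the rest as positional parameters. Docker already wraps RUN in /bin/sh -c, so the plain form is all that is needed:

RUN pip install -r requirements.txt

With that fixed, rebuild the image and create the project; django-admin (without the .py suffix) is the spelling modern Django documents, and "app" is the project name implied by app.wsgi in docker-compose.yml:

docker-compose build python
docker-compose run --rm python django-admin startproject app .

If Django was never installed because the pip step did nothing, neither django-admin nor the legacy django-admin.py is on $PATH, which matches the error above.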
I am using Django and PostgreSQL, and I use django-crontab to change the data.
It runs well in the local environment, but we deploy with Docker, and I confirmed that when cron runs it refers to sqlite3 instead.
I also made a separate cron container in docker compose and ran it, but as a beginner I am probably using it incorrectly. Please help me.
#goods/cron.py
from goods.models import Goods

def test():
    print(Goods.objects.all())
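For django-crontab to actually schedule this function, it also has to be listed in the CRONJOBS setting; here is a minimal sketch (the every-minute schedule is an assumption for illustration):

# settings.py (sketch): register goods.cron.test with django-crontab
CRONJOBS = [
    ('* * * * *', 'goods.cron.test'),  # the schedule here is an assumption
]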
./docker-compose.yml
version: '3.8'
volumes:
  postgres: {}
  django_media: {}
  django_static: {}
  static_volume: {}
services:
  postgres:
    container_name: postgres
    image: postgres:14.5
    volumes:
      - postgres:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
    restart: always
  nginx:
    container_name: nginx
    image: nginx:1.23.2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
      - django_media:/media/
      - django_static:/static/
    depends_on:
      - asgiserver
      - backend
    restart: always
  backend:
    container_name: django_backend
    build: .
    entrypoint: sh -c "python manage.py migrate && gunicorn handsup.wsgi --workers=5 -b 0.0.0.0:8000"
    restart: always
    volumes:
      - ./:/app/
      - /etc/localtime:/etc/localtime:ro
      - django_media:/app/media/
      - django_static:/app/static/
    environment:
      - DEBUG
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_HOST
      - POSTGRES_PORT
    depends_on:
      - postgres
  redis:
    image: redis:5
  asgiserver:
    build: .
    command: daphne -b 0.0.0.0 -p 8080 handsup.asgi:application
    volumes:
      - ./:/app/
    restart: always
    environment:
      - DEBUG
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_HOST
      - POSTGRES_PORT
    depends_on:
      - redis
      - postgres
  cron:
    build: .
    restart: always
    volumes:
      - ./:/app/
    depends_on:
      - postgres
      - backend
    environment:
      - DEBUG
      - POSTGRES_DB
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_HOST
      - POSTGRES_PORT
    command: cron -f  # run cron as a long-running foreground process
./Dockerfile
FROM python:3.10.8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN mkdir /app/
WORKDIR /app/
RUN apt-get update -y
RUN apt-get install -y cron
COPY ./requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
RUN pip install gunicorn psycopg2
COPY ./ /app/
# RUN service cron start
ENTRYPOINT ["./docker-entrypoint.sh"]
./docker-entrypoint.sh
#!/bin/sh
# If this is going to be a cron container, set up the crontab.
if [ "$1" = cron ]; then
    ./manage.py crontab add
fi
# Launch the main container command passed as arguments.
exec "$@"
I referred to the contents below.
How to make django-crontab execute commands in Docker container?
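One thing worth checking: cron starts each job with an almost empty environment, so if settings.py reads the POSTGRES_* variables from os.environ and falls back to sqlite3 when they are missing, cron jobs will hit sqlite3 even though the web containers use Postgres. A sketch of a common workaround, assuming Debian's cron (which reads /etc/environment via pam_env), is to snapshot the variables in the entrypoint before registering the crontab:

#!/bin/sh
# docker-entrypoint.sh (sketch): make compose-provided variables visible to cron jobs
if [ "$1" = cron ]; then
    # cron jobs do not inherit the container's environment; persist the DB settings
    # where Debian's cron (via pam_env) will pick them up
    printenv | grep -E '^(DEBUG|POSTGRES_)' >> /etc/environment
    ./manage.py crontab add
fi
# Launch the main container command passed as arguments
exec "$@"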
I am facing the error mentioned below; could you please help me resolve it?
invalid volume specification: '/run/desktop/mnt/host/d/Master/Projects/python_task/image: /app:rw': invalid mount config for type "bind": invalid mount path: ' /app' mount path must be absolute
My file structure can be seen in the attached image.
Docker-Compose:
services:
  psql_crxdb:
    image: postgres:13
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=dvd_rental
    volumes:
      - "./dvdrental_data:/var/lib/postgresql/data:rw"
    ports:
      - "5432:5432"
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin@admin.com
      - PGADMIN_DEFAULT_PASSWORD=root
    ports:
      - "8080:80"
  analytics:
    build:
      context: main
    environment:
      POSTGRESQL_CS: 'postgresql+psycopg2://postgres:password@psql_crxdb:5432/dvd_rental'
    depends_on:
      - psql_crxdb
    command: ["python", "./main.py"]
  pythontask:
    build:
      context: python_task
    command: ["python", "./circle.py"]
    volumes:
      - "./python_task/image: /app"
Dockerfile:
FROM python:3.9
RUN apt-get update && apt-get install -y wget
RUN pip install Pillow datetime
WORKDIR /app
COPY circle.py circle.py
ENTRYPOINT ["python", "circle.py"]
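The error message itself points at the problem: the volume entry contains a space after the colon, so Compose parses " /app" (with the leading space) as the container path, which is not an absolute path. Removing the space fixes the mount:

volumes:
  - "./python_task/image:/app"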
When I run docker-compose up, the following error comes out, which I don't quite understand how to fix!
ERROR: for a1e9335fc0e8_bot Cannot start service tgbot: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: exec: "python3 main.py": executable file not found in $PATH: unknown
My Dockerfile:
FROM python:latest
WORKDIR /src
COPY req.txt /src
RUN pip install -r req.txt
COPY . /src
My docker-compose.yml:
version: "3.1"
services:
  db:
    container_name: database
    image: sameersbn/postgresql:10-2
    environment:
      PG_PASSWORD: $PGPASSWORD
    restart: always
    ports:
      - 5432:5432
    networks:
      - botnet
    volumes:
      - ./pgdata:/var/lib/postgresql
  tgbot:
    container_name: bot
    build:
      context: .
    command:
      - python3 main.py
    restart: always
    networks:
      - botnet
    env_file:
      - ".env"
    depends_on:
      - db
networks:
  botnet:
    driver: bridge
Your command: is in list form, so Compose thinks that the executable file is literally called python3 main.py. That doesn't exist.
Change it to this and it'll work
tgbot:
  container_name: bot
  build:
    context: .
  command: python3 main.py
  restart: always
  networks:
    - botnet
  env_file:
    - ".env"
  depends_on:
    - db
More info here.
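The list form also works, as long as each argument is its own element, because Compose then treats it as an exec-style argument vector:

command: ["python3", "main.py"]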
I run two different Apps in containers.
Django App
Flask App
The Django app ran just fine. I configured my Flask app as follows.
This is the docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: username
      MYSQL_PASSWORD: pwd
      MYSQL_ROOT_PASSWORD: pwd
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
This is my Dockerfile:
FROM python:3.8
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
COPY . /app
CMD python main.py
Problem: when I run docker-compose up, the following error occurs:
backend_1 | python: can't open file 'manage.py': [Errno 2] No such file or directory
I don't know why it tries to open the manage.py file, since this is a Flask app and not a Django app. I need your help. Thanks in advance.
I'm not sure how this will work for you, but I resolved this by changing my docker-compose.yml to have a command parameter, so it looks like this:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'python main.py'  # << updated line >>
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: username
      MYSQL_PASSWORD: pwd
      MYSQL_ROOT_PASSWORD: pwd
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
I started a new Flask app (but this time in a virtualenv) and my problem was fixed :)
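A note for anyone else who hits this: if the image was built while the Dockerfile still had a Django-style CMD, docker-compose up happily reuses the stale build, so the new CMD python main.py never takes effect (this is an assumption about the build history, not something visible in the files above). A clean rebuild rules that out:

docker-compose build --no-cache backend
docker-compose up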
I have been going bonkers over this one: the celery service in my docker-compose.yml just does not pick up tasks (sometimes). It works at times, though.
Dockerfile:
FROM python:3.6
RUN apt-get update
RUN mkdir /web_back
WORKDIR /web_back
COPY web/requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY web/ .
docker-compose.yml
(I have taken out a few services for the sake of clarity)
version: '3'
services:
  web_serv:
    restart: always
    build: .
    container_name: web_back_01
    env_file:
      - ./envs/web_back_01.env
    volumes:
      - ./web/:/web_back
    depends_on:
      - web_postgres
    expose:
      - 8282
    extra_hosts:
      - "dockerhost:104.10.4.11"
    command: bash -c "./initiate.sh"
  service_A:
    restart: always
    build: ../../web-service-A/A/
    container_name: web_back_service_a_01
    volumes:
      - ../../web-service-A/A.:/web-service-A
    depends_on:
      - web
    ports:
      - '5100:5100'
    command: bash -c "python server.py"
  service_B:
    restart: always
    build: ../../web-service-B/B/
    container_name: web_back_service_b_01
    volumes:
      - ../../web-service-B/B.:/web-service-B
    depends_on:
      - web
    ports:
      - '5200:5200'
    command: bash -c "python server.py"
  web_postgres:
    restart: always
    build: ./postgres
    container_name: web_postgres_01
    # restart: unless-stopped
    ports:
      - "5433:5432"
    environment: # will be used by the init script
      LC_ALL: C.UTF-8
      POSTGRES_USER: web
      POSTGRES_PASSWORD: web
      POSTGRES_DB: web
    volumes:
      - pgdata:/var/lib/postgresql/data/
  nginx:
    restart: always
    build: ./nginx/
    container_name: web_nginx_01
    volumes:
      - ./nginx/:/etc/nginx/conf.d
      - ./logs/:/code/logs
      - ./web/static/:/static_cdn/
      - ./web/media/:/media_cdn/
    ports:
      - "80:80"
    links:
      - web_serv
  redis:
    restart: always
    container_name: web_redis_01
    ports:
      - "6379:6379"
    links:
      - web_serv
    image: redis
  celery:
    build: .
    volumes:
      - ./web/:/web_back
    container_name: web_celery_01
    command: celery -A web worker -l info
    links:
      - redis
    depends_on:
      - redis
volumes:
  pgdata:
  media:
  static:
settings.py
CELERY_BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
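For context, celery -A web worker expects a celery.py module inside the web package, along the lines of the standard Django/Celery layout; here is a sketch of it (the module paths are assumptions inferred from the -A web argument):

# web/celery.py (sketch of the layout "celery -A web worker" expects)
import os
from celery import Celery

# Point Celery at the Django settings shown above.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'web.settings')

app = Celery('web')
# Read the CELERY_-prefixed keys (CELERY_BROKER_URL etc.) from Django settings.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Discover tasks.py modules in the installed apps.
app.autodiscover_tasks()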
Notice service_A and service_B in the compose file above; those are the two services that at times do not get fired up.
Any help in understanding this odd behavior would be very helpful! Thanks
So, I think I ran into a similar problem. I was pulling my hair out because I was updating my worker.py, and then not only would the autoreload not reflect any changes, but when I'd rerun docker-compose up my changes would still not be reflected.
Sometimes when I'd run docker-compose up --build --force-recreate my changes would be reflected, but not reliably.
I was able to resolve this problem by doing two things:
Remove the __pycache__ in my worker's directory.
Run $ find . -name "*.pyc" -exec rm {} \; before doing docker-compose up --build --force-recreate when caching behavior persists.
I'm not 100% sure what's going on myself, but it's clear that Celery + Docker without autoreload means Docker has a tendency to use a cached version of the compiled task. I see a bit of chatter regarding ways to set up autoreload with Celery + Docker with things like watchdog or modd, but I have yet to set that up for my project.
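For anyone hitting the same caching behavior, the two steps above combine into a short cleanup before the forced rebuild (run from the project root):

# Remove compiled bytecode so the rebuilt image cannot reuse stale task code.
find . -type d -name "__pycache__" -exec rm -rf {} +
find . -name "*.pyc" -exec rm {} \;
# Then rebuild and recreate the containers from scratch.
docker-compose up --build --force-recreate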