standard_init_linux.go:211: exec user process caused "operation not permitted" - python

I was trying to run a django-rest-framework app on Docker using a Python 2.7 / Django 1.11 image and a Postgres image. Here is my docker-compose.yml file.
I am running Docker on Windows 10 Enterprise, build 1909.
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: xxxxxxx
      POSTGRES_PASSWORD: xxxxxx
      POSTGRES_DB: xxxxxx
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - pgdata_v:/var/lib/postgresql/data/pgdata:Z
    ports:
      - "5433:5432"
  web:
    build: .
    command: /app/scripts/runserver.sh
    environment:
      ENV: local
      WERKZEUG_DEBUG_PIN: 'off'
      DB_USER: xxxxxx
      DB_PASSWORD: xxxxxx
      DB_NAME: xxxxxx
      DB_PORT: 5432
    volumes:
      - ./config/:/app/config/
      - ./v1/:/app/v1/
      - ./scripts/:/app/scripts/
    ports:
      - "8005:8000"
    depends_on:
      - db
    links:
      - db:db
volumes:
  pgdata_v:
    external: true
And here is my Dockerfile
FROM python:2.7
ENV PYTHONUNBUFFERED 1
ENV ENV local
RUN mkdir -p /app/scripts/
WORKDIR /app
ADD ./requirements /app/requirements/
RUN pip install -U setuptools
RUN pip install distribute==0.7.3
RUN pip install urllib3==1.21.1 --force-reinstall
RUN pip install -r /app/requirements/base.txt
RUN mkdir -p /app/static/
ADD ./manage.py /app/
ADD ./config/ /app/config/
ADD ./scripts/ /app/scripts/
ADD ./cert/ /app/cert/
ADD ./v1/ /app/v1/
RUN chmod 755 /app/scripts/runserver.sh
EXPOSE 8000
CMD ["/app/scripts/server.sh"]
While running it I get the error standard_init_linux.go:211: exec user process caused "operation not permitted".
I have looked at some answers on Stack Overflow and GitHub but could not fix it.

I tried many fixes, but none of them worked for me, so I moved to WSL (Windows Subsystem for Linux). I set up my environment there, cloned my repository, and it is working now. To use Docker on WSL I followed this post.
This might help others facing a similar issue.
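For anyone hitting the same error without moving to WSL: a frequent cause is the entrypoint script itself, e.g. a missing shebang, a missing executable bit, or CRLF line endings from editing the file on Windows. A minimal, hypothetical diagnostic (the script path is the one from the compose file above; this is a sketch of the usual checks, not a guaranteed fix):

```python
import os

# Hypothetical diagnostic for a container entrypoint script such as
# scripts/runserver.sh from the compose file above. Each of these problems
# can surface as an "exec user process" failure when the container starts.
def diagnose_script(path):
    problems = []
    with open(path, "rb") as f:
        first_line = f.readline()
    if not first_line.startswith(b"#!"):
        problems.append("missing shebang")
    if first_line.endswith(b"\r\n"):
        problems.append("CRLF line endings (edited on Windows?)")
    if not os.access(path, os.X_OK):
        problems.append("executable bit not set")
    return problems
```

Running it against the script on the host before building the image narrows things down; an empty list means none of these three common causes apply.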

Related

Specified python version in dockerfile not reflecting in the container

I have a Python/Django web app that I want to run in a Docker container. I am using MySQL for my database, so I need mysqlclient, which does not install via pip on Python 3.10 and above, so I am using Python 3.9. The following is my Dockerfile:
FROM python:3.9.13
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
COPY . /app
And the docker-compose.yaml file looks like this:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'python manage.py runserver 0.0.0.0:8000'
    ports:
      - 8000:8000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33066:3306
When running docker-compose up, everything gets created as expected except for the Python version: checking from the container's terminal tells me it is running 3.10.8. I tried other Python 3.9 images from https://hub.docker.com/_/python, but I still get the same result. Thus I cannot run my Django project there, because mysqlclient cannot be installed with pip on 3.10 and above.
The interesting thing is, I have the exact same dockerfile using a flask application, and that container works as I expect it to.
Is something missing here?
EDIT:
For clarification, the Dockerfile and docker-compose.yaml are located in the Django project's root directory, if that matters.
Using FROM python:3.9 instead of FROM python:3.9.13 seems to solve the issue. I am still not sure why it would go and take Python 3.10+; I guess there might have been some problems pulling the image and it defaulted to something else.
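Independent of which tag finally gets pulled, a guard at startup makes this kind of mismatch fail fast instead of surfacing later as a mysqlclient install or import error. A minimal sketch (the 3.9 ceiling is this question's constraint, not a general rule):

```python
import sys

def assert_python_at_most(major, minor):
    """Raise early if the interpreter is newer than the version the image
    was supposed to provide (here: 3.9, because mysqlclient in this setup
    would not install on 3.10+)."""
    if sys.version_info[:2] > (major, minor):
        raise RuntimeError(
            "expected Python <= %d.%d, got %d.%d"
            % (major, minor, *sys.version_info[:2])
        )
```

Calling this at the top of manage.py (or in a RUN step of the Dockerfile) would have turned the silent 3.10.8 surprise into an immediate, explicit error.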

django.db.utils.OperationalError: could not translate host name "db" to address: Temporary failure in name resolution. Django for professionals book

These are my Docker files. I am getting this error while changing my database engine from SQLite to PostgreSQL. I am doing this for the first time, following the book Django for Professionals.
docker-compose.yml
services:
  web:
    build: .
    command: python /code/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - 8000:8000
    depends_on:
      - db
  db:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
volumes:
  postgres_data:
Dockerfile
FROM python:3.9.6
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY Pipfile Pipfile.lock /code/
RUN pip install pipenv && pipenv install --system
# Copy project
COPY . /code/
As suggested by seoul kim above, I added ports: - "5432:5432" to the db service and it worked for me.
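Beyond exposing the port, a common companion fix for this setup is to wait for the database before starting Django, since depends_on only orders container startup, not readiness. A sketch of such a wait loop (the hostname db and port 5432 match the compose file above):

```python
import socket
import time

def wait_for_db(host="db", port=5432, timeout=30.0):
    """Block until host:port accepts TCP connections, or return False.
    Covers both name-resolution failures (the db container not up yet)
    and 'connection refused' (postgres still initializing)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(0.5)
    return False
```

Running something like this before manage.py migrate avoids the race where web starts faster than postgres.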

Unknown command bash or /bin/bash on Docker Ubuntu:20.04 with docker-compose version 3.5

I am setting up Docker with a Django application, but the bash command is not working with the ubuntu:20.04 image and compose file version 3.5.
My Docker version is Docker version 20.10.7, build f0df350, and my docker-compose version is Docker Compose version 2.0.0-beta.4.
Can anyone help me resolve the issue?
Below are my docker files:
Dockerfile:
FROM ubuntu:20.04
ENV PYTHONUNBUFFERED 1
RUN apt-get -y update
RUN apt-get install -y --no-install-recommends default-libmysqlclient-dev
RUN apt-get install -y gcc git libc-dev python3-dev python3-pip
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN mkdir /app
WORKDIR /app
ADD . /app
RUN pip install --upgrade pip && pip install -r requirements.txt
EXPOSE 8000
ENTRYPOINT [ "/app/manage.py" ]
docker-compose.yml
version: '3.5'
services:
  db:
    image: mysql:5.6
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: "mydb"
      MYSQL_ROOT_PASSWORD: "root"
    volumes:
      - mysql_data:/var/lib/mysql
    restart: always
    networks:
      default:
        aliases:
          - app-db
  django:
    build: .
    command: bash -c "while true; do runserver 0.0.0.0:8000; sleep 10; done"
    stdin_open: true
    tty: true
    volumes:
      - .:/app
    depends_on:
      - db
    ports:
      - "8000:8000"
    restart: always
    environment:
      MYSQL_DATABASE: "mydb"
      MYSQL_USER: "root"
      MYSQL_ROOT_PASSWORD: "root"
      MYSQL_HOST: "app-db"
volumes:
  mysql_data: {}
I am getting an error on the command when running docker-compose up --build:
bash -c "while true; do runserver 0.0.0.0:8000; sleep 10; done"
Error:
Unknown command: 'bash'
Thanks in advance
When you run the container, the ENTRYPOINT and CMD are combined together into a single command; it doesn't matter if the command part is overridden by Docker Compose, the container still runs only a single command this way. So in effect you're asking for the main container process to be
/app/manage.py bash -c "while true; do runserver 0.0.0.0:8000; sleep 10; done"
and the complaint is that the Django runner doesn't understand manage.py bash as a subcommand.
In your Dockerfile itself, you probably want the default command to be to launch the server. Having ENTRYPOINT as an arbitrary "half of the command" tends to be a little more confusing, and leads to needing to override that too; it's probably better to just put this as the standard container CMD.
# No ENTRYPOINT, but
CMD ["/app/manage.py", "runserver", "0.0.0.0:8000"]
You don't need to put the restart loop into the container command since Docker already allows you to specify a restart policy for containers. You should be able to trim the docker-compose.yml section down to:
  django:
    build: .
    # command: is built into the image
    # don't usually need stdin_open: or tty:
    # don't overwrite the image code with volumes:
    depends_on:
      - db
    ports:
      - "8000:8000"
    restart: always # replaces the "while true ... done" shell loop
    environment: *as-in-the-question
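The concatenation rule described in the answer can be sketched in a few lines: Docker simply joins the ENTRYPOINT list and the CMD (or compose command:) list into one argv:

```python
# How Docker builds the container's single command (exec form):
# argv = ENTRYPOINT + (command: from compose, which overrides CMD).
entrypoint = ["/app/manage.py"]
command = ["bash", "-c", "while true; do runserver 0.0.0.0:8000; sleep 10; done"]

argv = entrypoint + command
# argv[0] is still manage.py, so Django sees "bash" as a manage.py
# subcommand, which is exactly the "Unknown command: 'bash'" error.
```

This is why overriding command: in docker-compose.yml is not enough when an ENTRYPOINT is set; the entrypoint half stays in front of whatever you supply.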

package in requirements.txt, but not seen in docker

I have a valid requirements.txt file, but Docker doesn't install one of the packages listed in it.
Docker version 18.09.2
python 3.7.3
requirements.txt
django==2.2.2
celery==4.2.1
selenium==3.141.0
BeautifulSoup4==4.7.1
redis==3.2.0
Dockerfile
FROM python:3.7
ENV PYTHONUNBUFFERED 1
ENV DJANGO_ENV dev
COPY ./requirements.txt /code/requirements.txt
RUN pip3 install --upgrade pip
RUN pip3 install -r /code/requirements.txt
COPY . /code/
WORKDIR /code/
EXPOSE 8000
docker_compose.yml
version: '3'
services:
  db:
    image: postgres:9.6.5
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  redis:
    image: "redis:alpine"
  web:
    build: .
    command: bash -c "python /code/manage.py migrate --noinput && python /code/manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
      - redis
  celery:
    build: .
    command: celery -A google_scraper worker -l info
    volumes:
      - .:/code
    depends_on:
      - db
      - redis
volumes:
  postgres_data:
The actual error when I try to run celery:
ImportError: Missing redis library (pip install redis)
When I run bash in the container and install it manually with pip3 install redis, it works fine, but that still doesn't solve the build problem; I have no idea where I made the mistake.
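One way to catch this class of problem at build time, rather than when the worker starts, is to check the installed distributions against requirements.txt right after pip install. A hedged sketch using only the standard library (it handles only exact `name==version` pins; real requirements parsing has many more cases):

```python
from importlib.metadata import PackageNotFoundError, version

def missing_packages(requirement_lines):
    """Return the names from 'name==ver' lines that are not installed.
    Deliberately naive: ignores extras, environment markers, and
    comparison operators other than '=='."""
    missing = []
    for line in requirement_lines:
        name = line.split("==")[0].strip()
        if not name or name.startswith("#"):
            continue
        try:
            version(name)
        except PackageNotFoundError:
            missing.append(name)
    return missing
```

Run as an extra RUN step in the Dockerfile, a non-empty result could fail the image build, surfacing a dropped dependency like redis immediately instead of at worker startup.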
I wonder how Docker did not complain about a missing /code directory. Are you sure Docker successfully built the image? If it did, please explain to me how the COPY ./requirements.txt /code/requirements.txt worked. I checked, and the python:3.7 image does not have that directory in its root file-system...

Docker compose errors using volumes

Here's the deal:
I want to create an image based on python:latest, for Django development.
I want to create the Django project INSIDE THE CONTAINER and make it reflect on a host folder, via docker volumes.
I want to use the python interpreter from the container for development.
This way, I can have only my Dockerfile, docker-compose.yml and requirements.txt on my project folder, not depending on Python, virtualenvs or anything like that on my host.
Here's my Dockerfile:
FROM python:latest
ARG requirements=requirements/production.txt
COPY ./app /app
WORKDIR /app
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r $requirements && \
    django-admin startproject myproject .
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
And here's my docker-compose.yml:
version: '3'
services:
  web:
    build:
      context: .
      args:
        requirements: requirements/development.txt
    networks:
      - main
    depends_on:
      - postgres
    environment:
      - PYTHONUNBUFFERED=1
    ports:
      - "8000:8000"
    volumes:
      - "./app:/app:rw"
  postgres:
    image: postgres:latest
    networks:
      - main
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=123
    volumes:
      - ./data:/var/lib/postgresql/data
networks:
  main:
The main issue is the volumes entry in web. If I build the image via docker build -t somename:sometag . the build works fine. If I run docker run -it my_image bash, it shows me all the files created inside /app.
But if I try docker-compose up, the web part fails, saying it could not find manage.py, and exits with code 2. Only Postgres is running after that.
So, finally, my questions are:
Is this kind of workflow possible? Is it the best option, since it does not depend on Python on the host?
Thanks a lot.
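What the question describes is standard bind-mount behavior: the empty ./app on the host is mounted over /app in the container, hiding everything django-admin startproject created at build time. The effect can be modeled with two plain dicts standing in for the image layer and the host directory:

```python
# Toy model of bind-mount shadowing: the image's /app (filled at build
# time) versus the host's ./app (empty, mounted over it by compose).
image_app = {"manage.py": "...", "myproject/settings.py": "..."}
host_app = {}  # nothing on the host yet

def container_view(image_dir, bind_mount=None):
    """With a bind mount, the container sees the host directory as-is;
    the image's files at that path are hidden, not merged."""
    return bind_mount if bind_mount is not None else image_dir

no_mount = container_view(image_app)              # docker run, no volume
with_mount = container_view(image_app, host_app)  # docker-compose up
```

This matches the symptoms above: docker run (no mount) shows the generated files, while docker-compose up (with the mount) sees only the empty host folder, so manage.py is not found.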
