Can't build Dockerfile on Ubuntu Server - python

I'm working on a Python project and I hit this problem on the Ubuntu server, although everything works on my local Windows machine. The build stops at the second step, when it tries to run the mkdir instruction. It also seems that I can't run the typical Ubuntu commands (apt-get clean, apt-get update).
Dockerfile
FROM python:3
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install --upgrade pip==20.0.2 && pip install -r requirements.txt
COPY . /code/
Error output:
OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mountpoint for devices not found\"": unknown

Are you able to run the Docker hello-world image? If not, this may indicate a problem with your installation/configuration
$ docker run hello-world
More information about post-installation steps can be found here. Otherwise, the first option is to try restarting Docker:
$ sudo systemctl restart docker
The Docker daemon must run with root privileges in the background. I have experienced issues before where, on a newly installed machine, the updated group permissions for the daemon had not been fully applied; restarting the daemon, or logging out and back in, might fix this.
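If the group permissions are the issue, the standard post-installation steps are roughly as follows (the docker group may already exist on your machine):
sudo groupadd docker            # may report that the group already exists
sudo usermod -aG docker $USER   # add your user to the docker group
newgrp docker                   # apply the new group in the current shell, or log out & in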
Furthermore, when you declare a WORKDIR inside a Dockerfile, that path will automatically be created if it does not already exist. Once you have set your WORKDIR, all your paths can and should be relative to it where possible. Knowing this, we can simplify the Dockerfile:
FROM python:3
WORKDIR /code
COPY requirements.txt .
RUN pip install --upgrade pip==20.0.2 && pip install -r requirements.txt
COPY . .
That may be enough to solve your issue. In my experience Docker build tracebacks can be rather vague at times, but it sounds like that particular error stems from a failed attempt to create a directory, caused either by a permission issue on the host machine or a syntax issue inside the container.

I solved this problem by (re)installing with apt, instead of snap:
sudo snap remove docker
sudo apt install docker.io
Test with (now working):
sudo docker run hello-world
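To double-check that the daemon is healthy and the original cgroup error is gone, you can also inspect the daemon's configuration (output varies by version):
sudo systemctl status docker   # confirm the daemon is running
docker info | grep -i cgroup   # show which cgroup driver the daemon is using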

Related

how to successfully run docker image as container

Below is my Dockerfile:
FROM python:3.9.0
ARG WORK_DIR=/opt/quarter_1
RUN apt-get update && apt-get install cron -y && apt-get install -y default-jre
# Install python libraries
COPY requirements.txt /tmp/requirements.txt
RUN pip install --upgrade pip && pip install -r /tmp/requirements.txt
WORKDIR $WORK_DIR
EXPOSE 8888
VOLUME /home/data/quarter_1/
# Copy etl code
# copy code on container under your workdir "/opt/quarter_1"
COPY . .
I connected to the server and did the build with docker build -t my-python-app .
When I tried to run a container from the built image, I got nothing and was not able to do it.
docker run -p 8888:8888 -v /home/data/quarter_1/:/opt/quarter_1 image_id
The workdir here is /opt/quarter_1.
Update based on comments
If I understand everything you've posted correctly, my suggestion is to use a base Jupyter Docker image, modify it to add your pip requirements, and then add your files to the work path. I've tested the following:
Start with a Dockerfile like the one below:
FROM jupyter/base-notebook:python-3.9.6
COPY requirements.txt /tmp/requirements.txt
RUN pip install --upgrade pip && pip install -r /tmp/requirements.txt
COPY ./quarter_1 /home/jovyan/quarter_1
The above assumes you are running the build from the folder containing the Dockerfile, requirements.txt, and the quarter_1 folder with your build files.
Note that /home/jovyan is the default working folder in this image.
Build the image
docker build -t biwia-jupyter:3.9.6 .
Start the container with port 8888 published, e.g.:
docker run -p 8888:8888 biwia-jupyter:3.9.6
Connect to the container to get the access token. There are a few ways to do this, for example:
docker exec -it CONTAINER_NAME bash
jupyter notebook list
Copy the token from the URL and connect using your server IP and port. You should be able to paste the token there, and afterwards access the folder you copied into the build, as I did below.
Jupyter screenshot
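Alternatively, the Jupyter images print the tokenized URL to the container logs on startup, so you can also read the token without exec-ing in (container name as reported by docker ps):
docker logs CONTAINER_NAME   # the startup output includes a URL containing ?token=...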
If you are deploying the image to different hosts, this is probably the best way to do it, using COPY/ADD etc.; otherwise, look at using Docker volumes, which give you access to a folder (for example quarter_1) from the host, so you don't constantly have to rebuild during development.
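For example, a run command mounting the quarter_1 folder from the host into the image's default work folder might look like this (paths assume the layout described above):
docker run -p 8888:8888 -v "$(pwd)/quarter_1:/home/jovyan/quarter_1" biwia-jupyter:3.9.6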
Second edit for Python 3.9.0 request
Using the method above, 3.9.0 is not immediately available from Docker Hub. I doubt you'll have many compatibility issues between 3.9.0 and 3.9.6, but we'll build it anyway. We can download the Dockerfile folder from GitHub, update a build argument, create our own variant with 3.9.0, and proceed as above.
Assuming you have git. Otherwise download the repo manually.
Download the Jupyter docker-stacks repo:
git clone https://github.com/jupyter/docker-stacks
Change into the base-notebook directory of the cloned repo:
cd ./base-notebook
Build the image with Python 3.9.0 instead:
docker build --build-arg PYTHON_VERSION=3.9.0 -t jupyter-base-notebook:3.9.0 .
Create the version with your copied folders and the 3.9.0 base from the steps above, replacing the first line of the earlier Dockerfile with:
FROM jupyter-base-notebook:3.9.0
I've tested this and it works, running Python 3.9.0 without issue.
There are lots of ways to build Jupyter images; this is just one method. Check out Docker Hub for Jupyter to see their variants.

Flask, React and Docker: Non-Zero Codes

I am trying to follow the Flask/React tutorial here, on a plain Windows machine.
On Windows 10, without considering Docker, I have the tutorial working.
On Windows 10 under a Docker setup (Ubuntu-based containers and docker-compose), I do not:
The React server works under the docker.
The Flask server won't successfully build.
The Dockerfile for the Flask server is:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y software-properties-common
RUN add-apt-repository universe
RUN apt-get update && apt-get install -y python3-pip yarn
RUN pip3 install flask
#RUN pip3 install venv
RUN mkdir -p /app
WORKDIR /app
COPY . /app
#RUN python3 -m venv venv
RUN cd api/venv/Scripts
RUN flask run --no-debugger
This fails at the very last line:
The command '/bin/sh -c flask run --no-debugger' returned a non-zero code: 1
Note that I find myself in the unenviable position of trying to use/teach myself all of Docker, venv, react, and flask at the same time. The venv commands are commented out because I'm not even sure venv makes sense in a Docker container (but what would I know?), and also because the pip3 install venv command halts with a non-zero code: 2.
Any advice is welcome.
There are two obvious issues in the Dockerfile you show.
Each RUN command runs in a clean environment starting from the last known state of the image. Settings like the current directory (and also environment variable values) are not preserved when a RUN command exits. So RUN cd ... starts the RUN command from the old directory, changes to the new directory, and then doesn't remember that; the following RUN command starts again from the old directory. You need the WORKDIR directive to actually change directories.
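A quick illustration of that behavior (hypothetical lines, purely to show the effect):
RUN cd /tmp   # changes directory only for the duration of this RUN
RUN pwd       # prints /, because the previous cd was not preserved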
The RUN commands also run during the build phase. They won't publish network ports or have access to databases; in a multi-container Compose setup they can't connect to other containers. You probably want to run the Flask app as the main container CMD.
So you can update your Dockerfile to look like:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y software-properties-common
RUN add-apt-repository universe
RUN apt-get update && apt-get install -y python3-pip yarn
# WORKDIR creates the directory as well
WORKDIR /app
# requirements.txt should include "flask"
COPY requirements.txt ./
RUN pip3 install -r requirements.txt
COPY . ./
# Use WORKDIR, not `RUN cd ...`
WORKDIR /app/api/venv/Scripts
# Use CMD, not `RUN ...`
CMD flask run --no-debugger
It is in fact common to just not use a virtual environment in Docker; the Docker image is isolated from any other Python installation and so it's safe to use the "system" Python package tree. (I am a little suspicious of the venv directory in there, since virtual environments can't be transplanted into other setups very well.)
Note that I find myself in the unenviable position of trying to use/teach myself all of Docker, venv, react, and flask at the same time.
Put Docker away for another day. It's not necessary, especially during the development phase of your application. If you read through SO questions there are a lot of questions trying to contort Docker into acting just like a local development environment, where it's really not designed for it. There's nothing wrong with locally installing the tools you need to do your job, especially when they're very routine tools like Python and Node.
I believe that Flask can't find your app when you run your container (and especially when the docker build attempts to run it). If you want to use the container only for the purpose of running your app, use CMD in the Dockerfile; then, when running the image, it will start your Flask app first thing.
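For example, a minimal sketch, assuming your Flask entry module is app.py and Flask is installed in the image:
ENV FLASK_APP=app.py
CMD ["flask", "run", "--host=0.0.0.0", "--no-debugger"]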

CI/CD tests involving pyspark - JAVA_HOME is not set

I am working on a project which uses pyspark, and would like to set up automated tests.
Here's what my .gitlab-ci.yml file looks like:
image: "myimage:latest"
stages:
- Tests
pytest:
stage: Tests
script:
- pytest tests/.
I built the docker image myimage using a Dockerfile such as the following (see this excellent answer):
FROM python:3.7
RUN python --version
# Create app directory
WORKDIR /app
# copy requirements.txt
COPY local-src/requirements.txt ./
# Install app dependencies
RUN pip install -r requirements.txt
# Bundle app source
COPY src /app
However, when I run this, the gitlab CI job errors with the following:
/usr/local/lib/python3.7/site-packages/pyspark/java_gateway.py:95: in launch_gateway
raise Exception("Java gateway process exited before sending the driver its port number")
E Exception: Java gateway process exited before sending the driver its port number
------------------------------- Captured stderr --------------------------------
JAVA_HOME is not set
I understand that pyspark requires me to have JAVA8 or higher installed on my computer. I have this set up alright locally, but...what about during the CI process? How can I install Java so it works?
I have tried adding
RUN sudo add-apt-repository ppa:webupd8team/java
RUN sudo apt-get update
RUN apt-get install oracle-java8-installer
to the Dockerfile which created the image, but got the error
/bin/sh: 1: sudo: not found
How can I modify the Dockerfile so that tests using pyspark will work?
Solution that worked for me: add
RUN apt-get update
RUN apt-get install default-jdk -y
before
RUN pip install -r requirements.txt
It then all worked as expected with no further modifications needed!
EDIT
To make this work, I've had to update my base image to python:3.7-stretch
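Putting it all together, the Dockerfile from the question would look roughly like this (a sketch, assuming the same source layout):
FROM python:3.7-stretch
# Create app directory
WORKDIR /app
# Install Java before the Python dependencies so pyspark can find a JVM
RUN apt-get update && apt-get install -y default-jdk
# Install app dependencies
COPY local-src/requirements.txt ./
RUN pip install -r requirements.txt
# Bundle app source
COPY src /app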
Write this in your .bash_profile:
export JAVA_HOME=/path/to/your/jdk   # e.g. /Library/Java/JavaVirtualMachines/<yourjdk>/Contents/Home

Dockerize existing Django project

I can't wrap my head around how to dockerize existing Django app.
I've read the official manual by Docker explaining how to create a Django project during the creation of a Docker image, but what I need is to dockerize an existing project using the same method.
The main purpose of this approach is that I have no need to build Docker images locally all the time. Instead, I want to push my code to a remote repository that has a Docker Hub watcher attached to it, so that as soon as the code base is updated it is built automatically on the server.
For now my Dockerfile looks like:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install Django
RUN pip install djangorestframework
RUN pip install PyQRCode
ADD . /code/
Can anyone please explain how I should compose the Dockerfile, and whether I need to use docker-compose.yml (if yes: how?) to achieve the functionality I've described?
Solution for this question:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
RUN pip install *name of package*
RUN pip install *name of another package*
ADD . /code/
EXPOSE 8000
CMD python3 manage.py runserver 0.0.0.0:8000
OR
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
EXPOSE 8000
CMD python3 manage.py runserver 0.0.0.0:8000
requirements.txt should be a plain list of packages, for example:
Django==1.11
djangorestframework
pyqrcode
pypng
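To build and run either variant (the image tag is hypothetical):
docker build -t mydjangoapp .
docker run -p 8000:8000 mydjangoapp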
This question is too broad. What happens with the Dockerfile you've created?
You don't need docker compose unless you have multiple containers that need to interact.
Some general observations from your current Dockerfile:
It would be better to collapse the pip install commands into a single statement (see the sketch after this list). In Docker, each statement creates a file system layer, and the layers in between the pip install commands probably serve no useful purpose.
It's better to declare dependencies in setup.py or a requirements.txt file (pip install -r requirements.txt), with fixed version numbers (foopackage==0.0.1) to ensure a repeatable build.
I'd recommend packaging your Django app into a python package and installing it with pip (cd /code/; pip install .) rather than directly adding the code directory.
You're missing a statement (CMD or ENTRYPOINT) to execute the app. See https://docs.docker.com/engine/reference/builder/#cmd
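For instance, the three pip install commands from the question collapse into one statement and one layer (a sketch; ideally the pinned versions would live in requirements.txt instead):
RUN pip install Django==1.11 djangorestframework PyQRCode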
Warning: -onbuild images have been deprecated.
@AlexForbes raised very good points. But if you want a super simple Dockerfile for Django, you can probably just do:
FROM python:3-onbuild
RUN python manage.py collectstatic
CMD ["python", "manage.py"]
You then run your container with:
docker run myimagename runserver
The little -onbuild modifier does most of what you need. It creates /usr/src/app, sets it as the working directory, copies all your source code inside, and runs pip install -r requirements.txt (which you forgot to run). Finally we collect static files (which might not be required in your case if statics are hosted somewhere else), and set the default command to manage.py so everything is easy to run.
You would need docker-compose if you had to run other containers like Celery, Redis or any other background task or server not supplied by your environment.
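A minimal sketch of such a compose file, assuming a Redis service alongside the web app built from the Dockerfile above:
version: "3"
services:
  web:
    build: .
    ports:
      - "8000:8000"
  redis:
    image: redis:alpine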
I actually wrote an article about this at https://rehalcon.blogspot.mx/2018/03/dockerize-your-django-app-for-local.html
My case is very similar, but it adds a MySQL db service and environment variables for code secrets, as well as the use of docker-compose (needed on macOS). I also use the python:2.7-slim Docker parent image instead, to make the image much smaller (under 150MB).

Building Docker image for a python flask web app. Image does not appear or requirements.txt not found

I am trying to create a build of a web app I have made, using Docker, but I have had no success. I've tried to follow two different tutorials, but neither worked for me.
Following Tutorial 1:
The build seemed to complete without any problems, but I could not find the image file anywhere, and sudo docker ps -a returned nothing.
Following Tutorial 2:
I am now getting another error: the requirements file is not found. I looked up solutions to that here, but it seems I am doing the correct thing by adding it to the build with the ADD requirements.txt /webapp command. I checked that I spelled requirements right, haha. Now I do see it in sudo docker ps -a, but I don't see any image file, and presumably it would not work if I did, since it could not find the requirements.
I'm quite confused as to what is wrong and how I should properly build a Docker image. How do I get it to find the requirements file and then, upon completing the build command, actually have an image? Where is this image stored?
Below is the setup I have after following the second tutorial.
Dockerfile
FROM ubuntu:latest
#Update OS
RUN sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list
RUN apt-get update
RUN apt-get -y upgrade
# Install Python
RUN apt-get install -y python-dev python-pip
# Add requirements.txt
ADD requirements.txt /webapp
# Install uwsgi Python web server
RUN pip install uwsgi
# Install app requirements
RUN pip install -r requirements.txt
# Create app directory
ADD . /webapp
# Set the default directory for our environment
ENV HOME /webapp
WORKDIR /webapp
# Expose port 8000 for uwsgi
EXPOSE 8000
ENTRYPOINT ["uwsgi", "--http", "0.0.0.0:8000", "--module", "app:app", "--processes", "1", "--threads", "8"]
CMD ["app.py"]
Requirements
Flask==0.12.1
itsdangerous==0.24
Jinja2==2.8
MarkupSafe==0.23
Werkzeug==0.11.5
SQLite3==3.18.0
Directory Structure (if it matters)
app.py
image_data.db
README.txt
requirements.txt
Dockerfile
templates
- index.html
static/
- image.js
- main.css
img/
- camera.png
images/
- empty
Current output of ' sudo docker ps -a'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
663d4e096938 8f5126108354 "/bin/sh -c 'pip i..." 17 minutes ago Exited (1) 17 minutes ago admiring_agnesi
The requirements.txt should be in the same directory as your Dockerfile. I see from the Dockerfile that requirements.txt is added to /webapp, but the
RUN pip install -r requirements.txt
is trying to find it in the current directory, so you probably need to copy requirements.txt to the current directory, like:
ADD requirements.txt .
Let's see if that works; I did not test it.
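A variant that avoids the ambiguity entirely is to set the working directory before adding the file (equally untested, same caveat as above):
WORKDIR /webapp
ADD requirements.txt .
RUN pip install -r requirements.txt
ADD . .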
You can see the images with
docker images
and then run one with
docker run -t image_name
