I am trying to follow the Flask/React tutorial here, on a plain Windows machine.
On Windows 10, without considering Docker, I have the tutorial working.
On Windows 10 under Docker (Ubuntu-based containers and docker-compose), I do not:
The React server works under Docker.
The Flask server won't successfully build.
The Dockerfile for the Flask server is:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y software-properties-common
RUN add-apt-repository universe
RUN apt-get update && apt-get install -y python3-pip yarn
RUN pip3 install flask
#RUN pip3 install venv
RUN mkdir -p /app
WORKDIR /app
COPY . /app
#RUN python3 -m venv venv
RUN cd api/venv/Scripts
RUN flask run --no-debugger
This fails at the very last line:
The command '/bin/sh -c flask run --no-debugger' returned a non-zero code: 1
Note that I find myself in the unenviable position of trying to teach myself Docker, venv, React, and Flask all at the same time. The venv commands are commented out because I'm not even sure venv makes sense inside a Docker container (but what would I know?), and also because the pip3 install venv command halts with a non-zero code 2.
Any advice is welcome.
There are two obvious issues in the Dockerfile you show.
Each RUN command runs in a clean environment starting from the last known state of the image. Settings like the current directory (and also environment variable values) are not preserved when a RUN command exits. So RUN cd ... starts the RUN command from the old directory, changes to the new directory, and then doesn't remember that; the following RUN command starts again from the old directory. You need the WORKDIR directive to actually change directories.
The RUN commands also run during the build phase. They won't publish network ports or have access to databases; in a multi-container Compose setup they can't connect to other containers. You probably want to run the Flask app as the main container CMD.
So you can update your Dockerfile to look like:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y software-properties-common
RUN add-apt-repository universe
RUN apt-get update && apt-get install -y python3-pip yarn
# WORKDIR creates the directory as well
WORKDIR /app
# requirements.txt includes "flask"
COPY requirements.txt ./
RUN pip3 install -r requirements.txt
COPY . ./
# Use WORKDIR, not `RUN cd ...`
WORKDIR /app/api/venv/Scripts
# Use CMD, not `RUN ...`, so the server starts when the container runs
CMD flask run --no-debugger
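As a quick smoke test, the build-and-run cycle might look like the following (the flask-api tag is a placeholder; note that flask run binds to 127.0.0.1 by default, so you usually need to override the host to reach it from outside the container):
docker build -t flask-api .
# FLASK_RUN_HOST is read by the `flask run` CLI; without it the server
# listens on 127.0.0.1 and is unreachable from the host machine
docker run -p 5000:5000 -e FLASK_RUN_HOST=0.0.0.0 flask-api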
It is in fact common to just not use a virtual environment in Docker; the Docker image is isolated from any other Python installation and so it's safe to use the "system" Python package tree. (I am a little suspicious of the venv directory in there, since virtual environments can't be transplanted into other setups very well.)
Note that I find myself in the unenviable position of trying to use/teach myself all of Docker, venv, react, and flask at the same time.
Put Docker away for another day. It's not necessary, especially during the development phase of your application. If you read through SO questions, there are a lot of people trying to contort Docker into acting just like a local development environment, when it's really not designed for that. There's nothing wrong with locally installing the tools you need to do your job, especially when they're very routine tools like Python and Node.
I believe Flask can't find your app when you run your container (and the build fails because docker build attempts to run the server with RUN). If you only want the container to run your app, use CMD in the Dockerfile; then the Flask app starts as the first thing when the image is run.
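A minimal sketch of that idea, assuming a hypothetical entry point at api/app.py (adjust FLASK_APP to your actual module):
# FLASK_APP tells the flask CLI where the app lives (this path is hypothetical)
ENV FLASK_APP=api/app.py
# CMD runs when the container starts, not at build time like RUN
CMD flask run --no-debugger --host=0.0.0.0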
Related
I have a web app built with a framework like FastAPI or Django, and my project uses Poetry to manage the dependencies.
I didn't find any topic similar to this.
The question is: should I install Poetry in my production Dockerfile and install the dependencies using Poetry, or should I export a requirements.txt and just use pip inside my Docker image?
Currently, I export requirements.txt to the project's root before deploying the app and just use it inside the Docker image.
My motivation is that I don't need the "complexity" of using Poetry inside a Dockerfile: the requirements.txt is already generated by Poetry, and using Poetry inside the image adds an extra step to the docker build that can slow the build down.
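For reference, the export step (built into Poetry 1.x; newer Poetry versions need the poetry-plugin-export plugin) is a one-liner:
poetry export -f requirements.txt --output requirements.txt --without-hashes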
However, I have seen many Dockerfiles that install Poetry, which makes me think I may be misusing the tool.
There's no need to use Poetry in production. To understand this, we should look back at why Poetry exists. There are basically two main reasons for Poetry:
To manage the Python venv for us. In the past people used a range of tools, from home-grown scripts to something like virtualenvwrapper, to automatically manage the virtual env.
To help us publish packages to PyPI.
Reason no. 2 is not really a concern for this question, so let's just look at reason no. 1. Why do we need something like Poetry in dev? Because dev environments differ between developers: my venv could be in /home/kamal/.venv, while John probably wants to be fancy and place his virtualenv in /home/john/.local/venv.
When writing notes on how to set up and run your project, how would you write them to cater for the difference between me and John? We would probably use a placeholder such as /path/to/your/venv. With Poetry, we don't have to worry about this; just write in the notes that you should run the command as:
poetry run python manage.py runserver ...
Poetry takes care of all the differences. But in production we don't have this problem: our app will be in a single place, say /app. When writing notes on how to run a command in production, we can just write:
/app/.venv/bin/myapp manage collectstatic ...
Below is a sample Dockerfile we use to deploy our app with Docker:
FROM python:3.10-buster as py-build
# [Optional] Uncomment this section to install additional OS packages.
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get -y install --no-install-recommends netcat util-linux \
vim bash-completion yamllint postgresql-client
RUN curl -sSL https://install.python-poetry.org | POETRY_HOME=/opt/poetry python3 -
COPY . /app
WORKDIR /app
ENV PATH=/opt/poetry/bin:$PATH
RUN poetry config virtualenvs.in-project true && poetry install
FROM node:14.20.0 as js-build
COPY . /app
WORKDIR /app
RUN npm install && npm run production
FROM python:3.10-slim-buster
EXPOSE 8000
COPY --from=py-build /app /app
COPY --from=js-build /app/static /app/static
WORKDIR /app
CMD /app/.venv/bin/run
We use a multi-stage build: in the build stage we still use Poetry to install all the dependencies, but in the final stage we just copy /app, which also includes the .venv virtualenv folder.
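For completeness, building and running this image is the usual two commands (the myapp tag is a placeholder; port 8000 matches the EXPOSE above):
docker build -t myapp .
docker run -p 8000:8000 myapp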
I'm working on a Python project and I get this problem on an Ubuntu server, while it works on my local Windows machine. The build stops at the second step, when trying to run the mkdir instruction. It also seems that I can't run typical Ubuntu commands (apt-get clean, apt-get update).
Dockerfile
FROM python:3
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install --upgrade pip==20.0.2 && pip install -r requirements.txt
COPY . /code/
Output error
OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mountpoint for devices not found\"": unknown
Are you able to run the Docker hello-world image? If not, this may indicate a problem with your installation/configuration
$ docker run hello-world
More information about post-installation steps can be found here. Otherwise, the first option is to try restarting Docker:
$ sudo systemctl restart docker
The Docker daemon must run with root privileges in the background; I have experienced issues before where, on a newly installed machine, the updated group permissions for the daemon had not been fully applied. Restarting the daemon, or logging out and in again, might fix this.
Furthermore, when you declare a WORKDIR inside a Dockerfile, that path is automatically created if it does not already exist. Once you have set your WORKDIR, all your paths can and should be relative to it where possible. Knowing this, we can simplify the Dockerfile:
FROM python:3
WORKDIR /code
COPY requirements.txt .
RUN pip install --upgrade pip==20.0.2 && pip install -r requirements.txt
COPY . .
That may be enough to solve your issue. In my experience the Docker build tracebacks can be rather vague at times, but it sounds like that particular error could be stemming from a failed attempt to create a directory, either from a permission issue on the host machine or a syntax issue inside the container.
I solved this problem by (re)installing with apt, instead of snap:
sudo snap remove docker
sudo apt install docker.io
Test with (now working):
sudo docker run hello-world
I have written a Dockerfile for my Python application.
The requirements are:
Install and start the MySQL server.
Run the application in screen in detached mode.
Below is my Dockerfile:
FROM ubuntu:16.04
# Update OS
RUN apt-get update
RUN apt-get -y upgrade
# Install Python
RUN apt-get install -y python-dev python-pip screen npm vim net-tools
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install mysql-server python-mysqldb
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app
RUN pip install --no-cache-dir -r requirements.txt
COPY src /usr/src/app/src
COPY ./src/nsd.ini /etc/
RUN pwd
RUN cd /usr/src/app
RUN service mysql start
RUN /bin/bash -c "chmod +x src/run_demo_app.sh && src/run_demo_app.sh"
Below is the content of bash script
$ cat src/run_demo_app.sh
screen -dm bash -c "sleep 10; python -m src.app";
The problem is that MySQL doesn't start; I need to start it manually from inside the container.
Also, the screen session is dead and the application does not start. Running the script manually works fine.
This is an understanding gap and nothing else. Note the following issues in your Dockerfile.
Never use service command
RUN service mysql start
Docker doesn't use an init system, so never use a service command inside a Dockerfile. A RUN line executes at build time, and anything it starts dies when that build step finishes.
Don't put everything in the same container
You should not put everything in the same container: MySQL should run in its own container and the Python app in its own.
Use official images
You don't need to reinvent the wheel; use official images as much as possible. You should be using the mysql and python images in your case.
Use docker-compose when multiple services are needed
Since your case requires multiple services, use docker-compose; see the sketch after this list.
No need to use screen in Docker
screen is used when you want your process to keep running even if your SSH session disconnects, so it is not needed in Docker. If you run your docker run or docker-compose up command with the additional -d flag, your container will automatically be launched in the background.
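A minimal docker-compose.yml sketch of that split (the image tags, credentials, and build context are illustrative assumptions, not taken from the question):
version: "3"
services:
  db:
    image: mysql:5.7                 # official MySQL image, in its own container
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
  app:
    build: .                         # your Python app's own Dockerfile
    depends_on:
      - db
Then docker-compose up -d starts both containers in the background, which replaces both the service mysql start line and the screen session.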
I have pulled the Jenkins image from Docker Hub like this:
docker pull jenkins
The container runs and I can access the Jenkins UI at:
http://localhost:8080
My question is:
If I want to create a Jenkins job that pulls from a GitHub repo and runs some Python tests from one of the repo's test files, how can I install extra packages such as virtualenvwrapper, pip, pytest, nose, and selenium?
It appears that the Docker container does not share any of the local host's file system.
How can I install such packages in this running container?
Thanks
You will need to install all your dependencies at image build time.
You can make your own Dockerfile based on the jenkins library image and put your custom packages in there. Your Dockerfile can look like:
FROM jenkins:latest
MAINTAINER Becks
RUN apt-get update && apt-get install -y {space-delimited list of packages}
Then, you can do something like...
docker build -t jenkins-docker --file Dockerfile .
docker run -it -d --name=jenkins-docker jenkins-docker
I might not have written all the syntax correctly, but this is basically what you need to do. If you want the run step to spin up Jenkins, follow along with what they are doing in the existing Dockerfile here and add the relevant RUN steps to your own Dockerfile.
I came across this page, which approaches a similar problem, although it also mounts the Docker socket inside another container, to kind of connect one container to another. Given that it's an external link, here's the relevant Dockerfile from there:
FROM jenkins:1.596
USER root
RUN apt-get update \
&& apt-get install -y sudo \
&& rm -rf /var/lib/apt/lists/*
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
USER jenkins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
And this is how you can spin it up.
docker build -t myjenk .
...
Successfully built 471fc0d22bff
$ docker run -d -v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/usr/bin/docker -p 8080:8080 myjenk
I strongly suggest going through that post. It's pretty awesome.
I'm currently building a docker image and running the container to run some tests in it for a Python application I'm working on. Currently the Dockerfile copies the files over from the host machine, sets the working directory to those copied files, runs a sudo apt-get and installs pip, and finally runs the tests from setup.py. The Dockerfile can be seen below.
FROM ubuntu
ADD . /home/dev/ProjectName
WORKDIR /home/dev/ProjectName
RUN apt-get update && \
apt-get install -y python3-pip && \
python3 setup.py test
I was curious if there were a more conventional way to avoid having to run the apt-get and apt-get install pip every time I'd like to run a test. The main idea I had was to build an image with pip already on it, and then build this image from that one.
Docker builds using cached layers when it can, but adding files you have changed invalidates the cache for all subsequent instructions. Put the apt-get commands first, before the ADD, and they will only be run the first time you build (or when the base image changes). See this blog for more info.
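A reordered sketch of the Dockerfile above (same steps, just moved so the apt-get layer can be cached):
FROM ubuntu
# System packages first: this layer is reused from cache until the
# apt-get line itself changes
RUN apt-get update && \
    apt-get install -y python3-pip
# Copy the source last, so code changes only invalidate the layers below
ADD . /home/dev/ProjectName
WORKDIR /home/dev/ProjectName
RUN python3 setup.py test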