I am starting with Docker and built an image with Jupyter and some Python libraries. The end user should be able to use Jupyter and access specific host data directories through the container (with read/write rights), but must be a non-root user. Here is my Dockerfile so far:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y \
python-pip
RUN pip install --upgrade pip && pip install jupyter \
pandas \
numpy
RUN useradd -r -g users A && \
mkdir /myhome && \
chown -R A:users /myhome
EXPOSE 8888
WORKDIR /myhome
CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0"]
I run this with:
docker run -it -p 8888:8888 -u="A" -v /some/host/files:/myhome
But then I get a Jupyter error that says OSError: [Errno 13] Permission denied: '/home/A'
Any help appreciated. Many thanks!
When you start your container with --entrypoint=bash, you will find that the home directory /home/A of your user has not been created. To create it, add the -m flag to the useradd command.
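With the flag added, that line of your Dockerfile becomes:
RUN useradd -r -m -g users A && \
    mkdir /myhome && \
    chown -R A:users /myhome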
Some more info: You might want to take a look at the docker-stacks projects (https://github.com/jupyter/docker-stacks/tree/master/base-notebook and derived images). That seems to match what you're trying to do and adds some other helpful stuff. E.g. when running a dockerized Jupyter, you need a "PID 1 reaper"; otherwise your exited notebook kernels turn into zombies (you can google for that :-)
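One easy option (assuming a reasonably recent Docker version) is the --init flag, which makes Docker run tini as PID 1 for you; the image name here is illustrative:
docker run --init -it -p 8888:8888 my-jupyter-image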
Also, when sharing host files with a non-root user inside the container, you will often need to set the UID of your container user to some specific value matching the host system, so the file system permissions line up properly. The docker-stacks containers support that too. Their Dockerfiles might at least serve as a boilerplate for running your own.
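A generic sketch of that idea, using a build argument for the UID (the NB_UID name is just illustrative, loosely mirroring what docker-stacks does):
ARG NB_UID=1000
RUN useradd -r -m -g users -u ${NB_UID} A
and then build with the host user's UID:
docker build --build-arg NB_UID=$(id -u) -t my-jupyter-image .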
I've installed and configured Docker (as per the documentation) and I am trying to build a Flask application using tiangolo/uwsgi-nginx-flask:python3.8. I've built a hello-world application and have tested it locally by running python manage.py; the application runs successfully. Link to full Code-File.
Dockerfile:
FROM tiangolo/uwsgi-nginx-flask:python3.8
ENV INSTALL_PATH /usr/src/helloworld
RUN mkdir -p $INSTALL_PATH
# install net-tools
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y \
net-tools \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# set working directory
WORKDIR $INSTALL_PATH
# setup flask environment
# install all requirements
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
# copy all files and folder to docker
COPY . .
# run the application in docker environment
CMD [ "python", "./manage.py" ]
I built the image with docker build --tag hello-world:test . and ran it successfully with docker run -d -p 5000:5000 hello-world:test.
However, I'm unable to open the application at localhost:5000 or 0.0.0.0:5000 or any other port. The application is running, as I can see from the CLI, but from the browser the page is not reachable.
Another question mentions checking the container's IP address:
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" hungry_engelbart
>> <no value>
Found another solution at this link, but docker-machine is currently deprecated.
I'm new to Docker. I have tried to run the same thing following this tutorial, but faced similar issues.
I was finally able to solve this. I had to configure a new inbound rule under Windows Firewall > Advanced Settings > Inbound Rules > New Inbound Rule. Create a rule that allows a range of local IP addresses, which in my case was 192.168.0.1 to 192.168.0.100. Finally, you need to run the application at 0.0.0.0, as pointed out by @tentative in the comments. :)
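If manage.py starts the Flask dev server itself (an assumption about the linked code), binding to all interfaces looks like this:
# hypothetical tail of manage.py: bind to all interfaces so the
# port published by `docker run -p 5000:5000` is reachable from the host
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)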
I am trying to follow the Flask/React tutorial here, on a plain Windows machine.
On Windows 10, without considering Docker, I have the tutorial working.
On Windows 10 under a docker system (ubuntu-based containers and docker-compose), I do not:
The React server works under the docker.
The Flask server won't successfully build.
The Dockerfile for the Flask server is:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y software-properties-common
RUN add-apt-repository universe
RUN apt-get update && apt-get install -y python3-pip yarn
RUN pip3 install flask
#RUN pip3 install venv
RUN mkdir -p /app
WORKDIR /app
COPY . /app
#RUN python3 -m venv venv
RUN cd api/venv/Scripts
RUN flask run --no-debugger
This fails at the very last line:
The command '/bin/sh -c flask run --no-debugger' returned a non-zero code: 1
Note that I find myself in the unenviable position of trying to use/teach myself Docker, venv, React, and Flask all at the same time. The venv commands are commented out because I'm not even sure venv makes sense in Docker (but what would I know?), and also because the pip3 install venv command halts with a non-zero code: 2.
Any advice is welcome.
There are two obvious issues in the Dockerfile you show.
Each RUN command runs in a clean environment starting from the last known state of the image. Settings like the current directory (and also environment variable values) are not preserved when a RUN command exits. So RUN cd ... starts the RUN command from the old directory, changes to the new directory, and then doesn't remember that; the following RUN command starts again from the old directory. You need the WORKDIR directive to actually change directories.
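A tiny illustration of the difference:
RUN cd /tmp    # the cd happens inside this one RUN's shell
RUN pwd        # a fresh shell: prints / again, not /tmp
# WORKDIR, by contrast, persists for all later instructions
WORKDIR /tmp
RUN pwd        # prints /tmp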
The RUN commands also run during the build phase. They won't publish network ports or have access to databases; in a multi-container Compose setup they can't connect to other containers. You probably want to run the Flask app as the main container CMD.
So you can update your Dockerfile to look like:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y software-properties-common
RUN add-apt-repository universe
RUN apt-get update && apt-get install -y python3-pip yarn
# WORKDIR creates the directory as well
WORKDIR /app
# requirements.txt should include "flask"
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . ./
# Use WORKDIR, not `RUN cd ...`
WORKDIR /app/api/venv/Scripts
# Use CMD, not `RUN ...`
CMD flask run --no-debugger
It is in fact common to just not use a virtual environment in Docker; the Docker image is isolated from any other Python installation and so it's safe to use the "system" Python package tree. (I am a little suspicious of the venv directory in there, since virtual environments can't be transplanted into other setups very well.)
Note that I find myself in the unenviable position of trying to use/teach myself Docker, venv, React, and Flask all at the same time.
Put Docker away for another day. It's not necessary, especially during the development phase of your application. If you read through SO questions, you'll see a lot of people trying to contort Docker into acting just like a local development environment, which it's really not designed for. There's nothing wrong with locally installing the tools you need to do your job, especially when they're very routine tools like Python and Node.
I believe Flask can't find your app when you run your container (especially as the docker build attempts to run it). If you only want to use Docker to run your app, use CMD in the Dockerfile; then, when you run the image, it will start your Flask app first thing.
I am using a CentOS base image and installing Python 3 with the following Dockerfile:
FROM centos:7
ENV container docker
ARG USER=dsadmin
ARG HOMEDIR=/home/${USER}
RUN yum clean all \
&& yum update -q -y -t \
&& yum install file -q -y
RUN useradd -s /bin/bash -d ${HOMEDIR} ${USER}
RUN export LC_ALL=en_US.UTF-8
# install Development Tools to get gcc
RUN yum groupinstall -y "Development Tools"
# install python development so that pip can compile packages
RUN yum -y install epel-release && yum clean all \
&& yum install -y python34-setuptools \
&& yum install -y python34-devel
# install pip
RUN easy_install-3.4 pip
# install virtualenv or virtualenvwrapper
RUN pip3 install virtualenv \
&& pip3 install virtualenvwrapper \
&& pip3 install pandas
# # install django
# RUN pip3 install django
USER ${USER}
WORKDIR ${HOMEDIR}
I build and tag the above as follows:
docker build . --label validation --tag validation
I then need to add a .tar.gz file to the home directory. This file contains all the Python scripts I maintain and will change frequently. If I add it to the Dockerfile above, Python is reinstalled every time I change the .gz file, which adds a lot of time to development. As a workaround, I tried creating a second Dockerfile that uses the above image as the base and then just adds the .tar.gz file on top:
FROM validation:latest
ARG USER=dsadmin
ARG HOMEDIR=/home/${USER}
ADD code/validation_utility.tar.gz ${HOMEDIR}/.
USER ${USER}
WORKDIR ${HOMEDIR}
After that, if I run the docker image and do an ls, all the files in the folder have an owner of games:
-rw-r--r-- 1 501 games 35785 Nov 2 21:24 Validation_utility.py
To fix the above, I added the following lines to the second Dockerfile:
ADD code/validation_utility.tar.gz ${HOMEDIR}/.
RUN chown -R ${USER}:${USER} ${HOMEDIR} \
&& chmod +x ${HOMEDIR}/Validation_utility.py
but I get the error:
chown: changing ownership of '/home/dsadmin/Validation_utility.py': Operation not permitted
The goal is to have two Dockerfiles. Users will run the first one to install CentOS and the Python dependencies. The second Dockerfile will install the custom Python scripts. If the scripts change, users will just rebuild from the second Dockerfile. Is that the right way to think about Docker? Thank you.
Is that the right way to think about docker?
This is the easy part of your question. Yes. You're thinking about the proper way to structure your Dockerfiles, reuse them, and keep your image builds efficient. Good job.
As for the error you're receiving, I'm less confident in answering why the ADD command is un-tarballing your tar.gz as the games user; I'm not nearly as familiar with CentOS. (A likely explanation: ADD preserves the numeric UID/GID recorded in the tarball, and GID 20 from a macOS host, the default staff group there, maps to games inside a CentOS container.) That's the start of the problem. dsadmin, as a regular non-privileged user, can't change ownership of files he doesn't own. Since this un-tarballed script is owned by games, the chown command fails.
I used your Dockerfiles and got the same issue on MacOS.
You can get around this by, well, not using ADD. Which is funny because local tarball extraction is the one use case where Docker thinks you should prefer ADD over COPY.
COPY code/validation_utility.tar.gz ${HOMEDIR}/.
RUN tar -xvf validation_utility.tar.gz
This properly extracts the tarball and, since dsadmin was the user at the time, the contents come out properly owned by dsadmin.
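Putting it together, the whole second Dockerfile with this workaround might look like the sketch below (note that USER moves above the extraction step so tar runs as dsadmin):
FROM validation:latest
ARG USER=dsadmin
ARG HOMEDIR=/home/${USER}
USER ${USER}
WORKDIR ${HOMEDIR}
COPY code/validation_utility.tar.gz ${HOMEDIR}/.
RUN tar -xvf validation_utility.tar.gz \
    && rm validation_utility.tar.gz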
(An uglier route might be to switch the USER to root to set permissions, then set it back to dsadmin. I think this is icky, but it's an option.)
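For completeness, that uglier route would be something like:
USER root
RUN chown -R ${USER}:${USER} ${HOMEDIR} \
    && chmod +x ${HOMEDIR}/Validation_utility.py
USER ${USER}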
I have written a Dockerfile for my Python application.
The requirements are:
Install & start the MySQL server.
Run the application in screen in detached mode.
Below is my Dockerfile:
FROM ubuntu:16.04
# Update OS
RUN apt-get update
RUN apt-get -y upgrade
# Install Python
RUN apt-get install -y python-dev python-pip screen npm vim net-tools
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install mysql-server python-mysqldb
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app
RUN pip install --no-cache-dir -r requirements.txt
COPY src /usr/src/app/src
COPY ./src/nsd.ini /etc/
RUN pwd
RUN cd /usr/src/app
RUN service mysql start
RUN /bin/bash -c "chmod +x src/run_demo_app.sh && src/run_demo_app.sh"
Below is the content of the bash script:
$ cat src/run_demo_app.sh
$ screen -dm bash -c "sleep 10; python -m src.app";
The problem is MySQL doesn't start; I need to start it manually from the container.
Also, the screen becomes dead and the application does not start. Running the script manually works fine.
So this is an understanding gap and nothing else. Note the issues below in your Dockerfile:
Never use the service command
RUN service mysql start
Docker doesn't use an init system, so never use a service command inside Docker. A RUN instruction only executes while the image is being built; a service started there will not be running when a container is later started from that image.
Don't put everything in the same container
You should not put everything in the same container: MySQL should run in its own container and Python in its own.
Use official images
You don't need to re-invent the wheel. Use official images as much as possible; in your case you should be using the mysql and python images.
Use docker-compose when multiple services are needed
In your case, since you require multiple services, use docker-compose.
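A minimal sketch of such a docker-compose.yml (the image tag, password, and service layout are illustrative; the module path comes from your script above):
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # illustrative only
  app:
    build: .
    command: python -m src.app
    depends_on:
      - db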
No need to use screen in Docker
Screen is used when you want your process to keep running even if your SSH session disconnects. That is not needed in Docker. If you run your docker run or docker-compose up command with an additional -d flag, your container will automatically be launched in the background.
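For example:
docker-compose up -d   # all services start in the background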
I have pulled the jenkins image from Docker Hub like this:
docker pull jenkins
The container runs and I can access the Jenkins UI at:
http://localhost:8080
My question is:
If I want to be able to create a Jenkins job that pulls from a GitHub repo, and I want to run some Python tests from one of the test files of that repo, how can I install extra packages such as virtualenvwrapper, pip, pytest, nose, selenium etc.?
It appears that the Docker container does not share any reference with the local host file system.
How can I install such packages in this running container?
Thanks
You will need to install all your dependencies at docker container build time.
You can make your own Dockerfile based off of the jenkins library image, and then put custom stuff in there. Your Dockerfile can look like:
FROM jenkins:latest
MAINTAINER Becks
RUN apt-get update && apt-get install -y {space-delimited list of packages}
Then, you can do something like...
docker build -t jenkins-docker --file Dockerfile .
docker run -it -d --name=jenkins-docker jenkins-docker
I might not have written all the syntax correctly, but this is basically what you need to do. If you want the run step to spin up Jenkins, follow along with what they are doing in the existing Dockerfile here and add the relevant RUN steps to your own Dockerfile.
I came across this page, which approaches a similar problem, although it also mounts the Docker socket inside another container, to kind of connect one container to another. Given that it's an external link, here's the relevant Dockerfile from there:
FROM jenkins:1.596
USER root
RUN apt-get update \
&& apt-get install -y sudo \
&& rm -rf /var/lib/apt/lists/*
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
USER jenkins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
And this is how you can spin it up.
docker build -t myjenk .
...
Successfully built 471fc0d22bff
$ docker run -d -v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/usr/bin/docker -p 8080:8080 myjenk
I strongly suggest going through that post. It's pretty awesome.