FROM ubuntu:14.04.2
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get -y update && apt-get upgrade -y
RUN apt-get install python build-essential python-dev python-pip python-setuptools -y
RUN apt-get install libxml2-dev libxslt1-dev python-dev -y
RUN apt-get install libpq-dev postgresql-common postgresql-client -y
RUN apt-get install openssl openssl-blacklist openssl-blacklist-extra -y
RUN apt-get install nginx -y
RUN pip install "pip>=7.0"
RUN pip install virtualenv uwsgi
ADD canonicaliser_api /home/ubuntu/canonicaliser_api
ADD config_local.py /home/ubuntu/canonicaliser_api/config/config_local.py
RUN virtualenv /home/ubuntu/canonicaliser_api/venv
RUN source /home/ubuntu/canonicaliser_api/venv/bin/activate && pip install -r /home/ubuntu/canonicaliser_api/requirements.txt
RUN export CFLAGS=-I/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/
RUN source /home/ubuntu/canonicaliser_api/venv/bin/activate && cd /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions/ && python setup.py build_ext --inplace
RUN cp /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions/canonicaliser/cython_extensions/*.so /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions
RUN rm -rf /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions/canonicaliser
RUN rm -r /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions/build
RUN mkdir /var/run/flask-uwsgi
RUN chown -R www-data:www-data /var/run/flask-uwsgi
RUN mkdir /var/log/flask-uwsgi
RUN touch /var/log/flask-uwsgi/dqs_canon.log
RUN chown -R www-data:www-data /var/log/flask-uwsgi
RUN mkdir /etc/flask-uwsgi
ADD configs/new-canon/flask-uwsgi/flask-uwsgi.conf /etc/init/
ADD configs/new-canon/flask-uwsgi/flask-uwsgi.ini /etc/flask-uwsgi/
EXPOSE 8888
CMD service flask-uwsgi restart
# RUN echo "daemon off;" >> /etc/nginx/nginx.conf
# CMD service nginx start
When I try to run this Docker image I get the error message:
flask-uwsgi: unrecognized service
So I ended up uncommenting the last two lines, so that nginx gets started and keeps the container alive. I then exec'ed into it to debug:
docker exec -it 20b2ff3a4cac bash
Now when I try to run the service, I get the same problem, and I can't find any missing step. Maybe services can't be started like this in Docker?
root@20b2ff3a4cac:/# service flask-uwsgi start
flask-uwsgi: unrecognized service
/etc/flask-uwsgi/flask-uwsgi.ini:
[uwsgi]
socket = /var/run/flask-uwsgi/flask-uwsgi.sock
home = /home/ubuntu/canonicaliser_api/venv
wsgi-file = flask_uwsgi.py
callable = app
master = true
; www-data uid/gid
uid = 33
gid = 33
http-socket = :8888
die-on-term = true
processes = 4
threads = 2
logger = file:/var/log/flask-uwsgi/flask-uwsgi.log
/etc/init/flask-uwsgi.conf:
start on [2345]
stop on [06]
script
cd /home/ubuntu/canonicaliser_api
exec uwsgi --ini /etc/flask-uwsgi/flask-uwsgi.ini
end script
While exec'ed into the container, I could run uwsgi directly like this, and it works:
exec uwsgi --ini /etc/flask-uwsgi/flask-uwsgi.ini
So it seems services are not supported in Docker, and I have to run uwsgi directly in the Docker image like this:
RUN exec uwsgi --ini /etc/flask-uwsgi/flask-uwsgi.ini
Or am I missing something?
Yeah, don't use services.
You can't do this though:
RUN exec uwsgi --ini /etc/flask-uwsgi/flask-uwsgi.ini
That line will complete, and be committed to an image. But the process will no longer be running in subsequent instructions or when the container is started.
Instead, you can do it in an ENTRYPOINT or CMD command, as they are executed when the container starts. This should work:
CMD uwsgi --ini /etc/flask-uwsgi/flask-uwsgi.ini
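If you prefer, the exec form does the same thing without wrapping the command in a shell, so uwsgi receives signals such as SIGTERM directly:
CMD ["uwsgi", "--ini", "/etc/flask-uwsgi/flask-uwsgi.ini"]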
Some other points:
You might find things easier if you use one of the official Python images.
I would just get rid of virtualenv; I don't see the benefit of virtualenv in an isolated container.
Running RUN rm -rf ... doesn't save any space; those files were already committed to a previous layer. You need to delete files in the same instruction they are added to avoid using space in the image (see the sketch after this list).
It might make sense to do USER www-data rather than chowning files.
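Putting those points together, a minimal sketch of what the Dockerfile could look like (paths and file names taken from the question; the Cython build step is omitted for brevity, so treat this as an illustration rather than a drop-in replacement):
FROM python:2.7
# system libraries needed by the requirements; clean the apt lists in the same RUN
RUN apt-get update \
    && apt-get install -y libxml2-dev libxslt1-dev libpq-dev \
    && rm -rf /var/lib/apt/lists/*
RUN pip install uwsgi
COPY canonicaliser_api /home/ubuntu/canonicaliser_api
COPY config_local.py /home/ubuntu/canonicaliser_api/config/config_local.py
# no virtualenv: the container is already isolated
RUN pip install -r /home/ubuntu/canonicaliser_api/requirements.txt
COPY configs/new-canon/flask-uwsgi/flask-uwsgi.ini /etc/flask-uwsgi/
RUN mkdir -p /var/run/flask-uwsgi /var/log/flask-uwsgi \
    && chown -R www-data:www-data /var/run/flask-uwsgi /var/log/flask-uwsgi
# run as www-data instead of chowning everything; the uid/gid lines in the
# ini then become unnecessary
USER www-data
EXPOSE 8888
CMD ["uwsgi", "--ini", "/etc/flask-uwsgi/flask-uwsgi.ini"]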
Related
My simple Flask app is not automatically starting when I run it in Docker, even though I have added the CMD instruction correctly. I am able to run Flask manually with python3 /app/app.py from the container shell, so there is no issue with the code or the command.
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y gcc libffi-dev libssl-dev
RUN apt-get install -y libxml2-dev xmlsec1
RUN apt-get install -y python3-pip python3-dev
RUN pip3 --no-cache-dir install --upgrade pip
RUN rm -rf /var/lib/apt/lists/*
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN pip3 install -r requirements.txt
EXPOSE 5000
CMD ["/usr/bin/python3", "/app/app.py"]
I run the Docker container as:
docker run -it okta /bin/bash
When I log in to the container and run ps -eaf in its shell, I do not see the Flask process running. So my question is: why did the line below not work in the Dockerfile?
CMD ["/usr/bin/python3", "/app/app.py"]
Running your docker container and passing the command /bin/bash is overriding the CMD ["/usr/bin/python3", "/app/app.py"] in your Dockerfile.
CMD vs ENTRYPOINT Explained Here
Try changing the last line of your Dockerfile to
ENTRYPOINT ["/usr/bin/python3", "/app/app.py"]
Don't forget to rebuild your image after making the change.
Or... you can omit the /bin/bash from the end of your docker run command and see if your app.py starts up successfully.
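For example, using the image name from the question (the -p mapping is an assumption, added so the Flask port is reachable from the host):
docker run -it -p 5000:5000 okta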
A Docker image I am creating and sending to a client is somehow deleting its source code 24-48 hours after it is started. We can see this by exec'ing onto the running container and taking a look around.
The service is a simple Flask app. The service itself doesn't go down, since the application doesn't hit an error, but the static files it should be serving go missing (along with everything else copied in), so we start getting 404s. I can't think of anything that would explain this, especially considering that it takes time to occur.
FROM python:3.8-slim-buster
ARG USERNAME=calibrator
ARG USER_UID=1000
ARG USER_GID=$USER_UID
RUN apt-get update \
&& groupadd --gid $USER_GID $USERNAME \
&& useradd -s /bin/bash --uid $USER_UID --gid $USER_GID -m $USERNAME \
&& apt-get install -y sudo \
&& echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \
&& chmod 0440 /etc/sudoers.d/$USERNAME \
# Install open-cv packaged
&& apt-get install -y libsm6 libxext6 libxrender-dev libgtk2.0-dev libgl1-mesa-glx \
#
## Git
&& sudo apt-get install -y git-lfs \
#
## Bespoke setup
&& apt-get -y install unixodbc-dev \
#
## PostgreSQL
&& apt-get -y install libpq-dev
ENV PATH="/home/${USERNAME}/.local/bin:${PATH}"
ARG git_user
ARG git_password
RUN pip install --upgrade pip
RUN python3 -m pip install --user git+https://${git_user}:${git_password}@bitbucket.org/****
WORKDIR /home/calibrator
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY app app
ENV FLASK_APP=app/app.py
EXPOSE 80
STOPSIGNAL SIGTERM
CMD ["uwsgi", "--http", ":80", "--module", "app.app", "--callable", "app", "--processes=1", "--master"]
version: "3.7"
services:
calibrator:
container_name: sed-calibrator-ui
image: sed-metadata-calibrator:2.0.3
restart: always
ports:
- "8081:80"
environment:
- STORE_ID=N0001
- DYNAMO_TABLE=****
- DYNAMO_REGION=****
- AWS_DEFAULT_REGION=****
- AWS_ACCESS_KEY_ID=****
- AWS_SECRET_ACCESS_KEY=****
The application reads in a single configuration file and connects to a database on startup and then defines the endpoints - none of which touch the filesystem again. How can the source code be deleting itself!?
Creating a new container resolves the issue.
Any suggestions in checking the client's environment would be appreciated because I cannot replicate it.
Client's versions:
Docker Version - 18.09.7
Docker Compose version - 1.24.0
I was able to solve the problem by updating the kernel; it also worked with an older kernel (3.10).
Works:
4.1.12-124.45.6.el7uek.x86_64
Does not work:
4.1.12-124.43.4.el7uek.x86_64
I do not know the root cause; I only know that the problem went away after updating the kernel. I hope your problem is the same one.
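To compare the client's host against the versions above, a quick check (assuming shell access to the host) is:
uname -r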
I'm currently pretty stuck finding a solution for the following error:
LibreOfficeError: [Java framework] Error in function createSettingsDocument (elements.cxx).
javaldx failed!
Warning: failed to read path from javaldx
I start LibreOffice in headless mode with subprocess.run from a Python / gunicorn application, to convert docx files into pdf:
import subprocess

# pdfDocFolder, tmpDocName and timeout are defined elsewhere in the application
args = ['/usr/lib64/libreoffice/program/soffice', '--headless', '--convert-to', 'pdf', '--outdir', pdfDocFolder, tmpDocName]
process = subprocess.run(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, timeout=timeout)
The error message above is what I get when trying to start the conversion.
My application runs in a docker container. The odd thing is that it worked out pretty well previously, when I used the S2I build process of OpenShift to build and deploy the image. Now, after abandoning S2I, building the image locally, and deploying it on OpenShift, I get that error message. I found some discussions of the very same error message in other contexts, stating that the working directory must be made writeable for non-root users and exported as HOME. Unfortunately, that didn't make a difference. I made the working dir writeable for all users. HOME is set to the correct directory. There must be some difference in the S2I build process compared to a local docker build, which makes a difference permission-wise.
That's the two Dockerfiles I use for building the image locally:
Base image:
FROM centos/python-36-centos7
EXPOSE 8080
USER root
RUN yum -y --disablerepo=\* --enablerepo=base,updates update && \
yum -y install libreoffice && \
yum -y install unoconv && \
yum -y install cairo && \
yum -y install cups-libs && \
yum -y install java-1.8.0-openjdk && \
yum clean all -y && \
rm -rf /var/cache/yum
RUN chown 1001:0 /usr/bin/soffice && \
chown 1001:0 /usr/share/fonts/local && \
chown -R 1001:0 /usr/lib64/libreoffice && \
fix-permissions /usr/lib64/libreoffice -P && \
rpm-file-permissions
USER 1001
And that's the Dockerfile built on top of the base image:
ARG REGISTRY_PATH=
ARG BRANCH_NAME=
FROM $REGISTRY_PATH:$BRANCH_NAME-latest
USER root
ENV APP_ROOT=/opt/app-root
ENV PATH=${APP_ROOT}/bin:${PATH} HOME=${APP_ROOT}/src
COPY src ${APP_ROOT}/src
RUN pip install -r requirements.txt
RUN mkdir -p ${APP_ROOT}/.config/libreoffice/4/user && \
chmod -R a+rwx ${APP_ROOT}/src && \
chgrp -R 0 ${APP_ROOT}/src && \
chmod -R g=u ${APP_ROOT}/src /etc/passwd
EXPOSE 8080
USER 1001
WORKDIR ${APP_ROOT}/src
CMD ["gunicorn", "wsgi", "--bind", "0.0.0.0:8080", "--config", "config.py"]
Some hints or ideas to try out would really help me, since I completely ran out of options to pursue.
Thanks a lot.
I know this comes late, but I was experiencing the same issue on a Spring Boot microservice I wrote recently. For me the solution was to set the HOME env variable to a folder assigned to group "0", since OpenShift runs containers with random users for security reasons, all of which belong to group "0". Here's my Dockerfile:
FROM docker.io/eclipse-temurin:17.0.5_8-jre-alpine
RUN apk update && apk upgrade --no-cache
RUN apk add libreoffice
RUN chgrp -R 0 /tmp && chmod -R g=u /tmp
ENV HOME /tmp
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["sh", "-c", "java ${JAVA_OPTS} -jar /app.jar"]
The most important instruction is the
RUN chgrp -R 0 /tmp && chmod -R g=u /tmp
assigning the right group to the folder which I will set as the HOME path. This kind of directive helped me solve several problems when migrating Docker containers to OpenShift, and it's suggested by Red Hat itself in the official guide to building images:
https://docs.openshift.com/container-platform/4.9/openshift_images/create-images.html
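One quick way to check this on a running pod (a hedged suggestion, not part of the guide; <pod> is a placeholder) is to look at the uid/gid the container actually runs as and the permissions on the HOME directory:
oc exec <pod> -- id
oc exec <pod> -- ls -ld /tmp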
hope this helps! :)
I am creating a Docker image in order to run some Python code on a schedule, so I use the python-crontab module. How can I solve the permission denied problem?
Ubuntu 16.04.6 LTS
python 3.5.2
I created sche.py, which triggers weather.py.
It works locally, but I can't get it to run when packaged into a Docker image.
```
#dockerfile
FROM python:3.5.2
WORKDIR /weather
ENTRYPOINT ["/weather"]
ADD . /weather
RUN chmod u+x sche.py
RUN chmod u+x weather.py
RUN mkdir /usr/bin/crontab
#add due to /usr/bin/crontab not found
RUN pip3 install python-crontab
RUN pip3 install -r requirements.txt
EXPOSE 80
#ENV NAME World
CMD ["sudo"]
#CMD ["python", "sche.py"] ## build step fail
ENTRYPOINT ["python","sche.py"]
## can build same as "RUN ["python","sche.py"] "
```
I expect it to run inside the Docker image rather than having to run each Python file manually.
Try USER root after the FROM python:3.5.2 line.
Remove CMD ["sudo"] and ENTRYPOINT ["/weather"]
Updated
Replace RUN mkdir /usr/bin/crontab with:
RUN apt-get update \
&& apt-get install -y cron \
&& apt-get autoremove -y
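Putting those changes together, the Dockerfile might look something like this (a sketch based on the question's file names, not a tested drop-in):
FROM python:3.5.2
USER root
# install the real cron package instead of creating /usr/bin/crontab by hand
RUN apt-get update \
    && apt-get install -y cron \
    && apt-get autoremove -y
WORKDIR /weather
ADD . /weather
RUN pip3 install python-crontab \
    && pip3 install -r requirements.txt
EXPOSE 80
# start the scheduler script directly; no sudo or /weather entrypoint needed
CMD ["python", "sche.py"]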
I have a Scrapy project run continuously by cron, hosted inside a Docker image.
When I run and deploy this locally everything works fine. If I try to deploy the same to AWS I get the following error inside the logs:
No EXPOSE directive found in Dockerfile, abort deployment (ElasticBeanstalk::ExternalInvocationError)
The console shows that my container was built correctly, but I cannot use it without an exposed port.
INFO: Successfully pulled python:2.7
WARN: Failed to build Docker image aws_beanstalk/staging-app, retrying...
INFO: Successfully built aws_beanstalk/staging-app
ERROR: No EXPOSE directive found in Dockerfile, abort deployment
ERROR: [Instance: i-6eebaeaf] Command failed on instance. Return code: 1 Output: No EXPOSE directive found in Dockerfile, abort deployment.
Hook /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
But why is it not possible?
My Dockerfile looks like the following:
FROM python:2.7
MAINTAINER XDF
ENV DIRECTORY /opt/the-flat
# System
##########
RUN apt-get update -y && apt-get upgrade -y && apt-get install -y ntp vim apt-utils
WORKDIR $DIRECTORY
# GIT
##########
# http://stackoverflow.com/questions/23391839/clone-private-git-repo-with-dockerfile
RUN apt-get install -y git
RUN mkdir /root/.ssh/
ADD deploy/git-deply-key /root/.ssh/id_rsa
RUN chmod 0600 /root/.ssh/id_rsa
RUN touch /root/.ssh/known_hosts
RUN ssh-keyscan -t rsa bitbucket.org >> /root/.ssh/known_hosts
RUN ssh -T -o 'ConnectionAttempts=1' git@bitbucket.org
RUN git clone --verbose git@bitbucket.org:XDF/the-flat.git .
# Install
##########
RUN pip install scrapy
RUN pip install MySQL-python
# not working
# apt-get install -y wkhtmltopdf && pip install pdfkit
# else
# https://pypi.python.org/pypi/pdfkit
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y openssl build-essential xorg libssl-dev
RUN wget http://wkhtmltopdf.googlecode.com/files/wkhtmltopdf-0.10.0_rc2-static-amd64.tar.bz2
RUN tar xvjf wkhtmltopdf-0.10.0_rc2-static-amd64.tar.bz2
RUN chown root:root wkhtmltopdf-amd64
RUN mv wkhtmltopdf-amd64 /usr/bin/wkhtmltopdf
RUN pip install pdfkit
# Cron
##########
# http://www.ekito.fr/people/run-a-cron-job-with-docker/
# http://www.corntab.com/pages/crontab-gui
RUN apt-get install -y cron
RUN crontab "${DIRECTORY}/deploy/crontab"
CMD ["cron", "-f"]
It's by design. You need to have an EXPOSE directive in your Dockerfile to tell Beanstalk what port your app will be listening on. Do you have a use case where you cannot or do not want to have EXPOSE in your Dockerfile?
Elastic Beanstalk is designed for web applications, hence the EXPOSE requirement. The use case you demonstrated is that of a jobs (workers) server, which Elastic Beanstalk doesn't handle well.
For your case, either expose a dummy port number in the Dockerfile or launch an EC2 instance yourself and bypass Elastic Beanstalk entirely.
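For the first option, a single dummy line in the Dockerfile is enough to satisfy the check; the port number is arbitrary since nothing actually listens on it:
EXPOSE 8080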