I’m trying to set up a Docker image running Python 2.7. The code I want to run inside the container relies on packages I’m trying to install with pip during the Docker image build. My problem is that pip only installs some of the packages and raises an error when trying to get the others. Here is my Dockerfile:
# Use an official Python runtime as a parent image
FROM python:2.7-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "./my_script.py"]
And here is the requirements.txt:
Flask
redis
time
sys
opcua
Pip has no problem collecting Flask and redis, but it raises an error when it comes to collecting time (and the same problem occurs with sys and opcua).
What should I do to make it work with all the pip packages? Thanks in advance!
The issue here is time is a part of Python's standard library, so it's installed with the rest of Python. This means you do not (and cannot) pip install it. This goes for sys as well. Take these out of your requirements.txt and you should be good to go!
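With those two lines removed, requirements.txt is reduced to the packages that actually come from PyPI:
Flask
redis
opcua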
Below is my Dockerfile:
FROM python:3.9.0
ARG WORK_DIR=/opt/quarter_1
RUN apt-get update && apt-get install cron -y && apt-get install -y default-jre
# Install python libraries
COPY requirements.txt /tmp/requirements.txt
RUN pip install --upgrade pip && pip install -r /tmp/requirements.txt
WORKDIR $WORK_DIR
EXPOSE 8888
VOLUME /home/data/quarter_1/
# Copy the ETL code into the container under your workdir "/opt/quarter_1"
COPY . .
I connected to the server, then I built the image with docker build -t my-python-app .
When I tried to run a container from the built image, I got nothing and was not able to get it working:
docker run -p 8888:8888 -v /home/data/quarter_1/:/opt/quarter_1 image_id
The working directory here is /opt/quarter_1.
Update based on comments
If I understand everything you've posted correctly, my suggestion here is to use a base Docker Jupyter image, modify it to add your pip requirements, and then add your files to the work path. I've tested the following:
Start with a dockerfile like below
FROM jupyter/base-notebook:python-3.9.6
COPY requirements.txt /tmp/requirements.txt
RUN pip install --upgrade pip && pip install -r /tmp/requirements.txt
COPY ./quarter_1 /home/jovyan/quarter_1
Above assumes you are running the build from the folder containing dockerfile, "requirements.txt", and the "quarter_1" folder with your build files.
Note that "/home/jovyan" is the default working folder in this image.
Build the image
docker build -t biwia-jupyter:3.9.6 .
Start the container with open port to 8888. e.g.
docker run -p 8888:8888 biwia-jupyter:3.9.6
Connect to the container to get the access token. There are a few ways to do this, but for example:
docker exec -it CONTAINER_NAME bash
jupyter notebook list
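That will print something roughly like the following (the token shown here is just a placeholder):
Currently running servers:
http://0.0.0.0:8888/?token=abc123... :: /home/jovyan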
Copy the token in the URL and connect using your server IP and port. You should be able to paste the token there, and afterwards access the folder you copied into the build, as I did below.
Jupyter screenshot
If you are deploying the image to different hosts this is probably the best way to do it using COPY/ADD etc., but otherwise look at using Docker Volumes which give you access to a folder (for example quarter_1) from the host, so you don't constantly have to rebuild during development.
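As a rough sketch of that volume approach, you could bind-mount the host folder from your original run command over the copied folder (paths assumed from your post), so edits show up in the container without rebuilding:
docker run -p 8888:8888 -v /home/data/quarter_1/:/home/jovyan/quarter_1 biwia-jupyter:3.9.6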
Second edit for Python 3.9.0 request
Using the method above, 3.9.0 is not immediately available from Docker Hub. I doubt you'll have many compatibility issues between 3.9.0 and 3.9.6, but we'll build it anyway. We can download the dockerfile folder from GitHub, update a build argument to create our own variant with 3.9.0, and then proceed as above.
This assumes you have git; otherwise, download the repo manually.
Download the Jupyter Docker stack repo
git clone https://github.com/jupyter/docker-stacks
Change into the base-notebook directory of the cloned repo
cd ./base-notebook
Build the image with python 3.9.0 instead
docker build --build-arg PYTHON_VERSION=3.9.0 -t jupyter-base-notebook:3.9.0 .
Create the version with your copied folders from the steps above, replacing the first line of that dockerfile with:
FROM jupyter-base-notebook:3.9.0
I've tested this and it works, running Python 3.9.0 without issue.
There are lots of ways to build Jupyter images; this is just one method. Check out Docker Hub for Jupyter to see their variants.
I'm experiencing differences with the contents of a container depending on whether I open a bash shell via docker run -i -t <container> bash or docker-compose run <container> bash and I don't know/understand how this is possible.
To aid in the explanation, please see this screenshot from my terminal. In both instances, I am running the image called blaze, which has been built from the Dockerfile in my code. One of the steps during the build is to create a virtualenv called venv; however, when I open a bash shell via docker-compose this virtualenv doesn't seem to exist, unlike when I run docker run ....
I am relatively new to setting up my own builds with Docker, but surely if they are both referencing the same image, the output of ls within a bash shell should be the same? I would greatly appreciate any help or guidance to resources that would explain what exactly is going wrong here...
As an additional point, running docker images shows that both commands must be using the same image...
Thanks in advance!
This is my Dockerfile:
FROM blaze-base-image:latest
# add an URL that PIP automatically searches (e.g., Azure Artifact Store URL)
ARG INDEX_URL
ENV PIP_EXTRA_INDEX_URL=$INDEX_URL
# Copy source code to docker image
RUN mkdir /opt/app
COPY . /opt/app
RUN ls /opt/app
# Install Blaze pip dependencies
WORKDIR /opt/app
RUN python3.7 -m venv /opt/app/venv
RUN /opt/app/venv/bin/python -m pip install --upgrade pip
RUN /opt/app/venv/bin/python -m pip install keyring artifacts-keyring
RUN touch /opt/app/venv/pip.conf
RUN echo $'[global]\nextra-index-url=https://www.index.com' > /opt/app/venv/pip.conf
RUN /opt/app/venv/bin/python -m pip install -r /opt/app/requirements.txt
RUN /opt/app/venv/bin/python -m spacy download en_core_web_sm
# Comment
CMD ["echo", "Container build complete"]
And this is my docker-compose.yml:
version: '3'
services:
  blaze:
    build: .
    image: blaze
    volumes:
      - .:/opt/app
There are two intersecting things going on here:
When you have a Compose volumes: or docker run -v option mounting host content over a container directory, the host content completely replaces what's in the image. If you don't have a ./venv directory on the host, then there won't be a /opt/app/venv directory in the container. That's why, when you docker-compose run blaze ..., the virtual environment is missing.
If you docker run a container, the only options that are considered are those in that specific docker run command. docker run doesn't know about the docker-compose.yml file and won't take options from there. That means there isn't this volume mount in the docker run case, which is why the virtual environment reappears.
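To make the difference concrete, the Compose invocation is roughly equivalent to the following (note the extra bind mount taken from your volumes: section):
docker run -v "$PWD:/opt/app" -it blaze bash
whereas your plain docker run -i -t blaze bash has no such mount, so the /opt/app/venv directory baked into the image remains visible.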
Typically in Docker you don't need a virtual environment at all: the Docker image is isolated from other images and Python installations, and so it's safe and normal to install your application into the "system" Python. You also typically want your image to be self-contained and not depend on content from the host, so you wouldn't generally need the bind mount you show.
That would simplify your Dockerfile to:
FROM blaze-base-image:latest
# Any ARG will automatically appear as an environment variable to
# RUN directives; this won't be needed at run time
ARG PIP_EXTRA_INDEX_URL
# Creates the directory if it doesn't exist
WORKDIR /opt/app
# Install the Python-level dependencies
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
# The requirements.txt file should list every required package
# Install the rest of the application
COPY . .
# Set the main container command to run the application
CMD ["./app.py"]
The docker-compose.yml file can be similarly simplified to
version: '3.8' # '3' means '3.0'
services:
  blaze:
    build: .
    # Compose picks its own image name
    # Do not need volumes:, the image is self-contained
and then it will work consistently with either docker run or docker-compose run (or docker-compose up).
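For example, both of these should now show the same filesystem contents (the image tag used outside Compose is just one you pick yourself when building manually):
docker-compose run blaze bash
docker build -t blaze . && docker run --rm -it blaze bash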
I'm trying to get google.appengine.ext working in a docker image
Dockerfile:
FROM gcr.io/google-appengine/python
RUN virtualenv /env
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ENV PYTHONPATH /app:/app/lib:/opt/google-cloud-sdk/platform/google_appengine:$PYTHONPATH
ADD requirements.txt /app/
RUN pip install -r /app/requirements.txt
ADD . /app
If I do print(google.__path__) I get this:
['/env/local/lib/python2.7/site-packages/google', '/env/lib/python2.7/site-packages/google']
The google.appengine module is baked into the first-generation Python (2.7) runtime. It's not available to install via pip, in the second-generation (3.7) runtime, or in any Docker environment.
The only way to use it is by writing and deploying a first-generation App Engine app.
Depending on what you're doing, you should be able to replace it with a client library call instead.
See https://cloud.google.com/appengine/docs/standard/python/migrate-to-python3 for more details.
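For example, if you were using the Datastore APIs from google.appengine.ext (an assumption about your code), the standalone Cloud NDB client library installs cleanly inside Docker:
RUN pip install google-cloud-ndb
Your code would then import google.cloud.ndb instead of google.appengine.ext.ndb; the migration guide above lists similar replacements for the other google.appengine modules.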
I have a Flask Python app that uses a spaCy model (md or lg). I am running it in a Docker container in VSCode and everything works correctly on my laptop.
When I push the image to my Azure Container Registry, the app restarts but it doesn't seem to get past this line in the log:
Initiating warmup request to the container.
If I comment out the line nlp = spacy.load('en_core_web_lg'), the website loads fine (of course it doesn't work as expected).
I am installing the model in the Dockerfile after installing requirements.txt:
RUN python -m spacy download en_core_web_lg
Dockerfile:
FROM python:3.6
EXPOSE 5000
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE 1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED 1
# steps needed for scipy
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev libc-dev build-essential
RUN pip install -U pip
# Install pip requirements
ADD requirements.txt .
RUN python -m pip install -r requirements.txt
RUN python -m spacy download en_core_web_md
WORKDIR /app
ADD . /app
# During debugging, this entry point will be overridden. For more information, refer to https://aka.ms/vscode-docker-python-debug
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "Application.webapp:app"]
Try using en_core_web_sm instead of en_core_web_lg.
You can install the model with python -m spacy download en_core_web_sm
Noticed you asked your question over on MSDN. If en_core_web_sm works but _md and _lg don't, increase your timeout by setting WEBSITES_CONTAINER_START_TIME_LIMIT to a value of up to 1800 seconds. Because of its size, the image might simply be taking too long to load and timing out.
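If you prefer the CLI over the portal, something along these lines should work (the resource group and app name are placeholders):
az webapp config appsettings set --resource-group <your-rg> --name <your-app> --settings WEBSITES_CONTAINER_START_TIME_LIMIT=1800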
If you've already done that, email us at AzCommunity[at]microsoft[dot]com, ATTN: Ryan, so we can take a closer look. Include your subscription ID and app service name.
I can't wrap my head around how to dockerize existing Django app.
I've read this official manual by Docker explaining how to create Django project during the creation of Docker image, but what I need is to dockerize existing project using the same method.
The main purpose of this approach is that I have no need to build Docker images locally all the time; instead, I want to push my code to a remote repository that has a Docker Hub watcher attached to it, so that as soon as the code base is updated it is built automatically on the server.
For now my Dockerfile looks like:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install Django
RUN pip install djangorestframework
RUN pip install PyQRCode
ADD . /code/
Can anyone please explain how I should compose the Dockerfile, and do I need to use docker-compose.yml (if yes: how?) to achieve the functionality I've described?
Solution for this question:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
RUN pip install *name of package*
RUN pip install *name of another package*
ADD . /code/
EXPOSE 8000
CMD python3 manage.py runserver 0.0.0.0:8000
OR
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
EXPOSE 8000
CMD python3 manage.py runserver 0.0.0.0:8000
requirements.txt should be a plain list of packages, for example:
Django==1.11
djangorestframework
pyqrcode
pypng
This question is too broad. What happens with the Dockerfile you've created?
You don't need docker compose unless you have multiple containers that need to interact.
Some general observations from your current Dockerfile:
It would be better to collapse the pip install commands into a single statement. In Docker, each statement creates a file system layer, and the layers in between the pip install commands probably serve no useful purpose (see the combined sketch after this list).
It's better to declare dependencies in setup.py or a requirements.txt file (pip install -r requirements.txt), with fixed version numbers (foopackage==0.0.1) to ensure a repeatable build.
I'd recommend packaging your Django app into a python package and installing it with pip (cd /code/; pip install .) rather than directly adding the code directory.
You're missing a statement (CMD or ENTRYPOINT) to execute the app. See https://docs.docker.com/engine/reference/builder/#cmd
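Putting those observations together, a sketch of the Dockerfile (assuming a conventional requirements.txt and manage.py at the project root) might look like:
FROM python:3
ENV PYTHONUNBUFFERED 1
WORKDIR /code
# Copy and install pinned dependencies first so this layer is cached
COPY requirements.txt /code/
RUN pip install -r requirements.txt
# Then copy the rest of the application code
COPY . /code/
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]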
Warning: -onbuild images have been deprecated.
@AlexForbes raised very good points. But if you want a super simple Dockerfile for Django, you can probably just do:
FROM python:3-onbuild
RUN python manage.py collectstatic
CMD ["python", "manage.py"]
You then run your container with:
docker run myimagename runserver
The little -onbuild modifier does most of what you need. It creates /usr/src/app, sets it as the working directory, copies all your source code inside, and runs pip install -r requirements.txt (which you forgot to run). Finally we collect statics (might not be required in your case if statics are hosted somewhere), and set the default command to manage.py so everything is easy to run.
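Since -onbuild images are deprecated (see the warning above), a rough non-onbuild equivalent of the same Dockerfile would be:
FROM python:3
WORKDIR /usr/src/app
# These two steps are what the -onbuild triggers used to do for you
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
RUN python manage.py collectstatic
CMD ["python", "manage.py"]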
You would need docker-compose if you had to run other containers like Celery, Redis or any other background task or server not supplied by your environment.
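A minimal sketch of such a setup (service names, ports, and the Redis image choice are assumptions, not taken from your project):
version: '3'
services:
  web:
    build: .
    ports:
      - "8000:8000"
  redis:
    image: redis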
I actually wrote an article about this in https://rehalcon.blogspot.mx/2018/03/dockerize-your-django-app-for-local.html
My case is very similar, but it adds a MySQL db service and environment variables for code secrets, as well as the use of docker-compose (needed on macOS). I also use the python:2.7-slim Docker parent image instead, to make the image much smaller (under 150MB).