I'm currently working on the "Google Cloud Platform Fundamentals" labs and I'm running into issues.
Each time I have to use a CoreOS instance to spin up a Docker container, I get an error.
For example, in the Cloud SQL lab, at some point I have to build a Docker image from the folder I just cloned from a Git repo, using the command:
docker build -t cp100/cloudsql-python cp100-cloud-sql-python
which gives me a wall of text that ends with an error:
Downloading/unpacking flask
Cannot fetch index base URL http://pypi.python.org/simple/
Could not find any downloads that satisfy the requirement flask
No distributions at all found for flask
Storing complete log in /root/.pip/pip.log
The thing is, there is no /root/.pip/pip.log file.
So here are my questions:
Are the tutorials outdated, and if so, where can I find up-to-date tutorials?
Why does this happen? I think it is because pip or Python (or both) are not installed, but shouldn't the docker build command take care of the installation?
How can I fix it?
The cp100-cloud-sql-python repository is available at https://github.com/GoogleCloudPlatformTraining/cp100-cloud-sql-python.git
Thanks for your answers.
OK, I found the answers myself.
The reason it doesn't work is that pip (and easy_install) use HTTP by default while pypi.python.org requires HTTPS. The issue is further documented here:
https://bugzilla.redhat.com/show_bug.cgi?id=1510444
So in order to fix it, I modified the Dockerfile inside the app from:
FROM google/debian:wheezy
MAINTAINER Sharif Salah <sharif.salah+docker#gmail.com>
RUN apt-get update && \
apt-get install -y python-dev python-pip python-mysqldb && \
pip install flask
ADD app /app
EXPOSE 80
CMD [ "python", "/app/app.py" ]
to
FROM google/debian:wheezy
MAINTAINER Sharif Salah <sharif.salah+docker#gmail.com>
RUN apt-get update && \
apt-get install -y python-dev python-setuptools python-mysqldb && \
easy_install -i https://pypi.python.org/simple flask
ADD app /app
EXPOSE 80
CMD [ "python", "/app/app.py" ]
which forces easy_install to use the HTTPS index specified after -i.
It worked in my case, but as documented on Bugzilla, it may not work for everything.
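If you would rather keep pip instead of switching to easy_install, an equivalent workaround (only a sketch on my part, and the very old pip shipped with wheezy may still struggle with it) is to point pip at the HTTPS index explicitly:
FROM google/debian:wheezy
# point pip at the HTTPS index instead of its old HTTP default
RUN apt-get update && \
    apt-get install -y python-dev python-pip python-mysqldb && \
    pip install --index-url https://pypi.python.org/simple/ flask
ADD app /app
EXPOSE 80
CMD [ "python", "/app/app.py" ]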
I hope this will help someone
Related
I deployed a Django application on AWS Elastic Beanstalk and there were no errors during that process. Health is green and OK, but the page gives an Internal Server Error, so I checked the logs and saw the error below.
... web: from .cv2 import
... web: ImportError: libGL.so.1: cannot open shared object file: No such file or directory
OpenCV should be installed while requirements.txt is processed during deployment, because it includes opencv-python==4.5.5.64,
so I'm not quite sure what the above error is pointing at.
In helpers.py, this is how I import it:
import requests
import cv2
libGL.so is installed by the libgl1 package; pip3 install opencv-python alone is not sufficient here.
Connect to the AWS instance via SSH and run:
apt-get update && apt-get install libgl1
Or, even better, consider using Docker containers for the project and adding the installation commands to the Dockerfile.
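For example, a minimal Dockerfile sketch (assuming a Debian-based Python image and that the project's dependencies, including opencv-python, are listed in requirements.txt; the base image tag and the CMD are illustrative, not taken from the question):
FROM python:3.9-slim
# libGL.so.1 is required at runtime by opencv-python
RUN apt-get update && apt-get install -y --no-install-recommends libgl1 && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]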
Also, as https://stackoverflow.com/a/66473309/12416058 suggests, the python3-opencv package includes all of OpenCV's system dependencies, so installing it may prevent further errors.
To install python3-opencv:
apt-get update && apt-get install -y python3-opencv
pip install -r requirements.txt
To install it in a Dockerfile:
RUN apt-get update && apt-get install -y python3-opencv
RUN pip install -r requirements.txt
I have a Flask Python app that uses a spaCy model (md or lg). I am running it in a Docker container in VS Code and everything works correctly on my laptop.
When I push the image to my Azure Container Registry, the app restarts but doesn't seem to get past this line in the log:
Initiating warmup request to the container.
If I comment out the line nlp = spacy.load('en_core_web_lg'), the website loads fine (of course it doesn't work as expected).
I am installing the model in the Dockerfile after installing requirements.txt:
RUN python -m spacy download en_core_web_lg
Dockerfile:
FROM python:3.6
EXPOSE 5000
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE 1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED 1
# steps needed for scipy
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev libc-dev build-essential
RUN pip install -U pip
# Install pip requirements
ADD requirements.txt .
RUN python -m pip install -r requirements.txt
RUN python -m spacy download en_core_web_md
WORKDIR /app
ADD . /app
# During debugging, this entry point will be overridden. For more information, refer to https://aka.ms/vscode-docker-python-debug
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "Application.webapp:app"]
Try using en_core_web_sm instead of en_core_web_lg.
You can install the model with python -m spacy download en_core_web_sm
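In the Dockerfile above that would mean swapping the download line (a sketch; the application code then has to load the same model name it downloads):
# download the small model instead of the md/lg one
RUN python -m spacy download en_core_web_sm
# and in the app, load the matching model: nlp = spacy.load('en_core_web_sm')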
I noticed you asked your question over on MSDN. If en_core_web_sm works but _md and _lg don't, increase your timeout by setting WEBSITES_CONTAINER_START_TIME_LIMIT to a value of up to 1800 seconds. The larger models may simply take too long to load and the warmup request times out.
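If you use the Azure CLI, one way to set that app setting is shown below (a sketch; <resource-group> and <app-name> are placeholders for your own values):
az webapp config appsettings set \
  --resource-group <resource-group> \
  --name <app-name> \
  --settings WEBSITES_CONTAINER_START_TIME_LIMIT=1800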
If you have already done that, email us at AzCommunity[at]microsoft[dot]com, ATTN Ryan, so we can take a closer look. Include your subscription ID and App Service name.
Up until recently I'd been using the OpenSSL library within the python:3.6.6-jessie Docker image and things worked as intended.
I'm using a very basic Dockerfile configuration to install all the necessary dependencies:
FROM python:3.6.6-jessie
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /code
RUN apt-get -qq update
RUN apt-get install openssl
RUN apt-get upgrade -y openssl
ADD requirements.txt /code/
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
And I access and initialize the library itself like this:
from ctypes import cdll
openssl = cdll.LoadLibrary("libssl.so")
openssl.SSL_library_init()
Things were working great with this approach.
This week I was doing upgrade of python and libraries and as result I switched to newer docker image:
FROM python:3.7.5
...
This immediately caused OpenSSL to stop working with this exception:
AttributeError: /usr/lib/x86_64-linux-gnu/libssl.so.1.1: undefined symbol: SSL_library_init
From this error I understand that libssl no longer provides the SSL_library_init function (or so it seems), which is a rather odd issue because the initializer name in the OpenSSL documentation is the same.
I also tried to resolve this with the -stretch and -buster based images, but the issue remains.
What is the correct approach to run SSL_library_init on those newer distributions? Maybe some additional Dockerfile configuration is required?
I think you need to install libssl1.0-dev:
RUN apt-get install -y libssl1.0-dev
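OpenSSL 1.1 no longer exports SSL_library_init as a symbol, so the idea is to make libssl.so resolve to the 1.0 series again. A rough Dockerfile sketch, assuming an image whose repositories still package libssl1.0-dev (for example the -stretch based tag); I have not verified this against every tag:
FROM python:3.7.5-stretch
WORKDIR /code
# install the OpenSSL 1.0 development package so that libssl.so
# points at a library that still exports SSL_library_init
RUN apt-get -qq update && apt-get install -y libssl1.0-dev
ADD requirements.txt /code/
RUN pip install --upgrade pip && pip install -r requirements.txt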
I am using a CentOS base image and installing Python 3 with the following Dockerfile:
FROM centos:7
ENV container docker
ARG USER=dsadmin
ARG HOMEDIR=/home/${USER}
RUN yum clean all \
&& yum update -q -y -t \
&& yum install file -q -y
RUN useradd -s /bin/bash -d ${HOMEDIR} ${USER}
RUN export LC_ALL=en_US.UTF-8
# install Development Tools to get gcc
RUN yum groupinstall -y "Development Tools"
# install python development so that pip can compile packages
RUN yum -y install epel-release && yum clean all \
&& yum install -y python34-setuptools \
&& yum install -y python34-devel
# install pip
RUN easy_install-3.4 pip
# install virtualenv or virtualenvwrapper
RUN pip3 install virtualenv \
&& pip3 install virtualenvwrapper \
&& pip3 install pandas
# # install django
# RUN pip3 install django
USER ${USER}
WORKDIR ${HOMEDIR}
I build and tag the above as follows:
docker build . --label validation --tag validation
I then need to add a .tar.gz file to the home directory. This file contains all the Python scripts I maintain and will change frequently. If I add it to the Dockerfile above, Python is installed every time I change the .gz file, which adds a lot of time to development. As a workaround, I tried creating a second Dockerfile that uses the above image as the base and then just adds the .tar.gz file on top of it.
FROM validation:latest
ARG USER=dsadmin
ARG HOMEDIR=/home/${USER}
ADD code/validation_utility.tar.gz ${HOMEDIR}/.
USER ${USER}
WORKDIR ${HOMEDIR}
After that, if I run the image and do an ls, all the files in the folder have an owner of games.
-rw-r--r-- 1 501 games 35785 Nov 2 21:24 Validation_utility.py
To fix the above, I added the following lines to the second Dockerfile:
ADD code/validation_utility.tar.gz ${HOMEDIR}/.
RUN chown -R ${USER}:${USER} ${HOMEDIR} \
&& chmod +x ${HOMEDIR}/Validation_utility.py
but I get the error:
chown: changing ownership of '/home/dsadmin/Validation_utility.py': Operation not permitted
The goal is to have two Dockerfiles. Users will build the first Dockerfile to install CentOS and the Python dependencies. The second Dockerfile will install the custom Python scripts. If the scripts change, they will just build the second Dockerfile again. Is that the right way to think about Docker? Thank you.
Is that the right way to think about Docker?
This is the easy part of your question. Yes. You're thinking about the proper way to structure your Dockerfiles, reuse them, and keep your image builds efficient. Good job.
As for the error you're receiving, I'm less confident in explaining why the ADD command un-tarballs your tar.gz as the games user; I'm not nearly as familiar with CentOS. That's the start of the problem: dsadmin, as a regular non-privileged user, can't change ownership of files he doesn't own, and since the un-tarballed script is owned by games, the chown command fails.
I used your Dockerfiles and got the same issue on macOS.
You can get around this by, well, not using ADD. Which is funny because local tarball extraction is the one use case where Docker thinks you should prefer ADD over COPY.
COPY code/validation_utility.tar.gz ${HOMEDIR}/.
RUN tar -xvf validation_utility.tar.gz
This properly extracts the tarball and, since dsadmin is the user at that point, the contents come out owned by dsadmin.
(An uglier route might be to switch the USER to root to set permissions, then set it back to dsadmin. I think this is icky, but it's an option.)
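A rough sketch of that uglier route, in case you want to keep using ADD (same ARG values as above; untested on my side):
FROM validation:latest
ARG USER=dsadmin
ARG HOMEDIR=/home/${USER}
ADD code/validation_utility.tar.gz ${HOMEDIR}/.
# temporarily switch to root so the ownership change is permitted
USER root
RUN chown -R ${USER}:${USER} ${HOMEDIR} \
 && chmod +x ${HOMEDIR}/Validation_utility.py
# drop back to the unprivileged user for the final image
USER ${USER}
WORKDIR ${HOMEDIR}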
I'm currently building a Docker image and running the container to run some tests in it for a Python application I'm working on. Currently, the Dockerfile copies the files over from the host machine, sets the working directory to the copied files, runs apt-get and installs pip, and finally runs the tests from setup.py. The Dockerfile can be seen below.
FROM ubuntu
ADD . /home/dev/ProjectName
WORKDIR /home/dev/ProjectName
RUN apt-get update && \
apt-get install -y python3-pip && \
python3 setup.py test
I was curious whether there is a more conventional way to avoid running apt-get and installing pip every time I'd like to run a test. The main idea I had was to build an image with pip already on it, and then build this image from that one.
Docker builds with cached layers when it can. Adding files that you have changed invalidates the cache for all subsequent instructions. Put the apt commands first, and they will only be run the first time you build. See this blog for more info.
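For example, reordering the Dockerfile from the question so the apt layer sits above the copied sources (a sketch; the test step itself is unchanged):
FROM ubuntu
# this layer is cached and only rebuilt when the apt commands themselves change
RUN apt-get update && \
    apt-get install -y python3-pip
# copying the sources after the install keeps code changes from busting the apt layer
ADD . /home/dev/ProjectName
WORKDIR /home/dev/ProjectName
RUN python3 setup.py test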