I'm trying to get google.appengine.ext working in a Docker image.
Dockerfile:
FROM gcr.io/google-appengine/python
RUN virtualenv /env
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ENV PYTHONPATH /app:/app/lib:/opt/google-cloud-sdk/platform/google_appengine:$PYTHONPATH
ADD requirements.txt /app/
RUN pip install -r /app/requirements.txt
ADD . /app
If I do print(google.__path__) I get this:
['/env/local/lib/python2.7/site-packages/google', '/env/lib/python2.7/site-packages/google']
The google.appengine module is baked into the first-generation Python (2.7) runtime. It's not available to install via pip, in the second-generation (3.7) runtime, or in any Docker environment.
The only way to use it is by writing and deploying a first-generation App Engine app.
Depending on what you're doing, you should be able to replace it with a client library call instead.
See https://cloud.google.com/appengine/docs/standard/python/migrate-to-python3 for more details.
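For example, code that used the first-generation ndb API can usually be ported to the google-cloud-ndb client library (pip install google-cloud-ndb). A minimal sketch, assuming a hypothetical BlogPost model:

# First-generation code, available only on the Python 2.7 standard runtime:
#   from google.appengine.ext import ndb
# Second-generation / Docker replacement via the google-cloud-ndb library:
from google.cloud import ndb

class BlogPost(ndb.Model):
    title = ndb.StringProperty()

client = ndb.Client()
with client.context():  # google-cloud-ndb requires an explicit context
    BlogPost(title="hello").put()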
I'm deploying an app to parse pdfs and return their highlighted content. After submitting my build and deploying it on cloud run, I ran into this error:
ModuleNotFoundError: No module named 'popplerqt5'
I previously ran into this error when running it in a python3 virtualenv on my local machine. However, I resolved it by running
/usr/bin/python3 main.py
instead of
python3 main.py
Currently I am running the app from my Dockerfile and am hence unable to pull off the same workaround. This is my Dockerfile configuration:
FROM gcr.io/google-appengine/python
# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
# Use -p python3 or -p python3.7 to select python version. Default is version 2.
RUN apt-get update
RUN apt-get install poppler-utils -y
RUN virtualenv -p python3 /env
# Setting these environment variables is the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
RUN apt-get install -y python3-poppler-qt5
# Copy the application's requirements.txt and run pip to install all
# dependencies into the virtualenv.
ADD requirements.txt /app/requirements.txt
RUN pip install Flask gunicorn
RUN pip install -r /app/requirements.txt
# Add the application source code.
ADD . /app
# Run a WSGI server to serve the application. gunicorn must be declared as
# a dependency in requirements.txt.
CMD gunicorn -b :$PORT main:app
How do I get around this error?
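One likely cause (an assumption; the question doesn't confirm it): apt installs python3-poppler-qt5 into the system Python's site-packages, while a plain virtualenv is isolated from those, which would also explain why /usr/bin/python3 worked locally but python3 from a virtualenv did not. A sketch of the change under that assumption:

RUN apt-get install -y python3-poppler-qt5
# Let the virtualenv see system-level packages such as popplerqt5
RUN virtualenv -p python3 --system-site-packages /env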
Common advice (example) for carrying out CI is to use an image with pre-installed dependencies. Unfortunately for a n00b like me, the link in question doesn't go into further detail.
When I look for docker tutorials, they usually teach you how to containerise an app rather than, say, Python with some pre-installed dependencies.
For example, if this is what my .gitlab-ci.yml file looks like:
image: "python:3.7"
before_script:
- python --version
- pip install -r requirements.txt
stages:
- Static Analysis
flake8:
stage: Static Analysis
script:
- flake8 --max-line-length=120
how can I containerise Python with some pre-installed dependencies (here, the ones in requirements.txt), and how should I change the .gitlab-ci.yml file, so that the CI process runs faster?
To make it faster, I recommend creating a custom Dockerfile based on python:3.7 that installs all the dependencies during the build. This saves time because the job no longer needs to install the dependencies on every run.
FROM python:3.7
RUN python --version
# Create app directory
WORKDIR /app
# copy requirements.txt
COPY local-src/requirements.txt ./
# Install app dependencies
RUN pip install -r requirements.txt
# Bundle app source
COPY src /app
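Once that image is built and pushed, the .gitlab-ci.yml from the question can use it and drop the pip install step entirely. A sketch, assuming the image was pushed as myuser/python-deps:3.7 (a placeholder name):

image: "myuser/python-deps:3.7"
before_script:
  - python --version
stages:
  - Static Analysis
flake8:
  stage: Static Analysis
  script:
    - flake8 --max-line-length=120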
You can read more about this practice in docker-python-pip-requirements and write-effective-docker-files-with-python.
Another option is to add a git client to the Dockerfile and pull the code while creating the container.
I have created a Python command line application that is available through PyPi / pip install.
The application has native dependencies.
To make the installation less painful for Windows users I would like to create a Dockerised version out of this command line application.
What are the steps to easily convert a setup.py with an entry point and a requirements.txt into a command line application? Is there any tooling around this, or should I just write the Dockerfile by hand?
Well, you have to create a Dockerfile and build an image off of it. There are best practices regarding Docker image creation that you need to apply, and there are also language-specific best practices.
Just to give you some ideas about the process:
# Base image
FROM python:3.7.1-alpine3.8
# Add project files
ADD . /myapp
WORKDIR /myapp
# Put your native dependency packages here
RUN apk add --no-cache dep1 dep2
# Install pip packages
RUN pip install -r requirements.txt
RUN pip install .
CMD ["myapp", "-h"]
Now build the image and push it to some public registry:
sudo docker build -t <yourusername>/myapp:0.1 .
Users can then just pull the image and use it:
sudo docker run -it <yourusername>/myapp:0.1 myapp <switches/arguments>
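If you want users to pass only the switches and arguments, one option is to make the tool the image's entry point rather than a plain CMD (a sketch, assuming setup.py installs a console script named myapp):

ENTRYPOINT ["myapp"]
CMD ["-h"]

Then docker run -it <yourusername>/myapp:0.1 <switches/arguments> runs the tool directly, and running the image with no arguments prints its help text.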
I'm trying to set up a Docker image running Python 2.7. The code I want to run within the container relies on packages I'm trying to get using pip during the Docker image build. My problem is that pip only gets some of the packages, issuing an error while trying to get the others. Here is my Dockerfile:
# Use an official Python runtime as a parent image
FROM python:2.7-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "./my_script.py"]
And here is the requirements.txt:
Flask
redis
time
sys
opcua
Pip has no problem collecting Flask and redis, but it raises an error when it comes to collecting time (the same problem with sys and opcua).
What should I do to make it work with all the pip packages? Thanks in advance!
The issue here is that time is part of Python's standard library, so it's installed with the rest of Python. This means you do not (and cannot) pip install it. This goes for sys as well. Take these out of your requirements.txt and you should be good to go!
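With those removed, the requirements.txt from the question shrinks to something like this (keeping opcua, on the assumption that it refers to the python-opcua package on PyPI):

Flask
redis
opcua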
I can't wrap my head around how to dockerize existing Django app.
I've read this official manual by Docker explaining how to create a Django project during the creation of a Docker image, but what I need is to dockerize an existing project using the same method.
The main purpose of this approach is that I have no need to build Docker images locally all the time; instead, I want to push my code to a remote repository that has a Docker Hub watcher attached to it, so that as soon as the code base is updated, the image is built automatically on the server.
For now my Dockerfile looks like:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install Django
RUN pip install djangorestframework
RUN pip install PyQRCode
ADD . /code/
Can anyone please explain how should I compose Dockerfile and do I need to use docker-compose.yml (if yes: how?) to achieve functionality I've described?
Solution for this question:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
RUN pip install *name of package*
RUN pip install *name of another package*
ADD . /code/
EXPOSE 8000
CMD python3 manage.py runserver 0.0.0.0:8000
OR
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
EXPOSE 8000
CMD python3 manage.py runserver 0.0.0.0:8000
requirements.txt should be a plain list of packages, for example:
Django==1.11
djangorestframework
pyqrcode
pypng
This question is too broad. What happens with the Dockerfile you've created?
You don't need docker compose unless you have multiple containers that need to interact.
Some general observations from your current Dockerfile:
It would be better to collapse the pip install commands into a single statement. In Docker, each statement creates a file system layer, and the layers in between the pip install commands probably serve no useful purpose (see the sketch after this list).
It's better to declare dependencies in setup.py or a requirements.txt file (pip install -r requirements.txt), with fixed version numbers (foopackage==0.0.1) to ensure a repeatable build.
I'd recommend packaging your Django app into a python package and installing it with pip (cd /code/; pip install .) rather than directly adding the code directory.
You're missing a statement (CMD or ENTRYPOINT) to execute the app. See https://docs.docker.com/engine/reference/builder/#cmd
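For example, points 1 and 2 together: pin the versions in requirements.txt and install everything in a single layer (the version pins below are illustrative, not recommendations):

# requirements.txt
Django==1.11
djangorestframework==3.9.0
PyQRCode==1.2.1

# Dockerfile
ADD requirements.txt /code/
RUN pip install -r requirements.txt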
Warning: -onbuild images have been deprecated.
@AlexForbes raised very good points. But if you want a super simple Dockerfile for Django, you can probably just do:
FROM python:3-onbuild
RUN python manage.py collectstatic
CMD ["python", "manage.py"]
You then run your container with:
docker run myimagename runserver
The little -onbuild modifier does most of what you need. It creates /usr/src/app, sets it as the working directory, copies all your source code inside, and runs pip install -r requirements.txt (which you forgot to run). Finally, we collect static files (this might not be required in your case if statics are hosted somewhere else) and set the default command to manage.py so everything is easy to run.
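Since -onbuild images are deprecated (see the warning above), the same behaviour can also be spelled out explicitly. A rough plain-Dockerfile equivalent (my approximation, not the official python:3-onbuild definition verbatim):

FROM python:3
# What the -onbuild variant did implicitly:
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
# Plus the steps from the answer above:
RUN python manage.py collectstatic
CMD ["python", "manage.py"]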
You would need docker-compose if you had to run other containers like Celery, Redis or any other background task or server not supplied by your environment.
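If you do add such services, a minimal docker-compose.yml could look like this (the service names and the redis image tag are illustrative):

version: "3"
services:
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
  redis:
    image: redis:alpine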
I actually wrote an article about this at https://rehalcon.blogspot.mx/2018/03/dockerize-your-django-app-for-local.html
My case is very similar, but it adds a MySQL db service and environment variables for code secrets, as well as the use of docker-compose (needed on macOS). I also use the python:2.7-slim Docker parent image instead, to make the image much smaller (under 150MB).