How to deploy Google Cloud Functions using a custom container image - Python

To enable the WebDriver in my Google Cloud Function, I created a custom container using a Dockerfile:
FROM python:3.7
COPY . /
WORKDIR /
RUN pip3 install -r requirements.txt
RUN apt-get update
RUN apt-get install -y gconf-service libasound2 libatk1.0-0 libcairo2 libcups2 libfontconfig1 libgdk-pixbuf2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libxss1 fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils
#download and install chrome
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
#install python dependencies
COPY requirements.txt requirements.txt
RUN pip install -r ./requirements.txt
# Downloading gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
# Installing the package
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh
# Adding the package path to local
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
#some envs
ENV PORT 5000
#copy local files
COPY . .
CMD exec gunicorn --bind :${PORT} --workers 1 --threads 8 main:app
ENTRYPOINT ["webcrawler"]
I installed gcloud in this Docker image so that I can use gcloud to deploy my Cloud Function. Then, I deploy my script using this cloudbuild.yaml:
steps:
- name: 'us-central1-docker.pkg.dev/$PROJECT_ID/webcrawler-repo/webcrawler:tag1'
  entrypoint: 'gcloud'
  args: ['functions', 'deploy', 'MY_FUN', '--trigger-topic=MY_TOPIC', '--runtime=python37', '--entry-point=main', '--region=us-central1', '--memory=512MB', '--timeout=540s']
  id: 'deploying MY_FUN'
  dir: 'MY_DIR'
However, I end up getting this error for my deployment:
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Build failed: invalid storage source object "MY_FUN-ba7acf95-4297-46b3-b76e-1c25ba21ba03/version-14/function-source.zip" in bucket "gcf-sources-967732204245-us-central1": failed to get storage object: Get "https://storage.googleapis.com/storage/v1/b/gcf-sources-967732204245-us-central1/o/MY_FUN-ba7acf95-4297-46b3-b76e-1c25ba21ba03%2Fversion-14%2Ffunction-source.zip?alt=json&prettyPrint=false": RPC::UNREACHABLE: gslb: no reachable backends
ERROR
ERROR: build step 0 "us-central1-docker.pkg.dev/PROJECT_ID/webcrawler-repo/webcrawler:tag1" failed: step exited with non-zero status: 1
Any idea how to resolve this issue?
Thanks!

Cloud Functions lets you deploy only your code; the packaging into a container, with Buildpacks, is performed automatically for you.
If you already have a container, the best solution is to deploy it to Cloud Run. If your web server listens on port 5000, don't forget to override this value during deployment (use the --port parameter).
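A minimal sketch of such a deployment, assuming the image has already been pushed to Artifact Registry under the tag used in the cloudbuild.yaml (the service name is a placeholder):
# Deploy the existing container to Cloud Run instead of Cloud Functions.
# --port tells Cloud Run which container port receives requests (gunicorn
# binds to port 5000 in the Dockerfile above).
gcloud run deploy webcrawler \
  --image=us-central1-docker.pkg.dev/$PROJECT_ID/webcrawler-repo/webcrawler:tag1 \
  --region=us-central1 \
  --port=5000 \
  --no-allow-unauthenticated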
To plug your Pub/Sub topic into your Cloud Run service, you have 2 solutions:
Either you manually create a Pub/Sub push subscription to your Cloud Run service (a sketch follows this list),
Or you use Eventarc to plug it into your Cloud Run service.
In both cases, you need to take care of security by using a service account with the role run.invoker on the Cloud Run service, which you pass to the Pub/Sub push subscription or to Eventarc.
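A sketch of the manual push-subscription route, assuming the Cloud Run service is named webcrawler and a dedicated service account has already been created; all names below are placeholders:
# Grant the service account permission to invoke the Cloud Run service.
gcloud run services add-iam-policy-binding webcrawler \
  --region=us-central1 \
  --member=serviceAccount:invoker-sa@$PROJECT_ID.iam.gserviceaccount.com \
  --role=roles/run.invoker
# Create a push subscription that authenticates as that service account.
gcloud pubsub subscriptions create webcrawler-sub \
  --topic=MY_TOPIC \
  --push-endpoint=https://webcrawler-<hash>-uc.a.run.app/ \
  --push-auth-service-account=invoker-sa@$PROJECT_ID.iam.gserviceaccount.com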

Related

Install github organization private package using docker build

I am part of an organization X. Here, we have a Python package which is added to requirements.txt. I have access to this repository.
When I do pip install https://github.com/X/repo.git, it works fine, because it uses my Git identity present on the host, i.e. my local machine.
However, when I do pip install with Docker as follows:
FROM python:3.8
COPY ./app ./app
COPY ./requirements.txt ./requirements.txt
# Install git
RUN apt-get update && apt-get install -y git openssh-client
RUN mkdir -p -m 0600 ~/.ssh
RUN ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh pip install git+ssh://git@github.com/X/repo_name.git@setup#egg=repo_name
# Install Dependencies
RUN pip install -r ./requirements.txt
# Configure server
ENV HOST="0.0.0.0"
ENV PORT=5000
# CMD
ENTRYPOINT uvicorn app:app --host ${HOST} --port ${PORT}
# Remove SSH Key
RUN rm ~/.ssh/id_rsa
it throws the following error:
I have set the SSH key in GitHub as well, using the following approach.
But when I do ssh -T username@github.com it throws Permission denied. However, I have owner rights on the repository, which is under an organization.
Not sure how to resolve this issue!
when I do ssh -T username@github.com it throws Permission denied
It will always throw "Permission denied" if you are using username@github.com instead of git@github.com, since the 'username' part is inferred from the public key registered to your GitHub profile. The remote user for the SSH query must be git, a service account for GitHub.
Second, if you import your private key into the image, make sure to use a multi-stage build, as described in "Access Private Repositories from Your Dockerfile Without Leaving Behind Your SSH Keys" by Vladislav Supalov.
And make sure to set the right permissions on the ~/.ssh elements.
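A minimal sketch of checking the key and then building with the local SSH agent forwarded to BuildKit; this relies on the RUN --mount=type=ssh line from the Dockerfile above, and the image tag is a placeholder:
# The SSH test must authenticate as the 'git' user, not your GitHub username.
ssh -T git@github.com
# Build with BuildKit and forward the local ssh-agent to the
# RUN --mount=type=ssh step, so no private key is baked into the image.
DOCKER_BUILDKIT=1 docker build --ssh default -t myapp .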

Docker-compose with extra_hosts fails, but docker build add-host succeeds

Why can't I map this docker command, which pip installs packages from a local network repository hosted at nexus.corp.com:
$> docker build -t demo --no-cache --add-host nexus.corp.com:1.2.3.4 .
which succeeds, into this docker-compose configuration:
version: "3"
services:
  app:
    build:
      context: .
    extra_hosts: ['nexus.corp.com:1.2.3.4']
    command: >
      sh -c "ping -c 4 nexus.corp.com"
which fails during the build step when pip installs packages from the local repository?
Dockerfile
FROM python:3.8-slim
ENV PYTHONUNBUFFERED 1
# Install postgres client
RUN apt-get update
RUN apt-get install -y python3.8-dev
# For testing/debugging
RUN apt-get install -y iputils-ping
RUN pip install -U pip setuptools
WORKDIR /work
# use custom pip config, see below
COPY ./pip.conf /etc/pip.conf
# Install a package hosted at the custom location
RUN pip3 install custom_package
pip.conf
[global]
index = https://nexus.corp.com/repository/corp-pypi-group/pypi
index-url = https://nexus.corp.com/repository/corp-pypi-group/simple
All of this networking is going on through a VPN, so the nexus.corp.com name isn't being served out by a DNS.

Unable to run Flask App using Docker in Windows-10

I've installed and configured Docker (as per the documentation) and I am trying to build a Flask application using tiangolo/uwsgi-nginx-flask:python3.8. I've built a hello-world application and tested it locally by running python manage.py, and the application runs successfully. Link to full Code-File.
Dockerfile:
FROM tiangolo/uwsgi-nginx-flask:python3.8
ENV INSTALL_PATH /usr/src/helloworld
RUN mkdir -p $INSTALL_PATH
# install net-tools
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y \
net-tools \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# set working directory
WORKDIR $INSTALL_PATH
# setup flask environment
# install all requirements
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
# copy all files and folder to docker
COPY . .
# run the application in docker environment
CMD [ "python", "./manage.py" ]
I built the application with docker build --tag hello-world:test . and ran it successfully with docker run -d -p 5000:5000 hello-world:test.
However, I'm unable to open the application at localhost:5000 or 0.0.0.0:5000 or any other port. The application is running, as I can see from the CLI:
But from the browser, the page is not reachable:
A related question suggests checking the container's IP address:
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" hungry_engelbart
>> <no value>
I found another solution at this link, but docker-machine is currently deprecated.
I'm new to Docker; I have tried to run the same thing following this tutorial, but faced similar issues.
Finally, I was able to solve this. I had to configure a new inbound rule under Windows Firewall > Advanced Settings > Inbound Rules > New Inbound Rule. Create a rule that allows a range of local IP addresses, which in my case was 198.168.0.1:198.168.0.100. Finally, you need to run the application on 0.0.0.0, as pointed out by @tentative in the comments. :)
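A minimal sketch of verifying both parts of the fix locally. The image tag matches the build command above; the flask command in the comment assumes manage.py ultimately runs Flask's development server, which is an assumption:
# Inside the container the app must bind to 0.0.0.0, e.g. via Flask's CLI:
#   flask run --host=0.0.0.0 --port=5000
# Then check the published port mapping and the response from the host:
docker run -d --name hello-world-test -p 5000:5000 hello-world:test
docker port hello-world-test      # expect: 5000/tcp -> 0.0.0.0:5000
curl http://localhost:5000/       # expect the hello-world response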

Correct way for deploying dbt with docker and cloud run

I'm trying to deploy dbt to a Google Cloud Run service with a Docker container, following david vasquez and the dbt Docker images. However, when trying to deploy the built image to Cloud Run, I'm getting an error.
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
This is my Dockerfile:
FROM python:3.8.1-slim-buster
RUN apt-get update && apt-get dist-upgrade -y && apt-get install -y --no-install-recommends git software-properties-common make build-essential ca-certificates libpq-dev && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY requirements/requirements.0.17.0rc4.txt ./requirements.0.17.0rc4.txt
RUN pip install --upgrade pip setuptools
RUN pip install -U pip
RUN pip install dbt==0.17.0
RUN pip install --requirement ./requirements.0.17.0rc4.txt
RUN useradd -mU dbt_user
ENV PYTHONIOENCODING=utf-8
ENV LANG C.UTF-8
ENV PORT = 8080
ENV HOST = 0.0.0.0
WORKDIR /usr/app
VOLUME /usr/app
USER dbt_user
CMD ['dbt', 'run']
I understand the health check fails because it can't find a port to listen on, even though I specify one in my ENV.
Can anyone help me with a solution? Thanks in advance.
According to the documentation, one of the requirements to deploy an application on Cloud Run is that it listens for requests on 0.0.0.0 and exposes a port:
The container must listen for requests on 0.0.0.0 on the port to which requests are sent. By default, requests are sent to 8080, but you can configure Cloud Run to send requests to the port of your choice.
dbt is a command-line tool, which means it doesn't listen on any port. So when you deploy to Cloud Run and the platform verifies that the container is listening, the deployment fails with the mentioned error.
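A quick way to see the contract Cloud Run checks for, sketched locally (the image name is a placeholder for the built dbt image):
# Run the image the way Cloud Run does: inject PORT and expect something to
# answer on it. With CMD ['dbt', 'run'] nothing ever listens, so the curl
# below fails -- which is exactly what the Cloud Run health check sees.
docker run --rm -d --name dbt-test -e PORT=8080 -p 8080:8080 my-dbt-image
curl -s http://localhost:8080/ || echo "nothing is listening on 8080"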

No EXPOSE in aws docker fails deployment

I have a Scrapy project run continuously by cron, hosted inside a Docker image.
When I run and deploy this locally, everything works fine. If I try to deploy the same to AWS, I get the following error in the logs:
No EXPOSE directive found in Dockerfile, abort deployment (ElasticBeanstalk::ExternalInvocationError)
The console shows that my container was built correctly, but I cannot use it without an EXPOSE directive.
INFO: Successfully pulled python:2.7
WARN: Failed to build Docker image aws_beanstalk/staging-app, retrying...
INFO: Successfully built aws_beanstalk/staging-app
ERROR: No EXPOSE directive found in Dockerfile, abort deployment
ERROR: [Instance: i-6eebaeaf] Command failed on instance. Return code: 1 Output: No EXPOSE directive found in Dockerfile, abort deployment.
Hook /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
But why is it not possible?
My Dockerfile looks like the following:
FROM python:2.7
MAINTAINER XDF
ENV DIRECTORY /opt/the-flat
# System
##########
RUN apt-get update -y && apt-get upgrade -y && apt-get install -y ntp vim apt-utils
WORKDIR $DIRECTORY
# GIT
##########
# http://stackoverflow.com/questions/23391839/clone-private-git-repo-with-dockerfile
RUN apt-get install -y git
RUN mkdir /root/.ssh/
ADD deploy/git-deply-key /root/.ssh/id_rsa
RUN chmod 0600 /root/.ssh/id_rsa
RUN touch /root/.ssh/known_hosts
RUN ssh-keyscan -t rsa bitbucket.org >> /root/.ssh/known_hosts
RUN ssh -T -o 'ConnectionAttempts=1' git@bitbucket.org
RUN git clone --verbose git@bitbucket.org:XDF/the-flat.git .
# Install
##########
RUN pip install scrapy
RUN pip install MySQL-python
# not working
# apt-get install -y wkhtmltopdf && pip install pdfkit
# else
# https://pypi.python.org/pypi/pdfkit
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y openssl build-essential xorg libssl-dev
RUN wget http://wkhtmltopdf.googlecode.com/files/wkhtmltopdf-0.10.0_rc2-static-amd64.tar.bz2
RUN tar xvjf wkhtmltopdf-0.10.0_rc2-static-amd64.tar.bz2
RUN chown root:root wkhtmltopdf-amd64
RUN mv wkhtmltopdf-amd64 /usr/bin/wkhtmltopdf
RUN pip install pdfkit
# Cron
##########
# http://www.ekito.fr/people/run-a-cron-job-with-docker/
# http://www.corntab.com/pages/crontab-gui
RUN apt-get install -y cron
RUN crontab "${DIRECTORY}/deploy/crontab"
CMD ["cron", "-f"]
It's by design. You need to have an EXPOSE directive in your Dockerfile to tell Beanstalk what port your app will be listening on. Do you have a use case where you cannot or do not want to have EXPOSE in your Dockerfile?
Elastic Beanstalk is designed for web applications, hence the EXPOSE requirement. The use case you demonstrated is that of a jobs (worker) server, which Elastic Beanstalk doesn't handle well.
For your case, either expose a dummy port number or launch an EC2 instance yourself to bypass the EB overhead.
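A minimal sketch of the dummy-port workaround, assuming an EXPOSE 8080 line is added before the CMD in the Dockerfile above; the port number is arbitrary since nothing listens on it, and the image tag is a placeholder:
# After adding `EXPOSE 8080`, confirm the image now advertises a port,
# which is all the Elastic Beanstalk check looks for.
docker build -t the-flat .
docker inspect --format '{{json .Config.ExposedPorts}}' the-flat
# expected output: {"8080/tcp":{}}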
