How to deploy a custom docker image on Elastic Beanstalk? - python

Looking at this blog - 5. Create Dockerfile - it seems I have to create a new Dockerfile pointing to my private image on Docker.io.
And since the last command must start an executable (otherwise the Docker container would simply exit), there is supervisord at the end:
# This is the location of our Docker image.
FROM flux7/wp-site
RUN apt-get install supervisor
RUN mkdir -p /var/log/supervisor
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 80
CMD supervisord -c /etc/supervisor/conf.d/supervisord.conf
This is a bit confusing to me, because I have a fully tested custom Docker image that ends with supervisord, see below:
FROM ubuntu:14.04.2
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get -y update && apt-get upgrade -y
RUN apt-get install supervisor python build-essential python-dev python-pip python-setuptools -y
RUN apt-get install libxml2-dev libxslt1-dev python-dev -y
RUN apt-get install libpq-dev postgresql-common postgresql-client -y
RUN apt-get install openssl openssl-blacklist openssl-blacklist-extra -y
RUN apt-get install nginx -y
RUN pip install "pip>=7.0"
RUN pip install virtualenv uwsgi
RUN mkdir -p /var/log/supervisor
ADD canonicaliser_api /home/ubuntu/canonicaliser_api
ADD config_local.py /home/ubuntu/canonicaliser_api/config/config_local.py
RUN virtualenv /home/ubuntu/canonicaliser_api/venv
RUN source /home/ubuntu/canonicaliser_api/venv/bin/activate && pip install -r /home/ubuntu/canonicaliser_api/requirements.txt
RUN export CFLAGS=-I/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/
RUN source /home/ubuntu/canonicaliser_api/venv/bin/activate && cd /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions/ && python setup.py build_ext --inplace
RUN cp /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions/canonicaliser/cython_extensions/*.so /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions
RUN rm -rf /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions/canonicaliser
RUN rm -r /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions/build
RUN mkdir /var/run/flask-uwsgi
RUN chown -R www-data:www-data /var/run/flask-uwsgi
RUN mkdir /var/log/flask-uwsgi
ADD flask-uwsgi.ini /etc/flask-uwsgi/
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 8888
CMD ["/usr/bin/supervisord"]
So how do I tell Elastic Beanstalk to serve my existing custom image (with its own CMD), instead of wrapping it in yet another Dockerfile that ends with supervisord? Unless I'm overlooking something....
UPDATE
I have applied the suggested updates, but it fails to authenticate to the private repo on Docker Hub.
[2015-08-11T14:02:10.489Z] INFO [1858] - [CMD-Startup/StartupStage0/AppDeployPreHook/03build.sh] : Activity execution failed, because: WARNING: Invalid auth configuration file
Pulling repository houmie/canon
time="2015-08-11T14:02:08Z" level="fatal" msg="Error: image houmie/canon:latest not found"
Failed to pull Docker image houmie/canon:latest, retrying...
WARNING: Invalid auth configuration file
The dockercfg file, inside a folder called docker in the S3 bucket, is:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "xxxx",
      "email": "xxx@gmail.com"
    }
  }
}
The Dockerrun.aws.json is:
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "dd-xxx-ir-01",
    "Key": "docker/dockercfg"
  },
  "Image": {
    "Name": "houmie/canon",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8888"
    }
  ]
}

When deploying containers with Elastic Beanstalk, you can tell it to build your image locally on each host from a Dockerfile defined by you, or to use a pre-built image from a registry.
You don't necessarily have to recreate your image, you may just use one you already have (be it on Docker Hub or a private registry).
If your application runs on an image that is available in a hosted repository, you can specify the image in a Dockerrun.aws.json file and omit the Dockerfile.
If your registry account demands authentication, then you need to provide a .dockercfg file in an S3 bucket, which will be pulled by the Docker hosts (so the instances need the proper permissions, granted via an IAM role).
Declare the .dockercfg file in the Authentication parameter of the Dockerrun.aws.json file. Make sure that the Authentication parameter contains a valid Amazon S3 bucket and key. The Amazon S3 bucket must be hosted in the same region as the environment that is using it. Elastic Beanstalk will not download files from Amazon S3 buckets hosted in other regions. Grant permissions for the action s3:GetObject to the IAM role in the instance profile.
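For reference, a minimal sketch of the policy statement that grants that read access (the bucket name and key mirror the example below and are placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::myBucket/.dockercfg"
    }
  ]
}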
So your Dockerrun.aws.json may look like this (assuming your image is hosted on Docker Hub):
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "myBucket",
    "Key": ".dockercfg"
  },
  "Image": {
    "Name": "yourRegistryUser/yourImage",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "1234"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/var/app/mydb",
      "ContainerDirectory": "/etc/mysql"
    }
  ],
  "Logging": "/var/log/nginx"
}
Look at the official documentation for further details about the configuration and available options.
As for what command you run (supervisord or anything else), it doesn't matter: since you omit the Dockerfile and point at a pre-built image, the CMD already baked into that image is used as-is.
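As a usage sketch (assuming the EB CLI is installed; the zip name is arbitrary), the application bundle only needs the Dockerrun.aws.json file:
# Option 1: upload a zip containing only Dockerrun.aws.json via the console.
zip deploy.zip Dockerrun.aws.json
# Option 2: use the EB CLI from the directory containing Dockerrun.aws.json.
eb init
eb deploy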

Related

Install github organization private package using docker build

I am part of an organization X. We have a Python package there which is added to requirements.txt. I have access to this repository.
When I do pip install https://github.com/X/repo.git, it works fine, because it uses my git identity present on the host (my local machine).
However, when I do the pip install inside a Docker build, as follows,
FROM python:3.8
COPY ./app ./app
COPY ./requirements.txt ./requirements.txt
# Install git
RUN apt-get update && apt-get install -y git openssh-client
RUN mkdir -p -m 0600 ~/.ssh
RUN ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh pip install git+ssh://git@github.com/X/repo_name.git@setup#egg=repo_name
# Install Dependencies
RUN pip install -r ./requirements.txt
# Configure server
ENV HOST="0.0.0.0"
ENV PORT=5000
# CMD
ENTRYPOINT uvicorn app:app --host ${HOST} --port ${PORT}
# Remove SSH Key
RUN rm ~/.ssh/id_rsa
it is throwing the following error
I have also set up the SSH key in GitHub using the following approach.
But when I do ssh -T username@github.com it throws Permission denied, even though I have owner rights on the repository, which is under an organization.
Not sure how to resolve this issue!
when I do ssh -T username@github.com it is throwing Permission denied
It will always throw "Permission denied" if you are using username@github.com instead of git@github.com: the 'username' part is inferred from the public key registered to your GitHub profile, but the remote user for the SSH query must be git, a service account on GitHub's side.
Second, if you import your private key into the image, make sure to use a multi-stage build, as described in "Access Private Repositories from Your Dockerfile Without Leaving Behind Your SSH Keys" by Vladislav Supalov.
And make sure to set the right permissions on the ~/.ssh files.
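Since the Dockerfile above already uses RUN --mount=type=ssh, here is a minimal sketch of the build-side steps it relies on (the key path and image tag are assumptions):
# Load the private key into a local ssh-agent (path is an assumption).
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
# Sanity check: the remote user must be git, not your username.
ssh -T git@github.com
# Forward the agent socket into the build so RUN --mount=type=ssh can use it.
DOCKER_BUILDKIT=1 docker build --ssh default -t myimage .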

Unable to run Flask App using Docker in Windows-10

I've installed and configured Docker (as per the documentation) and I am trying to build a Flask application using tiangolo/uwsgi-nginx-flask:python3.8. I've built a hello-world application and have tested it locally by running python manage.py, and the application runs successfully. Link to full Code-File.
My Docker version and installation are as below:
Dockerfile:
FROM tiangolo/uwsgi-nginx-flask:python3.8
ENV INSTALL_PATH /usr/src/helloworld
RUN mkdir -p $INSTALL_PATH
# install net-tools
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y \
net-tools \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# set working directory
WORKDIR $INSTALL_PATH
# setup flask environment
# install all requirements
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
# copy all files and folder to docker
COPY . .
# run the application in docker environment
CMD [ "python", "./manage.py" ]
I built the application with docker build --tag hello-world:test . and ran it successfully with docker run -d -p 5000:5000 hello-world:test.
However, I'm unable to open the application at localhost:5000 or 0.0.0.0:5000 or any other port. The application is running, as I can see from the CLI:
But from the browser the page is not reachable:
This question mentions checking the IP address:
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" hungry_engelbart
>> <no value>
I found another solution at this link, but docker-machine is currently deprecated.
I'm new to Docker; I have tried to run the same thing following this tutorial, but faced similar issues.
I was finally able to solve this. I had to configure a new inbound rule under Windows Firewall > Advanced Settings > Inbound Rules > New Inbound Rule: create a rule that allows a range of local IP addresses, which in my case was 198.168.0.1:198.168.0.100. Finally, you need to run the application on 0.0.0.0, as pointed out by @tentative in the comments. :)
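For the 0.0.0.0 part, a minimal sketch of what the entry point in manage.py might look like (the import path is an assumption; only the host/port arguments matter here):
# manage.py (sketch): bind to all interfaces so the published port
# is reachable from outside the container, not just from localhost.
from app import app  # assumed import path

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)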

How to deploy google cloud functions using custom container image

To enable the webdriver in my google cloud function, I created a custom container using a docker file:
FROM python:3.7
COPY . /
WORKDIR /
RUN pip3 install -r requirements.txt
RUN apt-get update
RUN apt-get install -y gconf-service libasound2 libatk1.0-0 libcairo2 libcups2 libfontconfig1 libgdk-pixbuf2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libxss1 fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils
#download and install chrome
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
#install python dependencies
COPY requirements.txt requirements.txt
RUN pip install -r ./requirements.txt
# Downloading gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
# Installing the package
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh
# Adding the package path to local
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
#some envs
ENV PORT 5000
#copy local files
COPY . .
CMD exec gunicorn --bind :${PORT} --workers 1 --threads 8 main:app
ENTRYPOINT ["webcrawler"]
I installed gcloud in this Docker image so that I can use gcloud to deploy my Cloud Function. Then I deploy my script using this cloudbuild.yaml:
steps:
- name: 'us-central1-docker.pkg.dev/$PROJECT_ID/webcrawler-repo/webcrawler:tag1'
  entrypoint: 'gcloud'
  args: ['functions', 'deploy', 'MY_FUN', '--trigger-topic=MY_TOPIC', '--runtime=python37', '--entry-point=main', '--region=us-central1', '--memory=512MB', '--timeout=540s']
  id: 'deploying MY_FUN'
  dir: 'MY_DIR'
However, I end up getting this error for my deployment:
ERROR: (gcloud.functions.deploy) OperationError: code=3, message=Build failed: invalid storage source object "MY_FUN-ba7acf95-4297-46b3-b76e-1c25ba21ba03/version-14/function-source.zip" in bucket "gcf-sources-967732204245-us-central1": failed to get storage object: Get "https://storage.googleapis.com/storage/v1/b/gcf-sources-967732204245-us-central1/o/MY_FUN-ba7acf95-4297-46b3-b76e-1c25ba21ba03%2Fversion-14%2Ffunction-source.zip?alt=json&prettyPrint=false": RPC::UNREACHABLE: gslb: no reachable backends
ERROR
ERROR: build step 0 "us-central1-docker.pkg.dev/PROJECT_ID/webcrawler-repo/webcrawler:tag1" failed: step exited with non-zero status: 1
Any idea how to resolve this issue?
Thanks!
Cloud Functions allows you to deploy only your code; the packaging into a container, with buildpacks, is performed automatically for you.
If you already have a container, the best solution is to deploy it on Cloud Run. If your web server listens on port 5000, don't forget to override this value during deployment (use the --port parameter).
To plug your Pub/Sub topic into your Cloud Run service, you have 2 solutions (see the sketch below):
Either you manually create a Pub/Sub push subscription to your Cloud Run service,
or you use Eventarc to plug it into your Cloud Run service.
In both cases, take care of security by using a service account with the run.invoker role on the Cloud Run service, and pass that service account to the Pub/Sub push subscription or to Eventarc.
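A hedged sketch of the manual route (the service name, subscription name, Cloud Run URL and service account are placeholders; the image and topic come from the question):
# Deploy the existing image to Cloud Run, overriding the listening port.
gcloud run deploy webcrawler \
  --image=us-central1-docker.pkg.dev/$PROJECT_ID/webcrawler-repo/webcrawler:tag1 \
  --region=us-central1 \
  --port=5000
# Create a Pub/Sub push subscription that calls the Cloud Run URL,
# authenticating with a service account that holds roles/run.invoker.
gcloud pubsub subscriptions create webcrawler-sub \
  --topic=MY_TOPIC \
  --push-endpoint=https://webcrawler-xxxx-uc.a.run.app/ \
  --push-auth-service-account=invoker@$PROJECT_ID.iam.gserviceaccount.com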

Unable to locate credentials inside fargate container, using Boto3

I am trying to create a container that splits files and then uploads them to an S3 bucket. The process works as intended, but when it tries to send the file to S3 it fails with the error Unable to locate credentials inside the Fargate container.
My dockerfile looks like this:
FROM python:3.8-slim
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir /tmp/splitter
RUN pip install --upgrade pip && \
pip install --no-cache-dir requests boto3
RUN wget -O /tmp/init.sh (WHATEVER) \
&& chmod +x /tmp/init.sh
CMD /tmp/init.sh
I have my role set up like the ecsTaskExecutionRole that appears in Amazon's documentation.
ecsTaskExecutionRole is not for your container to access S3. It is for ECS itself, e.g. to pull your Docker image from ECR.
For your application's permissions inside the container, you need a task role, not the task execution role. It can be confusing because both are named similarly and both have the same trust policy.
The task role can be specified alongside the task execution role:
The roles can also be set at the task definition level, as in the sketch below:
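A minimal sketch of the relevant task definition fields (account ID, role names and image are placeholders):
{
  "family": "file-splitter",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "taskRoleArn": "arn:aws:iam::123456789012:role/myS3AccessTaskRole",
  "containerDefinitions": [
    {
      "name": "splitter",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/splitter:latest",
      "essential": true
    }
  ]
}
With a task role attached, boto3 picks up its credentials automatically from the container credentials endpoint; no keys need to be baked into the image.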

Port mapping in Docker

I created a Docker image for a sample Python Pyramid app. My Dockerfile is this:
FROM ubuntu:16.04
RUN apt-get update -y && \
apt-get install -y python-pip python-dev curl && \
pip install --upgrade pip setuptools
WORKDIR /app
COPY . /app
EXPOSE 6543
RUN pip install -e .
ENTRYPOINT [ "pserve" ]
CMD [ "development.ini" ]
My build command is this:
docker build -t pyramid_app:latest .
My run command is this:
docker run -d -p 6543:6543 pyramid_app
When I try to access http://localhost:6543 I get an error:
Failed to load resource: net::ERR_SOCKET_NOT_CONNECTED
When I curl from inside the container it works fine.
It would be great if someone could help me figure out why my port mapping isn't working.
Thanks.
In your pserve config (development.ini), change
[server:main]
listen = 127.0.0.1:6543
to
[server:main]
listen = *:6543
Otherwise the web server will only accept connections from inside the Docker container itself.
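A quick verification sketch after the change (image tag and port taken from the question):
# Rebuild with the updated development.ini and publish the port again.
docker build -t pyramid_app:latest .
docker run -d -p 6543:6543 pyramid_app
# The app should now answer from the host, not only from inside the container.
curl http://localhost:6543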
