Install GitHub organization private package using docker build - python

I am part of an organization X. We have a private Python package that is listed in requirements.txt, and I have access to its repository.
When I run pip install git+https://github.com/X/repo.git locally, it works fine, because pip uses the git identity present on my host machine.
However, when I run the pip install inside a Docker build as follows:
FROM python:3.8
COPY ./app ./app
COPY ./requirements.txt ./requirements.txt
# Install git
RUN apt-get update && apt-get install -y git openssh-client
RUN mkdir -p -m 0600 ~/.ssh
RUN ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh pip install git+ssh://git@github.com/X/repo_name.git@setup#egg=repo_name
# Install Dependencies
RUN pip install -r ./requirements.txt
# Configure server
ENV HOST="0.0.0.0"
ENV PORT=5000
# CMD
ENTRYPOINT uvicorn app:app --host ${HOST} --port ${PORT}
# Remove SSH Key
RUN rm ~/.ssh/id_rsa
it throws the following error.
I have also set up the SSH key in GitHub, using the following approach.
But when I run ssh -T username@github.com, it throws Permission denied, even though I have owner rights on the repository, which is under an organization.
Not sure how to resolve this issue!

when I do ssh -T username@github.com it is throwing Permission denied
It will always throw "Permission denied" if you are using username@github.com instead of git@github.com, since the 'username' part is inferred from the public key registered to your GitHub profile. The remote user for the SSH query must be git, a service account for GitHub.
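As a sanity check, a successful SSH test against GitHub looks like this (the name in the reply comes from whichever key GitHub matched):
$ ssh -T git@github.com
Hi username! You've successfully authenticated, but GitHub does not provide shell access.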
Second, if you copy your private key into the image, make sure to use a multi-stage build, as described in "Access Private Repositories from Your Dockerfile Without Leaving Behind Your SSH Keys" by Vladislav Supalov.
And make sure to set the right permissions on the ~/.ssh elements (0700 for the directory, 0600 for the key files).
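Since the question already uses RUN --mount=type=ssh, a minimal sketch of that approach (repository and package names are placeholders taken from the question) would be: enable BuildKit, request a Dockerfile syntax that supports SSH mounts, and never copy the key at all:
# syntax=docker/dockerfile:1
FROM python:3.8
RUN apt-get update && apt-get install -y git openssh-client
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# The key is only mounted for this single RUN step; it is never written to a layer,
# so the final 'rm ~/.ssh/id_rsa' cleanup step becomes unnecessary
RUN --mount=type=ssh pip install git+ssh://git@github.com/X/repo_name.git#egg=repo_name
Then build with the host's SSH agent forwarded:
$ DOCKER_BUILDKIT=1 docker build --ssh default .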

Related

Unable to run Flask App using Docker in Windows-10

I've installed and configured Docker (as per the documentation) and I am trying to build a Flask application using tiangolo/uwsgi-nginx-flask:python3.8. I've built a hello-world application and have tested it locally by running python manage.py, and the application runs successfully. Link to full Code-File.
My Docker version and installation details are as below:
Dockerfile:
FROM tiangolo/uwsgi-nginx-flask:python3.8
ENV INSTALL_PATH /usr/src/helloworld
RUN mkdir -p $INSTALL_PATH
# install net-tools
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y \
net-tools \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# set working directory
WORKDIR $INSTALL_PATH
# setup flask environment
# install all requirements
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
# copy all files and folder to docker
COPY . .
# run the application in docker environment
CMD [ "python", "./manage.py" ]
I built the image with docker build --tag hello-world:test . and ran it successfully with docker run -d -p 5000:5000 hello-world:test.
However, I'm unable to open the application at localhost:5000 or 0.0.0.0:5000 or any other port. The application is running, as I can see from the CLI:
But from the browser the page is not reachable:
A similar question suggests checking the container's IP address:
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" hungry_engelbart
>> <no value>
Found another solution at this link, but docker-machine is currently deprecated.
I'm new to docker, but I have tried to run the same thing following this tutorial, but faced similar issues.
Finally, I was able to solve this. I had to configure a new inbound rule under Windows Firewall > Advanced Settings > Inbound Rules > New Inbound Rule: one that allows a range of local IP addresses, which in my case was 192.168.0.1-192.168.0.100. You also need to run the application on 0.0.0.0, as pointed out by @tentative in the comments. :)
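For reference, binding to 0.0.0.0 in a minimal manage.py might look like the following (assuming the Flask app object lives in app.py; the names are illustrative, not taken from the linked code file):
from app import app  # hypothetical module exposing the Flask app

if __name__ == "__main__":
    # 0.0.0.0 makes the server reachable from outside the container,
    # not just from the container's own loopback interface
    app.run(host="0.0.0.0", port=5000)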

Correct way for deploying dbt with docker and cloud run

I'm trying to deploy dbt on a Google Cloud Run service with a Docker container, following David Vasquez and the dbt Docker images. However, when trying to deploy the built image to Cloud Run, I'm getting an error:
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
This is my Dockerfile:
FROM python:3.8.1-slim-buster
RUN apt-get update && apt-get dist-upgrade -y && apt-get install -y --no-install-recommends git software-properties-common make build-essential ca-certificates libpq-dev && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY requirements/requirements.0.17.0rc4.txt ./requirements.0.17.0rc4.txt
RUN pip install --upgrade pip setuptools
RUN pip install -U pip
RUN pip install dbt==0.17.0
RUN pip install --requirement ./requirements.0.17.0rc4.txt
RUN useradd -mU dbt_user
ENV PYTHONIOENCODING=utf-8
ENV LANG C.UTF-8
ENV PORT = 8080
ENV HOST = 0.0.0.0
WORKDIR /usr/app
VOLUME /usr/app
USER dbt_user
CMD ['dbt', 'run']
I understand the health check fails because it can't find a port to listen on, even though I specify one in my ENV.
Can anyone help me with a solution? Thanks in advance.
According to the documentation, one of the requirements to deploy an application on Cloud Run is that it listens for requests on 0.0.0.0 and exposes a port:
The container must listen for requests on 0.0.0.0 on the port to which requests are sent. By default, requests are sent to 8080, but you can configure Cloud Run to send requests to the port of your choice.
dbt is a command-line tool, which means it doesn't listen on any port. So when Cloud Run verifies whether the deployed container is listening, the check fails with the mentioned error.
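If you still want dbt on Cloud Run, one workaround (a sketch under assumptions, not from the linked guides; the route name and module layout are invented here) is to wrap the dbt invocation behind a minimal HTTP server so the container satisfies the port check:
import os
import subprocess

from flask import Flask

app = Flask(__name__)

@app.route("/run", methods=["POST"])
def run_dbt():
    # Trigger 'dbt run' on demand instead of at container start
    result = subprocess.run(["dbt", "run"], capture_output=True, text=True)
    return result.stdout, (200 if result.returncode == 0 else 500)

if __name__ == "__main__":
    # Cloud Run injects the PORT environment variable (8080 by default)
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))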

Can't build Dockerfile on Ubuntu Server

I'm working on a Python project and I get this problem on an Ubuntu server, while everything works from my local Windows machine. The build stops at the second step, when trying to run the mkdir instruction. It seems that I can't run the typical Ubuntu instructions either (apt-get clean, apt-get update).
Dockerfile
FROM python:3
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install --upgrade pip==20.0.2 && pip install -r requirements.txt
COPY . /code/
Error output:
OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mountpoint for devices not found\"": unknown
Are you able to run the Docker hello-world image? If not, this may indicate a problem with your installation/configuration:
$ docker run hello-world
More information about the post-installation steps can be found here. Otherwise, the first option is to try restarting Docker:
$ sudo systemctl restart docker
The Docker daemon must run with root privileges in the background. I have experienced issues before where, on a newly-installed machine, the updated group permissions for the daemon had not been fully applied; restarting the daemon, or logging out and back in, might fix this.
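If the group membership itself is missing, the standard post-installation step is to add your user to the docker group and then log out and back in:
$ sudo usermod -aG docker $USER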
Furthermore, when you declare a WORKDIR inside a Dockerfile, that path will automatically be created if it does not already exist. Once you have set your WORKDIR, all your paths can and should be relative to it where possible. Knowing this, we can simplify the Dockerfile:
FROM python:3
WORKDIR /code
COPY requirements.txt .
RUN pip install --upgrade pip==20.0.2 && pip install -r requirements.txt
COPY . .
That may be enough to solve your issue. In my experience, Docker build tracebacks can be rather vague at times, but it sounds like that particular error could stem from a failed attempt to create a directory, either from a permission issue on the host machine or a syntax issue inside the container.
I solved this problem by (re)installing with apt, instead of snap:
sudo snap remove docker
sudo apt install docker.io
Test with (now working):
sudo docker run hello-world

Creating non-root user in jupyter dockerfile

I am getting started with Docker and have built an image with Jupyter and some Python libraries. The end user should be able to use Jupyter and access specific host data directories through the container (with read/write rights), but must be a non-root user. Here is my Dockerfile so far:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y \
python-pip
RUN pip install --upgrade pip && pip install jupyter \
pandas \
numpy
RUN useradd -r -g users A && \
mkdir /myhome && \
chown -R A:users /myhome
EXPOSE 8888
WORKDIR /myhome
CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0"]
I run this with docker run -it -p 8888:8888 -u="A" -v /some/host/files:/myhome
But then I get a Jupyter error that says OSError: [Errno 13] Permission denied: '/home/A'
Any help appreciated. Many thanks!
When you start your container with --entrypoint=bash, you will find that the home directory /home/A of your user has not been created. To create it, you need to add the -m flag to the useradd command.
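Applied to the Dockerfile from the question, the fix is a one-flag change:
RUN useradd -r -m -g users A && \
    mkdir /myhome && \
    chown -R A:users /myhome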
Some more info: you might want to take a look at the docker-stacks projects (https://github.com/jupyter/docker-stacks/tree/master/base-notebook and derived images). That seems to match what you're trying to do, and adds some other helpful stuff. E.g. when running a dockerized Jupyter, you need a "PID 1 reaper"; otherwise your exited notebook kernels turn into zombies (you can google for that :-).
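As an aside, an easy way to get such a reaper is Docker's built-in init process, which runs tini as PID 1 (the image name here is a placeholder):
$ docker run --init -it -p 8888:8888 your-jupyter-image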
Also, when sharing host files with a non-root user inside the container, you will often need to set the UID of your container user to a specific value matching the host system, so that the file system permissions line up properly. The docker-stacks containers support that too. Their Dockerfiles might at least help as a boilerplate to roll your own.
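For instance (the UID value 1000 is illustrative; use the output of id -u on your host), you could pin the UID at build time:
RUN useradd -r -m -g users -u 1000 A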

No EXPOSE in aws docker fails deployment

I have a Scrapy project run continuously by cron, hosted inside a Docker image.
When I run and deploy this locally, everything works fine. If I try to deploy the same to AWS, I get the following error in the logs:
No EXPOSE directive found in Dockerfile, abort deployment (ElasticBeanstalk::ExternalInvocationError)
The console shows that my container was built correctly, but I cannot use it without an EXPOSE directive.
INFO: Successfully pulled python:2.7
WARN: Failed to build Docker image aws_beanstalk/staging-app, retrying...
INFO: Successfully built aws_beanstalk/staging-app
ERROR: No EXPOSE directive found in Dockerfile, abort deployment
ERROR: [Instance: i-6eebaeaf] Command failed on instance. Return code: 1 Output: No EXPOSE directive found in Dockerfile, abort deployment.
Hook /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
But why is it not possible?
My Dockerfile looks like the following:
FROM python:2.7
MAINTAINER XDF
ENV DIRECTORY /opt/the-flat
# System
##########
RUN apt-get update -y && apt-get upgrade -y && apt-get install -y ntp vim apt-utils
WORKDIR $DIRECTORY
# GIT
##########
# http://stackoverflow.com/questions/23391839/clone-private-git-repo-with-dockerfile
RUN apt-get install -y git
RUN mkdir /root/.ssh/
ADD deploy/git-deply-key /root/.ssh/id_rsa
RUN chmod 0600 /root/.ssh/id_rsa
RUN touch /root/.ssh/known_hosts
RUN ssh-keyscan -t rsa bitbucket.org >> /root/.ssh/known_hosts
RUN ssh -T -o 'ConnectionAttempts=1' git@bitbucket.org
RUN git clone --verbose git@bitbucket.org:XDF/the-flat.git .
# Install
##########
RUN pip install scrapy
RUN pip install MySQL-python
# not working
# apt-get install -y wkhtmltopdf && pip install pdfkit
# else
# https://pypi.python.org/pypi/pdfkit
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y openssl build-essential xorg libssl-dev
RUN wget http://wkhtmltopdf.googlecode.com/files/wkhtmltopdf-0.10.0_rc2-static-amd64.tar.bz2
RUN tar xvjf wkhtmltopdf-0.10.0_rc2-static-amd64.tar.bz2
RUN chown root:root wkhtmltopdf-amd64
RUN mv wkhtmltopdf-amd64 /usr/bin/wkhtmltopdf
RUN pip install pdfkit
# Cron
##########
# http://www.ekito.fr/people/run-a-cron-job-with-docker/
# http://www.corntab.com/pages/crontab-gui
RUN apt-get install -y cron
RUN crontab "${DIRECTORY}/deploy/crontab"
CMD ["cron", "-f"]
It's by design. You need an EXPOSE directive in your Dockerfile to tell Elastic Beanstalk which port your app will be listening on. Do you have a use case where you cannot, or do not want to, have EXPOSE in your Dockerfile?
Elastic Beanstalk is designed for web applications, hence the EXPOSE requirement. The use case you demonstrated is that of a jobs (worker) server, which Elastic Beanstalk doesn't handle well.
For your case, either expose a dummy port number or launch an EC2 instance yourself to bypass EB altogether.
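The dummy-port variant is a one-line addition to the Dockerfile in the question (8080 is an arbitrary choice; cron remains the actual workload):
# Dummy port to satisfy Elastic Beanstalk's deployment check
EXPOSE 8080
CMD ["cron", "-f"]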
