Unable to run Flask App using Docker in Windows-10 - python

I've installed and configured Docker (as per the documentation) and I am trying to build a Flask application using tiangolo/uwsgi-nginx-flask:python3.8. I've built a hello-world application and tested it locally by running python manage.py; the application runs successfully. Link to full Code-File.
My Dockerfile is as below:
FROM tiangolo/uwsgi-nginx-flask:python3.8
ENV INSTALL_PATH /usr/src/helloworld
RUN mkdir -p $INSTALL_PATH
# install net-tools
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y \
net-tools \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# set working directory
WORKDIR $INSTALL_PATH
# setup flask environment
# install all requirements
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
# copy all files and folder to docker
COPY . .
# run the application in docker environment
CMD [ "python", "./manage.py" ]
I built the image with docker build --tag hello-world:test . and ran it successfully with docker run -d -p 5000:5000 hello-world:test.
However, I'm unable to open the application at localhost:5000 or 0.0.0.0:5000 or any other port. The application is running, as I can see from the CLI, but from the browser the page is not reachable.
A related question mentions checking the IP address:
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" hungry_engelbart
>> <no value>
Found another solution at this link, but docker-machine is currently deprecated.
I'm new to Docker. I have also tried to run the same thing following this tutorial, but faced similar issues.

Finally, I was able to solve this. I had to configure a new inbound rule under Windows Firewall > Advanced Settings > Inbound Rules > New Inbound Rule. Create a rule that allows a range of local IP addresses, which in my case was 198.168.0.1 to 198.168.0.100. Then you need to run the application on 0.0.0.0, as pointed out by @tentative in the comments. :)
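For reference, a minimal sketch of a manage.py that binds to 0.0.0.0 (assuming the plain Flask dev server is used, as in the CMD above) might look like this:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello, World!"

if __name__ == "__main__":
    # 0.0.0.0 makes Flask listen on all container interfaces,
    # so -p 5000:5000 can forward traffic from the Windows host
    app.run(host="0.0.0.0", port=5000)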

Related

No such file or directory error when running Docker container

I have a REST API for a Flask app with an Oracle database, for which I use Oracle Instant Client.
I managed to run the app from my computer and it works fine; my task is to make a Dockerfile for this app. I don't have much experience with Docker.
This is the Dockerfile that I have written:
FROM python:3.7.5-slim-buster
# Installing Oracle instant client
WORKDIR /opt/oracle
RUN apt-get update && apt-get install -y libaio1 wget unzip \
&& wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip \
&& unzip instantclient-basiclite-linuxx64.zip \
&& rm -f instantclient-basiclite-linuxx64.zip \
&& cd /opt/oracle/instantclient* \
&& rm -f *jdbc* *occi* *mysql* *README *jar uidrvci genezi adrci \
&& echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf \
&& ldconfig
WORKDIR /app
COPY . .
EXPOSE 5000
CMD ["python", "/app/__init__.py"]
I use the following commands:
docker build - < Dockerfile
And the Docker image builds with no errors.
docker run -d -p 5000:5000 (docker image id)
docker start -ai (docker container id)
And I get this error: python: can't open file '/app/__init__.py': [Errno 2] No such file or directory
The folder structure of the app on my computer is the following:
C:\Proiecte_python\Flask_Docker_App-Start\app
and in app are the Oracle Instant Client, the Python file, and the Dockerfile.
Can someone please help me? I think something is wrong with the CMD path in the Dockerfile, or something like that. I have tried many variants but none work.
The last line of your Dockerfile
CMD ["python", "/app/__init__.py"]
is equivalent to executing
python /app/__init__.py
The error you are getting is that the file __init__.py does not exist.
The lines
WORKDIR /app
COPY . .
are telling your container to cd into the /app directory, then copy all files from your host machine (e.g. your physical machine) into the /app directory of the container. (COPY . . means copy from the current directory of your host, i.e. the location you're running docker commands from, into the current directory of the container, /app.)
It seems that as part of receiving the Dockerfile you should have also downloaded the __init__.py file, and then the Dockerfile would have copied that into your container for you.
Alternatively, you may have missed steps in your instructions where you were meant to write your own __init__.py file for testing.
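For testing, a minimal __init__.py that would satisfy the CMD could look like this (a sketch, assuming Flask is installed in the image; this is not the asker's actual app):
# /app/__init__.py - placeholder app so the container has something to run
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "OK"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)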
Either way, your solution is to find the __init__.py file and put it into your current working directory (C:\Proiecte_python\Flask_Docker_App-Start\app), and ensure that you run your docker build and docker run commands from that same directory, e.g.
cd C:\Proiecte_python\Flask_Docker_App-Start\app
docker build <....>
docker run <....>
docker start <....>
Or your other solution is to go back to the instructions and ensure that you have created the python file and put it in the correct place.
As a very basic Flask/Docker tutorial, see the link below:
https://runnable.com/docker/python/dockerize-your-flask-application

socket.error: [Errno 99] Address not available: getting this error while trying to link a container with a CouchDB container in Docker

I am trying to make a Docker image of a web application that is based on Python. As I try to link the official CouchDB container with the container that I have created, I get the "Address not available" error. docker ps -a shows me that CouchDB is running fine.
The commands that I have used in the terminal are:
docker run -p 5984:5984 --name my-couchdb -d couchdb
docker run --name my-couchdb-app --link my-couchdb:couchdb webpage:test
My Docker File for webpage:test is given below:
FROM python:2.7.15-alpine3.6
RUN apk update && \
apk upgrade && \
apk add bash vim sudo
RUN mkdir /home/WebDocker/
ADD ./Webpage1 /home/WebDocker/Webpage1
ADD ./requirements.txt /home/WebDocker/requirements.txt
WORKDIR /home/WebDocker
RUN pip install -r /home/WebDocker/requirements.txt
RUN chmod +x /home/WebDocker/Webpage1/main.py
EXPOSE 8080
ENTRYPOINT ["python","/home/WebDocker/Webpage1/main.py"]

How can I add software or other packages to a docker container?

I have pulled jenkins container from docker hub like this:
docker pull jenkins
The container runs and I can access the Jenkins UI at:
http://localhost:8080
My question is:
If I want to be able to create a Jenkins job that pulls from a GitHub repo, and I want to run some Python tests from one of the test files of that repo, how can I install extra packages such as virtualenvwrapper, pip, pytest, nose, selenium, etc.?
It appears that the Docker container does not share any files with the local host file system.
How can I install such packages in this running container?
Thanks
You will need to install all your dependencies at docker container build time.
You can make your own Dockerfile based on the official jenkins image, and then put custom stuff in there. Your Dockerfile can look like:
FROM jenkins:latest
MAINTAINER Becks
RUN apt-get update && apt-get install -y {space-delimited list of packages}
Then, you can do something like...
docker build -t jenkins-docker --file Dockerfile .
docker run -it -d --name=jenkins-docker jenkins-docker
I might not have written all the syntax correctly, but this is basically what you need to do. If you want the run step to spin up Jenkins, follow along with what they are doing in the existing Dockerfile here and add the relevant sections to your Dockerfile, i.e. add some RUN steps to run Jenkins.
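For the packages named in the question (pip, pytest, nose, selenium, virtualenvwrapper), a hedged sketch of such a Dockerfile might look like this (the exact apt package names are an assumption and depend on the base image's Debian release):
FROM jenkins:latest
USER root
# system-level Python tooling (package names assumed for a Debian-based image)
RUN apt-get update && apt-get install -y python python-pip && rm -rf /var/lib/apt/lists/*
# test tooling that the Jenkins jobs will invoke
RUN pip install virtualenvwrapper pytest nose selenium
# drop back to the unprivileged jenkins user
USER jenkins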
I came across this page, which approaches a similar problem, although it also mounts the Docker socket inside another container, to kind of connect one container to another. Given that it's an external link, here's the relevant Dockerfile from there:
FROM jenkins:1.596
USER root
RUN apt-get update \
&& apt-get install -y sudo \
&& rm -rf /var/lib/apt/lists/*
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
USER jenkins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
And this is how you can spin it up.
docker build -t myjenk .
...
Successfully built 471fc0d22bff
$ docker run -d -v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/usr/bin/docker -p 8080:8080 myjenk
I strongly suggest going through that post. Its pretty awesome.

Creating non-root user in jupyter dockerfile

I am starting with Docker and built an image with Jupyter and some Python libraries. The end user should be able to use Jupyter and access specific host data directories through the container (with read/write rights), but must be a non-root user. Here is my Dockerfile so far:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y \
python-pip
RUN pip install --upgrade pip && pip install jupyter \
pandas \
numpy
RUN useradd -r -g users A && \
mkdir /myhome && \
chown -R A:users /myhome
EXPOSE 8888
WORKDIR /myhome
CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0"]
I run this by doing docker run -it -p 8888:8888 -u="A" -v /some/host/files:/myhome
But then I get a Jupyter error that says OSError: [Errno 13] Permission denied: '/home/A'
Any help appreciated. Many thanks!
When you start your container with --entrypoint=bash, you will find that the home directory /home/A of your user has not been created. To create it, you need to add the -m flag to the useradd command.
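In other words, the relevant part of the Dockerfile becomes something like this (a sketch):
RUN useradd -m -r -g users A && \
    mkdir /myhome && \
    chown -R A:users /myhome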
Some more info: You might want to take a look at the docker-stacks projects (https://github.com/jupyter/docker-stacks/tree/master/base-notebook and derived images). That seems to match with what you're trying to do and adds some other helpful stuff. E.g. when running a dockerized jupyter, you need a "PID 1 reaper"; otherwise your exited notebook kernels turn into zombies (you can google for that :-)
Also, when sharing host files with a non-root user inside the container, you will often need to set the UID of your container user to some specific value matching with the host system, so the file system permissions are properly matched. The docker-stacks containers support that too. Their Dockerfiles might at least help as a boilerplate to run your own.
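For example (a sketch, assuming the host user's UID is 1000), the container user can be created with a matching UID so that files written to the mounted volume end up owned by the host user:
# 1000 is an assumption; use the UID reported by `id -u` on the host
RUN useradd -m -u 1000 -g users A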

No EXPOSE in aws docker fails deployment

I have a Scrapy project run continuously by cron, hosted inside a Docker image.
When I run and deploy this locally everything works fine. If I try to deploy the same to AWS I get the following error inside the logs:
No EXPOSE directive found in Dockerfile, abort deployment (ElasticBeanstalk::ExternalInvocationError)
The console shows that my container was built correctly, but I cannot deploy it without an EXPOSE directive.
INFO: Successfully pulled python:2.7
WARN: Failed to build Docker image aws_beanstalk/staging-app, retrying...
INFO: Successfully built aws_beanstalk/staging-app
ERROR: No EXPOSE directive found in Dockerfile, abort deployment
ERROR: [Instance: i-6eebaeaf] Command failed on instance. Return code: 1 Output: No EXPOSE directive found in Dockerfile, abort deployment.
Hook /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
But why is it not possible?
My Dockerfile looks like the following:
FROM python:2.7
MAINTAINER XDF
ENV DIRECTORY /opt/the-flat
# System
##########
RUN apt-get update -y && apt-get upgrade -y && apt-get install -y ntp vim apt-utils
WORKDIR $DIRECTORY
# GIT
##########
# http://stackoverflow.com/questions/23391839/clone-private-git-repo-with-dockerfile
RUN apt-get install -y git
RUN mkdir /root/.ssh/
ADD deploy/git-deply-key /root/.ssh/id_rsa
RUN chmod 0600 /root/.ssh/id_rsa
RUN touch /root/.ssh/known_hosts
RUN ssh-keyscan -t rsa bitbucket.org >> /root/.ssh/known_hosts
RUN ssh -T -o 'ConnectionAttempts=1' git@bitbucket.org
RUN git clone --verbose git@bitbucket.org:XDF/the-flat.git .
# Install
##########
RUN pip install scrapy
RUN pip install MySQL-python
# not working
# apt-get install -y wkhtmltopdf && pip install pdfkit
# else
# https://pypi.python.org/pypi/pdfkit
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y openssl build-essential xorg libssl-dev
RUN wget http://wkhtmltopdf.googlecode.com/files/wkhtmltopdf-0.10.0_rc2-static-amd64.tar.bz2
RUN tar xvjf wkhtmltopdf-0.10.0_rc2-static-amd64.tar.bz2
RUN chown root:root wkhtmltopdf-amd64
RUN mv wkhtmltopdf-amd64 /usr/bin/wkhtmltopdf
RUN pip install pdfkit
# Cron
##########
# http://www.ekito.fr/people/run-a-cron-job-with-docker/
# http://www.corntab.com/pages/crontab-gui
RUN apt-get install -y cron
RUN crontab "${DIRECTORY}/deploy/crontab"
CMD ["cron", "-f"]
It's by design. You need to have an EXPOSE directive in your Dockerfile to tell Beanstalk what port your app will be listening on. Do you have a use case where you cannot or do not want to have EXPOSE in your Dockerfile?
ElasticBeanstalk is designed for web applications, hence the EXPOSE requirement. The use case you demonstrated is that of a jobs (workers) server, which Elastic Beanstalk doesn't handle well.
For your case, either expose a dummy port number or launch an EC2 instance yourself to bypass Elastic Beanstalk entirely.
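For example, a dummy port can be declared just before the CMD; nothing has to listen on it, it only satisfies the Elastic Beanstalk check (8080 is an arbitrary choice):
# dummy port to satisfy Elastic Beanstalk; the cron job never listens on it
EXPOSE 8080
CMD ["cron", "-f"]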
