I am trying to test my web application in a Docker container, but I cannot see it when I access it through my browser.
My docker-compose file looks like:
version: '2'
services:
  db:
    image: postgres
    volumes:
      - ~/pgdata:/var/lib/postgresql/data/pgdata
    environment:
      POSTGRES_PASSWORD: "dbpassword"
      PGDATA: "/var/lib/postgresql/data/pgdata"
    ports:
      - "5432:5432"
  web:
    build:
      context: .
      dockerfile: Dockerfile-web
    ports:
      - "5000:5000"
    volumes:
      - ./web:/web
    depends_on:
      - db
  backend:
    build:
      context: .
      dockerfile: Dockerfile-backend
    volumes:
      - ./backend:/backend
    depends_on:
      - db
Dockerfile-web looks like:
FROM python
ADD web/requirements.txt /web/requirements.txt
ADD web/bower.json /web/bower.json
WORKDIR /web
RUN \
wget https://nodejs.org/dist/v4.4.7/node-v4.4.7-linux-x64.tar.xz && \
tar xJf node-*.tar.xz -C /usr/local --strip-components=1 && \
rm -f node-*.tar.xz
RUN npm install -g bower
RUN bower install --allow-root
RUN pip install -r requirements.txt
RUN export MYFLASKAPP_SECRET='makethewebsite'
CMD python manage.py server
The IP of my Docker machine is:
docker-machine ip
192.168.99.100
But when I try
http://192.168.99.100:5000/
in my browser it just says that the site cannot be reached.
It seems like it is refusing the connection.
When I hit my database port in the browser, I can see the database respond in its log:
http://192.168.99.100:5432/
So I tried wget inside the container and got:
$ docker exec 3bb5246a0623 wget http://localhost:5000/
--2016-07-23 05:25:16-- http://localhost:5000/
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:5000... failed: Connection refused.
Connecting to localhost (localhost)|127.0.0.1|:5000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 34771 (34K) [text/html]
Saving to: ‘index.html.1’
0K .......... .......... .......... ... 100% 5.37M=0.006s
2016-07-23 05:25:16 (5.37 MB/s) - ‘index.html.1’ saved [34771/34771]
Anyone know how I can get my web application to show up through my browser?
I had to enable external visibility for my Flask application. You can see it here:
Can't connect to Flask web service, connection refused
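For anyone else hitting this: Flask's development server binds to 127.0.0.1 by default, which Docker's port mapping cannot reach; running with app.run(host="0.0.0.0", port=5000) (or flask run --host=0.0.0.0) is the fix. A minimal stdlib-only sketch of the same bind-address distinction (no Flask required; the server here is just illustrative):

```python
# Sketch (stdlib only): the same bind-address rule applies to any server.
# A listener bound to 127.0.0.1 inside a container is not reachable through
# Docker's published ports; 0.0.0.0 listens on all interfaces, which is
# what a mapping like -p 5000:5000 forwards to.
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_server(bind_addr: str, port: int = 0) -> HTTPServer:
    # port=0 lets the OS pick a free port; the real app would use 5000.
    return HTTPServer((bind_addr, port), SimpleHTTPRequestHandler)

loopback_only = make_server("127.0.0.1")   # refused from outside the container
all_interfaces = make_server("0.0.0.0")    # reachable via the port mapping
print(all_interfaces.server_address[0])    # → 0.0.0.0
```

This also explains the wget output in the question: inside the container, 127.0.0.1:5000 connected fine, because the server was listening on loopback only.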
I'm running an app inside a Docker container. The app uses the Postgres Docker image to save data in a database. I need to keep a local copy of this database's data to avoid losing it if the container is removed or purged, so I am using volumes in my `docker-compose.yaml` file. But the local DB folder is always empty, so whenever I move or purge the container the data are lost.
docker-compose.yaml
version: "2"
services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    ports:
      - '5433:5432'
    restart: always
    command: -p 5433
    environment:
      - POSTGRES_DB=mydata
      - POSTGRES_USER=mydata
      - POSTGRES_PASSWORD=mydata#
      - PGDATA=/tmp
  django-apache2:
    build: .
    container_name: rolla_django
    restart: always
    environment:
      - POSTGRES_DB=mydata
      - POSTGRES_USER=mydata
      - POSTGRES_PASSWORD=mydata#
      - PGDATA=/tmp
    ports:
      - '4002:80'
      - '4003:443'
    volumes:
      - ./www/:/var/www/html
      - ./www/demo_app/static_files:/var/www/html/demo_app/static_files
      - ./www/demo_app/media:/var/www/html/demo_app/media
    # command: sh -c 'python manage.py migrate && python manage.py loaddata db_backkup.json && apache2ctl -D FOREGROUND'
    command: sh -c 'wait-for-it db:5433 -- python manage.py migrate && apache2ctl -D FOREGROUND'
    depends_on:
      - db
As you can see, I used ./data/db:/var/lib/postgresql/data, but the local ./data/db directory is always empty!
NOTE
When I run docker volume ls it shows no volumes at all.
According to your setup, the data is in /tmp (PGDATA=/tmp). Remove this and your volume mapping should work.
Also, your command: -p 5433 makes Postgres listen on port 5433, but you still map the container's port 5432. So if you can't reach the database, it might be because of that.
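To make that concrete, a corrected db service might look like this (a sketch under the assumptions above: PGDATA removed so Postgres writes to the mounted /var/lib/postgresql/data, and the custom -p 5433 dropped so the container listens on the default 5432 that the port mapping targets):

```yaml
db:
  image: postgres
  volumes:
    - ./data/db:/var/lib/postgresql/data
  ports:
    - '5433:5432'   # host 5433 -> container 5432 (Postgres default)
  restart: always
  environment:
    - POSTGRES_DB=mydata
    - POSTGRES_USER=mydata
    - POSTGRES_PASSWORD=mydata#
```

As an aside, ./data/db is a bind mount, and bind mounts never appear in docker volume ls; that listing only covers named volumes, so an empty list there is expected.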
I know this question has been asked a million times, and I've read as many of the answers as I can find. They all seem to come to one conclusion (the db hostname is the container's service name).
I got it to work in my actual code base, but it started failing when I added an ffmpeg install to the Dockerfile. Nothing else changed: just installing FFmpeg via apt-get install -y ffmpeg would cause my Python code to get the connection-refused message. If I removed the ffmpeg install line, my code would connect to the db just fine, although re-running the container would still trigger the dreaded connection-refused error.
So I created a quick sample app so I could post here and try to get some thoughts on what's going on. But now this sample code won't connect to the db no matter what I do.
So here goes - And thanks in advance for any help:
myapp.py
# import ffmpeg
import psycopg2

if __name__ == "__main__":
    print("Starting app...")
    # probe = ffmpeg.probe("131698249.mp4")
    # print(probe)
    try:
        connection = psycopg2.connect(
            user="docker", password="docker", host="db", port="5432", database="docker")
        cursor = connection.cursor()
        postgreSQL_select_Query = "select * from test_table"
        cursor.execute(postgreSQL_select_Query)
        print("Selecting rows from table using cursor.fetchall")
        records = cursor.fetchall()
        print("Print each row and its column values")
        for row in records:
            print(row)
        cursor.close()
        connection.close()
    except (Exception, psycopg2.Error) as error:
        print("Error while fetching data from PostgreSQL", error)
Dockerfile
WORKDIR /usr/src/app
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
COPY . .
CMD ["python", "myapp.py"]
docker-compose.yml
version: '3.8'
services:
  db:
    container_name: pg_container
    image: postgres:14.1
    restart: always
    environment:
      POSTGRES_USER: docker
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: docker
    ports:
      - "8000:5432"
    expose:
      - "5432"
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - pg_data:/var/lib/postgresql/data
  myapp:
    container_name: myapp
    build:
      context: .
      dockerfile: ./Dockerfile
    restart: "no"
    depends_on:
      - db
volumes:
  pg_data:
If I build and run the code with docker compose up --detach, everything gets built and started. The database starts up and gets populated with the table/data from init.sql (not included here).
The app container starts and the code executes, but it immediately fails with the Connection refused error.
However, if I run psql -U docker -h localhost -p 8000 -d docker from my computer, it connects without any error and I can query the database as expected.
But the app in the container won't connect, and if I start the container with docker run -it myapp /bin/bash and then, from inside it, run psql -U docker -h db -p 5432 -d docker, I get the Connection refused error.
If anyone has any thoughts or ideas I would be so grateful. I've been wrestling with this for three days now.
Looks like I've resolved it. I was sure I'd tried this before, but regardless, adding a networks section to the docker-compose.yml seems to have fixed the issue.
I also had to do the second docker-compose up -d as suggested by David Maze's comment. The combination of the two seems to have fixed my issue.
Here's my updated docker-compose.yml for complete clarity:
version: '3.8'
services:
  postgres-db:
    container_name: pg_container
    image: postgres:14.1
    restart: always
    environment:
      POSTGRES_USER: docker
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: docker
    ports:
      - "5500:5432"
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - dock-db-test
  myapp:
    container_name: myapp
    build:
      context: .
      dockerfile: ./Dockerfile
    restart: "no"
    depends_on:
      - db
    networks:
      - dock-db-test
networks:
  dock-db-test:
    external: false
    name: dock-db-test
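One more thing worth noting: even with the network fixed, the app container can still race Postgres on a cold start, because depends_on only waits for the container to start, not for the database inside it to accept connections. A hedged sketch of a retry wrapper (the connect callable stands in for psycopg2.connect with the parameters from myapp.py; this is not part of the original answer):

```python
import time

def connect_with_retry(connect, attempts=10, delay=1.0):
    """Call `connect` until it stops raising, waiting between tries.

    psycopg2.connect raises OperationalError (a "Connection refused")
    while Postgres is still initializing, so retrying bridges the gap.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as exc:  # psycopg2.OperationalError in practice
            last_error = exc
            time.sleep(delay)
    raise last_error

# Hypothetical usage, mirroring the question's parameters:
# conn = connect_with_retry(lambda: psycopg2.connect(
#     user="docker", password="docker", host="db", port="5432", database="docker"))
```

This keeps the fix in application code instead of relying on restart policies or sleep hacks in the compose file.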
I am working on a local Django webserver at http://localhost:8000, which works fine.
Meanwhile I need ngrok to do the port forwarding, ngrok http 8000, which works fine too.
Then I want to put ngrok, Postgres, Redis, maildev, etc. all in Docker containers. Everything else works fine, except ngrok: it fails to connect to localhost:8000.
I understand why: ngrok is running on a separate 'server', and localhost on that server has no web server running.
I am wondering how I can fix it.
I tried network_mode: "host" in my docker-compose file; it does not work (macOS).
I tried to use host.docker.internal, but as a free-plan user, ngrok does not allow me to specify a hostname.
Any help is appreciated! Thanks.
Here is my docker-compose file:
here is my docker-compose file:
ngrok:
  image: wernight/ngrok
  ports:
    - '4040:4040'
  environment:
    - NGROK_PORT=8000
    - NGROK_AUTH=${NGROK_AUTH_TOKEN}
  network_mode: "host"
UPDATE:
Stripe has a new tool, stripe-cli, which can do the same thing. Just do as below:
stripe-cli:
  image: stripe/stripe-cli
  command: listen --api-key $STRIPE_SECRET_KEY
    --load-from-webhooks-api
    --forward-to host.docker.internal:8000/api/webhook/
I ended up getting rid of ngrok and using Serveo instead to solve the problem.
Here is the code, in case anyone runs into the same problem:
serveo:
  image: taichunmin/serveo
  tty: true
  stdin_open: true
  command: "ssh -o ServerAliveInterval=60 -R 80:host.docker.internal:8000 -o \"StrictHostKeyChecking no\" serveo.net"
I was able to get it to work by doing the following:
Instruct Django to bind to port 8000 with the following command: python manage.py runserver 0.0.0.0:8000
Instruct ngrok to connect to the web docker service in my docker compose file by passing in web:8000 as the NGROK_PORT environment variable.
I've pasted truncated versions of my settings below.
docker-compose.yml:
version: '3.7'
services:
  ngrok:
    image: wernight/ngrok
    depends_on:
      - web
    env_file:
      - ./ngrok/.env
    ports:
      - 4040:4040
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.dev
    command: python manage.py runserver 0.0.0.0:8000
    env_file:
      - ./app/django-project/settings/.env
    ports:
      - 8000:8000
    volumes:
      - ./app/:/app/
And here is the env file referenced above (i.e. ./ngrok/.env):
NGROK_AUTH=your-auth-token-here
NGROK_DEBUG=1
NGROK_PORT=web:8000
NGROK_SUBDOMAIN=(optional)-your-subdomain-here
You can leave out the subdomain and auth fields. I figured this out by looking through their Docker entrypoint script.
I made a Docker image of a web application built on Python, and my web application needs a CouchDB server to be running before the program starts. Can anyone please tell me how I can install and run a CouchDB server from the Dockerfile of this web application? My Dockerfile is given below:
FROM python:2.7.15-alpine3.7
RUN mkdir /home/WebDocker
ADD ./Webpage1 /home/WebDocker/Webpage1
ADD ./requirements.txt /home/WebDocker/requirements.txt
WORKDIR /home/WebDocker
RUN pip install -r /home/WebDocker/requirements.txt
RUN apk update && \
apk upgrade && \
apk add bash vim sudo
EXPOSE 8080
ENTRYPOINT ["/bin/bash"]
Welcome to SO! I solved it by using Docker Compose to run a separate CouchDB container and a separate Python container. The relevant part of the configuration file docker-compose.yml looks like this:
# This helps to avoid routing conflicts within virtual machines:
networks:
  default:
    ipam:
      driver: default
      config:
        - subnet: 192.168.112.0/24

# The CouchDB data is kept in a docker volume:
volumes:
  couchdb_data:

services:
  # The container couchServer uses the Dockerfile from the subdirectory
  # CouchDB-DIR and has the hostname 'couchServer':
  couchServer:
    build:
      context: .
      dockerfile: CouchDB-DIR/Dockerfile
    ports:
      - "5984:5984"
    volumes:
      - type: volume
        source: couchdb_data
        target: /opt/couchdb/data
        read_only: False
      - type: volume
        source: ${DOCKER_VOLUMES_BASEPATH}/couchdb_log
        target: /var/log/couchdb
        read_only: False
    tty: true
    environment:
      - COUCHDB_PASSWORD=__secret__
      - COUCHDB_USER=admin
  python_app:
    build:
      context: .
      dockerfile: ./Python_DIR/Dockerfile
    ...
In the Docker subnet, CouchDB can be accessed from the Python container at http://couchServer:5984. To ensure the CouchDB data is not lost when the container restarts, it is kept in a separate Docker volume, couchdb_data.
Use the environment variable DOCKER_VOLUMES_BASEPATH to choose the directory where CouchDB logs; it can be defined in a .env file.
The networks section is only necessary if you have routing problems.
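If the Python container still starts before CouchDB finishes booting, a small readiness poll on CouchDB's root endpoint helps, since that endpoint returns a JSON welcome banner once the server is up. A stdlib-only sketch, assuming the service name couchServer from the compose file above:

```python
import json
import time
from urllib.error import URLError
from urllib.request import urlopen

def wait_for_couchdb(url="http://couchServer:5984", attempts=30, delay=1.0):
    """Poll CouchDB's root endpoint until it answers, then return the
    parsed JSON banner (e.g. {"couchdb": "Welcome", ...})."""
    for _ in range(attempts):
        try:
            with urlopen(url, timeout=2) as resp:
                return json.load(resp)
        except (URLError, OSError):
            time.sleep(delay)
    raise RuntimeError(f"CouchDB at {url} did not come up")
```

Calling this once at the top of the Python app's startup avoids crashing on the first request when both containers are brought up together.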
I am now preparing the images for my project. I use dockerize to control my initialization, and I am not sure whether hardcoding the IP address given by Docker is the way to go.
Problem:
The backend does not wait until the database finishes initializing first.
The terminal says:
backend_1 | django.db.utils.OperationalError: could not connect to server: Connection refused
backend_1 | Is the server running on host "sakahama_db" (172.21.0.2) and accepting
backend_1 | TCP/IP connections on port 5432?
Here are my files:
devdb.dockerfile
FROM postgres:9.5
# Install hstore extension
COPY ./Dockerfiles/hstore.sql /docker-entrypoint-initdb.d
RUN mkdir -p /var/lib/postgresql-static/data
ENV PGDATA /var/lib/postgresql-static/data
hstore.sql
create extension hstore;
backend.dockerfile
FROM python:2
RUN apt-get update && apt-get install -y wget
ENV DOCKERIZE_VERSION v0.2.0
RUN wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY requirements ./requirements
RUN pip install -r requirements/local.txt
COPY . .
EXPOSE 8000
CMD echo "dockerize"
CMD ["dockerize", "-wait", "tcp://sakahama_db:5432"]
CMD echo "migrate"
CMD ["python", "sakahama/manage.py", "migrate"]
CMD echo "runserver"
CMD ["python", "sakahama/manage.py", "runserver", "0.0.0.0:8000"]
docker-compose.yml
version: "2"
services:
backend:
build:
context: .
dockerfile: Dockerfiles/backend.dockerfile
restart: "always"
environment:
DATABASE_URL: postgres://username:password#sakahama_db:5432/sakahama
REDISCLOUD_URL: redis://redis:6379/0
links:
- sakahama_db
ports:
- "9000:8000"
sakahama_db:
build:
context: .
dockerfile: Dockerfiles/devdb.dockerfile
environment:
POSTGRES_USER: username
POSTGRES_PASSWORD: password
POSTGRES_DB: sakahama
ports:
- "5435:5432"
redis:
image: redis
expose:
- "6379"
Question: How to use dockerize properly?
Update:
I had tried a temporary solution like this, but it does not work:
backend-entrypoint.sh
#!/bin/bash
echo "dockerize"
dockerize -wait tcp://sakahama_db:5432
echo "migrate"
python sakahama/manage.py migrate
echo "runserver"
python sakahama/manage.py runserver 0.0.0.0:8000
and in docker-compose.yml I added one line:
command: ["sh", "Dockerfiles/backend-entrypoint.sh"]
When your Postgres container is up, it starts to accept the TCP packets you send with dockerize -wait tcp://sakahama_db:5432, but that does not mean the Postgres service is ready. It takes some time to load, set up users and passwords, create or load the databases, and apply all the needed grants.
I had a similar issue with Flask and MySQL. I created an sh script like you did, and inside it I made a simple loop to check whether the service was up before starting the Flask application.
I am not a shell-script senior, but here is the script:
# test if the database is up
mysql -h database -uroot -proot databasename -e "SELECT 1;"
ISDBUP=$?
while [[ $ISDBUP != "0" ]]; do
    echo "database is not up yet, waiting for 5 seconds"
    sleep 5
    mysql -h database -uroot -proot databasename -e "SELECT 1;"
    ISDBUP=$?
done

# start the application
python server.py app
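The same idea works without a database client installed in the image: a plain TCP probe, which is roughly what dockerize -wait tcp://host:port does. A stdlib-only sketch in Python (host and port are whatever your compose service exposes; this is my illustration, not part of the answer above):

```python
import socket
import time

def wait_for_port(host: str, port: int, attempts: int = 60, delay: float = 1.0) -> bool:
    """Return True once a TCP connection to host:port succeeds.

    Note: like `dockerize -wait`, this only proves the port is open,
    not that the database has finished initializing, so keep a
    query-level check (e.g. SELECT 1) if you need full readiness.
    """
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(delay)
    return False
```

A caller would do something like wait_for_port("sakahama_db", 5432) before running migrations.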