I am now preparing the images for my project. I use dockerize to control my initialization order, but I am not sure whether hardcoding the IP address assigned by Docker is the way to go.
Problem:
The backend does not wait until the database has finished initializing.
The terminal says:
backend_1 | django.db.utils.OperationalError: could not connect to server: Connection refused
backend_1 | Is the server running on host "sakahama_db" (172.21.0.2) and accepting
backend_1 | TCP/IP connections on port 5432?
Here are my files:
devdb.dockerfile
FROM postgres:9.5
# Install hstore extension
COPY ./Dockerfiles/hstore.sql /docker-entrypoint-initdb.d
RUN mkdir -p /var/lib/postgresql-static/data
ENV PGDATA /var/lib/postgresql-static/data
hstore.sql
create extension hstore;
backend.dockerfile
FROM python:2
RUN apt-get update && apt-get install -y wget
ENV DOCKERIZE_VERSION v0.2.0
RUN wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY requirements ./requirements
RUN pip install -r requirements/local.txt
COPY . .
EXPOSE 8000
CMD echo "dockerize"
CMD ["dockerize", "-wait", "tcp://sakahama_db:5432"]
CMD echo "migrate"
CMD ["python", "sakahama/manage.py", "migrate"]
CMD echo "runserver"
CMD ["python", "sakahama/manage.py", "runserver", "0.0.0.0:8000"]
docker-compose.yml
version: "2"
services:
backend:
build:
context: .
dockerfile: Dockerfiles/backend.dockerfile
restart: "always"
environment:
DATABASE_URL: postgres://username:password#sakahama_db:5432/sakahama
REDISCLOUD_URL: redis://redis:6379/0
links:
- sakahama_db
ports:
- "9000:8000"
sakahama_db:
build:
context: .
dockerfile: Dockerfiles/devdb.dockerfile
environment:
POSTGRES_USER: username
POSTGRES_PASSWORD: password
POSTGRES_DB: sakahama
ports:
- "5435:5432"
redis:
image: redis
expose:
- "6379"
Question: How to use dockerize properly?
Update:
I tried a temporary solution like this, but it does not work.
backend-entrypoint.sh
#!/bin/bash
echo "dockerize"
dockerize -wait tcp://sakahama_db:5432
echo "migrate"
python sakahama/manage.py migrate
echo "runserver"
python sakahama/manage.py runserver 0.0.0.0:8000
and docker-compose.yml:
I added one line:
command: ["sh", "Dockerfiles/backend-entrypoint.sh"]
When your Postgres container is up, it starts to accept the TCP packets you send with the command dockerize -wait tcp://sakahama_db:5432, but that does not mean the Postgres service is ready. It takes some time to load, set up users and passwords, create the db or load the databases, and make all the grants needed.
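Note also that a Dockerfile only honors the last CMD, so of the six CMD lines in backend.dockerfile above, only the final runserver one ever executes. dockerize is designed to wrap the command it waits for, so the wait and the start can be one CMD; a sketch, using an arbitrarily chosen 60s timeout (the migrate step would still need an entrypoint script like yours):
CMD ["dockerize", "-wait", "tcp://sakahama_db:5432", "-timeout", "60s", "python", "sakahama/manage.py", "runserver", "0.0.0.0:8000"]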
I had a similar issue with Flask and MySQL. I created an sh script like you did, and inside it I made a simple loop to check if the service was up before starting the Flask application.
I am not a shell script senior, but here follows the script:
#!/bin/bash
# testing if the database is up
mysql -h database -uroot -proot databasename -e "SELECT 1;"
ISDBUP=$?
while [[ $ISDBUP != "0" ]]; do
    echo "database is not up yet, waiting for 5 seconds"
    sleep 5
    mysql -h database -uroot -proot databasename -e "SELECT 1;"
    ISDBUP=$?
done
# starting the application
python server.py app
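A Postgres equivalent for the setup in the question could look like this; a sketch, assuming a postgresql client is installed in the backend image and reusing the credentials from the compose file above:
#!/bin/bash
# loop until Postgres accepts a connection and answers a trivial query
until PGPASSWORD=password psql -h sakahama_db -U username -d sakahama -c "SELECT 1;" > /dev/null 2>&1; do
    echo "database is not up yet, waiting for 5 seconds"
    sleep 5
done
echo "migrate"
python sakahama/manage.py migrate
echo "runserver"
exec python sakahama/manage.py runserver 0.0.0.0:8000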
Related
I would like to use psql in the postgres image in order to run some queries on the database.
But unfortunately, when I attach to the postgres container, I get an error that the psql command is not found...
It is a bit of a mystery to me how to run PostgreSQL queries or commands in the container.
How do I run the psql command in the postgres container? (I am new to the Docker world.)
I use Ubuntu as the host machine, and I did not install Postgres on the host machine; I use the postgres container instead.
docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------------------------
yiialkalmi_app_1 /bin/bash Exit 0
yiialkalmi_nginx_1 nginx -g daemon off; Up 443/tcp, 0.0.0.0:80->80/tcp
yiialkalmi_php_1 php-fpm Up 9000/tcp
yiialkalmi_postgres_1 /docker-entrypoint.sh postgres Up 5432/tcp
yiialkalmi_redis_1 docker-entrypoint.sh redis ... Up 6379/tcp
Here are the containers:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
315567db2dff yiialkalmi_nginx "nginx -g 'daemon off" 18 hours ago Up 3 hours 0.0.0.0:80->80/tcp, 443/tcp yiialkalmi_nginx_1
53577722df71 yiialkalmi_php "php-fpm" 18 hours ago Up 3 hours 9000/tcp yiialkalmi_php_1
40e39bd0329a postgres:latest "/docker-entrypoint.s" 18 hours ago Up 3 hours 5432/tcp yiialkalmi_postgres_1
5cc47477b72d redis:latest "docker-entrypoint.sh" 19 hours ago Up 3 hours 6379/tcp yiialkalmi_redis_1
And this is my docker-compose.yml:
app:
  image: ubuntu:16.04
  volumes:
    - .:/var/www/html
nginx:
  build: ./docker/nginx/
  ports:
    - 80:80
  links:
    - php
  volumes_from:
    - app
  volumes:
    - ./docker/nginx/conf.d:/etc/nginx/conf.d
php:
  build: ./docker/php/
  expose:
    - 9000
  links:
    - postgres
    - redis
  volumes_from:
    - app
postgres:
  image: postgres:latest
  volumes:
    - /var/lib/postgres
  environment:
    POSTGRES_DB: project
    POSTGRES_USER: project
    POSTGRES_PASSWORD: project
redis:
  image: redis:latest
  expose:
    - 6379
docker exec -it yiialkalmi_postgres_1 psql -U project -W project
Some explanation
docker exec -it
The command to run a command in a running container. The -it flags open an interactive tty; basically, it attaches you to the container's terminal. If you want to open a bash terminal you can do this:
docker exec -it yiialkalmi_postgres_1 bash
yiialkalmi_postgres_1
The container name (you could use the container id instead, which in your case would be 40e39bd0329a).
psql -U project -W project
The command to execute in the running container.
-U: the user you want to connect as.
-W: tells psql that the user needs to be prompted for the password at connection time. This parameter is optional; without it, there is an extra connection attempt which will usually find out that a password is needed. See the PostgreSQL docs.
project: the database you want to connect to. There is no need for the -d parameter to mark it as the dbname when it is the first non-option argument; see the docs: -d "is equivalent to specifying dbname as the first non-option argument on the command line."
These are the values you specified here:
environment:
  POSTGRES_DB: project
  POSTGRES_USER: project
  POSTGRES_PASSWORD: project
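If a password prompt is a nuisance, psql also honors the PGPASSWORD environment variable, which docker exec can inject; a sketch using the values above:
docker exec -it -e PGPASSWORD=project yiialkalmi_postgres_1 psql -U project project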
This worked for me:
Go to bash:
docker exec -it <container-name> bash
From bash:
psql -U <dataBaseUserName> <dataBaseName>
Or just this one-liner:
docker exec -it <container-name> psql -U <dataBaseUserName> <dataBaseName>
Hope this helps!
After the Postgres container is configured using Docker, open the bash terminal using:
docker exec -it <postgres container name / ID> bash
Switch to the Postgres user:
su - postgres
Then run:
psql
It will open the psql terminal for Postgres.
If you need to restore the database in a container you can do this:
docker exec -i app_db_1 psql -U postgres < app_development.back
Don't forget to add -i.
:)
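For completeness, the matching dump can be taken the same way; a sketch, assuming the same container and a database named app_development:
docker exec -i app_db_1 pg_dump -U postgres app_development > app_development.back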
You can enter inside the postgres container using docker-compose by typing the following
docker-compose exec postgres bash
knowing that postgres is the name of the service. Replace it with the name of the PostgreSQL service in your docker-compose file.
If you have many docker-compose files, you have to specify which docker-compose.yml file you want to execute the command with. Use the following command instead:
docker-compose -f <specific docker-compose.yml> exec postgres bash
For example if you want to run the command with a docker-compose file called local.yml, here the command will be
docker-compose -f local.yml exec postgres bash
Then use the psql command and specify the database name with the -d flag and the username with the -U flag:
psql -U <database username you want to connect with> -d <database name>
Baammm!!!!! you are in.
If you have a running "postgres" container:
docker run -it --rm --link postgres:postgres postgres:9.6 sh -c "exec psql -h \$POSTGRES_PORT_5432_TCP_ADDR -p \$POSTGRES_PORT_5432_TCP_PORT -U postgres"
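Note that --link is a legacy feature. If the container is attached to a user-defined network instead, a throwaway client container on the same network can reach it by name; a sketch, assuming a network named mynet and the container reachable as postgres:
docker run -it --rm --network mynet postgres:9.6 psql -h postgres -U postgres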
We can enter the container with an sh or bash terminal by using:
docker exec -it <container id | name> <sh | bash>
Assuming sh, then:
psql -U postgres
will work.
RUN /etc/init.d/postgresql start &&\
    psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" &&\
    createdb -O docker docker
Just fired up a local test, not sure if -c is what you were after from the cli.
docker run -it --rm --name psql-test-connection -e PGPASSWORD=1234 postgres psql -h kubernetes.docker.internal -U awx -c "\conninfo"
You are connected to database "awx" as user "awx" on host "kubernetes.docker.internal" (address "192.168.65.4") at port "5432".
In many common setups, the PostgreSQL port is published out to the host.
postgres:
  ports:
    - '12345:5432'
If this is the case, you don't need to do anything Docker-specific to connect to the database. You can use the psql client directly on your host system pointing to the first ports: number.
psql -h localhost -p 12345 -U project
This approach only requires psql or another ordinary PostgreSQL client be installed on the host and that the database container be configured with ports: making it accessible from outside Docker. (The ports: are not necessary for inter-container communication and a production-oriented setup could reasonably not have them.) This does not require the ability to run docker commands and the attendant security concerns, and it can avoid multiple layers of additional command quoting from a docker exec sh -c '...' sequence.
Without using an external terminal, a person can run SQL commands within the container CLI:
psql -d [database-name] -U [username] -W
Don't forget to replace [database-name] with your db name and [username] with your actual username.
Flags:
-d : Specify the database name you want to connect
-U : Specify the username as whom you want to connect
-W : Prompt for the password
I have a docker-compose setup with three components: app, celery, and redis, implemented with Django REST Framework.
I have seen this question several times on Stack Overflow and have tried all the solutions listed. However, the celery task is not running.
The celery container behaves the same as the app: it starts the Django project, but it does not run the task.
docker-compose.yml
version: "3.8"
services:
app:
build: .
volumes:
- .:/django
ports:
- 8000:8000
image: app:django
container_name: myapp
command: python manage.py runserver 0.0.0.0:8000
depends_on:
- redis
redis:
image: redis:alpine
container_name: redis
ports:
- 6379:6379
volumes:
- ./redis/data:/data
restart: always
environment:
- REDIS_PASSWORD=
healthcheck:
test: redis-cli ping
interval: 1s
timeout: 3s
retries: 30
celery:
image: celery:3.1
container_name: celery
restart: unless-stopped
build:
context: .
dockerfile: Dockerfile
command: celery -A myapp worker -l INFO -c 8
volumes:
- .:/django
depends_on:
- redis
- app
links:
- redis
Dockerfile
FROM python:3.9
RUN useradd --create-home --shell /bin/bash django
USER django
ENV DockerHOME=/home/django
RUN mkdir -p $DockerHOME
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV PIP_DISABLE_PIP_VERSION_CHECK 1
USER root
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
USER django
WORKDIR /home/django
COPY requirements.txt ./
# set path
ENV PATH=/home/django/.local/bin:$PATH
# Upgrade pip and install requirements.txt
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
# entrypoint
ENTRYPOINT ["/bin/bash", "-e", "docker-entrypoint.sh"]
docker-entrypoint.sh
# run migration first
python manage.py migrate
# create test dev user and test superuser
echo 'import create_test_users' | python manage.py shell
# start the server
python manage.py runserver 0.0.0.0:8000
celery.py
from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')
app = Celery('myapp', broker='redis://redis:6379')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
settings.py
CELERY_BROKER_URL = os.getenv('REDIS_URL') # "redis://redis:6379"
CELERY_RESULT_BACKEND = os.getenv('REDIS_URL') # "redis://redis:6379"
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Africa/Nairobi'
Your docker-entrypoint.sh script unconditionally runs the Django server. Since you declare it as the image's ENTRYPOINT, the Compose command: is passed to it as arguments, but your script ignores them.
The best way to fix this is to pass the specific command ("run the Django server", "run a Celery worker") as the Dockerfile CMD or Compose command:. The entrypoint script then ends with the shell command exec "$@" to run that command.
#!/bin/sh
python manage.py migrate
echo 'import create_test_users' | python manage.py shell
# run the container CMD
exec "$@"
In your Dockerfile you need to declare a default CMD.
ENTRYPOINT ["./docker-entrypoint.sh"]
CMD python manage.py runserver 0.0.0.0:8000
Now in your Compose setup, if you don't specify a command:, it will use that default CMD, but if you do, that will be run instead. In both cases your entrypoint script will run, but when it gets to the final exec "$@" line it will run the provided command.
That means you can delete the command: override from your app container. (You do need to leave it for the Celery container.) You can simplify this setup further by removing the image: and container_name: settings (Compose will pick reasonable defaults for both of these) and the volumes: mount that hides the image content.
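With that pattern in place, the services could shrink to roughly this shape; a sketch that keeps only what the entrypoint approach needs:
version: "3.8"
services:
  app:
    build: .
    ports:
      - 8000:8000
    depends_on:
      - redis
  celery:
    build: .
    command: celery -A myapp worker -l INFO -c 8
    depends_on:
      - redis
      - app
  redis:
    image: redis:alpine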
I got stuck with this error when trying to back up and restore my database from a dockerized Django app environment.
I first ran this command to back up my whole DB:
docker exec -t project_final-db-1 pg_dumpall -c -U fred2020 > ./db/dump.sql
And then tried to restore with this command:
cat dump.sql | docker exec -i --user fred2020 catsitting-db-1 psql -U fred2020 -d postgres
I have two containers, one for my Django app named catsitting-web-1 and one for my PostgreSQL named catsitting-db-1.
I don't understand why it gives me that error; my db user is the same one I specified in the Dockerfile.
Any clue?
For reference, here is my Docker configuration:
Dockerfile
FROM python:3.9
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
RUN pip install Pillow
COPY . /code/
docker-compose.yml
version: "3.9"
services:
db:
image: postgres
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=fred2020
- POSTGRES_PASSWORD=p*******DD
expose:
- "5432"
ports:
- 5432:5432
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
requirements.txt
Django>=3.0,<4.0
psycopg2-binary>=2.8
Pillow==8.1.0
And here is my process to migrate from laptop1 to laptop2:
Installation
Open a command line, go into a root directory, and run:
git clone https://github.com/XXXXXXXXXXXXXXXX
In the command line go into the root directory:
cd catsitting
In the same command line window, run:
docker-compose build --no-cache
In the command line window you first need to migrate the database for Django; run:
docker-compose run web python manage.py migrate
In the command line window you then need to apply the migrations; run:
docker-compose run web python manage.py makemigrations
In the command line window you then need to import the database; run:
cat dump.sql | docker exec -i --user fred2020 catsitting-db-1 psql -U fred2020 -d postgres
(for dumping my DB I used docker exec -t project_final-db-1 pg_dumpall -c -U fred2020 > ./db/dump.sql)
You can now run:
docker-compose up
Is there something I got wrong?
I solved it!
It was a misconfiguration in the pg_hba.conf inside my dockerized PostgreSQL.
I changed the value from scram-sha-256 to md5, and it works; now I can display my webapp with the current db!
Do you know how to specify md5 when I build my Docker environment? By default it puts scram-sha-256.
Do you know why, when I restore my dump in the new environment, the container's pg_hba.conf sets the authentication method to scram-sha-256 by default, so that to make my connection work I have to edit that file and set the authentication method to md5?
# TYPE DATABASE USER ADDRESS METHOD
local all all md5
OK sorry folks, I found the solution.
I put this in my docker-compose.yml:
environment:
  - POSTGRES_HOST_AUTH_METHOD=trust
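If you would rather keep password authentication than disable it with trust, the same variable also accepts md5; note that with the official postgres image this only takes effect when the data directory is initialized, i.e. on a fresh volume. A sketch:
environment:
  - POSTGRES_HOST_AUTH_METHOD=md5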
I'm trying to set up a dockerized Python server named Bullet Train on my local machine:
It has 3 components:
A Postgres database
A Python server
A React frontend
All three of these need to work together to get the server up and running, so this is the docker-compose file, which sits in the directory containing both the frontend and the api-server:
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: bullettrain
    ports:
      - "5432:5432"
  api:
    build:
      context: ./bullet-train-api
      dockerfile: docker/Dockerfile
    command: bash -c "pipenv run python manage.py migrate --noinput
      && pipenv run python manage.py collectstatic --noinput
      && pipenv run gunicorn --bind 0.0.0.0:8000 -w 3 app.wsgi
      && pipenv run python src/manage.py createsuperuser"
    environment:
      DJANGO_DB_NAME: bullettrain
      DJANGO_DB_USER: postgres
      DJANGO_DB_PASSWORD: password
      DJANGO_DB_PORT: 5432
      DJANGO_ALLOWED_HOSTS: localhost
    ports:
      - "8000:8000"
    depends_on:
      - db
    links:
      - db:db
  frontend:
    build:
      context: ./bullet-train-frontend
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
This way, all 3 components run in parallel. So far so good! Now, to initialize it, I run createsuperuser as stated here, following these steps:
docker exec -it research_api_1 bash ## go to the context of the API server terminal
run python manage.py createsuperuser ## run the createsuperuser command
The command runs successfully and I get this output:
To confirm, I went to the database:
docker exec -it research_db_1 bash ## go to the database instance
psql bullettrain postgres ## connect to the bullettrain database
select * from public.users_ffadminuser; ## check if the super user is created
The results show that the user is indeed created:
Now, if I go to the admin panel as per the docs, nothing happens and the server logs always throw "Session data corrupted".
I am trying to test my web application using a docker container, but I am not able to see it when I try to access it through my browser.
The docker compose file looks like
version: '2'
services:
  db:
    image: postgres
    volumes:
      - ~/pgdata:/var/lib/postgresql/data/pgdata
    environment:
      POSTGRES_PASSWORD: "dbpassword"
      PGDATA: "/var/lib/postgresql/data/pgdata"
    ports:
      - "5432:5432"
  web:
    build:
      context: .
      dockerfile: Dockerfile-web
    ports:
      - "5000:5000"
    volumes:
      - ./web:/web
    depends_on:
      - db
  backend:
    build:
      context: .
      dockerfile: Dockerfile-backend
    volumes:
      - ./backend:/backend
    depends_on:
      - db
The dockerfile-web looks like
FROM python
ADD web/requirements.txt /web/requirements.txt
ADD web/bower.json /web/bower.json
WORKDIR /web
RUN \
wget https://nodejs.org/dist/v4.4.7/node-v4.4.7-linux-x64.tar.xz && \
tar xJf node-*.tar.xz -C /usr/local --strip-components=1 && \
rm -f node-*.tar.xz
RUN npm install -g bower
RUN bower install --allow-root
RUN pip install -r requirements.txt
RUN export MYFLASKAPP_SECRET='makethewebsite'
CMD python manage.py server
The ip for my docker machine is
docker-machine ip
192.168.99.100
But when I try
http://192.168.99.100:5000/
in my browser it just says that the site cannot be reached.
It seems like it is refusing the connection.
When I ping my database in the browser, I can see the database respond in a log:
http://192.168.99.100:5432/
So I tried wget inside the container and got
$ docker exec 3bb5246a0623 wget http://localhost:5000/
--2016-07-23 05:25:16-- http://localhost:5000/
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:5000... failed: Connection refused.
Connecting to localhost (localhost)|127.0.0.1|:5000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 34771 (34K) [text/html]
Saving to: ‘index.html.1’
0K .......... .......... .......... ... 100% 5.37M=0.006s
2016-07-23 05:25:16 (5.37 MB/s) - ‘index.html.1’ saved [34771/34771]
Anyone know how I can get my web application to show up through my browser?
I had to enable external visibility for my flask application.
You can see it here
Can't connect to Flask web service, connection refused
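In Flask terms, that usually means binding the development server to 0.0.0.0 instead of the default 127.0.0.1, so it listens on the container's external interface rather than just loopback. The manage.py in the question is not shown, so this is a minimal sketch of the idea with a hypothetical plain app.run() entry point:
# app.py (hypothetical entry point; the question's manage.py is not shown)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    # bind to all interfaces so the port published by Docker is reachable
    app.run(host="0.0.0.0", port=5000)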