I have a Docker application that builds a Postgres database, and I am using tox to run my Django tests. When I run docker-compose run web tox against my Docker image on my local machine (I built the image with docker-compose up --build --force-recreate -d), it fails with:
E django.db.utils.OperationalError: could not connect to server: Connection refused
E Is the server running on host "127.0.0.1" and accepting
E TCP/IP connections on port 5432?
But when I run the tox command on its own (not against my Docker image), it works fine.
I also tried running my Django tests without tox, using docker-compose run web python manage.py test against my Docker image; in that case there are no errors. So I guess the problem is specific to running tox inside my Docker image.
I was having the same issue while the DB was definitely running. It turns out tox doesn't pass environment variables from the host machine into the test environment unless you tell it to, so Django was trying to connect with the wrong DB settings.
The fix was to use the passenv option in tox.ini to pass the required variables through:
[testenv]
deps = -r requirements.txt
commands = pytest {posargs}
passenv = POSTGRES_USER POSTGRES_PASSWORD POSTGRES_HOST POSTGRES_PORT
You could also use passenv = * to pass everything.
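For context on why this matters: Django typically reads those variables in settings.py via os.environ, so without passenv the fallbacks kick in inside the tox environment. A minimal sketch, assuming settings along these lines (the names and defaults here are hypothetical):

# settings.py (sketch)
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('POSTGRES_DB', 'postgres'),
        'USER': os.environ.get('POSTGRES_USER', 'postgres'),
        'PASSWORD': os.environ.get('POSTGRES_PASSWORD', ''),
        'HOST': os.environ.get('POSTGRES_HOST', '127.0.0.1'),
        'PORT': os.environ.get('POSTGRES_PORT', '5432'),
    }
}

With the variables stripped out, the HOST fallback of 127.0.0.1 is used, which matches the "Is the server running on host "127.0.0.1"" error above.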
This is probably caused by the well-known issue that the test container starts before the DB container is fully functional. Even if you declare a dependency/link in docker-compose, Docker only waits for the dependent container to be up, not ready. If DB initialization takes, say, 30s, the second container will be started before that and you will see this error.
The solution is to put a small script on the second container that pings the DB port and waits until the DB is ready. Check Stack Overflow; there are multiple similar questions with some nice solutions for making the second container wait for the dependent DB.
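A minimal sketch of such a wait script, assuming the DB service is reachable under the hostname db on port 5432 and that nc is available in the container (adjust the names to your compose file):

#!/bin/bash
# wait-for-db.sh (sketch): block until the DB port accepts TCP connections
until nc -z db 5432; do
  echo "Waiting for the database..."
  sleep 1
done
exec "$@"   # then hand off to the real command (e.g., the test run)

You would then wrap the container's command with it, e.g. ./wait-for-db.sh tox.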
I can connect to Postgres in the following way. On my local machine I run:
ssh name@ip -p 22
then enter the password, and after that:
sudo docker-compose exec postgres bash
After that I have full access to my Postgres DB.
How can I connect to that DB with Python?
I know about libraries like psycopg2, but I haven't found any example of how to connect to a DB that is on another server and running in Docker.
There are three layers here.
1. Your local machine.
2. The server.
3. The container running the database.
Between each layer there is a gap: between (1) and (2) you have the Internet; between (2) and (3) you have Docker networking.
Now, what you described in the question is this: you first cross the (1)-(2) gap with SSH, then you cross the (2)-(3) gap with the command sudo docker-compose exec postgres bash.
As for your question in the comment: according to the Docker documentation, docker-compose exec <service-name> <command> runs a command in a container, and sudo elevates your privileges to the root account. Since the command is bash, you essentially open an interactive shell in the container.
Now, this method of crossing the two gaps works, as you observed, but it will not work for the psycopg2 library.
Again per the Docker documentation, you can tell Docker to eliminate the (2)-(3) gap for you; this is known as publishing a port. You tell Docker to map a port on the server to a port on the container, so the (2)-(3) gap is eliminated: a connection to that port on the server is then passed to the container at the defined port.
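In docker-compose terms, publishing is the ports entry; a sketch, assuming the service is called postgres as in your exec command:

services:
  postgres:
    image: postgres
    ports:
      - "5432:5432"   # host port 5432 -> container port 5432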
Now the only gap you need to cross is (1)-(2), which psycopg2 can handle easily (provided the firewall allows inbound connections on that port).
Now, the details of how to tell Docker to eliminate the (2)-(3) gap are in the answer to Connecting to Postgresql in a docker container from outside. It also shows how you can connect to the database with psql directly from your local machine.
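Once the port is published, connecting with psycopg2 from your local machine is just a matter of pointing it at the server. A sketch with placeholder values (substitute your server's IP, the published port, and your real credentials):

import psycopg2

# All connection values below are placeholders.
conn = psycopg2.connect(
    host="203.0.113.10",   # the server's public IP
    port=5432,             # the published host port
    dbname="mydb",
    user="myuser",
    password="mypassword",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()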
I'm contributing to a new project, but getting info about the setup/build is difficult. I can get through these steps in the build process:
$ docker-machine create -d virtualbox dev;
$ eval $(docker-machine env dev)
$ docker-compose build
$ docker-compose up -d
The next command fails:
$ docker-compose run web /usr/local/bin/python manage.py migrate
...with this error:
(2005, "Unknown MySQL server host 'mysql' (0)")
When I change the mysql HOST from mysql to localhost, I get a new error:
(2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)")
I've read about each error, but the proposed solutions aren't relevant to my code (besides the suggestion of setting the HOST to localhost). Which host value is correct and what should be done about the respective error?
I'm not actually sure if mysql is running, where it should be running, and how to check its status.
I suspect that MySQL is in another container, and that your project's container is called "web" in the docker-compose.yml.
When you change mysql to localhost, Django will try to connect to a local MySQL server inside the web container (via a Unix socket), but of course none exists there, because MySQL has its own container, which I suspect is called mysql in docker-compose.yml.
To view the running containers you can use sudo docker ps; if the mysql container is stopped or restarting, you can investigate with docker logs <mysql container name/ID>.
If that's the case, look at the mounts in the docker-compose.yml to investigate further.
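For reference, the relevant part of such a docker-compose.yml usually looks something like the sketch below (service names and credentials assumed). The service name mysql is what resolves inside the web container, which is why the HOST setting should stay mysql rather than localhost:

services:
  web:
    build: .
    links:
      - mysql
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret   # placeholder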
I am trying to run Django in a Docker container, using SQLite as the DB and the Django dev server. So far I have been able to launch the Django server locally:
python .\manage.py runserver
I can build the Docker image using my Dockerfile:
docker build . -t pythocker
But when I run the image with docker run -p 8000:8000 pythocker, no output is shown and the server is not reachable; I have to kill the running container.
If I add the -it flag to the docker run command, the server runs and I can go to http://192.168.99.100:8000 and see the Django welcome page. Why is this flag mandatory here?
Running docker logs on the container gives nothing. I also tried adding custom logging inside manage.py, but it is not displayed in the console or in the Docker logs.
I am using Docker Toolbox for Windows, as I only have a Windows Home computer.
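For context, the Dockerfile behind a setup like this is presumably something like the sketch below (base image and paths assumed). One common reason for silent output without -it is that Python buffers stdout when it is not attached to a TTY, which setting PYTHONUNBUFFERED=1 avoids:

FROM python:3
# Flush print/log output immediately, even when stdout is not a TTY
ENV PYTHONUNBUFFERED=1
WORKDIR /app
COPY . /app
RUN pip install django
EXPOSE 8000
# Bind to 0.0.0.0 so the server is reachable from outside the container
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]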
I am trying to follow this tutorial: http://containertutorials.com/docker-compose/flask-simple-app.html
I have copied the tutorial verbatim, except I changed
From flask import Flask
to
from flask import Flask
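For reference, the tutorial's app.py is roughly the following; treat this as a sketch rather than an exact copy:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello from Flask in Docker'

if __name__ == '__main__':
    # 0.0.0.0 so the server is reachable from outside the container
    app.run(debug=True, host='0.0.0.0')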
I can build it just fine. I can start it, and I get the following when I run docker ps from the command line:
CONTAINER ID   IMAGE             COMMAND           CREATED              STATUS              PORTS                    NAMES
291c8dfe5ddb   whatever:latest   "python app.py"   About a minute ago   Up About a minute   0.0.0.0:5000->5000/tcp   sick_wozniak
I am building this on OSX
I have tried the following:
Looking at this post Deploying a minimal flask app in docker - server connection issues
Running $ python app.py to make sure it works without docker
Creating a Django project and a Dockerfile for that project, then building, running, and accessing it.
So I am confident that Docker and Flask each work on their own, but I cannot get them to work together.
If you are on Linux, then http://localhost:5000 should work, since the container is both running and listening on port 5000.
Otherwise, you are using Docker Machine, and so you need to get the IP of the Docker virtual machine using docker-machine ip. On OSX, for example:
$ open http://$(docker-machine ip default):5000
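Either way, a quick check from a terminal (no browser needed) would be:

$ curl http://localhost:5000                      # Linux
$ curl http://$(docker-machine ip default):5000   # OSX with Docker Machine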
What's the proper development workflow for code that runs in a Docker container?
Solomon Hykes said that the "official" workflow involves building and running a new Docker image for each Git commit. That makes sense, but what if I want to test a change before committing it to the Git repo?
I can think of two ways to do it:
Run the code on a local development server (e.g., the Django development server). Edit a file; test on the dev server; make a Git commit; rebuild the Docker image with the new code; test again on the local Docker container.
Don't run a local dev server. Instead, build and run a new Docker image each time I edit a file, and then test the change on the local Docker container.
Both approaches are pretty inefficient. Is there a better way?
A more efficient way is to run a new container from the latest image that was built (which then has the latest code).
You could start that container with a bash shell so that you are able to edit files from inside the container:
docker run -it <some image> bash -l
You would then run the application in that container to test the new code.
Another way to alter files in that container is to start it with a volume. The idea is to edit files in a directory on the Docker host instead of messing with files from the command line inside the container itself:
docker run -it -v /home/joe/tmp:/data <some image>
Any file that you put in /home/joe/tmp on your Docker host will be available under /data/ in the container. Change /data to whatever path suits your case and hack away.
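Putting this together for the Django case in the question, a sketch of the workflow (image name and paths assumed): mount the project directory as a volume and run the dev server from it, so edits on the host are picked up immediately and Django's dev server auto-reloads on file changes:

$ docker run -it -p 8000:8000 -v /home/joe/myproject:/app <some image> \
    python /app/manage.py runserver 0.0.0.0:8000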