I'm trying to run Django in a Docker container, using SQLite as the database and the Django dev server. So far I was able to launch the Django server locally:
python .\manage.py runserver
I can build the Docker image using my Dockerfile:
docker build . -t pythocker
But when I run the image with docker run -p 8000:8000 pythocker, no output is shown and the machine is not reachable; I have to kill the running container.
If I add the -it flag to the docker run command, the server runs and I can go to http://192.168.99.100:8000 and see the Django welcome page. Why is this flag mandatory here?
docker logs on the container gives nothing. I also tried adding custom logging inside manage.py, but it is not displayed in the console or in the docker logs.
I am using the Docker Toolbox for Windows, as I only have a Windows Home computer.
I created a Docker image for my FastAPI application and then created a container from that image. Now when I connect to that container using docker exec -it <container-id> through my terminal, I can access it, but the problem is that autocomplete doesn't work when I press TAB.
What I understand from your question is that when you enter the Docker environment, you are unable to autocomplete file and folder names.
When you enter the container via the default shell, autocomplete often doesn't work properly. Try entering the container using a bash environment instead, i.e. docker exec -it <container-id> bash. Now you can use TAB to autocomplete files and folders.
It's been asked before, but I haven't been able to find the answer so far. I have a script which is called via a Flask app. It's Dockerized and I use docker-compose.yml. The docker command, which worked outside of Docker, creates an HTML file using openscad. As you can see below, it takes a variable path:
from subprocess import Popen
import time

cmd_args = f"docker run -v '{path}':/documents/ --rm --name manual-asciidoc-to-html " \
           f"asciidoctor/docker-asciidoctor asciidoctor -D /documents *.adoc"
Popen(cmd_args, shell=True)
time.sleep(1)
When the script executes, the print out in Terminal shows:
myapp | /bin/sh: 1: docker: not found
How can I get this docker command to run in my already running docker container?
I don't really get what you are trying to say here, but I'm assuming you want to run the docker command from within your container. You don't do it that way. The way to communicate with the Docker daemon from within a container is to mount the host's Docker Unix socket into the container, using -v when starting the container or adding it to the volumes section of your docker-compose file:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
After doing that you should be able to use the Docker API for Python (https://github.com/docker/docker-py) to connect to the daemon from within the container and perform the actions you want. You should be able to convert the command you initially wanted to execute into simple Docker API calls.
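A minimal sketch of that conversion, assuming the socket is mounted as above and the docker-py package is installed; the path passed at the bottom is a placeholder, not a value from the question:

```python
# Sketch: the original `docker run` command translated to Docker SDK calls.
# Assumes /var/run/docker.sock is mounted into the container and the
# `docker` package (docker-py) is installed.

def asciidoctor_run_kwargs(path):
    """Build keyword arguments equivalent to the original `docker run` line."""
    return {
        "image": "asciidoctor/docker-asciidoctor",
        "command": "asciidoctor -D /documents *.adoc",
        "volumes": {path: {"bind": "/documents", "mode": "rw"}},
        "name": "manual-asciidoc-to-html",
        "remove": True,  # --rm
    }

if __name__ == "__main__":
    import docker  # requires the docker-py package

    client = docker.from_env()  # talks to the mounted Unix socket
    client.containers.run(**asciidoctor_run_kwargs("/documents/source"))
```

Note that without a shell inside the container, the *.adoc glob is passed to asciidoctor literally, just as in the original Popen call.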
Regards
Dominik
I have a Docker application that builds a Postgres database. I am using tox to run my Django tests. When I run docker-compose run web tox against my Docker image on my local machine (I used docker-compose up --build --force-recreate -d to build the image), it shows this error:
E django.db.utils.OperationalError: could not connect to server:
Connection refused
E Is the server running on host "127.0.0.1" and accepting
E TCP/IP connections on port 5432?
But when I run the tox command on its own (not against my Docker image), it works fine.
I also tried running my Django tests without tox, using docker-compose run web python manage.py test against my Docker image. In that case no errors are shown. I guess I have some problem running tox against my Docker image.
I was having the same issue while the DB was definitely running. It turns out tox doesn't pass the environment variables from the host machine into the test environment unless you tell it to, so Django was trying to connect with the wrong DB settings.
The fix was to use the passenv option in the tox.ini file to pass the required variables:
[testenv]
deps = -r requirements.txt
commands = pytest {posargs}
passenv = POSTGRES_USER POSTGRES_PASSWORD POSTGRES_HOST POSTGRES_PORT
You could also use passenv = * to pass everything.
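For context, this is the kind of settings.py block that depends on those variables; the env var names match the passenv line above, but the defaults here are assumptions, not taken from the question:

```python
# Sketch: Django DATABASES settings driven by the POSTGRES_* environment
# variables that passenv forwards into the tox test environment.
# Default values below are illustrative assumptions.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "postgres"),
        "USER": os.environ.get("POSTGRES_USER", "postgres"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
        "HOST": os.environ.get("POSTGRES_HOST", "127.0.0.1"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}
```

Without passenv, the os.environ lookups fall back to the defaults inside the tox environment, which is why the connection was refused.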
This is probably caused by the well-known issue that the test container starts before the DB container is fully functional. Although you set a dependency/link in docker-compose, Docker only waits for the dependent container to be up, not ready. If DB initialization takes, say, 30 s, the second container will be started before that and you will see this issue.
The solution is to put a small script on the second container that polls the DB port and waits until the DB is ready. Check SO; there are multiple similar questions with nice solutions for making the second container wait for the DB.
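A minimal sketch of such a wait helper, written here in Python rather than bash; the "db" hostname and port 5432 are assumptions matching a typical docker-compose Postgres service:

```python
# Sketch: poll a TCP port until the database accepts connections, or give up.
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Return True once host:port accepts a TCP connection, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(interval)
    return False

if __name__ == "__main__":
    # e.g. wait for the Postgres service named "db" before starting the tests
    if not wait_for_port("db", 5432):
        raise SystemExit("database never became ready")
```

Run this (or an equivalent shell loop) as the container's entrypoint before launching the test command.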
So I'm trying to run this project using docker.
I followed the standard docker protocol:
docker build -t orange .
docker run -p 8080:8080 orange
I used the following command to check that the docker image was indeed created.
docker image ls
However, after running these commands, there is still no site running on localhost:8080. Any tips on troubleshooting this?
EDIT: After using the right port, I'm getting a directory listing instead of an actual site.
Looking at the repository, it seems that the exposed port is 9999, not 8080. Also, it looks like you can use docker-compose; that is, you can run
docker-compose up --build
to spin up the server. You should then be able to reach it at localhost:9999.
I am trying to do
http://containertutorials.com/docker-compose/flask-simple-app.html
I have copied the tutorial verbatim, except I changed
From flask import Flask
to
from flask import Flask
I can build it just fine. I can start it, and I get the following when I run docker ps from the command line:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
291c8dfe5ddb whatever:latest "python app.py" About a minute ago Up About a minute 0.0.0.0:5000->5000/tcp sick_wozniak
I am building this on OSX
I have tried the following:
Looking at this post Deploying a minimal flask app in docker - server connection issues
Running $ python app.py to make sure it works without docker
Creating a django project and a dockerfile for that project. Then build, run and access.
So I am confident that docker is working and flask is working independently of each other, but I cannot get them to work together.
If you're on Linux, then http://localhost:5000 should work, seeing as the container is both running and listening on port 5000.
Otherwise, you are using Docker Machine, and so you need to get the IP of the Docker virtual machine using docker-machine ip. On OSX, for example:
$ open http://$(docker-machine ip default):5000