docker-compose flask network issue - python

I have a Python Flask project which is supposed to be a web app for an internal network. It's in a Docker image which is started with a docker-compose file.
Sometimes when I run it, the Flask server doesn't get requests from the outside world. I figure it has to be a problem with the Docker network that docker-compose automatically creates. Whenever this problem occurs I have to restart the box and then bring the container back up, and it fixes itself.
Has anyone else seen this?
When I say it doesn't see connections from outside the box I mean HTTP requests never make it to the flask server. I can attempt to go to the URL corresponding to the flask server from a different machine and the flask server sees nothing. However, if I attempt to send an HTTP GET request from inside the box (not inside the container, but on the box the container is running on) the flask server responds.
So this leads me to believe docker-compose is creating a docker network which isn't configured correctly to allow the container to listen to outside requests.
Here's my docker compose file:
version: '3.7'
services:
  falcon:
    image: "company.com/internal/falcon:0.1"
    container_name: falcon
    env_file:
      - ~/.env
    ports:
      - "80:80"
    volumes:
      - ${REPOS}/falcon:/app
    command: /conda/bin/falcon start

This hasn't been tested extensively, but it seems to have solved the problem:
version: '3.7'
services:
  falcon:
    image: "company.com/internal/falcon:0.1"
    container_name: falcon
    network_mode: bridge  # also tried host, I think bridge is right.
Another solution was to omit the network_mode above and put this at the bottom of the file:
networks:
  default:
    external:
      name: prod_default
I still don't know why it happened, but both of these solutions seemed to fix it.
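For what it's worth, a third variant that also avoids the auto-created default is to declare a user-defined bridge network explicitly. This is only a sketch; the network name webnet and the trimmed service definition are mine, not from the original setup:

```yaml
version: '3.7'
services:
  falcon:
    image: "company.com/internal/falcon:0.1"
    ports:
      - "80:80"
    networks:
      - webnet          # hypothetical user-defined network
networks:
  webnet:
    driver: bridge      # explicit bridge instead of the auto-created default
```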

Related

I have an error in docker-compose about an undefined network

I am trying to dockerize two applications, one being Streamlit and the other FastAPI. I have built the individual Docker images for them, and am now trying to run them at the same time with docker-compose so they can communicate.
I have written the code, but when I run docker-compose up --build I get an error that says
ERROR: Service "backend" uses an undefined network "AIservice"
because of this part of the configuration:
networks:
  AIservice:
    aliases:
      - backend.docker
How do I fix this error?
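The error means that backend references a network that is never declared at the top level of the compose file. A minimal sketch of a fix (the image name is a placeholder) is to add a top-level networks section declaring AIservice:

```yaml
version: '3'
services:
  backend:
    image: backend-image      # placeholder image name
    networks:
      AIservice:
        aliases:
          - backend.docker
networks:
  AIservice:                  # top-level declaration the error is about
```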

Port not exposed but still reachable on internal docker network

(I'm having the inverse problem of exposing a port and it's not reachable.)
In my case I have 2 containers on the same network. One is an Alpine-based Python container running a Flask app. The other is a barebones Ubuntu 18.04. The services are initialised basically like this:
docker-compose.yml:
version: '3'
services:
  pythonflask:
    build: someDockerfile  # from python:3.6-alpine
    restart: unless-stopped
  ubuntucontainer:
    build: someOtherDockerfile  # from ubuntu:18.04
    depends_on:
      - pythonflask
    restart: unless-stopped
The Python Flask app runs on port 5000.
Notice the lack of expose: - 5000 in the docker-compose.yml file.
The problem is that I'm able to get a correct response when cURLing http://pythonflask:5000 from inside ubuntucontainer.
Steps:
$ docker exec -it ubuntucontainer /bin/bash
...and then within the container...
root@ubuntucontainer:/# curl http://pythonflask:5000/
...correctly returns my response from the Flask app.
However from my machine running docker:
$ curl http://localhost:5000/
Doesn't return anything (as expected).
As I test different ports, they get automatically exposed each time. What is doing this?
Connectivity between containers is achieved by placing the containers on the same docker network and communicating over the container ip and port (rather than the host published port). So what does expose do then?
Expose is documentation
Expose in docker is used by image creators to document the expected port that the application will listen on inside the container. With the exception of some tools and a flag in docker that uses this metadata documentation, it is not used to control access between containers or modify docker's networking. Applications may be reconfigured at runtime to listen to a different port, and you can connect to ports that have not been exposed.
For DNS lookups between containers, the network needs to be user created, not one of the default networks from docker (e.g. DNS is not enabled in the default bridge network named "bridge"). With DNS, you can lookup the container name, service name (from a compose file), and any network aliases created for that container on that network.
The other half of the equation is "publish" in docker. This creates a mapping from the host to the container to allow external access. It is implemented with a proxy process that runs on the host and forwards new connections. Because of the implementation, you can publish a port on the host even if the container is not listening on the port, though you would receive an error when trying to connect to that port in that scenario.
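To sketch the difference in compose terms (the service and Dockerfile names follow the question; the ports entry is illustrative):

```yaml
version: '3'
services:
  pythonflask:
    build: someDockerfile
    expose:
      - "5000"         # documentation only: neither grants nor restricts access
    # ports:
    #   - "5000:5000"  # by contrast, publishing would forward host port 5000
```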
The lack of expose: ... just means that there is no port exposed from the service group you defined in your docker-compose.yml
Within the images you use, there are still exposed ports which are reachable from within the network that is automatically created by docker-compose.
That is why you reach one container from within another. In addition every container can be accessed via service name from the docker-compose.yml on the internal network.
You should not be able to access Flask from your host (http://localhost:5000).
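If host access were actually wanted, the usual fix would be a published port rather than expose (a sketch based on the compose file in the question):

```yaml
version: '3'
services:
  pythonflask:
    build: someDockerfile
    restart: unless-stopped
    ports:
      - "5000:5000"    # publish container port 5000 on the host
```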

Pycharm debugging manage.py commands in docker compose

I have a pretty simple setup. I'm running Pycharm 2018.2.3 and using docker compose to spin up 3 containers.
- My Django application
- NGINX to serve static
- Postgres DB
I've configured the remote interpreter for debugging the container, and breakpoints work just fine in most cases, at least when I hit my API endpoints or some other action to the Django application.
What does not work is when I run one of my custom manage.py commands. I've tried this two ways so far.
I set up another debug configuration in PyCharm to execute the command. This results in another container spinning up (in place of the original), running the command without stopping at any breakpoints, and then the whole container shuts down.
I've logged into the container and run the manage.py command directly via the command line, and it executes in the container, but again no breakpoints.
The documented approach works in the normal case, but I can't find any help for debugging these commands in the container.
Thanks for any help or tips.
In order to debug Django commands in a Docker Container you can create a new Run/Debug Configuration with following setup:
- Use a Python configuration template
- Script path: absolute location of manage.py
- Parameters: the Django command you want to debug/execute
- !important! Python interpreter: a Docker Compose interpreter
Just an update in case anybody comes across a similar problem. My personal solution was to not use the manage.py commands, but instead make these same commands available via an HTTP call.
I found that it was easier (and often even more useful) to simply have an endpoint like myserver.com/api/do-admin-function and restrict that to administrative access.
When I put a breakpoint in my code, even running in the container, it breaks just fine as expected and allows me to debug the way I'd like.
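A minimal sketch of that approach in Flask (the route, the token check, and the command body here are all illustrative; the original project used Django and proper admin permissions):

```python
from flask import Flask, abort, request

app = Flask(__name__)

def rebuild_search_index():
    # Stand-in for logic that previously lived in a manage.py command.
    # A breakpoint set here is hit during the normal request cycle,
    # so the remote debugger attaches the same way it does for any view.
    return {"reindexed": 42}

@app.route("/api/do-admin-function", methods=["POST"])
def do_admin_function():
    # Hypothetical token check standing in for real administrative access control.
    if request.headers.get("X-Admin-Token") != "secret":
        abort(403)
    return rebuild_search_index()
```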
It can depend on the exact content of your docker-compose.yml.
See for instance the section "An interactive debugger inside a running container!" of the article "A Simple Recipe for Django Development In Docker (Bonus: Testing with Selenium)" from Adam King.
His docker-compose.yml includes:
version: "2"
services:
  django:
    container_name: django_server
    build:
      context: .
      dockerfile: Dockerfile
    image: docker_tutorial_django
    stdin_open: true
    tty: true
    volumes:
      - .:/var/www/myproject
    ports:
      - "8000:8000"
In it, see:
stdin_open: true
tty: true
[Those 2 lines] are important, because they let us run an interactive terminal.
Hit ctrl-c to kill the server running in your terminal, and then bring it up in the background with docker-compose up -d.
docker ps tells us it’s still running.
We need to attach to that running container, in order to see its server output and pdb breakpoints.
The command docker attach django_server will present you with a blank line, but if you refresh your web browser, you’ll see the server output.
Drop import pdb; pdb.set_trace() in your code and you’ll get the interactive debugger, just like you’re used to.

Python remote debugging with docker

I'm making a Flask webapp with Docker, and I'm looking for a way to enable PyCharm debugging. So far I'm able to deploy the app using the built-in Docker support; the app is run automatically because the Dockerfile's configuration uses supervisord.
When I connect my remote interpreter I get the usual:
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 579-233-679
But the POST I perform clearly isn't going to that interpreter, as I've marked all of the routes as breakpoints; I'm still getting the original results from the webapp and none of the breakpoints do anything.
I guess I'm asking:
Am I going about this the wrong way? (should I just use a VM, remote debug on that and then containerise the VM later on)
Is what I'm trying to do even possible?
Should I just manually debug everything instead if I use this method of development?
Update:
The way to correctly enable debug mode for Docker is to create a docker-compose.yml; this tells PyCharm what to do when you give it a Docker Compose interpreter, so that you can hook onto a service. My yml looks like:
version: '3.0'
services:
  web:
    build: .
    command: python3 app/main.py
    volumes:
      - .:/app
    ports:
      - "80:80"
      - "22"
The yml file isn't generated; you make it yourself.
This publishes port 80, which I've configured Flask to listen on, and allows the debugger to connect using port 22.
I followed https://blog.jetbrains.com/pycharm/2017/03/docker-compose-getting-flask-up-and-running/ quite closely. (if anyone stumbles on to this and needs a hand then comment I'll see if I can help)

Docker flask can't connect

I am trying to do
http://containertutorials.com/docker-compose/flask-simple-app.html
I have copied the tutorial verbatim, except I changed
From flask import Flask
to
from flask import Flask
I can build it just fine. I can start it, and I get the following when I run docker ps from the command line:
CONTAINER ID   IMAGE             COMMAND           CREATED              STATUS              PORTS                    NAMES
291c8dfe5ddb   whatever:latest   "python app.py"   About a minute ago   Up About a minute   0.0.0.0:5000->5000/tcp   sick_wozniak
I am building this on OSX
I have tried the following
Looking at this post Deploying a minimal flask app in docker - server connection issues
Running $ python app.py to make sure it works without docker
Creating a django project and a dockerfile for that project. Then build, run and access.
So I am confident that docker is working and flask is working independently of each other, but I cannot get them to work together.
If on Linux, then http://localhost:5000 should work seeing as the container is both running and listening on port 5000.
Otherwise, you would be using Docker Machine, and so you need to get the IP of the docker virtual machine using docker-machine ip. On OSX, for example
$ open http://$(docker-machine ip default):5000
