I am using docker-compose to deploy multiple Flask microservices. Here is the compose file:
version: '3'
services:
  test-api:
    volumes:
      - ./test-api:/test-api
    build: test-api
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
  search:
    volumes:
      - ./search:/search
    environment:
      - HTTP_PORT=5000
      - REDIS_URL=redis://redis:6379/0
    build: search
    ports:
      - "5001:5000"
    links:
      - redis
Now I have to access these services from a single URL, e.g. http://example.com/test-api or http://example.com/search, but I am unable to figure it out since the two services are running on two different ports. I know I need to use nginx and configure it so that I can access them, but I am not sure how to do that. Can someone help me with this, or at least point me to some docs to read so I can understand the routing?
Also, both services use /health to report the result of their health check. How do I access the health check for both services?
As you wrote, you should place a load balancer in front of your services. Create a docker network and do not expose ports on the application containers; the only container that exposes a port should be the nginx container, which handles all client requests. The test-api, search and nginx containers should be part of the same docker network so that nginx can dispatch each request to the right container. Your docker-compose file should look like this:
version: '3'
services:
  loadbalancer:
    image: nginx
    ports:
      - "80:80"
    networks:
      - my_network
  test-api:
    volumes:
      - ./test-api:/test-api
    build: test-api
    networks:
      - my_network
  redis:
    image: "redis:alpine"
    networks:
      - my_network
  search:
    volumes:
      - ./search:/search
    environment:
      - HTTP_PORT=5000
      - REDIS_URL=redis://redis:6379/0
    build: search
    networks:
      - my_network
networks:
  my_network:
    driver: <driver>
I would advise you not to use links any more; they are old and deprecated.
You can learn more about docker networks from the links below:
https://docs.docker.com/network/
https://docs.docker.com/compose/networking/
https://docs.docker.com/compose/compose-file/#network-configuration-reference
So, for the people who are looking for a quick solution, here is my nginx config file:
# the (empty) events block is required when this file is used as the main nginx.conf
events {}

http {
    server {
        listen 80;

        location /test {
            proxy_pass http://test-api:5000;
        }

        location /search {
            proxy_pass http://search:5000;
        }

        location /health-test {
            proxy_pass http://test-api:5000/health;
        }

        location /health-search {
            proxy_pass http://search:5000/health;
        }
    }
}
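Note that the stock nginx image ships with its own default configuration, so this file still has to be mounted into the loadbalancer container for the routing to take effect. A minimal sketch of that service, assuming the config is saved as ./nginx.conf next to the compose file (the path and mount point are assumptions, not part of the original setup):

  loadbalancer:
    image: nginx
    ports:
      - "80:80"
    volumes:
      # assumed path; replaces the image's default config
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - my_network

With this in place, both health checks are reachable through the single entry point, e.g. http://example.com/health-test and http://example.com/health-search.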
I'm running a flask python app, within which I make some Ajax calls.
Running it using the Flask development server works fine, the calls run in the background and I can continue using the app.
When moving to a gunicorn and Nginx reverse proxy setup, the app seems to wait for that Ajax call to be processed (often ending in a timeout). Why is that? Does this have something to do with multithreading? I'm new to gunicorn/nginx; thanks for the help.
The setup is pretty much the same as described here: https://testdriven.io/blog/dockerizing-flask-with-postgres-gunicorn-and-nginx/#docker
The nginx config:
upstream app {
    server web:5000;
}

server {
    listen 80;

    location / {
        proxy_pass http://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        alias /home/app/web/project/static/;
    }

    location /media/ {
        alias /home/app/web/project/media/;
    }
}
docker-compose file:
version: '3.8'
services:
  web:
    container_name: app
    restart: always
    build:
      context: ./services/web
      dockerfile: Dockerfile.prod
    expose:
      - 5005
    env_file:
      - ./.env.prod
    command: gunicorn --bind 0.0.0.0:5005 manage:app
    volumes:
      - static_volume:/home/hello_flask/web/app/static
      - media_volume:/home/hello_flask/web/app/media
    depends_on:
      - db
  db:
    container_name: app_prod_db
    restart: always
    image: postgres:13-alpine
    volumes:
      - postgres_data_prod:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db
  nginx:
    container_name: nginx
    restart: always
    build: ./services/nginx
    volumes:
      - static_volume:/home/app/web/app/static
      - media_volume:/home/app/web/app/start/media
    image: "nginx:latest"
    ports:
      - "5000:80"
    depends_on:
      - web
volumes:
  postgres_data_prod:
  static_volume:
  media_volume:
I don't think the Ajax call is the issue, but just in case, here it is:
$("#load_account").on('submit', function(event) {
$.ajax({
data : {
vmpro : $('#accountInput').val()
},
type : 'POST',
url : '/account/load_account'
})
.done(function(data) {
if (data.error) {
$('#errorAlert_accountvmproInput').text(data.error).show();
$('#successAlert_accountInput').hide();
}
else {
$('#successAlert_accountInput').text(data.overview).show();
$('#errorAlert_accountInput').hide();
}
});
Solved:
gunicorn was running a single worker of the default sync class.
Increasing the number of workers to 4 solved the problem.
However, I actually opted to use gevent-class workers in the end.
My updated docker-compose.yml includes:
command: gunicorn -k gevent -w 2 --bind 0.0.0.0:5005 manage:app
Detailed in gunicorn documentation HERE
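Worth noting for anyone copying this: the gevent worker class is a separate package, so gevent has to be installed in the web image or gunicorn will fail to boot with -k gevent. A minimal sketch, assuming a requirements.txt-based image like the one in the linked tutorial (the file name and contents are assumptions):

# requirements.txt (assumed) -- gevent must be importable for "-k gevent"
flask
gunicorn
gevent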
I have created the following Docker containers to run zookeeper, kafka, ksqldb-server, and ksqldb-cli. When I run the command docker-compose exec ksqldb-cli ksql http://ksqldb-server:8088 on the same machine where Docker is running, ksqldb-cli can access the ksqldb-server just fine.
However, I want to access the ksqldb-server from a different laptop on the same local network, not only from the machine running Docker. How do I do that?
Here's the relevant docker-compose.yml file:
version: '3.8'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    networks:
      - kafka_network
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    networks:
      - kafka_network
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
      - 29093:29093
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: EXTERNAL_SAME_HOST://:29092,EXTERNAL_DIFFERENT_HOST://:29093,INTERNAL://:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,EXTERNAL_SAME_HOST://localhost:29092,EXTERNAL_DIFFERENT_HOST://192.168.178.218:29093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL_SAME_HOST:PLAINTEXT,EXTERNAL_DIFFERENT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  ksqldb-server:
    image: confluentinc/cp-ksqldb-server:latest
    container_name: ksqldb-server
    hostname: ksqldb-server
    networks:
      - kafka_network
    depends_on:
      - kafka
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_BOOTSTRAP_SERVERS: "kafka:9092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 1
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: 'true'
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: 'true'
  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:latest
    container_name: ksqldb-cli
    networks:
      - kafka_network
    depends_on:
      - kafka
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
networks:
  kafka_network:
    name: kafka_docker_sse
When I try to access the ksqldb-server from a different laptop on the same local network, I get a connection error (connection refused). I tried accessing the ksqldb-server using the Python ksql-python package:
pip install ksql
from ksql import KSQLAPI

client = KSQLAPI('http://ksql-server:8088')
# OR
# client = KSQLAPI('http://0.0.0.0:8088')
# client = KSQLAPI('http://192.168.178.218:8088')

if __name__ == '__main__':
    print(client)
I also tried changing the KSQL_LISTENERS: "http://0.0.0.0:8088" under the ksqldb-server to KSQL_LISTENERS: "http://192.168.178.218:8088" but that doesn't work either.
Any hints would be really helpful, as I have been stuck on this for the last two days!
You'll need to keep KSQL_LISTENERS: "http://0.0.0.0:8088". This makes the server inside the container accept incoming traffic on port 8088 from any interface.
Then, with ports in Compose, traffic hitting port 8088 on the host is forwarded to port 8088 in the container.
So, for any external client, you need to connect to that host's LAN/external IP on port 8088. You may also need to explicitly allow TCP traffic to that port through the server host's firewall.
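Concretely, assuming 192.168.178.218 is the Docker host's LAN IP (the same address used in the advertised-listener config above), the client on the other laptop would point at that address rather than at the container name, which only resolves inside the compose network. A sketch:

from ksql import KSQLAPI

# From a different machine on the same LAN: use the Docker host's IP,
# not "ksqldb-server" (that name only resolves inside the compose network).
client = KSQLAPI('http://192.168.178.218:8088')

if __name__ == '__main__':
    print(client)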
I have two separate Docker containers, with separate docker-compose YAML files, too: one ('mongodb') running MongoDB, the other ('logger') doing data scraping in Python. The latter should write some results into MongoDB.
I used separate YAML files so that I can easily stop one container without stopping the other.
To wire this up I used docker-compose's bridge network capability, with the following two YAML files:
networks:
  wnet:
    driver: bridge
services:
  mongodb:
    image: mongo:4.0.9
    container_name: mongodb
    ports:
      - "27018:27017"
    volumes:
      - mongodb-data:/data/db
    logging: *default-logging
    restart: unless-stopped
    networks:
      - wnet
volumes:
  mongodb-data:
    name: mongodb-data
and
networks:
  wnet:
    driver: bridge
services:
  logger:
    build:
      context: .
    image: logger:$VERSION
    container_name: logger
    environment:
      - TARGET=$TARGET
    volumes:
      - ./data:/data
    restart: unless-stopped
    networks:
      - wnet
The Python container should now persist the scraped data within the MongoDB database. So I tried the following variants:
from pymongo import MongoClient
client = MongoClient(port=27018, host='mongodb') # V1
client = MongoClient(port=27018) # V2
db = client['dbname']
Then, executing one of the following commands throws the error:
db.list_collection_names()
db.get_collection('aaa').insert_one({ 'a':1 })
The response I get is
pymongo.errors.ServerSelectionTimeoutError: mongodb:27018: [Errno -2] Name or service not known
Any idea?
Thanks.
What finally worked was to refer to the network (defined in the mongodb compose file) by its composed name (mongodb + wnet = mongodb_wnet), and to add the external option. This makes the YAML file of the logger container look like this:
services:
  logger:
    build:
      context: .
    image: logger:$VERSION
    container_name: logger
    environment:
      - TARGET=$TARGET
    volumes:
      - ./data:/data
    restart: unless-stopped
    networks:
      - mongodb_wnet
networks:
  mongodb_wnet:
    external: true
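One more detail that is easy to trip over: inside the shared Docker network, MongoDB is reachable on the container port 27017; the 27018:27017 mapping only exists on the host. A minimal sketch of the client call under that assumption:

from pymongo import MongoClient

# Inside the mongodb_wnet network: use the container name and the
# container port (27017); the host-mapped 27018 does not apply here.
client = MongoClient(host='mongodb', port=27017)
db = client['dbname']
print(db.list_collection_names())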
However, as mentioned by @BellyBuster, it might be a good idea to use a single docker-compose file. I was not aware that it is quite easy to start, stop, and build individual containers belonging to the same YAML file.
SO also has enough posts on that, e.g. How to restart a single container with docker-compose and/or docker compose build single container.
I have a Flask app with an Nginx reverse proxy set up with docker-compose. I can get everything to work in a single container without problems, but I need to run the staging and production servers on the same machine, so I am trying to migrate my setup to multiple containers with a separate nginx-proxy container. The reverse proxy setup seems to be OK, but when I access the app through the proxy, Flask has some issue with the request. The docker-compose files and the server output are below.
NGINX-PROXY docker-compose.yml
version: "3.5"
services:
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- 80:80
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
networks:
- proxy
networks:
proxy:
Flask docker-compose.yml
version: '3.5'
services:
  # other services defined, not relevant for the issue
  data-api:
    environment:
      FLASK_ENV: development
      VIRTUAL_HOST: app.local
    build: ./dataAPI
    expose:
      - 5000
    ports:
      - 5000:5000
    volumes:
      - ./dataAPI:/dataAPI
    networks:
      - nginx_proxy
networks:
  nginx_proxy:
    external: true
I added a line in /etc/hosts for app.local.
I spin up nginx first, then the app. If I access the app directly at 0.0.0.0:5000/staging/data, the request is served without problems, but if I go through the proxy at app.local/staging/data, the Flask app throws a 404:
Flask log
data-api_1 | 172.20.0.1 - - [30/May/2019 14:13:29] "GET /staging/data/ HTTP/1.1" 200 -
data-api_1 | 172.20.0.2 - - [30/May/2019 14:13:31] "GET /staging/data/ HTTP/1.1" 404 -
It doesn't look like you put the containers on the same network. The nginx-proxy container is using a network named proxy, while the Flask container is using a network named nginx_proxy.
By the way, docker-compose is meant for composing applications that require multiple containers. Rather than using a separate docker-compose file for each container, this setup might be easier if you put both services in the same docker-compose file. Then you don't even need to set up a separate network, as Compose creates a default network for the services.
Another note: since you are using an nginx reverse proxy, you probably don't want to map the Flask port to the host machine.
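If you do want to keep two compose files, one possible fix (a sketch, not the only option) is to give the proxy network a fixed name in the nginx-proxy file and reference exactly that name as an external network from the Flask file:

# nginx-proxy docker-compose.yml
networks:
  proxy:
    name: nginx_proxy

# Flask docker-compose.yml (unchanged from above)
networks:
  nginx_proxy:
    external: true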
Well, my question is: how do I create a file which can start Node/Angular, python main_worker.py, MongoDB and Redis? I really do not know where to start.
I just want to start my web program without opening 7 consoles to start each service (Python worker, Angular, Node and the databases).
I only know about Angular and MongoDB, not the others, but I hope this helps: try the following approach; you still need one console.
"scripts": {
"dev": "concurrently \"mongod\" \"ng serve --proxy-config proxy.conf.json --open\" \"tsc -w -p server\" \"nodemon dist/server/app.js\"",
"prod": "concurrently \"mongod\" \"ng build --aot --prod && tsc -p server && node dist/server/app.js\""
},
You can use Docker Compose to start all your services with a single command:
docker-compose up
Learn more about it here: https://docs.docker.com/compose/reference/up/
You will need to create a docker-compose.yml in your project, which will look something like this:
version: "3.5"
services:
mongodb:
container_name: mongo
hostname: mongo
image: mongo
restart: always
volumes:
- mongo_data:/var/lib/mongo/data
networks:
- your-app-network
ports:
- 27017:27017
environment:
- YOUR_VARIABLE:value
redis:
container_name: redis
hostname: redis
image: redis
restart: always
volumes:
- rediso_data:/var/lib/redis/data
networks:
- your-app-network
ports:
- 6380:6380
environment:
- YOUR_VARIABLE:value
volumes:
mongo_data:
redis_data:
networks:
go-app:
name: your-app-network
Note: the sample above is not a ready-to-use docker-compose file; it just shows the idea. You will have to edit it, add variables and settings specific to your application, and add more services such as Node.js, Python, etc.
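For the Node/Angular app and the Python worker mentioned in the question, you would typically add them as extra services built from your own Dockerfiles. A rough, hypothetical sketch to append under services (service names, folder paths, ports and Dockerfiles are assumptions):

  worker:
    build: ./worker              # assumed folder containing a Dockerfile for the Python worker
    command: python main_worker.py
    depends_on:
      - mongodb
      - redis
    networks:
      - your-app-network
  web:
    build: ./web                 # assumed folder containing the Angular/Node app and its Dockerfile
    ports:
      - 4200:4200
    depends_on:
      - worker
    networks:
      - your-app-network

With that in place, a single docker-compose up starts the databases, the worker, and the web app together.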