(root) Additional property app is not allowed - python

This is the docker-compose.yml, and I didn't get this error until today. I didn't touch the .yml, and the server runs as usual when I run it from the Docker app.
app:
  build: .
  command: python -u app.py
  ports:
    - "5000:5000"
  volumes:
    - .:/app
  links:
    - db
db:
  image: mongo:latest
  hostname: dsairline_mongodb
  environment:
    - MONGO_INITDB_DATABASE=ds_airline_db
    - MONGO_INITDB_ROOT_USERNAME=root
    - MONGO_INITDB_ROOT_PASSWORD=pass
  volumes:
    - ./init-db.js:/docker-entrypoint-initdb.d/init-db.js:ro
  ports:
    - 27017:27017

That looks like an obsolete version 1 Compose file. Recent versions of Compose have both removed support for this version of the file, and also declared the top-level version: key optional, so this file is now being interpreted as conforming to the Compose Specification, which it doesn't.
I'd recommend changing this file to use version 2 or 3 of the Compose format, which are supported by all current versions of the Compose tool. (Version 2 supports some options like resource constraints for standalone Docker installations; version 3 has several options only useful with Docker Swarm.) To update this file:
Add a top-level version: '2.4' or version: '3.8' line declaring the file format you're using.
Add a top-level services: block, and move all of this existing content under it.
Delete the obsolete links: option; the newer file formats automatically provide a Docker network that replaces Docker links.
version: '3.8' # or '2.4'                       # add
services:                                       # add
  app:
    build: .
    command: python -u app.py                   # (delete? duplicates Dockerfile CMD)
    ports:
      - "5000:5000"
    volumes:                                    # (delete? duplicates Dockerfile COPY)
      - .:/app
    # links:                                    # delete, obsolete
    #   - db
  db:
    image: mongo:latest
    hostname: dsairline_mongodb
    environment:
      - MONGO_INITDB_DATABASE=ds_airline_db
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=pass
    volumes:
      - ./init-db.js:/docker-entrypoint-initdb.d/init-db.js:ro
      # - dbdata:/data/db                       # add?
    ports:
      - 27017:27017
# volumes:                                      # add?
#   dbdata:                                     # add?
I also propose two other changes you might consider. Your MongoDB instance isn't configured to persist data anywhere; if you add a top-level volumes: block, you can create a named volume, which you can then add to the db service volumes: block. (This wasn't an option in the version 1 Compose file.) You also have options on your app container to overwrite the code in the image with a volume mount and override the Dockerfile's CMD, but these probably aren't necessary and you can also delete them.
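For reference, a minimal sketch of that persistence change (the volume name dbdata is arbitrary; /data/db is where the official mongo image keeps its data):
  db:
    volumes:
      - ./init-db.js:/docker-entrypoint-initdb.d/init-db.js:ro
      - dbdata:/data/db
volumes:
  dbdata: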

Related

How to import and manage the database in postgresql on the docker?

I have my Django project with structure like this:
myapp/
  manage.py
  Dockerfile
  docker-compose.yml
  my-database1.sql
  my-database2.sql
  requirements.txt
  pgadmin/
  pgadmin-data/
  myapp/
    __init__.py
    settings.py
    urls.py
    wsgi.py
This is my docker-compose.yml file:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
- ./my-database1.sql:/docker-entrypoint-initdb.d/my-database1.sql
- ./my-database2.sql:/docker-entrypoint-initdb.d/my-database2.sql
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- PGDATA=/var/lib/postgresql/data
pgadmin:
image: dpage/pgadmin4:4.18
restart: unless-stopped
environment:
- PGADMIN_DEFAULT_EMAIL=admin#domain.com
- PGADMIN_DEFAULT_PASSWORD=admin
- PGADMIN_LISTEN_PORT=80
ports:
- "8090:80"
volumes:
- ./pgadmin-data:/var/lib/pgadmin
links:
- "db:pgsql-server"
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
volumes:
db-data:
pgadmin-data:
I have three problems with my app:
1 - How can I import my my-database1.sql and my-database2.sql databases into PostgreSQL? The approach in my code (./my-database1.sql:/docker-entrypoint-initdb.d/my-database1.sql) doesn't work.
2 - After the databases are imported successfully, how can I see them inside pgAdmin?
3 - My code should write something into the tables of my-database1.sql. How should I connect to it after importing it into PostgreSQL?
The postgres image only runs the files in /docker-entrypoint-initdb.d when it starts with an empty data directory. Your docker-compose.yml configures a persistent volume for the database data, so Postgres will not pick up changes to the SQL files on later deployments. Something similar happens when one of the scripts fails. Here is the relevant excerpt from the documentation:
Warning: scripts in /docker-entrypoint-initdb.d are only run if you start the container with an empty data directory; any pre-existing database will be left untouched on container startup. One common problem is that if one of your /docker-entrypoint-initdb.d scripts fails (which will cause the entrypoint script to exit) and your orchestrator restarts the container with the already initialized data directory, it will not continue with your scripts.
Check the site documentation to see how you can make your initialization scripts more robust so they can handle failures.
To solve your issue, delete the volume manually or use the -v flag when running docker-compose down, and then redeploy your application. (Note that your db service stores its data in a bind mount at ./data/db rather than a named volume, so you may need to delete that host directory instead.) From the docker-compose down help:
-v, --volumes Remove named volumes declared in the `volumes`
section of the Compose file and anonymous volumes
attached to containers.

How to access MongoDB from PyMongo when using separate docker-compose yaml's?

I have two separate Docker containers and two separate docker-compose YAML files: one ('mongodb') for running MongoDB, the other ('logger') for data scraping in Python. The latter should write some results into MongoDB.
I used separate YAML files so that I can easily stop one container without stopping the other.
To set this up I used docker-compose's bridge network capability, with the following two YAML files:
networks:
  wnet:
    driver: bridge
services:
  mongodb:
    image: mongo:4.0.9
    container_name: mongodb
    ports:
      - "27018:27017"
    volumes:
      - mongodb-data:/data/db
    logging: *default-logging
    restart: unless-stopped
    networks:
      - wnet
volumes:
  mongodb-data:
    name: mongodb-data
and
networks:
  wnet:
    driver: bridge
services:
  logger:
    build:
      context: .
    image: logger:$VERSION
    container_name: logger
    environment:
      - TARGET=$TARGET
    volumes:
      - ./data:/data
    restart: unless-stopped
    networks:
      - wnet
The Python container should now persist the scraped data within the MongoDB database. So I tried the following variants:
from pymongo import MongoClient
client = MongoClient(port=27018, host='mongodb') # V1
client = MongoClient(port=27018) # V2
db = client['dbname']
Then, executing one of the following commands throws the error:
db.list_collection_names()
db.get_collection('aaa').insert_one({ 'a':1 })
The response I get is
pymongo.errors.ServerSelectionTimeoutError: mongodb:27018: [Errno -2] Name or service not known
Any idea?
Thanks.
What finally worked was to refer to the network (defined in the mongodb compose file) by its composed name (project name mongodb + wnet = mongodb_wnet) and to mark it as external. This makes the logger container's YAML file look like:
services:
  logger:
    build:
      context: .
    image: logger:$VERSION
    container_name: logger
    environment:
      - TARGET=$TARGET
    volumes:
      - ./data:/data
    restart: unless-stopped
    networks:
      - mongodb_wnet
networks:
  mongodb_wnet:
    external: true
However, as mentioned by @BellyBuster, it might be a good idea to use a single docker-compose file. I was not aware that it is quite easy to start, stop, and build individual containers belonging to the same YAML file.
SO also has enough posts on that, e.g. How to restart a single container with docker-compose and/or docker compose build single container.
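For reference, a rough sketch of what a single combined compose file could look like, reusing the service, volume, and environment names from the two files above (the logging anchor is omitted here):
services:
  mongodb:
    image: mongo:4.0.9
    container_name: mongodb
    ports:
      - "27018:27017"
    volumes:
      - mongodb-data:/data/db
    restart: unless-stopped
  logger:
    build:
      context: .
    image: logger:$VERSION
    container_name: logger
    environment:
      - TARGET=$TARGET
    volumes:
      - ./data:/data
    restart: unless-stopped
    depends_on:
      - mongodb
volumes:
  mongodb-data:
    name: mongodb-data
With both services in one file (and therefore on the same default network), the Python code can connect to the service name on the container port, e.g. MongoClient(host='mongodb', port=27017); the published port 27018 is only needed when connecting from the host.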

Multi-repository docker-compose

I have two services, on two different GitLab repositories, deployed to the same host. I am currently using supervisord to run all of the services. The CI/CD for each repository pushes the code to the host.
I am trying to replace supervisord with Docker. What I did was the following:
Set up a Dockerfile for each service.
Created a third repository with only a docker-compose.yml that runs docker-compose up in its CI to build and run the two services. I expect this repository to only be deployed once.
I am looking for a way to have the docker-compose automatically update when I deploy one of the two services.
Edit: Essentially, I am trying to figure out the best way to use docker-compose with a multi repository setup and one host.
My docker-compose:
version: "3.4"
services:
redis:
image: "redis:alpine"
api:
build: .
command: gunicorn -c gunicorn_conf.py --bind 0.0.0.0:5000 --chdir server "app:app" --timeout 120
volumes:
- .:/app
ports:
- "8000:8000"
depends_on:
- redis
celery-worker:
build: .
command: celery worker -A server.celery_config:celery
volumes:
- .:/app
depends_on:
- redis
celery-beat:
build: .
command: celery beat -A server.celery_config:celery --loglevel=INFO
volumes:
- .:/app
depends_on:
- redis
other-service:
build: .
command: python other-service.py
volumes:
- .:/other-service
depends_on:
- redis
If you're setting this up in the context of a CI system, the docker-compose.yml file should just run the images; it shouldn't also take responsibility for building them.
Do not overwrite the code in a container using volumes:.
You mention each service's repository has a Dockerfile, which is a normal setup. Your CI system should run docker build there (and typically docker push). Then your docker-compose.yml file just needs to mention the image: that the CI system builds:
version: "3.4"
services:
redis:
image: "redis:alpine"
api:
image: "me/django:${DJANGO_VERSION:-latest}"
ports:
- "8000:8000"
depends_on:
- redis
celery-worker:
image: "me/django:${DJANGO_VERSION:-latest}"
command: celery worker -A server.celery_config:celery
depends_on:
- redis
I hint at docker push above. If you're using Docker Hub, or a cloud-hosted Docker image repository, or are running a private repository, the CI system should run docker push after it builds each image, and (if it's not Docker Hub) the image: lines need to include the repository address.
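For example, if the images are pushed to a private registry, the image: lines could look like this (the registry address here is only a placeholder):
  api:
    image: "registry.example.com/me/django:${DJANGO_VERSION:-latest}"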
The other important question here is what to do on rebuilds. I'd recommend giving each build a unique Docker image tag; a timestamp or a source-control commit ID both work well. In the docker-compose.yml file I show above, I use an environment variable to specify the actual image tag, so your CI system can run
DJANGO_VERSION=20200113.1114 docker-compose up -d
Then Compose will know about the changed image tag, and will be able to recreate the containers based on the new images.
(This approach is highly relevant in the context of cluster systems like Kubernetes. Pushing images to a registry is all but required there. In Kubernetes changing the name of an image: triggers a redeployment, so it's also all but required to use a unique image tag per build. Except that there are multiple and more complex YAML files, the overall approach in Kubernetes would be very similar to what I've laid out here.)

How to make python logging persistent with docker

I am running a Python application as a Docker container, and in the application I use Python's logging module to log execution steps with logger.info, logger.debug, and logger.error. The problem is that the log file is only persistent within the Docker container: if the container goes away, the log file is lost, and every time I want to view the log file I have to manually copy it from the container to the local system. What I want is for whatever is written to the container's log file to be persistent on the local system, either by writing to a local log file or by auto-mounting the container's log file to the local system.
A few things about my host machine:
I run multiple docker containers on the machine.
My sample docker-core file is:
FROM server-base-v1
ADD . /app
WORKDIR /app
ENV PATH /app:$PATH
CMD ["python","-u","app.py"]
My sample docker-base file is:
FROM python:3
ADD ./setup /app/setup
WORKDIR /app
RUN pip install -r setup/requirements.txt
A sample of my docker-compose.yml file is:
version: "2"
networks:
  server-net:
services:
  mongo:
    container_name: mongodb
    image: mongodb
    hostname: mongodb
    networks:
      - server-net
    volumes:
      - /dockerdata/mongodb:/data/db
    ports:
      - "27017:27017"
      - "28017:28017"
  server-core-v1:
    container_name: server-core-v1
    image: server-core-v1:latest
    depends_on:
      - mongo
    networks:
      - server-net
    ports:
      - "8000:8000"
    volumes:
      - /etc/localtime:/etc/localtime:ro
The yml sample above is just part of my actual yml file. I have multiple server-core-v1 containers (with different names) running in parallel, each with its own log file.
I would also appreciate better strategies for doing logging in Python with Docker and making it persistent. I read a few articles that mentioned using sys.stderr.write() and sys.stdout.write(), but I'm not sure how to use that, especially with multiple containers running and logging.
Volumes are what you need.
You can create volumes that bind an internal container folder to a local system folder, so you can store your logs in a logs folder inside the container and map it as a volume to any folder on your local system.
You can specify a volume in the docker-compose.yml file for each service you are creating. See the docs.
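For example, assuming your application writes its logs to a directory such as /app/logs inside the container (the exact path depends on how your logger is configured), the mapping could look like this:
  server-core-v1:
    # ...existing configuration...
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./logs/server-core-v1:/app/logs   # host directory for this container's logs; /app/logs is an assumed path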
Bind-mounts are what you need.
As you can see, bind mounts are accessible from your host file system. They are very similar to shared folders in a VM architecture.
You can simply achieve that by mounting a container path directly to a path on your PC.
In your case:
version: "2"
networks:
server-net:
services:
mongo:
container_name: mongodb
image: mongodb
hostname: mongodb
networks:
- server-net
volumes:
- /dockerdata/mongodb:/data/db
ports:
- "27017:27017"
- "28017:28017"
server-core-v1:
container_name: server-core-v1
image: server-core-v1:latest
depends_on:
- mongo
networks:
- server-net
ports:
- "8000:8000"
volumes:
- ./yours/example/host/path:/etc/localtime:ro
Just replace ./yours/example/host/path with target directory on yours host.
In this scenario, I believe the logger is on the server side.
If you are working on Windows, remember to bind within the current user's directory!

One file to start all services... mongodb, redis, node, angular and python

Well, my question is: how can I create a file which starts Node, Angular, python main_worker.py, MongoDB, and Redis? I really do not know where to start.
I just want to start my web program without opening 7 consoles to start each service (the Python worker, Angular, Node, and the databases).
I know about Angular and MongoDB; I'm not familiar with the others, but maybe this helps: try the following approach, which still needs one console.
"scripts": {
"dev": "concurrently \"mongod\" \"ng serve --proxy-config proxy.conf.json --open\" \"tsc -w -p server\" \"nodemon dist/server/app.js\"",
"prod": "concurrently \"mongod\" \"ng build --aot --prod && tsc -p server && node dist/server/app.js\""
},
You can use Docker Compose to start all your services with a single command:
docker-compose up
Learn more about it here: https://docs.docker.com/compose/reference/up/
You will need to create a docker-compose.yml in your project which will look something like:
version: "3.5"
services:
mongodb:
container_name: mongo
hostname: mongo
image: mongo
restart: always
volumes:
- mongo_data:/var/lib/mongo/data
networks:
- your-app-network
ports:
- 27017:27017
environment:
- YOUR_VARIABLE:value
redis:
container_name: redis
hostname: redis
image: redis
restart: always
volumes:
- rediso_data:/var/lib/redis/data
networks:
- your-app-network
ports:
- 6380:6380
environment:
- YOUR_VARIABLE:value
volumes:
mongo_data:
redis_data:
networks:
go-app:
name: your-app-network
Note: the sample above is not a ready-to-use docker-compose file. It just shows the idea of how to do it. You will have to edit it, add variables and settings specific to your application, and add more services such as Node.js, Python, etc.
