I want to run a Python script in a Docker container from another Docker container within a docker-compose environment; here is an abstraction of the docker-compose.yml file:
app:
build: ./app
volumes:
- ./app:/app
- /var/run/docker.sock:/var/run/docker.sock
links:
- container1
- container2
- python_container
ports:
- "13000:3000"
working_dir: /app
command: npm install
entrypoint: /entrypoint.sh
container1:
image: container1:version
ports:
- "3005:3005"
volumes:
- ./volume:/volume
container2:
image: container2:version
ports:
- "3004:3004"
python_container:
image: some_image
volumes:
- ./scripts_volume:/scripts_volume
Is it possible to run a Python script in the python_container from within the app container? I have a Node application that needs to run Python scripts, for which we created a Docker 'runtime' container with all dependencies prebuilt.
I already tried mounting the Docker socket; however, when I try to run
docker-compose run python_container python scriptname.py
it says:
Can't find a suitable configuration file in this directory or any parent. Are you in the right directory?
If it is possible, what is the best approach?
Regards
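For reference, a minimal sketch of the usual route through the mounted Docker socket, assuming the Docker CLI is installed in the app image, that the python_container service is already running, and that scriptname.py lives in the mounted /scripts_volume; the container name below depends on your Compose project name and is only an example:
# from inside the app container, via the mounted /var/run/docker.sock
docker exec myproject_python_container_1 python /scripts_volume/scriptname.py
The docker-compose error quoted above just means Compose could not find a docker-compose.yml in the current directory or any parent; to use docker-compose from inside the app container instead, the compose file would also need to be available in that container and passed with -f.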
Related
I have developed a primarily Raspberry Pi app in Python that uses Redis as its local cache, so naturally I turned to Docker Compose to define all my services, i.e. Redis and my app. I am using a Docker Hub private repository to host my container. But I do not get how to use the docker buildx bake command to target the linux/armv7 platform, as the --platform flag is not part of bake.
All the examples that the Docker team has shown use the plain docker buildx command, which cannot be run against Compose files.
My docker-compose.yml file is defined as:
version: '3.0'
services:
redis:
image: redis:alpine
app:
image: dockerhub/repository
build: gateway
restart: always
Dockerfile:
# set base image (Host OS)
FROM python:3.8-slim
# set the working directory in the container
WORKDIR /run
# copy the dependencies file to the working directory
COPY requirements.txt .
# install dependencies
RUN pip install -r requirements.txt
# copy the content of the local src directory to the working directory
COPY src/ .
# command to run on container start
CMD [ "python", "-u", "run.py" ]
Any help would be much appreciated. Thanks
You can supply the platforms parameter under the x-bake key, as shown below (reference: https://docs.docker.com/engine/reference/commandline/buildx_bake/).
# docker-compose.yml
services:
addon:
image: ct-addon:bar
build:
context: .
dockerfile: ./Dockerfile
args:
CT_ECR: foo
CT_TAG: bar
x-bake:
tags:
- ct-addon:foo
- ct-addon:alp
platforms:
- linux/amd64
- linux/arm64
cache-from:
- user/app:cache
- type=local,src=path/to/cache
cache-to: type=local,dest=path/to/cache
pull: true
aws:
image: ct-fake-aws:bar
build:
dockerfile: ./aws.Dockerfile
args:
CT_ECR: foo
CT_TAG: bar
x-bake:
secret:
- id=mysecret,src=./secret
- id=mysecret2,src=./secret2
platforms: linux/arm64
output: type=docker
no-cache: true
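For the target in the question, the platforms entry should be linux/arm/v7 (the buildx platform identifier for armv7) rather than the amd64/arm64 pair shown in the documentation example. A rough sketch of driving the build with bake once the x-bake section is in place (the builder name is arbitrary):
# one-time: create a builder capable of cross-building
docker buildx create --name multiarch --use
# build every service described in the compose file and push the images
docker buildx bake --file docker-compose.yml --push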
I have two services, in two different GitLab repositories, deployed to the same host. I am currently using supervisord to run all of the services. The CI/CD for each repository pushes the code to the host.
I am trying to replace supervisord with Docker. What I did was the following:
Set up a Dockerfile for each service.
Created a third repository with only a docker-compose.yml, which runs docker-compose up in its CI to build and run the two services. I expect this repository to only be deployed once.
I am looking for a way to have the docker-compose deployment update automatically whenever I deploy one of the two services.
Edit: Essentially, I am trying to figure out the best way to use docker-compose with a multi repository setup and one host.
My docker-compose:
version: "3.4"
services:
redis:
image: "redis:alpine"
api:
build: .
command: gunicorn -c gunicorn_conf.py --bind 0.0.0.0:5000 --chdir server "app:app" --timeout 120
volumes:
- .:/app
ports:
- "8000:8000"
depends_on:
- redis
celery-worker:
build: .
command: celery worker -A server.celery_config:celery
volumes:
- .:/app
depends_on:
- redis
celery-beat:
build: .
command: celery beat -A server.celery_config:celery --loglevel=INFO
volumes:
- .:/app
depends_on:
- redis
other-service:
build: .
command: python other-service.py
volumes:
- .:/other-service
depends_on:
- redis
If you're setting this up in the context of a CI system, the docker-compose.yml file should just run the images; it shouldn't also take responsibility for building them.
Do not overwrite the code in a container using volumes:.
You mention each service's repository has a Dockerfile, which is a normal setup. Your CI system should run docker build there (and typically docker push). Then your docker-compose.yml file just needs to mention the image: that the CI system builds:
version: "3.4"
services:
redis:
image: "redis:alpine"
api:
image: "me/django:${DJANGO_VERSION:-latest}"
ports:
- "8000:8000"
depends_on:
- redis
celery-worker:
image: "me/django:${DJANGO_VERSION:-latest}"
command: celery worker -A server.celery_config:celery
depends_on:
- redis
I hint at docker push above. If you're using Docker Hub, or a cloud-hosted Docker image repository, or are running a private repository, the CI system should run docker push after it builds each image, and (if it's not Docker Hub) the image: lines need to include the repository address.
The other important question here is what to do on rebuilds. I'd recommend giving each build a unique Docker image tag; a timestamp or a source control commit ID both work well. In the docker-compose.yml file I show above, I use an environment variable to specify the actual image tag, so your CI system can run
DJANGO_VERSION=20200113.1114 docker-compose up -d
Then Compose will know about the changed image tag, and will be able to recreate the containers based on the new images.
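As a rough sketch of what each service repository's CI job might run, reusing the me/django image name from the compose file above (prefix the tag with your registry address if you are not pushing to Docker Hub):
# in the service repository's CI job
TAG=$(git rev-parse --short HEAD)   # or your CI system's built-in commit SHA variable
docker build -t me/django:$TAG .
docker push me/django:$TAG
# on the deployment host, or from the compose repository's deploy job
DJANGO_VERSION=$TAG docker-compose pull
DJANGO_VERSION=$TAG docker-compose up -d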
(This approach is highly relevant in the context of cluster systems like Kubernetes. Pushing images to a registry is all but required there. In Kubernetes changing the name of an image: triggers a redeployment, so it's also all but required to use a unique image tag per build. Except that there are multiple and more complex YAML files, the overall approach in Kubernetes would be very similar to what I've laid out here.)
My docker-compose file is as follows:
version: '3'
services:
db:
image: mongo:4.2
container_name: mongo-db
restart: always
environment:
MONGO_INITDB_DATABASE: VMcluster
ports:
- "16006:27017"
volumes:
- ./initdb.js:/docker-entrypoint-initdb.d/initdb.js
web:
build:
context: .
dockerfile: Dockerfile_Web
command: python manage.py runserver 0.0.0.0:8000
container_name: cluster-monitor-web
volumes:
- .:/vmCluster_service
ports:
- "9900:8000"
depends_on:
- db
cronjobs:
build:
context: .
dockerfile: Dockerfile_Cron
command: ["cron", "-f"]
container_name: cluster-monitor-cron
I want to implement a feature where the user should be able to update the crontab from the Django web app. I'm done with the Django part, i.e. the Python code for the backend, but the Django web container is not able to access the crontab container.
How can I make the Django web container access the crontab container and update the crontab?
I'm using the python-crontab module in Django, which throws an error:
"[Errno 2] No such file or directory: '/usr/bin/crontab':
'/usr/bin/crontab'"
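One commonly suggested workaround, sketched here under the assumption that /var/run/docker.sock is mounted into the web container and the Docker CLI is installed in its image (neither is in the compose file above), is to update the crontab inside the cron container over the socket instead of calling /usr/bin/crontab locally, since that binary only exists in the cron image:
# replace the crontab inside the cron container; the schedule and script path are examples
echo "*/5 * * * * python /app/jobs.py" | docker exec -i cluster-monitor-cron crontab -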
I'm running a Django application in Docker with NGINX and Gunicorn on Ubuntu Server. It is a fairly large 'legacy' project, so I am not the one who configured it. Why do I have to run docker-compose down before running docker-compose up -d to reflect the changes I have made to Django's static files? The Dockerfile runs Django's collectstatic management command to collect the static files from across the project's directories.
In both cases I ran docker-compose build before putting up or taking down.
Stopping the NGINX container first and then bringing the containers back up seemed to allow NGINX to find the static files, but why was NGINX unable to serve files added to another container?
The docker compose file being run:
version: '2'
services:
nginx:
image: nginx:latest
container_name: smi-nginx
ports:
- "8080:8080"
volumes:
- ./src:/src
- ./config/nginx:/etc/nginx/conf.d
- /static:/static
depends_on:
- web
web:
build: .
container_name: smi-App
volumes:
- /static:/static
command: bash -c "gunicorn -w 3 -t 14400 --max-requests 75 -b 0.0.0.0:5000 project.wsgi:application"
environment:
- APPLEVEL=DEVELOPMENT
- MachineID=99
- DbUser=djangouser
- Password=secret
expose:
- "5000"
Thanks!
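Not a diagnosis of this particular setup, but a layout that usually avoids the down/up dance is to share the collected static files through a named volume and run collectstatic when the web container starts, so the shared volume is refreshed on every up. A sketch, assuming Django's STATIC_ROOT is set to /static:
version: '2'
services:
  nginx:
    image: nginx:latest
    volumes:
      - static-data:/static:ro
  web:
    build: .
    # collect static files into the shared volume at startup, then start gunicorn
    command: bash -c "python manage.py collectstatic --noinput && gunicorn -b 0.0.0.0:5000 project.wsgi:application"
    volumes:
      - static-data:/static
volumes:
  static-data: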
I am running a Python application as a Docker container, and in my Python application I use Python's logging module to log execution steps with logger.info, logger.debug and logger.error. The problem with this is that the log file is only persistent within the Docker container: if the container goes away, the log file is lost, and every time I want to view the log file I have to manually copy it from the container to the local system. What I want is for whatever is written to the container's log file to be persisted on the local system, either by writing to a log file on the local system or by auto-mounting the container's log file onto the local system.
A few things about my host machine:
I run multiple docker containers on the machine.
My sample docker-core file is:
FROM server-base-v1
ADD . /app
WORKDIR /app
ENV PATH /app:$PATH
CMD ["python","-u","app.py"]
My sample docker-base file is:
FROM python:3
ADD ./setup /app/setup
WORKDIR /app
RUN pip install -r setup/requirements.txt
A sample of my docker-compose.yml file is:
version: "2"
networks:
server-net:
services:
mongo:
container_name: mongodb
image: mongodb
hostname: mongodb
networks:
- server-net
volumes:
- /dockerdata/mongodb:/data/db
ports:
- "27017:27017"
- "28017:28017"
server-core-v1:
container_name: server-core-v1
image: server-core-v1:latest
depends_on:
- mongo
networks:
- server-net
ports:
- "8000:8000"
volumes:
- /etc/localtime:/etc/localtime:ro
The YAML above is just a part of my actual file. I have multiple server-core-v1 containers (with different names) running in parallel, each with its own log file.
I would also appreciate suggestions for better strategies for doing logging in Python with Docker and making it persistent. I read a few articles that mentioned using sys.stderr.write() and sys.stdout.write(), but I am not sure how to use that, especially with multiple containers running and logging.
Volumes are what you need.
You can create volumes to bind an internal container folder to a local system folder, so you can store your logs in a logs folder and map it as a volume to any folder on your local system.
You can specify a volume in the docker-compose.yml file for each service you are creating. See the docs.
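For example, a minimal sketch for the server-core-v1 service, assuming the Python logger writes its files to /app/logs inside the container (that path is an assumption, not something shown in the question):
server-core-v1:
  image: server-core-v1:latest
  volumes:
    - /etc/localtime:/etc/localtime:ro
    # host directory on the left, container log directory (assumed) on the right
    - ./logs/server-core-v1:/app/logs
Each of the parallel server-core-v1 containers can map its own host directory this way, so their log files do not collide.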
Bind mounts are what you need.
Bind mounts are accessible from your host file system; this is very similar to shared folders in a VM architecture.
You can achieve that simply by mounting the container's log directory directly to a path on your PC.
In your case:
version: "2"
networks:
server-net:
services:
mongo:
container_name: mongodb
image: mongodb
hostname: mongodb
networks:
- server-net
volumes:
- /dockerdata/mongodb:/data/db
ports:
- "27017:27017"
- "28017:28017"
server-core-v1:
container_name: server-core-v1
image: server-core-v1:latest
depends_on:
- mongo
networks:
- server-net
ports:
- "8000:8000"
volumes:
- /etc/localtime:/etc/localtime:ro
- ./yours/example/host/path:/app/logs
Just replace ./yours/example/host/path with the target directory on your host, and /app/logs with whatever directory inside the container your application writes its log files to (that path is just an example).
In this scenario, I believe the logger is on the server side.
If you are working on Windows, remember to bind to a path under the current user's directory!
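For the sys.stdout / sys.stderr strategy mentioned in the question, a common alternative (not part of either answer above) is to point Python's logging at stdout and let Docker collect it per container; a minimal sketch:
import logging
import sys

# send every record to stdout so Docker's logging driver captures it per container
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)

logging.getLogger("server-core-v1").info("application started")
Each container's output can then be read with docker-compose logs server-core-v1 (or docker logs with the container name), and with the default json-file logging driver it is also kept on the host under /var/lib/docker/containers/.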