I wrote a ChatOps bot for the open source collaboration tool Mattermost using this framework. Now I want to write some integration tests and run them. When I follow the steps to run the integration tests from their project, they don't succeed. I used the command pytest --capture=no --log-cli-level=DEBUG . to run the integration tests.
It fails because localhost:8065 is not available yet after running the command docker-compose up -d. Does anybody know what I'm doing wrong?
Are you on Linux, Mac, or Windows? I think network_mode: host only works on Linux.
Try editing the docker-compose.yml file: remove the network mode "host" and add a port mapping instead, something like this:
version: "3.7"
services:
  app:
    container_name: "mattermost-bot-test"
    build: .
    command: ./mm/docker-entry.sh
    ports:
      - "8065:8065"
    extra_hosts:
      - "dockerhost:127.0.0.1"
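If the container stack simply hasn't finished starting when pytest runs, it can also help to wait for the port before the tests hit the API. A minimal sketch (the host, port, and timeout values are assumptions about this setup, not something taken from the bot framework):
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0) -> None:
    """Block until host:port accepts TCP connections, or raise after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=2):
                return
        except OSError:
            if time.monotonic() > deadline:
                raise TimeoutError(f"{host}:{port} did not become available in time")
            time.sleep(1)

# e.g. call this before the integration tests talk to Mattermost
wait_for_port("localhost", 8065)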
I am developing a FastAPI app. It is running on Uvicorn in a Docker container using docker-compose.
I want to include some files other than *.py to trigger the auto reload while in development.
According to the docs, Uvicorn needs the optional dependency WatchFiles installed to be able to use the --reload-include flag, which would enable me to include other file types to trigger a reload. However, when WatchFiles is installed (with Uvicorn confirming this by printing at startup: Started reloader process [1] using WatchFiles), no auto reloads happen at all. Mind you, this is independent of changes to the run command, with or without the include flag.
Without WatchFiles installed, Uvicorn's default auto reload works as intended for just *.py files.
What I've got
This is the Dockerfile:
FROM python:3.10
WORKDIR /tmp
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install --no-cache-dir --upgrade -r requirements.txt
WORKDIR /code
CMD ["uvicorn", "package.main:app", "--host", "0.0.0.0", "--port", "80", "--reload"]
This is the docker-compose.yml:
version: "3.9"
services:
  fastapi-dev:
    image: myimagename:${TAG:-latest}
    build:
      context: .
    volumes:
      - ./src:/code
      - ./static:/static
      - ./templates:/templates
    restart: on-failure
    ports:
      - "${HTTP_PORT:-8080}:80"
(I need a docker-compose file because of some services required later on.)
The most basic FastAPI app:
from fastapi import FastAPI, HTTPException

app = FastAPI()

@app.get('/')
async def index():
    raise HTTPException(418)
Mind you, this is probably of no concern as the problem does not seem to be related to FastAPI.
requirements.txt:
fastapi~=0.85
pydantic[email]~=1.10.2
validators~=0.20.0
uvicorn~=0.18
watchfiles
python-decouple==3.6
python-multipart
pyotp~=2.7
wheezy.template~=3.1
How did I try to resolve this issue?
I tried using command: uvicorn package.main:app --host 0.0.0.0 --port 80 --reload in docker-compose.yml instead of CMD [...] in the Dockerfile, which unsurprisingly changed nothing.
I created a file watch.py to test if WatchFiles works:
from watchfiles import watch

for changes in watch('/code', force_polling=True):
    print(changes)
And…in fact it does work: running it in the container from the Docker CLI prints all the changes made (python -m watch). It also works just as well asynchronously using asyncio. So the problem probably has nothing to do with the file system/share/mount within Docker.
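For completeness, the asynchronous variant mentioned above looks roughly like this (a sketch using watchfiles.awatch, mirroring the watch.py test script):
import asyncio
from watchfiles import awatch

async def main():
    # same check as watch.py, but via the async API
    async for changes in awatch('/code', force_polling=True):
        print(changes)

asyncio.run(main())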
So…
How do I fix this? What is wrong with Uvicorn? I need to watch for other file types, e.g. *.html in /templates. Do I have to get WatchFiles to work, or are there other ways? If I do, how?
I just had the same problem, and the problem is with WatchFiles.
The watchfiles documentation explains that change detection relies on file system notifications, and I believe those events are not raised inside Docker when a volume is used.
Notify will fall back to file polling if it can't use file system notifications
So you have to tell watchfiles to force polling; that's what you did in your test Python script with the force_polling parameter, and that's why it works:
for changes in watch('/code', force_polling=True):
Fortunately, the documentation also gives us the possibility to force polling via an environment variable.
Add this environment variable to your docker-compose.yml and auto-reload will work:
services:
  fastapi-dev:
    image: myimagename:${TAG:-latest}
    build:
      context: .
    volumes:
      - ./src:/code
      - ./static:/static
      - ./templates:/templates
    restart: on-failure
    ports:
      - "${HTTP_PORT:-8080}:80"
    environment:
      - WATCHFILES_FORCE_POLLING=true
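With polling forced, the *.html files in /templates still have to be added to the watch list, e.g. via --reload-include and --reload-dir on the uvicorn command. A rough equivalent using the programmatic API, assuming a Uvicorn version whose run() accepts the reload_includes/reload_dirs keywords (the module path matches the Dockerfile above):
# run.py - development entry point, roughly equivalent to the CMD in the Dockerfile
import uvicorn

if __name__ == "__main__":
    uvicorn.run(
        "package.main:app",
        host="0.0.0.0",
        port=80,
        reload=True,
        reload_dirs=["/code", "/templates"],
        reload_includes=["*.html"],  # watched in addition to the default *.py
    )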
I have searched but couldn't find a solution to my problem. My docker-compose.yml file is shown below.
version: '2.1'
services:
  mongo:
    image: mongo_db
    build: mongo_image
    container_name: my_mongodb
    restart: always
    networks:
      - isolated_network
    ports:
      - "27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root_pw
    entrypoint: ["python3", "/tmp/script/get_api_to_mongodb.py", "&"]
networks:
  isolated_network:
So here I use a custom Dockerfile, which looks like this:
FROM mongo:latest
RUN apt-get update -y
RUN apt-get install python3-pip -y
RUN pip3 install requests
RUN pip3 install pymongo
RUN apt-get clean -y
RUN mkdir -p /tmp/script
COPY get_api_to_mongodb.py /tmp/script/get_api_to_mongodb.py
#CMD ["python3","/tmp/script/get_api_to_mongodb.py","&"]
Here I want to create a container that has MongoDB, and after the container is created I collect data using an API and send it to MongoDB. But when I run the Python script, MongoDB is not yet initialized. So I need to run my script after the container is created and right after MongoDB has initialized. Thanks in advance.
You should run this script as a separate container. It's not "part of the database", like an extension or plugin, but rather an ordinary client process that happens to connect to the database and that you want to run relatively early on. In general, if you're thinking about trying to launch a background process in a container, it's often a better approach to run them as two foreground processes in two separate containers.
This setup means you can use a simpler Dockerfile that starts from an image with Python preinstalled:
FROM python:3.10
RUN pip install requests pymongo
WORKDIR /app
COPY get_api_to_mongodb.py .
# note: for this exec-form CMD the script needs a shebang line and the
# executable bit set; otherwise use CMD ["python", "./get_api_to_mongodb.py"]
CMD ["./get_api_to_mongodb.py"]
Then in your Compose setup, declare this as a second container alongside the first one. Since the script is in its own image, you can use the unmodified mongo image.
version: '2.4'
services:
  mongo:
    image: mongo:latest
    restart: always
    ports:
      - "27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root_pw
  loader:
    build: .
    restart: on-failure
    depends_on:
      - mongo
    # environment:
    #   - MONGO_HOST=mongo
    #   - MONGO_USERNAME=root
    #   - MONGO_PASSWORD=root_pw
Note that the loader will re-run every time you run docker-compose up -d. You also may have to wait for the database to do its initialization before you can run the loader process; see Docker Compose wait for container X before starting Y.
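One way to handle that wait, instead of (or in addition to) the Compose-level tricks, is to make the loader script itself retry until MongoDB answers. A minimal sketch, assuming get_api_to_mongodb.py uses pymongo and reads the commented-out MONGO_* environment variables above (those names are only suggestions):
import os
import time

from pymongo import MongoClient
from pymongo.errors import PyMongoError

def connect_with_retry(timeout: float = 60.0) -> MongoClient:
    """Keep trying to reach MongoDB until a ping succeeds or the timeout expires."""
    uri = "mongodb://{user}:{pw}@{host}:27017/".format(
        user=os.environ.get("MONGO_USERNAME", "root"),
        pw=os.environ.get("MONGO_PASSWORD", "root_pw"),
        host=os.environ.get("MONGO_HOST", "mongo"),
    )
    deadline = time.monotonic() + timeout
    while True:
        try:
            client = MongoClient(uri, serverSelectionTimeoutMS=2000)
            client.admin.command("ping")  # raises until the server is ready
            return client
        except PyMongoError:
            if time.monotonic() > deadline:
                raise
            time.sleep(2)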
It's likely you have an existing Compose service for your real application:
version: '2.4'
services:
  mongo: { ... }
  app:
    build: .
    ...
If that image contains the loader script, then you can docker-compose run it. This launches a new temporary container, using most of the attributes from the Compose service declaration, but you provide an alternate command: and the ports: are ignored.
docker-compose run app ./get_api_to_mongodb.py
One might ideally like a workflow where first the database container starts; then once it's accepting requests, run the loader script as a temporary container; then once that's completed start the main application server. This is mostly beyond Compose's capabilities, though you can probably get close with a combination of extended depends_on: declarations and a healthcheck: for the database.
I have a Python program that uses the Docker Python SDK to run a container for a third-party Docker image. Something like this in my code:
import docker
docker.from_env().containers.run(image="<other_image>")
My program is also packaged in Docker, and I would like to test it with GitLab CI. So my .gitlab-ci.yml looks like this:
image: <my_image>:latest

stages:
  - Tests

pytest:
  stage: Tests
  script:
    - pytest
And the python code from above is covered by the tests. The problem is that the CI fails on docker.from_env() with docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory')).
I assume that is because there's no Docker daemon accessible from within the <my_image> container. I searched for solutions and found the docker:dind one and the "expose the Docker socket" one (i.e. mounting /var/run/docker.sock:/var/run/docker.sock), but I can't make either of them work.
All the examples I found for docker:dind call docker directly in the .gitlab-ci.yml; I have found none that explains how to leverage it when using the Docker SDK in your own code. Also, the socket solution requires changing the GitLab runner configuration, but I don't have access to it (I am using a GitLab free account and can't host my own runner).
What I need is basically:
image:
  name: <my_image>:latest
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
But AFAIK it doesn't exist.
Is there any solution?
Update: I realize that this would actually work:
image: docker

services:
  - docker:dind

stages:
  - Tests

pytest:
  stage: Tests
  script:
    - docker run -v /var/run/docker.sock:/var/run/docker.sock <my_image>:latest pytest
But I would still be interested to know whether it's possible while still using my image as the base. For example, maybe I don't want pytest installed in my image and would instead install it via a before_script instruction in .gitlab-ci.yml. (Then again, I could also do that in a script executed instead of running pytest directly, so I agree this might not be a very good reason.)
You should be able to do this:
image: <your python/docker-SDK image>

pytest:
  services:
    - docker:dind
  variables:
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: tcp://docker:2375
  script:
    - pytest
Just like regular docker, the SDK will use these environment variables to configure the client correctly to use the docker:dind service.
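No change is needed in the Python code itself: docker.from_env() picks up DOCKER_HOST (and the TLS-related variables) from the environment, so inside the job it will talk to the docker:dind service. A quick sketch of what the test code ends up doing under those CI variables (hello-world is just a placeholder image):
import docker

# With DOCKER_HOST=tcp://docker:2375 set by the CI job, from_env() builds a
# client that talks to the docker:dind service instead of a local socket.
client = docker.from_env()
print(client.version())  # sanity check against the dind daemon

# the same kind of call the application under test makes
client.containers.run(image="hello-world")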
I have previously worked with Docker, using services to run a website made with Django.
Now I would like to know how I should create a Docker setup to just run Python scripts, without a web server or any website-related services.
An example of the kind of docker-compose.yml I am used to working with is:
version: '2'
services:
  nginx:
    image: nginx:latest
    container_name: nz01
    ports:
      - "8001:8000"
    volumes:
      - ./src:/src
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - web
  web:
    build: .
    container_name: dz01
    depends_on:
      - db
    volumes:
      - ./src:/src
    expose:
      - "8000"
  db:
    image: postgres:latest
    container_name: pz01
    ports:
      - "5433:5432"
    volumes:
      - postgres_database:/var/lib/postgresql/data:Z
volumes:
  postgres_database:
    external: true
What should the docker-compose.yml file look like?
Simply remove everything from your Dockerfile that has nothing to do with your script and start with something simple, like
FROM python:3
ADD my_script.py /
CMD [ "python", "./my_script.py" ]
You do not need Docker compose for containerizing a single python script.
The example is taken from this simple tutorial about containerizing Python applications: https://runnable.com/docker/python/dockerize-your-python-application
You can easily override the command specified in the Dockerfile (via CMD) when starting a container from the image. Just append the desired command to your docker run command, e.g.:
docker run IMAGE /path/to/script.py
You can easily run Python interactively without even having to build a container:
docker run -it python
If you want to have access to some code you have written within the container, simply change that to:
docker run -it -v /path/to/code:/app python
Making a Dockerfile is unnecessary for this simple application.
Most Linux distributions come with Python preinstalled. Using Docker here adds significant complexity and I'd pretty strongly advise against Docker just to run a simple script. You can use a virtual environment to isolate a particular Python package's dependencies from the rest of the system.
(There is a pretty consistent stream of SO questions around getting filesystem permissions and user IDs right for scripts that principally want to interact with the host system. Also remember that running docker anything implies root-equivalent permissions. If you don't want Docker's filesystem and user namespace isolation, IMHO it's easier to just not use Docker where it doesn't make sense.)
I'm trying to learn how to use Docker and am having some trouble. I'm using a docker-compose.yaml file to run a Python script that connects to a MySQL container, and I'm trying to use ddtrace to send traces to Datadog. I'm using the following image from this GitHub page from Datadog:
ddagent:
  image: datadog/docker-dd-agent
  environment:
    - DD_BIND_HOST=0.0.0.0
    - DD_API_KEY=invalid_key_but_this_is_fine
  ports:
    - "127.0.0.1:8126:8126"
And my docker-compose.yaml looks like this:
version: "3"
services:
  ddtrace-test:
    build: .
    volumes:
      - ".:/app"
    links:
      - ddagent
  ddagent:
    image: datadog/docker-dd-agent
    environment:
      - DD_BIND_HOST=0.0.0.0
      - DD_API_KEY=<my key>
    ports:
      - "127.0.0.1:8126:8126"
So then I run the command docker-compose run --rm ddtrace-test python test.py, where test.py looks like this:
from ddtrace import tracer

@tracer.wrap('test', 'test')
def foo():
    print('running foo')

foo()
And when I run the command, I get:
Starting service---reprocess_ddagent_1 ... done
foo
cannot send spans to localhost:8126: [Errno 99] Cannot assign requested address
I'm not sure what this error means. When I use my key and run it locally instead of inside a Docker image, it works fine. What could be going wrong here?
Containers are about isolation, so inside a container "localhost" means that container itself; ddtrace-test therefore cannot find ddagent on its own localhost. You have two ways to fix that:
Put network_mode: host on ddtrace-test so that it binds to the host's network interface, skipping network isolation.
Change ddtrace-test to use the "ddagent" host instead of localhost, since in docker-compose services can reach each other by their service names.
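For the second option, the Python side needs to be told where the agent is. A minimal sketch, assuming a ddtrace version of that era where tracer.configure() accepts hostname/port arguments (newer releases can use the DD_AGENT_HOST environment variable instead):
from ddtrace import tracer

# point the tracer at the docker-compose service name instead of localhost:8126
tracer.configure(hostname='ddagent', port=8126)

@tracer.wrap('test', 'test')
def foo():
    print('running foo')

foo()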