I have built a Docker image of my app (Streamlit), and from inside that app I want to run another image, which serves as a search engine for my app.
Before dockerizing the app, I was doing this via subprocess:
filepath = '"C:/Users/k.queenan/Documents/wsearch/docker/search-engine:/home" '
p = subprocess.Popen('docker run -v' + filepath + 'search-image', stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
which worked fine. Now I am getting an error saying that the filepath is not valid. How can I get around this inside the dockerized version?
There is a method called DinD (Docker in Docker), but it is meant for developing Docker itself.
From a security perspective it is not a good idea, because the parent container needs privileged rights. (You can also control the Docker daemon itself from a container by mounting the Docker unix socket /var/run/docker.sock, but that requires privileged rights too, so it depends on your use case; it is not recommended.)
Use docker-compose instead.
A sample multi-container YAML file (this approach matches your use case):
version: "3.7"
services:
app:
image: node:12-alpine
command: sh -c "yarn install && yarn run dev"
ports:
- 3000:3000
working_dir: /app
volumes:
- ./:/app
environment:
MYSQL_HOST: mysql
MYSQL_USER: root
MYSQL_PASSWORD: secret
MYSQL_DB: todos
mysql:
image: mysql:5.7
volumes:
- todo-mysql-data:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: todos
volumes:
todo-mysql-data:
Update
If you want to control the Docker host from a container with Python, you can do the following:
Map the Docker socket into your container with (on Windows):
docker run -v "//var/run/docker.sock://var/run/docker.sock" your_python_image
and use docker-py (the docker Python SDK) to control your Docker host from inside a container (instead of subprocess):
>>> import docker
>>> c = docker.from_env()
>>> stdout = c.containers.run(image="search-image:latest",command="your_command", remove=True)
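If the sibling container also needs the bind mount from your original subprocess call, the SDK accepts a volumes mapping. Here is a minimal sketch reusing the host path and image name from your question (adjust them to your setup):

import docker

client = docker.from_env()
# The host path below comes from the question; it refers to the filesystem of
# the machine whose Docker daemon you are talking to, not the container's.
output = client.containers.run(
    image="search-image",
    command="your_command",
    volumes={
        "C:/Users/k.queenan/Documents/wsearch/docker/search-engine": {
            "bind": "/home",
            "mode": "rw",
        }
    },
    remove=True,
)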
Related
I have searched but couldn't find a solution for my problem. My docker-compose.yml file is as below.
version: '2.1'
services:
  mongo:
    image: mongo_db
    build: mongo_image
    container_name: my_mongodb
    restart: always
    networks:
      - isolated_network
    ports:
      - "27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root_pw
    entrypoint: ["python3", "/tmp/script/get_api_to_mongodb.py", "&"]
networks:
  isolated_network:
So here I use a custom Dockerfile, which is shown below.
FROM mongo:latest
RUN apt-get update -y
RUN apt-get install python3-pip -y
RUN pip3 install requests
RUN pip3 install pymongo
RUN apt-get clean -y
RUN mkdir -p /tmp/script
COPY get_api_to_mongodb.py /tmp/script/get_api_to_mongodb.py
#CMD ["python3","/tmp/script/get_api_to_mongodb.py","&"]
Here I want to create a container which has MongoDB, and after the container is created I collect data using an API and send the data to MongoDB. But when I run the Python script, MongoDB is not yet initialized. So I need to run my script after the container is created and right after MongoDB has initialized. Thanks in advance.
You should run this script as a separate container. It's not "part of the database", like an extension or plugin, but rather an ordinary client process that happens to connect to the database and that you want to run relatively early on. In general, if you're thinking about trying to launch a background process in a container, it's often a better approach to run foreground processes in two separate containers.
This setup means you can use a simpler Dockerfile that starts from an image with Python preinstalled:
FROM python:3.10
RUN pip install requests pymongo
WORKDIR /app
COPY get_api_to_mongodb.py .
CMD ["./get_api_to_mongodb.py"]
Then in your Compose setup, declare this as a second container alongside the first one. Since the script is in its own image, you can use the unmodified mongo image.
version: '2.4'
services:
  mongo:
    image: mongo:latest
    restart: always
    ports:
      - "27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root_pw
  loader:
    build: .
    restart: on-failure
    depends_on:
      - mongo
    # environment:
    #   - MONGO_HOST=mongo
    #   - MONGO_USERNAME=root
    #   - MONGO_PASSWORD=root_pw
Note that the loader will re-run every time you run docker-compose up -d. You also may have to wait for the database to do its initialization before you can run the loader process; see Docker Compose wait for container X before starting Y.
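For the waiting part, a minimal sketch of a retry loop you could put at the top of the loader script, assuming it uses pymongo and the root credentials from the Compose file above (the environment variable names are just the commented-out ones from the example):

import os
import time

import pymongo

# Retry until MongoDB accepts connections; the service name "mongo" and the
# root credentials match the Compose file above.
client = pymongo.MongoClient(
    host=os.getenv("MONGO_HOST", "mongo"),
    username=os.getenv("MONGO_USERNAME", "root"),
    password=os.getenv("MONGO_PASSWORD", "root_pw"),
    serverSelectionTimeoutMS=2000,
)
for _ in range(30):
    try:
        client.admin.command("ping")  # cheap liveness check
        break
    except pymongo.errors.ConnectionFailure:
        time.sleep(2)
else:
    raise SystemExit("MongoDB did not become ready in time")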
It's likely you have an existing Compose service for your real application:
version: '2.4'
services:
  mongo: { ... }
  app:
    build: .
    ...
If that image contains the loader script, then you can docker-compose run it. This launches a new temporary container, using most of the attributes from the Compose service declaration, but you provide an alternate command: and the ports: are ignored.
docker-compose run app ./get_api_to_mongodb.py
One might ideally like a workflow where first the database container starts; then once it's accepting requests, run the loader script as a temporary container; then once that's completed start the main application server. This is mostly beyond Compose's capabilities, though you can probably get close with a combination of extended depends_on: declarations and a healthcheck: for the database.
I am new to the Docker world and I have some issues regarding how to connect two Docker services together.
I am using https://memgraph.com/ as my database, and when I run it locally I run it like this:
docker run -it -p 7687:7687 -p 3000:3000 memgraph/memgraph-platform
I wrote my program, which connects to the database using mgclient, and when I run it locally everything works fine.
Now I am trying to put it inside a Docker container and run it using docker-compose.yaml.
My docker-compose.yaml is:
version: "3.5"
services:
memgraph:
image: memgraph/memgraph-platform:2.1.0
container_name: memgraph_container
restart: unless-stopped
ports:
- "7687:7687"
- "3000:3000"
my_app:
image: memgraph_docker
container_name: something
restart: unless-stopped
command: python main.py
and when I am trying to run it with this command:
docker-compose up
I am getting an error regarding the connection to the server. Could anyone tell me what I am missing regarding the docker-compose.yaml?
How does your my_app connect to the database?
Are you using a connection string of the form localhost:7687 (or perhaps localhost:3000)? This would work locally because you are publishing (--publish=7687:7687 --publish=3000:3000) the container's ports 7687 and 3000 to the host ports (using the same port numbers).
NOTE You can remap ports when you docker run. For example, you could --publish=9999:7687 and then you would need to use port 9999 on your localhost to access the container's port 7687.
When you combine the 2 containers using Docker Compose, each container is given a name that matches the service name. In this case, your Memgraph database is called memgraph (matching the service name).
Using Docker Compose, localhost takes on a different meaning. From my_app, localhost is my_app. So, using localhost under Docker Compose, my_app would try connecting to itself, not to the database.
Under Docker Compose, my_app (the name of your app) needs to refer to Memgraph by its service name (memgraph). The ports are unchanged: still 7687 or 3000 (whichever is correct).
NOTE The ports statement in your Docker Compose config is possibly redundant unless you want to be able to access the database from your (local)host (which you may want for debugging). From a best-practice standpoint, once my_app is able to access the database correctly, you don't need to expose the database's ports to the host.
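To make that concrete, a minimal sketch of the connection from my_app, assuming you use mgclient as described in the question:

import mgclient

# "memgraph" is the Compose service name; 7687 is the Bolt port from the compose file above.
conn = mgclient.connect(host="memgraph", port=7687)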
Update
It is good practice to externalize configuration from your app, so that you can configure it dynamically. An easy way to do this is to use environment variables.
For example:
main.py:
import os
import mgclient  # the client library used by the app, per the question

conn = mgclient.connect(
    host=os.getenv("HOST"),
    port=int(os.getenv("PORT")),  # getenv returns a string; the port must be an int
)
Then, when you run under e.g. Docker, you need to set these values:
docker run ... --env=HOST="localhost" --env=PORT="7687" ...
And under Docker Compose, you can:
version: "3.5"
services:
memgraph:
image: memgraph/memgraph-platform:2.1.0
container_name: memgraph_container
restart: unless-stopped
my_app:
image: memgraph_docker
container_name: something
restart: unless-stopped
command: python main.py
environment:
HOST: memgraph
PORT: 7687
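If you also want main.py to keep working when you run it outside Compose, one option (my suggestion, not part of the original setup) is to give the variables local defaults:

import os

# Fall back to localhost:7687 when HOST/PORT are not set, e.g. when running
# main.py directly on your machine against a locally published container.
host = os.getenv("HOST", "localhost")
port = int(os.getenv("PORT", "7687"))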
I am trying to access a Jupyter Notebook created with the shell_plus command from django-extensions in a Docker container.
docker-compose -f local.yml run --rm django python manage.py shell_plus --notebook
My configuration is based on the answers of @RobM and @Mark Chackerian to this Stack Overflow question. I.e. I installed and configured a custom kernel, and my Django app's config file has the constant NOTEBOOK_ARGUMENTS set to:
NOTEBOOK_ARGUMENTS = [
'--ip', '0.0.0.0',
'--port', '8888',
'--allow-root',
'--no-browser',
]
I can see the container starting successfully in the logs:
[I 12:58:54.877 NotebookApp] The Jupyter Notebook is running at:
[I 12:58:54.877 NotebookApp] http://10d56bab37fc:8888/?token=b2678617ff4dcac7245d236b6302e57ba83a71cb6ea558c6
[I 12:58:54.877 NotebookApp] or http://127.0.0.1:8888/?token=b2678617ff4dcac7245d236b6302e57ba83a71cb6ea558c6
But I can't open the URL. I have forwarded port 8888 in my docker-compose, tried to use localhost instead of 127.0.0.1, and also tried the container's IP, all without success.
It feels like I am missing the obvious here … Any help is appreciated.
For the record, as of 2020 I managed to get a working Django setup with PostgreSQL in docker-compose:
development.py (settings.py)
INSTALLED_APPS += [
    "django_extensions",
]
SHELL_PLUS = "ipython"
SHELL_PLUS_PRINT_SQL = True
NOTEBOOK_ARGUMENTS = [
    "--ip",
    "0.0.0.0",
    "--port",
    "8888",
    "--allow-root",
    "--no-browser",
]
IPYTHON_ARGUMENTS = [
    "--ext",
    "django_extensions.management.notebook_extension",
    "--debug",
]
IPYTHON_KERNEL_DISPLAY_NAME = "Django Shell-Plus"
SHELL_PLUS_POST_IMPORTS = [  # extra things to import in notebook
    ("module1.submodule", ("func1", "func2", "class1", "etc")),
    ("module2.submodule", ("func1", "func2", "class1", "etc")),
]
os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"] = "true"  # only use in development
requirements.txt
django-extensions
jupyter
notebook
Werkzeug # needed for runserver_plus
...
docker-compose.yml
version: "3"
services:
db:
image: postgres:13
environment:
- POSTGRES_HOST_AUTH_METHOD=trust
restart: always
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data/
web:
build: .
environment:
- DJANGO_SETTINGS_MODULE=settings.development
command:
- scripts/startup.sh
volumes:
- ...
ports:
- "8000:8000" # webserver
- "8888:8888" # ipython notebook
depends_on:
- db
volumes:
postgres_data:
From your host terminal run this command:
docker-compose exec web python manage.py shell_plus --notebook
Finally, navigate to http://localhost:8888/?token=<xxxx> in the web browser on your host.
Got it to work, but why it does so is beyond me. Exposing the ports in the docker-compose run command did the trick.
docker-compose -f local.yml run --rm -p 8888:8888 django python manage.py shell_plus --notebook
I was under the impression that exposing ports in my local.yml would also open them in containers started by run.
The compose run command will by default not expose the defined service ports. From the documentation at https://docs.docker.com/compose/reference/run/
The [...] difference is that the docker-compose run command does not create any of the ports specified in the service configuration. This prevents port collisions with already-open ports. If you do want the service's ports to be created and mapped to the host, specify the --service-ports flag:
docker-compose run --service-ports web python manage.py shell
You will therefore need to run
docker-compose -f local.yml run --rm --service-ports django python manage.py shell_plus --notebook
It might also be that the default port 8888 is already in use by a local Jupyter server, e.g. one spun up by VS Code's Jupyter notebook integration. I therefore usually map to a different port in the NOTEBOOK_ARGUMENTS list in settings.py. In that case the port mapping in the compose file needs to be adjusted as well, and there must not be another container with the same service definition still running in the background, as it might also occupy the port.
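For example (port 8889 is just an arbitrary free port, pick whatever suits you):

NOTEBOOK_ARGUMENTS = [
    '--ip', '0.0.0.0',
    '--port', '8889',  # anything other than the already-used 8888
    '--allow-root',
    '--no-browser',
]
# ...and map "8889:8889" instead of "8888:8888" in the compose file.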
If you want to run the Jupyter notebook as a separate service:
jupyter_notebook:
  build:
    context: .
    dockerfile: docker/dev/web/Dockerfile
  command: python manage.py shell_plus --notebook
  depends_on:
    - web
  ports:
    - 8888:8888 # ipython notebook
  env_file:
    - .env
Then run:
docker-compose logs -f 'jupyter_notebook'
and you will find the access token in the logs.
I have a Docker container which runs a Flask application. When Flask receives an HTTP request, I would like it to trigger the execution of a new ephemeral Docker container which shuts down once it completes what it has to do.
I have read Docker-in-Docker should be avoided so this new container should be run as a sibling container on my host and not within the Flask container.
What would be the solution to do this with docker-py?
We are doing things like this by mounting docker.sock as a shared volume between the host machine and the container. This allows the container to send commands to the host machine, such as docker run.
This is an example from our CI system:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
Answering my own question. Here is a complete setup which works.
In one folder, create the following files:
requirements.txt
Dockerfile
docker-compose.yml
api.py
requirements.txt
docker==3.5.0
flask==1.0.2
Dockerfile
FROM python:3.7-alpine3.7
# Project files
ARG PROJECT_DIR=/srv/api
RUN mkdir -p $PROJECT_DIR
WORKDIR $PROJECT_DIR
COPY requirements.txt ./
# Install Python dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
docker-compose.yml
Make sure to mount docker.sock in volumes as mentioned in the previous answer above.
version: '3'
services:
  api:
    container_name: test
    restart: always
    image: test
    build:
      context: ./
    volumes:
      - ./:/srv/api/
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      FLASK_APP: api.py
    command: ["flask", "run", "--host=0.0.0.0"]
    ports:
      - 5000:5000
api.py
from flask import Flask
import docker
app = Flask(__name__)
#app.route("/")
def hello():
client = docker.from_env()
client.containers.run('alpine', 'echo hello world', detach=True, remove=True)
return "Hello World!"
Then open your browser and navigate to http://0.0.0.0:5000/
It will trigger the execution of the alpine container. If you don't already have the alpine image, the first run will take a bit longer because Docker automatically downloads the image.
The argument detach=True executes the container asynchronously, so that Flask does not wait for the end of the process before returning its response.
The argument remove=True tells Docker to remove the container once its execution is completed.
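If you instead need the container's output in the HTTP response, a variant (my sketch, not part of the original answer) is to drop detach=True; run() then blocks and returns the container's stdout as bytes:

import docker
from flask import Flask

app = Flask(__name__)

@app.route("/sync")
def hello_sync():
    client = docker.from_env()
    # Without detach=True, run() waits for the container to exit and returns
    # its stdout, so the request takes as long as the container runs.
    output = client.containers.run('alpine', 'echo hello world', remove=True)
    return output.decode()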
I have a problem with this docker-compose:
version: '3'
services:
  app:
    image: php:7
    command: php -S 0.0.0.0:8000 /app/get_count_of_day.php
    ports:
      - "8000:8000"
    volumes:
      - .:/app
  composer:
    restart: 'no'
    image: composer/composer:php7
    command: install
    volumes:
      - .:/app
  python:
    image: python:3
    command: bash -c "pip3 install -r /app/requirements.txt && celery worker -l info -A cron --beat --workdir=/app/python"
    links:
      - redis
    volumes:
      - .:/app
    depends_on:
      - app
  redis:
    image: 'redis:3.0-alpine'
    command: redis-server
    ports:
      - "6379:6379"
My celery task
import os
from celery import Celery
from celery.schedules import crontab

os.chdir("..")
app = Celery(broker='redis://redis:6379/0')

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    sender.add_periodic_task(10.0, run_cron.s(), name='add every 10')

@app.task
def run_cron():
    os.system("/usr/local/bin/php index.php")
My error is that php is not found:
python_1 | sh: 1: /usr/local/bin/php: not found
python_1 | [2018-06-15 15:08:29,491: INFO/ForkPoolWorker-2] Task cron.run_cron[e7c338c1-7b9c-4d6f-b607-f4e354fbd623] succeeded in
0.003908602000592509s: None
python_1 | [2018-06-15 15:08:39,487: INFO/Beat] Scheduler: Sending due task add every 10 (cron.run_cron)
But if I go into the container manually with
docker exec -i -t 1ff /bin/bash
I find php in the directory.
Binaries from the container "app" are not exposed in the container "python"; this is Docker's way of working. To run the index.php script you can just open the page via an HTTP request, curl http://app/index.php, or do the same entirely in Python via urllib2 or requests (I recommend the latter).
But in case your request fails because it can't find the app host, the original answer below is your solution.
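A minimal sketch of the requests variant, assuming the app service from the compose file above (its built-in PHP server listens on port 8000 and serves get_count_of_day.php):

import requests

# "app" resolves to the php service on the Compose network; port 8000 is what
# `php -S 0.0.0.0:8000` listens on in the compose file above.
response = requests.get("http://app:8000/")
response.raise_for_status()
print(response.text)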
In case you have to perform more complicated operations inside the app container, you should really think about exposing them through an internal API or something like that, but as I understand it, Docker containers should do one thing and one thing only. If you need to run some complex shell script in your php container, you are breaking this principle: the app container is for serving php pages, so it should do exactly that.
As a last resort, you can totally hack on Docker, for example by exposing the Docker control socket inside your celery container and issuing commands to other containers directly. This can be really dangerous and is heavily discouraged in the docs, but you do you ;)
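For completeness, a hedged sketch of that last-resort route with the docker SDK, assuming /var/run/docker.sock is mounted into the python/celery container and you know the app container's name (the name below is hypothetical; check docker ps for the real one):

import docker

client = docker.from_env()
# Hypothetical container name; Compose usually names it <project>_app_1.
php_container = client.containers.get("project_app_1")
exit_code, output = php_container.exec_run("php /app/index.php")
print(exit_code, output.decode())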
[EDIT: originally misread question...]
In the default Docker network you can't address containers by name. Add
networks:
  my-net:
to the end of your docker-compose file and
networks:
  - my-net
to every container that needs to talk to the others.