I'm following a Python/TDD/Docker tutorial by TestDriven.io.
I built a custom image and I want to test it, but I can't figure out how (I think; I'm a noob with Docker and Python, so please be patient).
This is the image: registry.gitlab.com/sineverba/warehouse:latest. It works: I deployed it to Heroku successfully.
I don't want to use docker-compose to test the final image, so I tried:
docker network create -d bridge flask-tdd-net
export DATABASE_TEST_URL=postgres://postgres:postgres@flask-tdd-net:5432/users_dev
docker run -d --name app -e "PORT=8765" -p 5002:8765 --network=flask-tdd-net registry.gitlab.com/sineverba/warehouse:latest
docker run -d --name db -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=users_dev -p 5432:5432 --network=flask-tdd-net postgres:12-alpine
I can run a simple

docker exec app python -V

and get the version back, for example.
But when I launch
docker exec app python -m pytest "project/tests"
I get the following (truncated; full log here: https://pastebin.com/tYjn65ys):
self = <[AttributeError("'NoneType' object has no attribute 'drivername'") raised in repr()] SQLAlchemy object at 0x7fc74676e7f0>
app = <Flask 'project'>, sa_url = None, options = {}

    def apply_driver_hacks(self, app, sa_url, options):
        """This method is called before engine creation and used to inject
        driver specific hacks into the options. The `options` parameter is
        a dictionary of keyword arguments that will then be used to call
        the :func:`sqlalchemy.create_engine` function.
        The default implementation provides some saner defaults for things
        like pool sizes for MySQL and sqlite. Also it injects the setting of
        `SQLALCHEMY_NATIVE_UNICODE`.
        """
>       if sa_url.drivername.startswith('mysql'):
E       AttributeError: 'NoneType' object has no attribute 'drivername'
I also tried (after stopping and removing the containers and recreating the databases):

export DATABASE_TEST_URL=postgres://postgres:postgres@db:5432/users

That is, changing the database name from users_dev to users.
Full repo link: https://github.com/sineverba/flask-tdd-docker/tree/add-gitlab-warehouse
Thank you in advance!
Edit
I changed the env variable because the db hostname in the link was wrong. These are the new commands, but I got the same error. I also tried exporting both env variables, without success.
docker network create -d bridge flask-tdd-net
export DATABASE_TEST_URL=postgres://postgres:postgres@db:5432/users
export DATABASE_URL=postgres://postgres:postgres@db:5432/users
docker run -d --name app -e "PORT=8765" -p 5002:8765 --network=flask-tdd-net registry.gitlab.com/sineverba/warehouse:latest
docker run -d --name db -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=users -p 5432:5432 --network=flask-tdd-net postgres:12-alpine
docker exec app python -m pytest "project/tests"
docker container stop app && docker container rm app && docker container stop db && docker container rm db
Starting example
This is the testdriven.io example from the GitLab integration (which I don't want to use). The only env variable exported for the app is DATABASE_TEST_URL.
image: docker:stable

stages:
  - build
  - test

variables:
  IMAGE: ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}

build:
  stage: build
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $IMAGE:latest || true
    - docker build
      --cache-from $IMAGE:latest
      --tag $IMAGE:latest
      --file ./Dockerfile.prod
      "."
    - docker push $IMAGE:latest

test:
  stage: test
  image: $IMAGE:latest
  services:
    - postgres:latest
  variables:
    POSTGRES_DB: users
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: runner
    DATABASE_TEST_URL: postgres://runner:runner@postgres:5432/users
  script:
    - pytest "project/tests" -p no:warnings
    - flake8 project
    - black project --check
    - isort project/**/*.py --check-only
Solved
The problem was the need to pass the variables inside the docker command itself:
docker run -d --name app -e "PORT=8765" -p 5002:8765 -e "DATABASE_TEST_URL=postgres://postgres:postgres@db:5432/users" --network=flask-tdd-net registry.gitlab.com/sineverba/warehouse:latest
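For the record, a host-side export only sets the variable in your local shell; docker run starts the container with a clean environment, so every variable has to be passed explicitly with -e, or collected in a file and passed with --env-file. A sketch of the latter, assuming a hypothetical local file named app.env:

# app.env (hypothetical)
PORT=8765
DATABASE_TEST_URL=postgres://postgres:postgres@db:5432/users

docker run -d --name app -p 5002:8765 --env-file app.env --network=flask-tdd-net registry.gitlab.com/sineverba/warehouse:latest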
Related
I'm trying to pass 2 parameters to a docker container for a dash app (via a shell script). Passing one parameter works, but two doesn't. Here's what happens when I pass two parameters:
command:
sudo sh create_dashboard.sh 6 4
Error:
creating docker
Running for parameter_1: 6
Running for parameter_2: 4
usage: app.py [-h] [-g parameter_1] [-v parameter_2]
app.py: error: argument -g/--parameter_1: expected one argument
The shell script:
echo "creating docker"
docker build -t dash-example .
echo "Running for parameter_1: $1 "
echo "Running for parameter_2: $2 "
docker run --rm -it -p 8080:8080 --memory=10g dash-example $1 $2
Dockerfile:
FROM python:3.8
WORKDIR /app
COPY src/requirements.txt ./
RUN pip install -r requirements.txt
COPY src /app
EXPOSE 8080
ENTRYPOINT [ "python", "app.py", "-g", "-v"]
When I use this command:
sudo sh create_dashboard.sh 6
the docker container runs perfectly, with parameter_2 being None.
You can override the entrypoint and pass a command into the container's shell like this:

docker run --rm -it -p 8080:8080 --memory=10g --entrypoint sh dash-example -c "python app.py -g $1 -v $2"

(Note that --memory=10g is an option to docker run, so it must come before the image name.) This allows arbitrary arguments and any other command.
When you docker run ... dash-example $1 $2, the additional parameters are interpreted as the "command" the container should run. Since your image has an ENTRYPOINT, the words of the command are just tacked on to the end of the words of the entrypoint (see Understand how CMD and ENTRYPOINT interact in the Dockerfile documentation). There's no way to cause the words of one command to be interspersed with the words of another; you are effectively getting a command line of
python app.py -g -v 6 4
The approach I'd recommend here is to not use an ENTRYPOINT at all. Make sure you can directly run the application script (its first line should be #!/usr/bin/env python3, it should be executable) and make the image's default CMD be to run the script:
FROM python:3.9
...
# RUN chmod +x app.py # if needed
# no ENTRYPOINT at all
CMD ["./app.py"] # finds "python" via the shebang line
Then your wrapper can supply a complete command line, including the options you need to run:
#!/bin/sh
docker run --rm -it -p 8080:8080 --memory=10g dash-example \
./app.py -g "$1" -v "$2"
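For completeness, a minimal app.py that would work with this wrapper (hypothetical; your real script presumably already parses -g and -v, e.g. with argparse):

#!/usr/bin/env python3
# Hypothetical stand-in for the dash app's argument handling.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-g", "--parameter_1")
parser.add_argument("-v", "--parameter_2")
args = parser.parse_args()
print("parameter_1:", args.parameter_1, "parameter_2:", args.parameter_2)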
(There is an alternate "container as command" pattern, where the ENTRYPOINT contains the command to run and the CMD its options. This can lead to awkward docker run --entrypoint command lines for routine debugging tasks, and if the command itself is short it doesn't really save you a lot. You'd still need to repeat the -g and -v options in the wrapper.)
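If you did want that pattern here, a sketch (with illustrative defaults, not the asker's actual values) would be:

ENTRYPOINT ["./app.py"]
CMD ["-g", "0", "-v", "0"]

so that docker run ... dash-example -g 6 -v 4 replaces only the option part.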
I have a Makefile, that runs a docker-compose, which has a container that executes a python script. I want to be able to pass a variable in the command-line to the Makefile and print it within the python script (testing.py).
My directory looks like:

main_folder:
  - docker-compose.yaml
  - Makefile
  - testing.py
I have tried with the following configuration. The Makefile is:
.PHONY: run run-prod stop stop-prod rm

run:
	WORKING_DAG=$(working_dag) docker-compose -f docker-compose.yml up -d --remove-orphans --build --force-recreate
The docker-compose is:
version: "3.7"
services:
prepare_files:
image: apache/airflow:1.10.14
environment:
WORKING_DAG: ${working_dag}
PYTHONUNBUFFERED: 1
entrypoint: /bin/bash
command: -c "python3 testing.py $$WORKING_DAG"
And the file testing.py is:
import sys
print(sys.argv[0], flush=True)
When I run in the command line:
make working_dag=testing run
It doesn't fail, but it doesn't print anything either. How can I make it work? Thanks
I believe that the variable WORKING_DAG is getting assigned correctly through the command line and that the Makefile is passing it correctly to docker-compose. I verified this by keeping the container alive after execution, logging into it, and checking the value of WORKING_DAG.
To keep the container from being destroyed once execution completes, I modified the docker-compose.yml as follows:
version: "3.7"
services:
prepare_files:
image: apache/airflow:1.10.14
environment:
WORKING_DAG: ${working_dag}
PYTHONUNBUFFERED: 1
entrypoint: /bin/bash
command: -c "python3 testing.py $$WORKING_DAG"
command: -c "tail -f /dev/null"
airflow@d8dcb07c926a:/opt/airflow$ echo $WORKING_DAG
testing
The issue that Docker does not display Python's stdout when deploying with docker-compose has already been reported on GitHub, here, and is still not resolved. Making it work with docker-compose is only possible if we mount the file into the container, or if we use a Dockerfile instead.
When using a Dockerfile, you only have to run the corresponding script as follows (shell form, so that $WORKING_DAG is expanded at run time; -u keeps output unbuffered):
CMD python -u testing.py $WORKING_DAG
To mount the script into the container, please look at @DazWilkin's answer, here.
You'll need to mount your testing.py into the container (using volumes). In the following, your current working directory (${PWD}) is used and testing.py is mounted in the container's root directory:
version: "3.7"
services:
prepare_files:
image: apache/airflow:1.10.14
volumes:
- ${PWD}/testing.py:/testing.py
environment:
PYTHONUNBUFFERED: 1
entrypoint: /bin/bash
command: -c "python3 /testing.py ${WORKING_DAG}"
NOTE: There's no need to include WORKING_DAG in the service definition, as it's already exposed to the Docker Compose environment by your Makefile. Setting it as you did overwrites it with "" (empty string), because ${working_dag} was your original environment variable and you remapped it to WORKING_DAG in your Makefile's run step.
And
import sys
print(sys.argv[0:], flush=True)
Then:
make --always-make working_dag=Freddie run
WORKING_DAG=Freddie docker-compose --file=./docker-compose.yaml up
Recreating 66014039_prepare_files_1 ... done
Attaching to 66014039_prepare_files_1
prepare_files_1 | ['/testing.py', 'Freddie']
66014039_prepare_files_1 exited with code 0
I'd like to be able to configure env variables for my docker containers and use them in the build process with a .env file.
I currently have the following .env file:
SSH_PRIVATE_KEY=TEST
APP_PORT=8040
my docker-compose:
version: '3'
services:
  companies:
    image: companies8
    environment:
      - SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
    ports:
      - ${APP_PORT}:${APP_PORT}
    env_file: .env
    build:
      context: .
      args:
        - SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
my Dockerfile:
FROM python:3.7
# set a directory for the app
COPY . .
#Accept input argument from docker-compose.yml
ARG SSH_PRIVATE_KEY=abcdef
ENV SSH_PRIVATE_KEY $SSH_PRIVATE_KEY
RUN echo $SSH_PRIVATE_KEY
# Pass the content of the private key into the container
RUN mkdir -p /root/.ssh
RUN chmod 400 /root/.ssh
RUN echo "$SSH_PRIVATE_KEY" > /root/.ssh/id_rsa
RUN echo "$SSH_PUBLIC_KEY" > /root/.ssh/id_rsa.pub
RUN chmod 400 /root/.ssh/id_rsa
RUN chmod 400 /root/.ssh/id_rsa.pub
RUN eval $(ssh-agent -s) && ssh-add /root/.ssh/id_rsa && ssh-keyscan bitbucket.org > /root/.ssh/known_hosts
RUN ssh -T git@bitbucket.org
#Install the packages
RUN pip install -r v1/requirements.txt
# Tell the port number the container should expose
EXPOSE 8040
# run the command
CMD ["python", "v1/__main__.py"]
and I have the same SSH_PRIVATE_KEY environment variable set on my Windows machine with the value "test1", and the build log gives me 'test1' from
ENV SSH_PRIVATE_KEY $SSH_PRIVATE_KEY
RUN echo $SSH_PRIVATE_KEY
not the value that's in the .env file.
I need this because some of the libraries listed in my requirements.txt are in an internal repository, and I need SSH to access them, hence the private key. There might be a more proper way to do this, but the general scenario I want to achieve is to pass env variable values from the .env file into my docker build.
There's a certain overlap between ENV and ARG: an ARG can be promoted into the image with an ENV instruction, as you do with ENV SSH_PRIVATE_KEY $SSH_PRIVATE_KEY.
Since the variable is already exported in your operating system, Compose gives the shell's value precedence over the .env file, which is why 'test1' is what ends up in the image through the ENV instruction.
But if you do not really need the variable in the image, only during the build step (as far as I can see from the docker-compose file), then the ARG instruction alone is enough.
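A minimal sketch of that ARG-only approach (illustrative; note that ARG values can still be recovered from the image history, so this is not a security mechanism by itself):

FROM python:3.7
ARG SSH_PRIVATE_KEY
# The ARG value is visible while build steps run...
RUN mkdir -p /root/.ssh && echo "$SSH_PRIVATE_KEY" > /root/.ssh/id_rsa
# ...but without an ENV instruction it is not part of the final container's environment.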
Being new to python & docker, I created a small flask app (test.py) which has two hardcoded values:
username = "test"
password = "12345"
I'm able to create a Docker image and run a container from the following Dockerfile:
FROM python:3.6
RUN mkdir /code
WORKDIR /code
ADD . /code/
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "/code/test.py"]`
How can I create ENV variables for username & password and pass dynamic values when running containers?
Within your python code you can read env variables like:
import os
username = os.environ['MY_USER']
password = os.environ['MY_PASS']
print("Running with user: %s" % username)
Then when you run your container you can set these variables:
docker run -e MY_USER=test -e MY_PASS=12345 ... <image-name> ...
This will set the env variables within the container, and they will later be read by the python script (test.py).
More info on os.environ and docker env
In your Python code you can do something like this:
# USERNAME = os.getenv('NAME_OF_ENV_VARIABLE','default_value_if_no_env_var_is_set')
USERNAME = os.getenv('USERNAME', 'test')
Then you can create a docker-compose.yml file to run your dockerfile with:
version: '2'
services:
  python-container:
    image: python-image:latest
    environment:
      - USERNAME=test
      - PASSWORD=12345
You will run the compose file with:
$ docker-compose up
All you need to remember is to build your dockerfile that you mentioned in your question with:
$ docker build -t python-image .
I hope that answers your question.
FROM python:3
MAINTAINER abc@test.com
ENV username=test \
    password=12345
RUN mkdir /dir/name
# Copy the app (including requirements.txt and test.py) into the image
COPY . /dir/name
RUN cd /dir/name && pip3 install -r requirements.txt
WORKDIR /dir/name
ENTRYPOINT ["/usr/local/bin/python", "./test.py"]
I split my docker-compose into docker-compose.yml (base), docker-compose.dev.yml, etc., then I had this issue.
I solved it by specifying the .env file explicitly in the base:
web:
  env_file:
    - .env
Not sure why; according to the docs, it should just work if there's an .env file.
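One plausible explanation: Compose reads the .env file for variable substitution inside the YAML itself, while env_file is what actually injects the variables into the container. A minimal sketch (assuming .env contains an APP_PORT entry):

web:
  env_file:
    - .env                        # injects the variables into the container
  ports:
    - ${APP_PORT}:${APP_PORT}     # substitution, read from .env by Compose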
I have made a little python script to create a DB and some tables inside RethinkDB.
Now I'm trying to run this python script inside my rethink container, which is launched with docker-compose.
This is my docker-compose.yml rethink container config
# Rethink DB
rethink:
  image: rethinkdb:latest
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015
After launching my container, I'm trying to execute the script with
docker exec -it rethink python src/app/db-install.py
But I get this error
rpc error: code = 2 desc = oci runtime error: exec failed: exec: "python": executable file not found in $PATH
Python is not found in my container. Is it possible to execute a python script inside a given container with docker-compose or with docker exec?
First find out if you have python executable in the container:
docker exec -it rethink which python
If it exists, use the absolute path provided by the which command in the previous step:
docker exec -it rethink /absolute/path/to/python src/app/db-install.py
If not, you can convert your python script into a bash script, so you can run it without extra executables and libraries.
Or you can create a Dockerfile, use a base image, and install python.
Dockerfile:
FROM rethinkdb:latest
RUN apt-get update && apt-get install -y python
Docker Compose file:
rethink:
  build: .
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015
Docker-compose
Assuming that python is installed, try:
docker-compose run --rm MY_DOCKER_COMPOSE_SERVICE MY_PYTHON_COMMAND
To start with, you might also just go into the shell and run a python script from the command prompt.
docker-compose run --rm MY_DOCKER_COMPOSE_SERVICE bash
In your case, MY_DOCKER_COMPOSE_SERVICE is 'rethink', which is not the container name here but the name of the service (the first line, rethink:); docker-compose run runs the service, not the container.
The MY_PYTHON_COMMAND is, in your case, python src/app/db-install.py (or python3 src/app/db-install.py if you have both Python 2 and Python 3 installed).
Dockerfile
To be able to run this python command, the Python file needs to be inside the container. Therefore, in the Dockerfile that you call with build: ., you need to copy your build context into a directory of your choice in the container, for example:

COPY $PROJECT_PATH /tmp

Here /tmp is created inside the container; if you write "." as the source, the whole build context is copied into it with no subfolder.
When using /tmp as the subfolder, you might write at the end of your Dockerfile:
WORKDIR /tmp
Docker-compose
Or, if you do not change the WORKDIR from the build context to /tmp and you still want to reach /tmp, run your Python file as /tmp/db-install.py.
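Putting those pieces together, a minimal sketch of the Dockerfile additions (assuming the script lives at src/app/db-install.py in the build context and python is installed in the image):

# Copy the build context into the container and make it the working directory
COPY . /tmp
WORKDIR /tmp

and then: docker-compose run --rm rethink python src/app/db-install.py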
The rethinkdb image is based on the debian:jessie image:
https://github.com/rethinkdb/rethinkdb-dockerfiles/blob/da98484fc73485fe7780546903d01dcbcd931673/jessie/2.3.5/Dockerfile
The debian:jessie image does not come with python installed.
So you will need to create your own Dockerfile, something like :
FROM rethinkdb:latest
RUN apt-get update && apt-get install -y python
Then change your docker-compose:

# Rethink DB
rethink:
  build: .
  container_name: rethink
  ports:
    - 58080:8080
    - 58015:28015
    - 59015:29015
build: . is the path to the directory containing your Dockerfile.
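With that in place, a plausible workflow would be (assuming the Dockerfile also copies the script into the image, e.g. under /tmp as discussed in the other answer):

docker-compose build
docker-compose up -d
docker exec -it rethink python /tmp/src/app/db-install.py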