debug_toolbar module is not persisted after docker down / docker up - python

I started a new project using Django. The project is built using Docker (a few containers) and Poetry to install all dependencies.
When I first run docker-compose up -d, everything is installed correctly. (I suspect the problem is not actually Docker-related, though.)
After that command, I run docker-compose exec python make -f automation/local/Makefile, where the Makefile has this content:
Makefile
.PHONY: all
all: install-deps run-migrations build-static-files create-superuser

.PHONY: build-static-files
build-static-files:
	python manage.py collectstatic --noinput

.PHONY: create-superuser
create-superuser:
	python manage.py createsuperuser --noinput --user=${DJANGO_SUPERUSER_USERNAME} --email=${DJANGO_SUPERUSER_USERNAME}@zitec.com

.PHONY: install-deps
install-deps: vendor

vendor: pyproject.toml $(wildcard poetry.lock)
	poetry install --no-interaction --no-root

.PHONY: run-migrations
run-migrations:
	python manage.py migrate --noinput
pyproject.toml
[tool.poetry]
name = "some-random-application-name"
version = "0.1.0"
description = ""
authors = ["xxx <xxx#xxx.com>"]
[tool.poetry.dependencies]
python = ">=3.6"
Django = "3.0.8"
docusign-esign = "^3.4.0"
[tool.poetry.dev-dependencies]
pytest = "^3.4"
django-debug-toolbar = "^2.2"
The debug toolbar is installed by adding the usual entries to settings.py (MIDDLEWARE and INSTALLED_APPS), plus a DEBUG_TOOLBAR_CONFIG setting with a SHOW_TOOLBAR_CALLBACK value.
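For reference, the relevant settings.py entries look roughly like this (simplified sketch; the SHOW_TOOLBAR_CALLBACK value here just forces the toolbar on, a common workaround when INTERNAL_IPS is unreliable inside Docker):
# settings.py (sketch)
INSTALLED_APPS += ["debug_toolbar"]
MIDDLEWARE += ["debug_toolbar.middleware.DebugToolbarMiddleware"]
DEBUG_TOOLBAR_CONFIG = {
    "SHOW_TOOLBAR_CALLBACK": lambda request: True,
}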
To be clear: EVERYTHING works after a fresh docker-compose up -d. The problem occurs after I stop the container and start it again with the following commands:
docker-compose down
docker-compose up -d
When I try to access the project, it fails with an error saying the debug_toolbar module does not exist.
I have read the related questions on this site, but nothing worked for me.
Has anyone encountered this problem before?

That sounds like normal behavior. A container has a temporary filesystem, and when the container is deleted, any changes that have been made in that filesystem are permanently lost. Deleting and recreating containers is extremely routine (even just changing environment: or ports: settings in the docker-compose.yml file would cause that to happen).
You should almost never install software in a running container. docker exec is an extremely useful debugging tool, but it shouldn't be the primary way you interact with your container. In both cases you're setting yourself up to lose work if you ever need to change a Docker-level setting or update the base image.
For this example, you can split the contents of that Makefile into two parts: the install-deps target (which installs Python packages and has no external dependencies) and the rest (which depends on a running database). The installation part has to run at image-build time, but the Dockerfile cannot access a database, so the remainder needs to happen at container-startup time.
So in your image's Dockerfile, RUN the installation part:
RUN make -f automation/local/Makefile install-deps
You will also need an entrypoint script that does the rest of the first-time setup, then runs the main container command. This can look like:
#!/bin/sh
make run-migrations build-static-files create-superuser
exec "$#"
Then run this in your Dockerfile:
COPY entrypoint.sh .
ENTRYPOINT ["./entrypoint.sh"]
CMD python3 manage.py runserver 0.0.0.0:8000
(I've recently seen a lot of Dockerfiles that have just ENTRYPOINT ["python3"]. Splitting ENTRYPOINT and CMD this way isn't especially useful; just move the python3 interpreter command into CMD.)
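Putting it all together, the image's Dockerfile might look roughly like this. This is only a sketch: the base image, the way Poetry is installed, and the POETRY_VIRTUALENVS_CREATE setting are assumptions to adapt to your project (the latter makes Poetry install into the image's interpreter instead of a private virtualenv):
FROM python:3.8

WORKDIR /app

# install Poetry and the project dependencies at image-build time
RUN pip install poetry
ENV POETRY_VIRTUALENVS_CREATE=false
COPY pyproject.toml poetry.lock* ./
COPY automation ./automation
RUN make -f automation/local/Makefile install-deps

# copy the rest of the application code (make sure entrypoint.sh is executable)
COPY . .

# first-time setup (migrations, static files, superuser) runs at container startup
ENTRYPOINT ["./entrypoint.sh"]
CMD python3 manage.py runserver 0.0.0.0:8000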

Related

GitHub Action I wrote doesn't have access to repo's files that is calling the action

A sample repo with the directory structure of what I'm working on is on GitHub here. To run the GitHub Action, you just need to go to the Action tab of the repo and run the Action manually.
I have also written a custom GitHub Action that uses python as the base image of its Docker container, but I want the Python version to be an input to the action. To do so, I am creating a second, intermediate Docker container to run with the Python version input argument.
The problem I'm running into is I don't have access to the original repo's files that is calling the GitHub Action. For example, say the repo is called python-sample-project and has folder structure:
python-sample-project
│ main.py
│ file1.py
│
└───folder1
│ │ file2.py
In the outer entrypoint.sh I can see main.py, file1.py, and folder1/file2.py. However, in docker-action/entrypoint.sh I only see the Linux folder structure and the entrypoint.sh file copied over by docker-action/Dockerfile.
In the Alpine example I'm using, the action entrypoint.sh script looks like this:
#!/bin/sh -l
ALPINE_VERSION=$1
cd /docker-action
docker build -t docker-action --build-arg alpine_version="$ALPINE_VERSION" . && docker run docker-action
In docker-action/ I have a Dockerfile and an entrypoint.sh script that should run in the inner container with the dynamic version of Alpine (or Python).
The docker-action/Dockerfile is as follows:
# Container image that runs your code
ARG alpine_version
FROM alpine:${alpine_version}
# Copies your code file from your action repository to the filesystem path `/` of the container
COPY entrypoint.sh /entrypoint.sh
RUN ["chmod", "+x", "/entrypoint.sh"]
# Code file to execute when the docker container starts up (`entrypoint.sh`)
ENTRYPOINT ["/entrypoint.sh"]
In docker-action/entrypoint.sh I run ls, but I do not see the repository files.
Is it possible to access the main.py, file1.py, and folder1/file2.py in entrypoint.sh in the docker-action/entrypoint.sh?
There are generally two ways to get files from your repository available to a docker container you build and run: you either (1) add the files to the image when you build it, or (2) mount the files into the container when you run it. There are other ways, like specifying volumes, but those are probably out of scope here.
The Dockerfile docker-action/Dockerfile does not copy any files except for the entrypoint.sh script. Your entrypoint.sh also does not provide any mount points when running the container. Hence, the outcome you observe is the expected outcome based on these facts.
In order to resolve this, you must either (1) add COPY/ADD statements to your Dockerfile to copy files into the image (and set appropriate build context) OR (2) mount the files into the container when it runs by adding -v /source-path:/container-path to the docker run command in your entrypoint.sh.
See references:
COPY reference
Docker run reference
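For illustration only, the two options might look something like this (the paths are placeholders for your repository layout):
# Option 1: bake the files into the image at build time (in docker-action/Dockerfile)
# (requires passing the repository root as the build context)
COPY . /opt/workspace

# Option 2: mount the files at run time (in the outer entrypoint.sh)
docker run -v /path/on/host:/opt/workspace docker-action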
That said, this approach of building another container just to get a user-provided Python version is a highly questionable practice for GitHub Actions and should probably be avoided. Consider leaning on the setup-python action instead.
The docker-in-docker problem
Nevertheless, if you continue this route and want to go about mounting the directory, you'll have to keep in mind that, when invoking docker from within a docker action on GitHub, the filesystem in the mount specification refers to the filesystem of the docker host, not the filesystem of the container.
It works on my machine?!
Contrary to what you might experience running Docker on a local system, this does not work in GitHub; the working directory is not mounted:
docker run -v $(pwd):/opt/workspace \
--workdir /opt/workspace \
--entrypoint /bin/ls \
my-container "-R"
This doesn't work either:
docker run -v $GITHUB_WORKSPACE:$GITHUB_WORKSPACE \
--workdir $GITHUB_WORKSPACE \
--entrypoint /bin/ls \
my-container "-R"
This kind of thing would work perfectly fine if you tried it on a system running docker locally. What gives?
Dealing with the devil (daemon)
In Actions, the starting working directory, where the repository files are checked out, is $GITHUB_WORKSPACE. In docker actions, that's /github/workspace. The workspace files appear there because the Actions runner mounts the workspace from the host where the docker daemon is running.
You can see that in the command run when your action starts:
/usr/bin/docker run --name f884202608aa2bfab75b6b7e1f87b3cd153444_f687df --label f88420 --workdir /github/workspace --rm -e INPUT_ALPINE-VERSION -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_RUN_ATTEMPT -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_NAME -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/my-repo/my-repo":"/github/workspace" f88420:2608aa2bfab75b6b7e1f87b3cd153444 "3.9.5"
The important bits are this:
-v "/home/runner/work/my-repo/my-repo":"/github/workspace"
-v "/var/run/docker.sock":"/var/run/docker.sock"
/home/runner/work/my-repo/my-repo is the path on the host, where the repository files are. As mentioned, that first line is what gets it mounted into /github/workspace in your action container when it gets run.
The second line is mounting the docker socket from the host to the action container. This means any time you call docker within your action, you're actually talking to the docker daemon outside of your container. This is important because that means when you use the -v argument inside your action, the arguments need to reflect directories that exist outside of the container.
So, what you would actually need to do instead is this:
docker run -v /home/runner/work/my-repo/my-repo:/opt/workspace \
--workdir /opt/workspace \
--entrypoint /bin/ls \
my-container "-R"
Becoming useful to others
And that works. If you only use it for the project itself. However, you have (among others) a remaining problem if you want this action to be consumable by other projects. How do you know where the workspace is on the host? This path will change for each repository, after all. GitHub does not guarantee these paths, either. They may be different on different platforms, or your action may be running on a self-hosted runner.
So how do you contend with that problem? Unfortunately, there is no built-in environment variable that points specifically to the directory you need. However, relying on an implementation detail, you might get away with the $RUNNER_WORKSPACE variable, which in this case points to /home/runner/work/your-project. This is not the same place as $GITHUB_WORKSPACE, but it's close. You can then use the GITHUB_REPOSITORY variable to build the path, though as far as I know this isn't guaranteed to always hold:
PROJECT_NAME="$(basename ${GITHUB_REPOSITORY})"
WORKSPACE="${RUNNER_WORKSPACE}/${PROJECT_NAME}"
You also have some other things to fix, like the working directory from which you build.
TL;DR
You need to mount the files into the container when you run it. In GitHub you're running docker-in-docker, so the paths used to mount files work differently; you need to find the correct host paths to pass to docker when it is called from within your action container.
A minimally working solution for the example project you linked is an entrypoint.sh in the root of the repo that looks like this:
#!/usr/bin/env sh
ALPINE_VERSION=$1
docker build -t docker-action \
-f ./docker-action/Dockerfile \
--build-arg alpine_version="$ALPINE_VERSION" \
./docker-action
PROJECT_NAME="$(basename ${GITHUB_REPOSITORY})"
WORKSPACE="${RUNNER_WORKSPACE}/${PROJECT_NAME}"
docker run --workdir=$GITHUB_WORKSPACE \
-v $WORKSPACE:$GITHUB_WORKSPACE \
docker-action "$#"
There are probably further concerns with your action, depending on what it does, such as passing all the default and user-defined environment variables through to the 'inner' container, if that's important.
So, is this possible? Sure. Is it reasonable just to get a dynamic version of alpine/python? I don't think so. There's probably better ways of accomplishing what you want to do, like using setup-python, but that sounds like a different question.

How to run python code from linux via a docker containing a specific python version

I have a Linux server on which I want to be able to run some Python scripts. To do so, I created a Docker image of Python (3.6.8) with some specific dependencies needed to run my code.
I am new to the Linux command line and need help writing a command that would run a given Python script using my Docker image (Python 3.6.8).
My Docker image is named geomatique_python and it is located in docker_image.
As for the structure of the code itself, I am starting from scratch and am looking for some hints and advices.
Thanks
I'm very much for an "all the things in Docker" approach. Regarding your mention of having specific versions set in stone, the declarative nature of Docker is great for that. You can extend an official Python Docker image with your libraries, then bind-mount the folders into your container during the run. A minimal project might look like:
.
├── app.py
└── Dockerfile
My app.py is a simple requests script:
#!/usr/bin/env python3
import requests
r = requests.get('https://api.github.com')
if r.status_code == 200:
print("HTTP {}".format(r.status_code))
My Dockerfile contains the runtime dependencies for my app:
FROM python:3.6-slim
RUN python3 -m pip install requests
Note: I'm extending the official python image in this example.
After building the docker image (i.e. docker build --rm -t so:57697538 .) you can run a container from the image bind-mounting the directory that contains the scripts inside the container and execute them: docker run --rm -it -v ${PWD}:/src --entrypoint python3 so:57697538 /src/app.py
Admittedly, for Python, virtualenv / virtualenvwrapper can be convenient, but they are very much Python-only, whereas Docker is language-agnostic.

Run the right scripts before deployment elastic beanstalk

I am editing my .ebextensions .config file to run some initialisation commands before deployment. I thought these commands would be run in the same folder as the extracted .zip containing my app, but that's not the case. manage.py is in the root directory of my zip, and if I use the commands:
commands:
  01_collectstatic:
    command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
I get: ERROR: [Instance: i-085e84b9d1df851c9] Command failed on instance. Return code: 2 Output: python: can't open file 'manage.py': [Errno 2] No such file or directory.
I could use command: "python /opt/python/current/app/manage.py collectstatic --noinput", but that would run the manage.py that was successfully deployed previously, instead of the one being deployed right now.
I checked the working directory of the commands run by the .config by using command: "pwd", and it turns out to be /opt/elasticbeanstalk/eb_infra, which doesn't contain my app.
So I probably need to change $PYTHONPATH to contain the right path, but I don't know which path is it.
In this comment the user added the following to his .config file:
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: myapp.settings
    PYTHONPATH: "./src"
That works because his manage.py lives inside the src folder at the root of his zip. In my case I would use PYTHONPATH: ".", but it's not working.
AWS support solved the problem. Here's their answer:
When Beanstalk is deploying an application, it keeps your application files in a "staging" directory while the EB Extensions and Hook Scripts are being processed. Once the pre-deploy scripts have finished, the application is then moved to the "production" directory. The issue you are having is related to the "manage.py" file not being in the expected location when your "01_collectstatic" command is being executed.
The staging location for your environment (Python 3.4, Amazon Linux 2017.03) is "/opt/python/ondeck/app".
The EB Extension "commands" section is executed before the staging directory is actually created. To run your script once the staging directory has been created, you should use "container_commands". This section is meant for modifying your application after the application has been extracted, but before it has been deployed to the production directory. It will automatically run your command in your staging directory.
Can you please try implementing the container_command section and see if it helps resolve your problem? The syntax will look similar to this (but please test it before deploying to production):
container_commands:
  01_collectstatic:
    command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
So, the thing to remember about Beanstalk is that the commands are independent of each other, and you do not maintain state between them. You have two options in this case: put your commands into a shell script that is uploaded in the files section of .ebextensions, or write one-line commands that do all the stateful activities prefixed to your command of interest.
e.g.,
00_collectstatic:
  command: "pushd /path/to/django && source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput && popd"

Autoreload flask on file save using heroku local [duplicate]

Finally I migrated my development env from runserver to gunicorn/nginx.
It would be convenient to replicate the autoreload feature of runserver in gunicorn, so that the server automatically restarts when the source changes. Otherwise I have to restart the server manually with kill -HUP.
Any way to avoid the manual restart?
While this is an old question: ever since version 19.0, gunicorn has had the --reload option, so no third-party tools are needed any more.
Another option is to use --max-requests to limit each spawned process to serving only one request, by adding --max-requests 1 to the startup options. Every newly spawned process will see your code changes, and in a development environment the extra startup time per request should be negligible.
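For the --reload route, the invocation is simply something like this (a sketch; myproject.wsgi and the bind address are placeholders for your own project):
gunicorn myproject.wsgi:application --bind 127.0.0.1:8000 --reload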
Bryan Helmig came up with this and I modified it to use run_gunicorn instead of launching gunicorn directly, to make it possible to just cut and paste these 3 commands into a shell in your django project root folder (with your virtualenv activated):
pip install watchdog -U
watchmedo shell-command --patterns="*.py;*.html;*.css;*.js" --recursive --command='echo "${watch_src_path}" && kill -HUP `cat gunicorn.pid`' . &
python manage.py run_gunicorn 127.0.0.1:80 --pid=gunicorn.pid
I use git push to deploy to production and set up git hooks to run a script. The advantage of this approach is that you can also do your migration and package installation at the same time. https://mikeeverhart.net/2013/01/using-git-to-deploy-code/
mkdir -p /home/git/project_name.git
cd /home/git/project_name.git
git init --bare
Then create a script /home/git/project_name.git/hooks/post-receive.
#!/bin/bash
GIT_WORK_TREE=/path/to/project git checkout -f
source /path/to/virtualenv/activate
pip install -r /path/to/project/requirements.txt
python /path/to/project/manage.py migrate
sudo supervisorctl restart project_name
Make sure to chmod u+x post-receive, and add user to sudoers. Allow it to run sudo supervisorctl without password. https://www.cyberciti.biz/faq/linux-unix-running-sudo-command-without-a-password/
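The sudoers entry might look like this (a sketch, assuming the deploy user is called git and supervisorctl lives at /usr/bin/supervisorctl; edit it with visudo):
git ALL=(ALL) NOPASSWD: /usr/bin/supervisorctl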
From my local / development server, I set up git remote that allows me to push to the production server
git remote add production ssh://user_name@production-server/home/git/project_name.git
# initial push
git push production +master:refs/heads/master
# subsequent push
git push production master
As a bonus, you will get to see all the prompts as the script is running. So you will see if there is any issue with the migration/package installation/supervisor restart.

Best practices for debugging vagrant+docker+flask

My goal is to run a Flask web server from a Docker container. Since I am working on a Windows machine, this requires Vagrant for creating a VM. Running vagrant up --provider=docker leads to the following complaint:
INFO interface: error: The container started either never left the "stopped" state or
very quickly reverted to the "stopped" state. This is usually
because the container didn't execute a command that kept it running,
and usually indicates a misconfiguration.
If you meant for this container to not remain running, please
set the Docker provider configuration "remains_running" to "false":
config.vm.provider "docker" do |d|
  d.remains_running = false
end
This is my Dockerfile
FROM mrmrcoleman/python_webapp
EXPOSE 5000
# Install Python
RUN apt-get install -y python python-dev python-distribute python-pip
# Add and install Python modules
RUN pip install Flask
#copy the working directory to the container
ADD . /
CMD python run.py
And this is the Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.build_dir = "." # searches for a local Dockerfile
  end
  config.vm.synced_folder ".", "/vagrant", type: "rsync",
    rsync__chown: false
end
Because the Vagrantfile and run.py work without trouble independently, I suspect I made a mistake in the Dockerfile. My question is twofold:
Is there something clearly wrong with the Dockerfile or the Vagrantfile?
Is there a way to have vagrant/docker produce more specific error messages?
I think the answer I was looking for is the command
vagrant docker-logs
It turned out I had broken the Dockerfile while debugging because I did not recognize good behaviour as such: nothing visible happens when the app runs as it should. docker-logs confirms that the Flask app is listening for requests.
Is there something clearly wrong with the Dockerfile or the Vagrantfile?
Your Dockerfile and Vagrantfile look good, but I think you need to modify the permissions of run.py to make it executable:
...
#copy the working directory to the container
ADD . /
RUN chmod +x run.py
CMD python run.py
Does that work?
Is there a way to have vagrant/docker produce more specific error messages?
Try taking a look at the vagrant debugging page. Another approach I use is to log into the container and try running the script manually.
# log onto the vm running docker
vagrant ssh
# start your container in bash, assuming it's already built.
docker run -it my/container /bin/bash
# now from inside your container try to start your app
python run.py
Also, if you want to view your app locally, you'll want to add port forwarding to your Vagrantfile.
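For example, one way is the Docker provider's ports option (a sketch, assuming the Flask app listens on port 5000):
config.vm.provider "docker" do |d|
  d.build_dir = "."
  d.ports = ["5000:5000"] # host:container
end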
