I am trying to debug an existing Django project in PyCharm on Linux, using an existing docker-compose file and a remote interpreter.
I followed a tutorial on configuring the docker-compose interpreter and the run configuration, and the configurations look like this:
When I try to start the project, the right Docker container is launched, but I get the error:
Attaching to docker_web_1
web_1 | Unknown command: 'python'
web_1 | Type 'django-admin.py help' for usage.
docker_web_1 exited with code 1
I have tried other interpreter paths (e.g. /usr/bin/python2.7) but the error remains. Did I miss something in this configuration?
I've tried adding the following snippet to my Dockerfile, but it did not help:
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
I know this was asked a long time ago, but here it is for the people who come later...
I've pasted an image of my configuration; if further explanation is required, ask for it.
(I use runserver_plus, but it works for runserver as well.)
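One common cause of this particular error (an assumption on my part, since the configuration screenshots aren't visible here) is that the image's ENTRYPOINT is django-admin.py, so the python command PyCharm sends is interpreted as a django-admin subcommand. Clearing the entrypoint and setting the command in docker-compose.yml sidesteps this; a minimal sketch, with the service name and port assumed:

```yaml
version: "3"
services:
  web:
    build: .
    entrypoint: []          # clear an inherited django-admin.py entrypoint
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
```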
I am trying to build a CI setup with Travis for my Docker app. My docker-compose file imports a file called ".env". This file is gitignored, so Travis can't use it. To work around this, I create the empty file in my .travis.yml and set the environment variables on the Travis website:
language: python
python:
- "3.6"
services:
- docker
before_script:
- touch .env
- pip install docker-compose
script:
- docker-compose run web sh -c "python manage.py test"
When I push to git, everything seems to work on the Travis side until the tests start and Travis reaches this line of code in my app:
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS").split(" ")
There I have this error in Travis logs :
File "/home/pur_beurre/web/pur_beurre/settings.py", line 29, in <module>
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS").split(" ")
AttributeError: 'NoneType' object has no attribute 'split'
The command "docker-compose run web sh -c "python manage.py test"" exited with 1.
Note: DJANGO_ALLOWED_HOSTS = localhost
When and where do you run export DJANGO_ALLOWED_HOSTS = localhost?
Also how do you call docker-compose run etc.?
Keep in mind that for your environment variables to be available to your docker-compose.yml file, docker-compose needs to be called from the same terminal where you exported DJANGO_ALLOWED_HOSTS.
You need to source your env file before you call docker-compose up -d as described in this answer:
set -a
source .my-env
docker-compose up -d
I advise you to read the answer I linked above.
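Separately, the traceback itself is worth understanding: os.environ.get returns None when the variable is missing, and None has no .split method, hence the AttributeError. A defensive sketch for settings.py (the "localhost" default here is my assumption, not part of the answer above):

```python
import os

# Fall back to "localhost" when DJANGO_ALLOWED_HOSTS is unset, so an
# empty .env on CI doesn't crash the settings import with AttributeError.
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS", "localhost").split(" ")
```

With a space-separated value such as "localhost example.com", the same expression yields a two-item list.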
I started a new project using Django. The project is built using Docker with a few containers, and Poetry to install all dependencies.
When I first run docker-compose up -d, everything is installed correctly. Actually, I suppose this problem is not related to Docker.
After I run that command, I'm running docker-compose exec python make -f automation/local/Makefile which has this content
Makefile
.PHONY: all
all: install-deps run-migrations build-static-files create-superuser

.PHONY: build-static-files
build-static-files:
	python manage.py collectstatic --noinput

.PHONY: create-superuser
create-superuser:
	python manage.py createsuperuser --noinput --user=${DJANGO_SUPERUSER_USERNAME} --email=${DJANGO_SUPERUSER_USERNAME}@zitec.com

.PHONY: install-deps
install-deps: vendor

vendor: pyproject.toml $(wildcard poetry.lock)
	poetry install --no-interaction --no-root

.PHONY: run-migrations
run-migrations:
	python manage.py migrate --noinput
pyproject.toml
[tool.poetry]
name = "some-random-application-name"
version = "0.1.0"
description = ""
authors = ["xxx <xxx@xxx.com>"]
[tool.poetry.dependencies]
python = ">=3.6"
Django = "3.0.8"
docusign-esign = "^3.4.0"
[tool.poetry.dev-dependencies]
pytest = "^3.4"
django-debug-toolbar = "^2.2"
The debug toolbar is installed by adding the usual entries to settings.py (MIDDLEWARE / INSTALLED_APPS), and even a DEBUG_TOOLBAR_CONFIG with a SHOW_TOOLBAR_CALLBACK value.
Let me confirm that EVERYTHING works after a fresh docker-compose up -d. The problem occurs after I stop the container and start it again using the following commands:
docker-compose down
docker-compose up -d
When I try to access the project, it says Module debug_toolbar does not exist!
I read all questions from this website, but nothing worked for me.
Has anyone encountered this problem before?
That sounds like normal behavior. A container has a temporary filesystem, and when the container exits any changes that have been made in that filesystem will be permanently lost. Deleting and recreating containers is extremely routine (even just changing environment: or ports: settings in the docker-compose.yml file would cause that to happen).
You should almost never install software in a running container. docker exec is an extremely useful debugging tool, but it shouldn't be the primary way you interact with your container. In both cases you're setting yourself up to lose work if you ever need to change a Docker-level setting or update the base image.
For this example, you can split the contents of that Makefile into two parts: the install-deps target (which installs Python packages but doesn't have any external dependencies) and the rest (which depends on a running database). You need to run the installation part at image-build time, but the Dockerfile can't access a database, so the remainder needs to happen at container-startup time.
So in your image's Dockerfile, RUN the installation part:
RUN make install-deps
You will also need an entrypoint script that does the rest of the first-time setup, then runs the main container command. This can look like:
#!/bin/sh
make run-migrations build-static-files create-superuser
exec "$@"
Then run this in your Dockerfile:
COPY entrypoint.sh .
ENTRYPOINT ["./entrypoint.sh"]
CMD python3 manage.py runserver 0.0.0.0:8000
(I've recently seen a lot of Dockerfiles that have just ENTRYPOINT ["python3"]. Splitting ENTRYPOINT and CMD this way isn't especially useful; just move the python3 interpreter command into CMD.)
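Put together, the relevant part of the Dockerfile might look like this (a sketch: the base image, WORKDIR, and the Poetry virtualenv setting are my assumptions, not from the question):

```dockerfile
FROM python:3.8
WORKDIR /app
# Install Poetry and have it install into the system environment,
# since the container itself already provides isolation.
RUN pip install poetry
ENV POETRY_VIRTUALENVS_CREATE=false
COPY . .
# Image-build time: dependency installation only (no database needed).
RUN make -f automation/local/Makefile install-deps
RUN chmod +x entrypoint.sh
# Container-startup time: migrations, static files, superuser, then the app.
ENTRYPOINT ["./entrypoint.sh"]
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
```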
How to run a command in Docker using custom arguments?
I'm trying to run a command that makes Django migrate using an environment variable, passed as an argument when executing the command against the server's container.
Thank you very much for your attention.
I need to run the command in the following format for it to work.
# VAR=enviroment_name python manage.py migrate --database=01_sistema
docker
docker exec 24e2b5c60a79 VAR=enviroment_name python manage.py migrate --database=01_sistema
Error
OCI runtime exec failed: exec failed: container_linux.go:344: starting
container process caused "exec: \"VAR=enviroment_name\": executable
file not found in $PATH": unknown
In bash you set an environment variable by prepending key=value to a command. However, this is not the case for docker exec. You can pass environment variables to docker exec by adding the argument -e key=value (which can be specified multiple times). In your case, that is:
docker exec -e VAR=enviroment_name 24e2b5c60a79 python manage.py migrate --database=01_sistema
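The underlying reason: KEY=value prefixes are shell syntax, and docker exec invokes the binary directly without a shell, so "VAR=enviroment_name" is treated as the executable name. The same mechanism can be demonstrated without a container using env(1) (this sketch only illustrates the principle):

```shell
#!/bin/sh
# env places VAR in the child's environment, just as `docker exec -e` does;
# the child shell then reads it normally.
result=$(env VAR=enviroment_name sh -c 'echo "$VAR"')
echo "$result"   # prints: enviroment_name
```

Alternatively, wrapping the whole command in sh -c inside the container would also let a shell interpret the KEY=value prefix.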
My goal is to run a Flask webserver from a Docker container. Since I am working on a Windows machine, this requires Vagrant for creating a VM. Running vagrant up --provider=docker leads to the following complaint:
INFO interface: error: The container started either never left the "stopped" state or
very quickly reverted to the "stopped" state. This is usually
because the container didn't execute a command that kept it running,
and usually indicates a misconfiguration.
If you meant for this container to not remain running, please
set the Docker provider configuration "remains_running" to "false":
config.vm.provider "docker" do |d|
d.remains_running = false
end
This is my Dockerfile
FROM mrmrcoleman/python_webapp
EXPOSE 5000
# Install Python
RUN apt-get install -y python python-dev python-distribute python-pip
# Add and install Python modules
RUN pip install Flask
#copy the working directory to the container
ADD . /
CMD python run.py
And this is the Vagrantfile
Vagrant.configure("2") do |config|
config.vm.provider "docker" do |d|
d.build_dir = "." #searches for a local dockerfile
end
config.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__chown: false
end
Because the Vagrantfile and run.py work without trouble independently, I suspect I made a mistake in the Dockerfile. My question is twofold:
Is there something clearly wrong with the Dockerfile or the Vagrantfile?
Is there a way to have vagrant/docker produce more specific error messages?
I think the answer I was looking for is using the command
vagrant docker-logs
I had broken the Dockerfile because I did not recognize good behaviour as such: nothing visibly happens when the app runs as it should. docker-logs confirms that the Flask app is listening for requests.
Is there something clearly wrong with the Dockerfile or the Vagrantfile?
Your Dockerfile and Vagrantfile look good, but I think you need to modify the permissions of run.py to make it executable:
...
#copy the working directory to the container
ADD . /
RUN chmod +x run.py
CMD python run.py
Does that work?
Is there a way to have vagrant/docker produce more specific error messages?
Try taking a look at the vagrant debugging page. Another approach I use is to log into the container and try running the script manually.
# log onto the vm running docker
vagrant ssh
# start your container in bash, assuming it's already built.
docker run -it my/container /bin/bash
# now from inside your container try to start your app
python run.py
Also, if you want to view your app locally, you'll want to add port forwarding to your Vagrantfile.
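With the Docker provider, that mapping can go directly on the provider block (a sketch; 5000 matches the EXPOSE in the Dockerfile above):

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.build_dir = "."
    d.ports = ["5000:5000"]  # host:container
  end
end
```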
I am receiving the error:
ImportError at /
No module named Interest.urls
even though my settings file has been changed several times:
ROOT_URLCONF = 'urls'
or
ROOT_URLCONF = 'interest.urls'
I keep getting the same error, as if it doesn't matter what I put in my settings file; it is still looking for Interest.urls, even though my urls file is located at Interest (the Django project)/interest/urls.py.
I have restarted my nginx server several times and it changes nothing, is there another place I should be looking to change where it looks for my urls file?
Thanks!
I had to restart my supervisorctl, which restarted the gunicorn server which was actually handling the django files
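For reference, that restart is typically a one-liner (the program name here is an assumption; use whatever name your supervisor config defines):

```shell
sudo supervisorctl restart myproject
```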
There's no need to restart nginx; you can do these steps:
Install fabric (pip install fabric)
Create a "restart" function in fabfile.py with the following:
def restart():
sudo('kill -9 `ps -ef | grep -m 1 \'[y]our_project_name\' | awk \'{print $2}\'`')
Call the function through:
$ fab restart
Optionally, you might want to put the command into a script with your password by adding "-p mypass" to the fabric command.
That will kill all your gunicorn processes allowing supervisord to start them again.
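A gentler alternative (not from the answer above, and the pidfile path is an assumption): send HUP to the gunicorn master process, which gracefully reloads the workers instead of relying on supervisord to respawn them after a kill -9:

```shell
kill -HUP `cat /var/run/gunicorn.pid`
```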