I've written a Django app, and now I want to make it easy to deploy on multiple servers.
The basic installation is:
1. Copy the app folder into the Django project folder
2. Add it to INSTALLED_APPS in settings.py
3. Run ./manage.py collectstatic
This particular app doesn't need to use the DB, but if it did, I'd use south and run ./manage.py migrate, but that's another story.
The part I'm having trouble with is #2. I don't want to have to manually edit this file every time. What's the easiest/most robust way to update that?
I was thinking I could use the inspect module to find the variable and then somehow append to it, but I'm not having any luck: inspect.getsourcelines won't find variables.
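A workaround I've been toying with (just a sketch; the EXTRA_DJANGO_APPS variable name is made up) is to have settings.py append any extra apps listed in an environment variable, so the file itself never needs rewriting:

# settings.py (sketch): pull extra apps from a hypothetical EXTRA_DJANGO_APPS
# environment variable, e.g. EXTRA_DJANGO_APPS="my_app,other_app"
import os

INSTALLED_APPS = [
    'django.contrib.contenttypes',
    'django.contrib.staticfiles',
    # ... the usual apps ...
]

INSTALLED_APPS += [
    app.strip()
    for app in os.environ.get('EXTRA_DJANGO_APPS', '').split(',')
    if app.strip()
]

That avoids editing the file on every server, but I'm not sure it's the cleanest approach.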
You can modify your settings.py using bash.
# Set SETTINGS_FILE to the full path of your Django project's settings.py
SETTINGS_FILE="/path/to/your/django/project/settings.py"

# Checks whether app $1 is in the Django project settings file
is_app_in_django_settings() {
    # Check that the Django project settings file exists
    if [ ! -f "$SETTINGS_FILE" ]; then
        echo "Error: The django project settings file '$SETTINGS_FILE' does not exist"
        exit 1
    fi
    grep -Pzo "INSTALLED_APPS\s?=\s?\[[\s\w\.,']*$1[\s\w\.,']*\]\n?" "$SETTINGS_FILE" > /dev/null 2>&1
    # Now $? is 0 if the app is in the settings file, non-zero otherwise
}
# Adds app $1 to the Django project settings
add_app2django_settings() {
    is_app_in_django_settings "$1"
    if [ $? -ne 0 ]; then
        echo "Info. The app '$1' is not in the django project settings file '$SETTINGS_FILE'. Adding."
        sed -i -e '1h;2,$H;$!d;g' -re "s/(INSTALLED_APPS\s?=\s?\[[\n '._a-zA-Z,]*)/\1 '$1',\n/g" "$SETTINGS_FILE"
        # Check that app $1 was successfully added to the settings file
        is_app_in_django_settings "$1"
        if [ $? -ne 0 ]; then
            echo "Error. Could not add the app '$1' to the django project settings file '$SETTINGS_FILE'. Add it manually, then run this script again."
            exit 1
        else
            echo "Info. The app '$1' was successfully added to the django settings file '$SETTINGS_FILE'."
        fi
    else
        echo "Info. The app '$1' is already in the django project settings file '$SETTINGS_FILE'"
    fi
}
Use:
add_app2django_settings "my_app"
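If you would rather stay in Python than bash, roughly the same idea looks like this (a sketch only; the settings path and 'my_app' are placeholders, and it assumes INSTALLED_APPS is defined as a list literal):

import re

SETTINGS_FILE = '/path/to/your/django/project/settings.py'

def add_app_to_settings(app_name):
    with open(SETTINGS_FILE) as f:
        source = f.read()
    # Nothing to do if the app is already listed.
    pattern = r"INSTALLED_APPS\s*=\s*\[[^\]]*['\"]%s['\"]" % re.escape(app_name)
    if re.search(pattern, source):
        return
    # Otherwise insert it right after the opening bracket of INSTALLED_APPS.
    replacement = "\\1\n    '%s'," % app_name
    source = re.sub(r"(INSTALLED_APPS\s*=\s*\[)", replacement, source, count=1)
    with open(SETTINGS_FILE, 'w') as f:
        f.write(source)

add_app_to_settings('my_app')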
Here are my reasons why I think this approach would be wrong:
1. It adds code complexity without any real need; adding one line to settings.py each time is not that bad, especially since you are already doing steps #1 and #3.
2. It makes it less explicit which apps your project is using. When another developer works on your project, they might not know that your app is installed.
3. You should do steps #1 and #2 in version control, test the whole system, commit the changes, and only then deploy.
I think there is something wrong (from my point of view) with your development/deployment process if you are looking for this kind of "optimization". I think it is much easier and better to edit INSTALLED_APPS by hand.
If you are building something for public use and you want to make it as easy as possible to add modules, then it would be nice. In that case I would recommend packaging the project and its apps as Python eggs and making use of entry points. Then you could deploy an app into the project like this:
pip install my-app-name
Without even doing steps #1 and #3 yourself! Step #1 will be done by pip, and steps #2 and #3 will be done by setup hooks defined in your project.
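To sketch what the settings-side hook could look like (purely illustrative; the 'myproject.apps' entry-point group name is made up), the project's settings can collect every app advertised by installed eggs:

# settings.py (sketch): apps register themselves in their own setup.py, e.g.
#   entry_points={'myproject.apps': ['my_app = my_app']}
import pkg_resources

INSTALLED_APPS = [
    'django.contrib.staticfiles',
    # ... the project's own apps ...
]

for entry_point in pkg_resources.iter_entry_points('myproject.apps'):
    INSTALLED_APPS.append(entry_point.module_name)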
Paste script is a good example of entry-points utilization:
# Install paste script:
pip install pastescript
# install django templates for pastescript:
pip install fez.djangoskel
# now paste script knows about fez.djangoskel because of entry-points
# start a new django project from fez's templates:
paste create -t django_buildout
Here is a portion of setup.py from fez.djangoskel package:
...
entry_points="""
[paste.paster_create_template]
django_buildout=fez.djangoskel.pastertemplates:DjangoBuildoutTemplate
django_app=fez.djangoskel.pastertemplates:DjangoAppTemplate
...
zc.buildout is another great tool which might make your deployments much easier. Python eggs play very nicely with buildout.
I started a new project using Django. The project is built using Docker with a few containers, and Poetry to install all the dependencies.
When I first run docker-compose up -d, everything is installed correctly. (Actually, I suppose this problem is not related to Docker itself.)
After that command, I run docker-compose exec python make -f automation/local/Makefile, which uses the following Makefile:
Makefile
.PHONY: all
all: install-deps run-migrations build-static-files create-superuser

.PHONY: build-static-files
build-static-files:
	python manage.py collectstatic --noinput

.PHONY: create-superuser
create-superuser:
	python manage.py createsuperuser --noinput --user=${DJANGO_SUPERUSER_USERNAME} --email=${DJANGO_SUPERUSER_USERNAME}@zitec.com

.PHONY: install-deps
install-deps: vendor

vendor: pyproject.toml $(wildcard poetry.lock)
	poetry install --no-interaction --no-root

.PHONY: run-migrations
run-migrations:
	python manage.py migrate --noinput
pyproject.toml
[tool.poetry]
name = "some-random-application-name"
version = "0.1.0"
description = ""
authors = ["xxx <xxx@xxx.com>"]
[tool.poetry.dependencies]
python = ">=3.6"
Django = "3.0.8"
docusign-esign = "^3.4.0"
[tool.poetry.dev-dependencies]
pytest = "^3.4"
django-debug-toolbar = "^2.2"
The debug toolbar is installed by adding the usual entries to settings.py (MIDDLEWARE / INSTALLED_APPS), plus DEBUG_TOOLBAR_CONFIG with a SHOW_TOOLBAR_CALLBACK value.
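For reference, those entries are roughly the standard django-debug-toolbar setup (a sketch; the exact callback in my settings may differ slightly):

# Added to settings.py (INSTALLED_APPS and MIDDLEWARE are already defined above)
INSTALLED_APPS += ['debug_toolbar']
MIDDLEWARE += ['debug_toolbar.middleware.DebugToolbarMiddleware']

DEBUG_TOOLBAR_CONFIG = {
    # Show the toolbar unconditionally instead of relying on INTERNAL_IPS
    'SHOW_TOOLBAR_CALLBACK': lambda request: True,
}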
Let me confirm that EVERYTHING works after a fresh docker-compose up -d. The problem occurs after I stop the containers and start them again with the following commands:
docker-compose down
docker-compose up -d
When I try to access the project, it says Module debug_toolbar does not exist.
I read all the related questions on this site, but nothing worked for me.
Has anyone encountered this problem before?
That sounds like normal behavior. A container has a temporary filesystem, and when the container exits any changes that have been made in that filesystem will be permanently lost. Deleting and recreating containers is extremely routine (even just changing environment: or ports: settings in the docker-compose.yml file would cause that to happen).
You should almost never install software in a running container. docker exec is an extremely useful debugging tool, but it shouldn't be the primary way you interact with your container. In both cases you're setting yourself up to lose work if you ever need to change a Docker-level setting or update the base image.
For this example, you can split the contents of that Makefile into two parts: the install-deps target (which installs Python packages and has no external dependencies) and the rest (which depends on a running database). You need to run the installation part at image-build time; since the build can't reach a database, the remainder needs to happen at container-startup time.
So in your image's Dockerfile, RUN the installation part:
RUN make -f automation/local/Makefile install-deps
You will also need an entrypoint script that does the rest of the first-time setup, then runs the main container command. This can look like:
#!/bin/sh
# One-time setup that needs the database to be available
make -f automation/local/Makefile run-migrations build-static-files create-superuser
# Then hand off to the main container command
exec "$@"
Then hook it up in your Dockerfile:
COPY entrypoint.sh .
ENTRYPOINT ["./entrypoint.sh"]
CMD python3 manage.py runserver 0.0.0.0:8000
(I've recently seen a lot of Dockerfiles that have just ENTRYPOINT ["python3"]. Splitting ENTRYPOINT and CMD this way isn't especially useful; just move the python3 interpreter command into CMD.)
This is my Fabric script that runs on the Jenkins server:
sudo('/home/myjenkins/killit.sh',pty=False)
sudo('/home/myjenkins/makedir.sh',pty=False)
sudo('/home/myjenkins/runit.sh',pty=False)
This kills the old server, creates a virtualenv, installs the requirements and restarts the server.
The problem is with the script that starts the server, runit.sh:
nohup /home/myjenkins/devserver/dev/bin/python /home/myjenkins/devserver/workspace/manage.py runserver --noreload 0:80 >> jenkins.out &
When Jenkins starts the server and I navigate to the homepage, it gives me a 404 Page Not Found; it says /static/index.html is not found, but the file exists. When I run sudo bash runit.sh myself and access the homepage, it works fine. For reference, here is makedir.sh:
mkdir -p /home/myjenkins/devserver
cp -rf /home/myjenkins/workspace /home/jenkins/devserver/
cp -f /home/myjenkins/dev_settings.py /home/myjenkins/devserver/workspace/mywebsite/settings.py
cd /home/myjenkins/devserver
virtualenv -p python3 dev
cd /home/myjenkins/devserver/workspace
../dev/bin/pip install -r requirements.txt
Please ask me for more details if you need them.
EDITED 9/2/18
When I start the script from the folder containing manage.py, the server is able to serve the files. But Jenkins was starting the script from the home folder, and if I also start the script from the home folder, the server is not able to find the files (see my comment for more details). It would be great if someone could explain why this happens even though I've specified the full path in the script:
nohup /home/myjenkins/devserver/dev/bin/python /home/myjenkins/devserver/workspace/manage.py runserver --noreload 0:80 >> jenkins.out &
Okay, I figured out the whole deal. My Django server was picking up the output of the npm build from the wrong folder.
In settings.py, the STATICFILES_DIRS variable was set as:
STATICFILES_DIRS = ('frontend/dist',)
instead of:-
STATICFILES_DIRS = (os.path.join(BASE_DIR,'frontend/dist'),)
Thus, when Jenkins ran the script from the home folder, Django's staticfiles finders looked for /home/myjenkins/frontend/dist instead of the frontend/dist directory inside the project.
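For completeness, BASE_DIR in a stock settings.py of that era is defined roughly like this, which is what makes the joined path independent of the directory the server is started from:

import os

# Django's default project template: settings.py lives one package below the project root
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# Resolves to <project root>/frontend/dist no matter what the current working directory is
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, 'frontend/dist'),
)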
I have a custom locale path in my settings:
PROJECT_ROOT = os.path.normpath(os.path.join(os.path.dirname(__file__), '..'))
LOCALE_PATHS = (
os.path.join(PROJECT_ROOT,'templates','v1','locale'),
)
but when I try to create the .po files, I get this error:
$ python manage.py makemessages --locale=ru
CommandError: This script should be run from the Django Git tree or your project or app tree. If you did indeed run it from the Git checkout or your project or application, maybe you are just missing the conf/locale (in the django tree) or locale (for project and application) directory? It is not created automatically, you have to create it by hand if you want to enable i18n for your project or application.
Why doesn't Django want to use LOCALE_PATHS?
The absolute paths are:
PROJECT_ROOT = '/home/creotiv/ENVS/project_env/project/project'
LOCALE_PATHS = ('/home/creotiv/ENVS/project_env/project/project/templates/v1/locale')
P.S. I also added a translation to the main app and Django doesn't see it either.
First things first: you need to give the correct path of manage.py to Python.
$ python /path/to/the/manage.py makemessages --locale=ru
Or go to the directory that contains manage.py. After that, run the command you wrote in your question.
$ python manage.py makemessages --locale=ru
Next: did you create the ./templates/v1/locale directories under the project home? Based on the error, you should check that the ./templates/v1/locale folder actually exists.
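A quick sanity check you can run from python manage.py shell (just a sketch):

import os
from django.conf import settings

# Print each configured locale path and whether it actually exists on disk
for path in settings.LOCALE_PATHS:
    print('%s exists: %s' % (path, os.path.isdir(path)))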
Also, you may need to add a LANGUAGES setting to your project's settings.py file:
from django.utils.translation import ugettext_lazy as _

LANGUAGES = (
    ('ru', _('Russian')),
    ('en', _('English')),  # (Optional)
)
Appendix: if you are developing in a virtual environment, do not forget to activate it first.
Maybe you have to go inside the app or project directory. Try this:
$ ls # should return also the manage.py
$ cd myproject
$ ls # should return also the settings.py
$ python ../manage.py makemessages --locale=ru
I just updated to Django 1.7 and the problem is gone.
Synopsis of problem:
I want to use pytest-django to test my Django scripts, yet py.test complains that it cannot find DJANGO_SETTINGS_MODULE. I have followed the pytest-django documentation for setting DJANGO_SETTINGS_MODULE, but I am not getting the expected behavior from py.test.
I would like help troubleshooting this problem. Thank you in advance.
References: the pytest-django documentation.
Setting DJANGO_SETTINGS_MODULE in the virtualenv postactivate script:
In a new shell, $DJANGO_SETTINGS_MODULE is not set.
I expect $DJANGO_SETTINGS_MODULE to be empty, and it is.
In: echo $DJANGO_SETTINGS_MODULE
Out:
Contents of .virtualenv/browsing/bin/postactivate
As documented in the pytest-django docs, one may set $DJANGO_SETTINGS_MODULE by exporting it in the virtualenv postactivate script.
export DJANGO_SETTINGS_MODULE=automated_browsing.settings
At the terminal:
As expected, $DJANGO_SETTINGS_MODULE is defined.
# activate the virtualenv
In: workon browsing
In: echo $DJANGO_SETTINGS_MODULE
Out: automated_browsing.settings
As expected, the settings module is set as shown when the server runs:
# cd into django project
In: cd ../my_django/automated_browsing
# see if server runs
In: python manage.py runserver
Out: # output
Validating models...
0 errors found
September 02, 2014 - 10:45:35
Django version 1.6.6, using settings 'automated_browsing.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
In: ^C
Contrary to expectations, py.test cannot find it.
In: echo $DJANGO_SETTINGS_MODULE
Out: automated_browsing.settings
In: py.test tests/test_browser.py
Out: …
_pytest.config.UsageError: Could not import settings
'automated_browsing.settings'
(Is it on sys.path? Is there an import error in the settings file?)
No module named automated_browsing.settings
installed packages
Django==1.6.6
EasyProcess==0.1.6
PyVirtualDisplay==0.1.5
South==1.0
argparse==1.2.1
ipython==2.2.0
lxml==3.3.6
psycopg2==2.5.4
py==1.4.23
pytest-django==2.6.2
pytest==2.6.1
selenium==2.42.1
test-pkg==0.0
wsgiref==0.1.2
I had the same problem today. In project/settings/ you have to create an __init__.py file; that way Django recognizes the folder as a package.
In my case I had typed __inti__.py, which was wrong. I corrected __inti__.py --> __init__.py and it works fine.
Hope it helps.
V.
Inspired by this answer, I have solved my problem.
According to the pytest-django documentation on managing the Python path:
By default, pytest-django tries to find Django projects by
automatically looking for the project’s manage.py file and adding its
directory to the Python path.
I assumed that executing py.test tests/test_browser.py inside of the django project containing manage.py would handle the PYTHONPATH. This was an incorrect assumption in my situation.
I added the following to .virtualenv/$PROJECT/bin/postactivate:
export PYTHONPATH=$PYTHONPATH:$HOME/development/my_python/my_django/automated_browsing
And now py.test works as expected.
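An alternative that does not depend on the virtualenv postactivate hook (just a sketch; it assumes the same layout, with conftest.py sitting next to manage.py) is to set the path and settings module from conftest.py:

# conftest.py, placed next to manage.py
import os
import sys

# Make sure the directory containing manage.py (and the automated_browsing
# package) is importable, then point pytest-django at the settings module.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'automated_browsing.settings')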
I tried two ways to get code completion working: one is OK, the other fails.
The OK one works like this:
$> cd myDjangoProject/
myDjangoProject $> export PYTHONPATH="."
myDjangoProject $> DJANGO_SETTINGS_MODULE=settings vim urls.py
Then Ctrl-x Ctrl-o works well. But this method forces me to repeat the steps above every time I edit a file in the project.
So an idea came to me: why not create a script to do this automatically?
Following a blog post about Django code completion in Vim, which describes exactly what I have in mind, I ran into a problem during my configuration.
The failing one is below.
Create a script in /usr/bin named vim_wrapper:
#!/bin/bash
export PYTHONPATH="${PYTHONPATH}:/path/to/myDjangoProject/"
DJANGO_SETTINGS_MODULE="/path/to/myDjangoProject/settings" vim "$@"
Add an alias in ~/.bashrc:
alias vi="vim_wrapper"
Restart the terminal session, run vi /path/to/myDjangoProject/urls.py, and test with :python from django import db. An error occurs saying:
ImportError: Could not import settings 'myDjangoProject/settings' (Is it on sys.path?): Import by filename is not supported.
I don't know how to solve this. Thanks for any help.
Try just setting DJANGO_SETTINGS_MODULE=settings, as you did when it worked. My hope is that your setting of PYTHONPATH will be sufficient.
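The underlying cause of the error is that DJANGO_SETTINGS_MODULE must be a dotted module path importable from PYTHONPATH, not a filesystem path. Roughly speaking, Django does the equivalent of this (a simplified sketch):

import importlib
import os

# "settings" (with /path/to/myDjangoProject on PYTHONPATH) can be imported;
# "/path/to/myDjangoProject/settings" cannot, hence "Import by filename is not supported".
settings_module = os.environ['DJANGO_SETTINGS_MODULE']
settings = importlib.import_module(settings_module)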
Create the vim_django executable script in /usr/bin
Content of the vim_django script:
#!/bin/bash
PROJECT=`python -c "import os; print os.getcwd().partition('Solutions')[2].split(os.sep)[1]"`
export PYTHONPATH="${PYTHONPATH}:/path/to/django-projects-parent/"
DJANGO_SETTINGS_MODULE=$PROJECT.settings vim "$@"
Type vim_django urls.py (or another file) to edit it in your Django project, and use Ctrl-x Ctrl-o for code completion.
NOTE: In the PROJECT line you may notice 'Solutions'; that is the parent directory of all my Django projects.