I have a Django application running on Heroku. On the initial deployment, I migrated the database schema manually using heroku run.
The next time I needed to push migrations to the app, the release went off without a complaint.
However, when I went to the page to see it live, I got a ProgrammingError: the new column didn't exist. The migrations had never been run.
Here's my Procfile:
web: gunicorn APP_NAME.wsgi --log-file -
release: python manage.py migrate
release: python manage.py collectstatic --noinput
worker: celery worker -A APP_NAME -B -E -l info
The collectstatic release command runs successfully, but the migrate command seems to be ignored or overlooked. When I ran the migrations manually, they applied without error, and there is an empty __init__.py file in the migrations folder.
If anyone knows what could possibly be hindering the migrate release from running, that would be awesome.
Okay, so I've figured it out. Although Heroku's documentation seems to imply that a Procfile can contain more than one release entry, this is not the case.
Only the last release entry in the Procfile takes precedence.
This means that in order to run multiple commands in the release phase, you have to use a shell script.
Now, my Procfile looks like this:
web: gunicorn APP_NAME.wsgi --log-file -
release: ./release.sh
worker: celery worker -A APP_NAME -B -E -l info
And I have a release.sh script that looks like this:
#!/bin/bash
set -e  # stop and fail the release if any command fails
python manage.py migrate
python manage.py collectstatic --no-input
MAKE SURE TO MAKE YOUR RELEASE.SH SCRIPT EXECUTABLE:
Running chmod u+x release.sh in terminal prior to committing should do the trick.
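If you commit from Windows, where the executable bit may not be preserved, you can also record the permission directly in git (this is plain git, nothing Heroku-specific):
git update-index --chmod=+x release.sh
git commit -m "Make release.sh executable"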
As I cannot comment on @rchurch4's answer, here it is: if you just have a few commands to run at release time, you can use the following in your Procfile:
release: command1 && command2 && command3 [etc.]
For instance:
release: python manage.py migrate && python manage.py loaddata foo && python manage.py your_custom_management_command
Related
I am a beginner with Docker and Django. What I have here is a Django app which I am trying to dockerize and run. My requirements.txt has only django and gunicorn as packages.
I am getting the output below in the terminal after building and running the Docker image:
Watching for file changes with StatReloader
Performing system checks...
System check identified no issues (0 silenced).
August 26, 2021 - 06:57:22
Django version 3.2.6, using settings 'myproject.settings'
Starting development server at http://0.0.0.0:8000/
Quit the server with CONTROL-C.
Below is my Dockerfile:
FROM python:3.6-slim
ENV PYTHONUNBUFFERED=1
RUN mkdir /Django
WORKDIR /Django
ADD . /Django
RUN pip install -r requirements.txt
EXPOSE 8000
CMD python manage.py runserver 0.0.0.0:8000
The commands I am using are:
docker build . -t myproj
docker run -p 8000:8000 myproj
I have tried adding ALLOWED_HOSTS = ['127.0.0.1'] in settings.py, but I still get "This site can't be reached. 127.0.0.1 refused to connect." and am not able to see the "Congratulations" screen.
Please help me out with this.
P.S.: I am using a Windows machine.
Updates
I tried running the below line and got the following output:
docker exec 8e6c4e4a58db curl 127.0.0.1:8000
OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"curl\": executable file not found in $PATH": unknown
Without your settings.py, this is hard to figure out. You say you have ALLOWED_HOSTS = ['127.0.0.1'] in there, and that should definitely not be necessary. It might actually be what's blocking your host, since requests from your host may arrive under a different host name or IP address.
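If you just want to rule that setting out while debugging locally, one option is to leave the development server wide open while DEBUG is on (a sketch for local testing only, not production advice):
# settings.py - development only
DEBUG = True
ALLOWED_HOSTS = ['*']  # accept any Host header while debugging locally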
I've made the following Dockerfile that creates a starter project and runs it
FROM python:latest
RUN python -m pip install Django
RUN django-admin startproject mysite
WORKDIR /mysite
EXPOSE 8000
CMD python manage.py runserver 0.0.0.0:8000
If I build it and run it with
docker build -t mysite .
docker run -d -p 8000:8000 mysite
I can connect to http://localhost:8000/ on my machine and get the default page (I'm on Windows too).
I hope that helps you to locate your issue.
PS: Your curl command fails because curl isn't installed in your image, so it failing has nothing to do with your issue.
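If you do want to test from inside the container without installing curl, the Python standard library is already there; for example, reusing the container ID from above:
docker exec 8e6c4e4a58db python -c "import urllib.request; print(urllib.request.urlopen('http://127.0.0.1:8000').status)"
A 200 printed there tells you the dev server is reachable inside the container, which narrows the problem down to the port mapping or the host side.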
I am working on a Django project. The Procfile is the file that Heroku reads on every deployment.
Currently it is configured this way and it works all fine.
web: python manage.py collectstatic --no-input; gunicorn project_folder.wsgi --log-file - --log-level debug
This means it runs the collectstatic command every time I make changes to my project and deploy it to the server. Can the same approach be used to run a background task? This background task needs to start automatically after my project is deployed.
python manage.py process_tasks is what I type locally to trigger this background_task processing. So, would something like the following invoke it the same way python manage.py collectstatic is invoked (that is, collecting all static files from my static folder)?
web: python manage.py collectstatic --no-input; python manage.py process_tasks; gunicorn project_folder.wsgi --log-file - --log-level debug
Did you try the release process type? (as mentioned here)
web: python manage.py collectstatic --no-input; gunicorn project_folder.wsgi --log-file - --log-level debug
release: python manage.py process_tasks
Heroku runs the release command after a successful build and before the new release is deployed.
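To confirm the command actually ran, the release phase output can be checked from the Heroku CLI; assuming a reasonably recent CLI, something like:
heroku releases --app appname
heroku releases:output --app appname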
I've been having a hard time getting a successful deployment of my Django web app to AWS Elastic Beanstalk. I am able to deploy my app from the EB CLI on my local machine with no problem at all, until I add a config file with a list of container_commands inside a .ebextensions folder.
Here are the contents of my config file:
container_commands:
  01_makeAppMigrations:
    command: "django-admin.py makemigrations"
    leader_only: true
  02_migrateApps:
    command: "django-admin.py migrate"
    leader_only: true
  03_create_superuser_for_django_admin:
    command: "django-admin.py createfirstsuperuser"
    leader_only: true
  04_collectstatic:
    command: "django-admin.py collectstatic --noinput"
I've dug deep into the logs and found these messages in the cfn-init-cmd.log to be the most helpful:
2020-06-18 04:01:49,965 P18083 [INFO] Config postbuild_0_DjangoApp_smt_prod
2020-06-18 04:01:49,991 P18083 [INFO] ============================================================
2020-06-18 04:01:49,991 P18083 [INFO] Test for Command 01_makeAppMigrations
2020-06-18 04:01:49,995 P18083 [INFO] Completed successfully.
2020-06-18 04:01:49,995 P18083 [INFO] ============================================================
2020-06-18 04:01:49,995 P18083 [INFO] Command 01_makeAppMigrations
2020-06-18 04:01:49,998 P18083 [INFO] -----------------------Command Output-----------------------
2020-06-18 04:01:49,998 P18083 [INFO] /bin/sh: django-admin.py: command not found
2020-06-18 04:01:49,998 P18083 [INFO] ------------------------------------------------------------
2020-06-18 04:01:49,998 P18083 [ERROR] Exited with error code 127
I'm not sure why it can't find that command in this latest environment.
I've deployed this same app with this same config file to a prior beanstalk environment with no issues at all. The only difference now is that this new environment was launched within a VPC and is using the latest recommended platform.
Old Beanstalk environment platform: Python 3.6 running on 64bit Amazon Linux/2.9.3
New Beanstalk environment platform: Python 3.7 running on 64bit Amazon Linux 2/3.0.2
I've run into other issues during this migration related to syntax changes on this latest platform. I'm hoping this issue is also just a simple syntax issue, but I've dug far and wide with no luck...
If someone could point out something obvious that I'm missing here, I would greatly appreciate it!
Please let me know if I can provide some additional info!
Finally got to the bottom of it all, after deep-diving through the AWS docs and forums...
Essentially, there were a lot of changes that came along with Beanstalk moving from Amazon Linux to Amazon Linux 2. A lot of these changes are vaguely mentioned here.
One major difference for the Python platform, as mentioned in the link above, is that "the path to the application's directory on Amazon EC2 instances of your environment is /var/app/current. It was /opt/python/current/app on Amazon Linux AMI platforms." This is crucial when you're writing the Django migrate scripts, as I'll explain in more detail below, or when you eb ssh into the Beanstalk instance and navigate around yourself.
Another major difference is the introduction of Platform hooks, which is mentioned in this wonderful article here. According to this article, "Platform hooks are a set of directories inside the application bundle that you can populate with scripts." Essentially these scripts will now handle what the previous container_commands handled in the .ebextensions config files. Here is the directory structure of these Platform hooks:
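At a high level (per the AWS docs) it looks like this, with each directory holding executable scripts that run at the corresponding stage of the deployment:
.platform/
└── hooks/
    ├── prebuild/
    ├── predeploy/
    └── postdeploy/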
Knowing this, and walking through this forum here, where wonderful community members went through the trouble of filling in the gaps in Amazon's docs, I was able to successfully deploy with the following files set up:
(Please note that "MDGOnline" is the name of my Django app)
.ebextensions\01_packages.config:
packages:
  yum:
    git: []
    postgresql-devel: []
    libjpeg-turbo-devel: []
.ebextensions\django.config:
container_commands:
  01_sh_executable:
    command: find .platform/hooks/ -type f -iname "*.sh" -exec chmod +x {} \;
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: MDGOnline.settings
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /static: static
    /static_files: static_files
  aws:elasticbeanstalk:container:python:
    WSGIPath: MDGOnline.wsgi:application
.platform\hooks\predeploy\01_migrations.sh:
#!/bin/bash
source /var/app/venv/*/bin/activate
cd /var/app/staging
python manage.py makemigrations
python manage.py migrate
python manage.py createfirstsuperuser
python manage.py collectstatic --noinput
Please note that the .sh scripts need Unix (LF) line endings. I ran into an error for a while where the deployment would fail and put this message in the logs: .platform\hooks\predeploy\01_migrations.sh failed with error fork/exec .platform\hooks\predeploy\01_migrations.sh: no such file or directory.
It turns out this was because I had created the script in my Windows dev environment. My solution was to create it in a Linux environment and copy it over to my dev directory on Windows. There are also tools to convert DOS line endings to Unix; dos2unix looks promising!
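For example, with dos2unix installed, converting the hook in place looks something like this:
dos2unix .platform/hooks/predeploy/01_migrations.sh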
I really wish AWS could document this migration better, but I hope this answer can save someone the countless hours I spent getting this deployment to succeed.
Please feel free to ask me for clarification on any of the above!
EDIT: I've added a container_command to my config file above, as it was brought to my attention that another user also encountered a "permission denied" error for the platform hook when deploying. The "01_sh_executable" command chmods all of the .sh scripts within the app's hooks directory so that Elastic Beanstalk has permission to execute them during deployment. I found this container_command solution in a forum thread.
This might work
.ebextensions/django.config
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: mysite.wsgi:application
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /static: static
packages:
  yum:
    python3-devel: []
    mariadb-devel: []
container_commands:
  01_collectstatic:
    command: "source /var/app/venv/staging-LQM1lest/bin/activate && python manage.py collectstatic --noinput"
  02_migrate:
    command: "source /var/app/venv/staging-LQM1lest/bin/activate && python manage.py migrate --noinput"
    leader_only: true
This works for me.
container_commands:
  01_migrate:
    command: "source /var/app/venv/*/bin/activate && python /var/app/staging/manage.py migrate --noinput"
    leader_only: true
In my Procfile I have the following:
worker: cd appname && celery -A appname worker -l info --app=appname.celery_setup:app
However, when my app submits a task, it never runs. I think the celery worker is at least sort of working, because
heroku logs --app appname
every so often gives me one of these:
2016-07-22T07:53:21+00:00 app[heroku-redis]: source=REDIS sample#active-connections=14 sample#load-avg-1m=0.03 sample#load-avg-5m=0.09 sample#load-avg-15m=0.085 sample#read-iops=0 sample#write-iops=0 sample#memory-total=15664884.0kB sample#memory-free=13458244.0kB sample#memory-cached=187136kB sample#memory-redis=566800bytes sample#hit-rate=0.17778 sample#evicted-keys=0
Also, when I open up bash by running
heroku run bash --app appname
and then type in
cd appname && celery -A appname worker -l info --app=appname.celery_setup:app
It immediately tells me the task has been received and then executes it. I would like this to happen without having to log in and run the command manually - is that possible? Do I need a paid Heroku account to do that?
I figured it out. Turns out you also have to do
heroku ps:scale worker=1 --app appname
Or else you won't actually be running a worker.
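You can confirm the worker dyno is actually up afterwards, for example with:
heroku ps --app appname
which should list a running worker.1 process alongside web.1.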
I created a Django app on OpenShift successfully, but I'm not able to run syncdb using the following deploy hook.
#!/bin/bash
source $OPENSHIFT_HOMEDIR/python/virtenv/bin/activate
cd $OPENSHIFT_REPO_DIR/wsgi/$OPENSHIFT_APP_NAME
python manage.py syncdb --noinput
What could be wrong? Please help!
I think it's simply because you forgot to make the file executable with chmod +x filename.
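For example, assuming the hook lives at the usual OpenShift location, you would make it executable and commit that permission before pushing:
chmod +x .openshift/action_hooks/deploy
git add .openshift/action_hooks/deploy
git commit -m "Make deploy hook executable"
git push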