I've been having a hard time getting a successful deployment of my Django web app to AWS Elastic Beanstalk. I am able to deploy my app from the EB CLI on my local machine with no problem at all, until I add a config file with a list of container_commands inside a .ebextensions folder.
Here are the contents of my config file:
container_commands:
  01_makeAppMigrations:
    command: "django-admin.py makemigrations"
    leader_only: true
  02_migrateApps:
    command: "django-admin.py migrate"
    leader_only: true
  03_create_superuser_for_django_admin:
    command: "django-admin.py createfirstsuperuser"
    leader_only: true
  04_collectstatic:
    command: "django-admin.py collectstatic --noinput"
I've dug deep into the logs and found these messages in the cfn-init-cmd.log to be the most helpful:
2020-06-18 04:01:49,965 P18083 [INFO] Config postbuild_0_DjangoApp_smt_prod
2020-06-18 04:01:49,991 P18083 [INFO] ============================================================
2020-06-18 04:01:49,991 P18083 [INFO] Test for Command 01_makeAppMigrations
2020-06-18 04:01:49,995 P18083 [INFO] Completed successfully.
2020-06-18 04:01:49,995 P18083 [INFO] ============================================================
2020-06-18 04:01:49,995 P18083 [INFO] Command 01_makeAppMigrations
2020-06-18 04:01:49,998 P18083 [INFO] -----------------------Command Output-----------------------
2020-06-18 04:01:49,998 P18083 [INFO] /bin/sh: django-admin.py: command not found
2020-06-18 04:01:49,998 P18083 [INFO] ------------------------------------------------------------
2020-06-18 04:01:49,998 P18083 [ERROR] Exited with error code 127
I'm not sure why it can't find that command in this latest environment.
I've deployed this same app with this same config file to a prior beanstalk environment with no issues at all. The only difference now is that this new environment was launched within a VPC and is using the latest recommended platform.
Old Beanstalk environment platform: Python 3.6 running on 64bit Amazon Linux/2.9.3
New Beanstalk environment platform: Python 3.7 running on 64bit Amazon Linux 2/3.0.2
I've run into other issues during this migration related to syntax updates with this latest platform. I'm hoping this issue is also just a simple syntax issue, but I've dug far and wide with no luck...
If someone could point out something obvious that I'm missing here, I would greatly appreciate it!
Please let me know if I can provide some additional info!
Finally got to the bottom of it all, after deep-diving through the AWS docs and forums...
Essentially, there were a lot of changes that came along with Beanstalk moving from Amazon Linux to Amazon Linux 2. A lot of these changes are vaguely mentioned here.
One major difference for the Python platform as mentioned in the link above is that "the path to the application's directory on Amazon EC2 instances of your environment is /var/app/current. It was /opt/python/current/app on Amazon Linux AMI platforms." This is crucial for when you're trying to create the Django migrate scripts as I'll explain further in detail below, or when you eb ssh into the Beanstalk instance and navigate it yourself.
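For example, this is what you'd see if you connect to an instance on each platform (a quick sketch; paths per the quote above):

eb ssh
# On Amazon Linux 2 platforms the app lives here:
cd /var/app/current
# On the old Amazon Linux AMI platforms it was:
# cd /opt/python/current/app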
Another major difference is the introduction of Platform hooks, which is mentioned in this wonderful article here. According to this article, "Platform hooks are a set of directories inside the application bundle that you can populate with scripts." Essentially these scripts will now handle what the previous container_commands handled in the .ebextensions config files. Here is the directory structure of these Platform hooks:
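Roughly, per the AWS docs, that structure looks like this (scripts in each directory run in alphabetical order during the corresponding phase of the deployment):

.platform/
    hooks/
        prebuild/
        predeploy/
        postdeploy/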
Knowing this, and walking through this forum here, where wonderful community members went through the trouble of filling in the gaps in Amazon's docs, I was able to successfully deploy with the following file setup:
(Please note that "MDGOnline" is the name of my Django app)
.ebextensions\01_packages.config:
packages:
  yum:
    git: []
    postgresql-devel: []
    libjpeg-turbo-devel: []
.ebextensions\django.config:
container_commands:
  01_sh_executable:
    command: find .platform/hooks/ -type f -iname "*.sh" -exec chmod +x {} \;
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: MDGOnline.settings
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /static: static
    /static_files: static_files
  aws:elasticbeanstalk:container:python:
    WSGIPath: MDGOnline.wsgi:application
.platform\hooks\predeploy\01_migrations.sh:
#!/bin/bash
# Activate the platform's virtualenv (the glob matches the generated directory name)
source /var/app/venv/*/bin/activate
# During predeploy, the new application version is staged here
cd /var/app/staging
python manage.py makemigrations
python manage.py migrate
python manage.py createfirstsuperuser
python manage.py collectstatic --noinput
Please note that the .sh scripts need Unix (LF) line endings. I ran into an error for a while where the deployment would fail and provide this message in the logs: .platform\hooks\predeploy\01_migrations.sh failed with error fork/exec .platform\hooks\predeploy\01_migrations.sh: no such file or directory .
It turned out this was because I created the script in my Windows dev environment, which saved it with DOS (CRLF) line endings. My solution was to create it on the Linux environment and copy it over to my dev environment directory within Windows. There are also tools that convert DOS line endings to Unix; dos2unix looks promising!
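For example, a minimal sketch (assuming dos2unix is installed; it converts files in place):

dos2unix .platform/hooks/predeploy/01_migrations.sh
# or convert every hook script in one go:
find .platform/hooks/ -type f -iname "*.sh" -exec dos2unix {} \;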
I really wish AWS could document this migration better, but I hope this answer can save someone the countless hours I spent getting this deployment to succeed.
Please feel free to ask me for clarification on any of the above!
EDIT: I've added a container_command to my config file above, as it was brought to my attention that another user also encountered a "permission denied" error for the platform hook when deploying. This "01_sh_executable" command chmods all of the .sh scripts within the hooks directory of the app, so that Elastic Beanstalk has the proper permission to execute them during the deployment process. I found this container command solution in a forum thread.
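If you deploy from a git repository, an alternative I've seen suggested (not from that thread; just a common git technique) is to record the executable bit in the git index itself, so the bundle ships with the permission already set:

git update-index --chmod=+x .platform/hooks/predeploy/01_migrations.sh
git commit -m "Make platform hooks executable"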
This might work
.ebextensions/django.config
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: mysite.wsgi:application
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /static: static
packages:
  yum:
    python3-devel: []
    mariadb-devel: []
container_commands:
  01_collectstatic:
    command: "source /var/app/venv/staging-LQM1lest/bin/activate && python manage.py collectstatic --noinput"
  02_migrate:
    command: "source /var/app/venv/staging-LQM1lest/bin/activate && python manage.py migrate --noinput"
    leader_only: true
This works for me.
container_commands:
  01_migrate:
    command: "source /var/app/venv/*/bin/activate && python /var/app/staging/manage.py migrate --noinput"
    leader_only: true
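Note that the /var/app/venv/*/ wildcard avoids hardcoding the generated venv directory name (staging-LQM1lest in the answer above), so it keeps working if the platform uses a different name.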
I am a beginner with Docker and Django. What I have here is a Django app that I am trying to dockerize and run. My requirements.txt has only django and gunicorn as the packages.
I am getting the below in terminal after building and running the docker image:
Watching for file changes with StatReloader
Performing system checks...
System check identified no issues (0 silenced).
August 26, 2021 - 06:57:22
Django version 3.2.6, using settings 'myproject.settings'
Starting development server at http://0.0.0.0:8000/
Quit the server with CONTROL-C.
Below is my Dockerfile:
FROM python:3.6-slim
ENV PYTHONUNBUFFERED=1
RUN mkdir /Django
WORKDIR /Django
ADD . /Django
RUN pip install -r requirements.txt
EXPOSE 8000
CMD python manage.py runserver 0.0.0.0:8000
The commands I am using are:
docker build . -t myproj
docker run -p 8000:8000 myproj
I have tried adding ALLOWED_HOSTS = ['127.0.0.1'] in settings.py, but I am still getting "The site can't be reached. 127.0.0.1 refused to connect."
I am not able to see the "Congratulations" screen.
Please help me out with this.
P.S.: I am using a Windows machine.
Updates
I tried running the below line and got the following output:
docker exec 8e6c4e4a58db curl 127.0.0.1:8000
OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"curl\": executable file not found in $PATH": unknown
Without your settings.py, this can be hard to figure out. You say you have ALLOWED_HOSTS = ['127.0.0.1'] in there; that should definitely not be necessary, and it might actually be what's blocking you, since requests from your host machine arrive with a different Host header (e.g. localhost).
I've made the following Dockerfile that creates a starter project and runs it
FROM python:latest
RUN python -m pip install Django
RUN django-admin startproject mysite
WORKDIR /mysite
EXPOSE 8000
CMD python manage.py runserver 0.0.0.0:8000
If I build it and run it with
docker build -t mysite .
docker run -d -p 8000:8000 mysite
I can connect to http://localhost:8000/ on my machine and get the default page (I'm on Windows too).
I hope that helps you to locate your issue.
P.S.: Your curl command fails because curl isn't installed in your image, so that failure has nothing to do with your issue.
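If you want to probe the server from inside the container without curl, the slim Python image does include Python itself, so something like this (using the container ID from your question) should show whether the dev server responds:

docker exec 8e6c4e4a58db python -c "import urllib.request; print(urllib.request.urlopen('http://127.0.0.1:8000').status)"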
I have a Django application running in Heroku. On the initial deployment, I manually migrated the database schema using heroku run.
The next time I needed to push migrations to the app, the release went off without a complaint.
However, when I went to the page to see it live, I was returned a programming error: the new column didn't exist. The migrations had never been run.
Here's my Procfile:
web: gunicorn APP_NAME.wsgi --log-file -
release: python manage.py migrate
release: python manage.py collectstatic --noinput
worker: celery worker -A APP_NAME -B -E -l info
The collectstatic release is run successfully, but the migrate release is seemingly ignored or overlooked. When I manually migrated, they migrated without error. There is an empty __init__.py file in the migrations folder.
If anyone knows what could possibly be hindering the migrate release from running, that would be awesome.
Okay, so I've figured it out. Although in its documentation Heroku seems to imply that there can be more than one release tag in a Procfile, this is untrue.
The last release tag in the Procfile takes precedence.
This means that in order to run multiple commands in the release stage, you have to use a shell script.
Now, my Procfile looks like this:
web: gunicorn APP_NAME.wsgi --log-file -
release: ./release.sh
worker: celery worker -A APP_NAME -B -E -l info
And I have a release.sh script that looks like this:
#!/bin/bash
python manage.py migrate
python manage.py collectstatic --no-input
MAKE SURE TO MAKE YOUR RELEASE.SH SCRIPT EXECUTABLE:
Running chmod u+x release.sh in terminal prior to committing should do the trick.
As I cannot comment on @rchurch4's answer, here it is: if you just have a few commands to run at release time, you can use the following in your Procfile:
release: command1 && command2 && command3 [etc.]
for instance
release: python manage.py migrate && python manage.py loaddata foo && python manage.py your_custom_management_command
I have deployed Django using gunicorn and nginx. The django project is located in a virtual environment. Everything is working perfectly when I run -
gunicorn mydjangoproject.wsgi -c gunicorn_config.py
I am running the above command inside my Django project folder containing manage.py with the virtual environment active.
However, now I want to close the server terminal and have gunicorn run automatically. For this I am using Supervisor. I have installed Supervisor using apt-get and created a gunicorn.conf file in Supervisor's conf.d.
But when I run supervisorctl start gunicorn, I get a fatal error-
gunicorn: ERROR (abnormal termination)
So I checked the log file, and it says-
supervisor:couldn't exec root/ervirtualenvpy2/bin/gunicorn: ENOENT
child process was not spawned
My configuration file for supervisor's gunicorn.conf looks like this-
[program:gunicorn]
command = root/ervirtualenvpy2/bin/gunicorn myproject.wsgi -c root/path/to/the/gunicorn_conf.py/file
directory = root/ervirtualenvpy2/path/to/myproject/
user=root
autorestart=true
Going by what you said and your config, everything seems right except that you have specified a relative path rather than an absolute path:
see gunicorn docs
Instead it should be:
[program:gunicorn]
command = /root/ervirtualenvpy2/bin/gunicorn myproject.wsgi -c /root/path/to/the/gunicorn_conf.py/file
directory = /root/ervirtualenvpy2/path/to/myproject
user=root
autorestart=true
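After correcting the paths, reload Supervisor so it picks up the change (a minimal sketch; prepend sudo if your setup requires it):

supervisorctl reread
supervisorctl update
supervisorctl start gunicorn
supervisorctl status gunicorn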
Also check the interpreter path inside your env/bin/gunicorn file.
In my case, I had moved my env directory to another location, so make sure the shebang on its first line still points to the right Python:
Wrong path: #!/home/ubuntu/nikhil_project/env/bin/python
Correct path: #!/home/ubuntu/env/bin/python
I'm trying to deploy my first app using Python/Flask on Heroku. I don't really know what I'm doing and am just following the tutorial at https://devcenter.heroku.com/articles/python#prerequisites. When I type the command heroku ps:scale web=1 I'm getting the error message "No such type as web". My Procfile says web: python scrabble_cheater.py, which I believe is correct. Here's the log of my terminal:
(venv)jason-olsens-macbook-pro:scrabble paulnichols$ heroku status
=== Heroku Status
Development: No known issues at this time.
Production: No known issues at this time.
(venv)jason-olsens-macbook-pro:scrabble paulnichols$ heroku config
=== enigmatic-mountain-1395 Config Vars
LANG: en_US.UTF-8
LD_LIBRARY_PATH: /app/.heroku/vendor/lib
LIBRARY_PATH: /app/.heroku/vendor/lib
PATH: /app/.heroku/venv/bin:/bin:/usr/local/bin:/usr/bin
PYTHONHASHSEED: random
PYTHONHOME: /app/.heroku/venv/
PYTHONPATH: /app/
PYTHONUNBUFFERED: true
(venv)jason-olsens-macbook-pro:scrabble paulnichols$ heroku ps
(venv)jason-olsens-macbook-pro:scrabble paulnichols$ git push heroku master
Warning: Permanently added the RSA host key for IP address '50.19.85.154' to the list of known hosts.
Everything up-to-date
(venv)jason-olsens-macbook-pro:scrabble paulnichols$ heroku ps:scale web=1
Scaling web processes... failed
! No such type as web
(venv)jason-olsens-macbook-pro:scrabble paulnichols$
Any help is greatly appreciated!
My Procfile was saved with a .txt extension by mistake; once I removed the extension, it worked.
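In other words (assuming the file ended up named Procfile.txt):

mv Procfile.txt Procfile
git add -A
git commit -m "Rename Procfile"
git push heroku master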
I am using Heroku with python and Flask. My app was working fine until I updated a few lines in my python application file. The app runs fine locally, but I now have the following error when I try to access my app:
"An error occurred in the application and your page could not be served. Please try again in a few moments.
If you are the application owner, check your logs for details."
My logs look something like this:
2012-10-03T17:40:26+00:00 heroku[web.1]: Process exited with status 1
2012-10-03T17:40:26+00:00 heroku[web.1]: State changed from starting to crashed
2012-10-03T17:51:25+00:00 heroku[web.1]: State changed from crashed to starting
2012-10-03T17:51:26+00:00 heroku[web.1]: Starting process with command `python presentation.py`
2012-10-03T17:51:26+00:00 app[web.1]: ImportError: No module named site
I am also no longer able to run python through heroku:
Cinnas-MacBook-Pro:infinite-fortress-4866 cinna$ heroku run python
Running `python` attached to terminal... up, run.1
ImportError: No module named site
The next thing I have tried to do is check my environment variables:
Cinnas-MacBook-Pro:infinite-fortress-4866 cinna$ heroku config
=== infinite-fortress-4866 Config Vars
LANG: en_US.UTF-8
LD_LIBRARY_PATH: /app/.heroku/vendor/lib
LIBRARY_PATH: /app/.heroku/vendor/lib
PATH: /app/.heroku/venv/bin:/bin:/usr/local/bin:/usr/bin
PYTHONHASHSEED: random
PYTHONHOME: /app/.heroku/venv/
PYTHONPATH: /app/
PYTHONUNBUFFERED: true
However, when I try to look inside the library directories, I get something like this:
Cinnas-MacBook-Pro:infinite-fortress-4866 cinna$ heroku run ls /app/.heroku/vendor/lib
Running `ls /app/.heroku/vendor/lib` attached to terminal... up, run.1
ls: cannot access /app/.heroku/vendor/lib: No such file or directory
I am not sure where to proceed at this moment. I miss my app, please help!
Additional information:
The problems all started when I added the following lines to my app.py code:
@app.route('/my_fb_graph', methods=['GET', 'POST'])
def my_fb_graph():
    return render_template('my_fb_graph.html')
When I pushed the code, the app no longer worked. I then removed these lines of code, pushed again, and still got the same errors. The next thing I did was to completely remove the app.py file and try a small test app, which still did not work.
The root of the problem seems to be the error:
2012-10-03T17:51:26+00:00 app[web.1]: ImportError: No module named site
I was able to fix the problem, but I still don't know why it occurred in the first place!
After a lot of experimentation, I ended up setting up a completely new app on Heroku. I checked the environment variables in the new app and got the following:
Cinnas-MacBook-Pro:thawing-temple-4323 cinna$ heroku config
=== thawing-temple-4323 Config Vars
FACEBOOK_APP_ID: ***
FACEBOOK_SECRET: ***
PATH: bin:/usr/local/bin:/usr/bin:/bin
PYTHONUNBUFFERED: true
Checking my original app (the broken one), I realized that new environment variables were somehow added in my last push as indicated by my logs:
2012-10-04T04:20:04+00:00 heroku[api]: Add PYTHONUNBUFFERED, PYTHONPATH, PYTHONHOME, LANG, LD_LIBRARY_PATH, PATH, PYTHONHASHSEED, LIBRARY_PATH config by ***#***
and by checking my environment variables:
Cinnas-MacBook-Pro:infinite-fortress-4866 cinna$ heroku config
=== infinite-fortress-4866 Config Vars
LANG: en_US.UTF-8
LD_LIBRARY_PATH: /app/.heroku/vendor/lib
LIBRARY_PATH: /app/.heroku/vendor/lib
PATH: /app/.heroku/venv/bin:/bin:/usr/local/bin:/usr/bin
PYTHONHASHSEED: random
PYTHONHOME: /app/.heroku/venv/
PYTHONPATH: /app/
PYTHONUNBUFFERED: true
I removed these new variables with the command:
heroku config:remove PYTHONPATH PYTHONHOME LANG LD_LIBRARY_PATH PYTHONHASHSEED LIBRARY_PATH
and my app started to work again. I've been pushing more code, and this problem has not occurred again.
I am still really curious why/how these variables were added in the first place since all I did was do a git push.
I experienced a very similar problem yesterday (6th Dec. 2012). Out of the blue, every python invocation died with 'ImportError: No module named site'. Heroku support got back to me today and they say it's fixed on their end, so the following workaround shouldn't be required. I'll leave this here in case it helps someone else diagnose.
I checked my heroku config vars though, and there were no PYTHON* variables set. They were set as env vars at the shell level though:
$ heroku run set | grep PYTHON
PYTHONHASHSEED=random
PYTHONHOME=/app/.heroku/venv/
PYTHONPATH=/app/
PYTHONUNBUFFERED=true
/app/.heroku/venv was a non-existent directory. If I overrode PYTHONHOME with a config var, and pointed to where my virtualenv actually was, it all started working again:
$ heroku config:set PYTHONHOME=/app
/app appears to be a mount point for the project root directory. Digging through the history of the Python buildpack, it looks like when I started my project, everyone made their virtualenvs in the project root. Now new projects make virtualenvs in a venv/ subdirectory. Support said they were gradually rolling out a buildpack change, and I guess the checks for the old way of doing things didn't kick in for me.
Here's where to look for the buildpack internals:
https://github.com/heroku/heroku-buildpack-python/blob/master/bin/compile
This bit me, too. I'm not sure when it started; I don't believe it was in response to any change on my part. I just noticed an Application Error on my site today and found this in the logs:
2012-12-12T16:02:06+00:00 heroku[web.1]: State changed from crashed to starting
2012-12-12T16:02:09+00:00 heroku[web.1]: Starting process with command `aspen --network_address=:40856 --www_root=doc/ --project_root=doc/.aspen`
2012-12-12T16:02:10+00:00 app[web.1]: ImportError: No module named site
2012-12-12T16:02:11+00:00 heroku[web.1]: Process exited with status 1
2012-12-12T16:02:11+00:00 heroku[web.1]: State changed from starting to crashed
I had another release ready to go, so I just deployed as usual. After a git push heroku, the site is back up. My heroku config doesn't have the extra envvars listed above:
$ heroku config
=== aspen-io Config Vars
ASPEN_IO_SHOW_GA: yes
PATH: bin:/usr/local/bin:/usr/bin:/bin
PYTHONUNBUFFERED: true
Update: @kennethreitz pointed me to this:
Cannot import module site
This occurs when the configured environment variables don't match the
paths of the installed Python. When this occurs, it is because someone
purged an app's cache without understanding the above implications.
To fix, either purge the cache and update the configuration, or
restore the expected configurations (preferred).