Run the right scripts before deployment (Elastic Beanstalk, Python)

I am editing my .ebextensions .config file to run some initialisation commands before deployment. I thought these commands would be run in the same folder as the extracted .zip containing my app, but that's not the case. manage.py is in the root directory of my zip, and if I use the commands:
01_collectstatic:
  command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
I get an error: [Instance: i-085e84b9d1df851c9] Command failed on instance. Return code: 2 Output: python: can't open file 'manage.py': [Errno 2] No such file or directory.
I could use command: "python /opt/python/current/app/manage.py collectstatic --noinput", but that would run the manage.py that was successfully deployed previously instead of the one that is being deployed right now.
I tried to check the working directory of the commands run by the .config by using command: "pwd", and it seems that pwd is /opt/elasticbeanstalk/eb_infra, which doesn't contain my app.
So I probably need to change $PYTHONPATH to contain the right path, but I don't know which path that is.
In this comment the user added the following to his .config file:
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: myapp.settings
    PYTHONPATH: "./src"
because his manage.py lives inside the src folder in the root of his zip. In my case I would use PYTHONPATH: "." but it's not working.

AWS support solved the problem. Here's their answer:
When Beanstalk is deploying an application, it keeps your application files in a "staging" directory while the EB Extensions and Hook Scripts are being processed. Once the pre-deploy scripts have finished, the application is then moved to the "production" directory. The issue you are having is related to the "manage.py" file not being in the expected location when your "01_collectstatic" command is being executed.
The staging location for your environment (Python 3.4, Amazon Linux 2017.03) is "/opt/python/ondeck/app".
The EB Extension "commands" section is executed before the staging directory is actually created. To run your script once the staging directory has been created, you should use "container_commands". This section is meant for modifying your application after the application has been extracted, but before it has been deployed to the production directory. It will automatically run your command in your staging directory.
Can you please try implementing the container_command section and see if it helps resolve your problem? The syntax will look similar to this (but please test it before deploying to production):
container_commands:
  01_collectstatic:
    command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"

So, the thing to remember about Beanstalk is that each of the commands is independent, and you do not maintain state between them. You have two options in this case: put your commands into a shell script that is uploaded in the files section of ebextensions (sketched below), or write one-line commands that do all stateful activities prefixed to your command of interest.
e.g.,
00_collectstatic:
  command: "pushd /path/to/django && source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput && popd"

Related

Manage.py unknown command

I am a student and my professor needs me to install Django in PyCharm.
I made a big folder called PyCharmProjects and it includes basically everything I have done in Python.
The problem is that I made a new folder inside PyCharmProjects called Elementar, and I need to have the Django folders in there, but it's not downloading.
I type in the PyCharm terminal django-admin manage.py startproject taskmanager1 (this is how my professor needs me to name it).
After I run the code it says:
No Django settings specified.
Unknown command: 'manage.py'
Type 'django-admin help' for usage.
I also tried to install it through the macOS terminal, but I can't even access the folder named Elementar (cd: no such file or directory: Elementar), although it is created and visible in PyCharm.
manage.py is a Python file that is created when you start your project; you can't call it until you have run this command:
django-admin startproject mysite
Then run:
python manage.py runserver
And if you want apps in your project run:
python manage.py startapp my_app
First of all, you can't create a project using manage.py, because the manage.py file doesn't exist yet. It will be created automatically in the folder taskmanager1 when you run the command below.
You can create a project with the command
django-admin startproject taskmanager1
After that you can change the directory to the taskmanager1 folder with the cd taskmanager1/ command.
Once you have changed the directory, you can use the python manage.py command, for example to run your migrations or create an app:
python manage.py migrate
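
Put together, a typical first run (assuming nothing has been created yet) looks roughly like this:

django-admin startproject taskmanager1
cd taskmanager1/
python manage.py migrate
python manage.py runserver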

debug_toolbar module is not persisted after docker down / docker up

I started a new project using Django. This project is built using Docker with a few containers, and Poetry to install all dependencies.
When I first run docker-compose up -d, everything is installed correctly. Actually, this problem is not related to Docker, I suppose.
After I run that command, I run docker-compose exec python make -f automation/local/Makefile, which has this content:
Makefile
.PHONY: all
all: install-deps run-migrations build-static-files create-superuser

.PHONY: build-static-files
build-static-files:
	python manage.py collectstatic --noinput

.PHONY: create-superuser
create-superuser:
	python manage.py createsuperuser --noinput --user=${DJANGO_SUPERUSER_USERNAME} --email=${DJANGO_SUPERUSER_USERNAME}@zitec.com

.PHONY: install-deps
install-deps: vendor

vendor: pyproject.toml $(wildcard poetry.lock)
	poetry install --no-interaction --no-root

.PHONY: run-migrations
run-migrations:
	python manage.py migrate --noinput
pyproject.toml
[tool.poetry]
name = "some-random-application-name"
version = "0.1.0"
description = ""
authors = ["xxx <xxx@xxx.com>"]
[tool.poetry.dependencies]
python = ">=3.6"
Django = "3.0.8"
docusign-esign = "^3.4.0"
[tool.poetry.dev-dependencies]
pytest = "^3.4"
django-debug-toolbar = "^2.2"
The debug toolbar is installed by adding the usual entries to settings.py (MIDDLEWARE / INSTALLED_APPS), plus DEBUG_TOOLBAR_CONFIG with SHOW_TOOLBAR_CALLBACK set.
Let me confirm that EVERYTHING works after a fresh docker-compose up -d. The problem occurs after I stop the containers and start them again using the following commands:
docker-compose down
docker-compose up -d
When I try to access the project, it says that the module debug_toolbar does not exist!
I read all questions from this website, but nothing worked for me.
Has anyone encountered this problem before?
That sounds like normal behavior. A container has a temporary filesystem, and when the container exits any changes that have been made in that filesystem will be permanently lost. Deleting and recreating containers is extremely routine (even just changing environment: or ports: settings in the docker-compose.yml file would cause that to happen).
You should almost never install software in a running container. docker exec is an extremely useful debugging tool, but it shouldn't be the primary way you interact with your container. In both cases you're setting yourself up to lose work if you ever need to change a Docker-level setting or update the base image.
For this example, you can split the contents of that Makefile into two parts: the install-deps target (which installs Python packages but doesn't have any external dependencies) and the rest (which depends on a running database). You need to run the installation part at image-build time, but the Dockerfile can't access a database, so the remainder needs to happen at container-startup time.
So in your image's Dockerfile, RUN the installation part:
RUN make -f automation/local/Makefile install-deps
You will also need an entrypoint script that does the rest of the first-time setup, then runs the main container command. This can look like:
#!/bin/sh
make -f automation/local/Makefile run-migrations build-static-files create-superuser
exec "$@"
Then run this in your Dockerfile:
COPY entrypoint.sh .
ENTRYPOINT ["./entrypoint.sh"]
CMD python3 manage.py runserver 0.0.0.0:8000
(I've recently seen a lot of Dockerfiles that have just ENTRYPOINT ["python3"]. Splitting ENTRYPOINT and CMD this way isn't especially useful; just move the python3 interpreter command into CMD.)
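
Assembling those pieces, a minimal Dockerfile sketch could look like the one below. The base image, the chmod step, the Poetry environment variable, and the port are assumptions made for illustration; adjust them to the real project:

# Hypothetical Dockerfile sketch
FROM python:3.8
WORKDIR /app
# Assumes Poetry should install into the image's Python, not its own virtualenv
ENV POETRY_VIRTUALENVS_CREATE=false

# Image-build time: install dependencies (no database needed for this part)
COPY pyproject.toml poetry.lock* ./
COPY automation/ ./automation/
RUN pip install poetry && make -f automation/local/Makefile install-deps

# Copy the rest of the application code, including entrypoint.sh
COPY . .
RUN chmod +x entrypoint.sh

# Container-startup time: migrations, static files, superuser, then the server
ENTRYPOINT ["./entrypoint.sh"]
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]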

How to properly run virtualenv via .sh run script (django)

I am having an issue on an Apache NearlyFreeSpeech server (I have been following the NFS guide at this link: https://blog.nearlyfreespeech.net/2014/11/17/how-to-django-on-nearlyfreespeech-net/). I have created a run-django.sh file to run the application, which also activates a virtualenv (I have named the folder 'venv'). These are the contents of my run-django.sh file:
#!/bin/sh
. venv/bin/activate
exec python3 manage.py runserver
On the current step of the guide I am attempting to run the run-django.sh as follows:
[questionanswer /home/protected]$ ls
question_answer run-django.sh venv
[questionanswer /home/protected]$ cd question_answer/
[questionanswer /home/protected/question_answer]$ ../run-django.sh
.: cannot open bin/activate: No such file or directory
How is this not detecting my directory of 'venv/bin/activate' ?
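One plausible explanation is that the dot (source) command resolves venv/bin/activate against the current working directory, which in the session above is /home/protected/question_answer rather than /home/protected where venv lives. A sketch of a version that works from any directory, assuming manage.py sits inside question_answer/:

#!/bin/sh
# Resolve everything relative to this script's own location, not the caller's cwd
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"

# Activate the virtualenv that lives next to this script
. "$SCRIPT_DIR/venv/bin/activate"

# Run the dev server from the Django project directory (assumed location)
cd "$SCRIPT_DIR/question_answer"
exec python3 manage.py runserver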

Why does Jenkins starting my Django server give me a 404, but manually running the same script works properly?

This is my fabric script that runs on the jenkins server.
sudo('/home/myjenkins/killit.sh',pty=False)
sudo('/home/myjenkins/makedir.sh',pty=False)
sudo('/home/myjenkins/runit.sh',pty=False)
This kills the old server, creates a virtualenv, installs the requirements and restarts the server.
The problem is with the script that starts the server, runit.sh:
nohup /home/myjenkins/devserver/dev/bin/python /home/myjenkins/devserver/workspace/manage.py runserver --noreload 0:80 >> jenkins.out &
When the Jenkins server starts the Django server and I navigate to the homepage, it gives me a 404 Page Not Found. It says /static/index.html was not found, but the file exists. When I run 'sudo bash runit.sh' and access the homepage, it works fine.
mkdir -p /home/myjenkins/devserver
cp -rf /home/myjenkins/workspace /home/jenkins/devserver/
cp -f /home/myjenkins/dev_settings.py /home/myjenkins/devserver/workspace/mywebsite/settings.py
cd /home/myjenkins/devserver
virtualenv -p python3 dev
cd /home/myjenkins/devserver/workspace
../dev/bin/pip install -r requirements.txt
Please ask me for more details if you need it.
EDITED 9/2/18
When I start the script from the folder containing manage.py, the server is able to serve the files. But Jenkins was starting the script from the home folder, and if I also start the script from the home folder, the server is not able to find the files. Look at my comment for more details. It would be great if someone could explain why this happens even though I've specified the full path in the script.
nohup /home/myjenkins/devserver/dev/bin/python /home/myjenkins/devserver/workspace/manage.py runserver --noreload 0:80 >> jenkins.out &
Okay, I figured out the whole deal. My Django server was taking the output of the npm build from the wrong folder.
In the settings.py file, the variable STATICFILES_DIRS was set as:
STATICFILES_DIRS = ('frontend/dist',)
instead of:
STATICFILES_DIRS = (os.path.join(BASE_DIR,'frontend/dist'),)
Thus, when Jenkins ran the script from the home folder, Django's staticfiles finders looked in /home/myjenkins/frontend/dist instead of the frontend/dist folder inside the project.
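
For context, a minimal settings.py sketch of the absolute-path form, using the BASE_DIR variable that django-admin startproject generates (the frontend/dist path comes from the answer above):

import os

# Project root: two levels above this settings.py file
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# Anchor the npm build output to the project root instead of the process cwd
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, 'frontend/dist'),
)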

How do I run a Python script that is part of an application I uploaded in an AWS SSH session?

I'm trying to run a Python script I've uploaded as part of my AWS Elastic Beanstalk application from my development machine, but can't figure out how to. I believe I've located the script correctly, but when I attempt to run it under SSH, I get an import error.
For example, I have a Flask-Migrate migration script as part of my application (pretty much the same as the example in the documentation), but after successfully SSHing to my EB instance with
> eb ssh
and locating the script with
$ sudo find / -name migrate.py
when I run in the directory (/opt/python/current) where I located it with
$ python migrate.py db upgrade
at the SSH prompt I get
Traceback (most recent call last):
File "db_migrate.py", line 15, in <module>
from flask.ext.script import Manager
ImportError: No module named flask.ext.script
even though my requirements.txt (present along with the rest of my files in the same directory) has flask-script==2.0.5.
On Heroku I can accomplish all of this in two steps with
> heroku run bash
$ python migrate.py db upgrade
Is there equivalent functionality on AWS? How do I run a Python script that is part of an application I uploaded in an AWS SSH session? Perhaps I'm missing a step to set up the environment in which the code runs?
To migrate your database, the best approach is to use container_commands; these are commands that will run every time you deploy your application. There is a good example in the Elastic Beanstalk documentation (Step 6):
container_commands:
  01_syncdb:
    command: "django-admin.py syncdb --noinput"
    leader_only: true
The reason you're getting an ImportError is that Elastic Beanstalk installs your packages in a virtualenv. Before running arbitrary scripts from your application over SSH, first change to the directory containing your (latest) code with
cd /opt/python/current
and then activate the virtualenv
source /opt/python/run/venv/bin/activate
and set the environment variables (that your script probably expects)
source /opt/python/current/env
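
Put together, a manual SSH session following those steps would look roughly like this; the exact location of migrate.py inside the deployed app depends on your project layout:

> eb ssh
$ cd /opt/python/current
$ source /opt/python/run/venv/bin/activate
$ source /opt/python/current/env
$ python migrate.py db upgrade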
