Here is the content of my fabfile.py:
from fabric.api import run, local, abort, env, put, task
from fabric.contrib.files import exists
from fabric.context_managers import cd, lcd, settings, hide
from fabric.operations import require

PROD_SERVER = 'user@user.webfactional.com'

# Host and login username:
def prod():
    env.hosts = [PROD_SERVER]
    env.remote_app_dir = '~/webapps/django/myapp/'
    env.remote_apache_dir = '~/webapps/django/apache2/'

def commit():
    message = raw_input("Enter a git commit message: ")
    local("git add -A && git commit -m '%s'" % message)
    local("git push webfactioncarmarket")
    print "changes pushed to remote repository...."

def test():
    local("python2.7 manage.py test")

def install_dependencies():
    with cd(env.remote_app_dir):
        run("pip2.7 install -r requirements.txt")

def testing_fabric():
    prod()
    print env.hosts
    print env.remote_app_dir
    print env.remote_apache_dir

def collectstatic():
    prod()
    print env.hosts
    require('hosts', provided_by=[prod])
    run("cd $(remote_app_dir); python2.7 manage.py collectstatic --noinput")
Running:
fab testing_fabric   # works fine
['user@user.webfactional.com']
~/webapps/django/myapp/
~/webapps/django/apache2/
Done.

Running:
fab collectstatic
['user@user.webfactional.com']
No hosts found. Please specify (single) host string for connection:
Why do I get the "No hosts found" prompt? I print env.hosts and it shows the host.
How do I fix this?
EDIT: I tried the chaining and now get this error:
run: cd $(remote_app_dir); python2.7 manage.py collectstatic --noinput
out: /bin/bash: remote_app_dir: command not found
Why does the value of remote_app_dir not get passed?
There are two ways to do this:
Set env.hosts at the module level, so it gets set when the fabfile is imported.
Chain tasks to set the env variables.
In your case, the second one can be accomplished by removing the line prod() from collectstatic and instead invoking fabric like this:
fab prod collectstatic
This sort of decoupling can be immensely powerful. See here: http://fabric.readthedocs.org/en/latest/usage/execution.html#globally-via-env
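To see why the chaining works, here is a minimal sketch of the pattern using a plain Python object in place of fabric's env (this is an illustration, not fabric itself; fabric's real env behaves like this shared attribute bag):

```python
# Sketch of fabric's task-chaining pattern: "fab prod collectstatic"
# runs prod() first, which populates env, then collectstatic() reads it.
# `env` here is a stand-in for fabric.api.env.

class _Env(object):
    pass

env = _Env()

def prod():
    # environment-selection task: only sets configuration
    env.hosts = ['user@user.webfactional.com']
    env.remote_app_dir = '~/webapps/django/myapp/'

def collectstatic():
    # deployment task: assumes a prior task has populated env
    return "run on %s in %s" % (env.hosts[0], env.remote_app_dir)

# Simulate "fab prod collectstatic": tasks execute left to right,
# sharing the same env object.
prod()
print(collectstatic())
```

The point of the decoupling is that collectstatic never hard-codes an environment; swapping prod for a staging task changes the target without touching the deployment logic.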
To run from a specific directory, try using with cd:
def collectstatic():
    with cd(env.remote_app_dir):
        run("python2.7 manage.py collectstatic --noinput")
See here: https://fabric.readthedocs.org/en/latest/api/core/context_managers.html
I am writing an answer because I don't have enough points to comment:
run("cd $(remote_app_dir); python2.7 manage.py collectstatic --noinput")
You should not wrap the remote_app_dir variable in $() as you would in bash/shell. That variable does not exist in the shell.
env.remote_app_dir = '~/webapps/django/myapp/'
causes the Python object env to get a new attribute, remote_app_dir.
Please change your code and use your variable in the Pythonic way:
run("cd %s; python2.7 manage.py collectstatic --noinput" % env.remote_app_dir)
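In other words, the substitution has to happen in Python before the string is handed to the remote shell. A small illustration, runnable without fabric:

```python
# The remote shell never sees Python names like env.remote_app_dir;
# Python must build the final command string first.
remote_app_dir = '~/webapps/django/myapp/'

# Wrong: the shell tries to run `remote_app_dir` as a command inside $()
broken = "cd $(remote_app_dir); python2.7 manage.py collectstatic --noinput"

# Right: %-interpolation resolves the value before run() ever sees it
fixed = "cd %s; python2.7 manage.py collectstatic --noinput" % remote_app_dir
print(fixed)
# -> cd ~/webapps/django/myapp/; python2.7 manage.py collectstatic --noinput
```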
Related
I am editing my .ebextensions .config file to run some initialisation commands before deployment. I thought these commands would be run in the same folder as the extracted .zip containing my app, but that's not the case. manage.py is in the root directory of my zip, and if I use the commands:
01_collectstatic:
  command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
I get: ERROR: [Instance: i-085e84b9d1df851c9] Command failed on instance. Return code: 2 Output: python: can't open file 'manage.py': [Errno 2] No such file or directory.
I could do command: "python /opt/python/current/app/manage.py collectstatic --noinput" but that would run the manage.py that successfully was deployed previously instead of running the one that is being deployed atm.
I tried to check what was the working directory of the commands ran by the .config by doing command: "pwd" and it seems that pwd is /opt/elasticbeanstalk/eb_infra which doesn't contain my app.
So I probably need to change $PYTHONPATH to contain the right path, but I don't know which path is it.
In this comment the user added the following to his .config file:
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: myapp.settings
    PYTHONPATH: "./src"
Because his manage.py lives inside the src folder within the root of his zip. In my case I would do PYTHONPATH: "." but it's not working.
AWS support solved the problem. Here's their answer:
When Beanstalk is deploying an application, it keeps your application files in a "staging" directory while the EB Extensions and Hook Scripts are being processed. Once the pre-deploy scripts have finished, the application is then moved to the "production" directory. The issue you are having is related to the "manage.py" file not being in the expected location when your "01_collectstatic" command is being executed.
The staging location for your environment (Python 3.4, Amazon Linux 2017.03) is "/opt/python/ondeck/app".
The EB Extension "commands" section is executed before the staging directory is actually created. To run your script once the staging directory has been created, you should use "container_commands". This section is meant for modifying your application after the application has been extracted, but before it has been deployed to the production directory. It will automatically run your command in your staging directory.
Can you please try implementing the container_command section and see if it helps resolve your problem? The syntax will look similar to this (but please test it before deploying to production):
container_commands:
  01_collectstatic:
    command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
So, the thing to remember about Beanstalk is that each of the commands is independent, and state is not maintained between them. You have two options in this case: put your commands into a shell script that is uploaded in the files section of ebextensions, or write one-line commands that do all stateful activities prefixed to your command of interest.
e.g.,
00_collectstatic:
  command: "pushd /path/to/django && source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput && popd"
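The loss of state between commands is easy to demonstrate with two separate shell invocations, which is roughly what Beanstalk does with each command entry:

```shell
# Each EB "command" runs in a fresh shell, so a cd in one command does
# not carry over to the next. Two separate sh -c calls simulate this:
sh -c 'cd /tmp && pwd'   # prints /tmp
sh -c 'pwd'              # prints the original directory, not /tmp
# Hence the one-liner above chains pushd ... && ... && popd inside a
# single command, so everything runs in one shell.
```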
Trying to create a super user for my database:
manage.py createsuperuser
Getting a sad recursive message:
Superuser creation skipped due to not running in a TTY. You can run manage.py createsuperuser in your project to create one manually.
Seriously Django? Seriously?
The only information I found for this was the one listed above but it didn't work:
Unable to create superuser in django due to not working in TTY
And this other one here, which is basically the same:
Can't Create Super User Django
If you run
$ python manage.py createsuperuser
from Git Bash and get the error message
Superuser creation skipped due to not running in a TTY. You can run manage.py createsuperuser in your project to create one manually.
try prepending winpty, for example:
$ winpty python manage.py createsuperuser
Username (leave blank to use '...'):
To be able to run python commands as usual on Windows as well, what I normally do is append an alias line to the ~/.profile file, i.e.
MINGW64 ~$ cat ~/.profile
alias python='winpty python'
After doing so, either source the ~/.profile file or simply restart the terminal; the initial command python manage.py createsuperuser should then work as expected!
I had the same problem when trying to create a superuser in a docker container with the command:
sudo docker exec -i <container_name> sh. Adding the -t option solved the problem:
sudo docker exec -it <container_name> sh
In a virtualenv, to create a superuser for a Django project from git-bash, use the command:
winpty python manage.py createsuperuser
Since Django 3.0 you can create a superuser without TTY in two ways
Way 1: Pass values and secrets as ENV in the command line
DJANGO_SUPERUSER_USERNAME=admin2 DJANGO_SUPERUSER_PASSWORD=psw \
python manage.py createsuperuser --email=admin@admin.com --noinput
Way 2: set DJANGO_SUPERUSER_PASSWORD as the environment variable
# .admin.env
DJANGO_SUPERUSER_PASSWORD=psw
# bash
source '.admin.env' && python manage.py createsuperuser --username=admin --email=admin@admin.com --noinput
The output should say: Superuser created successfully.
To create an admin username and password, you must first use the command:
python manage.py migrate
Then after use the command:
python manage.py createsuperuser
Once these steps are complete, the program will ask you to enter:
username
email
password
The password will not show as you type, so it will appear as though you are not typing; ignore this, as it will ask you to re-enter the password.
When you complete these steps, use the command:
python manage.py runserver
In the browser add "/admin", which will take you to the admin site, and then type in your new username and password.
Check your docker-compose.yml file and make sure your Django application service is named web under services.
I tried creating superuser from Stash [ App: Pythonista on iOS ]
[ Make sure migrations are already made ]
$ django-admin createsuperuser
I figured out how to do it. I went to views.py, imported the os module, and created a function called createSuperUser(request). Inside it I created a variable called admin, set it equal to os.system("python manage.py createsuperuser"), and returned admin. Finally, I restarted the Django site, and it prompted me in the terminal.
import os

def createSuperUser(request):
    admin = os.system("python manage.py createsuperuser")
    return admin
I am receiving the error:
ImportError at /
No module named Interest.urls
even though my settings file has been changed several times:
ROOT_URLCONF = 'urls'
or
ROOT_URLCONF = 'interest.urls'
I keep getting the same error, as if it doesn't matter what I put in my settings file; it is still looking for Interest.urls, even though my urls file is located at Interest (django project)/interest/urls.py.
I have restarted my nginx server several times and it changes nothing, is there another place I should be looking to change where it looks for my urls file?
Thanks!
I had to restart my supervisorctl, which restarted the gunicorn server which was actually handling the django files
There's no need to restart nginx; you can do these steps:
Install fabric (pip install fabric)
Create a "restart" function in fabfile.py with the following:
def restart():
    sudo('kill -9 `ps -ef | grep -m 1 \'[y]our_project_name\' | awk \'{print $2}\'`')
Call the function with:
$ fab restart
Optionally, you might want to put the command into a script with your password by adding "-p mypass" to the fabric command.
That will kill all your gunicorn processes allowing supervisord to start them again.
I'm using this guide:
http://www.jeffknupp.com/blog/2012/02/09/starting-a-django-project-the-right-way/
To set up my django project. But I'm stuck at deployment part.
I installed fabric using pip in my virtualenv.
I created this file inside myproject directory:
from fabric.api import local

def prepare_deployment(branch_name):
    local('python manage.py test finance')
    local('git add -p && git commit')
    local('git checkout master && git merge ' + branchname)

from fabric.api import lcd

def deploy():
    with lcd('home/andrius/djcode/myproject/'):
        local('git pull /home/andrius/djcode/dev/')
        local('python manage.py migrate finance')
        local('python manage.py test finance')
        local('/my/command/to/restart/webserver')
But when I enter this command (as shown in a guide):
fab prepare_deployment
I get this error:
Traceback (most recent call last):
File "/home/andrius/env/local/lib/python2.7/site-packages/fabric/main.py", line 732, in main
*args, **kwargs
File "/home/andrius/env/local/lib/python2.7/site-packages/fabric/tasks.py", line 345, in execute
results['<local-only>'] = task.run(*args, **new_kwargs)
File "/home/andrius/env/local/lib/python2.7/site-packages/fabric/tasks.py", line 121, in run
return self.wrapped(*args, **kwargs)
TypeError: prepare_deployment() takes exactly 1 argument (0 given)
So even though the guide didn't say to enter an argument, I suppose it requires my branch name. So I entered this:
fab prepare_deployment v0.1
(v0.1 is my branch name)
So now I got this error:
Warning: Command(s) not found:
v0.1
Available commands:
deploy
prepare_deployment
Also, I noticed that in the guide's fabfile.py the prepare_deployment function takes 'branch_name' as input but uses 'branchname' inside the function body. I thought they should be the same and renamed 'branchname' to 'branch_name', but I still get the same error.
So I think I'm doing something wrong here. What could be the problem?
Update:
I tried to call function inside fabfile.py with:
prepare_deployment("v0.1")
Output was this:
[localhost] local: python manage.py test finance
Creating test database for alias 'default'...
Got an error creating the test database: permission denied to create database
Type 'yes' if you would like to try deleting the test database 'test_finance', or 'no' to cancel: yes
Destroying old test database 'default'...
Got an error recreating the test database: database "test_finance" does not exist
Fatal error: local() encountered an error (return code 2) while executing 'python manage.py test finance'
Aborting.
I think I should also mention that my app name is 'finance' as database name 'finance'. Maybe those are conflicting?
Fabric uses a specific syntax for passing arguments to tasks from the command line. From a bash shell you would need to use:
fab prepare_deployment:v0.1
You can read about it in the fabric documentation on per-task arguments.
If you ever actually need to use brackets in a bash command you would need to escape them.
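Fabric splits each task string on ':' for positional arguments and treats comma-separated 'key=value' pieces as keyword arguments. A rough sketch of that convention (an illustration, not fabric's actual parser):

```python
# Rough sketch of fabric's "task:arg1,arg2,kw=val" CLI convention.
def parse_task(spec):
    name, _, argstr = spec.partition(':')
    args, kwargs = [], {}
    if argstr:
        for piece in argstr.split(','):
            if '=' in piece:
                # keyword argument: key=value
                k, v = piece.split('=', 1)
                kwargs[k] = v
            else:
                # positional argument
                args.append(piece)
    return name, args, kwargs

print(parse_task('prepare_deployment:v0.1'))
# -> ('prepare_deployment', ['v0.1'], {})
print(parse_task('deploy:branch_name=v0.1'))
# -> ('deploy', [], {'branch_name': 'v0.1'})
```

This is why `fab prepare_deployment v0.1` fails: the space makes fabric treat v0.1 as a second task name rather than an argument.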
As you stated, this function should look like this:
def prepare_deployment(branch_name):
    local('python manage.py test finance')
    local('git add -p && git commit')
    local('git checkout master && git merge ' + branch_name)
Then you call it using fabric's colon syntax for arguments:
fab prepare_deployment:v0.1
When I run 'dotcloud push training', running the postinstall script takes a long time and I get the error below.
I created a new account, cd'd to the project, and ran 'dotcloud create training' and 'dotcloud push training', but nothing changed.
Can anyone help me, please?
Running postinstall script...
ERROR: deployment aborted due to unexpected command result: "./postinstall" failed with return code [Timeout]
postinstall
#!/bin/sh
#python createdb.py
python training/manage.py syncdb --noinput
python mkadmin.py
mkdir -p /home/dotcloud/data/media /home/dotcloud/volatile/static
python training/manage.py collectstatic --noinput
requirements.txt
Django==1.4
PIL==1.1.7
Try this as your postinstall. It may help with locating the error (expanding on Ken's advice):
#!/bin/bash
# set -e makes the script exit on the first error
set -e
# set -x will add debug trace information to all of your commands
set -x
echo "$0 starting"
#python createdb.py
python training/manage.py syncdb --noinput
python mkadmin.py
mkdir -p /home/dotcloud/data/media /home/dotcloud/volatile/static
python training/manage.py collectstatic --noinput
echo "$0 complete"
More debugging info available at http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_02_03.html
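A tiny demonstration of what set -e changes, independent of dotCloud:

```shell
# Without set -e the shell continues past a failing command;
# with it, execution stops at the first failure, so the last
# traced line (from set -x) is the one that broke.
sh -c 'false; echo reached'          # prints "reached"
sh -c 'set -e; false; echo reached'  # prints nothing; exits with false's status
```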
Any error message like "./postinstall failed with return code" means that there is a problem with your postinstall script.
In order to debug postinstall executions easily on dotCloud, you can do the following:
Let's assume that your app is "ramen" and your service is "www".
$ dotcloud -A ramen run www
> ~/current/postinstall
It will re-execute the postinstall script, but from your session this time, so you'll be able to easily update the postinstall code and re-run it without having to push again and again.
Once you find the root cause, fix it locally and push your application again.