I'm using fabric to launch a command on a remote server.
I'd like to launch this command as a different user (neither the one connected nor root).
def colstat():
    run('python manage.py collectstatic --noinput')
Trying:
def colstat():
    sudo('-u www-data python manage.py collectstatic --noinput')
Obviously this won't work, because -u will be treated as part of the command rather than as an option of sudo:
out: /bin/bash: -u : command not found
(www-data is the user which should run the command)
How can I run my command as www-data from Fabric?
Judging from the documentation:
sudo('python manage.py collectstatic --noinput', user='www-data')
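Put together as a complete task, a minimal sketch (assuming the Fabric 1.x API and a placeholder remote directory /srv/myproject where manage.py lives):

from fabric.api import cd, sudo

def colstat():
    # /srv/myproject is a placeholder for wherever manage.py lives on the server
    with cd('/srv/myproject'):
        # sudo() accepts a user argument, so the command runs as www-data
        sudo('python manage.py collectstatic --noinput', user='www-data')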
Related
Trying to create a superuser for my database:
manage.py createsuperuser
Getting a sad recursive message:
Superuser creation skipped due to not running in a TTY. You can run manage.py createsuperuser in your project to create one manually.
Seriously Django? Seriously?
The only information I found for this was the question listed below, but it didn't work:
Unable to create superuser in django due to not working in TTY
And this other one here, which is basically the same:
Can't Create Super User Django
If you run python manage.py createsuperuser from Git Bash and face the above error message ("Superuser creation skipped due to not running in a TTY..."), try prefixing the command with winpty, for example:
$ winpty python manage.py createsuperuser
Username (leave blank to use '...'):
To be able to run python commands as usual on Windows as well, what I normally do is append an alias line to the ~/.profile file, i.e.
MINGW64 ~$ cat ~/.profile
alias python='winpty python'
After doing so, either source the ~/.profile file or simply restart the terminal and the initial command python manage.py createsuperuser should work as expected!
I had the same problem when trying to create a superuser in a Docker container with the command:
sudo docker exec -i <container_name> sh. Adding the -t option solved the problem:
sudo docker exec -it <container_name> sh
In a virtualenv, when creating a superuser for a Django project from Git Bash, use the command:
winpty python manage.py createsuperuser.
Since Django 3.0 you can create a superuser without a TTY in two ways.
Way 1: Pass values and secrets as ENV in the command line
DJANGO_SUPERUSER_USERNAME=admin2 DJANGO_SUPERUSER_PASSWORD=psw \
python manage.py createsuperuser --email=admin@admin.com --noinput
Way 2: Set DJANGO_SUPERUSER_PASSWORD as an environment variable
# .admin.env
DJANGO_SUPERUSER_PASSWORD=psw
# bash
source '.admin.env' && python manage.py createsuperuser --username=admin --email=admin@admin.com --noinput
The output should say: Superuser created successfully.
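The same thing can also be done from Python rather than the shell, for example in a deployment script. A minimal sketch, assuming Django 3.0+; the settings module and the credentials below are placeholders:

import os

import django
from django.core.management import call_command

# Placeholder settings module; adjust to your project
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
django.setup()

# In non-interactive mode, Django 3.0+ reads the password from
# the DJANGO_SUPERUSER_PASSWORD environment variable
os.environ['DJANGO_SUPERUSER_PASSWORD'] = 'psw'
call_command('createsuperuser', username='admin', email='admin@admin.com', interactive=False)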
To create an admin username and password, you must first use the command:
python manage.py migrate
Then use the command:
python manage.py createsuperuser
Once these steps are complete, the program will ask you to enter:
username
email
password
The password will not show as you are typing, so it will appear as though you are not typing anything; ignore that, and note that it will ask you to re-enter the password.
When you complete these steps, use the command:
python manage.py runserver
In the browser, append "/admin" to the URL, which will take you to the admin site, and then type in your new username and password.
Check your docker-compose.yml file and make sure your Django application is labeled web under services.
I tried creating superuser from Stash [ App: Pythonista on iOS ]
[ Make sure migrations are already made ]
$ django-admin createsuperuser
I figured out how to do it. I went to views.py and imported the os module. Then I created a function called createSuperUser(request), created a variable called admin, set it equal to os.system("python manage.py createsuperuser"), and returned admin. Finally, I restarted the Django site, and it prompts you in the terminal.
import os

def createSuperUser(request):
    # Runs the interactive createsuperuser prompt in the server's terminal
    admin = os.system("python manage.py createsuperuser")
    return admin
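Shelling out with os.system still relies on the server's terminal prompt; if the goal is just to create a superuser without a TTY, the ORM can do it directly (for example from the Django shell). A minimal sketch, with placeholder credentials:

from django.contrib.auth import get_user_model

def create_admin():
    # create_superuser hashes the password and sets is_staff/is_superuser,
    # with no terminal prompt involved
    User = get_user_model()
    if not User.objects.filter(username='admin').exists():
        User.objects.create_superuser('admin', 'admin@example.com', 'psw')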
I wanted to write a command to ssh into vagrant, change the current working directory, and then run nosetests.
I found in the documentation for vagrant that this could be done with vagrant ssh -c COMMAND
http://docs.vagrantup.com/v2/cli/ssh.html
The problem is that I'm getting different results if I run nose through -c versus manually after SSHing in.
Command:
vagrant ssh -c 'pwd && cd core && pwd && nosetests -x --failed' web
Output:
/web
/web/core
----------------------------------------------------------------------
Ran 0 tests in 4.784s
OK
Connection to 127.0.0.1 closed.
Commands:
vagrant ssh web
/web$ pwd && cd core && pwd && nosetests -x --failed
Output:
/web
/web/core
.........................................................
.........................................................
.........................................................
.........................................................
<snip>
...............................
---------------------------------------------------------
Ran 1399 tests in 180.325s
I don't understand why it makes a difference.
The first ssh session is not a terminal session. If you try ssh -t instead of vagrant ssh -c, the outputs will likely be the same. A command like the following should give output comparable to what you get locally:
ssh -t <username>@<ip-of-vagrant-machine> -p <vagrant-vm-ssh-port> 'pwd && cd core && pwd && nosetests -x --failed'
The default username and password on Vagrant machines are both "vagrant"; the SSH port and IP to connect to are shown during provisioning of the Vagrant machine with vagrant up. If you prefer public-key SSH login, Vagrant can also point you to the location of the SSH key.
Depending on where you want to run nose on your VM, you will have to adjust the cd command above; it seems that the vagrant ssh wrapper automatically moves you to /web on the VM.
If you are just worried about whether the results of the tests will differ because of the visual difference: no, they shouldn't. The reason is just that on a non-interactive terminal nose displays the results in a different manner.
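You can see the distinction nose is reacting to by checking whether stdout is attached to a terminal. A minimal sketch:

import sys

# Under `vagrant ssh -c ...` no pseudo-terminal is allocated, so this prints
# False, and tools that adapt their output to a TTY behave differently.
print(sys.stdout.isatty())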
I was able to resolve this issue by running:
vagrant ssh -c 'cd core && nosetests -x --failed --exe' web
I'm not sure why this made a difference on my box.
Is there any way I can run syncdb from my terminal? I don't know why my action_hooks/deploy script is not running. When I open my OpenShift database, it shows no tables created.
source ${OPENSHIFT_HOMEDIR}python-2.6/virtenv/bin/activate
export PYTHON_EGG_CACHE=${OPENSHIFT_HOMEDIR}python-2.6/virtenv/lib/python-2.6/site-packages
echo "Executing 'python ${OPENSHIFT_REPO_DIR}wsgi/my/manage.py syncdb --noinput'"
python "$OPENSHIFT_REPO_DIR"my/manage.py syncdb --noinput
echo "Executing 'python ${OPENSHIFT_REPO_DIR}wsgi/my/manage.py collectstatic --noinput -v0'"
python "$OPENSHIFT_REPO_DIR"my/manage.py collectstatic --noinput -v0
git repo at https://github.com/sarvesh-onlyme/ninja/tree/master/openshift/django
How about:
source $OPENSHIFT_HOMEDIR/python-2.6/virtenv/bin/activate
cd $OPENSHIFT_REPO_DIR/wsgi/$OPENSHIFT_APP_NAME
python manage.py syncdb --noinput
Please make sure to do something similar if your application type is python 2.7 based.
Let me know if it does not work.
Check the logs (rhc tail app_name). Try to log in to the OpenShift app through ssh (rhc ssh app_name) and run the deploy script manually (cd app-root/runtime/repo/.openshift/action_hooks; ./deploy). Do you see any errors?
Post your logs/errors here. I will update my answer afterwards.
// lol, sorry, I did not notice it's a two-year-old question.
How do you export environment variables in the command executed by Supervisor? I first tried:
command="export SITE=domain1; python manage.py command"
but Supervisor reports "can't find command".
So then I tried:
command=/bin/bash -c "export SITE=domain1; python manage.py command"
and the command runs, but this seems to interfere with daemonization: when I stop the Supervisor daemon, the other daemons it's running aren't stopped.
To add a single environment variable, you can do something like this:
[program:django]
environment=SITE=domain1
command = python manage.py command
But if you want to export multiple environment variables, you need to separate them with commas.
[program:django]
environment =
SITE=domain1,
DJANGO_SETTINGS_MODULE=foo.settings.local,
DB_USER=foo,
DB_PASS=bar
command = python manage.py command
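The values declared under environment= end up in the child process's environment, so the code run by python manage.py command can read them as usual. A minimal sketch (how your command actually uses SITE is an assumption):

import os

# Inside the code run by `python manage.py command`, the values from
# supervisor's environment= setting are visible via os.environ
site = os.environ.get('SITE', 'default-domain')
print(site)  # prints "domain1" when started by the [program:django] block above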
Just do it separately:
environment=SITE=domain1
command=python manage.py command
Refer to http://supervisord.org/subprocess.html#subprocess-environment for more info.
When I run 'dotcloud push training', running the postinstall script takes a long time and I get the error below.
I created a new account.
I cd to the project and run 'dotcloud create training' and 'dotcloud push training', but nothing changes.
Can anyone help me, please?
Running postinstall script...
ERROR: deployment aborted due to unexpected command result: "./postinstall" failed with return code [Timeout]
postinstall
#!/bin/sh
#python createdb.py
python training/manage.py syncdb --noinput
python mkadmin.py
mkdir -p /home/dotcloud/data/media /home/dotcloud/volatile/static
python training/manage.py collectstatic --noinput
requirements.txt
Django==1.4
PIL==1.1.7
Try this as your postinstall. It may help with locating the error (expanding on Ken's advice):
#!/bin/bash
# set -e makes the script exit on the first error
set -e
# set -x will add debug trace information to all of your commands
set -x
echo "$0 starting"
#python createdb.py
python training/manage.py syncdb --noinput
python mkadmin.py
mkdir -p /home/dotcloud/data/media /home/dotcloud/volatile/static
python training/manage.py collectstatic --noinput
echo "$0 complete"
More debugging info available at http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_02_03.html
Any error message like "./postinstall failed with return code" means that there is a problem with your postinstall script.
In order to debug postinstall executions easily on dotCloud, you can do the following:
Let's assume that your app is "ramen" and your service is "www".
$ dotcloud -A ramen run www
> ~/current/postinstall
It'll re-execute the postinstall but from your session this time, so you'll be able to easily update the postinstall code and re-run it without having to push again and again.
Once you've found the root cause, fix it locally and push your application again.