I installed supervisor and gunicorn in my virtual environment (venv).
I am using this tutorial: https://realpython.com/blog/python/kickstarting-flask-on-ubuntu-setup-and-deployment/
I'm confused as to where I should create the config file for supervisor, since the default /etc/supervisor location doesn't apply to me.
The supervisorctl file is in the directory:
/home/giri/venv/py2.7/lib/python2.7/site-packages/supervisor
I noticed this line in the supervisorctl file:
Options:
-c/--configuration -- configuration file path (default /etc/supervisord.conf)
Do I need to manually set this flag each time I run the supervisorctl script or is there another way?
Thanks
As found in the docs (http://supervisord.org/configuration.html):
The Supervisor configuration file is conventionally named
supervisord.conf. It is used by both supervisord and supervisorctl. If
either application is started without the -c option (the option which
is used to tell the application the configuration filename
explicitly), the application will look for a file named
supervisord.conf within the following locations, in the specified
order. It will use the first file it finds.
$CWD/supervisord.conf
$CWD/etc/supervisord.conf
/etc/supervisord.conf
So put supervisord.conf in your current working directory and you're fine.
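For example, a minimal supervisord.conf placed in the working directory might look like this (the program name, app module, and directory are illustrative, not from the question):

```ini
[supervisord]
logfile=/tmp/supervisord.log

[program:myflaskapp]
; gunicorn from the virtualenv mentioned in the question
command=/home/giri/venv/py2.7/bin/gunicorn app:app
directory=/home/giri/myflaskapp
autostart=true
autorestart=true
```

With that file in place, running supervisord and supervisorctl from the same directory picks it up without the -c flag.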
Related
I am editing my .ebextensions .config file to run some initialisation commands before deployment. I thought these commands would be run in the same folder as the extracted .zip containing my app, but that's not the case. manage.py is in the root directory of my zip, and if I use the commands:
01_collectstatic:
command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
I get the error: ERROR: [Instance: i-085e84b9d1df851c9] Command failed on instance. Return code: 2 Output: python: can't open file 'manage.py': [Errno 2] No such file or directory.
I could use command: "python /opt/python/current/app/manage.py collectstatic --noinput", but that would run the manage.py that was successfully deployed previously instead of the one being deployed right now.
I tried to check the working directory of the commands run by the .config file using command: "pwd", and it turns out pwd is /opt/elasticbeanstalk/eb_infra, which doesn't contain my app.
So I probably need to change $PYTHONPATH to contain the right path, but I don't know which path that is.
In this comment the user added the following to his .config file:
option_settings:
aws:elasticbeanstalk:application:environment:
DJANGO_SETTINGS_MODULE: myapp.settings
PYTHONPATH: "./src"
He did that because his manage.py lives inside the src folder at the root of his zip. In my case I would use PYTHONPATH: ".", but it's not working.
AWS support solved the problem. Here's their answer:
When Beanstalk is deploying an application, it keeps your application files in a "staging" directory while the EB Extensions and Hook Scripts are being processed. Once the pre-deploy scripts have finished, the application is then moved to the "production" directory. The issue you are having is related to the "manage.py" file not being in the expected location when your "01_collectstatic" command is being executed.
The staging location for your environment (Python 3.4, Amazon Linux 2017.03) is "/opt/python/ondeck/app".
The EB Extension "commands" section is executed before the staging directory is actually created. To run your script once the staging directory has been created, you should use "container_commands". This section is meant for modifying your application after the application has been extracted, but before it has been deployed to the production directory. It will automatically run your command in your staging directory.
Can you please try implementing the container_command section and see if it helps resolve your problem? The syntax will look similar to this (but please test it before deploying to production):
container_commands:
01_collectstatic:
command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
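As a side note (not part of the AWS reply), container_commands also supports the documented leader_only flag, which is useful for commands such as database migrations that should run on only one instance of the environment:

```yaml
container_commands:
  01_collectstatic:
    command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
  02_migrate:
    command: "source /opt/python/run/venv/bin/activate && python manage.py migrate --noinput"
    leader_only: true
```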
So, the thing to remember about Beanstalk is that each of the commands is independent, and no state is maintained between them. You have two options in this case: put your commands into a shell script that is uploaded via the files section of .ebextensions, or write one-line commands that prefix all the stateful setup to the command of interest.
e.g.,
00_collectstatic:
command: "pushd /path/to/django && source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput && popd"
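A sketch of the first option, uploading a shell script as a pre-deploy hook via the files section (the hook path and script name are assumptions for this Python/Amazon Linux platform generation, not from the answer):

```yaml
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/pre/99_collectstatic.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      # /opt/python/ondeck/app is the staging directory named in the AWS reply
      cd /opt/python/ondeck/app
      . /opt/python/run/venv/bin/activate
      python manage.py collectstatic --noinput
```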
I have installed luigi with pip and I would like to change the port for the web UI. I tried to find the config file but couldn't. Do I need to create one?
You can start luigid with the --port option.
luigid --port 80
Configuration file locations, in increasing order of preference, are:
/etc/luigi/luigi.cfg
luigi.cfg (or its legacy name client.cfg) in your current working directory
the file pointed to by the LUIGI_CONFIG_PATH environment variable
You do need to create one, e.g.:
[core]
default-scheduler-host=www.example.com
default-scheduler-port=8088
In spite of the documentation saying otherwise, the port configuration from the config file is not used, at least in some versions or under some circumstances.
Until this is resolved, you should always use the --port option of luigid:
luigid --port 12345
Also see https://github.com/spotify/luigi/issues/2235
For other configuration options a config file should be used. See https://luigi.readthedocs.io/en/stable/configuration.html
For a configuration global to the host you can create a file:
/etc/luigi/luigi.cfg
Make sure it is readable by the user that runs luigid and luigi.
Alternatively a local configuration file that will be recognized is
luigi.cfg
which you would have to create in the current working directory.
If you want a custom config file location you could set the environment variable LUIGI_CONFIG_PATH to the full path of your config file.
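A quick sketch of that approach, writing a config to a custom location and pointing the variable at it (the path and port value are illustrative):

```shell
# Write a minimal luigi.cfg outside the default search locations
mkdir -p /tmp/luigi-conf
cat > /tmp/luigi-conf/luigi.cfg <<'EOF'
[core]
default-scheduler-port=8088
EOF

# Point luigi at it; both luigid and luigi honour this variable
export LUIGI_CONFIG_PATH=/tmp/luigi-conf/luigi.cfg
# luigid   # would now read /tmp/luigi-conf/luigi.cfg (requires luigi installed)
```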
I am using:
yowsup-celery: https://github.com/jlmadurga/yowsup-celery
to try to integrate WhatsApp into my system.
I have successfully stored messages and now want to run celery as a daemon rather than in a terminal.
To run it normally we use:
celery multi start -P gevent -c 2 -l info --yowconfig:conf_wasap
To run daemon mode we use:
sudo /etc/init.d/celeryd start
Here, how can I pass the config file as an argument? Or is there a way to remove the dependency on passing it as an argument and instead read the file inside the script?
Since yowsup-celery version 0.2.0 it is possible to pass the config file path through configuration instead of as an argument:
YOWSUPCONFIG = "path/to/credentials/file"
I have set up an environment variable which I execute locally using a .sh file:
.sh file:
#!/bin/sh
echo "environment variables"
export BROKER="amqp://admin:password@11.11.11.11:4672//"
Locally inside a virtual environment I can now read this in Python using:
BROKER = os.environ['BROKER']
However, on my production server (Ubuntu) I run the same file (chmod +x name_of_file.sh, then source settings.sh) and can see the variable using printenv, but Python gives the error KeyError: 'BROKER'. Why?
This only happens on my production machine, despite the fact that I can see the variable using printenv. Note that my production machine does not use virtualenv.
If I run the Python shell on Ubuntu and do os.environ['BROKER'], it prints the correct value, so I have no idea why the app does not find it.
This is the task that gets run and cannot find the variable (a supervisor task):
[program:celery]
directory = /srv/app_test/
command=celery -A tasks worker -l info
stdout_logfile = /var/log/celeryd_.log
autostart=true
autorestart=true
startsecs=5
stopwaitsecs = 600
killasgroup=true
priority=998
user=ubuntu
Celery config (which does not find the variable when executed under supervisor):
from kombu import Exchange, Queue
import os
# Celery Settings
BROKER = os.environ['BROKER']
When I restart supervisor it gives the key error.
The environment variables from your shell will not be visible within supervisor tasks.
You need to use the environment setting in your supervisor config:
[program:celery]
...
environment=BROKER="amqp://admin:password@11.11.11.11:4672//"
This requires supervisor 3.0+.
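The underlying behaviour can be demonstrated without supervisor at all (a sketch; the URL is a placeholder). An exported variable is visible to children of your shell, but a daemon such as supervisord is not a child of your shell; env -i simulates that with a scrubbed environment:

```shell
PY=$(command -v python3)

# Visible to children of this shell:
export BROKER="amqp://guest:guest@localhost:5672//"
"$PY" -c 'import os; print(os.environ["BROKER"])'

# A daemon started by init/supervisor never inherits it;
# env -i starts the child with an empty environment to show the effect:
env -i "$PY" -c 'import os; print(os.environ.get("BROKER"))'
```

The first command prints the URL; the second prints None, which is exactly why the Celery config raises KeyError under supervisor until the environment= line is added.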
When I was developing and testing my project, I used to use virtualenvwrapper to manage the environment and run it:
workon myproject
python myproject.py
Of course, once I was in the right virtualenv, I was using the right version of Python, and other corresponding libraries for running my project.
Now, I want to use Supervisord to manage the same project as it is ready for deployment. The question is what is the proper way to tell Supervisord to activate the right virtualenv before executing the script? Do I need to write a separate bash script that does this, and call that script in the command field of Supervisord config file?
One way to use your virtualenv from the command line is to use the python executable located inside of your virtualenv.
For me, the virtualenvs live in the .virtualenvs directory. For example:
/home/ubuntu/.virtualenvs/yourenv/bin/python
There is no need to workon.
For a supervisor.conf managing a Tornado app, I use:
command=/home/ubuntu/.virtualenvs/myapp/bin/python /usr/share/nginx/www/myapp/application.py --port=%(process_num)s
Add your virtualenv's bin path to the environment setting in your supervisord.conf:
[program:myproj-uwsgi]
process_name=myproj-uwsgi
command=/home/myuser/.virtualenvs/myproj/bin/uwsgi
--chdir /home/myuser/projects/myproj
-w myproj:app
environment=PATH="/home/myuser/.virtualenvs/myproj/bin:%(ENV_PATH)s"
user=myuser
group=myuser
killasgroup=true
startsecs=5
stopwaitsecs=10
First, run
$ workon myproject
$ dirname `which python`
/home/username/.virtualenvs/myproject/bin
Add the following
environment=PATH="/home/username/.virtualenvs/myproject/bin"
to the relevant [program:blabla] section of supervisord.conf.
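The two commands above can be combined into one line (a sketch; it works for any interpreter on PATH, virtualenv or not, and uses python3 here rather than the question's python):

```shell
# Resolve the bin directory of the currently active python interpreter
PYBIN=$(dirname "$(command -v python3)")
echo "$PYBIN"
# Paste the result into the environment= line of your program section, e.g.:
#   environment=PATH="$PYBIN"
```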