The Celery docs say:
However, in production you probably want to run the worker in the background as a daemon.
So I wrote the init.d script below:
#!/bin/sh
#
# chkconfig: 345 99 15
# description: celery init.d

# Where to chdir at start.
CELERYD_CHDIR="/home/username/django/django_myapp"

# How to call "manage.py celeryd"
CELERYD="/opt/python27/bin/python manage.py celeryd"
#CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"

# Extra arguments to celeryd
CELERYD_OPTS="--time-limit 300 --concurrency=8"

# Name of the celery config module.
CELERY_CONFIG_MODULE="celeryconfig"

# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"

# Workers should run as an unprivileged user.
CELERYD_USER="root"
CELERYD_GROUP="celery"

# Name of the project's settings module.
export DJANGO_SETTINGS_MODULE="settings"

CELERYD_PIDFILE=/var/run/celery.pid

# Source function library.
. /etc/init.d/functions

# Celery options (note: this overwrites the CELERYD_OPTS set above)
CELERYD_OPTS="-B -l info"

if [ -n "$2" ]; then
    CELERYD_OPTS="$CELERYD_OPTS $2"
fi

start () {
    cd $CELERYD_CHDIR
    daemon --user $CELERYD_USER --pidfile $CELERYD_PIDFILE $CELERYD $CELERYD_OPTS &
}

stop () {
    if [[ -s $CELERYD_PIDFILE ]] ; then
        echo "Stopping Celery"
        killproc -p $CELERYD_PIDFILE python
        echo "done!"
        rm -f $CELERYD_PIDFILE
    else
        echo "Celery not running."
    fi
}

check_status() {
    status -p $CELERYD_PIDFILE python
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    status)
        check_status
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}"
        exit 1
        ;;
esac
When I execute /etc/init.d/celeryd start, it runs fine, but again in the foreground, not in the background. Is that expected, or am I doing it wrong?
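For reference, a hedged sketch of one way to get a detached worker: the commented-out CELERYD_MULTI line above points at django-celery's celeryd_multi management command, which daemonizes on its own instead of staying in the foreground. The paths and flags below mirror the variables in the script but are illustrative, not taken from the original project:

# Sketch only: start a detached worker with celeryd_multi (adjust paths/flags)
cd /home/username/django/django_myapp
/opt/python27/bin/python manage.py celeryd_multi start worker1 \
    --time-limit=300 --concurrency=8 \
    --pidfile=/var/run/celery/%n.pid \
    --logfile=/var/log/celery/%n.log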
I installed the latest version of Docker and set up WSL 2 according to the manual, then started the container with docker-compose up. I need to run the tests with tests/run_tests.sh. But a few seconds after launching, the window with the tests closes, my container disappears from Docker, and when I try to run docker-compose up again, I get the error: Error response from daemon: open \\.\pipe\docker_engine_linux: The system cannot find the file specified.
run_tests.sh:
#!/usr/bin/env sh
# To run locally, execute the command NOT in a container:
# bash tests/run_tests.sh
set -x

if [ -z "$API_ENV" ]; then
    API_ENV=test
fi

if [ "$API_ENV" = "bitbucket_test" ]; then
    COMPOSE_FILE="-f docker-compose.test.yml"
fi

docker-compose build connectors
API_ENV=$API_ENV docker-compose ${COMPOSE_FILE} up -d --force-recreate
connectors_container=$(docker ps -f name=connectors -q | tail -n1)

if [ "$API_ENV" = "bitbucket_test" ]; then
    mkdir -p artifacts && docker logs --follow ${connectors_container} > ./artifacts/docker_connectors_logs.txt 2>&1 &
    pytest_n_processes=100
else
    pytest_n_processes=25
fi

# Timeout for the tests. In Bitbucket we want to stop the tests a bit before the max time, so that
# artifacts are created and logs can be inspected
timeout_cmd="timeout 3.5m"

if [ "$API_ENV" = "bitbucket_test" ] || [ "$API_ENV" = "test" ]; then
    export PYTEST_SENTRY_DSN='http://d07ba0bfff4b41888e311f8398321d14#sentry.windsor.ai/4'
    export PYTEST_SENTRY_ALWAYS_REPORT=1
fi

git fetch origin "+refs/heads/master:refs/remotes/origin/master"

# Lint all the files that are modified in this branch
$(dirname "$0")/run_linters.sh &
linting_pid=$!

# Bitbucket pipelines have 8 workers, use 6 for tests
#
# WARNING: Tests require gunicorn, which is enabled when containers are started with: API_ENV=test docker-compose up -d --force-recreate
# Tests are run in parallel and the cache-locking in threaded Flask doesn't work in this case
${timeout_cmd} docker exec ${connectors_container} bash -c \
    "PYTEST_SENTRY_DSN=$PYTEST_SENTRY_DSN \
    PYTEST_SENTRY_ALWAYS_REPORT=$PYTEST_SENTRY_ALWAYS_REPORT \
    pytest \
    --cov=connectors --cov=api --cov=base \
    --cov-branch --cov-report term-missing --cov-fail-under=71.60 \
    --timeout 60 \
    -v \
    --durations=50 \
    -n $pytest_n_processes \
    tests || ( \
    code=$? `# store the exit code to exit with it` \
    && echo 'TESTS FAILED' \
    && mkdir -p ./artifacts \
    && docker logs ${connectors_container} > ./artifacts/docker_connectors_failure_logs.txt 2>&1 `# Ensure that the logs are complete` \
    )" &
# Get the tests pid
tests_pid=$!

# Wait for linting to finish
wait $linting_pid
linting_code=$?
echo "Linting code: ${linting_code}"

if [ $linting_code -ne 0 ]; then
    echo 'Linting failed'
    # Kill running jobs on exit in local Ubuntu. Some tests were left running by only killing the test_pid.
    kill "$(jobs -p)"
    # Kill the test process explicitly in GitLab pipelines. Was needed because jobs returns empty in GitLab pipelines.
    kill $tests_pid
    exit 1
fi

# Wait for tests to finish
wait $tests_pid
testing_code=$?
echo "Testing code: ${testing_code}"

if [ $testing_code -ne 0 ]; then
    echo 'Tests failed'
    exit 1
else
    echo 'Tests and linting passed'
    exit 0
fi
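As a side note on the error itself: \\.\pipe\docker_engine_linux not being found usually means the Docker Desktop engine (the WSL 2 backend) has stopped, so restarting Docker Desktop is the first thing to try. A small hedged sketch of a guard you could add at the top of run_tests.sh to make that failure mode obvious instead of letting the script half-run:

# Sketch: fail fast if the Docker engine is unreachable (e.g. Docker Desktop
# or its WSL 2 backend has stopped); `docker info` exits non-zero in that case.
if ! docker info >/dev/null 2>&1; then
    echo "Docker engine is not running; start Docker Desktop and retry." >&2
    exit 1
fi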
I'm new to shell scripting; I want the command to be running always.
My .sh file, startscrapy.sh:
#!/bin/bash
echo "Scrapyd is started now"
scrapyd
I have also made it executable: chmod +x /etc/init.d/startscrapy.sh
I placed this file in /etc/init.d but it is not working.
My understanding so far is that /etc/init.d is where .sh files are run whenever the server or system boots up, but I want my .sh file to always be in a running state.
Using crontab you can easily auto-start any script in Ubuntu.
Please do the following steps:
Run the command crontab -e so that you can edit the crontab.
Now add the following line to the crontab editor: @reboot sudo <script>. In your case it should be @reboot sudo scrapyd (note the prefix is @reboot; a leading # would turn the line into a comment).
Now reboot your system, and you will find scrapyd running.
Hope it helps.
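For instance, the resulting crontab entry would look like this (a sketch; the output redirection is optional and the log path is just an illustration):

# In the crontab editor (crontab -e); logging makes boot failures easier to debug
@reboot sudo scrapyd >> /tmp/scrapyd-boot.log 2>&1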
Take a look at this init.d template and change yours accordingly.
Then you need to register the startup script with your init daemon. Under Ubuntu that would be: update-rc.d NAMEofDAEMON defaults
You want to create a daemon. There are some tutorials on the internet for doing this; I took this one for you. In the final part, you might use a different way to register the script; this one is for Ubuntu.
You need to put the following into a file with a name of your choice (I will use "startscrapy.sh" for now); you can modify it, obviously, according to your needs.
#!/bin/sh -e
DAEMON="scrapyd"      # Command to run
daemon_OPT=""         # Arguments for your program
DAEMONUSER="user"     # Program user
daemon_NAME="scrapyd" # Program name (needs to be identical to the executable)
PATH="/sbin:/bin:/usr/sbin:/usr/bin" # Don't touch

test -x $DAEMON || exit 0

. /lib/lsb/init-functions

d_start () {
    log_daemon_msg "Starting system $daemon_NAME Daemon"
    start-stop-daemon --background --name $daemon_NAME --start --quiet --chuid $DAEMONUSER --exec $DAEMON -- $daemon_OPT
    log_end_msg $?
}

d_stop () {
    log_daemon_msg "Stopping system $daemon_NAME Daemon"
    start-stop-daemon --name $daemon_NAME --stop --retry 5 --quiet
    log_end_msg $?
}

case "$1" in
    start|stop)
        d_${1}
        ;;
    restart|reload|force-reload)
        d_stop
        d_start
        ;;
    force-stop)
        d_stop
        killall -q $daemon_NAME || true    # Replace with an appropriate killing method
        sleep 2
        killall -q -9 $daemon_NAME || true # Replace with an appropriate killing method
        ;;
    status)
        status_of_proc "$daemon_NAME" "$DAEMON" "system-wide $daemon_NAME" && exit 0 || exit $?
        ;;
    *)
        echo "Usage: /etc/init.d/$daemon_NAME {start|stop|force-stop|restart|reload|force-reload|status}"
        exit 1
        ;;
esac
exit 0
Then run as root:
chmod 0755 /etc/init.d/startscrapy.sh (adjust to your script's location)
systemctl daemon-reload
update-rc.d startscrapy.sh defaults
To remove the daemon, run as root:
update-rc.d -f startscrapy.sh remove
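To sanity-check the result, you can drive the script through service (a sketch; service resolves names under /etc/init.d, so the name must match the file exactly):

sudo service startscrapy.sh start
sudo service startscrapy.sh status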
I am trying to run a huey task queue on Elastic Beanstalk that is needed by my Flask app. But there is no built-in way to run huey as a daemon process. The author of huey has advised running huey with supervisor (this link), and since Elastic Beanstalk already uses supervisor, I thought we could just add the program to be managed by supervisor. But I am not sure how to do this programmatically. Currently, I am using the container_commands (ref link) key in the config file to run this, but Elastic Beanstalk gives me a timeout error after some time, as it runs in the foreground. Below is the config file I am using.
packages:
  yum:
    gcc: []
    gcc-c++: []
    gcc-gfortran: []
    htop: []
    make: []
    wget: []
    atlas-devel: []
    lapack-devel: []

commands:
  01enable_swap:
    command:
      - sudo dd if=/dev/zero of=/var/swap1 bs=1M count=1024
      - sudo mkswap /var/swap1
      - sudo chmod 644 /var/swap1
      - sudo swapon /var/swap1
    cwd: /home/ec2-user
  02install_redis:
    command:
      - wget "http://download.redis.io/redis-stable.tar.gz"
      - tar -xvzf redis-stable.tar.gz
      - rm redis-stable.tar.gz
      - cd redis-stable
      - sudo make
      - sudo make install
    cwd: /home/ec2-user

container_commands:
  01download_nltk_packages:
    command: "python install_resources.py"
  02run_redis:
    command: "redis-server --host 127.0.0.1 --port 6379 --daemonize yes"
  03run_huey:
    command: "huey_consumer jupiter.huey"
Here's what I want to achieve:
1. huey should run as a background process when my Flask app is deployed.
2. supervisor should handle automatic start/stop of the huey process.
I solved this problem by doing the following in an ebextensions file called 002_supervisor.conf. This is for Django, but I'm sure it could be adapted for Flask.
Create a supervisor config file
Create a supervisor init.d file
Create a huey.conf file to be loaded by supervisor
files:
  /usr/local/etc/supervisord.conf:
    mode: "000755"
    owner: root
    group: root
    content: |
      [unix_http_server]
      file=/tmp/supervisor.sock ; (the path to the socket file)

      [supervisord]
      logfile=/tmp/supervisord.log ; (main log file; default $CWD/supervisord.log)
      pidfile=/tmp/supervisord.pid ; (supervisord pidfile; default supervisord.pid)
      nodaemon=false               ; (start in foreground if true; default false)

      [rpcinterface:supervisor]
      supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

      [supervisorctl]
      serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket

      [include]
      files = /usr/local/etc/*.conf

      [inet_http_server]
      port = 127.0.0.1:9001
  /etc/init.d/supervisord:
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash

      # Source function library
      . /etc/rc.d/init.d/functions

      # Source system settings
      if [ -f /etc/sysconfig/supervisord ]; then
          . /etc/sysconfig/supervisord
      fi

      # Path to the supervisorctl script, server binary,
      # and short-form for messages.
      supervisorctl=/usr/local/bin/supervisorctl
      supervisord=${SUPERVISORD-/usr/local/bin/supervisord}
      prog=supervisord
      pidfile=${PIDFILE-/tmp/supervisord.pid}
      lockfile=${LOCKFILE-/var/lock/subsys/supervisord}
      STOP_TIMEOUT=${STOP_TIMEOUT-60}
      OPTIONS="${OPTIONS--c /usr/local/etc/supervisord.conf}"
      RETVAL=0

      start() {
          echo -n $"Starting $prog: "
          daemon --pidfile=${pidfile} $supervisord $OPTIONS
          RETVAL=$?
          echo
          if [ $RETVAL -eq 0 ]; then
              touch ${lockfile}
              $supervisorctl $OPTIONS status
          fi
          return $RETVAL
      }

      stop() {
          echo -n $"Stopping $prog: "
          killproc -p ${pidfile} -d ${STOP_TIMEOUT} $supervisord
          RETVAL=$?
          echo
          [ $RETVAL -eq 0 ] && rm -rf ${lockfile} ${pidfile}
      }

      reload() {
          echo -n $"Reloading $prog: "
          LSB=1 killproc -p $pidfile $supervisord -HUP
          RETVAL=$?
          echo
          if [ $RETVAL -eq 7 ]; then
              failure $"$prog reload"
          else
              $supervisorctl $OPTIONS status
          fi
      }

      restart() {
          stop
          start
      }

      case "$1" in
          start)
              start
              ;;
          stop)
              stop
              ;;
          status)
              status -p ${pidfile} $supervisord
              RETVAL=$?
              [ $RETVAL -eq 0 ] && $supervisorctl $OPTIONS status
              ;;
          restart)
              restart
              ;;
          condrestart|try-restart)
              if status -p ${pidfile} $supervisord >&/dev/null; then
                  stop
                  start
              fi
              ;;
          force-reload|reload)
              reload
              ;;
          *)
              echo $"Usage: $prog {start|stop|restart|condrestart|try-restart|force-reload|reload}"
              RETVAL=2
              ;;
      esac
      exit $RETVAL
"/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_huey.sh" :
mode: "000755"
owner: root
group: root
content: |
#!/usr/bin/env bash
# Get django environment variables
env=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
env=${env%?}
# Create huey configuration script
hueyconf="[program:huey]
; Set full path to celery program if using virtualenv
command=/opt/python/current/app/production.py run_huey
user=nobody
numprocs=1
stdout_logfile=/var/log/huey.log
stderr_logfile=/var/log/huey.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 60
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
environment=$env"
# Create the celery supervisord conf script
echo "$hueyconf" | tee /usr/local/etc/huey.conf
# Update supervisord in cache without restarting all services
/usr/local/bin/supervisorctl reread
/usr/local/bin/supervisorctl update
# Start/Restart huey through supervisord
/usr/local/bin/supervisorctl -c /usr/local/etc/supervisord.conf restart huey
commands:
  01_start_supervisor:
    command: '/etc/init.d/supervisord restart'
    leader_only: true
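Once deployed, you can verify that supervisor picked huey up. A sketch using the paths configured above:

# Ask the supervisord instance defined in this answer for huey's state
/usr/local/bin/supervisorctl -c /usr/local/etc/supervisord.conf status huey

# Tail the worker log declared in huey.conf
tail -f /var/log/huey.log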
(I have searched and didn't find what I was looking for.)
Recently I installed GNU Health on Ubuntu 14.04.3 following the wikibooks tutorial. Everything worked as expected, but I have to boot up the Tryton server manually every time I start/restart Ubuntu (as described in https://en.wikibooks.org/wiki/GNU_Health/Installation#Booting_up_the_Tryton_Server).
I was wondering if there is any way to make it boot automatically at system startup. I found a script on a site, but it seemed to be outdated and didn't work. Is there any application or script to boot the server automatically, so that I can use the machine as a server without any screen/keyboard/mouse?
This is not a Tryton-specific question but more of an Ubuntu question. You need to set up an init script and install it into the System-V scripts.
Put this script into the file /etc/init.d/tryton-server, replace the DAEMON variable with your trytond path, and check the other variables. Then run the command update-rc.d tryton-server defaults.
#!/bin/sh
### BEGIN INIT INFO
# Provides:          tryton-server
# Required-Start:    $syslog $remote_fs
# Required-Stop:     $syslog $remote_fs
# Should-Start:      $network postgresql mysql
# Should-Stop:       $network postgresql mysql
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Application Platform
# Description:       Tryton is an Application Platform serving as a base for
#                    a complete ERP software.
### END INIT INFO

PATH="/sbin:/bin:/usr/sbin:/usr/bin"
DAEMON="[REPLACE WITH YOUR trytond PATH]"
test -x "${DAEMON}" || exit 0

NAME="trytond"
DESC="Tryton Application Platform"
DAEMONUSER="tryton"
PIDDIR="/var/run/${NAME}"
PIDFILE="${PIDDIR}/${NAME}.pid"
LOGFILE="/var/log/tryton/${NAME}.log"
DEFAULTS="/etc/default/tryton-server"
CONFIGFILE="/etc/${NAME}.conf"
DAEMON_OPTS="--config=${CONFIGFILE} --logfile=${LOGFILE}"

# Include tryton-server defaults if available
if [ -r "${DEFAULTS}" ]
then
    . "${DEFAULTS}"
fi

. /lib/lsb/init-functions

# Make sure trytond is started with configured locale
if [ -n "${LANG}" ]
then
    LANG="${LANG}"
    export LANG
fi

set -e

do_start () {
    if [ ! -d "${PIDDIR}" ]
    then
        mkdir -p "${PIDDIR}"
        chown "${DAEMONUSER}":"${DAEMONUSER}" "${PIDDIR}"
    fi
    start-stop-daemon --start --quiet --pidfile ${PIDFILE} \
        --chuid ${DAEMONUSER} --background --make-pidfile \
        --exec ${DAEMON} -- ${DAEMON_OPTS}
}

do_stop () {
    start-stop-daemon --stop --quiet --pidfile ${PIDFILE} --oknodo
}

case "${1}" in
    start)
        log_daemon_msg "Starting ${DESC}" "${NAME}"
        do_start
        log_end_msg ${?}
        ;;
    stop)
        log_daemon_msg "Stopping ${DESC}" "${NAME}"
        do_stop
        log_end_msg ${?}
        ;;
    restart|force-reload)
        log_daemon_msg "Restarting ${DESC}" "${NAME}"
        do_stop
        sleep 1
        do_start
        log_end_msg ${?}
        ;;
    status)
        status_of_proc -p ${PIDFILE} ${DAEMON} ${NAME} && \
            exit 0 || exit ${?}
        ;;
    *)
        N="/etc/init.d/${NAME}"
        echo "Usage: ${N} {start|stop|restart|force-reload|status}" >&2
        exit 1
        ;;
esac
exit 0
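Putting the steps described above together (a sketch; run as root, and adjust the source path if you saved the script elsewhere):

cp tryton-server /etc/init.d/tryton-server   # the script above
chmod 755 /etc/init.d/tryton-server
update-rc.d tryton-server defaults           # register with the System-V runlevels
service tryton-server start                  # or wait for the next boot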
I want to restart a process (a Python script) automatically if it ends/crashes.
Below is my bash init.d script so far.
I was thinking of replacing the do_start() call in the start section with:
until do_start; do
    echo "Restarting.." >> error.txt
    sleep 1
done
Unfortunately, this does not seem to be working; my script is not restarting. Does anyone have a tip?
#!/bin/bash
### BEGIN INIT INFO
# Provides:          RPiQuadroServer
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Put a short description of the service here
# Description:       Put a long description of the service here
### END INIT INFO

WORK_DIR="/var/lib/RPiQuadroServer"
DAEMON="/usr/bin/python"
ARGS="/usr/local/bin/RPiQuadroServer.py"
PIDFILE="/var/run/RPiQuadroServer.pid"
USER="root"

. /lib/lsb/init-functions

do_start() {
    log_daemon_msg "Starting system $DAEMON $ARGS daemon"
    mkdir -p "$WORK_DIR"
    /sbin/start-stop-daemon --start --pidfile $PIDFILE \
        --user $USER --group $USER \
        -b --make-pidfile \
        --chuid $USER \
        --exec $DAEMON $ARGS
    log_end_msg $?
}

do_stop() {
    log_daemon_msg "Stopping system $DAEMON $ARGS daemon"
    /sbin/start-stop-daemon --stop --pidfile $PIDFILE --verbose
    log_end_msg $?
}

case "$1" in
    start)
        do_start
        ;;
    stop)
        log_daemon_msg "Stopping system $DAEMON $ARGS daemon"
        do_stop
        ;;
    restart|reload|force-reload)
        do_stop
        do_start
        ;;
    status)
        status_of_proc "$DAEMON $ARGS" "$DAEMON $ARGS" && exit 0 || exit $?
        ;;
    *)
        echo "Usage: /etc/init.d/$USER {start|stop|restart|status}"
        exit 1
        ;;
esac
You need some kind of supervision.
For example monit. It will do what you described: start/restart a process.
For example:
check process RPiQuadroServer pidfile /var/run/RPiQuadroServer.pid
    start program = "/etc/init.d/RPiQuadroServer start"
    stop program = "/etc/init.d/RPiQuadroServer stop"
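After adding that check (typically as a file under /etc/monit/conf.d/), reload monit so it takes effect. A sketch of the standard commands:

sudo monit reload   # re-read the configuration
sudo monit summary  # confirm the RPiQuadroServer check is now listed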
In the until loop, the body executes until the do_start command exits successfully, which means it keeps looping as long as do_start returns a non-zero exit code. But in your code, you never set a failure exit status, so do_start effectively always returns 0 and the loop stops after the first iteration. You might want to propagate the exit code, for example with return $? right after launching the Python process.
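A hedged sketch of that suggestion applied to the script above: capture start-stop-daemon's status and return it, so the until loop can actually see failures. Note that with --background, start-stop-daemon returns as soon as the process is spawned, so this still won't catch crashes that happen later; that is what a supervisor like monit (above) is for.

do_start() {
    log_daemon_msg "Starting system $DAEMON $ARGS daemon"
    mkdir -p "$WORK_DIR"
    /sbin/start-stop-daemon --start --pidfile $PIDFILE \
        --user $USER --group $USER \
        -b --make-pidfile \
        --chuid $USER \
        --exec $DAEMON $ARGS
    rc=$?              # remember start-stop-daemon's exit status
    log_end_msg $rc
    return $rc         # let `until do_start; do ...; done` see failures
}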