How to keep a .sh file in a running state always - python

I'm new to shell scripting, and I want this command to be running at all times.
My .sh file - startscrapy.sh:
#!/bin/bash
echo "Scrapyd is started now"
scrapyd
I have also made it executable with chmod +x /etc/init.d/startscrapy.sh
and placed the file in /etc/init.d, but it is not working.
My understanding so far is that /etc/init.d is for scripts that run whenever the server or system boots up,
but I want my .sh file to stay in a running state at all times.

Using crontab you can easily auto-start any script in Ubuntu.
Please do the following steps:
Run the command crontab -e so that you can edit the crontab.
Now add the line @reboot sudo <script> to the crontab editor; in your case it should be @reboot sudo scrapyd.
Now reboot your system, and you will find scrapyd running.
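For reference, the resulting crontab line might look roughly like this (the log path is only an illustration; adjust or drop it as you like):
@reboot sudo scrapyd >> /var/log/scrapyd-boot.log 2>&1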
Hope it Helps.

Take a look at this init.d template and change yours accordingly.
Then you need to register the startup script with your initialisation daemon. Under Ubuntu that would be update-rc.d NAMEofDAEMON defaults

You want to create a daemon. There are some tutorials on the internet for doing this; I took this one for you. In the final part you might use a different way to register the script; this one is for Ubuntu.
You need to put the following into a file with a name of your choice (I will use "startscrapy.sh" for now). You can obviously modify it according to your needs.
#!/bin/sh -e
DAEMON="scrapyd"                     # Command to run
daemon_OPT=""                        # Arguments for your program
DAEMONUSER="user"                    # Program user
daemon_NAME="scrapyd"                # Program name (needs to be identical to the executable)
PATH="/sbin:/bin:/usr/sbin:/usr/bin" # Don't touch
test -x $DAEMON || exit 0
. /lib/lsb/init-functions

d_start () {
    log_daemon_msg "Starting system $daemon_NAME Daemon"
    start-stop-daemon --background --name $daemon_NAME --start --quiet --chuid $DAEMONUSER --exec $DAEMON -- $daemon_OPT
    log_end_msg $?
}

d_stop () {
    log_daemon_msg "Stopping system $daemon_NAME Daemon"
    start-stop-daemon --name $daemon_NAME --stop --retry 5 --quiet --name $daemon_NAME
    log_end_msg $?
}

case "$1" in
    start|stop)
        d_${1}
        ;;
    restart|reload|force-reload)
        d_stop
        d_start
        ;;
    force-stop)
        d_stop
        killall -q $daemon_NAME || true    # Replace with an appropriate killing method
        sleep 2
        killall -q -9 $daemon_NAME || true # Replace with an appropriate killing method
        ;;
    status)
        status_of_proc "$daemon_NAME" "$DAEMON" "system-wide $daemon_NAME" && exit 0 || exit $?
        ;;
    *)
        echo "Usage: /etc/init.d/$daemon_NAME {start|stop|force-stop|restart|reload|force-reload|status}"
        exit 1
        ;;
esac
exit 0
Then run as root:
chmod 0755 /etc/init.d/startscrapy.sh (adjust the path to your script location)
systemctl daemon-reload
update-rc.d startscrapy.sh defaults
To remove the daemon, run as root :
update-rc.d -f startscrapy.sh remove
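Once registered, the script behaves like any other init.d service; a quick usage sketch, assuming the file name used above:
/etc/init.d/startscrapy.sh start
/etc/init.d/startscrapy.sh status
/etc/init.d/startscrapy.sh stop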

Related

Interact with docker container in the middle of a bash script execution [in that container]

I want to start a bunch of docker containers with the help of a Python script. I am using the subprocess library for that. Essentially, I am trying to run this docker command
docker = f"docker run -it --rm {env_vars} {hashes} {results} {script} {pipeline} --name {project} {CONTAINER_NAME}"
in a new terminal window.
Popen(f'xterm -T {project} -geometry 150x30+100+350 -e {docker}', shell=True)
# or
Popen(f'xfce4-terminal -T {project} --minimize {hold} -e="{docker}"', shell=True)
The container's CMD looks like this. It's a bash script that runs other scripts and the functions within them.
CMD ["bash", "/run_pipeline.sh"]
What I am trying to do is run an interactive shell (bash) from one of these nested scripts at a specific point in case of a failure (i.e. when some condition is met), so I can investigate the problem in the script, do something to fix it, and continue execution (or just exit if I cannot fix it).
if [ $? -ne 0 ]; then
    echo Investigate manually: "$REPO_NAME"
    bash
    if [ $? -ne 0 ]; then exit 33; fi
fi
I want to do this fully automatically, so I don't have to manually keep track of what is going on with each script and execute docker attach... when needed, because I will run multiple such containers simultaneously.
The problem is that this "rescue" bash process exits immediately and I don't know why. I think it is something about ttys and such, but I've tried a bunch of fiddling around with it and had no success.
I tried different combinations of -i, -t and -d on the docker command, tried to use docker attach... right after starting the container with -d, and also tried starting the Python script directly from bash in a terminal (I am using PyCharm by default). Besides that, I tried the socat, screen, script and getty commands (in the nested bash script), but I don't know how to use them properly, so that didn't end well either. At this point I'm too confused to understand why it isn't working.
EDIT:
Adding a minimal example of how I am starting a container (note that it does NOT reproduce the failure; this part works).
# ./Dockerfile
FROM debian:bookworm-slim
SHELL ["bash", "-c"]
CMD ["bash", "/run_pipeline.sh"]
# run 'docker build -t test .'
# ./small_example.py
from subprocess import Popen

if __name__ == '__main__':
    env_vars = f"-e REPO_NAME=test -e PROJECT=test_test"
    script = f'-v "$(pwd)"/run_pipeline.sh:/run_pipeline.sh:ro'
    docker = f"docker run -it --rm {env_vars} {script} --name test_name test"
    # Popen(f'xterm -T test -geometry 150x30+100+350 +hold -e "{docker}"', shell=True).wait()
    Popen(f'xfce4-terminal -T test --hold -e="{docker}"', shell=True).wait()
# ./run_pipeline.sh
# do some hard work
ls non/existent/path
if [ $? -ne 0 ]; then
    echo Investigate manually: "$REPO_NAME"
    bash
    if [ $? -ne 0 ]; then exit 33; fi
fi
It seems like the problem may be in the run_pipeline.sh script, but I don't want to upload it here; it's a bigger mess than what I described earlier. I will say, though, that I am trying to run this thing - https://github.com/IBM/D2A.
So I just wanted some advice on the tty stuff that I am probably missing.
Run the initial container detached, with input and a tty.
docker run -dit --rm {env_vars} {script} --name test_name test
Monitor the container logs for the output, then attach to it.
Here is a quick script example (without a tty in this case, only because the demo uses echo for input):
#!/bin/bash
docker run --name test_name -id debian \
bash -c 'echo start; sleep 10; echo "reading"; read var; echo "var=$var"'
while ! docker logs test_name | grep reading; do
sleep 3
done
echo "attach input" | docker attach test_name
The complete output after it finishes:
$ docker logs test_name
start
reading
var=attach input
The whole process would be easier to control via the Docker Python SDK rather than having a layer of shell between the python and Docker.
As I said in a comment to Matt's answer, his solution does not work in my situation either. I think it's a problem with the script that I'm running; some of the many shell processes (https://imgur.com/a/JiPYGWd) seem to be taking up the allocated tty, but I don't know for sure.
So I came up with my own workaround: I simply block execution of the script by creating a named pipe and then reading from it.
if [ $? -ne 0 ]; then
    echo Investigate _make_ manually: "$REPO_NAME"
    mkfifo "/tmp/mypipe_$githash" && echo "/tmp/mypipe_$githash" && read -r res < "/tmp/mypipe_$githash"
    if [ $res -ne 0 ]; then exit 33; fi
fi
Then I just launch a terminal emulator and execute docker exec in it to start a new bash process. I do it with the help of the Docker Python SDK, by monitoring the output of the container so I know when to launch the terminal.
def monitor_container_output(container):
    line = b''
    for log in container.logs(stream=True):
        if log == b'\n':
            print(line.decode())
            if b'mypipe_' in line:
                Popen(f'xfce4-terminal -T {container.name} -e="docker exec -it {container.name} bash"', shell=True).wait()
            line = b''
            continue
        line += log

client = docker.from_env()
container = client.containers.run(IMAGE_NAME, name=project, detach=True, stdin_open=True, tty=True,
                                  auto_remove=True, environment=env_vars, volumes=volumes)
monitor_container_output(container)
After I finish investigating the problem in that new bash process, I send a "status code of the investigation" to tell the script to continue running or to exit.
echo 0 > "/tmp/mypipe_$githash"
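For anyone who wants to reuse the idea, here is a minimal, self-contained sketch of that pause-and-resume pattern on its own, outside of the pipeline (the pipe name is illustrative):
#!/bin/bash
pipe="/tmp/mypipe_demo"
mkfifo "$pipe"
echo "Paused. From another shell, run: echo 0 > $pipe (or a non-zero code to abort)"
read -r res < "$pipe"              # blocks here until something is written to the pipe
rm -f "$pipe"
if [ "$res" -ne 0 ]; then exit 33; fi
echo "Continuing..."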

How to boot Tryton server automatically

(I have searched and didn't find what I was looking for.)
Recently I installed GNU Health on Ubuntu 14.04.3 following the wikibooks tutorial. Everything worked as expected, but I have to boot up the Tryton server manually every time I start/restart Ubuntu (as given in https://en.wikibooks.org/wiki/GNU_Health/Installation#Booting_up_the_Tryton_Server ).
I was wondering if there is any way to make it boot automatically at system startup. I found a script on a site, but it seemed to be outdated and didn't work. Is there any application or script to boot the server automatically, so that I can use the machine as a server without any screen/keyboard/mouse?
This is not a Tryton-specific question but more of an Ubuntu question. You need to set up an init script and install it among the System-V scripts.
Put this script into the file /etc/init.d/tryton-server, replace the DAEMON variable with your trytond path, and check the other variables. Then run the update-rc.d tryton-server defaults command.
#!/bin/sh
### BEGIN INIT INFO
# Provides: tryton-server
# Required-Start: $syslog $remote_fs
# Required-Stop: $syslog $remote_fs
# Should-Start: $network postgresql mysql
# Should-Stop: $network postgresql mysql
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Application Platform
# Description: Tryton is an Application Platform serving as a base for
# a complete ERP software.
### END INIT INFO
PATH="/sbin:/bin:/usr/sbin:/usr/bin"
DAEMON="[REPLACE WITH YOUR trytond PATH]"
test -x "${DAEMON}" || exit 0
NAME="trytond"
DESC="Tryton Application Platform"
DAEMONUSER="tryton"
PIDDIR="/var/run/${NAME}"
PIDFILE="${PIDDIR}/${NAME}.pid"
LOGFILE="/var/log/tryton/${NAME}.log"
DEFAULTS="/etc/default/tryton-server"
CONFIGFILE="/etc/${NAME}.conf"
DAEMON_OPTS="--config=${CONFIGFILE} --logfile=${LOGFILE}"
# Include tryton-server defaults if available
if [ -r "${DEFAULTS}" ]
then
    . "${DEFAULTS}"
fi

. /lib/lsb/init-functions

# Make sure trytond is started with configured locale
if [ -n "${LANG}" ]
then
    LANG="${LANG}"
    export LANG
fi

set -e

do_start ()
{
    if [ ! -d "${PIDDIR}" ]
    then
        mkdir -p "${PIDDIR}"
        chown "${DAEMONUSER}":"${DAEMONUSER}" "${PIDDIR}"
    fi
    start-stop-daemon --start --quiet --pidfile ${PIDFILE} \
        --chuid ${DAEMONUSER} --background --make-pidfile \
        --exec ${DAEMON} -- ${DAEMON_OPTS}
}

do_stop ()
{
    start-stop-daemon --stop --quiet --pidfile ${PIDFILE} --oknodo
}

case "${1}" in
    start)
        log_daemon_msg "Starting ${DESC}" "${NAME}"
        do_start
        log_end_msg ${?}
        ;;
    stop)
        log_daemon_msg "Stopping ${DESC}" "${NAME}"
        do_stop
        log_end_msg ${?}
        ;;
    restart|force-reload)
        log_daemon_msg "Restarting ${DESC}" "${NAME}"
        do_stop
        sleep 1
        do_start
        log_end_msg ${?}
        ;;
    status)
        status_of_proc -p ${PIDFILE} ${DAEMON} ${NAME} && \
            exit 0 || exit ${?}
        ;;
    *)
        N="/etc/init.d/${NAME}"
        echo "Usage: ${N} {start|stop|restart|force-reload|status}" >&2
        exit 1
        ;;
esac
exit 0
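Putting it together, installing and testing the script might look roughly like this (run as root; adjust paths to your setup):
cp tryton-server /etc/init.d/tryton-server
chmod 0755 /etc/init.d/tryton-server
update-rc.d tryton-server defaults
service tryton-server start
service tryton-server status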

Returning from bash script that invokes a python script

I have a bash script (Controller.sh) that invokes a python script (MyDaemon.py). The latter takes an argument and a command, and can be invoked from the command line like so:
/usr/bin/python /opt/stuff/MyDaemon.py -p Blue start
or
/usr/bin/python /opt/stuff/MyDaemon.py -p Blue stop
or
/usr/bin/python /opt/stuff/MyDaemon.py -p Blue status
I am attempting to get Controller.sh to invoke MyDaemon.py and then exit with a status. The python script should be kicked off and Controller.sh should return. This is my Controller.sh code:
COLOR=$1
COMMAND=$2
DIRNAME=`dirname $0`
RESULT="/tmp/$COLOR.$COMMAND.result"
# remove any old console output
rm -f $RESULT 2>/dev/null
#start with CPU affinity for anything other than CPU 0.
sudo taskset -c 1-8 /usr/bin/python /opt/stuff/MyDaemon.py -p $COLOR $COMMMAND</dev/null >$RESULT 2>&1
STATUS=$?
# print output
cat $RESULT
# check on success
if [ $STATUS -ne 0 ]
then
echo "ERROR: $COLOR $COMMAND failed"
exit 1
fi
Now, if on the command line I invoke Controller.sh blue start it kicks off the python script, but Controller.sh does not return a status. On the other hand, if I run the following it does return:
[nford@myserver]# sudo taskset -c 1-8 /usr/bin/python /opt/stuff/MyDaemon.py -p blue start</dev/null >/tmp/blah.log 2>&1
Started with pid 1326
[nford@myserver]#
I am forced to conclude that there is something about the bash script that is preventing it from returning.
It should be noted that MyDaemon.py does fork processes, which is why I need to redirect output. It should also be noted that I'm lifting the majority of this from another script that does something similar with a php script; I'm fuzzy on some of its meaning (such as STATUS=$?). That said, even if I cut out everything after the sudo taskset invocation line, it still fails to return cleanly. How do I get the bash script to properly execute this command?
Post-Script: I'm a little baffled how this question is 'too specific' and was down-voted/voted to close. In an attempt to be crystal clear; I'm trying to understand the differences in how a forking process script runs in the context of the command line versus a bash script. I've provided a specific example above, but this is a general concept.
UPDATE:
This is the output when I run the script with bash -x, further showing that it dies on the sudo taskset line. The fact that the start command has been left off is confusing.
[nford@myserver]# bash -x Controller.sh Blue start
+ COLOR=Blue
+ COMMAND=start
++ dirname Controller.sh
+ DIRNAME=.
+ RESULT=/tmp/Blue.start.result
+ rm -f /tmp/Blue.start.result
+ sudo taskset -c 1-8 /usr/bin/python /opt/stuff/MyDaemon.py -p Blue
UPDATE:
bash -x reveals the problem: the start command is not being passed through; a typo in the variable name produces a silent bash error. Takeaway: use bash -x for debugging!
Because of your typo - you should use set -u at the top of your scripts. It's a life saver that stops sleepless nights, as well as negating the pulling of hair.
set -u would have given you...
myscript.sh: line 11: COMMMAND: unbound variable
Remember you can run scripts like so: bash -u myscript.sh arg1 arg2, and likewise with -x; they both help in tracking down script issues.
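A tiny sketch of the failure mode, mirroring the question's variables (the typo is deliberate):
#!/bin/bash
set -u                              # abort on any reference to an unset variable
COLOR=$1
COMMAND=$2
echo "running: $COLOR $COMMMAND"    # bash stops here with "COMMMAND: unbound variable"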

How to source virtualenv activate in a Bash script

How do you create a Bash script to activate a Python virtualenv?
I have a directory structure like:
.env
    bin
        activate
        ...other virtualenv files...
src
    shell.sh
    ...my code...
I can activate my virtualenv by:
user@localhost:src$ . ../.env/bin/activate
(.env)user@localhost:src$
However, doing the same from a Bash script does nothing:
user@localhost:src$ cat shell.sh
#!/bin/bash
. ../.env/bin/activate
user@localhost:src$ ./shell.sh
user@localhost:src$
What am I doing wrong?
When you source, you're loading the activate script into your active shell.
When you do it in a script, you load it into that shell which exits when your script finishes and you're back to your original, unactivated shell.
Your best option would be to do it in a function
activate () {
. ../.env/bin/activate
}
or an alias
alias activate=". ../.env/bin/activate"
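Either way, usage is then from your interactive shell; a sketch, assuming the function or alias is defined in something like ~/.bashrc and you are inside src:
user@localhost:src$ activate
(.env)user@localhost:src$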
You should call the bash script using source.
Here is an example:
#!/bin/bash
# Let's call this script venv.sh
source "<absolute_path_recommended_here>/.env/bin/activate"
On your shell just call it like that:
> source venv.sh
Or as @outmind suggested (note that this does not work with zsh):
> . venv.sh
There you go, the shell indication will be placed on your prompt.
Although it doesn't add the "(.env)" prefix to the shell prompt, I found this script works as expected.
#!/bin/bash
script_dir=`dirname $0`
cd $script_dir
/bin/bash -c ". ../.env/bin/activate; exec /bin/bash -i"
e.g.
user@localhost:~/src$ which pip
/usr/local/bin/pip
user@localhost:~/src$ which python
/usr/bin/python
user@localhost:~/src$ ./shell
user@localhost:~/src$ which pip
~/.env/bin/pip
user@localhost:~/src$ which python
~/.env/bin/python
user@localhost:~/src$ exit
exit
Sourcing runs shell commands in your current shell. When you source inside of a script like you are doing above, you are affecting the environment for that script, but when the script exits, the environment changes are undone, as they've effectively gone out of scope.
If your intent is to run shell commands in the virtualenv, you can do that in your script after sourcing the activate script. If your intent is to interact with a shell inside the virtualenv, then you can spawn a sub-shell inside your script which would inherit the environment.
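A minimal sketch of both options, assuming the same ../.env layout as in the question:
#!/bin/bash
# Option 1: run commands inside the virtualenv from within the script.
. ../.env/bin/activate
pip --version          # uses the virtualenv's pip

# Option 2 (instead of, or after, the commands above): hand over to an
# interactive sub-shell that inherits the activated environment.
exec /bin/bash -i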
Here is the script that I use often. Run it as $ source script_name
#!/bin/bash -x
PWD=`pwd`
/usr/local/bin/virtualenv --python=python3 venv
echo $PWD
activate () {
. $PWD/venv/bin/activate
}
activate
You can also do this using a subshell to better contain your usage - here's a practical example:
#!/bin/bash
commandA --args
# Run commandB in a subshell and collect its output in $VAR
# NOTE
# - PATH is only modified as an example
# - output beyond a single value may not be captured without quoting
# - it is important to discard (or separate) virtualenv activation stdout
# if the stdout of commandB is to be captured
#
VAR=$(
PATH="/opt/bin/foo:$PATH"
. /path/to/activate > /dev/null # activate virtualenv
commandB # tool from /opt/bin/ which requires virtualenv
)
# Use the output from commandB later
commandC "$VAR"
This style is especially helpful when
a different version of commandA or commandC exists under /opt/bin
commandB exists in the system PATH or is very common
these commands fail under the virtualenv
one needs a variety of different virtualenvs
What do you need to source the bash script for?
If you intend to switch between multiple virtualenvs or enter one virtualenv quickly, have you tried virtualenvwrapper? It provides a lot of utils like workon venv, mkvirtualenv venv and so on.
If you just want to run a python script in a certain virtualenv, use /path/to/venv/bin/python script.py to run it.
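For reference, those two suggestions look roughly like this (the environment name and paths are illustrative):
mkvirtualenv myenv                    # create and activate a virtualenv via virtualenvwrapper
workon myenv                          # jump back into it later
/path/to/venv/bin/python script.py    # or skip activation entirely and call the venv's python directly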
As others already stated, what you are doing wrong is not sourcing the script you created. When you run the script just like you showed, it creates a new shell which activates the virtual environment and then exits, so there are no changes to your original shell from which you ran the script.
You need to source the script, which will make it run in your current shell.
You can do that by calling source shell.sh or . shell.sh
To make sure the script is sourced instead of executed normally, it's nice to have some checks in place in the script to remind you; for example, the script I use is this:
#!/bin/bash
if [[ "$0" = "$BASH_SOURCE" ]]; then
    echo "Needs to be run using source: . activate_venv.sh"
else
    VENVPATH="venv/bin/activate"
    if [[ $# -eq 1 ]]; then
        if [ -d $1 ]; then
            VENVPATH="$1/bin/activate"
        else
            echo "Virtual environment $1 not found"
            return
        fi
    elif [ -d "venv" ]; then
        VENVPATH="venv/bin/activate"
    elif [ -d "env" ]; then
        VENVPATH="env/bin/activate"
    fi
    echo "Activating virtual environment $VENVPATH"
    source "$VENVPATH"
fi
It's not bulletproof but it's easy to understand and does its job.
You should use multiple commands in one line. For example:
os.system(". Projects/virenv/bin/activate && python Projects/virenv/django-project/manage.py runserver")
When you activate your virtual environment on its own line, I think it is forgotten for the following command lines; you can prevent this by chaining the commands on one line.
It worked for me :)
When I was learning venv I created a script to remind me how to activate it.
#!/bin/sh
# init_venv.sh
if [ -d "./bin" ];then
echo "[info] Ctrl+d to deactivate"
bash -c ". bin/activate; exec /usr/bin/env bash --rcfile <(echo 'PS1=\"(venv)\${PS1}\"') -i"
fi
This has the advantage that it changes the prompt.
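A session with it looks roughly like this (prompt and path are illustrative; run it from the directory that contains the virtualenv's bin/):
user@localhost:~/my-venv$ ./init_venv.sh
[info] Ctrl+d to deactivate
(venv)user@localhost:~/my-venv$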
As stated in other answers, when you run a script, it creates a sub-shell.
When the script exits, all modifications to that shell are lost.
What we need is actually to run a new shell where the virtual environment is active, and not exit from it.
Be aware, this is a new shell, not the one in use before you run your script.
What this means is, if you type exit in it, you will exit from the subshell and return to the previous one (the one where you ran the script); it won't close your xterm or whatever, as you may have expected.
The trouble is, when we exec bash, it reads its rc files (/etc/bash.bashrc, ~/.bashrc), which will change the shell environment. The solution is to provide bash with a way to set up the shell as usual, while additionally activating the virtual environment. To do this, we create a temporary file recreating the original bash behavior and adding the few things we need to enable our venv. We then ask bash to use it instead of its usual rc files.
A beneficial side-effect of having a new shell "dedicated" to our venv is that to deactivate the virtual environment, the only thing needed is to exit the shell.
I use this in the script shown below to provide a 'deactivate' option, which works by sending a signal to the new shell (kill -SIGUSR1); this signal is intercepted (trap ...) and provokes the exit from the shell.
Note: I use SIGUSR1 so as not to interfere with whatever could be set in the "normal" behavior.
The script I use:
#!/bin/bash

PYTHON=python3
myname=$(basename "$0")
mydir=$(cd $(dirname "$0") && pwd)
venv_dir="${mydir}/.venv/dev"

usage() {
    printf "Usage: %s (activate|deactivate)\n" "$myname"
}

[ $# -eq 1 ] || { usage >&2; exit 1; }

in_venv() {
    [ -n "$VIRTUAL_ENV" -a "$VIRTUAL_ENV" = "$venv_dir" -a -n "$VIRTUAL_ENV_SHELL_PID" ]
}

case $1 in
    activate)
        # check if already active
        in_venv && {
            printf "Virtual environment already active\n"
            exit 0
        }
        # check if created
        [ -e "$venv_dir" ] || {
            $PYTHON -m venv --clear --prompt "venv: dev" "$venv_dir" || {
                printf "Failed to initialize venv\n" >&2
                exit 1
            }
        }
        # activate
        tmp_file=$(mktemp)
        cat <<EOF >"$tmp_file"
# original bash behavior
if [ -f /etc/bash.bashrc ]; then
    source /etc/bash.bashrc
fi
if [ -f ~/.bashrc ]; then
    source ~/.bashrc
fi
# activating venv
source "${venv_dir}/bin/activate"
# remove deactivate function:
# we don't want to call it by mistake
# and forget we have an additional shell running
unset -f deactivate
# exit venv shell
venv_deactivate() {
    printf "Exiting virtual env shell.\n" >&2
    exit 0
}
trap "venv_deactivate" SIGUSR1
VIRTUAL_ENV_SHELL_PID=$$
export VIRTUAL_ENV_SHELL_PID
# remove ourselves, don't leave temporary files lying around
rm -f "${tmp_file}"
EOF
        exec "/bin/bash" --rcfile "$tmp_file" -i || {
            printf "Failed to execute virtual environment shell\n" >&2
            exit 1
        }
        ;;
    deactivate)
        # check if active
        in_venv || {
            printf "Virtual environment not found\n" >&2
            exit 1
        }
        # exit venv shell
        kill -SIGUSR1 $VIRTUAL_ENV_SHELL_PID || {
            printf "Failed to kill virtual environment shell\n" >&2
            exit 1
        }
        exit 0
        ;;
    *)
        usage >&2
        exit 1
        ;;
esac
I simply added this into my .bashrc-personal config file.
function sv () {
    if [ -d "venv" ]; then
        source "venv/bin/activate"
    else
        if [ -d ".venv" ]; then
            source ".venv/bin/activate"
        else
            echo "Error: No virtual environment detected!"
        fi
    fi
}
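Usage is then simply (the project path is illustrative):
cd ~/projects/myapp    # a project containing a venv/ or .venv/ directory
sv                     # sources the activate script in the current shell
deactivate             # provided by activate; run it when you are done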

Celery daemon script not going in background with init script

The celery docs say:
However, in production you probably want to run the worker in the background as a daemon.
I made the init.d script as below
#!/bin/sh
#
# chkconfig: 345 99 15
# description: celery init.d
# Where to chdir at start.
CELERYD_CHDIR="/home/username/django/django_myapp"
# How to call "manage.py celeryd_multi"
CELERYD="/opt/python27/bin/python manage.py celeryd "
#CELERYD_MULTI="$CELERYD_CHDIR/manage.py celeryd_multi"
# Extra arguments to celeryd
CELERYD_OPTS="--time-limit 300 --concurrency=8"
# Name of the celery config module.
CELERY_CONFIG_MODULE="celeryconfig"
# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="root"
CELERYD_GROUP="celery"
# Name of the projects settings module.
export DJANGO_SETTINGS_MODULE="settings"
CELERYD_PIDFILE=/var/run/celery.pid
# Source function library.
. /etc/init.d/functions
# Celery options
CELERYD_OPTS="-B -l info"
if [ -n "$2" ]; then
    CELERYD_OPTS="$CELERYD_OPTS $2"
fi

start () {
    cd $CELERYD_CHDIR
    daemon --user $CELERYD_USER --pidfile $CELERYD_PIDFILE $CELERYD $CELERYD_OPTS &
}

stop () {
    if [[ -s $CELERYD_PIDFILE ]] ; then
        echo "Stopping Celery"
        killproc -p $CELERYD_PIDFILE python
        echo "done!"
        rm -f $CELERYD_PIDFILE
    else
        echo "Celery not running."
    fi
}

check_status() {
    status -p $CELERYD_PIDFILE python
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    status)
        check_status
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}"
        exit 1
        ;;
esac
When I execute /etc/init.d/celeryd start,
it runs fine, but again in the foreground, not in the background.
Is that expected, or am I doing it wrong?
