Linux Script to start webiopi service - python

LOG_FILE=/var/log/webiopi
CONFIG_FILE=/etc/webiopi/config
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="WebIOPi"
NAME=webiopi
HOME=/usr/share/webiopi/htdocs
DAEMON=/usr/bin/python
DAEMON_ARGS="-m webiopi -l $LOG_FILE -c $CONFIG_FILE"
PIDFILE=/var/run/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
1) The [ -x "$DAEMON" ] test only checks that the Python interpreter is installed and executable; it does not check that the webiopi package itself is installed, does it?
2) Does python -m run the whole webiopi package rather than just an individual module?
# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
3) How do the values from the configuration file /etc/webiopi/config get into /etc/default/webiopi? From the lines above, I don't see any command that does that.
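(For context, not part of the script: /etc/default/webiopi is an optional override file, a plain shell fragment that this init script sources if it exists; nothing copies /etc/webiopi/config into it. A hypothetical example of such a file, reusing the variable names defined above:)
# /etc/default/webiopi -- hand-maintained overrides, sourced by the init script
LOG_FILE=/var/log/webiopi
CONFIG_FILE=/etc/webiopi/config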
#
# Function that starts the daemon/service
#
do_start()
{
# Return
# 0 if daemon has been started
# 1 if daemon was already running
# 2 if daemon could not be started
start-stop-daemon --start --quiet --chdir $HOME --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \
|| return 1
4) Does the line above start only the python process and not webiopi? What is the point of the --test run on python? It doesn't explicitly check whether it returns 0, does it?
start-stop-daemon --start --quiet --chdir $HOME --pidfile $PIDFILE --exec $DAEMON --background --make-pidfile -- \
$DAEMON_ARGS \
|| return 2
5) Does the line above start webiopi by running python -m webiopi with its arguments in the background?
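For what it's worth: python -m webiopi imports the webiopi package and runs its __main__ module, so the whole package is started; and start-stop-daemon --start ... --test takes no action at all, it only returns 0 if a real start would happen and non-zero if a matching process is already running, which is what the || return 1 turns into return code 1. A rough hand-run equivalent of the background start line, as a sketch only (default paths from the top of the script; the real start-stop-daemon call also handles the pidfile and process-matching details):
cd /usr/share/webiopi/htdocs
/usr/bin/python -m webiopi -l /var/log/webiopi -c /etc/webiopi/config &
echo $! > /var/run/webiopi.pid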

Related

Error response from daemon: open \\.\pipe\docker_engine_linux

I installed the latest version of Docker and set up WSL 2 according to the manual, then started the container with docker-compose up. I need to run the tests with tests/run_tests.sh, but a few seconds after launching it, the test window closes, my container disappears from Docker, and when I try to run docker-compose up again I get the error Error response from daemon: open \\.\pipe\docker_engine_linux: The system cannot find the file specified.
run_tests.sh:
#!/usr/bin/env sh
# To run locally, execute the command NOT in container:
# bash tests/run_tests.sh
set -x
if [ -z "$API_ENV" ]; then
API_ENV=test
fi
if [ "$API_ENV" = "bitbucket_test" ]; then
COMPOSE_FILE="-f docker-compose.test.yml"
fi
docker-compose build connectors
API_ENV=$API_ENV docker-compose ${COMPOSE_FILE} up -d --force-recreate
connectors_container=$(docker ps -f name=connectors -q | tail -n1)
if [ "$API_ENV" = "bitbucket_test" ]; then
mkdir -p artifacts && docker logs --follow ${connectors_container} > ./artifacts/docker_connectors_logs.txt 2>&1 &
pytest_n_processes=100
else
pytest_n_processes=25
fi
# Timeout for the tests. In bitbucket we want to stop the tests a bit before the max time, so that
# artifacts are created and logs can be inspected
timeout_cmd="timeout 3.5m"
if [ "$API_ENV" = "bitbucket_test" ] || [ "$API_ENV" = "test" ]; then
export PYTEST_SENTRY_DSN='http://d07ba0bfff4b41888e311f8398321d14@sentry.windsor.ai/4'
export PYTEST_SENTRY_ALWAYS_REPORT=1
fi
git fetch origin "+refs/heads/master:refs/remotes/origin/master"
# Lint all the files that are modified in this branch
$(dirname "$0")/run_linters.sh &
linting_pid=$!
# bitbucket pipelines have 8 workers, use 6 for tests
#
# WARNING: Tests require gunicorn, which is enabled when containers are started with: API_ENV=test docker-compose up -d --force-recreate
# Tests are run in parallel, and the cache-locking in threaded Flask doesn't work in this case
${timeout_cmd} docker exec ${connectors_container} bash -c \
"PYTEST_SENTRY_DSN=$PYTEST_SENTRY_DSN \
PYTEST_SENTRY_ALWAYS_REPORT=$PYTEST_SENTRY_ALWAYS_REPORT \
pytest \
--cov=connectors --cov=api --cov=base \
--cov-branch --cov-report term-missing --cov-fail-under=71.60 \
--timeout 60 \
-v \
--durations=50 \
-n $pytest_n_processes \
tests || ( \
code=$? `# store the exit code to exit with it` \
&& echo 'TESTS FAILED' \
&& mkdir -p ./artifacts \
&& docker logs ${connectors_container} > ./artifacts/docker_connectors_failure_logs.txt 2>&1 `# Ensure that the logs are complete` \
) "&
# Get the tests pid
tests_pid=$!
# wait for linting to finish
wait $linting_pid
linting_code=$?
echo "Linting code: ${linting_code}"
if [ $linting_code -ne 0 ]; then
echo 'Linting failed'
# kill running jobs on exit in local ubuntu. Some tests were left running by only killing the test_pid.
kill "$(jobs -p)"
# kills the test process explicitly in gitlab pipelines. Was needed because jobs returns empty in gitlab pipelines.
kill $tests_pid
exit 1
fi
# wait for tests to finish
wait $tests_pid
testing_code=$?
echo "Testing code: ${testing_code}"
if [ $testing_code -ne 0 ]; then
echo 'Tests failed'
exit 1
else
echo 'Tests and linting passed'
exit 0
fi
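As a side note on the \\.\pipe\docker_engine_linux error itself: it generally means the Docker engine is not reachable at all. A small guard like the following could be dropped in before the docker-compose calls to fail fast with a clearer message (a sketch only, not part of the original script):
if ! docker info > /dev/null 2>&1; then
    echo "Docker engine is not reachable; is Docker Desktop / the WSL 2 backend running?" >&2
    exit 1
fi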

Shell file skips commands when run at boot using init.d?

I'm running Armbian Linux and trying to execute a shell file at boot. The file runs perfectly when I execute it through the command line after boot. However, it skips my Python commands (which are supposed to send animations to an OLED screen) when it runs during boot. It does still, however, turn on and off an LED.
The shell file is placed in /etc/init.d and I ran the following commands.
sudo update-rc.d startup.sh defaults
sudo update-rc.d startup.sh enable
chmod +x /etc/init.d/startup.sh
Here is the shell file.
#!/bin/sh
### BEGIN INIT INFO
# Provides: startup
# Required-Start: $all
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop:
# Short-Description: display to screen
### END INIT INFO
main() {
#GPIO numbers for the RGB-Led Pins
RGB_GPIO_RED=157
RGB_GPIO_GREEN=156
RGB_GPIO_BLUE=154
#OLED settings
OLED_I2C_PORT=2
OLED_ORIENTATION=2
OLED_DISPLAY_TYPE='sh1106'
# turn on the blue led while configuring and updating
cd /sys/class/gpio
sudo sh -c 'echo '$RGB_GPIO_BLUE' > export'
cd gpio$RGB_GPIO_BLUE
sudo sh -c 'echo out > direction'
sudo sh -c 'echo 1 > value'
cd ~
# display cardano animation
python ~/display/cardano-luma/examples/cardano-animation.py --display $OLED_DISPLAY_TYPE --i2c-port $OLED_I2C_PORT --rotate $OLED_ORIENTATION
# turn off blue led and on the green led
cd /sys/class/gpio
sudo sh -c 'echo '$RGB_GPIO_BLUE' > export'
cd gpio$RGB_GPIO_BLUE
sudo sh -c 'echo out > direction'
sudo sh -c 'echo 0 > value'
cd ..
cd /sys/class/gpio
sudo sh -c 'echo '$RGB_GPIO_GREEN' > export'
cd gpio$RGB_GPIO_GREEN
sudo sh -c 'echo out > direction'
sudo sh -c 'echo 1 > value'
cd ~
# display rock pi information
sudo python ~/display/cardano-luma/examples/cardano.py --display $OLED_DISPLAY_TYPE --i2c-port $OLED_I2C_PORT --rotate $OLED_ORIENTATION
}
main "$#" || exit 1

mrjob returned non-zero exit status 256

I'm new to MapReduce and I'm trying to run a MapReduce job using the mrjob package for Python. However, I encountered this error:
ERROR:mrjob.launch:Step 1 of 1 failed: Command '['/usr/bin/hadoop', 'jar', '/usr/lib/hadoop-mapreduce/hadoop-streaming.jar', '-files',
'hdfs:///user/hadoop/tmp/mrjob/word_count.hadoop.20180831.035452.437014/files/mrjob.zip#mrjob.zip,
hdfs:///user/hadoop/tmp/mrjob/word_count.hadoop.20180831.035452.437014/files/setup-wrapper.sh#setup-wrapper.sh,
hdfs:///user/hadoop/tmp/mrjob/word_count.hadoop.20180831.035452.437014/files/word_count.py#word_count.py', '-archives',
'hdfs:///user/hadoop/tmp/mrjob/word_count.hadoop.20180831.035452.437014/files/word_count_ccmr.tar.gz#word_count_ccmr.tar.gz', '-D',
'mapreduce.job.maps=4', '-D', 'mapreduce.job.reduces=4', '-D', 'mapreduce.map.java.opts=-Xmx1024m', '-D', 'mapreduce.map.memory.mb=1200', '-D',
'mapreduce.output.fileoutputformat.compress=true', '-D', 'mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.BZip2Codec', '-D',
'mapreduce.reduce.java.opts=-Xmx1024m', '-D', 'mapreduce.reduce.memory.mb=1200', '-input', 'hdfs:///user/hadoop/test-1.warc', '-output',
'hdfs:///user/hadoop/gg', '-mapper', 'sh -ex setup-wrapper.sh python word_count.py --step-num=0 --mapper', '-combiner',
'sh -ex setup-wrapper.sh python word_count.py --step-num=0 --combiner', '-reducer', 'sh -ex setup-wrapper.sh python word_count.py --step-num=0 --reducer']'
returned non-zero exit status 256
I've tried running it locally with python ./word_count.py input/test-1.warc > output and it's successful.
I'm using
python 2.7.14
Hadoop 2.8.3-amzn-1
pip 18.0
mrjob 0.6.4
Any ideas? Thanks!
This is the command I use to run the MapReduce job. I got it from the cc-mrjob repository. The file is called run_hadoop.sh and I made it executable with chmod +x run_hadoop.sh.
#!/bin/sh
JOB="$1"
INPUT="$2"
OUTPUT="$3"
sudo chmod +x $JOB.py
if [ -z "$JOB" ] || [ -z "$INPUT" ] || [ -z "$OUTPUT" ]; then
echo "Usage: $0 <job> <input> <outputdir>"
echo " Run a CommonCrawl mrjob on Hadoop"
echo
echo "Arguments:"
echo " <job> CCJob implementation"
echo " <input> input path"
echo " <output> output path (must not exist)"
echo
echo "Example:"
echo " $0 word_count input/test-1.warc hdfs:///.../output/"
echo
echo "Note: don't forget to adapt the number of maps/reduces and the memory requirements"
exit 1
fi
# strip .py from job name
JOB=${JOB%.py}
# wrap Python files for deployment, cf. below option --setup,
# see for details
# http://pythonhosted.org/mrjob/guides/setup-cookbook.html#putting-your-source-tree-in-pythonpath
tar cvfz ${JOB}_ccmr.tar.gz *.py
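# (The archive built above is shipped with the job; the --setup line further down unpacks
#  it on every task node and prepends the unpacked directory to PYTHONPATH, so that
#  word_count.py and its sibling modules are importable there -- see the mrjob setup
#  cookbook linked above.)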
# number of maps resp. reduces
NUM_MAPS=4
NUM_REDUCES=4
if [ -n "$S3_LOCAL_TEMP_DIR" ]; then
S3_LOCAL_TEMP_DIR="--s3_local_temp_dir=$S3_LOCAL_TEMP_DIR"
else
S3_LOCAL_TEMP_DIR=""
fi
python $JOB.py \
-r hadoop \
--jobconf "mapreduce.map.memory.mb=1200" \
--jobconf "mapreduce.map.java.opts=-Xmx1024m" \
--jobconf "mapreduce.reduce.memory.mb=1200" \
--jobconf "mapreduce.reduce.java.opts=-Xmx1024m" \
--jobconf "mapreduce.output.fileoutputformat.compress=true" \
--jobconf "mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.BZip2Codec" \
--jobconf "mapreduce.job.reduces=$NUM_REDUCES" \
--jobconf "mapreduce.job.maps=$NUM_MAPS" \
--setup 'export PYTHONPATH=$PYTHONPATH:'${JOB}'_ccmr.tar.gz#/' \
--no-output \
--cleanup NONE \
$S3_LOCAL_TEMP_DIR \
--output-dir "$OUTPUT" \
"hdfs:///user/hadoop/$INPUT"
and I run it with ./run_hadoop.sh word_count test-1.warc output
where
word_count is the job (file called word_count.py)
test-1.warc is the input (located in hdfs:///user/hadoop/test-1.warc)
and output is the output dir (located in hdfs:///user/hadoop/output). I also make sure I always use a different output directory for each job, to avoid reusing an existing folder.
* Update *
I took a look at the syslog in the HUE interface, and there's this error:
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Could not deallocate container for task attemptId attempt_1536113332062_0001_r_000003_0
Is this related to the error I'm getting?
I also got this in the stderr of one of the map attempts:
/bin/sh: run_prestart: line 1: syntax error: unexpected end of file
and
No module named boto3
However, I installed boto3 using pip install boto3 on my EMR cluster. Is the module not available to Hadoop?
I got it working by following this blog:
http://benjamincongdon.me/blog/2018/02/02/MapReduce-on-Python-is-better-with-MRJob-and-EMR/
Essentially, you have to include a .conf file for the hadoop runner, e.g. mrjob.conf.
Inside that file, use this:
runners:
  hadoop:
    setup:
    - 'set -e'
    - VENV=/tmp/$mapreduce_job_id
    - if [ ! -e $VENV ]; then virtualenv $VENV; fi
    - . $VENV/bin/activate
    - 'pip install boto3'
    - 'pip install warc'
    - 'pip install https://github.com/commoncrawl/gzipstream/archive/master.zip'
    sh_bin: '/bin/bash -x'
and use the conf file by referencing it in run_hadoop.sh:
python $JOB.py \
--conf-path mrjob.conf \ <---- OUR CONFIG FILE
-r hadoop \
--jobconf "mapreduce.map.memory.mb=1200" \
--jobconf "mapreduce.map.java.opts=-Xmx1024m" \
--jobconf "mapreduce.reduce.memory.mb=1200" \
--jobconf "mapreduce.reduce.java.opts=-Xmx1024m" \
--jobconf "mapreduce.output.fileoutputformat.compress=true" \
--jobconf "mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.BZip2Codec" \
--jobconf "mapreduce.job.reduces=$NUM_REDUCES" \
--jobconf "mapreduce.job.maps=$NUM_MAPS" \
--setup 'export PYTHONPATH=$PYTHONPATH:'${JOB}'_ccmr.tar.gz#/' \
--cleanup NONE \
$S3_LOCAL_TEMP_DIR \
--output-dir "hdfs:///user/hadoop/$OUTPUT" \
"hdfs:///user/hadoop/$INPUT"
now if you call ./run_hadoop.sh word_count input/test-1.warc output, it should work!

How to make a .sh file always be in a running state

I'm new to shell scripting and I want the command to be running at all times.
My .sh file - startscrapy.sh
#!/bin/bash
echo "Scrapyd is started now"
scrapyd
I have also changed the permissions with chmod +x etc/init.d/startscrapy.sh.
I have placed this file in etc/init.d but it is not working.
My understanding as of now is that the location etc/init.d is for .sh files that run
whenever the server or system boots up, but I want my .sh file to be in a running state at all times.
Using crontab you can easily auto-start any script in Ubuntu.
Please do the following steps:
Run the command crontab -e so that you can edit the crontab.
Now add the following line in the crontab editor: @reboot sudo <script>. In your case it should be @reboot sudo scrapyd.
Now reboot your system, and you will find scrapyd running.
Hope it helps.
Take a look at this init.d template and change yours accordingly.
Then you need to register the startup script with your initialisation daemon. Under Ubuntu that would be update-rc.d NAMEofDAEMON defaults.
You want to create a daemon. There are some tutorials on the internet for doing this; I took this one for you. In the final part you might use a different way to register the script; this one is for Ubuntu.
Put the following into a file with a name of your choice (I will use "startscrapy.sh" for now); you can obviously modify it according to your needs.
#!/bin/sh -e
DAEMON="scrapyd"                       # command to run
daemon_OPT=""                          # arguments for your program
DAEMONUSER="user"                      # program user
daemon_NAME="scrapyd"                  # program name (needs to be identical to the executable)
PATH="/sbin:/bin:/usr/sbin:/usr/bin"   # don't touch
test -x $DAEMON || exit 0
. /lib/lsb/init-functions

d_start () {
    log_daemon_msg "Starting system $daemon_NAME Daemon"
    start-stop-daemon --background --name $daemon_NAME --start --quiet --chuid $DAEMONUSER --exec $DAEMON -- $daemon_OPT
    log_end_msg $?
}

d_stop () {
    log_daemon_msg "Stopping system $daemon_NAME Daemon"
    start-stop-daemon --name $daemon_NAME --stop --retry 5 --quiet --name $daemon_NAME
    log_end_msg $?
}

case "$1" in
    start|stop)
        d_${1}
        ;;
    restart|reload|force-reload)
        d_stop
        d_start
        ;;
    force-stop)
        d_stop
        killall -q $daemon_NAME || true      # replace with an appropriate killing method
        sleep 2
        killall -q -9 $daemon_NAME || true   # replace with an appropriate killing method
        ;;
    status)
        status_of_proc "$daemon_NAME" "$DAEMON" "system-wide $daemon_NAME" && exit 0 || exit $?
        ;;
    *)
        echo "Usage: /etc/init.d/$daemon_NAME {start|stop|force-stop|restart|reload|force-reload|status}"
        exit 1
        ;;
esac
exit 0
Then run as root:
chmod +x /etc/init.d/startscrapy.sh
chmod 0755 /etc/init.d/startscrapy.sh (adjust to your script's location)
systemctl daemon-reload
update-rc.d startscrapy.sh defaults
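Once registered, the script can be exercised by hand to check that everything is wired up (these invocations assume the file keeps the name startscrapy.sh under /etc/init.d):
sudo /etc/init.d/startscrapy.sh start
sudo /etc/init.d/startscrapy.sh status
sudo /etc/init.d/startscrapy.sh stop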
To remove the daemon, run as root :
update-rc.d -f startscrapy.sh remove

How to boot the Tryton server automatically

(I have searched and didn't find what I was looking for.)
Recently I installed GNU Health on Ubuntu 14.04.3 following the wikibooks tutorial. Everything worked as expected, but I have to boot up the Tryton server manually every time I start/restart Ubuntu (as described in https://en.wikibooks.org/wiki/GNU_Health/Installation#Booting_up_the_Tryton_Server).
I was wondering if there is any way to make it boot automatically at system startup. I found a script on a site, but it seemed to be outdated and didn't work. Is there any application or script to boot the server automatically, so that I can use the machine as a server without any screen/keyboard/mouse?
This is not a Tryton-specific question but more of an Ubuntu question. You need to set up an init script and install it into the System V scripts.
Put this script into the file /etc/init.d/tryton-server, replace the DAEMON variable with your trytond path, and check the other variables. Then run the update-rc.d tryton-server defaults command.
#!/bin/sh
### BEGIN INIT INFO
# Provides: tryton-server
# Required-Start: $syslog $remote_fs
# Required-Stop: $syslog $remote_fs
# Should-Start: $network postgresql mysql
# Should-Stop: $network postgresql mysql
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Application Platform
# Description: Tryton is an Application Platform serving as a base for
# a complete ERP software.
### END INIT INFO
PATH="/sbin:/bin:/usr/sbin:/usr/bin"
DAEMON="[REPLACE WITH YOUR trytond PATH]"
test -x "${DAEMON}" || exit 0
NAME="trytond"
DESC="Tryton Application Platform"
DAEMONUSER="tryton"
PIDDIR="/var/run/${NAME}"
PIDFILE="${PIDDIR}/${NAME}.pid"
LOGFILE="/var/log/tryton/${NAME}.log"
DEFAULTS="/etc/default/tryton-server"
CONFIGFILE="/etc/${NAME}.conf"
DAEMON_OPTS="--config=${CONFIGFILE} --logfile=${LOGFILE}"
# Include tryton-server defaults if available
if [ -r "${DEFAULTS}" ]
then
    . "${DEFAULTS}"
fi

. /lib/lsb/init-functions

# Make sure trytond is started with configured locale
if [ -n "${LANG}" ]
then
    LANG="${LANG}"
    export LANG
fi

set -e

do_start ()
{
    if [ ! -d "${PIDDIR}" ]
    then
        mkdir -p "${PIDDIR}"
        chown "${DAEMONUSER}":"${DAEMONUSER}" "${PIDDIR}"
    fi
    start-stop-daemon --start --quiet --pidfile ${PIDFILE} \
        --chuid ${DAEMONUSER} --background --make-pidfile \
        --exec ${DAEMON} -- ${DAEMON_OPTS}
}

do_stop ()
{
    start-stop-daemon --stop --quiet --pidfile ${PIDFILE} --oknodo
}

case "${1}" in
    start)
        log_daemon_msg "Starting ${DESC}" "${NAME}"
        do_start
        log_end_msg ${?}
        ;;
    stop)
        log_daemon_msg "Stopping ${DESC}" "${NAME}"
        do_stop
        log_end_msg ${?}
        ;;
    restart|force-reload)
        log_daemon_msg "Restarting ${DESC}" "${NAME}"
        do_stop
        sleep 1
        do_start
        log_end_msg ${?}
        ;;
    status)
        status_of_proc -p ${PIDFILE} ${DAEMON} ${NAME} && \
            exit 0 || exit ${?}
        ;;
    *)
        N="/etc/init.d/${NAME}"
        echo "Usage: ${N} {start|stop|restart|force-reload|status}" >&2
        exit 1
        ;;
esac

exit 0
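After saving the file, a plausible first run might look like this (using the update-rc.d command from the answer above; service is the usual way to drive an init.d script on Ubuntu 14.04):
sudo chmod 0755 /etc/init.d/tryton-server
sudo update-rc.d tryton-server defaults
sudo service tryton-server start
sudo service tryton-server status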
