Error response from daemon: open \\.\pipe\docker_engine_linux - python

I installed the latest version of Docker, installed WSL 2 according to the manual, and started the container with the command docker-compose up. I need to run the tests with the command tests/run_tests.sh. But a few seconds after launching, the window with the tests closes, my container disappears from Docker, and when I try to run docker-compose up again I get the error: Error response from daemon: open \\.\pipe\docker_engine_linux: The system cannot find the file specified.
run_tests.sh:
#!/usr/bin/env sh
# To run locally, execute the command NOT in a container:
#   bash tests/run_tests.sh
set -x

if [ -z "$API_ENV" ]; then
  API_ENV=test
fi

if [ "$API_ENV" = "bitbucket_test" ]; then
  COMPOSE_FILE="-f docker-compose.test.yml"
fi

docker-compose build connectors
API_ENV=$API_ENV docker-compose ${COMPOSE_FILE} up -d --force-recreate
connectors_container=$(docker ps -f name=connectors -q | tail -n1)

if [ "$API_ENV" = "bitbucket_test" ]; then
  mkdir -p artifacts && docker logs --follow ${connectors_container} > ./artifacts/docker_connectors_logs.txt 2>&1 &
  pytest_n_processes=100
else
  pytest_n_processes=25
fi

# Timeout for the tests. In bitbucket we want to stop the tests a bit before the max time,
# so that artifacts are created and logs can be inspected.
timeout_cmd="timeout 3.5m"

if [ "$API_ENV" = "bitbucket_test" ] || [ "$API_ENV" = "test" ]; then
  export PYTEST_SENTRY_DSN='http://d07ba0bfff4b41888e311f8398321d14@sentry.windsor.ai/4'
  export PYTEST_SENTRY_ALWAYS_REPORT=1
fi

git fetch origin "+refs/heads/master:refs/remotes/origin/master"

# Lint all the files that are modified in this branch
$(dirname "$0")/run_linters.sh &
linting_pid=$!

# bitbucket pipelines have 8 workers, use 6 for tests
#
# WARNING: Tests require gunicorn, which is enabled when containers are started with:
#   API_ENV=test docker-compose up -d --force-recreate
# Tests are run in parallel, and the cache-locking in threaded Flask doesn't work in this case.
${timeout_cmd} docker exec ${connectors_container} bash -c \
  "PYTEST_SENTRY_DSN=$PYTEST_SENTRY_DSN \
  PYTEST_SENTRY_ALWAYS_REPORT=$PYTEST_SENTRY_ALWAYS_REPORT \
  pytest \
    --cov=connectors --cov=api --cov=base \
    --cov-branch --cov-report term-missing --cov-fail-under=71.60 \
    --timeout 60 \
    -v \
    --durations=50 \
    -n $pytest_n_processes \
    tests || ( \
      code=$? `# store the exit code to exit with it` \
      && echo 'TESTS FAILED' \
      && mkdir -p ./artifacts \
      && docker logs ${connectors_container} > ./artifacts/docker_connectors_failure_logs.txt 2>&1 `# ensure that the logs are complete` \
    )" &

# Get the tests pid
tests_pid=$!

# Wait for linting to finish
wait $linting_pid
linting_code=$?
echo "Linting code: ${linting_code}"
if [ $linting_code -ne 0 ]; then
  echo 'Linting failed'
  # Kill running jobs on exit in local ubuntu. Some tests were left running when only the test pid was killed.
  kill $(jobs -p)
  # Kill the test process explicitly in gitlab pipelines. Needed because jobs returns empty in gitlab pipelines.
  kill $tests_pid
  exit 1
fi

# Wait for tests to finish
wait $tests_pid
testing_code=$?
echo "Testing code: ${testing_code}"
if [ $testing_code -ne 0 ]; then
  echo 'Tests failed'
  exit 1
else
  echo 'Tests and linting passed'
  exit 0
fi
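Since the error points at the Docker engine's named pipe rather than at the script itself, a few diagnostics run from PowerShell on the Windows host can show whether Docker Desktop's WSL 2 backend is actually up (a sketch; exact output varies by Docker Desktop version):
docker context ls   # shows which context/named pipe the docker CLI is talking to
wsl --status        # confirms WSL 2 is the default version
wsl -l -v           # the docker-desktop distro should be listed as Running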

Related

How to make a .sh file stay in a running state always

I'm new to shell scripting, and I want my command to be running always.
My .sh file - startscrapy.sh:
#!/bin/bash
echo "Scrapyd is started now"
scrapyd
I have also changed the permissions with chmod +x /etc/init.d/startscrapy.sh.
I have placed this file in /etc/init.d but it is not working.
My understanding so far is that /etc/init.d is for running .sh files whenever the server or system boots up, but I want my .sh file to be in a running state always.
Using crontab you can easily auto-start any script on Ubuntu.
Please do the following steps:
Run the command crontab -e so that you can edit the crontab.
Now add the line @reboot sudo <script> to the crontab; in your case it should be @reboot sudo scrapyd. (@reboot is crontab's run-once-at-startup schedule; a fuller example follows below.)
Now reboot your system, and you will find scrapyd running.
Hope it helps.
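For illustration, a minimal crontab entry of this shape (the scrapyd path and log file are assumptions; adjust them to your installation, and drop sudo if you edit root's crontab):
# Run scrapyd at every boot; redirect output so failures are visible
@reboot /usr/local/bin/scrapyd >> /var/log/scrapyd-boot.log 2>&1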
Take a look at this init.d template and change yours accordingly.
Then you need to register the startup script with your initialisation daemon. Under Ubuntu that would be: update-rc.d NAMEofDAEMON defaults
You want to create a daemon. There are some tutorials on the internet for doing this; I took this one for you. In the final part, you might use a different way to register the script; this one is for Ubuntu.
You need to put the following into a file with a name of your choice (I will use "startscrapy.sh" for now); you can modify it, obviously, according to your needs:
#!/bin/sh -e
DAEMON="scrapyd"                      # Command to run
daemon_OPT=""                         # Arguments for your program
DAEMONUSER="user"                     # Program user
daemon_NAME="scrapyd"                 # Program name (needs to be identical to the executable)
PATH="/sbin:/bin:/usr/sbin:/usr/bin"  # Don't touch

test -x $DAEMON || exit 0

. /lib/lsb/init-functions

d_start () {
  log_daemon_msg "Starting system $daemon_NAME Daemon"
  start-stop-daemon --background --name $daemon_NAME --start --quiet --chuid $DAEMONUSER --exec $DAEMON -- $daemon_OPT
  log_end_msg $?
}

d_stop () {
  log_daemon_msg "Stopping system $daemon_NAME Daemon"
  start-stop-daemon --stop --name $daemon_NAME --retry 5 --quiet
  log_end_msg $?
}

case "$1" in
  start|stop)
    d_${1}
    ;;
  restart|reload|force-reload)
    d_stop
    d_start
    ;;
  force-stop)
    d_stop
    killall -q $daemon_NAME || true     # Replace with an appropriate killing method
    sleep 2
    killall -q -9 $daemon_NAME || true  # Replace with an appropriate killing method
    ;;
  status)
    status_of_proc "$daemon_NAME" "$DAEMON" "system-wide $daemon_NAME" && exit 0 || exit $?
    ;;
  *)
    echo "Usage: /etc/init.d/$daemon_NAME {start|stop|force-stop|restart|reload|force-reload|status}"
    exit 1
    ;;
esac

exit 0
Then run as root:
chmod 0755 /etc/init.d/startscrapy.sh   # adjust to your script's location
systemctl daemon-reload
update-rc.d startscrapy.sh defaults
To remove the daemon, run as root:
update-rc.d -f startscrapy.sh remove
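Once registered, the script can be driven like any other init script (a usage sketch, assuming it was installed as /etc/init.d/startscrapy.sh):
/etc/init.d/startscrapy.sh start    # start scrapyd in the background
/etc/init.d/startscrapy.sh status   # check whether it is running
/etc/init.d/startscrapy.sh stop     # stop it again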

Run huey task queue in the background with supervisor on Elastic Beanstalk

I am trying to run a huey task queue on Elastic Beanstalk that is needed by my Flask app, but there is no built-in way to run huey as a daemon process. The author of huey has advised running huey with supervisor (this link), and since Elastic Beanstalk already uses supervisor, I thought we could just add the program to be managed by supervisor. But I am not sure how to do this programmatically. Currently, I am using the container_commands (ref link) key in the config file to run this, but Elastic Beanstalk gives me a timeout error after some time because the command runs in the foreground. Below is the config file I am using.
packages:
  yum:
    gcc: []
    gcc-c++: []
    gcc-gfortran: []
    htop: []
    make: []
    wget: []
    atlas-devel: []
    lapack-devel: []
commands:
  01enable_swap:
    command:
      - sudo dd if=/dev/zero of=/var/swap1 bs=1M count=1024
      - sudo mkswap /var/swap1
      - sudo chmod 644 /var/swap1
      - sudo swapon /var/swap1
    cwd: /home/ec2-user
  02install_redis:
    command:
      - wget "http://download.redis.io/redis-stable.tar.gz"
      - tar -xvzf redis-stable.tar.gz
      - rm redis-stable.tar.gz
      - cd redis-stable
      - sudo make
      - sudo make install
    cwd: /home/ec2-user
container_commands:
  01download_nltk_packages:
    command: "python install_resources.py"
  02run_redis:
    command: "redis-server --host 127.0.0.1 --port 6379 --daemonize yes"
  03run_huey:
    command: "huey_consumer jupiter.huey"
Here's what I want to achieve:
1. huey should run as a background process when my Flask app is deployed.
2. supervisor should handle automatic start/stop of the huey process.
I solved this problem by doing the following in an ebextensions file called 002_supervisor.conf. This is for Django, but I'm sure it could be adapted for Flask. The file:
1. Creates a supervisor config file
2. Creates a supervisor init.d file
3. Creates a huey.conf file to be loaded by supervisor
files:
  /usr/local/etc/supervisord.conf:
    mode: "000755"
    owner: root
    group: root
    content: |
      [unix_http_server]
      file=/tmp/supervisor.sock ; (the path to the socket file)

      [supervisord]
      logfile=/tmp/supervisord.log ; (main log file; default $CWD/supervisord.log)
      pidfile=/tmp/supervisord.pid ; (supervisord pidfile; default supervisord.pid)
      nodaemon=false ; (start in foreground if true; default false)

      [rpcinterface:supervisor]
      supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

      [supervisorctl]
      serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket

      [include]
      files = /usr/local/etc/*.conf

      [inet_http_server]
      port = 127.0.0.1:9001

  /etc/init.d/supervisord:
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash

      # Source function library
      . /etc/rc.d/init.d/functions

      # Source system settings
      if [ -f /etc/sysconfig/supervisord ]; then
        . /etc/sysconfig/supervisord
      fi

      # Path to the supervisorctl script, server binary,
      # and short-form for messages.
      supervisorctl=/usr/local/bin/supervisorctl
      supervisord=${SUPERVISORD-/usr/local/bin/supervisord}
      prog=supervisord
      pidfile=${PIDFILE-/tmp/supervisord.pid}
      lockfile=${LOCKFILE-/var/lock/subsys/supervisord}
      STOP_TIMEOUT=${STOP_TIMEOUT-60}
      OPTIONS="${OPTIONS--c /usr/local/etc/supervisord.conf}"
      RETVAL=0

      start() {
        echo -n $"Starting $prog: "
        daemon --pidfile=${pidfile} $supervisord $OPTIONS
        RETVAL=$?
        echo
        if [ $RETVAL -eq 0 ]; then
          touch ${lockfile}
          $supervisorctl $OPTIONS status
        fi
        return $RETVAL
      }

      stop() {
        echo -n $"Stopping $prog: "
        killproc -p ${pidfile} -d ${STOP_TIMEOUT} $supervisord
        RETVAL=$?
        echo
        [ $RETVAL -eq 0 ] && rm -rf ${lockfile} ${pidfile}
      }

      reload() {
        echo -n $"Reloading $prog: "
        LSB=1 killproc -p $pidfile $supervisord -HUP
        RETVAL=$?
        echo
        if [ $RETVAL -eq 7 ]; then
          failure $"$prog reload"
        else
          $supervisorctl $OPTIONS status
        fi
      }

      restart() {
        stop
        start
      }

      case "$1" in
        start)
          start
          ;;
        stop)
          stop
          ;;
        status)
          status -p ${pidfile} $supervisord
          RETVAL=$?
          [ $RETVAL -eq 0 ] && $supervisorctl $OPTIONS status
          ;;
        restart)
          restart
          ;;
        condrestart|try-restart)
          if status -p ${pidfile} $supervisord >&/dev/null; then
            stop
            start
          fi
          ;;
        force-reload|reload)
          reload
          ;;
        *)
          echo $"Usage: $prog {start|stop|restart|condrestart|try-restart|force-reload|reload}"
          RETVAL=2
      esac
      exit $RETVAL

  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_huey.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Get django environment variables
      env=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
      env=${env%?}

      # Create huey configuration script
      hueyconf="[program:huey]
      ; Set full path to huey program if using virtualenv
      command=/opt/python/current/app/production.py run_huey

      user=nobody
      numprocs=1
      stdout_logfile=/var/log/huey.log
      stderr_logfile=/var/log/huey.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 60

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      environment=$env"

      # Create the huey supervisord conf script
      echo "$hueyconf" | tee /usr/local/etc/huey.conf

      # Update supervisord in cache without restarting all services
      /usr/local/bin/supervisorctl reread
      /usr/local/bin/supervisorctl update

      # Start/Restart huey through supervisord
      /usr/local/bin/supervisorctl -c /usr/local/etc/supervisord.conf restart huey

commands:
  01_start_supervisor:
    command: '/etc/init.d/supervisord restart'
    leader_only: true
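To confirm the result after a deploy, a quick check along these lines should work (a sketch using the paths from the config above; the huey program name comes from the generated huey.conf):
sudo /usr/local/bin/supervisorctl -c /usr/local/etc/supervisord.conf status
# huey should be listed as RUNNING; its output goes to /var/log/huey.log
sudo tail -f /var/log/huey.log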

How do I run the nikto.pl file from the wapiti.py file?

I am working on a project which involves the wapiti and nikto web tools. I have managed to produce one report for both of these tools with this command:
python wapiti.py www.kca.ac.ke ; perl nikto.pl -h www.kca.ac.ke -Display V -F htm -output /root/.wapiti/generated_report/index.html
But I would like to run a command like
python wapiti.py www.kca.ac.ke
and get both the wapiti and nikto web scan reports. How do I achieve this?
A shell script would work. Save the following as 'run_wapiti_and_nikto_scans', then run it as:
bash run_wapiti_and_nikto_scans www.my.site.com
Here is the script:
#!/bin/bash

SITE=$1

if [ -n "$SITE" ]; then  # -n tests whether the argument is non-empty
  echo "Looking to scan $SITE"

  echo "Running 'python wapiti.py $SITE'"
  python wapiti.py "$SITE" || { echo "Failed to run wapiti!"; exit 1; }

  echo "Running 'perl nikto.pl -h $SITE -Display V -F htm -output /root/.wapiti/generated_report/index.html'"
  perl nikto.pl -h "$SITE" -Display V -F htm -output /root/.wapiti/generated_report/index.html || { echo "Failed to run nikto!"; exit 1; }

  echo "Done!"
  exit 0  # Success
fi

echo "usage: run_wapiti_and_nikto_scans www.my.site.com"
exit 1  # Failure

Why can't init'd Python process see other processes?

I have a Python process that runs on my system and checks to see if other processes are running. I have a problem where, on reboot, my Python script does not work. I assume it has something to do with the environment at boot. If I stop my Python script (started at boot) and start it as root or as the "dataturbine" user, it works fine. Here is the interesting portion of the init script:
SERVER_HOST=`hostname`
SERVER_PORT='3333'
RBNB_LOG_DIR=/var/log/rbnb
LOG_FILE="${RBNB_LOG_DIR}/dataturbine-rpc.log"
DT_USER=dataturbine
OWNER=${DT_USER}:${DT_USER}
RBNB_RUN_DIR=/var/run/rbnb
PIDFILE=${RBNB_RUN_DIR}/dataturbine-rpc.pid
SCRIPT=/usr/local/rbnb/scripts/dataturbine.py
######################################################################################
start() {
  if [ ${USER} == ${DT_USER} ]
  then
    ${SCRIPT} -logfile ${LOG_FILE} -bindport 12000 -bindip 0.0.0.0 > /dev/null 2>&1 & echo $! > ${PIDFILE}
  else
    su -m -c "${SCRIPT} -logfile ${LOG_FILE} -bindport 12000 -bindip 0.0.0.0 > /dev/null 2>&1 & echo \$"'!'" > ${PIDFILE}" ${DT_USER}
  fi
}
Here is the Python command that gets run to check for a running process. When run from init, the wc command comes back with 0, even though the process is running:
output = subprocess.check_output("/bin/ps -ef | /bin/grep 'DaqToRbnb' | /bin/grep -v grep | /usr/bin/wc -l", shell=True)
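As an aside, pgrep sidesteps both the grep-matching-itself dance and any dependence on ps output formatting, which can differ in a minimal boot environment. A hedged sketch of an equivalent check:
# -f matches against the full command line, -c prints the match count
/usr/bin/pgrep -fc 'DaqToRbnb'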

Linux Script to start webiopi service

LOG_FILE=/var/log/webiopi
CONFIG_FILE=/etc/webiopi/config
PATH=/sbin:/usr/sbin:/bin:/usr/bin
DESC="WebIOPi"
NAME=webiopi
HOME=/usr/share/webiopi/htdocs
DAEMON=/usr/bin/python
DAEMON_ARGS="-m webiopi -l $LOG_FILE -c $CONFIG_FILE"
PIDFILE=/var/run/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
1) The [ -x "$DAEMON" ] test only checks that Python is installed; it doesn't check for the webiopi package, does it?
2) Does python -m run the whole package, not just an individual module?
# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
3) How do the values from the configuration file /etc/webiopi/config get into /etc/default/webiopi? From the above, I don't see a command that does that.
#
# Function that starts the daemon/service
#
do_start()
{
  # Return
  #   0 if daemon has been started
  #   1 if daemon was already running
  #   2 if daemon could not be started
  start-stop-daemon --start --quiet --chdir $HOME --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \
    || return 1
4) Does the above start only the Python process and not webiopi? What is the point of the --test run? Where is it specified that it returns 0?
  start-stop-daemon --start --quiet --chdir $HOME --pidfile $PIDFILE --exec $DAEMON --background --make-pidfile -- \
    $DAEMON_ARGS \
    || return 2
5) Does the above start webiopi by running python -m webiopi with $DAEMON_ARGS in the background?
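For intuition, that second start-stop-daemon call amounts to roughly the following (a sketch only; start-stop-daemon additionally handles pidfile locking and privilege details):
cd /usr/share/webiopi/htdocs
# python -m webiopi runs the webiopi package's __main__, i.e. the whole service
/usr/bin/python -m webiopi -l /var/log/webiopi -c /etc/webiopi/config &
echo $! > /var/run/webiopi.pid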
