running multiple terminal commands on a VM in sequence - python

Hello, how can I run the following commands in sequence (when the first one finishes, the next one gets executed) on a virtual machine (Ubuntu terminal)?
nohup scrapy crawl test -o fed.csv &
nohup scrapy crawl test -o feder.csv &
nohup scrapy crawl fullrun -o dez.csv &
I wasn't sure whether to ask this question on askubuntu.com or here; I hope this is the correct place to ask.

If you want to run the commands one by one, and only run the next one when the previous one completed successfully, you can use:
command1 && command2
If your commands are independent of each other, you can use:
command1; command2
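Applied to the scrapy commands from the question, a minimal sketch (assuming the three crawls should run strictly one after another and stay detached after logout; the crawl.log name is an assumption) would wrap the whole chain in a single backgrounded shell:
# run the crawls in sequence; the chain stops if one of them fails
nohup bash -c 'scrapy crawl test -o fed.csv && scrapy crawl test -o feder.csv && scrapy crawl fullrun -o dez.csv' > crawl.log 2>&1 &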

Related

bash script in rc.local different result than executing it in terminal

I'm trying to run a Python script that uses the pynput library from a headless Raspberry Pi.
In order to use pynput on a headless machine, it requires you to do the following steps on every bootup:
export DISPLAY=:0
DISPLAY=:0 python -c 'import pynput'
If I don't do this I get the following error:
ImportError: this platform is not supported: ('failed to acquire X connection: >
Try one of the following resolutions:
Please make sure that you have an X server running, and that the DISPLAY env>
If I now try to automate this process with a bash script that contains the following lines:
#!/bin/bash
sleep 60
export DISPLAY=:0
sudo chmod 666 /dev/hidg0
DISPLAY=:0 python -c 'import pynput'
python3 /home/pi/Hid.py |& tee -a /home/pi/test.log
exit 0
And when I execute the file normally from the terminal like this:
bash /home/pi/bash.sh
it works perfectly fine.
But when I call the script from the rc.local file in the same way I executed it before, the script does get executed on boot, yet the Python script throws the error shown at the beginning.
How can there be two different outcomes between executing it by hand and having it executed by a bash script at boot via rc.local?
Any help is appreciated!
All commands in rc.local are executed by the root user.
To execute them as a different user, use:
sudo su USERNAME -c 'SCRIPT/PATH'
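For the script from the question, a minimal rc.local sketch (assuming the desktop session belongs to the pi user and the script lives at /home/pi/bash.sh, as in the question) could look like this:
#!/bin/bash
# rc.local runs as root; run the script as the user owning the X session,
# so that DISPLAY=:0 points at an X server that user is allowed to connect to
su pi -c 'bash /home/pi/bash.sh' &
exit 0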

Check scrapy result in bash

I have multiple spiders that I run in a bash script like so:
pipenv run scrapy runspider -o output-a.json a.py
pipenv run scrapy runspider -o output-b.json b.py
Since they should run for a long time I'd like to have a simple way of monitoring their success rate; my plan was to ping https://healtchecks.io when both scrapers run successfully (i.e. they don't have any error messages). I've sprinkled some assert statements over the code to be reasonably confident about this.
pipenv run scrapy runspider -o output-a.json a.py
result_a=$?
pipenv run scrapy runspider -o output-b.json b.py
result_b=$?
if [ "$result_a" -eq 0 ] && [ "$result_b" -eq 0 ]; then
curl $url
fi
My problem is that each scrapy runspider command always returns 0 no matter what. That means I can't really check whether they have been successful.
Is there a way to influence this behavior? Some command line flag I haven't found? If not, how would I run the two spiders from a python script and save their output to a defined location? I found this link but it doesn't mention how to handle the returned items.
The way I eventually solved this was by assigning the log output to a variable and grepping it for ERROR: Spider error processing. Scrapy has the very nice behavior of not failing unnecessarily early; if I had exited the Python script myself, I would have lost that. This way I could run one scraper after another and handle the errors at the end, so I could still collect as much as possible while being notified if something didn't run 100% smoothly.
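A minimal bash sketch of that approach (the log_a/log_b variable names are assumptions; the crawl commands mirror the ones in the question, and scrapy writes its log to stderr, so it is captured together with stdout here):
# capture each spider's log and grep it for scrapy's error marker
log_a=$(pipenv run scrapy runspider -o output-a.json a.py 2>&1)
log_b=$(pipenv run scrapy runspider -o output-b.json b.py 2>&1)
if ! printf '%s\n%s\n' "$log_a" "$log_b" | grep -q "ERROR: Spider error processing"; then
  curl "$url"   # ping the health check only if neither log contains spider errors
fi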

ec2 run scripts every boot

I have followed a few posts on here trying to run either a Python or shell script on my EC2 instance after every boot, not just the first boot.
I have tried:
adding [scripts-user, always] to the /etc/cloud/cloud.cfg file
adding the script to the ./scripts/per-boot folder
and adding the script to /etc/rc.local
Yes the permissions were changed to 755 for /etc/rc.local
I am attempting to redirect the output of the script into a file located in the /home/ubuntu/ directory, and the file does not contain anything after boot.
If I run the scripts (.sh or .py) manually they work.
Any suggestions or request for additional info to help?
So the current solution appears to be a method I wrote off in my initial question post, as I may not have performed the setup exactly as outlined in the link below:
How do I make cloud-init startup scripts run every time my EC2 instance boots?
That link shows how to modify the /etc/cloud/cloud.cfg file to update scripts-user to [scripts-user, always].
It also says to add your *.sh file to the /var/lib/cloud/scripts/per-boot directory.
Once you reboot your system, your script should have executed, and you can verify this in: sudo cat /var/log/cloud-init.log
If your script still fails to execute, try erasing the instance state of your server with the following command: sudo rm -rf /var/lib/cloud/instance/*
--NOTE:--
It appears that print output from a Python script does not get redirected (>>) as expected, but echo commands redirect easily.
Fails to redirect:
sudo python test.py >> log.txt
Redirects successfully:
echo "HI" >> log.txt
Is this something along the lines of what you want?
It copies the script to the instance, connects to the instance, and runs the script right away.
ec2 scp ~/path_to_script.py : instance_name -y && ec2 ssh instance_name -yc "python script_name.py" 1>/dev/null
I read that the use of rc.local is getting deprecated. One thing to try is a line in /etc/crontab like this:
@reboot full-path-of-script
If there's a specific user you want to run the script as, you can list it after @reboot.
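A sketch of such an /etc/crontab entry (the ubuntu user, the script path, and the log file name are assumptions matching the question's setup):
# run the script as the ubuntu user on every boot, appending output to a log
@reboot ubuntu /home/ubuntu/myscript.sh >> /home/ubuntu/boot.log 2>&1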

Have python script run in background of unix

I have a Python script that I want to execute in the background on my Unix server. The catch is that I need the Python script to wait for the previous step to finish before moving on to the next task, yet I want my job to continue to run after I exit.
I think I can set up as follows but would like confirmation:
An excerpt of the script looks like this, where command 2 is dependent on the output from command 1, since command 1 outputs an edited executable file into the same directory. I would like to point out that commands 1 and 2 do not have nohup/& included.
subprocess.call('unix command 1 with options', shell=True)
subprocess.call('unix command 2 with options', shell=True)
If when I initiate my python script like so:
% nohup python python_script.py &
Will my script run in the background as intended, given that I explicitly did not put nohup/& on the scripted Unix commands but instead ran the Python script itself in the background?
Yes; by running your Python script with nohup (no hangup), your script won't be killed when the connection is severed, and the trailing & symbol will run your script in the background.
You can still view the output of your script: nohup redirects stdout to the nohup.out file. You can babysit the output in real time by tailing that file:
$ tail -f nohup.out
A quick note about the nohup.out file...
nohup.out    The output file of the nohup execution if standard output is a terminal and if the current directory is writable.
Or append & to the command to run the Python script in the background (as a daemon) and tail the logs:
$ nohup python python_script.py > my_output.log &
$ tail -f my_output.log
You can use nohup:
chmod +x /path/to/script.py
nohup python /path/to/script.py &
Or
Instead of closing your terminal, use logout. A logout does not raise SIGHUP, so the shell won't send a SIGHUP to any of its children.

cron job doesn't output to nohup.out

I have a start.sh bash script that is run through a cron job on an Ubuntu server.
start.sh contains the lines of code mentioned below.
The path of start.sh is /home/ubuntu/folder1/folder2/start.sh
#!/bin/bash
crawlers(){
    nohup scrapy crawl first &
    nohup scrapy crawl 2nd &
    wait $!
    nohup scrapy crawl 3rd &
    nohup scrapy crawl 4th &
    wait
}
cd /home/ubuntu/folder1/folder2/
PATH=$PATH:/usr/local/bin
export PATH
python init.py &
wait $!
crawlers
python final.py
My issue is that if I run start.sh myself on the command line, it writes its output to the nohup.out file,
but when the bash file is executed through the cron job (although the scripts run fine), it does not produce nohup.out.
How can I get the output of this cron job in nohup.out?
Why are you using nohup? nohup is a command that makes the process it starts ignore the hangup signal (SIGHUP). cron, however, never sends a hangup signal, because its jobs are not linked to a terminal session.
In this case, instead of:
nohup scrapy crawl first &
You probably want:
scrapy crawl first > first.txt &
The last example also works in a terminal, but when you close the terminal, the hangup signal (HUP) is sent, which ends the program.
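Applied to the script from the question, a sketch of the crawlers function with explicit per-spider log files (the log file names are assumptions) behaves the same whether started from a terminal or from cron:
crawlers(){
    # redirect each spider's output explicitly instead of relying on nohup.out
    scrapy crawl first > first.log 2>&1 &
    scrapy crawl 2nd > 2nd.log 2>&1 &
    wait
    scrapy crawl 3rd > 3rd.log 2>&1 &
    scrapy crawl 4th > 4th.log 2>&1 &
    wait
}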
