I have many Python scripts and it is a pain to run each one of them individually by clicking them. How can I make a batch file to run them all at once?
Just make a script like this, backgrounding each task. On Windows:
start /B python script1.py
start /B python script2.py
start /B python script3.py
On *nix:
python script1.py &
python script2.py &
python script3.py &
This assumes none of your scripts requires human interaction to run.
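If you would rather skip the batch file entirely, a small Python launcher can do the same job. A minimal sketch, assuming the scripts sit next to the launcher and the names below are placeholders for your own files:

import subprocess
import sys

scripts = ["script1.py", "script2.py", "script3.py"]  # placeholder names

# Launch every script in the background with the same interpreter
procs = [subprocess.Popen([sys.executable, s]) for s in scripts]

# Optionally wait until all of them have finished
for p in procs:
    p.wait()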
Use the start command to initiate a process.
@echo off
start "" foo.py
start "" bar.py
start "" baz.py
Re comment: “is there a way to start these minimized?”
You can always ask how a command works by typing the command name followed by /?. In this case, start /? tells us its command-line options include:
MIN Start window minimized.
Hence, to start the application minimized, use:
start "" /MIN quux.py
Multiprocessing .py files simultaneously
Run as many .py files simultaneously as you want. Create a .bat file for each .py to start the Python file, and define all the .bat files in a list of lists. The second parameter in each inner list is a delay (in seconds) before that .bat file starts; don't use zero for the delay. It works fine. This way you leave the parallelism to the operating system, which is very fast and stable, and every .bat you start opens its own command window to interact with the user.
from apscheduler.schedulers.background import BackgroundScheduler
import datetime as dt
from os import system
from time import sleep

# [path to the .bat file, delay in seconds before starting it]
parallel_tasks = [[r"Drive:\YourPath\First.bat", 1], [r"Drive:\YourPath\Second.bat", 3]]

def DatTijd():
    Nu = dt.datetime.now()
    return Nu

def GetStartTime(Nu, seconds):
    StartTime = (Nu + dt.timedelta(seconds=seconds)).strftime("%Y-%m-%d %H:%M:%S")
    return StartTime

len_li = len(parallel_tasks)
sleepTime = parallel_tasks[len_li - 1][1] + 3
Nu = DatTijd()

for x in range(0, len_li):
    parallel_tasks[x][0] = 'start cmd /C ' + parallel_tasks[x][0]
    # if you want the command window to stay open after the task finishes, use cmd /K above
    delta = parallel_tasks[x][1]
    parallel_tasks[x][1] = GetStartTime(Nu, delta)

JobShedul = BackgroundScheduler()
JobShedul.start()
for x in range(0, len_li):
    JobShedul.add_job(system, 'date', run_date=parallel_tasks[x][1],
                      misfire_grace_time=3, args=[parallel_tasks[x][0]])

sleep(sleepTime)
JobShedul.shutdown()
exit()
Example.bat
@echo off
Title Python is running [Your Python Name]
cls
echo "[Your Python Name] is starting up ..."
cd /d Drive:\YourPathToPythonFile
python YourPyFile.py
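If APScheduler feels like a heavy dependency for this, the same staggered start can be sketched with just the standard library; the paths are the same placeholders as above, and the second field is still the offset in seconds from the moment the launcher starts:

import subprocess
import time

parallel_tasks = [[r"Drive:\YourPath\First.bat", 1], [r"Drive:\YourPath\Second.bat", 3]]

start = time.monotonic()
for bat, offset in parallel_tasks:
    # wait until this task's offset from the start has passed
    time.sleep(max(0, offset - (time.monotonic() - start)))
    # "start cmd /C" opens a separate command window, just like the version above
    subprocess.Popen("start cmd /C " + bat, shell=True)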
Related
I'm running a Python script where I'm using subprocess to call a series of rclone copy operations. Since rclone isn't a native command, I'm defining it in a shell script that is run automatically by my .bashrc file. I can confirm that works, since subprocess.run("rclone") properly pulls up the rclone help menu.
The issue is that when I run my script, I don't get any errors or exceptions; instead my terminal window shows the subprocess being stopped.
I understand the issue is related to the Linux subprocess being backgrounded. However, that solution didn't seem to fix my issue, and I can't find anything about how to prevent this process from pausing. I can confirm it is distro-independent, as I have run it on Red Hat and on an Amazon EC2 instance.
One last key piece of info: I am calling the subprocess with bash rather than sh in order to load the alias via my .bashrc file. Here is the minimal reproducible code:
from datetime import datetime, timedelta, timezone
import subprocess

start_date = datetime.strptime(datetime.now(timezone.utc).strftime("%Y%m%d"), "%Y%m%d")
# For good measure, double check the day before for more files if the date just changed
time = datetime.utcnow().strftime("%H")
if int(time) <= 3:
    start_date = start_date - timedelta(days=1)
    end_date = start_date + timedelta(days=2)
else:
    # End tomorrow
    end_date = start_date + timedelta(days=1)

# Force python to use the bash shell
def bash_command(cmd):
    subprocess.Popen(['/bin/bash', '-i', '-c', cmd])

# daterange() is a helper defined elsewhere in my script
for dt in daterange(start_date, end_date):
    cmd = 'rclone copy "/home/test.png" "AWS test:"'
    bash_command(cmd)
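The pausing typically happens because an interactive shell (-i) run in the background stops when it tries to read from the terminal. If the interactive flag is only there to pick up the rclone alias, one workaround sketch is to call the binary directly and drop -i; the /usr/bin/rclone path is an assumption, check yours with `which rclone`:

import subprocess

RCLONE = "/usr/bin/rclone"  # assumed location; adjust to the output of `which rclone`

def rclone_copy(src, dest):
    # Non-interactive call, so the child never tries to grab the terminal
    subprocess.run([RCLONE, "copy", src, dest], check=True)

rclone_copy("/home/test.png", "AWS test:")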
I have a Python script, run by cron:
"*/5 * * * * python /home/alex/scripts/checker > /dev/null &";
It has several purposes; one of them is to check for certain programs in the ps list and run them if they are not there. The problem is that when the script is run by cron, it does not start the programs in the background correctly; in the ps list all of them look like:
/usr/bin/python /home/alex/exec/runnable
So they look like Python scripts. When I launch my Python script manually, it seems to execute the runnable in the background correctly, but with cron nothing works.
Here's an example of the code:
def exec(file):
    file = os.path.abspath(file)
    os.system("chmod +x " + file)
    cmd = file
    #os.system(cmd)
    #subprocess.Popen([cmd])
    subprocess.call([cmd])
I tried different approaches but nothing seems to work right.
Here is some updated code:
pids = get_pids(program)
if pids == None:
    exec(program)
    print 'Restarted'
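If the intent is for the checker to start each program and leave it running on its own, one common pattern is to detach the child into its own session and point its streams at /dev/null. A Python 3 sketch; start_new_session and the DEVNULL redirection are assumptions about what the checker needs:

import os
import subprocess

def launch_detached(path):
    path = os.path.abspath(path)
    # New session, so the child is not tied to the cron job's lifetime,
    # and no inherited streams that could block it
    subprocess.Popen([path],
                     stdin=subprocess.DEVNULL,
                     stdout=subprocess.DEVNULL,
                     stderr=subprocess.DEVNULL,
                     start_new_session=True)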
How do I run 20-30 scripts sequentially, one by one, and after the last one has executed, start again from the first one, repeating this cycle on an hourly basis?
I tried to implement it using crontab, but it's a bulky way. I want to guarantee that only one script is running at any given moment. The execution time for each script is about 1 minute.
I wrote a bash script for this goal and am thinking of running it every hour using cron:
if ps ax | grep $0 | grep -v $$ | grep bash | grep -v grep
then
    echo "The script is already running."
    exit 1
else
    python script1.py
    python script2.py
    python script3.py
    ...
    python script30.py
fi
But is this a good way to do it?
From this question, I assume you only want to run the next program when the previous one has finished.
I suggest subprocess.call; it only returns when the called program has finished executing.
Here's an example. It will run script1, and then script2, when script1 has finished.
import subprocess

program_list = ['script1.py', 'script2.py']

for program in program_list:
    subprocess.call(['python', 'program'])
    print("Finished:" + program)
Correction to @twaxter's answer:
import subprocess

program_list = ['script1.py', 'script2.py']

for program in program_list:
    subprocess.call(['python', program])
    print("Finished:" + program)
You may use a for-loop:
scripts="script1.py script2.py script3.py"

for s in $scripts
do
    python $s
done
You can also use the built-in exec function:
program_list = ["script1.py", "script2.py"]

for program in program_list:
    exec(open(program).read())
    print("\nFinished: " + program)
If your files match a glob pattern:
files=( python*.py )

for f in "${files[@]}"
do
    python "$f"
done
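The same glob idea in Python, if you would rather keep everything in one language (the python*.py pattern is borrowed from the bash example above):

import glob
import subprocess
import sys

for f in sorted(glob.glob("python*.py")):
    # Run each matching script, one after the other
    subprocess.call([sys.executable, f])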
I have a series of time-consuming, independent bash scripts to be run in parallel on 7 CPU cores from a Python master script. I tried to implement this using the multiprocessing.Pool.map() function, iterating over numbers from the xrange(1, 300) sequence, where every number is used to define the name of a directory containing the bash script to be executed. The issue is that the following script spawns 7 processes for the bash script runs and finishes right after they are completed.
import multiprocessing
import os

a = os.getcwd()  # gets current path

def run(x):
    b = a + '/' + 'dir%r' % (x)  # appends the name of the targeted folder to the path
    os.chdir(b)  # switches to the targeted directory
    os.system('chmod +x run.sh')
    os.system('./run.sh')  # runs the time consuming script

if __name__ == "__main__":
    procs = 7
    p = multiprocessing.Pool(procs)
    p.map(run, xrange(1, 300))
    print "====DONE===="
I expect the other 292 shell scripts to be run as well, so what fix or alternative implementation could help me?
Thank you!
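For reference, here is a sketch of the same worker written with subprocess and an explicit cwd instead of os.chdir/os.system; it also reports each script's return code, which may help show why the remaining runs never happen. Whether it resolves the early exit depends on what is actually failing in your directories:

import multiprocessing
import os
import subprocess

base = os.getcwd()

def run(x):
    d = os.path.join(base, 'dir%d' % x)  # the folder that holds run.sh
    # cwd= avoids changing the worker's working directory;
    # the return code makes failures visible instead of silent
    return x, subprocess.call(['bash', 'run.sh'], cwd=d)

if __name__ == "__main__":
    pool = multiprocessing.Pool(7)
    for x, rc in pool.imap_unordered(run, range(1, 300)):
        if rc != 0:
            print("dir%d exited with code %d" % (x, rc))
    pool.close()
    pool.join()
    print("====DONE====")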
I am writing a Python script that is used for continuous integration and testing and will be called by Bitten. Our unit tests use the Google Test framework. Each software component has a bash script that runs configuration and other required services and then runs the gtest executable. The Python script walks the repository looking for the bash scripts and calls each one using the os.popen() command.
Python script (UnitTest.py)
#!/usr/bin/python
import os
import fnmatch
import sys
import subprocess

repository_location = '/home/actuv/workspace/eclipse/iccs/'
unit_test_script_name = 'RunUnitTests.sh'

def file_locator(repo, script_name):
    # Function for determining all unit test scripts
    test_location = []
    for root, dirnames, filenames in os.walk(repo):
        for filename in fnmatch.filter(filenames, script_name):
            test_location.append(os.path.join(root))
    return test_location

def run_tests(test_locations, script_name):
    # Runs test scripts located at each test location
    for tests in test_locations:
        cmd = 'cd ' + tests + ';./' + script_name
        print 'Running Unit Test at: ' + tests
        os.popen(cmd)

################ MAIN ################
# Find Test Locations
script_locations = file_locator(repository_location, unit_test_script_name)
# Run tests located at each location
run_tests(script_locations, unit_test_script_name)
# End of tests
sys.exit(0)
Bash Script
#!/bin/sh
echo "Running unit tests..."
# update the LD_LIBRARY_PATH to include paths to our shared libraries
# start the test server
# Run the tests
# wait to allow all processes to end before terminating the server
sleep 10s
When I run the bash script manually from a terminal window, it runs fine. When I have the Python script call the bash script, I get a segmentation fault on the TestSingleClient and TestMultiClientLA lines of the bash script.
Try replacing
os.popen(cmd)
with
proc = subprocess.Popen('./' + script_name, shell=True, cwd=tests)
proc.communicate()
Definitely check out the subprocess module, specifically the subprocess.call() convenience function. I threw in an os.path check to make sure your tests directory exists, too.
def run_tests(test_locations, script_name):
    # Runs test scripts located at each test location
    for tests in test_locations:
        # Make sure tests directory exists and is a dir
        if os.path.isdir(tests):
            print 'Running Unit Test at: ' + tests
            subprocess.call(script_name, shell=True, cwd=tests)
Also, you're correct in your observations about stdout and stderr causing issues, especially when there's lots of data. I use temp file(s) for stdout/stderr when there is a large or unknown amount of output.
Ex.
def execute_some_command(cmd="arbitrary_exe"):
    """Execute some command using subprocess.call()"""
    # open/create temporary file for stdout
    tmp_out = file('.tmp', 'w+')
    # Run command, pushing stdout to tmp_out file handle
    retcode = subprocess.call(cmd, stdout=tmp_out, shell=True)
    if retcode != 0:
        # do something useful (like bail out) based on the OS return code
        print "FAILED"
    # Flush any queued data
    tmp_out.flush()
    # Jump to the beginning
    tmp_out.seek(0)
    # Parse output
    for line in tmp_out.readlines():
        # do something useful with output
        pass
    # cleanup
    tmp_out.close()
    os.remove(tmp_out.name)
    return
Check out the methods on the Python file object for how to process your stdout data from the tmp_out file handle.
Good hunting.
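On Python 3.7 and newer you can get the same capture-and-inspect pattern without a temp file by using subprocess.run; a sketch of the equivalent:

import subprocess

def execute_some_command(cmd="arbitrary_exe"):
    """Execute some command and inspect its output using subprocess.run()."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    if result.returncode != 0:
        # do something useful (like bail out) based on the OS return code
        print("FAILED")
    for line in result.stdout.splitlines():
        # do something useful with each line of output
        pass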