I am writing my very first formal Python program using threading and multiprocessing on a Windows machine. I am unable to launch the processes, though, with Python giving the following message. The thing is, I am not launching my threads in the main module; the threads are handled in a separate module inside a class.
EDIT: By the way, this code runs fine on Ubuntu, but not on Windows.
RuntimeError:
        Attempt to start a new process before the current process
        has finished its bootstrapping phase.

        This probably means that you are on Windows and you have
        forgotten to use the proper idiom in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce a Windows executable.
My original code is pretty long, but I was able to reproduce the error in an abridged version. It is split into two files: the first is the main module, which does very little other than import the module that handles processes/threads and call a method. The second module is where the meat of the code is.
testMain.py:
import parallelTestModule
extractor = parallelTestModule.ParallelExtractor()
extractor.runInParallel(numProcesses=2, numThreads=4)
parallelTestModule.py:
import multiprocessing
from multiprocessing import Process
import threading

class ThreadRunner(threading.Thread):
    """ This class represents a single instance of a running thread"""
    def __init__(self, name):
        threading.Thread.__init__(self)
        self.name = name
    def run(self):
        print self.name,'\n'

class ProcessRunner:
    """ This class represents a single instance of a running process """
    def runp(self, pid, numThreads):
        mythreads = []
        for tid in range(numThreads):
            name = "Proc-" + str(pid) + "-Thread-" + str(tid)
            th = ThreadRunner(name)
            mythreads.append(th)
        for i in mythreads:
            i.start()
        for i in mythreads:
            i.join()

class ParallelExtractor:
    def runInParallel(self, numProcesses, numThreads):
        myprocs = []
        prunner = ProcessRunner()
        for pid in range(numProcesses):
            pr = Process(target=prunner.runp, args=(pid, numThreads))
            myprocs.append(pr)

        # if __name__ == 'parallelTestModule':    # This didn't work
        # if __name__ == '__main__':              # This obviously doesn't work
        # multiprocessing.freeze_support()        # added after seeing the error, to no avail
        for i in myprocs:
            i.start()
        for i in myprocs:
            i.join()
On Windows the subprocesses will import (i.e. execute) the main module at start. You need to insert an if __name__ == '__main__': guard in the main module to avoid creating subprocesses recursively.
Modified testMain.py:
import parallelTestModule

if __name__ == '__main__':
    extractor = parallelTestModule.ParallelExtractor()
    extractor.runInParallel(numProcesses=2, numThreads=4)
Try putting your code inside an if __name__ == '__main__': guard in testMain.py:
import parallelTestModule

if __name__ == '__main__':
    extractor = parallelTestModule.ParallelExtractor()
    extractor.runInParallel(numProcesses=2, numThreads=4)
See the docs:
"For an explanation of why (on Windows) the if __name__ == '__main__'
part is necessary, see Programming guidelines."
which say
"Make sure that the main module can be safely imported by a new Python
interpreter without causing unintended side effects (such a starting a
new process)."
... by using if __name__ == '__main__'
Though the earlier answers are correct, there's a small complication it would help to remark on.
In case your main module imports another module in which global variables or class member variables are defined and initialized to (or using) some new objects, you may have to condition that import in the same way:
if __name__ == '__main__':
    import my_module
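For example, imagine a (hypothetical) my_module that creates a process-backed object at import time; if the import were unconditional, every spawned child would re-run that side effect:

# my_module.py (hypothetical): has a side effect at import time
import multiprocessing

manager = multiprocessing.Manager()   # creating a Manager starts a helper process
shared = manager.dict()

With the conditional import above, only the parent process pays that cost; the children, which re-import the main module on Windows, never trigger it.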
As @Ofer said, when you are using other libraries or modules, you should import all of them inside the if __name__ == '__main__': block.

So, in my case, it ended up like this:
if __name__ == '__main__':
    import librosa
    import os
    import pandas as pd

    run_my_program()
Hello, here is my structure for multiprocessing:
from multiprocessing import Process
import time

start = time.perf_counter()

def do_something(time_for_sleep):
    print(f'Sleeping {time_for_sleep} second...')
    time.sleep(time_for_sleep)
    print('Done Sleeping...')

p1 = Process(target=do_something, args=[1])
p2 = Process(target=do_something, args=[2])

if __name__ == '__main__':
    p1.start()
    p2.start()

    p1.join()
    p2.join()

    finish = time.perf_counter()
    print(f'Finished in {round(finish-start, 2)} second(s)')
You don't have to put the imports inside the if __name__ == '__main__': block; just put the code you wish to run inside it.
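In other words, a layout like this also works (a sketch reusing the names from the answer above):

import librosa            # imports can stay at module level
import os
import pandas as pd

def run_my_program():
    ...                    # your actual work

if __name__ == '__main__':
    run_my_program()       # only the entry-point call needs the guard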
In YOLOv5 with Python 3.8.5:
if __name__ == '__main__':
    from yolov5 import train
    train.run()
In my case it was a simple bug in the code, using a variable before it was created. Worth checking that out before trying the above solutions. Why I got this particular error message, Lord knows.
The below solution should work for both Python multiprocessing and PyTorch multiprocessing.

As other answers mentioned, the fix is to have if __name__ == '__main__':, but I faced several issues in identifying where to start because I am using several scripts and modules. As soon as I called my first function inside main, everything before it started to create multiple processes (not sure why).

Putting it at the very first line (even before the imports) worked; guarding only the first function call returned a timeout error. The below is the first file of my code; multiprocessing is used after calling several functions, but putting main first seems to be the only fix here.
if __name__ == '__main__':
    from mjrl.utils.gym_env import GymEnv
    from mjrl.policies.gaussian_mlp import MLP
    from mjrl.baselines.quadratic_baseline import QuadraticBaseline
    from mjrl.baselines.mlp_baseline import MLPBaseline
    from mjrl.algos.npg_cg import NPG
    from mjrl.algos.dapg import DAPG
    from mjrl.algos.behavior_cloning import BC
    from mjrl.utils.train_agent import train_agent
    from mjrl.samplers.core import sample_paths
    import os
    import json
    import mjrl.envs
    import mj_envs
    import time as timer
    import pickle
    import argparse
    import numpy as np

    # ===============================================================================
    # Get command line arguments
    # ===============================================================================

    parser = argparse.ArgumentParser(description='Policy gradient algorithms with demonstration data.')
    parser.add_argument('--output', type=str, required=True, help='location to store results')
    parser.add_argument('--config', type=str, required=True, help='path to config file with exp params')
    args = parser.parse_args()

    JOB_DIR = args.output
    if not os.path.exists(JOB_DIR):
        os.mkdir(JOB_DIR)

    with open(args.config, 'r') as f:
        job_data = eval(f.read())
    assert 'algorithm' in job_data.keys()
    assert any([job_data['algorithm'] == a for a in ['NPG', 'BCRL', 'DAPG']])
    job_data['lam_0'] = 0.0 if 'lam_0' not in job_data.keys() else job_data['lam_0']
    job_data['lam_1'] = 0.0 if 'lam_1' not in job_data.keys() else job_data['lam_1']
    EXP_FILE = JOB_DIR + '/job_config.json'
    with open(EXP_FILE, 'w') as f:
        json.dump(job_data, f, indent=4)

    # ===============================================================================
    # Train Loop
    # ===============================================================================

    e = GymEnv(job_data['env'])
    policy = MLP(e.spec, hidden_sizes=job_data['policy_size'], seed=job_data['seed'])
    baseline = MLPBaseline(e.spec, reg_coef=1e-3, batch_size=job_data['vf_batch_size'],
                           epochs=job_data['vf_epochs'], learn_rate=job_data['vf_learn_rate'])

    # Get demonstration data if necessary and behavior clone
    if job_data['algorithm'] != 'NPG':
        print("========================================")
        print("Collecting expert demonstrations")
        print("========================================")
        demo_paths = pickle.load(open(job_data['demo_file'], 'rb'))

        ########################################################################################
        demo_paths = demo_paths[0:3]
        print(job_data['demo_file'], len(demo_paths))
        for d in range(len(demo_paths)):
            feats = demo_paths[d]['features']
            feats = np.vstack(feats)
            demo_paths[d]['observations'] = feats
        ########################################################################################

        bc_agent = BC(demo_paths, policy=policy, epochs=job_data['bc_epochs'], batch_size=job_data['bc_batch_size'],
                      lr=job_data['bc_learn_rate'], loss_type='MSE', set_transforms=False)
        in_shift, in_scale, out_shift, out_scale = bc_agent.compute_transformations()
        bc_agent.set_transformations(in_shift, in_scale, out_shift, out_scale)
        bc_agent.set_variance_with_data(out_scale)

        ts = timer.time()
        print("========================================")
        print("Running BC with expert demonstrations")
        print("========================================")
        bc_agent.train()
        print("========================================")
        print("BC training complete !!!")
        print("time taken = %f" % (timer.time() - ts))
        print("========================================")

        # if job_data['eval_rollouts'] >= 1:
        #     score = e.evaluate_policy(policy, num_episodes=job_data['eval_rollouts'], mean_action=True)
        #     print("Score with behavior cloning = %f" % score[0][0])

    if job_data['algorithm'] != 'DAPG':
        # We throw away the demo data when training from scratch or fine-tuning with RL without explicit augmentation
        demo_paths = None

    # ===============================================================================
    # RL Loop
    # ===============================================================================

    rl_agent = DAPG(e, policy, baseline, demo_paths,
                    normalized_step_size=job_data['rl_step_size'],
                    lam_0=job_data['lam_0'], lam_1=job_data['lam_1'],
                    seed=job_data['seed'], save_logs=True
                    )

    print("========================================")
    print("Starting reinforcement learning phase")
    print("========================================")

    ts = timer.time()
    train_agent(job_name=JOB_DIR,
                agent=rl_agent,
                seed=job_data['seed'],
                niter=job_data['rl_num_iter'],
                gamma=job_data['rl_gamma'],
                gae_lambda=job_data['rl_gae'],
                num_cpu=job_data['num_cpu'],
                sample_mode='trajectories',
                num_traj=job_data['rl_num_traj'],
                num_samples=job_data['rl_num_samples'],
                save_freq=job_data['save_freq'],
                evaluation_rollouts=job_data['eval_rollouts'])
    print("time taken = %f" % (timer.time()-ts))
I ran into the same problem. @Ofer's method is correct, but there are some details to pay attention to. The following is the successfully debugged code I modified, for your reference:
if __name__ == '__main__':
    import matplotlib.pyplot as plt
    import numpy as np

    def imgshow(img):
        img = img / 2 + 0.5
        np_img = img.numpy()
        plt.imshow(np.transpose(np_img, (1, 2, 0)))
        plt.show()

    dataiter = iter(train_loader)
    images, labels = dataiter.next()

    imgshow(torchvision.utils.make_grid(images))
    print(' '.join('%5s' % classes[labels[i]] for i in range(4)))
For the record, I don't have a subroutine; I just have a main program, but I had the same problem as you. This demonstrates that when importing a Python library file in the middle of a program segment, we should add:
if __name__ == '__main__':
I tried the tricks mentioned above on the following very simple code, but I still cannot stop it from resetting on any of my Windows machines with Python 3.8/3.10. I would very much appreciate it if you could tell me where I am wrong.
print('script reset')

def do_something(inp):
    print('Done!')

if __name__ == '__main__':
    from multiprocessing import Process, get_start_method
    print('main reset')
    print(get_start_method())
    Process(target=do_something, args=[1]).start()
    print('Finished')
The output displays:
script reset
main reset
spawn
Finished
script reset
Done!
Update:

As far as I understand it, you are not preventing either the script containing the __main__ or the .start() from resetting (which doesn't happen in Linux); rather, you are suggesting workarounds so that we don't see the reset. One has to make all imports minimal and put them in each function separately, but it is still slow relative to Linux.
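For example, the kind of workaround being described looks like this (a sketch; the numpy import just stands in for any expensive top-level import being deferred):

print('script reset')              # still runs once per spawned child

def do_something(inp):
    import numpy as np             # deferred: the cost is paid only where needed
    print('Done!')

if __name__ == '__main__':
    from multiprocessing import Process
    p = Process(target=do_something, args=[1])
    p.start()
    p.join()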
I have a_1.py ~ a_10.py, and I want to run the 10 Python programs in parallel.
I tried:
from multiprocessing import Process
import os

def info(title):
    ...  # I want to execute a Python program here

def f(name):
    for i in range(1, 11):
        subprocess.Popen(['python3', f'a_{i}.py'])

if __name__ == '__main__':
    info('main line')
    p = Process(target=f)
    p.start()
    p.join()
but it doesn't work
How do I solve this?
I would suggest using the subprocess module instead of multiprocessing:
import os
import subprocess
import sys

MAX_SUB_PROCESSES = 10

def info(title):
    print(title, flush=True)

if __name__ == '__main__':
    info('main line')

    # Create a list of subprocesses.
    processes = []
    for i in range(1, MAX_SUB_PROCESSES+1):
        pgm_path = f'a_{i}.py'  # Path to Python program.
        command = f'"{sys.executable}" "{pgm_path}" "{os.path.basename(pgm_path)}"'
        process = subprocess.Popen(command, bufsize=0)
        processes.append(process)

    # Wait for all of them to finish.
    for process in processes:
        process.wait()

    print('Done')
If you just need to call 10 external py scripts (a_1.py ~ a_10.py) as separate processes - use the subprocess.Popen class:
import subprocess, sys

for i in range(1, 11):
    subprocess.Popen(['python3', f'a_{i}.py'])

# sys.exit()  # optional
It's worth looking at the rich subprocess.Popen signature (you may find some useful params/options).
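For example, a few commonly useful options (a sketch, not specific to the question's scripts):

import subprocess, sys

proc = subprocess.Popen(
    [sys.executable, 'a_1.py'],    # run with the same interpreter as the parent
    stdout=subprocess.PIPE,        # capture the child's output
    stderr=subprocess.STDOUT,      # merge stderr into stdout
    text=True,                     # decode bytes to str
)
out, _ = proc.communicate()        # wait for the child and collect its output
print(proc.returncode, out)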
You can use a multiprocessing pool to run them concurrently.
import multiprocessing as mp

def worker(module_name):
    """ Executes a module externally with python """
    __import__(module_name)
    return

if __name__ == "__main__":
    max_processes = 5
    module_names = [f"a_{i}" for i in range(1, 11)]
    print(module_names)
    with mp.Pool(max_processes) as pool:
        pool.map(worker, module_names)
The max_processes variable is the maximum number of workers to have running at any given time; in other words, it's the number of processes spawned by your program. pool.map(worker, module_names) uses the available processes and calls worker on each item in your module_names list. We don't include the .py because we're running the modules by importing them.
Note: This might not work if the code you want to run in your modules is contained inside if __name__ == "__main__" blocks. If that is the case, then my recommendation would be to move all the code in the if __name__ == "__main__" blocks of the a_{} modules into a main function. Additionally, you would have to change the worker to something like:
def worker(module_name):
    module = __import__(module_name)  # Kind of like 'import module_name as module'
    module.main()
    return
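Each a_{} module would then look something like this (a sketch; main is the function name assumed by the modified worker above):

# a_1.py (sketch)
def main():
    print("running a_1")    # the code that used to live under the __main__ block

if __name__ == "__main__":
    main()                  # still runnable directly as a script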
I'm trying to write code that creates sub-processes using another module (demo_2.py), and exits the program if I get the wanted value from the sub-processes.

But the result looks like this: it seems that demo_1 makes two sub-processes that run demo_1 and load demo_2.

I want the sub-processes to only run demo_2. What did I miss?
demo_1.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from multiprocessing import Process, Queue
import sys
import demo_2 as A

def multi_process():
    print ("Function multi_process called!")
    process_status_A = Queue()
    process_status_B = Queue()
    A_Process = Process(target = A.process_A, args = (process_status_A,))
    B_Process = Process(target = A.process_A, args = (process_status_B,))
    A_Process.start()
    B_Process.start()
    while True:
        process_status_A_output = process_status_A.get()
        process_status_B_output = process_status_B.get()
        if process_status_A == 'exit' and process_status_B == 'exit':
            print ("Success!")
            break
    process_status_A.close()
    process_status_B.close()
    A_Process.join()
    B_Process.join()
    sys.exit()

print ("demo_1 started")

if __name__ == "__main__":
    multi_process()
demo_2.py
class process_A(object):
    def __init__(self, process_status):
        print ("demo_2 called!")
        process_status.put('exit')

    def call_exit(self):
        pass
if process_status_A == 'exit' and process_status_B == 'exit':
should be
if process_status_A_output == 'exit' and process_status_B_output == 'exit':
Conclusion: The naming of variables is important.
Avoid long variable names which are almost the same (such as process_status_A and process_status_A_output).
Placing the distinguishing part of the variable name first helps clarify the meaning of the variable.
So instead of
process_status_A_output
process_status_B_output
perhaps use
output_A
output_B
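With those shorter names, the loop from the question reads like this:

while True:
    output_A = process_status_A.get()
    output_B = process_status_B.get()
    if output_A == 'exit' and output_B == 'exit':
        print("Success!")
        break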
Because Windows lacks os.fork, every time a new subprocess is spawned on Windows, a new Python interpreter is started and the calling module is imported.
Therefore, code that you do not wish to be run in the spawned subprocess must be "protected" inside the if-statement (see in particular the section entitled "Safe importing of main module"):
Thus use
if __name__ == "__main__":
    print ("demo_1 started")
    multi_process()
to avoid printing the extra "demo_1 started" messages.
I am trying to process files in over 1000 directories. However, as the size of the directories is very big, I have to run the processing concurrently to save time. The dir[:100] is so that the function has only 100 directories to process. However, the second Thread does not start until the first Thread has finished.
def Sort(start, end):
    dir = os.listdir(filepath)
    dir = dir[start:end]
    for file in dir:
        # ~process~
        print result

if __name__ == '__main__':
    Thread(target = Sort(0,100)).start()
    Thread(target = Sort(100,200)).start()
I have tried duplicating the Sort function as Sort2(), but this yields the same result:
Thread(target = Sort(0,100)).start()
Thread(target = Sort2(100,200)).start()
Any help or direction is greatly appreciated.
The problem is that Sort and Sort2 are being evaluated before you start your threads. When you say Sort(...), the () is an operator that executes the Sort function.
This answer could say more about your problem.
You'll have to use the args parameter for the Thread constructor. For example:
Thread(target=Sort, args=(0,100)).start()
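Putting it together, a sketch reusing the Sort function from the question:

from threading import Thread

t1 = Thread(target=Sort, args=(0, 100))     # note: no parentheses after Sort
t2 = Thread(target=Sort, args=(100, 200))
t1.start()
t2.start()
t1.join()
t2.join()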
Alternatively, with the multiprocessing module:

import multiprocessing

def worker_sort(start, end):
    # includes the code of your sorting function
    pass

if __name__ == '__main__':
    jobs = []
    for i in range(2):
        p = multiprocessing.Process(target=worker_sort, args=(0, 100))
        jobs.append(p)
        p.start()
I'd recommend some reading about the multiprocessing module in the official documentation
I'm trying to create a script that uses the multiprocessing module with Python. The script (let's call it myscript.py) will get its input from another script via a pipe.

Assume that I call the scripts like this:
$ python writer.py | python myscript.py
And here is the code:
// writer.py
import time, sys

def main():
    while True:
        print "test"
        sys.stdout.flush()
        time.sleep(1)

main()
//myscript.py

def get_input():
    while True:
        text = sys.stdin.readline()
        print "hello " + text
        time.sleep(3)

if __name__ == '__main__':
    p1 = Process(target=get_input, args=())
    p1.start()
This is clearly not working, since the sys.stdin objects are different for the main process and p1. So I tried this to solve it:
//myscript.py

def get_input(temp):
    while True:
        text = temp.readline()
        print "hello " + text
        time.sleep(3)

if __name__ == '__main__':
    p1 = Process(target=get_input, args=(sys.stdin,))
    p1.start()
but I come across this error:
Process Process-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "in.py", line 12, in get_input
    text = temp.readline()
ValueError: I/O operation on closed file
So, I guess that main's stdin file is closed and I can't read from it. At this juncture, how can I pass main's stdin file to another process? If passing stdin is not possible, how can I use main's stdin from another process?
Update:

Okay, I need to clarify my question, since people think using multiprocessing is not really necessary. Consider myscript.py like this:
//myscript.py

def get_input():
    while True:
        text = sys.stdin.readline()
        print "hello " + text
        time.sleep(3)

def do_more_things():
    while True:
        #// some code here
        time.sleep(60*5)

if __name__ == '__main__':
    p1 = Process(target=get_input, args=())
    p1.start()

    do_more_things()
So, I really need to run the get_input() function in parallel with the main function (or other sub-processes).

Sorry for the confusion; I guess I couldn't make myself clear in this question. I would appreciate it if you could tell me whether I can use the main process's STDIN object in another process.

Thanks in advance.
The simplest thing is to swap get_input() and do_more_things() i.e., read sys.stdin in the parent process:
import sys
import multiprocessing as mp

def get_input(stdin):
    for line in iter(stdin.readline, ''):
        print("hello", line, end='')
    stdin.close()

if __name__ == '__main__':
    p1 = mp.Process(target=do_more_things)
    p1.start()
    get_input(sys.stdin)
The next best thing is to use a Thread() instead of a Process() for get_input():
from threading import Thread

if __name__ == '__main__':
    t = Thread(target=get_input, args=(sys.stdin,))
    t.start()
    do_more_things()
If the above doesn't help you could try os.dup():
import os

newstdin = os.fdopen(os.dup(sys.stdin.fileno()))
try:
    p = Process(target=get_input, args=(newstdin,))
    p.start()
finally:
    newstdin.close()  # close in the parent
do_more_things()
Each new process created with the multiprocessing module gets its own PID, and therefore its own standard input and output devices, even if they're both writing to the same terminal; hence the need for locks.

You're already creating two processes by separating the content into two scripts, and creating a third process with get_input(). get_input could read the standard input if it were a thread instead of a process. Then there is no need to have a sleep function in the reader.
## reader.py
from threading import Thread
import sys

def get_input():
    text = sys.stdin.readline()
    while len(text) != 0:
        print 'hello ' + text
        text = sys.stdin.readline()

if __name__ == '__main__':
    thread = Thread(target=get_input)
    thread.start()
    thread.join()
This will only be a partial answer - as I'm unclear about subsequent parts of the question.
You start by saying that you anticipate calling your scripts:
$ python writer.py | python myscript.py
If you're going to do that, writer needs to write to standard out and myscript needs to read from standard input. The second script would look like this:
def get_input():
    while True:
        text = sys.stdin.readline()
        print "hello " + text
        time.sleep(3)

if __name__ == '__main__':
    get_input()
There's no need for the multiprocessing.Process object... you're firing off two processes from the command line already - and you're using the shell to connect them with an (anonymous) pipe (the "|" character) that connects standard output from the first script to standard input from the second script.
The point of the Process object is to manage launch of a second process from the first. You'd need to define a process; then start it - then you'd probably want to wait until it has terminated before exiting the first process... (calling p1.join() after p1.start() would suffice for this).
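That lifecycle looks something like this (a sketch; work is just a stand-in function):

from multiprocessing import Process

def work():
    print('child running')

if __name__ == '__main__':
    p1 = Process(target=work)   # define the process
    p1.start()                  # launch it
    p1.join()                   # wait for it to terminate before exiting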
If you want to communicate between a pair of processes under Python control, you'll probably want to use the multiprocessing.Pipe object to do so. You can then easily communicate between the initial process and the subordinate spawned process by reading and writing to/from the Pipe object rather than standard input and standard output. If you really want to re-direct standard input and standard output, this is probably possible by messing with low-level file descriptors and/or by overriding/replacing the sys.stdin and sys.stdout objects... but, I suspect, you probably don't want (or need) to do this.
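A minimal sketch of that Pipe-based approach (the names here are illustrative, not from the question):

from multiprocessing import Process, Pipe

def child(conn):
    # receive exactly three messages from the parent's end of the pipe
    for _ in range(3):
        print('hello', conn.recv())
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=child, args=(child_conn,))
    p.start()
    for msg in ['a', 'b', 'c']:
        parent_conn.send(msg)
    p.join()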
To read the piped in input use fileinput:
myscript.py
import fileinput

if __name__ == '__main__':
    for line in fileinput.input():
        # do stuff here
        process_line(line)