Python overwriting lines in console

I am attempting to learn about concurrent programming with Python. I have put together a simple example script to show what I am attempting to do. The problem is that some of the output overwrites other lines in the console, so I am missing part of the output.
Here is my code:
import multiprocessing

lock = multiprocessing.Lock()

def dostuff(th):
    for x in range(8):
        lock.acquire()
        print(th, ": loop", x)
        lock.release()

def run():
    pool = multiprocessing.Pool()
    inputs = [1, 2, 3, 4]
    pool.map(dostuff, inputs)
    print('end')
Here is some of the output:
>>> run()
2 : loop 0
2 : loop 1
2 : loop 2
2 : loop 3
>>> run()
3 : loop 0
3 : loop 1
3 : loop 2
3 : loop 3
>>> run()
end
Here is some of the expected output:
>>> run()
2 : loop 0
3 : loop 1
1 : loop 1
4 : loop 2
>>> run()
end
Basically I want to show concurrency. Any help would be greatly appreciated. Thanks

OK, problem solved. I executed the script through Geany and it seems to be working perfectly. Thanks for all your support.
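For reference, here is a minimal sketch of one way to make the interleaved output visible when the script is run from a terminal. The Pool initializer pattern and flush=True are additions that are not in the original code; the initializer is used so the lock is also shared under the spawn start method used on Windows:
import multiprocessing

def init_worker(l):
    # store the shared lock in a module-level global inside each worker
    global lock
    lock = l

def dostuff(th):
    for x in range(8):
        with lock:
            # flush so each line appears immediately instead of being buffered
            print(th, ": loop", x, flush=True)

def run():
    lock = multiprocessing.Lock()
    with multiprocessing.Pool(initializer=init_worker, initargs=(lock,)) as pool:
        pool.map(dostuff, [1, 2, 3, 4])
    print('end')

if __name__ == "__main__":
    run()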

Related

How can I call a def to be executed simultaneously?

I want to call a function that takes a long time to execute (variable, from 0 to 300 seconds), and it has to be executed as many times as there are lists in a JSON file. How can I do it so that, for example, three executions run in parallel?
To keep the explanation simple, the code would be to call a function like this...
import time

def f(x, j):
    print(str(x*x)+" "+" "+str(j))
    time.sleep(2)

j = 0
for i in range(10):
    j += 1
    f(i, j)
result
thread 1 : 0, 1
thread 2 : 1, 2
thread 3 : 4, 3
thread 1 : 9, 4
thread 2 : 16, 5
thread 3 : 25, 6
thread 1 : 36, 7
thread 2: 49, 8
thread 3: 64, 9
thread 1: 81, 10
Thread three could end before thread one, but I always want three runs.
Take a look at ProcessPoolExecutor and ThreadPoolExecutor, which are APIs that allow the execution of concurrent tasks:
import time
from concurrent.futures import ProcessPoolExecutor

def f(x, j):
    print(str(x*x)+" "+" "+str(j))
    time.sleep(2)

if __name__ == "__main__":
    j = 0
    with ProcessPoolExecutor() as e:
        for i in range(10):
            e.submit(f, i, j)
You can also use the .map() method in this case. Read the documentation for more information.
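For instance, if you want at most three calls running in parallel (as in the question), a sketch using .map() could look like the following; the max_workers value and the paired ranges are only illustrative:
import time
from concurrent.futures import ProcessPoolExecutor

def f(x, j):
    print(str(x*x) + "  " + str(j))
    time.sleep(2)

if __name__ == "__main__":
    # at most three workers, so at most three calls of f run at the same time
    with ProcessPoolExecutor(max_workers=3) as e:
        # map pairs the two iterables element-wise: f(0, 1), f(1, 2), ..., f(9, 10)
        results = e.map(f, range(10), range(1, 11))
        list(results)  # drain the iterator so any worker exception is raised here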

How come using multiprocessing isn't any faster? Am I doing it wrong?

I made a Python script to solve a complex math problem and write the result to a text file. But it takes a long time, so I wanted to make it utilize more of my i7-7700K, because it only uses 18%. So I tried using multiprocessing, but it's not any faster. Am I doing it wrong?
(Side note: when I run it with multiprocessing, the text file is blank when it finishes, and I don't know why.)
I watched a bunch of YouTube videos on how to use multiprocessing.
import time
import multiprocessing

solve = open("solve.txt", "w")

def per1(n, steps=0):
    if steps == 0:
        print(n)
    if len(str(n)) == 1:
        print("Total steps " + str(steps))
        return "Done"
    steps += 1
    digits = [int(i) for i in str(n)]
    result = 1
    for j in digits:
        result *= j
    # print(result)
    per1(result, steps)
    S = 2
    solve = open("solve.txt", "r")
    if 'steps 1' in open('solve.txt').read():
        S = 2
    if 'steps 2' in open('solve.txt').read():
        S = 3
    if 'steps 3' in open('solve.txt').read():
        S = 4
    if 'steps 4' in open('solve.txt').read():
        S = 5
    if 'steps 5' in open('solve.txt').read():
        S = 6
    if 'steps 6' in open('solve.txt').read():
        S = 7
    if 'steps 7' in open('solve.txt').read():
        S = 8
    if steps == S:
        solve = open("solve.txt", "a")
        solve.write("# steps ")
        solve.write(str(steps))
        solve.write("\n")
        solve.write("x = ")
        solve.write(str(x))
        # solve.write(" results ")
        # solve.write(str(result))
        solve.write("\n")

x = 1
y = 2
start_time = time.time()

# with multiprocessing
if __name__ == "__main__":
    while x <= 2000:
    # while x <= 277777788888899:
        p1 = multiprocessing.Process(target=per1(x))  # works ish
        # p1 = multiprocessing.Process(target=per1, args=(x,))  # does not work
        print("P1")
        p2 = multiprocessing.Process(target=per1(y))  # works ish
        # p2 = multiprocessing.Process(target=per1, args=(y,))  # does not work
        print("P2")
        # per1(x)
        x += 2
        y += 2
    p1.start()
    p2.start()
    p1.join()
    p2.join()

    # normal way
    # while x <= 2000:
    # # while x <= 277777788888899:
    #     per1(x)
    #     x += 1

    print("My program took", time.time() - start_time, "to run")
    solve = open("solve.txt", "a")
    solve.write(str(time.time() - start_time))
    solve.close()
I split it into two processes: one tests the odd numbers, the other tests the even numbers. It works, but it's not any faster than the normal way.
I was expecting it to go twice as fast with multiprocessing.
You are calling your function in place and passing its result to multiprocessing. You must hand in the function as a callable and its arguments separately:
multiprocessing.Process(target=per1, args=(x,))
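To make the difference concrete, here is a small self-contained sketch (the work function and the value 5 are only illustrative, not part of the question's code):
import multiprocessing

def work(n):
    print("working on", n)

if __name__ == "__main__":
    # Wrong: work(5) is executed right here in the parent process, and its
    # return value (None) becomes the target, so the child does nothing.
    p_wrong = multiprocessing.Process(target=work(5))

    # Right: the callable and its arguments are handed over separately,
    # so work(5) actually runs inside the child process.
    p_right = multiprocessing.Process(target=work, args=(5,))
    p_right.start()
    p_right.join()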
Firstly, I will note that some insight into what your maths problem is would be helpful for suggesting optimizations that might otherwise speed up the execution. On a brief look I can determine there are a few things, including:
you check whether "steps 1" is written in solve.txt; this can never be the case, since S is at least 2 (or more) and you check if steps == S
to reach step N you must have visited all of steps 2 through N-1 before. This means it is valid to write something like the code below, which will also save a lot of unnecessary checking (perhaps less can be done knowing what the problem is, or time can be saved by checking in memory):
# since once 'S' isn't found, n >= S will not be in solve.txt
S = 2
while ("steps " + str(S)) in open('solve.txt').read() and S < 8:
    S += 1
solve.txt is opened at least twice during the program, maybe more depending on how many calls to per1 there are in the stack
Klaus D. has shown the right way to call the function, but also note that you only call .start() once x = 2001 and y = 2002. This immediately causes result to become 0, and hence no results are saved (as the length of 0 is 1), returning without any IO. You would be looking to do something like this (untested):
def driver(start, end, step):
    for i in range(start, end + 1, step):
        per1(i)

p1 = multiprocessing.Process(target=driver, args=(1, 2000, 2))
p2 = multiprocessing.Process(target=driver, args=(0, 2000, 2))
p1.start()
p2.start()
p1.join()
p2.join()
Also note that a new process is expensive to start but is then efficient once it has started, so multiprocessing should only be used if you know each process will have a long time to run. Hope this helps :)
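If you eventually want more than two workers, a Pool-based variant of the same idea is sketched below (untested, and it ignores the solve.txt bookkeeping inside per1); the pool dispatches each number to whichever worker process is currently idle:
import multiprocessing

if __name__ == "__main__":
    # one worker per CPU core by default
    with multiprocessing.Pool() as pool:
        pool.map(per1, range(1, 2001))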

Multiprocessing and Queues

This code is an attempt to use a queue to feed tasks to a number of worker processes.
I wanted to time the difference in speed between different numbers of processes and different methods for handling data.
But the output is not doing what I thought it would.
from multiprocessing import Process, Queue
import time

result = []
base = 2
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 23, 45, 76, 4567, 65423, 45, 4, 3, 21]

# create queue for new tasks
new_tasks = Queue(maxsize=0)

# put tasks in queue
print('Putting tasks in Queue')
for i in data:
    new_tasks.put(i)

# worker function definition
def f(q, p_num):
    print('Starting process: {}'.format(p_num))
    while not q.empty():
        # mimic some process being done
        time.sleep(0.05)
        print(q.get(), p_num)
    print('Finished', p_num)

print('initiating processes')
processes = []
for i in range(0, 2):
    if __name__ == '__main__':
        print('Creating process {}'.format(i))
        p = Process(target=f, args=(new_tasks, i))
        processes.append(p)

# record start time
start = time.time()

# start processes
for p in processes:
    p.start()

# wait for processes to finish
for p in processes:
    p.join()

# record end time
end = time.time()

# print time result
print('Time taken: {}'.format(end-start))
I expect this:
Putting tasks in Queue
initiating processes
Creating process 0
Creating process 1
Starting process: 1
Starting process: 0
1 1
2 0
3 1
4 0
5 1
6 0
7 1
8 0
9 1
10 0
11 1
23 0
45 1
76 0
4567 1
65423 0
45 1
4 0
3 1
21 0
Finished 1
Finished 0
Time taken: <some-time>
But instead I actually get this:
Putting tasks in Queue
initiating processes
Creating process 0
Creating process 1
Time taken: 0.01000523567199707
Putting tasks in Queue
Putting tasks in Queue
initiating processes
Time taken: 0.0
Starting process: 1
initiating processes
Time taken: 0.0
Starting process: 0
1 1
2 0
3 1
4 0
5 1
6 0
7 1
8 0
9 1
10 0
11 1
23 0
45 1
76 0
4567 1
65423 0
45 1
4 0
3 1
21 0
Finished 0
There seem to be two major problems, and I am not sure how related they are:
The print statements such as:
Putting tasks in Queue
initiating processes
Time taken: 0.0
are repeated systematically throughout the output - I say systematically because they repeat exactly the same way every time.
The second process never finishes; it never recognizes that the queue is empty and therefore fails to exit.
1) I cannot reproduce this.
2) Look at the following code:
while not q.empty():
    time.sleep(0.05)
    print(q.get(), p_num)
Each line can be run in any order by any process. Now consider q having a single item left and two processes, A and B, with the following order of execution:
# A runs
while not q.empty():
    time.sleep(0.05)
# B runs
while not q.empty():
    time.sleep(0.05)
# A runs
    print(q.get(), p_num)  # removes and prints the last element of q
# B runs
    print(q.get(), p_num)  # q is now empty, so q.get() blocks forever
Swapping the order of time.sleep and q.get removes the blocking in all of my runs, but it's still possible for more than one process to enter the loop with a single item left.
The way to fix this is to use a non-blocking get call and catch the queue.Empty exception:
import queue

while True:
    time.sleep(0.05)
    try:
        print(q.get(False), p_num)
    except queue.Empty:
        break
Your worker processes should look like this:
def f(q, p_num):
    print('Starting process: {}'.format(p_num))
    while True:
        value = q.get()
        if value is None:
            break
        # mimic some process being done
        time.sleep(0.05)
        print(value, p_num)
    print('Finished', p_num)
And the queue should be filled with markers after the real data:
for i in data:
    new_tasks.put(i)
for _ in range(num_of_threads):
    new_tasks.put(None)
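Putting the pieces together, a runnable sketch of the sentinel approach could look like this (the num_of_workers name and the __main__ guard are additions, not part of the original code):
from multiprocessing import Process, Queue
import time

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 23, 45, 76, 4567, 65423, 45, 4, 3, 21]
num_of_workers = 2

def f(q, p_num):
    print('Starting process: {}'.format(p_num))
    while True:
        value = q.get()
        if value is None:          # sentinel marker: no more work
            break
        time.sleep(0.05)           # mimic some processing being done
        print(value, p_num)
    print('Finished', p_num)

if __name__ == '__main__':
    new_tasks = Queue()
    for i in data:
        new_tasks.put(i)
    for _ in range(num_of_workers):
        new_tasks.put(None)        # one sentinel per worker

    processes = [Process(target=f, args=(new_tasks, i)) for i in range(num_of_workers)]
    start = time.time()
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    print('Time taken: {}'.format(time.time() - start))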

Multiprocessing and Queue with Dataframe

I am having some trouble exchanging an object (a DataFrame) between two processes through a Queue.
The first process gets data from the queue, and the second puts data into it.
The put process is faster, so the get process should clear the queue by reading all of the objects in it.
I've got strange behaviour: my code works perfectly and as expected, but only for 100 rows in the DataFrame; with 1000 rows the get process always takes only one object.
import multiprocessing, time, sys
import pandas as pd

NR_ROWS = 1000
i = 0

def getDf():
    global i, NR_ROWS
    myheader = ["name", "test2", "test3"]
    myrow1 = [i, i+400, i+250]
    df = pd.DataFrame([myrow1]*NR_ROWS, columns=myheader)
    i = i+1
    return df

def f_put(q):
    print "f_put start"
    while(1):
        data = getDf()
        q.put(data)
        print "P:", data["name"].iloc[0]
        sys.stdout.flush()
        time.sleep(1.55)

def f_get(q):
    print "f_get start"
    while(1):
        data = pd.DataFrame()
        while not q.empty():
            data = q.get()
            print "get"
        if not data.empty:
            print "G:", data["name"].iloc[0]
        else:
            print "nothing new"
        time.sleep(5.9)

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=f_put, args=(q,))
    p.start()
    while(1):
        f_get(q)
    p.join()
Output for the 100-row DataFrame; the get process takes all objects:
f_get start
nothing new
f_put start
P: 0 # put 1.object into the queue
P: 1 # put 2.object into the queue
P: 2 # put 3.object into the queue
P: 3 # put 4.object into the queue
get # get-process takes all 4 objects from the queue
get
get
get
G: 3
P: 4
P: 5
P: 6
get
get
get
G: 6
P: 7
P: 8
Output for the 1000-row DataFrame; the get process takes only one object:
f_get start
nothing new
f_put start
P: 0 # put 1.object into the queue
P: 1 # put 2.object into the queue
P: 2 # put 3.object into the queue
P: 3 # put 4.object into the queue
get <-- #!!! get-process takes ONLY 1 object from the queue!!!
G: 1
P: 4
P: 5
P: 6
get
G: 2
P: 7
P: 8
P: 9
P: 10
get
G: 3
P: 11
Any idea what I am doing wrong, and how to pass the bigger DataFrame through as well?
At the risk of not being able to provide a fully functional example, here is what goes wrong.
First of all, it's a timing issue.
I tried your code again with larger DataFrames (10000 or even 100000 rows) and I start to see the same things as you do. This means you see this behaviour as soon as the size of the arrays crosses a certain threshold that will be system- (CPU-?) dependent.
I modified your code a bit to make it easier to see what happens. First, 5 DataFrames are put into the queue without any custom time.sleep. In the f_get function I added a counter (and a time.sleep(0), see below) to the loop (while not q.empty()).
The new code:
import multiprocessing, time, sys
import pandas as pd
NR_ROWS = 10000
i = 0
def getDf():
global i, NR_ROWS
myheader = ["name", "test2", "test3"]
myrow1 = [ i, i+400, i+250]
df = pd.DataFrame([myrow1]*NR_ROWS, columns = myheader)
i = i+1
return df
def f_put(q):
print "f_put start"
j = 0
while(j < 5):
data = getDf()
q.put(data)
print "P:", data["name"].iloc[0]
sys.stdout.flush()
j += 1
def f_get(q):
print "f_get start"
while(1):
data = pd.DataFrame()
loop = 0
while not q.empty():
data = q.get()
print "get (loop: %s)" %loop
time.sleep(0)
loop += 1
time.sleep(1.)
if __name__ == "__main__":
q = multiprocessing.Queue()
p = multiprocessing.Process(target=f_put, args=(q,))
p.start()
while(1):
f_get(q)
p.join()
Now, if you run this for different numbers of rows, you will see something like this:
N=100:
f_get start
f_put start
P: 0
P: 1
P: 2
P: 3
P: 4
get (loop: 0)
get (loop: 1)
get (loop: 2)
get (loop: 3)
get (loop: 4)
N=10000:
f_get start
f_put start
P: 0
P: 1
P: 2
P: 3
P: 4
get (loop: 0)
get (loop: 1)
get (loop: 0)
get (loop: 0)
get (loop: 0)
What does this tell us?
As long as the DataFrame is small, your assumption that the put process is faster than the get process seems true: we can fetch all 5 items within one pass of the while not q.empty() loop.
But as the number of rows increases, something changes: the while-condition q.empty() evaluates to True (the queue reports empty) and the outer while(1) cycles.
This could mean that put is now slower than get and we have to wait. But even if we set the sleep time for the whole f_get to something like 15 seconds, we still get the same behaviour.
On the other hand, if we change the time.sleep(0) in the inner q.get() loop to 1,
while not q.empty():
    data = q.get()
    time.sleep(1)
    print "get (loop: %s)" % loop
    loop += 1
we get this:
f_get start
f_put start
P: 0
P: 1
P: 2
P: 3
P: 4
get (loop: 0)
get (loop: 1)
get (loop: 2)
get (loop: 3)
get (loop: 4)
This looks right! And it means that get actually does something strange: it seems that while it is still processing a get, the queue reports itself as empty, and only after the get is done is the next item available.
I'm sure there is a reason for that, but I'm not familiar enough with multiprocessing to see it.
Depending on your application, you could just add an appropriate time.sleep to your inner loop and see if that's enough.
Or, if you want to solve it properly (instead of using the time.sleep workaround), you could look into multiprocessing and look for information on blocking, non-blocking or asynchronous communication - I think the solution will be found there.
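One possible direction (a minimal sketch in the same Python 2 style as the question, not tested against the original setup) is to replace the q.empty() polling with a blocking get that uses a timeout:
from Queue import Empty  # standard-library module in Python 2; only needed for the exception type

def f_get(q):
    print "f_get start"
    while 1:
        try:
            # block for up to 10 seconds; get() only returns once the
            # DataFrame has been fully transferred through the queue
            data = q.get(True, 10)
            print "G:", data["name"].iloc[0]
        except Empty:
            print "nothing new"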

How to see the number of finished or remaining map_async jobs?

I use IPython's parallel-processing facility for a big map operation. While waiting for the map operation to finish, I'd like to display to the user how many of the jobs have finished, how many are running, and how many are remaining. How can I find that information?
Here is what I do. I create a profile that uses a local engine and start two workers. In the shell:
$ ipython profile create --parallel --profile=local
$ ipcluster start --n=2 --profile=local
Here is the client Python script:
#!/usr/bin/env python

def meat(i):
    import numpy as np
    import time
    import sys
    seconds = np.random.randint(2, 15)
    time.sleep(seconds)
    return seconds

import time
from IPython.parallel import Client

c = Client(profile='local')
dview = c[:]
ar = dview.map_async(meat, range(4))

elapsed = 0
while True:
    print 'After %d s: %d running' % (elapsed, len(c.outstanding))
    if ar.ready():
        break
    time.sleep(1)
    elapsed += 1
print ar.get()
Example output from the script:
After 0 s: 2 running
After 1 s: 2 running
After 2 s: 2 running
After 3 s: 2 running
After 4 s: 2 running
After 5 s: 2 running
After 6 s: 2 running
After 7 s: 2 running
After 8 s: 2 running
After 9 s: 2 running
After 10 s: 2 running
After 11 s: 2 running
After 12 s: 2 running
After 13 s: 2 running
After 14 s: 1 running
After 15 s: 1 running
After 16 s: 1 running
After 17 s: 1 running
After 18 s: 1 running
After 19 s: 1 running
After 20 s: 1 running
After 21 s: 1 running
After 22 s: 1 running
After 23 s: 1 running
[9, 14, 10, 3]
As you can see, I can get the number of currently running jobs, but not the number of jobs that have completed (or are remaining). How can I tell how many of map_async's jobs have finished?
The AsyncResult has a msg_ids attribute. The outstanding jobs are the intersection of that set with c.outstanding, and the completed jobs are the difference:
msgset = set(ar.msg_ids)
completed = msgset.difference(c.outstanding)
pending = msgset.intersection(c.outstanding)
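Dropped into the question's polling loop, that could look like this (a sketch; only the message-id bookkeeping is added):
msgset = set(ar.msg_ids)
elapsed = 0
while True:
    completed = msgset.difference(c.outstanding)
    pending = msgset.intersection(c.outstanding)
    print 'After %d s: %d finished, %d remaining' % (elapsed, len(completed), len(pending))
    if ar.ready():
        break
    time.sleep(1)
    elapsed += 1
print ar.get()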
