So I'm trying to build a robot that can drive autonomously. For that I need the robot to drive forward and check distance at the same time, and if the distance is less than a preferred distance, stop the forward movement. I've written the code below, but the two functions don't seem to run simultaneously, and they also don't interact. How can I make these two functions interact? If any more information is needed, I'm happy to supply it. Thanks!
from multiprocessing import Process
from TestS import distance
import Robot
import time
constant1 = True
min_distance = 15
def forward():
    global constant1
robot.forward(150) #forward movement, speed 150
time.sleep(2)
def distance_check():
global constant1
while constant1:
distance() #checking distance
dist = distance()
return dist
time.sleep(0.3)
if dist < min_distance:
constant1 = False
print 'Something in the way!'
break
def autonomy(): #autonomous movement
while True:
p1 = Process(target=forward)
p2 = Process(target=distance_check)
p1.start() #start up 2 processes
p2.start()
p2.join() #wait for p2 to finish
So, there are some serious problems with the code you posted. First, you don't want the distance_check process to finish, because it's running a while loop. You should not call p2.join(), nor should you be starting new processes all the time in your while loop. You're mixing too many ways of doing things here - either the two children run forever, or they each run once, not a mix.
However, the main problem is that the child processes can't communicate with the original process, even via global variables (unless you do some more work). Threads are much more suited to this problem.
You also have a return inside your distance_check() function, so no code below that statement gets executed (including the sleep, and the setting of constant1, which should really have a better name).
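For reference, the "more work" mentioned above would be something like a multiprocessing.Value shared between the processes - a minimal sketch, not needed if you switch to threads:

from multiprocessing import Process, Value
import time

can_move = Value('b', True)   # one shared byte, visible to both processes

def forward(flag):
    while flag.value:         # reads the shared byte, not a module global
        time.sleep(0.1)       # stand-in for the movement code

def distance_check(flag):
    time.sleep(1)             # stand-in for the sensor loop
    flag.value = False        # this write is seen by the other process

if __name__ == '__main__':
    p1 = Process(target=forward, args=(can_move,))
    p2 = Process(target=distance_check, args=(can_move,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()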
In summary, I think you want something like this:
from threading import Thread
from TestS import distance
import Robot
import time
can_move_forward = True
min_distance = 15
def move_forward():
global can_move_forward
while can_move_forward:
robot.forward(150)
time.sleep(2)
print('Moving forward for two seconds!')
def check_distance():
global can_move_forward
while True:
if distance() < min_distance:
can_move_forward = False
print('Something in the way! Checking again in 0.3 seconds!')
time.sleep(0.3)
def move_forward_and_check_distance():
p1 = Thread(target = move_forward)
p2 = Thread(target = check_distance)
p1.start()
p2.start()
Since you specified python-3.x in your tags, I've also corrected your print.
Obviously I can't check that this will work as you want it to because I don't have your robot, but I hope that this is at least somewhat helpful.
One issue with your multiprocessing solution is that distance_check returns early, so the code after the return statement never runs:
dist = distance()
return dist # <------
time.sleep(0.3)
if dist < min_distance:
....
It seems like you are trying to exchange information between the processes, which is typically done using Queues or Pipes.
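For example, here is a minimal sketch (not your robot code) of one process feeding distance readings to another through a multiprocessing.Queue - distance_source() just stands in for the real sensor:

from multiprocessing import Process, Queue
import random
import time

def distance_source(q):
    # stand-in for the real sensor: push a reading every 0.3 s
    while True:
        q.put(random.uniform(5, 50))
        time.sleep(0.3)

if __name__ == '__main__':
    q = Queue()
    Process(target=distance_source, args=(q,), daemon=True).start()
    while True:
        dist = q.get()   # blocks until a reading arrives
        if dist < 15:
            print('Something in the way!')
            break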
I read between the lines of your question and came up with the following specs:
a robot moves if its speed is greater than zero
continually check for obstacles in front of the robot
stop the robot if it gets too close to something.
I think you can achieve your goal without using multiprocessing. Here is a solution that uses generators/coroutines.
For testing purposes, I have written my own versions of a robot and an obstacle sensor, trying to mimic what I see in your code:
class Robot:
def __init__(self, name):
self.name = name
def forward(self, speed):
print('\tRobot {} forward speed is {}'.format(self.name, speed))
if speed == 0:
            print('\tRobot {} stopped'.format(self.name))
def distance():
'''User input to simulate obstacle sensor.'''
d = int(input('distance? '))
return d
Decorator to start a coroutine/generator:
def consumer(func):
def wrapper(*args,**kw):
gen = func(*args, **kw)
next(gen)
return gen
wrapper.__name__ = func.__name__
wrapper.__dict__ = func.__dict__
wrapper.__doc__ = func.__doc__
return wrapper
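(The three wrapper.__name__/__dict__/__doc__ assignments reproduce what functools.wraps does, so the decorator could equally be written this way - a minor variant:)

import functools

def consumer(func):
    @functools.wraps(func)   # copies __name__, __doc__, etc. onto wrapper
    def wrapper(*args, **kw):
        gen = func(*args, **kw)
        next(gen)            # prime the generator so it can accept send()
        return gen
    return wrapper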
A producer to continually check whether it is safe to move:
def can_move(target, min_distance = 15):
'''Continually check for obstacles'''
while distance() > min_distance:
target.send(True)
print('check distance')
target.close()
A generator/coroutine that consumes safe-to-move signals and changes the robot's speed as needed.
@consumer
def forward():
try:
while True:
if (yield):
robot.forward(150)
except GeneratorExit as e:
# stop the robot
robot.forward(0)
The robot's speed should change as fast as the obstacle sensor can produce distances. The robot will move forward until it gets close to something, then it just stops and everything shuts down. By tweaking the logic a bit in forward and can_move, you could change the behaviour so that the generators/coroutines keep running but send a zero-speed command as long as something is in front of the robot; then, when the thing gets out of the way (or the robot turns), it will start moving again.
Usage:
>>>
>>> robot = Robot('Foo')
>>> can_move(forward())
distance? 100
	Robot Foo forward speed is 150
check distance
distance? 50
	Robot Foo forward speed is 150
check distance
distance? 30
	Robot Foo forward speed is 150
check distance
distance? 15
	Robot Foo forward speed is 0
	Robot Foo stopped
>>>
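The tweak mentioned above (keep the coroutines running and send a zero speed while blocked) could look roughly like this - just a sketch, reusing the same consumer decorator and robot:

def can_move(target, min_distance=15):
    '''Continually report whether the path is clear.'''
    while True:
        target.send(distance() > min_distance)

@consumer
def forward():
    while True:
        clear = yield
        robot.forward(150 if clear else 0)   # zero speed while blocked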
While this works in Python 3.6, it is based on a possibly outdated notion/understanding of generators and coroutines. There may be a different way to do this with some of the async additions to Python 3+.
Related
I'm trying to learn something a little new in each mini-project I do. I've made a Game of Life (https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life) program.
This involves a numpy array where each point in the array (a "cell") has an integer value. To evolve the state of the game, you have to compute for each cell the sum of all its neighbour values (8 neighbours).
The relevant class in my code is as follows, where evolve() takes in one of the xxx_method methods. It works fine for conv_method and loop_method, but I want to use multiprocessing (which I've identified should work, unlike multithreading?) on loop_method to see any performance increase. I feel it should work, as each calculation is independent. I've tried a naive approach, but don't really understand the multiprocessing module well enough. Could I also use it within the evolve() method, as again I feel that each calculation within the double for loop is independent?
Any help appreciated, including general code comments.
Edit - I'm getting a RuntimeError, which I'm half-expecting, as my understanding of multiprocessing isn't good enough. What needs to be done to the code to get it to work?
import numpy as np
from multiprocessing import Process, cpu_count
from scipy.signal import correlate2d

class GoL:
""" Game Engine """
def __init__(self, size):
self.size = size
        self.grid = Grid(size) # Grid is another class I've defined
    def evolve(self, neighbour_sum_func):
new_grid = np.zeros_like(self.grid.cells) # start with everything dead, only need to test for keeping/turning alive
        neighbour_sum_array = neighbour_sum_func()
for i in range(self.size):
for j in range(self.size):
cell_sum = neighbour_sum_array[i,j]
if self.grid.cells[i,j]: # already alive
if cell_sum == 2 or cell_sum == 3:
new_grid[i,j] = 1
else: # test for dead coming alive
if cell_sum == 3:
new_grid[i,j] = 1
self.grid.cells = new_grid
def conv_method(self):
""" Uses 2D convolution across the entire grid to work out the neighbour sum at each cell """
kernel = np.array([
[1,1,1],
[1,0,1],
[1,1,1]],
dtype=int)
neighbour_sum_grid = correlate2d(self.grid.cells, kernel, mode='same')
return neighbour_sum_grid
def loop_method(self, partition=None):
""" Also works out neighbour sum for each cell, using a more naive loop method """
if partition is None:
cells = self.grid.cells # no multithreading, just work on entire grid
else:
cells = partition # just work on a set section of the grid
neighbour_sum_grid = np.zeros_like(cells) # copy
for i, row in enumerate(cells):
for j, cell_val in enumerate(row):
neighbours = cells[i-1:i+2, j-1:j+2]
neighbour_sum = np.sum(neighbours) - cell_val
neighbour_sum_grid[i,j] = neighbour_sum
return neighbour_sum_grid
def multi_loop_method(self):
cores = cpu_count()
procs = []
slices = []
        if cores == 2: # for my VM; need to implement a generalised method for more cores
half_grid_point = int(SQUARES / 2)
slices.append(self.grid.cells[0:half_grid_point])
slices.append(self.grid.cells[half_grid_point:])
else:
            raise Exception  # placeholder: more than 2 cores not handled yet
for sl in slices:
proc = Process(target=self.loop_method, args=(sl,))
proc.start()
procs.append(proc)
for proc in procs:
proc.join()
I want to use multiprocessing (which I've identified should work, unlike multithreading?)
Multithreading would not work because it would run on a single processor, which is your current bottleneck. Multithreading is for things where you are waiting for an API to answer; in the meantime you can do other calculations. But in Conway's Game of Life your program is constantly computing.
Getting multiprocessing right is hard. If you have 4 processors you can define a quadrant for each processor, but you need to share the results between the processors, and this costs you performance. The processes need to be synchronized - running at the same tick rate for updating - and the results need to be shared.
Multiprocessing starts being feasible when your grid is very big/there is a ton to calculate.
Since the question is very broad and complicated, I cannot give you a better answer. There is a paper on getting parallel processing on Conway's Game of Life: http://www.shodor.org/media/content/petascale/materials/UPModules/GameOfLife/Life_Module_Document_pdf.pdf
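That said, if you want to experiment, here is a rough sketch (not a fix of your multi_loop_method, just one way it could work) that splits the grid into horizontal strips and uses Pool.map. The workers return their results to the parent instead of writing shared state, which sidesteps the synchronization problem, and the if __name__ == '__main__' guard is the usual fix for the RuntimeError you mention:

import numpy as np
from multiprocessing import Pool
from scipy.signal import correlate2d

KERNEL = np.array([
    [1, 1, 1],
    [1, 0, 1],
    [1, 1, 1]], dtype=int)

def strip_sums(strip):
    # each worker computes neighbour sums for one horizontal strip
    return correlate2d(strip, KERNEL, mode='same')

def parallel_neighbour_sums(cells, workers=2):
    bounds = np.linspace(0, cells.shape[0], workers + 1, dtype=int)
    # pad each strip with one overlapping row so border cells still see
    # their neighbours in the adjacent strip
    strips = [cells[max(a - 1, 0):b + 1] for a, b in zip(bounds, bounds[1:])]
    pool = Pool(workers)
    results = pool.map(strip_sums, strips)
    pool.close()
    pool.join()
    # trim the overlap rows before stitching the strips back together
    trimmed = []
    for (a, b), res in zip(zip(bounds, bounds[1:]), results):
        top = 1 if a > 0 else 0
        trimmed.append(res[top:top + (b - a)])
    return np.vstack(trimmed)

if __name__ == '__main__':
    grid = np.random.randint(0, 2, (200, 200))
    # sanity check against the single-process convolution
    assert np.array_equal(parallel_neighbour_sums(grid),
                          correlate2d(grid, KERNEL, mode='same'))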
I have been looking around for some time, but haven't had luck finding an example that could solve my problem. I have added an example from my code. As one can notice, this is slow and the two functions could be done separately.
My aim is to print the latest parameter values every second. At the same time, the slow processes can be calculated in the background. The latest value is shown, and when any process is ready the value is updated.
Can anybody recommend a better way to do it? An example would be really helpful.
Thanks a lot.
import time
def ProcessA(parA):
# imitate slow process
time.sleep(5)
parA += 2
return parA
def ProcessB(parB):
# imitate slow process
time.sleep(10)
parB += 5
return parB
# start from here
i, parA, parB = 1, 0, 0
while True: # endless loop
print(i)
print(parA)
print(parB)
time.sleep(1)
i += 1
# update parameter A
parA = ProcessA(parA)
# update parameter B
parB = ProcessB(parB)
I imagine this should do it for you. This has the benefit of letting you add extra parallel functions, up to a total equal to the number of cores you have. Edits are welcome.
#import time module
import time
#import the appropriate multiprocessing functions
from multiprocessing import Pool

#define your functions
#whatever your slow function is
def slowFunction(x):
    return someFunction(x)

#printingFunction: print the current value until the new result is ready
def printingFunction(result, current, timeDelay):
    while not result.ready():
        print(current)
        time.sleep(timeDelay)

#set the initial value that will be printed.
#Depending on your function this may take some time.
CurrentValue = slowFunction(someTemporallyDynamicVariable)

#establish your pool
pool = Pool()
while True: #endless loop
    #an asynchronous call: apply_async takes the function and its
    #arguments separately and returns immediately with an AsyncResult
    #that keeps computing in the background while your printing operates
    NewValue = pool.apply_async(slowFunction, (someTemporallyDynamicVariable,))
    printingFunction(NewValue, CurrentValue, 1)
    CurrentValue = NewValue.get() #fetch the finished result
#close your pool
pool.close()
I need to implement something like this
def turnOn(self):
self.isTurnedOn = True
while self.isTurnedOn:
updateThread = threading.Thread(target=self.updateNeighborsList, args=())
updateThread.daemon = True
updateThread.start()
time.sleep(1)
def updateNeighborsList(self):
self.neighbors=[]
for candidate in points:
distance = math.sqrt((candidate.X-self.X)**2 + (candidate.Y-self.Y)**2)
if distance <= maxDistance and candidate!=self and candidate.isTurnedOn:
self.neighbors.append(candidate)
print self.neighbors
print points
This is a class member function; the updateNeighborsList function should be called every second as long as self.isTurnedOn is True.
When I create a class object and call the turnOn function, none of the following statements are executed: it takes control and gets stuck in that while loop. But I need a lot of objects of the class.
What is the correct way to do this kind of thing?
I think you'd be better off creating a single Thread when turnOn is called, and have the looping happen inside that thread:
def turnOn(self):
self.isTurnedOn = True
self.updateThread = threading.Thread(target=self.updateNeighborsList, args=())
self.updateThread.daemon = True
self.updateThread.start()
def updateNeighborsList(self):
while self.isTurnedOn:
self.neighbors=[]
for candidate in points:
distance = math.sqrt((candidate.X-self.X)**2 + (candidate.Y-self.Y)**2)
if distance <= maxDistance and candidate!=self and candidate.isTurnedOn:
self.neighbors.append(candidate)
print self.neighbors
print points
time.sleep(1)
Note, though, that doing mathematical calculations inside of a thread will not improve performance at all using CPython, because of the Global Interpreter Lock. In order to utilize multiple cores in parallel, you'll need to use the multiprocessing module instead. However, if you're just trying to prevent your main thread from blocking, feel free to stick with threads. Just know that only one thread will ever actually be running at a time.
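For instance, here is a rough sketch of farming the distance math out to a process pool - using plain coordinate tuples rather than your point objects, since arguments must be picklable to cross process boundaries:

import math
from multiprocessing import Pool

def dist_from(args):
    # unpack ((x, y), (candidate_x, candidate_y))
    (x, y), (cx, cy) = args
    return math.sqrt((cx - x) ** 2 + (cy - y) ** 2)

if __name__ == '__main__':
    me = (0.0, 0.0)
    candidates = [(3.0, 4.0), (6.0, 8.0), (1.0, 1.0)]
    pool = Pool()
    distances = pool.map(dist_from, [(me, c) for c in candidates])
    pool.close()
    pool.join()
    print distances  # [5.0, 10.0, 1.4142135623730951]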
I've got this program:
import multiprocessing
import time
def timer(sleepTime):
time.sleep(sleepTime)
fooProcess.terminate()
fooProcess.join() #line said to "cleanup", not sure if it is required, refer to goo.gl/Qes6KX
def foo():
i=0
    while 1:
        print i
        time.sleep(1)
        i += 1
if i==4:
#pause timerProcess for X seconds
fooProcess = multiprocessing.Process(target=foo, name="Foo", args=())
timer()
fooProcess.start()
And as you can see in the comment, under certain conditions (in this example i has to be 4) the timer has to stop for a certain X time, while foo() keeps working.
Now, how do I implement this?
N.B.: this code is just an example, the point is that I want to pause a process under certain conditions for a certain amount of time.
I think you're going about this wrong for game design. Games always (no exceptions come to mind) use a primary event loop controlled in software.
Each time through the loop you check the time and fire off all the necessary events based on how much time has elapsed. At the end of the loop you sleep only as long as necessary before the next timer or event or refresh or AI check or other state change.
This gives you the best performance regarding lag, consistency, predictability, and other timing features that matter in games.
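A rough sketch of such a loop, assuming hypothetical fire_due_events() and next_deadline() helpers:

import time

def game_loop():
    while True:
        now = time.time()
        fire_due_events(now)          # timers, AI checks, refreshes...
        delay = next_deadline(now) - time.time()
        if delay > 0:
            time.sleep(delay)         # sleep only as long as necessary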
Roughly:
1. Get the current timestamp at start time (time.time(), I presume).
2. Sleep with Event.wait(timeout=...).
3. Wake up on an Event or timeout.
4. If on Event: get a timestamp, subtract the initial one, subtract the result from the timer; wait until foo() stops; repeat Event.wait(timeout=[result from 4.]).
5. If on timeout: exit.
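A minimal sketch of that recipe, with hypothetical pause_request/resume_request Events that foo() would set:

import time

def timer(sleep_time, pause_request, resume_request):
    remaining = sleep_time
    while remaining > 0:
        start = time.time()
        # wait() returns True if the event was set, False on timeout
        if pause_request.wait(timeout=remaining):
            pause_request.clear()
            remaining -= time.time() - start   # subtract the elapsed time
            resume_request.wait()              # block until foo() says to resume
            resume_request.clear()
        else:
            break   # timeout: exit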
Here is an example of how I understand what your program should do:
import threading, time, datetime
ACTIVE = True
def main():
while ACTIVE:
print "im working"
time.sleep(.3)
def run(thread, timeout):
global ACTIVE
thread.start()
time.sleep(timeout)
ACTIVE = False
thread.join()
proc = threading.Thread(target = main)
print datetime.datetime.now()
run(proc, 2) # run for 2 seconds
print datetime.datetime.now()
In main() it does a periodic task, here printing something. In the run() method you can say how long main should do the task.
This code produces the following output:
2014-05-25 17:10:54.390000
im working
im working
im working
im working
im working
im working
im working
2014-05-25 17:10:56.495000
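The same idea works with a threading.Event instead of the ACTIVE global, which is a little more idiomatic - a sketch:

import threading, time

stop = threading.Event()

def main():
    while not stop.is_set():
        print "im working"
        time.sleep(.3)

def run(thread, timeout):
    thread.start()
    time.sleep(timeout)
    stop.set()     # tell main() to finish
    thread.join()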
Please correct me if I've understood you wrong.
I would use multiprocessing.Pipe for signaling, combined with select for timing:
#!/usr/bin/env python
import multiprocessing
import select
import time
def timer(sleeptime,pipe):
start = time.time()
while time.time() < start + sleeptime:
n = select.select([pipe],[],[],1) # sleep in 1s intervals
for conn in n[0]:
val = conn.recv()
print 'got',val
start += float(val)
def foo(pipe):
i = 0
while True:
print i
i += 1
time.sleep(1)
if i%7 == 0:
pipe.send(5)
if __name__ == '__main__':
mainpipe,foopipe = multiprocessing.Pipe()
fooProcess = multiprocessing.Process(target=foo,name="Foo",args=(foopipe,))
fooProcess.start()
timer(10,mainpipe)
fooProcess.terminate()
# since we terminated, mainpipe and foopipe are corrupt
del mainpipe, foopipe
# ...
print 'Done'
I'm assuming that you want some condition in the foo process to extend the timer. In the sample I have set up, every time foo hits a multiple of 7 it extends the timer by 5 seconds, while the timer initially counts down 10 seconds. At the end of the timer we terminate the process - foo won't finish nicely at all, and the pipes will get corrupted, but you can be certain that it will die. Otherwise you can send a signal back along mainpipe that foo can listen for and exit nicely while you join.
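That graceful alternative could look roughly like this - a sketch of a drop-in variant of foo from the sample above, which polls its end of the pipe for a stop message instead of being terminated:

def foo(pipe):
    i = 0
    while True:
        if pipe.poll() and pipe.recv() == 'stop':  # non-blocking check
            break   # exit nicely
        print i
        i += 1
        time.sleep(1)

# in the parent, once the timer runs out:
# mainpipe.send('stop')
# fooProcess.join()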
Let's say I've got two functions:
def moveMotorToPosition(position, velocity):
#moves motor to a particular position
#does not terminate until motor is at that position
and
def getMotorPosition():
#retrieves the motor position at any point in time
In practice, what I want is to have the motor oscillating back and forth (by having a loop that calls moveMotorToPosition twice: once with a positive position and once with a negative position).
While that 'control' loop is iterating, I want a separate while loop to be pulling data at some frequency by calling getMotorPosition. I would then set a timer on this loop that would let me set the sampling frequency.
In LabVIEW (the motor controller supplies a DLL to hook into) I achieve this with 'parallel' while loops. I've never done anything with parallelism in Python before, and I'm not exactly sure which is the most compelling direction to head.
To point you a little closer to what it sounds like you're wanting:
import threading
def poll_position(fobj, seconds=0.5):
"""Call once to repeatedly get statistics every N seconds."""
position = getMotorPosition()
# Do something with the position.
# Could store it in a (global) variable or log it to file.
print position
    fobj.write(str(position) + '\n')  # position may be numeric, so convert
# Set a timer to run this function again.
t = threading.Timer(seconds, poll_position, args=[fobj, seconds])
t.daemon = True
t.start()
def control_loop(positions, velocity):
"""Repeatedly moves the motor through a list of positions at a given velocity."""
while True:
for position in positions:
moveMotorToPosition(position, velocity)
if __name__ == '__main__':
    # Start the position-gathering thread, logging to a file
    # (the filename here is just an example).
    log_file = open('positions.log', 'w')
    poll_position(log_file)
    # Define `first_position`, `second_position`, and `velocity` as they
    # relate to `moveMotorToPosition()`.
    control_loop([first_position, second_position], velocity)