Multi-task operation using threading is very slow - Python

I am trying to make a Python application that reads from and writes to a PLC through OPC.
The problem I am facing right now: I have to read some data from the PLC and, depending on that data, write some data back.
This is a continuous process, so I am using multithreading to handle it.
For example:
def callallsov(self):
    while True:
        self.allsov1sobjects.setup(self.sovelementlist)

def callallmotor(self):
    while True:
        self.allmotor1dobjects.setup(self.motorelementlist)

def callallanalog(self):
    while True:
        self.allanalogobjects.setup(self.analogelementlist)

self.writsovthread = threading.Thread(target=self.callallsov)
self.writemotorthread = threading.Thread(target=self.callallmotor)
self.writeanalogthread = threading.Thread(target=self.callallanalog)
self.writsovthread.start()
self.writemotorthread.start()
self.writeanalogthread.start()
Calling structure:
def setup(self, elementlist):
    try:
        n = 0
        self.listofmotor1D.clear()
        while n < len(self.df.index):
            self.df.iloc[n, 0] = Fn_Motor1D(self.com, self.df, elementlist, n)
            self.listofmotor1D.append(self.df.iloc[n, 0])
            n = n + 1
Read and write PLC tag:
if tagname == self.cmd:
    if tagvalue == self.gen.readgeneral.readnodevalue(self.cmd):
        # sleep(self.delaytime)
        self.gen.writegeneral.writenodevalue(self.runingFB, 1)
    else:
        self.gen.writegeneral.writenodevalue(self.runingFB, 0)
    break
Each element list has about 100 devices. I determine the type of each element and then create a runtime object for it.
For example, in the case of a motor, if the motor command is present we need to set its run feedback (runfb) high.
I have to follow the same process for all 100 motor devices.
That is why I use while True here: to check the data from the PLC continuously.
Because of the 100 motors (a large amount of data) it takes a long time to write to the PLC, while the PLC scan cycle is very fast.
So my first question is: I haven't used join() here; is that correct?
Secondly, how can I avoid this sluggishness?
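For illustration only, here is a minimal sketch of two common ways to reduce this kind of load: write a tag only when its value actually changes, and fan the per-device work out to one bounded thread pool instead of several free-running while True loops. The read_tag/write_tag helpers and the device attributes are hypothetical stand-ins for the OPC wrapper shown above, not part of the original question.

import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the OPC read/write wrappers in the question
# (readgeneral.readnodevalue / writegeneral.writenodevalue).
def read_tag(node_id):
    return 0

def write_tag(node_id, value):
    pass

last_written = {}  # node_id -> last value written, so unchanged values are skipped

def service_motor(device):
    # 'device' is assumed to expose .cmd and .runingFB node ids, as in the question.
    run_value = 1 if read_tag(device.cmd) else 0
    if last_written.get(device.runingFB) != run_value:
        write_tag(device.runingFB, run_value)   # write only when the value changes
        last_written[device.runingFB] = run_value

def poll_forever(devices, interval=0.1):
    # One bounded pool instead of three free-running while-True threads.
    with ThreadPoolExecutor(max_workers=8) as pool:
        while True:
            list(pool.map(service_motor, devices))  # one scan pass over all devices
            time.sleep(interval)                    # pace the loop instead of spinning

Whether write-on-change is acceptable depends on how the PLC consumes the tags, so treat this only as a starting point.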

Related

Parallelize slow functions that need to be run only every X iterations so they do not slow the loop

The project
I am conducting a project where I need to both detect faces (bounding boxes and landmarks) and perform face recognition (identify a face). The detection is really fast (it takes barely a few milliseconds on my laptop), but the recognition can be really slow (about 0.4 seconds on my laptop). I am using the face_recognition Python library to do so. After a few tests, I discovered that it is the embedding of the image that is slow.
Here is some example code to try it out for yourself:
# Source : https://pypi.org/project/face-recognition/
import face_recognition
known_image = face_recognition.load_image_file("biden.jpg")
biden_encoding = face_recognition.face_encodings(known_image)[0]
image = face_recognition.load_image_file("your_file.jpg")
face_locations = face_recognition.face_locations(image)
face_landmarks_list = face_recognition.face_landmarks(image)
unknown_encoding = face_recognition.face_encodings(image)[0]
results = face_recognition.compare_faces([biden_encoding], unknown_encoding)
The problem
What I need to do is process a video (30 FPS), so 0.4 s of computation per frame is unacceptable. The idea I have is that the recognition only needs to run a few times, not on every frame, since from one frame to the next (if there are no cuts in the video) a given head will be close to its previous position. So the first time a head appears we run the recognition, which is very slow, but for the next X frames we won't have to, since we will detect that the position is close to the previous one and conclude it must be the same person having moved. Of course this approach is not perfect, but it seems to be a good compromise and I would like to try it.
The only problem is that, by doing so, the video is smooth until a head appears, then it freezes because of the recognition, and then becomes smooth again. This is where I would like to introduce multiprocessing: I would like to compute the recognition in parallel with looping through the frames of the video. If I manage that, I will only have to process a few frames in advance, so that when a face shows up its recognition has already been computed over several earlier frames and we do not see a reduced frame rate.
Simple formulation
Here is what I have (in Python pseudo-code so that it is clearer):
def slow_function(image):
    # This function takes a lot of time to compute and would normally slow down the loop
    return Recognize(image)

# Loop that we need to maintain at a given speed
person_name = "unknown"
frame_index = -1
while True:
    frame_index += 1
    frame = new_frame()  # how the frame is obtained is not important, so not detailed here
    # Every ten frames, we run a heavy function
    if frame_index % 10 == 0:
        person_name = slow_function(frame)
    # Each frame we use person_name, even though we only compute it every so often
    frame.drawText(person_name)
And I would like to do something like this:
def slow_function(image):
    # This function takes a lot of time to compute and would normally slow down the loop
    return Recognize(image)

# Loop that we need to maintain at a given speed
person_name = "unknown"
frame_index = -1
while True:
    frame_index += 1
    frame = new_frame()  # how the frame is obtained is not important, so not detailed here
    # Every ten frames, we run a heavy function
    if frame_index % 10 == 0:
        DO slow_function(frame) IN parallel WITH CALLBACK(person_name = result)
    # Each frame we use person_name, even though we only compute it every so often
    frame.drawText(person_name)
The goal is to spread the computation of a slow function over several iterations of a loop.
What I have tried
I looked up multiprocessing and Ray, but I did not find examples of what I wanted to do. Most of the time, I found people using multiprocessing to compute the result of a function for different inputs at the same time. This is not what I want. I want to have, in parallel, a loop and a process that accepts data from the loop (a frame), does some computation, and returns a value to the loop without interrupting or slowing down the loop (or at least, spreading the slowdown out rather than having one really slow iteration and nine fast ones).
I think I found pretty much how to do what I want. Here is an example:
from multiprocessing import Pool
import time

# This seems to me more precise than time.sleep()
def sleep(duration, get_now=time.perf_counter):
    now = get_now()
    end = now + duration
    while now < end:
        now = get_now()

def myfunc(x):
    time.sleep(1)
    return x

def mycallback(x):
    print('Callback for i = {}'.format(x))

if __name__ == '__main__':
    pool = Pool()
    # Approx 5s in total
    # Without parallelization, this would take about 15s
    t0 = time.time()
    titer = time.time()
    for i in range(100):
        if i % 10 == 0:
            pool.apply_async(myfunc, (i,), callback=mycallback)
        sleep(0.05)  # 50 ms
        print("- i =", i, "/ Time iteration:", 1000*(time.time()-titer), "ms")
        titer = time.time()
    print("\n\nTotal time:", (time.time()-t0), "s")

    t0 = time.time()
    for i in range(100):
        sleep(0.05)
    # elapsed * 10 == mean time per iteration in ms over 100 iterations
    print("\n\nBenchmark sleep time:", 10*(time.time()-t0), "ms")

    # Make sure outstanding tasks and callbacks finish before the program exits.
    pool.close()
    pool.join()
Of course, I will need to add flags so that I do not write a value with the callback at the same time that I read it in the loop.
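For what it's worth, here is a minimal sketch of that guarding, assuming the Pool example above. apply_async callbacks run in a result-handler thread inside the main process, so an ordinary threading.Lock is enough to protect the shared value (for a single string, rebinding the name is already atomic in CPython, so the lock mainly documents intent). The recognize/on_result/current_name names are hypothetical.

import threading
from multiprocessing import Pool

_lock = threading.Lock()
person_name = "unknown"        # shared value read by the main loop every frame

def recognize(frame_number):
    # Stand-in for the slow face_recognition work; runs in a worker process.
    return "person for frame %d" % frame_number

def on_result(result):
    # Runs in the Pool's result-handler thread of the main process, not in a worker.
    global person_name
    with _lock:
        person_name = result

def current_name():
    with _lock:
        return person_name

if __name__ == '__main__':
    pool = Pool()
    pool.apply_async(recognize, (42,), callback=on_result)
    pool.close()
    pool.join()
    print(current_name())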

Any way to make this more efficient? Tenable API report call

I have a script that does a bunch of data manipulation, but it is getting bottlenecked by this function.
The length of the Tenable generator array ips is always around 1000, give or take. The length of ips[row] is 5.
Are there any improvements that I can make here to make things more efficient? I feel like this takes far longer than it should.
def get_ten(sc):
    now = time.time()
    ips = [sc.analysis.vulns(('ip', '=', ip), tool='sumseverity', sortDirection='desc')
           for ip in [x[15] for x in csv.reader(open('full.csv', 'r'))
                      if x[15] != 'PrivateIpAddress']]
    row = 0
    while row < len(ips):
        scan_data = []
        scan_count = 0
        for scan in ips[row]:
            count = scan['count']
            scan_data.append(count)
            scan_count += int(count)
        row += 1
    print(time.time() - now)
Output: 2702.747463464737
Thanks!
I would suggest you try inverting your logic.
From looking at your code, it looks like you currently:
Read a CSV to get IPs.
Call the sc.analysis API for each IP
Process the results
My best guess is that the majority of the time is taken sending out the API calls and then waiting for the results. Instead I suggest you try:
Call the sc.analysis API once (without filtering on ip) and read the results to get all vulnerable IPs (as a set).
Read the CSV and get its IPs as another set.
Take the set intersection to find which of your IPs are vulnerable (a rough sketch follows below).
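A rough sketch of that inverted approach, purely for illustration. The unfiltered sc.analysis.vulns() call, the tool='sumip' argument, and the 'ip' field on each record are assumptions modelled on the question's code and would need checking against the Tenable SecurityCenter client actually in use.

import csv
import time

def get_ten(sc):
    now = time.time()

    # One unfiltered query instead of ~1000 per-IP queries (tool and field names assumed).
    vulnerable_ips = {vuln['ip'] for vuln in sc.analysis.vulns(tool='sumip')}

    with open('full.csv', newline='') as f:
        csv_ips = {row[15] for row in csv.reader(f) if row[15] != 'PrivateIpAddress'}

    # IPs from the CSV that actually have vulnerabilities.
    hits = csv_ips & vulnerable_ips

    print(time.time() - now)
    return hits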

How can I read a string from a file, convert it to int, store the value in memory and then access the value and print it on screen?

I need to read a temperature reading from a DS18B20 sensor using a Raspberry Pi 3 and Python.
The problem is the refresh rate of the sensor (~1 second).
I need to read from /sys/bus/w1/devices/28-041670f43bff/w1_slave and use the integer I get to display a temperature on a 7-segment display connected directly to my GPIOs (not using any hardware multiplexing, I2C, etc.).
In order to display a two-digit temperature, I need to turn the digits on and off really fast (faster than the sensor refreshes).
This is the small piece of code used to get the integer temperature:
def temperature():
    with open("/sys/bus/w1/devices/28-041670f43bff/w1_slave") as q:
        r = q.read()
    temp = r[69:71]
    t = int(temp)
    return t
But I need to call this function many times per second in order to get a good display on the 7-segment display.
This is how I thought of doing it:
# the temperature() function returns a two-digit int
while True:
    GPIO.output(31, 0)
    GPIO.output(temp[temperature()/10], 1)  # temp is a dictionary used to know which segments to light up to show numbers
    time.sleep(0.0005)
    GPIO.output(31, 1)
    GPIO.output(37, 0)
    GPIO.output(temp[temperature()%10], 1)
    time.sleep(0.0005)
    GPIO.output(37, 1)
But this code just makes one digit light up, waits ~1 s, lights up the other digit, waits ~1 s, and so on.
Any ideas on how to do this are much appreciated.
Rather than implement this functionality on your own, you should instead use the libraries out there that address this particular bit of your code inherently. In this case, I'd suggest you use W1ThermSensor. You can find the documentation at:
https://github.com/timofurrer/w1thermsensor
and you can install it using:
pip install w1thermsensor
It does support the DS18B20, and offers an exact analogue to your use case in the README.
From the docs for the package:
from w1thermsensor import W1ThermSensor
sensor = W1ThermSensor()
temperature_in_celsius = sensor.get_temperature()
temperature_in_fahrenheit = sensor.get_temperature(W1ThermSensor.DEGREES_F)
temperature_in_all_units = sensor.get_temperatures([
    W1ThermSensor.DEGREES_C,
    W1ThermSensor.DEGREES_F,
    W1ThermSensor.KELVIN
])
In many cases, particularly for popular hardware devices, you'll find that there are libraries already available for use within Python, and they will allow you to quickly move on to writing the bits of code unique to your own particular needs.
Note: According to the technical discussion in the following link, if the DS18B20 is set to 12-bit temperature resolution, the temperature conversion will take 750 ms, or 3/4 of a second. If you set the hardware to do 9-bit resolution, the conversion time in hardware is 93.75 ms. I suspect this is the root of your once-per-second issue.
https://www.maximintegrated.com/en/app-notes/index.mvp/id/4377
There is some discussion of this issue in this Question:
https://raspberrypi.stackexchange.com/questions/14278/how-to-change-ds18b20-reading-resolution
See the second Answer, regarding the configDS18B20 utility.
With the resolution set to 9-bit, you may be able to adjust the w1thermsensor RETRY_DELAY_SECONDS / RETRY_ATTEMPTS value combination in the source code and get what you need. It's unclear to me whether the retry delay has any effect on the actual polling of the device; it looks like it is there for device discovery. Though, as I said, that interval may impact polling a single device. I simply didn't read through the source code enough to see when and where it comes into play.
Happy New Year!
I'd throw the display routine into its own thread so that you don't have to think about it in your main loop. The code below should demonstrate this concept. Set "testing" to False to see if it works with your hardware.
#!/usr/bin/python
import time
import threading
import Queue
import random

# Set this to False to read the temperature from a real sensor and display it on a 7-digit display.
testing = True

def temperature_read(q):
    # Read the temperature at one second intervals.
    while True:
        if testing:
            r = '-' * 69 + '%02d' % (random.randrange(100)) + 'blahblah' * 4
        else:
            r = open('/sys/bus/w1/devices/28-041670f43bff/w1_slave', 'r').read()
        print r
        # The temperature is represented as two digits in a long string.
        # Push the digits into the queue as a tuple of integers (one per digit).
        q.put((int(r[69]), int(r[70])))
        # Wait for next reading.
        # (Will w1_slave block until the next reading? If so, this could be eliminated.)
        time.sleep(1.0)

def temperature_display(q):
    # Display the temperature.
    # Temperature is two digits, stored separately (high/low) for more efficient handling.
    temperature_h = temperature_l = 0
    while True:
        # Is there a new temperature reading waiting for us?
        if not q.empty():
            temperature = q.get()
            # If it's None, we're done.
            if temperature is None:
                break
            # Load the two digits (high and low) representing the temperature.
            (temperature_h, temperature_l) = temperature
        if testing:
            print 'displayH', temperature_h
            time.sleep(0.05)
            print 'displayL', temperature_l
            time.sleep(0.05)
        else:
            GPIO.output(31, 0)
            GPIO.output(temperature_h, 1)  # temp is a dictionary used to know which segments to light up to show numbers
            time.sleep(0.0005)
            GPIO.output(31, 1)
            GPIO.output(37, 0)
            GPIO.output(temperature_l, 1)
            time.sleep(0.0005)
            GPIO.output(37, 1)
    # Clean up here. Turn off all pins?

# Make a queue to communicate with the display thread.
temperature_queue = Queue.Queue()

# Run the display in a separate thread.
temperature_display_thread = threading.Thread(target=temperature_display, args=(temperature_queue,))
temperature_display_thread.start()

# Run the reader.
try:
    temperature_read(temperature_queue)
except:
    # An uncaught exception happened. (It could be a keyboard interrupt.)
    None

# Tell the display thread to stop.
temperature_queue.put(None)

# Wait for the thread to end.
temperature_display_thread.join()
To support another reading (transmission), I just put it in the read loop rather than adding another thread for it. I changed the queue so that you could easily move it to another thread but I suspect you'll add more inputs so this is probably a reasonable way to do it unless the read frequency of one needs to be much different. (Even then, you could do things with counters in the loop.)
#!/usr/bin/python
import time
import threading
import Queue
import random

# Set this to False to read the temperature from a real sensor and display it on a 7-digit display.
testing = True

def observe(q):
    while True:
        # Make a temperature reading.
        if testing:
            r = '-' * 69 + '%02d' % (random.randrange(100)) + 'blahblah' * 4
        else:
            r = open('/sys/bus/w1/devices/28-041670f43bff/w1_slave', 'r').read()
        print 'temperature ->', r
        # The temperature is represented as two digits in a long string.
        # Push the digits into the queue as a tuple of integers (one per digit).
        q.put(('temperature', int(r[69]), int(r[70])))
        # Make a transmission reading.
        if testing:
            r = random.randrange(1, 6)
        else:
            r = 0  # Put your transmission reading code here.
        print 'transmission ->', r
        q.put(('transmission', r))
        # Wait for next reading.
        # (Will w1_slave block until the next reading? If so, this could be eliminated.)
        time.sleep(1.0)

def display(q):
    # Display the temperature.
    # Temperature is two digits, stored separately (high/low) for more efficient handling.
    temperature_h = temperature_l = transmission = 0
    while True:
        # Is there a new temperature reading waiting for us?
        if not q.empty():
            reading = q.get()
            # If it's None, we're done.
            if reading is None:
                break
            elif reading[0] == 'temperature':
                # Load the two digits (high and low) representing the temperature.
                (x, temperature_h, temperature_l) = reading
            elif reading[0] == 'transmission':
                (x, transmission) = reading
        if testing:
            print 'displayH', temperature_h
            time.sleep(0.05)
            print 'displayL', temperature_l
            time.sleep(0.05)
            print 'transmission', transmission
            time.sleep(0.05)
        else:
            GPIO.output(31, 0)
            GPIO.output(temperature_h, 1)  # temp is a dictionary used to know which segments to light up to show numbers
            time.sleep(0.0005)
            GPIO.output(31, 1)
            GPIO.output(37, 0)
            GPIO.output(temperature_l, 1)
            time.sleep(0.0005)
            GPIO.output(37, 1)
    # Clean up here. Turn off all pins?

# Make a queue to communicate with the display thread.
readings_queue = Queue.Queue()

# Run the display in a separate thread.
display_thread = threading.Thread(target=display, args=(readings_queue,))
display_thread.start()

# Observe the inputs.
try:
    observe(readings_queue)
except:
    # An uncaught exception happened. (It could be a keyboard interrupt.)
    None

# Tell the display thread to stop.
readings_queue.put(None)

# Wait for the thread to end.
display_thread.join()
Here's a version which only reads the temperature every tenth time but reads the transmission every time. I think you'll see how to easily tweak this to meet your needs.
I would make separate threads for each reader but it would complicate the thread management quite a bit.
#!/usr/bin/python
import time
import threading
import Queue
import random

# Set this to False to read the temperature from a real sensor and display it on a 7-digit display.
testing = True

def observe(q):
    count = 0
    while True:
        # Only read the temperature every tenth time.
        if (count % 10 == 0):
            # Make a temperature reading.
            if testing:
                r = '-' * 69 + '%02d' % (random.randrange(100)) + 'blahblah' * 4
            else:
                r = open('/sys/bus/w1/devices/28-041670f43bff/w1_slave', 'r').read()
            print 'temperature ->', r
            # The temperature is represented as two digits in a long string.
            # Push the digits into the queue as a tuple of integers (one per digit).
            q.put(('temperature', int(r[69]), int(r[70])))
        # Make a transmission reading.
        if testing:
            r = random.randrange(1, 6)
        else:
            r = 0  # Put your transmission reading code here.
        print 'transmission ->', r
        q.put(('transmission', r))
        # Wait for next reading.
        if testing:
            time.sleep(0.5)
        else:
            time.sleep(0.1)
        count += 1

def display(q):
    # Display the temperature.
    # Temperature is two digits, stored separately (high/low) for more efficient handling.
    temperature_h = temperature_l = transmission = 0
    while True:
        # Is there a new temperature reading waiting for us?
        if not q.empty():
            reading = q.get()
            # If it's None, we're done.
            if reading is None:
                break
            elif reading[0] == 'temperature':
                # Load the two digits (high and low) representing the temperature.
                (x, temperature_h, temperature_l) = reading
            elif reading[0] == 'transmission':
                (x, transmission) = reading
        if testing:
            print 'displayH', temperature_h
            time.sleep(0.05)
            print 'displayL', temperature_l
            time.sleep(0.05)
            print 'transmission', transmission
            time.sleep(0.05)
        else:
            GPIO.output(31, 0)
            GPIO.output(temperature_h, 1)  # temp is a dictionary used to know which segments to light up to show numbers
            time.sleep(0.0005)
            GPIO.output(31, 1)
            GPIO.output(37, 0)
            GPIO.output(temperature_l, 1)
            time.sleep(0.0005)
            GPIO.output(37, 1)
    # Clean up here. Turn off all pins?

# Make a queue to communicate with the display thread.
readings_queue = Queue.Queue()

# Run the display in a separate thread.
display_thread = threading.Thread(target=display, args=(readings_queue,))
display_thread.start()

# Observe the inputs.
try:
    observe(readings_queue)
except:
    # An uncaught exception happened. (It could be a keyboard interrupt.)
    None

# Tell the display thread to stop.
readings_queue.put(None)

# Wait for the thread to end.
display_thread.join()

How to prevent race condition when using redis to implement flow control?

We have a server that gets cranky if too many users log in at the same time (meaning less than 7 seconds apart). Once the users are logged in there is no problem (one or two logging in at the same time is also not a problem), but when 10-20 try, the entire server goes into a death spiral, sigh.
I'm attempting to write a page that will hold onto users (displaying an animated countdown, etc.) and let them through 7 seconds apart. The algorithm is simple:
fetch the timestamp (t) when the last login happened
if t+7 is in the past start the login and store now() as the new timestamp
if t+7 is in the future, store it as the new timestamp, wait until t+7, then start the login.
A straightforward Python/Redis implementation would be:
import time, redis
SLOT_LENGTH = 7 # seconds
now = time.time()
r = redis.StrictRedis()
# lines below contain race condition..
last_start = float(r.get('FLOWCONTROL') or '0.0') # 0.0 == time-before-time
my_start = last_start + SLOT_LENGTH
r.set('FLOWCONTROL', max(my_start, now))
wait_period = max(0, my_start - now)
time.sleep(wait_period)
# .. login
The race condition here is obvious: many processes can be at the my_start = ... line simultaneously. How can I solve this using Redis?
I've tried the redis-py pipeline functionality, but of course a pipelined r.get() doesn't give me an actual value to work with...
I'll document the answer in case anyone else finds this...
r = redis.StrictRedis()
with r.pipeline() as p:
    while 1:
        try:
            p.watch('FLOWCONTROL')   # --> immediate mode
            last_slot = float(p.get('FLOWCONTROL') or '0.0')
            p.multi()                # --> back to buffered mode
            my_slot = last_slot + SLOT_LENGTH
            p.set('FLOWCONTROL', max(my_slot, now))
            p.execute()              # raises WatchError if anyone changed FLOWCONTROL
            break                    # break out of the while loop
        except redis.WatchError:
            pass                     # someone else got there before us, retry
a little more complex than the original three lines...
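For comparison only, the same reserve-a-slot logic can be pushed server-side as a Lua script (a sketch, assuming redis-py's register_script). Redis runs scripts atomically, so the WATCH/retry loop disappears; key and variable names follow the snippets above.

import time
import redis

SLOT_LENGTH = 7  # seconds

r = redis.StrictRedis()

# Atomically read the last slot, advance it, store it, and return the slot we reserved.
reserve_slot = r.register_script("""
local last = tonumber(redis.call('GET', KEYS[1]) or '0')
local slot = math.max(last + tonumber(ARGV[2]), tonumber(ARGV[1]))
redis.call('SET', KEYS[1], slot)
return tostring(slot)  -- return as a string to keep fractional seconds
""")

now = time.time()
my_slot = float(reserve_slot(keys=['FLOWCONTROL'], args=[now, SLOT_LENGTH]))
time.sleep(max(0, my_slot - now))
# .. login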

Updating database with callback in Parallel Python

I'm trying to do some text processing on around 200,000 entries in a SQLite database which I'm accessing using SQLAlchemy. I'd like to parallelize it (I'm looking at Parallel Python), but I'm not sure exactly how to do it.
I want to commit the session each time an entry is processed, so that if I need to stop the script I won't lose the work it's already done. However, when I try to pass the session.commit() command to the callback function, it does not seem to work.
from assignDB import *
from sqlalchemy.orm import sessionmaker
import pp, sys, fuzzy_substring

def matchIng(rawIng, ingreds):
    maxScore = 0
    choice = ""
    for (ingred, parentIng) in ingreds.iteritems():
        score = len(ingred)/(fuzzy_substring(ingred,rawIng)+1)
        if score > maxScore:
            maxScore = score
            choice = ingred
            refIng = parentIng
    return (refIng, choice, maxScore)

def callbackFunc(match, session, inputTuple):
    print inputTuple
    match.refIng_id = inputTuple[0]
    match.refIng_name = inputTuple[1]
    match.matchScore = inputTuple[2]
    session.commit()

# tuple of all parallel python servers to connect with
ppservers = ()
#ppservers = ("10.0.0.1",)

if len(sys.argv) > 1:
    ncpus = int(sys.argv[1])
    # Creates jobserver with ncpus workers
    job_server = pp.Server(ncpus, ppservers=ppservers)
else:
    # Creates jobserver with automatically detected number of workers
    job_server = pp.Server(ppservers=ppservers)

print "Starting pp with", job_server.get_ncpus(), "workers"

ingreds = {}
for synonym, parentIng in session.query(IngSyn.synonym, IngSyn.parentIng):
    ingreds[synonym] = parentIng

jobs = []
for match in session.query(Ingredient).filter(Ingredient.refIng_id == None):
    rawIng = match.ingredient
    jobs.append((match, job_server.submit(matchIng, (rawIng, ingreds), (fuzzy_substring,),
                                          callback=callbackFunc, callbackargs=(match, session))))
The session is imported from assignDB. I'm not getting any error, it's just not updating the database.
Thanks for your help.
UPDATE
Here is the code for fuzzy_substring
def fuzzy_substring(needle, haystack):
    """Calculates the fuzzy match of needle in haystack,
    using a modified version of the Levenshtein distance
    algorithm.
    The function is modified from the levenshtein function
    in the bktree module by Adam Hupp"""
    m, n = len(needle), len(haystack)
    # base cases
    if m == 1:
        return not needle in haystack
    if not n:
        return m
    row1 = [0] * (n+1)
    for i in range(0, m):
        row2 = [i+1]
        for j in range(0, n):
            cost = (needle[i] != haystack[j])
            row2.append(min(row1[j+1]+1,   # deletion
                            row2[j]+1,     # insertion
                            row1[j]+cost)) # substitution
        row1 = row2
    return min(row1)
which I got from here: Fuzzy Substring. In my case, "needle" is one of ~8000 possible choices, while haystack is the raw string I'm trying to match. I loop over all possible "needles" and choose the one with the best score.
Without looking at your specific code, it can fairly be said that:
Using serverless SQLite and
Seeking increased write performance through parallelism
are mutually incompatible desires. Quoth the SQLite FAQ:
… However, client/server database engines (such as PostgreSQL, MySQL, or Oracle) usually support a higher level of concurrency and allow multiple processes to be writing to the same database at the same time. This is possible in a client/server database because there is always a single well-controlled server process available to coordinate access. If your application has a need for a lot of concurrency, then you should consider using a client/server database. But experience suggests that most applications need much less concurrency than their designers imagine. …
And that's even without whatever gating and ordering SQLAlchemy uses. It is also not at all clear when, if at all, the Parallel Python jobs are completing.
My suggestion: get it working correctly first and then look for optimizations, especially since the pp secret sauce might not be buying you much even if it were working perfectly.
added in response to comment:
If fuzzy_substring matching is the bottleneck, it appears completely decoupled from the database access, and you should keep that in mind. Without seeing what fuzzy_substring is doing, a good starting assumption is that you can make algorithmic improvements which may make the single-threaded approach computationally feasible. Approximate string matching is a very well studied problem, and choosing the right algorithm is often far better than "throw more processors at it" (a small sketch of that idea follows below).
"Far better" in this sense means that you end up with cleaner code, don't waste the overhead of segmenting and reassembling the problem, and have a more extensible and debuggable program at the end.
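Purely as an illustration of "choosing the right algorithm": a C-backed matcher such as the third-party rapidfuzz package can often replace a pure-Python Levenshtein loop. Whether its partial-ratio score is an acceptable substitute for the length-weighted fuzzy_substring score above is an assumption you would have to verify.

# pip install rapidfuzz
from rapidfuzz import fuzz, process

def match_ing(raw_ing, ingreds):
    # ingreds maps synonym -> parentIng, as in the question.
    best, score, _ = process.extractOne(raw_ing, ingreds.keys(), scorer=fuzz.partial_ratio)
    return ingreds[best], best, score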
#msw has provided an excellent overview of the problem, giving a general way to think about parallelization.
Notwithstanding these comments, here is what I got to work in the end:
from assignDB import *
from sqlalchemy.orm import sessionmaker
import pp, sys, fuzzy_substring

def matchIng(rawIng, ingreds):
    maxScore = 0
    choice = ""
    for (ingred, parentIng) in ingreds.iteritems():
        score = len(ingred)/(fuzzy_substring(ingred,rawIng)+1)
        if score > maxScore:
            maxScore = score
            choice = ingred
            refIng = parentIng
    return (refIng, choice, maxScore)

# tuple of all parallel python servers to connect with
ppservers = ()
#ppservers = ("10.0.0.1",)

if len(sys.argv) > 1:
    ncpus = int(sys.argv[1])
    # Creates jobserver with ncpus workers
    job_server = pp.Server(ncpus, ppservers=ppservers)
else:
    # Creates jobserver with automatically detected number of workers
    job_server = pp.Server(ppservers=ppservers)

print "Starting pp with", job_server.get_ncpus(), "workers"

ingreds = {}
for synonym, parentIng in session.query(IngSyn.synonym, IngSyn.parentIng):
    ingreds[synonym] = parentIng

rawIngredients = session.query(Ingredient).filter(Ingredient.refIng_id == None)
numIngredients = session.query(Ingredient).filter(Ingredient.refIng_id == None).count()

stepSize = 30
for i in range(0, numIngredients, stepSize):
    print i
    print numIngredients
    if i + stepSize > numIngredients:
        stop = numIngredients
    else:
        stop = i + stepSize
    jobs = []
    for match in rawIngredients[i:stop]:
        rawIng = match.ingredient
        jobs.append((match, job_server.submit(matchIng, (rawIng, ingreds), (fuzzy_substring,))))
    job_server.wait()
    for match, job in jobs:
        inputTuple = job()
        print match.ingredient
        print inputTuple
        match.refIng_id = inputTuple[0]
        match.refIng_name = inputTuple[1]
        match.matchScore = inputTuple[2]
    session.commit()
Essentially, I've chopped the problem into chunks. After matching 30 substrings in parallel, the results are returned and committed to the database. I chose 30 somewhat arbitrarily, so there might be gains to be had in optimizing that number. It seems to have sped up a fair bit, as I'm using all 3(!) of the cores in my processor now.
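For readers without Parallel Python (the pp package is quite dated now), roughly the same chunk-then-commit pattern can be sketched with the standard library's multiprocessing.Pool. The session, Ingredient model, matchIng and ingreds names are the ones from the code above, so treat this as an outline under those assumptions rather than a drop-in replacement.

from multiprocessing import Pool
from functools import partial

STEP = 30

def process_in_chunks(session, ingreds):
    matches = session.query(Ingredient).filter(Ingredient.refIng_id == None).all()
    worker = partial(matchIng, ingreds=ingreds)
    with Pool() as pool:
        for start in range(0, len(matches), STEP):
            chunk = matches[start:start + STEP]
            # Run the CPU-bound matching in worker processes...
            results = pool.map(worker, [m.ingredient for m in chunk])
            # ...but do all ORM updates and the commit in the parent process.
            for match, (ref_id, ref_name, score) in zip(chunk, results):
                match.refIng_id = ref_id
                match.refIng_name = ref_name
                match.matchScore = score
            session.commit()

Keeping the commits in the parent process sidesteps the SQLite concurrency limits mentioned in the earlier answer, since only one process ever writes to the database.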
