I'm currently writing code in Python 2.7 that involves creating a class with two class methods and several regular methods. I need this specific combination of methods because of the larger context of the code I'm writing; that context isn't relevant to this question, so I won't go into depth.
Within my __init__ method, I create a Pool (a multiprocessing object). In its creation, I pass a setup function. This setup function is a @classmethod, and in it I define a few variables using the cls.variablename syntax. As I mentioned, I call this setup function within my __init__ method (inside the Pool creation), so these variables should be getting created, based on what I understand.
Later in my code, I call a few other functions, which eventually leads to me calling another @classmethod within the same class I was talking about earlier (the same class as the first @classmethod). Within this @classmethod, I try to access the cls. variables I created in the first @classmethod. However, Python tells me that my object doesn't have that attribute (I'm using general names here; obviously my actual names are specific to my code).
ANYWAYS... I realize that's probably pretty confusing. Here's a (very) generalized code example to illustrate the same idea:
class General(object):
    def __init__(self, A):
        # this is correct syntax based on the resources I'm using,
        # so the format of the argument isn't the issue, in case anyone
        # initially thinks that's the issue
        self.pool = Pool(processes=4, initializer=self._setup, initargs=(A, ))

    @classmethod
    def _setup(cls, A):
        cls.A = A

    # leaving out other functions here that are NOT class methods, just regular methods

    @classmethod
    def get_results(cls):
        print cls.A
The error I'm getting when I get to the equivalent of the print cls.A line is this:
AttributeError: type object 'General' has no attribute 'A'
edit to show usage of this code:
The way I'm calling this in my code is as such:
G = General(5)
G.get_results()
So, I'm creating an instance of the object (in which I create the Pool, which calls the setup function), and then calling get_results.
What am I doing wrong?
The reason General.A does not get defined in the main process is that multiprocessing.Pool only runs General._setup in the subprocesses, so it is never called in the main process (where you call Pool).
You end up with 4 worker processes in each of which General.A is defined, but not in the main process. That is not how you initialize a Pool (see this answer to the question How to use initializer to set up my multiprocess pool?)
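To see this, here is a minimal, self-contained sketch (all names invented for the example) showing that a Pool initializer runs once per worker and never in the parent:
from multiprocessing import Pool, current_process

A = None  # set by the initializer, but only inside the workers

def setup(value):
    global A
    A = value  # runs once in each worker process

def report(_):
    return (current_process().name, A)

if __name__ == '__main__':
    pool = Pool(processes=2, initializer=setup, initargs=(5,))
    print(pool.map(report, range(4)))  # every worker reports A == 5
    print(A)                           # still None in the parent process
    pool.close()
    pool.join()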
What you want is an Object Pool, which is not natively implemented in Python. There's a Python Implementation of the Object Pool Design Pattern question here on Stack Overflow, but you can find plenty more by searching online.
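For reference, here is a minimal object-pool sketch (an illustration only; the factory and sizing are invented for the example):
try:
    import Queue as queue  # Python 2
except ImportError:
    import queue           # Python 3

class ObjectPool(object):
    """Hands out pre-built objects and takes them back when released."""
    def __init__(self, factory, size):
        self._items = queue.Queue()
        for _ in range(size):
            self._items.put(factory())  # pre-create the pooled objects

    def acquire(self):
        return self._items.get()        # blocks until an object is free

    def release(self, item):
        self._items.put(item)

# usage: pool = ObjectPool(dict, 4); obj = pool.acquire(); ...; pool.release(obj)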
I have been developing a full-stack application that checks for files and uploads them to the cloud. However, I have come across an interesting problem that I was not able to solve.
I have a problem with instantiating a class, as you will see below:
class UploadFastq:
    def __int__(self,
                some_list, some_str, some_obj, **kwargs):
        self.some_list = some_list
        self.some_obj = some_obj
        self.some_str = some_str

    def process(self):
        self.some_methods_calling_processes()
    ...
As you can imagine, I have trimmed the original code for privacy reasons (company policy, sorry). This class handles some backend-related processes, and its arguments are all backend-related variables. Also, this class lives in a different .py script, which in turn imports other backend-related functions.
Now, the problem is, when I import it into another script and try to instantiate the class, something funny happens...
from lib.some_back_related_script import UploadFastq
uploads = UploadFastq(some_list=the_list,some_str=the_str,some_obj=the_obj)
uploads.process
OUTPUT:
TypeError: UploadFastq() takes no arguments
I have looked for indentation problems but could not find any. (I am using PyCharm as my IDE, and reformatting the file did not solve it either.)
I have also tried this in another script (the GUI script) and could partially work around it like this:
from lib.some_back_related_script import UploadFastq
uploader = UploadFastq()
uploader.__int__( ##TODO how is this possible???)
some_list=the_list,some_str=the_str,some_obj=the_obj
)
However, in the script where the class is supposed to be used, this "__init__" approach did not solve the case, and produced this error:
TypeError: UploadFastq.__init__() takes exactly one argument (the instance to initialize)
At this point I am clueless about what is going on and how to solve it. I am experiencing something like this for the first time, and I could not find this kind of problem on the internet. So, I would be much obliged if you could explain how to approach the problem.
P.S.: I have worked as a bioinformatician/Python developer for quite some time and have found many, many solutions on this platform. But this is actually my first question on Stack Overflow!!!
Cheers!
You misspelled the constructor name __init__ as __int__:
def __int__(self, some_list, some_str, some_obj, **kwargs):
Thus the default constructor (which takes only the "instance to initialize" as an argument) was called instead, and the interpreter complained about the extra arguments you passed:
TypeError: UploadFastq.__init__() takes exactly one argument (the instance to initialize)
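Renaming the method fixes it. A minimal sketch of the corrected class (keeping the argument names from the question):
class UploadFastq:
    def __init__(self,  # __init__, not __int__
                 some_list, some_str, some_obj, **kwargs):
        self.some_list = some_list
        self.some_obj = some_obj
        self.some_str = some_str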
I have an algorithm that I am trying to parallelize because of very long run times in serial. However, the function that needs to be parallelized is inside a class. multiprocessing.Pool seems to be the best and fastest way to do this, but there is a problem: its target function cannot be a bound method of an object instance. Meaning this: you declare a Pool in the following way:
import multiprocessing as mp
cpus = mp.cpu_count()
poolCount = cpus*2
pool = mp.Pool(processes = poolCount, maxtasksperchild = 2)
And then actually use it like so:
pool.map(self.TargetFunction, args)
But this throws an error, because object instances cannot be pickled, which is what Pool does to pass information to all of its child processes. But I have to use self.TargetFunction.
So I had an idea: I would create a new Python file named parallel and simply write a couple of functions without putting them in a class, and call those functions from within my original class (whose function I want to parallelize).
So I tried this:
import multiprocessing as mp

def MatrixHelper(args):
    WM = args[0][0]
    print(WM.CreateMatrixMp(*args))
    return WM.CreateMatrixMp(*args)

def Start(sigmaI, sigmaX, numPixels, WM):
    cpus = mp.cpu_count()
    poolCount = cpus * 2
    args = [(WM, sigmaI, sigmaX, i) for i in range(numPixels)]
    print('Number of cpu\'s to process WM:%d'%cpus)
    pool = mp.Pool(processes = poolCount, maxtasksperchild = 2)
    tempData = pool.map(MatrixHelper, args)
    return tempData
These functions are not part of a class, and using MatrixHelper in Pool's map function works fine. But I realized while doing this that it was no way out. The function in need of parallelization (CreateMatrixMp) expects an object to be passed to it (it is declared as def CreateMatrixMp(self, sigmaI, sigmaX, i)).
Since it is not being called from within its class, it doesn't get a self passed to it. To solve this, I passed the Start function the calling object itself. That is, I call parallel.Start(sigmaI, sigmaX, self.numPixels, self). The object self then becomes WM, so that I can finally call the desired function as WM.CreateMatrixMp().
I'm sure that is a very sloppy way of coding, but I just wanted to see if it would work. But no: pickling error again; the map function cannot handle object instances at all.
So my question is: why is it designed this way? It seems useless, completely dysfunctional in any program that uses classes at all.
I tried using Process rather than Pool, but this requires the array that I am ultimately writing to to be shared, which requires processes to wait for each other. If I don't want it to be shared, then I have each process write its own smaller array and do one big write at the end. But both of these result in slower run times than when I was doing this serially! Python's built-in multiprocessing seems absolutely useless!
Can someone please give me some guidance as to how to actually save time with multiprocessing, in the context of my target function being inside a class? I have read in posts here to use pathos.multiprocessing instead, but I am on Windows and am working on this project with multiple people who all have different setups. Having everyone try to install it would be inconvenient.
I was having a similar issue with trying to use multiprocessing within a class. I was able to solve it with a relatively easy workaround I found online. Basically, you use a function outside of your class that unwraps/unpacks the method inside your class that you're trying to parallelize. Here are the two websites I found that explain how to do it.
Website 1 (joblib example)
Website 2 (multiprocessing module example)
For both, the idea is to do something like this:
from multiprocessing import Pool
import time

def unwrap_self_f(arg, **kwarg):
    return C.f(*arg, **kwarg)

class C:
    def f(self, name):
        print 'hello %s,'%name
        time.sleep(5)
        print 'nice to meet you.'

    def run(self):
        pool = Pool(processes=2)
        names = ('frank', 'justin', 'osi', 'thomas')
        pool.map(unwrap_self_f, zip([self]*len(names), names))

if __name__ == '__main__':
    c = C()
    c.run()
The essence of how multiprocessing works is that it spawns sub-processes that receive parameters to run a certain function. In order to pass these arguments, it needs them to be, well, passable: not exclusive to the main process, as sockets, file descriptors and other low-level, OS-related things are.
This translates into "needs to be picklable or serializable".
On the same topic, parallel processing works best when you (can) have self-contained divisions of a problem. I can tell you want to share some sort of input/stream/database source, but this will probably create a bottleneck that you'll have to tackle at some point (at least from the "Python script" side, rather than the "OS/database" side). Fortunately, you have to tackle it early now.
You can re-code your classes to spawn/create these non-picklable resources when needed rather than at start:
def targetFunction(self, range_params):
    if not self.ready():
        self._init_source()
    # rest of the code
You kind of tackled the problem the other way around (initializing an object based on params). And yes, parallel processing comes with a cost.
You can see the multiprocessing programming guidelines for an even more thorough insight into this matter.
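A self-contained version of that lazy-initialization pattern might look like this (the class and the sqlite3 stand-in resource are invented for the example):
import sqlite3  # stands in for any non-picklable resource

class Worker(object):
    def __init__(self, db_path):
        self.db_path = db_path  # a plain string pickles fine
        self._conn = None       # the live connection would not

    def ready(self):
        return self._conn is not None

    def _init_source(self):
        # created lazily inside each worker process
        self._conn = sqlite3.connect(self.db_path)

    def targetFunction(self, range_params):
        if not self.ready():
            self._init_source()
        # ...rest of the code uses self._conn...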
This is an old post, but it is still one of the top results when you search for the topic. Some good info for this question can be found in this Stack Overflow question: python subclassing multiprocessing.Process
I tried some workarounds to call pool.starmap from inside a class to another function in the class. Making it a staticmethod or having an outside function call it didn't work and gave the same error. A class instance just can't be pickled, so we need to create the instance after we start the multiprocessing.
What I ended up doing that worked for me was to separate my class into two classes. Basically, the function you are calling the multiprocessing on needs to be called right after you instantiate a new object for the class it belongs to.
Something like this:
from multiprocessing import Pool

class B:
    ...
    def process_feature(self, idx, feature):
        # do stuff in the new process
        pass
    ...

def multiprocess_feature(process_args):
    b_instance = B()
    return b_instance.process_feature(*process_args)

class A:
    ...
    def process_stuff(self):
        ...
        with Pool(processes=num_processes, maxtasksperchild=10) as pool:
            results = pool.starmap(
                multiprocess_feature,
                [
                    (idx, feature)
                    for idx, feature in enumerate(features)
                ],
                chunksize=100,
            )
        ...
    ...
...
I'm just starting to learn Python and I have the following problem.
Using a package with a "bind" method, the following code works:
def callback(data):
    print data

channel.bind(callback)
but when I try to wrap this inside a class:
class myclass:
    def callback(data):
        print data
    def register_callback:
        channel.bind(self.callback)
the callback method is never called. I tried both "self.callback" and just "callback". Any ideas?
It is not clear to me how your code works, as (1) you did not post the implementation of channel.bind, and (2) your second example is incorrect in the definition of register_callback (it uses a self argument that is not in the method's parameter list, and it lacks parentheses).
Nevertheless, remember that methods usually require a "self" parameter, which is implicitly passed every time you run self.function(), as this is converted internally to a function call with self as its first parameter: function(self, ...). Since your callback has just one argument, data, this is probably the problem.
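To see the implicit self in action, here is a tiny illustration (class name invented for the example):
class Greeter(object):
    def greet(self, name):
        print('hello %s' % name)

g = Greeter()
g.greet('world')           # implicit: self is g
Greeter.greet(g, 'world')  # the equivalent explicit call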
You cannot declare a bind method that is able to accept either a function or a class method with different signatures (the same problem happens in every OOP language I know: C++, Pascal...).
There are many ways to do this but, again, without a self-contained example that can be run, it is difficult to give suggestions.
You need to pass the self object as well:
def register_callback(self):
    channel.bind(self.callback)
What you're doing is entirely possible, but I'm not sure exactly what your issue is, because your sample code as posted is not even syntactically valid. (The second method has no argument list whatsoever.)
Regardless, you might find the following sample code helpful:
def send_data(callback):
    callback('my_data')

def callback(data):
    print 'Free function callback called with data:', data

# The following prints "Free function callback called with data: my_data"
send_data(callback)

class ClassWithCallback(object):
    def callback(self, data):
        print 'Object method callback called with data:', data
    def apply_callback(self):
        send_data(self.callback)

# The following prints "Object method callback called with data: my_data"
ClassWithCallback().apply_callback()
# Indeed, the following does the same
send_data(ClassWithCallback().callback)
In Python it is possible to use free functions (callback in the example above) or bound methods (self.callback in the example above) in more or less the same situations, at least for simple tasks like the one you've outlined.
I have created the following constructor:
class Analysis:
    def __init__(self, file_list, tot_col, tot_rows):
        self.file_list = file_list
        self.tot_col = tot_col
        self.tot_rows = tot_rows
I then have the method full_analysis() call calc_total_rows() from the same file:
def full_analysis(self):
    """Currently runs all the analysis methods"""
    print('Analysing file...\n' +
          '----------------------------\n')
    calc_total_rows()
From another file I am calling full_analysis(); however, errors occur saying that calc_total_rows() is not defined, even though the method is just below it.
I'm inexperienced with Python, and I tried rearranging the code and adding 'self' in various places, to no avail.
The other file does meet the requirements of the constructor, and if I remove the calc_total_rows() call, the print line runs. I do not wish to call each method individually, though; I would like to call a single method which runs them all.
If calc_total_rows is an instance method as your question implies, then you need to call self.calc_total_rows() from within full_analysis. Unlike some other languages, Python does not have implicit instance references within method scope; you have to explicitly retrieve the member method from self.
I wish I had found this sooner.
In order to solve this, I had to use self in front of the method.
In my example:
def full_analysis(self):
    """Currently runs all the analysis methods"""
    print('Analysing file...\n' +
          '----------------------------\n')
    self.calc_total_rows()
This works.
I have 8 CPU cores and 200 tasks to do. Each task is isolated; there is no need to wait for or share results. I'm looking for a way to run 8 tasks/processes at a time (maximum), and when one of them finishes, the next remaining task should start automatically.
How do I know when a child process is done, so I can start a new one? First I tried Process (multiprocessing), and it's hard to figure out. Then I tried to use Pool and ran into the pickling problem, because I need dynamic instantiation.
Edit: adding my Pool code
class Collectorparallel():
    def fire(self, obj):
        collectorController = Collectorcontroller()
        collectorController.crawlTask(obj)

    def start(self):
        log_to_stderr(logging.DEBUG)
        pluginObjectList = []
        for pluginName in self.settingModel.getAllCollectorName():
            name = pluginName.capitalize()
            # Get plugin class and instantiate object
            module = __import__('plugins.'+pluginName, fromlist=[name])
            pluginClass = getattr(module, name)
            pluginObject = pluginClass()
            pluginObjectList.append(pluginObject)
        pool = Pool(8)
        jobs = pool.map(self.fire, pluginObjectList)
        pool.close()
        print pluginObjectList
pluginObjectList got something like
[<plugins.name1.Name1 instance at 0x1f54290>, <plugins.name2.Name2 instance at 0x1f54f38>]
and the Pool version fails with
PicklingError: Can't pickle <type 'instancemethod'>: attribute lookup __builtin__.instancemethod failed
but the Process version works fine.
Warning: this is somewhat subjective to deployment and situation, but my current setup is as follows.
I have a worker program, and I fire up 6 copies (I have 6 cores).
Each worker does the following:
Connect to a Redis instance
Try to pop some work off a specific list
Push back logging information
Either idle or terminate on a lack of work in the 'queue'
Then each program is essentially standalone while still doing the work you require, with a separate queuing system. As you have no go-between on your processes, this might be a solution to your problem; a minimal sketch of one such worker follows.
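This is only an illustration, assuming the redis-py client; the list names and idle timeout are invented:
import redis  # assumes the redis-py package is installed

def worker():
    r = redis.Redis()  # connect to a local Redis instance
    while True:
        item = r.blpop('work_queue', timeout=5)  # try to pop work off a list
        if item is None:
            break  # nothing arrived during the idle window: terminate
        _key, payload = item
        # ... do the actual work on payload here ...
        r.rpush('log_queue', b'done: ' + payload)  # push back logging info

if __name__ == '__main__':
    worker()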
I'm not an expert in multiprocessing in Python, but I tried a few things with the help of http://www.tutorialspoint.com/python/python_multithreading.htm and http://www.devshed.com/c/a/Python/Basic-Threading-in-Python/1/ .
You can, for example, use the isAlive method, which answers your question.
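For instance, here is a minimal sketch that keeps at most 8 multiprocessing.Process workers alive until 200 tasks are done (the task body is a placeholder; note that multiprocessing spells the method is_alive()):
import multiprocessing as mp
import time

def task(n):
    time.sleep(1)  # placeholder for the real isolated work

if __name__ == '__main__':
    pending = list(range(200))
    running = []
    while pending or running:
        running = [p for p in running if p.is_alive()]  # drop finished children
        while pending and len(running) < 8:             # top up to 8 processes
            p = mp.Process(target=task, args=(pending.pop(),))
            p.start()
            running.append(p)
        time.sleep(0.1)  # avoid busy-waiting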
The solution to your problem is trivial. First of all, note that methods cannot be pickled. In fact only the types listed in pickle's documentation can be pickled:
None, True, and False
integers, long integers, floating point numbers, complex numbers
normal and Unicode strings
tuples, lists, sets, and dictionaries containing only picklable objects
functions defined at the top level of a module
built-in functions defined at the top level of a module
classes that are defined at the top level of a module
instances of such classes whose __dict__ or the result of calling __getstate__() is picklable (see section The pickle protocol for details).
[...]
Note that functions (built-in and user-defined) are pickled by “fully qualified” name reference, not by value. This means that only the function name is pickled, along with the name of the module the function is defined in. Neither the function’s code, nor any of its function attributes are pickled. Thus the defining module must be importable in the unpickling environment, and the module must contain the named object, otherwise an exception will be raised. [4]
Similarly, classes are pickled by named reference, so the same restrictions in the unpickling environment apply. Note that none of the class’s code or data is pickled [...]
Clearly a method isn't a function defined at the top level of a module, hence it cannot be pickled. (Read that part of the documentation carefully to avoid future problems with pickle!) But it is absolutely trivial to replace the method with a global function and pass self as an additional parameter:
import itertools as it

def global_fire(argument):
    self, obj = argument
    self.fire(obj)

class Collectorparallel():
    def fire(self, obj):
        collectorController = Collectorcontroller()
        collectorController.crawlTask(obj)

    def start(self):
        log_to_stderr(logging.DEBUG)
        pluginObjectList = []
        for pluginName in self.settingModel.getAllCollectorName():
            name = pluginName.capitalize()
            # Get plugin class and instantiate object
            module = __import__('plugins.'+pluginName, fromlist=[name])
            pluginClass = getattr(module, name)
            pluginObject = pluginClass()
            pluginObjectList.append(pluginObject)
        pool = Pool(8)
        jobs = pool.map(global_fire, zip(it.repeat(self), pluginObjectList))
        pool.close()
        print pluginObjectList
Note that, since Pool.map calls the given function with only one argument, we have to "pack together" both self and the actual argument. To do this I have zipped it.repeat(self) with the original iterable.
If you do not care about the order in which the calls are made, then using pool.imap_unordered might provide better performance. However, it returns an iterable and not a list, so if you want the list of results you'll have to do jobs = list(pool.imap_unordered(...)).
I believe that this code will remove all pickling problems.
class Collectorparallel():
    def __call__(self, pluginName):
        name = pluginName.capitalize()
        # Get plugin class and instantiate object
        module = __import__('plugins.'+pluginName, fromlist=[name])
        pluginClass = getattr(module, name)
        pluginObject = pluginClass()
        collectorController = Collectorcontroller()
        collectorController.crawlTask(pluginObject)

    def start(self):
        log_to_stderr(logging.DEBUG)
        pool = Pool(8)
        jobs = pool.map(self, self.settingModel.getAllCollectorName())
        pool.close()
What has happened here is that Collectorparallel has been turned into a callable. The list of plugin names is used as the iterable for the pool, the actual determination of the plugins and their instantiation is done in each of the worker processes, and the class instance object is used as the callable for each worker process.