Numexpr Error: "a = global_dict[name]" - python

I'm trying to use Numexpr to make a fast Vector Norm function to compare with Numpy's. I try the following:
import numexpr as ne
import numpy as np
def L2_Norm(vector_in):
    vector1 = ne.evaluate("abs(vector_in)")
    vector2 = ne.evaluate("vector1**2")
    vector3 = ne.evaluate("sum(vector2)")
    vector_out = ne.evaluate("sqrt(vector3)")
    return vector_out

ve = np.arange(10)
L2_Norm(ve)
and I get this:
File "C:\Folder1\Folder2\src\test.py", line 11, in L2_Norm
vector3 = ne.evaluate("sum(vector2)")<br>
File "C:\Python27\lib\site-packages\numexpr\necompiler.py", line 701, in evaluate
a = global_dict[name]<br>
KeyError: 'a'
I basically followed the same steps as in their User Guide (which seems to be the only reference around). The only clue I have is this:
Numexpr's principal routine is this:
evaluate(ex, local_dict=None, global_dict=None, **kwargs)
where ex is a string forming an expression, like "2*a+3*b". The values
for a and b will by default be taken from the calling function's frame
(through the use of sys._getframe()). Alternatively, they can be
specified using the local_dict or global_dict arguments, or passed as
keyword arguments
... which I don't really understand - I assume the author kept it simple because the package is simple. What have I overlooked?

It turns out the local_dict=None, global_dict=None parameters aren't picked up for you after all. You need to pass them explicitly in your numexpr.evaluate calls.
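For example, here is a minimal sketch of the same function with the variables passed explicitly via local_dict (untested against the original setup; the dictionary keys simply name the variables each expression uses):

import numexpr as ne
import numpy as np

def L2_Norm(vector_in):
    # pass each variable explicitly instead of relying on frame introspection
    vector1 = ne.evaluate("abs(vector_in)", local_dict={"vector_in": vector_in})
    vector2 = ne.evaluate("vector1**2", local_dict={"vector1": vector1})
    vector3 = ne.evaluate("sum(vector2)", local_dict={"vector2": vector2})
    return ne.evaluate("sqrt(vector3)", local_dict={"vector3": vector3})

ve = np.arange(10)
print(L2_Norm(ve))  # should agree with np.linalg.norm(ve)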

Insert newline after equals sign in self documenting f-string in python3.8

With python3.8, a new feature is self documenting format strings. Where one would normally do this:
>>> x = 10.583005244
>>> print(f"x={x}")
x=10.583005244
>>>
One can now do this, with less repetition:
>>> x = 10.583005244
>>> print(f"{x=}")
x=10.583005244
>>>
This works very well for one line string representations. But consider the following scenario:
>>> import numpy as np
>>> some_fairly_long_named_arr = np.random.rand(4,2)
>>> print(f"{some_fairly_long_named_arr=}")
some_fairly_long_named_arr=array([[0.05281443, 0.06559171],
       [0.13017109, 0.69505908],
       [0.60807431, 0.58159127],
       [0.92113252, 0.4950851 ]])
>>>
Here, the first line does not get aligned, which is (arguably) not desirable. I would rather prefer the output of the following:
>>> import numpy as np
>>> some_fairly_long_named_arr = np.random.rand(4,2)
>>> print(f"some_fairly_long_named_arr=\n{some_fairly_long_named_arr!r}")
some_fairly_long_named_arr=
array([[0.06278696, 0.04521056],
       [0.33805303, 0.17155518],
       [0.9228059 , 0.58935207],
       [0.80180669, 0.54939958]])
>>>
Here, the first line of the output is aligned as well, but it defeats the purpose of not having to repeat the variable name in the print statement.
The example is a numpy array, but it could have been a pandas dataframe etc. as well.
Hence, my question is: Can a newline character be inserted after the = sign in self documenting strings?
I tried to add it like this, but it does not work:
>>> print(f"{some_fairly_long_named_arr=\n}")
SyntaxError: f-string expression part cannot include a backslash
I read the docs on format-specification-mini-language, but most of the formatting there only works for simple data types like integers, and I was not able to achieve what I wanted using those that work.
Sorry for the long write-up.
Wouldn't recommend this at all, but for possibility's sake:
import numpy as np

_old_array2string = np.core.arrayprint._array2string

def _array2_nice_string(*args, **kwargs):
    non_nice_string = _old_array2string(*args, **kwargs)
    dimension_strings = non_nice_string.split("\n")
    if len(dimension_strings) > 1:
        dimension_string = dimension_strings[1]
        dimension_indent = len(dimension_string) - len(dimension_string.lstrip())
        return "\n" + " " * dimension_indent + non_nice_string
    return non_nice_string

np.core.arrayprint._array2string = _array2_nice_string
Outputs for:
some_fairly_long_named_arr = np.random.rand(2, 2)
print(f"{some_fairly_long_named_arr=}")
some_fairly_long_named_arr=array(
       [[0.95900608, 0.79367873],
       [0.58616975, 0.17757661]])
and
some_fairly_long_named_arr = np.random.rand(1, 2)
print(f"{some_fairly_long_named_arr=}")
some_fairly_long_named_arr=array([[0.62492772, 0.80453153]]).
I made it so that if the first dimension is 1, it is kept on the same line.
There is a non-internal method np.array2string that I tried to re-assign, but I never got that working. If someone could find a way to re-assign that public function instead of this internally used one, I'd imagine that'd make this solution a lot cleaner.
I figured out a way to accomplish what I wanted, after reading through the CPython source:
import numpy as np
some_fairly_long_named_arr = np.random.rand(4, 2)
print(f"""{some_fairly_long_named_arr =
}""")
Which produces:
some_fairly_long_named_arr =
array([[0.23560777, 0.96297907],
       [0.18882751, 0.40712246],
       [0.61351814, 0.1981144 ],
       [0.27115495, 0.72303859]])
I would rather prefer a solution that worked in a single line, but this seems to be the only way for now. Perhaps another way will be implemented in a later python version.
However, note that the indentation on the continuation line has to be removed for the above-mentioned method, like so:
# ...some code with indentation...
print(f"""{some_fairly_long_named_arr =
}""")
# ...more code with indentation...
Otherwise, the alignment of the first line is broken again.
I tried using inspect.cleandoc and textwrap.dedent to alleviate this, but could not manage to fix the indentation issue. But perhaps this is the subject of another question.
Edit: After reading this article, I found a single line solution:
f_str_nl = lambda object: f"{chr(10) + str(object)}" # add \n directly
# f_str_nl = lambda object: f"{os.linesep + str(object)}" # add \r\n on windows
print(f"{f_str_nl(some_fairly_long_named_arr) = !s}")
which outputs:
f_str_nl(some_fairly_long_named_arr) =
[[0.26616956 0.59973262]
 [0.86601261 0.10119292]
 [0.94125617 0.9318651 ]
 [0.10401072 0.66893025]]
The only caveat is that the name of the object gets prepended by the name of the custom lambda function, f_str_nl.
I also found that a similar question was already asked here.

Difference between a numpy.array and numpy.array[:]

Me again... :)
I tried finding an answer to this question but again I was not fortunate enough. So here it is.
What is the difference between a numpy array itself (let's say "iris") and the whole group of data in this array (accessed with iris[:], for instance)?
I'm asking this because of the error that I get when I run the first example (below), while the second example works fine.
Here is the code:
At this first part I load the library and import the dataset from the internet.
import statsmodels.api as sm

iris = sm.datasets.get_rdataset(dataname='iris',
                                package='datasets')['data']
If I run this code I get an error:
iris.columns.values = [iris.columns.values[x].lower() for x in range( len( iris.columns.values ) ) ]
print(iris.columns.values)
Now if I run this code it works fine:
iris.columns.values[:] = [iris.columns.values[x].lower() for x in range( len( iris.columns.values ) ) ]
print(iris.columns.values)
Best regards,
The difference is that when you do iris.columns.values = ... you try to replace the reference held by the values property of iris.columns, which is protected (see the pandas implementation of pandas.core.frame.DataFrame), whereas when you do iris.columns.values[:] = ... you access the data of the np.ndarray and replace it with new values. In the second assignment statement you do not overwrite the reference to the numpy object: the [:] is a slice object that is passed to the __setitem__ method of the numpy array.
EDIT:
The exact implementation (there are multiple; here is the pd.Series implementation) of such a property is:
@property
def values(self):
    """ return the array """
    return self.block.values
thus you try to overwrite a property that is constructed with the @property decorator followed by a getter function, and it cannot be replaced since it only provides a getter and not a setter. See Python's docs on builtins - property()
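For illustration, here is a minimal sketch of the same behaviour with a hypothetical Wrapper class (not pandas code): the attribute cannot be rebound, but the array it returns can be written into.

import numpy as np

class Wrapper:
    def __init__(self):
        self._values = np.array(['A', 'B', 'C'])

    @property
    def values(self):            # getter only, no setter defined
        return self._values

w = Wrapper()
# w.values = np.array(['a', 'b', 'c'])  # AttributeError: can't set attribute
w.values[:] = ['a', 'b', 'c']           # fine: writes into the existing array
print(w.values)                          # ['a' 'b' 'c']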
iris.columns.values = val
calls
type(iris.columns).__setattr__(iris.columns, 'values', val)
This is running pandas' code, because type(iris.columns) is pd.Index
iris.columns.values[:] = val
calls
type(iris.columns.values).__setitem__(iris.columns.values, slice(None), val)
This is running numpy's code, because type(iris.columns.values) is np.ndarray
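A quick way to see the slice being handed to __setitem__ is to subclass np.ndarray and log the key; the Spy class below is purely illustrative.

import numpy as np

class Spy(np.ndarray):
    def __setitem__(self, key, value):
        print("setitem called with key:", key)
        super().__setitem__(key, value)

a = np.array(['A', 'B', 'C']).view(Spy)
a[:] = ['a', 'b', 'c']   # prints: setitem called with key: slice(None, None, None)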

How can I call an instance on a former instance from the same class?

I apologize in advance if there is an obvious solution to this question or it is a duplicate.
I have a class as follows:
class Kernel(object):
    """ creates kernels with the necessary input data """
    def __init__(self, Amplitude, random = None):
        self.Amplitude = Amplitude
        self.random = random
        if random != None:
            self.dims = list(random.shape)

    def Gaussian(self, X, Y, sigmaX, sigmaY, muX=0.0, muY=0.0):
        """ return a 2 dimensional Gaussian kernel """
        kernel = np.zeros([X, Y])
        theta = [self.Amplitude, muX, muY, sigmaX, sigmaY]
        for i in range(X):
            for j in range(Y):
                kernel[i][j] = integrate.dblquad(lambda x, y: G2(x + float(i) - (X-1.0)/2.0, \
                                                                 y + float(j) - (Y-1.0)/2.0, theta), \
                                                 -0.5, 0.5, lambda y: -0.5, lambda y: 0.5)[0]
        return kernel
It just basically creates a bunch of convolution kernels (I've only included the first).
I want to add an instance (method?) to this class so that I can use something like
conv = Kernel(1.5)
conv.Gaussian(9, 9, 2, 2).kershow()
and have the array pop up using Matplotlib. I know how to write this instance and plot it with Matplotlib, but I don't know how to write this class so that for each method I would like to have this additional ability (i.e. .kershow()), I may call it in this manner.
I think I could use decorators ? But I've never used them before. How can I do this?
The name of the thing you're looking for is function or method chaining.
Strings are a really good example of this in Python. Because a string is immutable, each string method returns a new string. So you can call string methods on the return values, rather than storing the intermediate value. For example:
lower = ' THIS IS MY NAME: WAYNE '.lower()
without_left_padding = lower.lstrip()
without_right_padding = without_left_padding.rstrip()
title_cased = without_right_padding.title()
Instead you could write:
title_cased = ' THIS IS MY NAME: WAYNE '.lower().lstrip().rstrip().title()
Of course really you'd just do .strip().title(), but this is an example.
So if you want a .kershow() option, then you'll need to include that method on whatever you return. In your case, numpy arrays don't have a .kershow method, so you'll need to return something that does.
Your options are mostly:
A subclass of numpy arrays
A class that wraps the numpy array
I'm not sure what is involved with subclassing the numpy array, so I'll stick with the latter as an example. Either you can use the kernel class, or create a second class.
Alex provided an example of using your kernel class, but alternatively you could have another class like this:
class KernelPlotter(object):
    def __init__(self, kernel):
        self.kernel = kernel

    def kershow(self):
        ...  # do the plotting here
Then you would pretty much follow your existing code, but rather than return kernel you would do return KernelPlotter(kernel).
Which option you choose really depends on what makes sense for your particular problem domain.
There's another sister to function chaining called a fluent interface that's basically function chaining but with the goal of making the interface read like English. For example you might have something like:
Kernel(with_amplitude=1.5).create_gaussian(with_x=9, and_y=9, and_sigma_x=2, and_sigma_y=2).show_plot()
Though obviously there can be some problems when writing your code this way.
Here's how I would do it:
class Kernel(object):
    def __init__ ...

    def Gaussian(...):
        self.kernel = ...
        ...
        return self  # not kernel

    def kershow(self):
        do_stuff_with(self.kernel)
Basically the Gaussian method doesn't return a numpy array, it just stores it in the Kernel object to be used elsewhere in the class. In particular kershow can now use it. The return self is optional but allows the kind of interface you wanted where you write
conv.Gaussian(9, 9, 2, 2).kershow()
instead of
conv.Gaussian(9, 9, 2, 2)
conv.kershow()
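Putting it together, here is a minimal runnable sketch of the return-self pattern; the Gaussian body is simplified to a closed-form expression rather than the original dblquad integration, and kershow is assumed to plot with matplotlib:

import numpy as np
import matplotlib.pyplot as plt

class Kernel(object):
    def __init__(self, Amplitude):
        self.Amplitude = Amplitude
        self.kernel = None

    def Gaussian(self, X, Y, sigmaX, sigmaY, muX=0.0, muY=0.0):
        # closed-form 2D Gaussian on an X-by-Y grid centred at (muX, muY)
        x = np.arange(X) - (X - 1) / 2.0 - muX
        y = np.arange(Y) - (Y - 1) / 2.0 - muY
        xx, yy = np.meshgrid(x, y, indexing='ij')
        self.kernel = self.Amplitude * np.exp(-(xx**2 / (2 * sigmaX**2) + yy**2 / (2 * sigmaY**2)))
        return self  # enables chaining

    def kershow(self):
        plt.imshow(self.kernel)
        plt.colorbar()
        plt.show()

conv = Kernel(1.5)
conv.Gaussian(9, 9, 2, 2).kershow()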

Python: Best way to deal with functions with long list of arguments?

I've found various detailed explanations of how to pass long lists of arguments into a function, but I still doubt whether that's the proper way to do it.
In other words, I suspect that I'm doing it wrong, but I can't see how to do it right.
The problem: I have a (not very long) recursive function which uses quite a number of variables and needs to modify some content in at least some of them.
What I end up with is something like this:
def myFunction(alpha, beta, gamma, zeta, alphaList, betaList, gammaList, zetaList):
    <some operations>
    myFunction(alpha, beta, modGamma, zeta, modAlphaList, betaList, gammaList, modZetaList)
...and I want to see the changes I did on original variables (in C I would just pass a reference, but I hear that in Python it's always a copy?).
Sorry if noob, I don't know how to phrase this question so I can find relevant answers.
You could wrap up all your parameters in a class, like this:
class FooParameters:
    alpha = 1.0
    beta = 1.0
    gamma = 1.0
    zeta = 1.0
    alphaList = []
    betaList = []
    gammaList = []
    zetaList = []
and then your function takes a single parameter instance:
def myFunction(params):
    omega = params.alpha * params.beta + exp(params.gamma)
    # more magic...
calling like:
testParams = FooParameters()
testParams.gamma = 2.3
myFunction(testParams)
print(testParams.zetaList)
Because the params instance is passed by reference, changes in the function are preserved.
This is commonly used in matplotlib, for example. They pass the long list of arguments using * or **, like:
def function(*args, **kwargs):
    ...  # do something
Calling function:
function(1, 2, 3, 4, 5, a=1, b=2, c=3)
Here 1,2,3,4,5 will go to args and a=1, b=2, c=3 will go to kwargs, as a dictionary. So that they arrive at your function like:
args = (1, 2, 3, 4, 5)
kwargs = {'a': 1, 'b': 2, 'c': 3}
And you can treat them in the way you want.
I don't know where you got the idea that Python copies values when passing into a function. That is not at all true.
On the contrary: each parameter in a function is an additional name referring to the original object. If you change the value of that object in some way - for example, if it's a list and you change one of its members - then the original will also see that change. But if you rebind the name to something else - say by doing alpha = my_completely_new_value - then the original remains unchanged.
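A small sketch of that difference (the names are just illustrative):

def modify(values, alpha):
    values.append(99)   # mutates the object the caller passed in; visible outside
    alpha = 42          # rebinds the local name only; invisible outside

my_list = [1, 2, 3]
my_alpha = 1.0
modify(my_list, my_alpha)
print(my_list)   # [1, 2, 3, 99]
print(my_alpha)  # 1.0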
You may be tempted to do something akin to this:
def myFunction(*args):
    var_names = ['alpha', 'beta', 'gamma', 'zeta']
    locals().update(zip(var_names, args))

myFunction(alpha, beta, gamma, zeta)
However, this 'often' won't work. I suggest introducing another namespace:
from collections import OrderedDict

def myFunction(*args):
    var_names = ['alpha', 'beta', 'gamma', 'zeta']
    vars = OrderedDict(zip(var_names, args))
    # get them all via vars[var_name]

myFunction(*vars.values())  # since we used an OrderedDict we can simply do *.values()
you can capture the non-modified values in a closure:
def myFunction(alpha, beta, gamma, zeta, alphaList, betaList, gammaList, zetaList):
    def myInner(al, zl, g=gamma):   # parameters without defaults must come first
        <some operations>
        myInner(modAlphaList, modZetaList, modGamma)
    myInner(alphaList, zetaList)
(BTW, this is about the only way to write a truly recursive function in Python.)
You could pass in a dictionary and return a new dictionary. Or put your method in a class and have alpha, beta etc. be attributes.
You should put myFunction in a class. Set up the class with the appropriate attributes and call the appropriate functions. The state is then well contained in the class.
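A minimal sketch of that approach (the attribute names follow the question; the body of step is a placeholder):

class Recurrence:
    def __init__(self, alpha, beta, gamma, zeta,
                 alphaList, betaList, gammaList, zetaList):
        self.alpha, self.beta, self.gamma, self.zeta = alpha, beta, gamma, zeta
        self.alphaList, self.betaList = alphaList, betaList
        self.gammaList, self.zetaList = gammaList, zetaList

    def step(self):
        # <some operations> that mutate self.gamma, self.alphaList, self.zetaList, ...
        self.gamma *= 0.5                  # placeholder operation
        self.zetaList.append(self.gamma)
        if self.gamma > 1e-6:
            self.step()                    # recurse; all state lives on the instance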

How does functools partial do what it does?

I am not able to get my head around how partial works in functools.
I have the following code from here:
>>> sum = lambda x, y : x + y
>>> sum(1, 2)
3
>>> incr = lambda y : sum(1, y)
>>> incr(2)
3
>>> def sum2(x, y):
...     return x + y
>>> incr2 = functools.partial(sum2, 1)
>>> incr2(4)
5
Now in the line
incr = lambda y : sum(1, y)
I get that whatever argument I pass to incr it will be passed as y to lambda which will return sum(1, y) i.e 1 + y.
I understand that. But I didn't understand this incr2(4).
How does the 4 get passed as x in the partial function? To me, 4 should replace sum2. What is the relation between x and 4?
Roughly, partial does something like this (apart from keyword args support etc):
def partial(func, *part_args):
    def wrapper(*extra_args):
        args = list(part_args)
        args.extend(extra_args)
        return func(*args)
    return wrapper
So, by calling partial(sum2, 4) you create a new function (a callable, to be precise) that behaves like sum2, but has one positional argument less. That missing argument is always substituted by 4, so that partial(sum2, 4)(2) == sum2(4, 2)
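A quick check of that equivalence, using the sum2 from the question:

from functools import partial

def sum2(x, y):
    return x + y

incr2 = partial(sum2, 1)
print(incr2(4))                            # 5: the 4 is passed as y, since 1 is already bound to x
print(partial(sum2, 4)(2) == sum2(4, 2))   # True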
As for why it's needed, there's a variety of cases. Just for one, suppose you have to pass a function somewhere where it's expected to have 2 arguments:
class EventNotifier(object):
    def __init__(self):
        self._listeners = []

    def add_listener(self, callback):
        ''' callback should accept two positional arguments, event and params '''
        self._listeners.append(callback)

    # ...

    def notify(self, event, *params):
        for f in self._listeners:
            f(event, params)
But a function you already have needs access to some third context object to do its job:
def log_event(context, event, params):
    context.log_event("Something happened %s, %s", event, params)
So, there are several solutions:
A custom object:
class Listener(object):
    def __init__(self, context):
        self._context = context

    def __call__(self, event, params):
        self._context.log_event("Something happened %s, %s", event, params)

notifier.add_listener(Listener(context))
Lambda:
log_listener = lambda event, params: log_event(context, event, params)
notifier.add_listener(log_listener)
With partials:
context = get_context() # whatever
notifier.add_listener(partial(log_event, context))
Of those three, partial is the shortest and the fastest.
(For a more complex case you might want a custom object though).
partials are incredibly useful.
For instance, in a 'pipe-lined' sequence of function calls (in which the returned value from one function is the argument passed to the next).
Sometimes a function in such a pipeline requires a single argument, but the function immediately upstream from it returns two values.
In this scenario, functools.partial might allow you to keep this function pipeline intact.
Here's a specific, isolated example: suppose you want to sort some data by each data point's distance from some target:
# create some data
import random as RND
fnx = lambda: RND.randint(0, 10)
data = [ (fnx(), fnx()) for c in range(10) ]
target = (2, 4)
import math
def euclid_dist(v1, v2):
    x1, y1 = v1
    x2, y2 = v2
    return math.sqrt((x2 - x1)**2 + (y2 - y1)**2)
To sort this data by distance from the target, what you would like to do of course is this:
data.sort(key=euclid_dist)
but you can't--the sort method's key parameter only accepts functions that take a single argument.
so re-write euclid_dist as a function taking a single parameter:
from functools import partial
p_euclid_dist = partial(euclid_dist, target)
p_euclid_dist now accepts a single argument,
>>> p_euclid_dist((3, 3))
1.4142135623730951
so now you can sort your data by passing in the partial function for the sort method's key argument:
data.sort(key=p_euclid_dist)
# verify that it works:
for p in data:
    print(round(p_euclid_dist(p), 3))
1.0
2.236
2.236
3.606
4.243
5.0
5.831
6.325
7.071
8.602
Or for instance, one of the function's arguments changes in an outer loop but is fixed during iteration in the inner loop. By using a partial, you don't have to pass in the additional parameter during iteration of the inner loop, because the modified (partial) function doesn't require it.
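A small sketch of that outer/inner loop case (the function and values here are just illustrative):

from functools import partial

def weighted(value, weight):
    return value * weight

for weight in (0.5, 1.0, 2.0):                        # changes in the outer loop
    apply_weight = partial(weighted, weight=weight)   # fixed for the inner loop
    for value in (1, 2, 3):
        print(apply_weight(value))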
>>> from functools import partial
>>> def fnx(a, b, c):
...     return a + b + c
>>> fnx(3, 4, 5)
12
create a partial function (using keyword arg)
>>> pfnx = partial(fnx, a=12)
>>> pfnx(b=4, c=5)
21
you can also create a partial function with a positional argument
>>> pfnx = partial(fnx, 12)
>>> pfnx(4, 5)
21
but this will throw (e.g., creating partial with keyword argument then calling using positional arguments)
>>> pfnx = partial(fnx, a=12)
>>> pfnx(4, 5)
Traceback (most recent call last):
File "<pyshell#80>", line 1, in <module>
pfnx(4, 5)
TypeError: fnx() got multiple values for keyword argument 'a'
another use case: writing distributed code using python's multiprocessing library. A pool of processes is created using the Pool method:
>>> import multiprocessing as MP
>>> # create a process pool:
>>> ppool = MP.Pool()
Pool has a map method, but it only takes a single iterable, so if you need to pass in a function with a longer parameter list, re-define the function as a partial, to fix all but one:
>>> ppool.map(pfnx, [4, 6, 7, 8])
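As a self-contained sketch of that pattern (the worker function and values here are illustrative, not from the session above):

from functools import partial
from multiprocessing import Pool

def scale(factor, x):
    return factor * x

if __name__ == "__main__":
    worker = partial(scale, 10)   # fix factor, leave x for map to supply
    with Pool() as pool:
        print(pool.map(worker, [4, 6, 7, 8]))   # [40, 60, 70, 80]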
Short answer: partial gives default values to the parameters of a function that would otherwise not have default values.
from functools import partial
def foo(a, b):
    return a + b

bar = partial(foo, a=1)  # equivalent to: foo(a=1, b)
bar(b=10)
# 11 = 1 + 10
bar(a=101, b=10)
# 111 = 101 + 10
Partials can be used to make new derived functions that have some input parameters pre-assigned
To see some real world usage of partials, refer to this really good blog post here
A simple but neat beginner's example from the blog, covers how one might use partial on re.search to make code more readable. re.search method's signature is:
search(pattern, string, flags=0)
By applying partial we can create multiple versions of the regular expression search to suit our requirements, so for example:
is_spaced_apart = partial(re.search, '[a-zA-Z]\s\=')
is_grouped_together = partial(re.search, '[a-zA-Z]\=')
Now is_spaced_apart and is_grouped_together are two new functions derived from re.search that have the pattern argument applied(since pattern is the first argument in the re.search method's signature).
The signature of these two new functions(callable) is:
is_spaced_apart(string, flags=0) # pattern '[a-zA-Z]\s\=' applied
is_grouped_together(string, flags=0) # pattern '[a-zA-Z]\=' applied
This is how you could then use these partial functions on some text:
for text in lines:
    if is_grouped_together(text):
        some_action(text)
    elif is_spaced_apart(text):
        some_other_action(text)
    else:
        some_default_action()
You can refer to the link above to get a more in-depth understanding of the subject, as it covers this specific example and much more.
In my opinion, it's a way to implement currying in python.
from functools import partial

def add(a, b):
    return a + b

def add2number(x, y, z):
    return x + y + z

if __name__ == "__main__":
    add2 = partial(add, 2)
    print("result of add2 ", add2(1))
    add3 = partial(partial(add2number, 1), 2)
    print("result of add3", add3(1))
The results are 3 and 4.
This answer is more of an example code. All the above answers give good explanations regarding why one should use partial. I will give my observations and use cases about partial.
from functools import partial
def adder(a, b, c):
    print('a:{},b:{},c:{}'.format(a, b, c))
    ans = a + b + c
    print(ans)

partial_adder = partial(adder, 1, 2)
partial_adder(3)  # now partial_adder is a callable that can take only one argument
Output of the above code should be:
a:1,b:2,c:3
6
Notice that in the above example a new callable was returned that will take the parameter (c) as its argument. Note that it is also the last argument to the function.
args = [1,2]
partial_adder = partial(adder,*args)
partial_adder(3)
Output of the above code is also:
a:1,b:2,c:3
6
Notice that * was used to unpack the non-keyword arguments, and the callable returned takes the same remaining argument as above.
Another observation: the example below demonstrates that partial returns a callable which will take the undeclared parameter (a) as an argument.
def adder(a, b=1, c=2, d=3, e=4):
    print('a:{},b:{},c:{},d:{},e:{}'.format(a, b, c, d, e))
    ans = a + b + c + d + e
    print(ans)

partial_adder = partial(adder, b=10, c=2)
partial_adder(20)
Output of the above code should be:
a:20,b:10,c:2,d:3,e:4
39
Similarly,
kwargs = {'b':10,'c':2}
partial_adder = partial(adder,**kwargs)
partial_adder(20)
Above code prints
a:20,b:10,c:2,d:3,e:4
39
I had to use it when I was using the Pool.map_async method from the multiprocessing module. You can pass only one argument to the worker function, so I had to use partial to make my worker function look like a callable with only one input argument, when in reality it had multiple input arguments.
It is also worth mentioning that when we use partial to "hard code" some of a function's parameters, those should be the rightmost parameters:
def func(a, b):
    return a * b

prt = partial(func, b=7)
print(prt(4))
# returns 28
but if we do the same while fixing the a parameter instead,
def func(a, b):
    return a * b

prt = partial(func, a=7)
print(prt(4))
it will throw an error:
"TypeError: func() got multiple values for argument 'a'"
because the positional 4 is bound to a, which was already fixed by the keyword argument.
Adding a couple of cases from machine learning where functional-programming-style currying with functools.partial can be quite useful:
Build multiple models on the same dataset
The following example shows how linear regression, support vector machine and random forest regression models can be fitted on the same diabetes dataset, to predict the target and compute the score.
The (partial) function classify_diabetes() is created from the function classify_data() by currying (using functools.partial()). The resulting function no longer requires the data to be passed, and we can straight away pass only the instances of the model classes.
from functools import partial
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import load_diabetes

def classify_data(data, model):
    reg = model.fit(data['data'], data['target'])
    return model.score(data['data'], data['target'])

diabetes = load_diabetes()
classify_diabetes = partial(classify_data, diabetes)  # curry

for model in [LinearRegression(), SVR(), RandomForestRegressor()]:
    print(f'model {type(model).__name__}: score = {classify_diabetes(model)}')

# model LinearRegression: score = 0.5177494254132934
# model SVR: score = 0.2071794500005485
# model RandomForestRegressor: score = 0.9216794155402649
Setting up the machine learning pipeline
Here the function pipeline() is created with currying which already uses StandardScaler() to preprocess (scale / normalize) the data prior to fitting the model on it, as shown in the next example:
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

pipeline = partial(make_pipeline, StandardScaler())  # curry

for model in [LinearRegression(), SVR(), RandomForestRegressor()]:
    fitted = pipeline(model).fit(diabetes['data'], diabetes['target'])
    print(f"model {type(model).__name__}: "
          f"score = {fitted.score(diabetes['data'], diabetes['target'])}")

# model LinearRegression: score = 0.5177494254132934
# model SVR: score = 0.2071794500005446
# model RandomForestRegressor: score = 0.9180227193805106
