scipy.optimize.minimize keep best solution - python

I want to optimize a function with scipy.optimize.minimize. The objective function draws a new random sample via np.random.normal(m, s, N) on every call, so each evaluation returns a slightly different result, and result.success ends up False.
If I track the evaluations inside my function, I can see that there is a best solution. Is it possible to keep the best solution found and retrieve it from scipy.optimize.minimize?
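One common workaround (a sketch, not a built-in scipy feature) is to record the best point seen inside the objective itself, since minimize does not expose its evaluation history. The quadratic-plus-noise objective below is a hypothetical stand-in for the real function:

```python
import numpy as np
from scipy.optimize import minimize

best = {"x": None, "f": np.inf}  # best point seen across all evaluations

def noisy_objective(x):
    # hypothetical noisy objective: a quadratic plus Gaussian noise,
    # standing in for a function that calls np.random.normal(m, s, N)
    value = np.sum((x - 1.0) ** 2) + np.random.normal(0.0, 0.01)
    if value < best["f"]:
        best["f"] = value
        best["x"] = x.copy()  # copy: minimize may reuse the array
    return value

result = minimize(noisy_objective, x0=np.zeros(2), method="Nelder-Mead")
# even if result.success is False, best["x"] holds the best point evaluated
```

Nelder-Mead is used here because gradient-based methods are especially fragile when the objective is noisy.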

How can I speed up a loop that queries a kd-tree?

The following section of my code is taking ages to run (it's the only loop in the function, so it's the most likely culprit):
tree = KDTree(x_rest)
for i in range(len(x_lost)):
    _, idx = tree.query([x_lost[i]], k=int(np.sqrt(len(x_rest))), p=1)
    y_lost[i] = mode(y_rest[idx][0])[0][0]
Is there a way to speed this up? I have a few suggestions from Stack Overflow:
This answer suggests using Cython. I'm not particularly familiar with it, but I'm not very against it either.
This answer uses a multiprocessing Pool. I'm not sure how useful this will be: my current execution takes over 12h to run, and I'm hoping for at least a 5-10x speedup (though that may not be possible).
Here are a few notes about how you could speed this up:
This code loops over x_lost and calls tree.query() with one point at a time. However, query() supports querying multiple points at once. The loop inside query() is implemented in Cython, so I would expect it to be much faster than a loop written in Python. If you pass the whole array in a single call, it will return an array of matches for every point.
The query() function supports a parameter called workers,
which if set to a value larger than one, runs your query in
parallel. Since workers is implemented using threads, it will likely be faster than a solution using multiprocessing.Pool, since it avoids pickling. See the documentation.
The code above doesn't define the mode() function, but I'm assuming
it's scipy.stats.mode(). If that's the case, rather than calling mode() repeatedly, you can use the axis argument, which would let you take the mode of nearby points for multiple queries at once.

How to Use Scipy Optimize Vectorized Parameter

I found in the release notes from scipy to version 1.9.0 the following about the optimisation module, in the section "scipy.optimize improvements", 4th point:
Add a vectorized parameter to call a vectorized objective function only once per iteration.
However, I already checked the documentation for such a parameter (for the minimize function and for minimize_scalar) and couldn't find any hint of it. Searching the internet, I only found posts with suggestions or GitHub issues about implementing such a thing (or workarounds for it).
Where can I find this parameter, and can I use it?
The release notes have a more specific entry for scipy.optimize.differential_evolution, where the parameter is explained. I've also come across it in other SO questions, but I don't recall which other functions support it.
Basically, for functions that allow it, you can write the objective function, or other callables (the Jacobian, boundary functions?), so that they accept a 2D array of values. Normally the function takes a 1D array, the current "state". But with vectorized=True, the function should be prepared to accept a set of "state" arrays and return a value for each.
So instead of calling the objective k times to get a range of values, such as when calculating a gradient, the solver can call it once with an (n, k) argument and get back all k results in one call.
I tried to explain how solve_ivp uses this at
scipy.integrate.solve_ivp vectorized
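For differential_evolution, which does document the parameter, a minimal sketch looks like this. Note the calling convention: with vectorized=True the solver passes an array with one column per candidate solution and expects one objective value per column back:

```python
import numpy as np
from scipy.optimize import differential_evolution

def objective(x):
    # with vectorized=True, x has shape (N, S): N parameters, S candidates;
    # return one objective value per candidate, shape (S,)
    return np.sum(x**2, axis=0)

result = differential_evolution(
    objective,
    bounds=[(-5, 5), (-5, 5)],
    vectorized=True,
    updating="deferred",  # vectorized evaluation implies deferred updating
    seed=1,
)
```

The whole population is evaluated in one call per generation, which pays off when the objective itself is NumPy-vectorizable or expensive to invoke.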

Any (fast) way to check a function is constant/almost constant?

I encountered this problem when adopting the lazy-object pattern: at some point, one of the functions (a user input) might be a constant function. I want to check whether the function is constant before feeding it into the loop.
My current solution is a somewhat ugly workaround using np.allclose:
def is_constant(func, arr):
    return np.allclose(func(arr), func(arr[0]))
You can also use things like comparing np.max() against np.min() of the outputs, which can work slightly faster.
But I was wondering if there is any faster way to do this? The above already evaluates the function over a fairly large array.
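One cheap refinement (a sketch; the probes parameter and sample size are arbitrary choices) is to evaluate a small random subsample first, and only fall back to the full array when the sample already looks constant:

```python
import numpy as np

def is_constant(func, arr, probes=8):
    # probe a few random points first; most non-constant functions
    # are rejected here without touching the full array
    rng = np.random.default_rng(0)
    sample = rng.choice(arr, size=min(probes, len(arr)), replace=False)
    ref = func(arr[0])
    if not np.allclose(func(sample), ref):
        return False                     # fast exit: the sample already varies
    return np.allclose(func(arr), ref)   # confirm on the full array

arr = np.linspace(0.0, 1.0, 10_000)
```

This only helps on average; a function that is constant almost everywhere still forces the full evaluation, and without extra knowledge about func (e.g. symbolic inspection) there is no way to prove constancy from finitely many samples.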

Tensorflow: why tf.nn.conv2d runs faster than tf.layers.conv2d?

I am writing a simple implementation of AlexNet. I tried both tf.nn.conv2d and tf.layers.conv2d, and it turns out that the loss drops faster with tf.nn.conv2d, even though the structure is exactly the same. Does anyone have an explanation for that?
If you follow the chain of function calls, you will find that tf.layers.conv2d() calls tf.nn.conv2d(), so no matter which you use, tf.nn.conv2d() will be called; it will just be faster if you call it yourself. You can use traceback.print_stack() to verify that for yourself.
NOTE: this does not mean they are one and the same. Select the function based on your needs, as tf.layers.conv2d() performs various other tasks besides the convolution itself.

Is it bad practice to use Recursion where it isn't necessary?

In one of my last assignments, I got points docked for using recursion where it was not necessary. Is it bad practice to use Recursion where you don't have to?
For instance, this Python code block could be written two ways:
def test():
    if foo() == 'success!':
        print(True)
    else:
        test()
or
def test():
    while True:
        if foo() == 'success!':
            print(True)
            break
Is one inherently better than the other, performance-wise or practice-wise?
While recursion may allow for an easier-to-read solution, there is a cost. Each function call requires a small amount of memory overhead and set-up time that a loop iteration does not.
The Python call stack is also limited to 1000 nested calls by default; each recursive call counts against that limit, so you risk raising a run-time error with any recursive algorithm. There is no such hard limit to the number of iterations a loop may make.
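Both points can be demonstrated directly (a small sketch; the default limit can be inspected with sys.getrecursionlimit()):

```python
import sys

def recurse():
    # every call adds a stack frame; Python raises RecursionError
    # once the default limit (usually 1000 frames) is reached
    return recurse()

hit_limit = False
try:
    recurse()
except RecursionError:
    hit_limit = True

def iterate(n_iters):
    # the equivalent loop reuses a single frame, so no such limit applies
    count = 0
    for _ in range(n_iters):
        count += 1
    return count
```

The limit can be raised with sys.setrecursionlimit(), but that only postpones the failure; it does not remove the per-call overhead.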
They're not the same. The iterative version can, in theory, run forever: once entered, it doesn't change the state of the Python virtual machine. The recursive version, however, keeps expanding the call stack as it goes. As @chepner mentions in his answer, there's a limit to how long you can keep that up.
For the example you give, you'll notice the difference quickly. Since foo never changes, when foo() != 'success!' the recursive version will raise an exception once you blow out the stack (which won't take long), while the iterative version will simply hang. For functions that do terminate, between two implementations of the same algorithm, one recursive and one iterative, the iterative version will usually outperform the recursive one, since function calls impose some overhead.
Generally, recursion should bottom out: typically there's a simplest case handled outright without further recursive calls (n == 0; an empty list, tuple, or dict; etc.). For more complex inputs, recursive calls work on constituent parts of the input (elements of a list, items of a dict, ...), returning solutions to subproblems that the calling instance combines in some way and returns.
Recursion is analogous, and in many ways related, to mathematical induction -- more generally, well-founded induction. You reason about the correctness of a recursive procedure using induction, as the arguments passed recursively are generally "smaller" than those passed to the caller.
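That shape, a base case plus recursive calls on strictly smaller inputs, looks like this for summing a list (a toy sketch):

```python
def total(items):
    # base case: the empty list is handled outright, no further calls
    if not items:
        return 0
    # recursive case: the recursive call receives a strictly smaller input,
    # so by induction on the list length the function terminates
    return items[0] + total(items[1:])
```

Each call handles one element and delegates the rest, exactly the divide-and-combine structure the induction argument relies on.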
In your example, recursion is pointless: you're not passing data to a nested call, it's not a case of divide-and-conquer, there's no base case at all. It's the wrong mechanism for "infinite"/unbounded loops.
