I was wondering if there is a way that I can pass pyOpt a function that
should be called at the end of each iteration?
The reason I need something like this is that I am running an FEA simulation in each function evaluation, and I would like to output the FEA results (displacements, stresses) to an ExodusII file after each optimization iteration. I originally placed my writeExodus function at the end of the "function evaluation" function; the problem with this is that a new "pseudo time-step" gets written to my Exodus file each time the function is evaluated, rather than only at the end of each iteration. This obviously leads to extra, unnecessary output to the Exodus file for numerical differentiation (finite difference, complex step) and for optimizers that make multiple function evaluations per iteration (e.g., GCMMA when checking whether the approximation is conservative).
So, is there a way I can tell pyOpt to execute a function (e.g., my writeExodus function) at the end of each iteration? Alternatively, is there any way I can track the optimizer iterations in pyOpt, so that inside my "function evaluation" function I can keep count of the iterations and write the Exodus output only when the iteration number changes?
You could put your function into a component that you place at the end of your model. Since it won't have any connections, you'll want to set the run order manually for your group.
Alternatively, you could hack the pyopt_sparse driver code to call your function manually: add a call to your method of choice at the end of this call, and it will then be invoked any time pyopt_sparse asks for an objective evaluation.
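If you go the wrapper route instead, here is a minimal sketch of the idea, assuming the usual pyOpt convention that the objective function returns (f, g, fail); ExodusLogger, write_exodus, and flush are hypothetical names, not part of pyOpt:

class ExodusLogger:
    # Wraps a pyOpt-style objective function and defers Exodus output.
    def __init__(self, objfunc, write_exodus):
        self.objfunc = objfunc            # assumed to return (f, g, fail)
        self.write_exodus = write_exodus  # hypothetical user-supplied writer
        self.latest_x = None
        self.step = 0

    def __call__(self, x):
        f, g, fail = self.objfunc(x)      # every evaluation (including FD steps) lands here
        self.latest_x = x                 # remember the most recent design point
        return f, g, fail

    def flush(self):
        # Invoke once per optimizer iteration, e.g. from the driver hook above;
        # extra evaluations between flushes then never reach the Exodus file.
        if self.latest_x is not None:
            self.write_exodus(self.latest_x, self.step)
            self.step += 1

The optimizer only ever sees __call__, so gradient evaluations don't trigger output; only flush() does.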
I have a function that can return values, but in order to do so it collects output from AWS (the jobs can be very large, so this can take a long time and isn't always necessary).
Is there a way to know, at run time, whether the user called my function with an assignment (e.g., x = foo()) or without one (e.g., foo())?
I know I could just add a flag, but the idea of checking the call at runtime seems elegant.
Thanks!
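For what it's worth, here is a crude sketch of the frame-inspection idea (collect_from_aws is a hypothetical stand-in for the expensive work; this is fragile and breaks in the REPL, with multi-line calls, or with comparisons like x == foo()):

import inspect

def foo():
    # Peek at the caller's source line and guess whether the result is assigned.
    caller = inspect.stack()[1]
    context = caller.code_context[0] if caller.code_context else ""
    if "=" not in context.split("foo(")[0]:
        return None  # bare foo(): skip the expensive collection
    return collect_from_aws()  # hypothetical expensive AWS call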
I got different results from the following two Python timeit calls:
import timeit
print(min(timeit.repeat(lambda: 1+1)))
print(min(timeit.repeat('lambda: 1+1')))
The output is something like:
0.13658121100002063
0.10372773000017332
Could you please help explain the difference between them?
On closer inspection, this is a really interesting question!
But first, please have another look at the docs:
The constructor takes a statement to be timed, an additional statement used for setup, and a timer function. Both statements default to 'pass'; the timer function is platform-dependent (see the module doc string).
[...]
The stmt and setup parameters can also take objects that are callable without arguments. This will embed calls to them in a timer function that will then be executed by timeit(). Note that the timing overhead is a little larger in this case because of the extra function calls.
If you manage not to fall into the trap of attributing the observed difference to function-call overhead, you will notice: the first argument is either a callable that is called, or a statement that is executed.
So, in your two lines of code you measure the performance of two different things.
In the first line you pass a callable that is being called and its execution time is measured:
timeit.repeat(lambda: 1+1)
In the second line you pass a statement that is executed, and its execution time is measured:
timeit.repeat('lambda: 1+1')
Note that in the second case you don't actually call the function, but measure the time it takes to create the lambda!
If you actually wanted to measure the execution time of the function call, you would write something like this:
timeit.repeat('test()', 'test=lambda: 1+1')
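Both corrected forms then time the actual call, so the numbers land in the same ballpark (the callable form carries a little extra call overhead, per the docs quoted above):

import timeit

print(min(timeit.repeat(lambda: 1 + 1)))                     # callable: called and timed
print(min(timeit.repeat('test()', 'test = lambda: 1 + 1')))  # statement: test() is timed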
For comparison, look at this example:
import time
import timeit
print(min(timeit.repeat(lambda: time.sleep(1), number=1)))
print(min(timeit.repeat('lambda: time.sleep(1)', number=1)))
The output clearly shows the difference (first calls function, second creates function):
1.0009081270000024
5.370002327254042e-07
When writing a recursive function in Python, what is the difference between using "print" and "return"? I understand the difference between the two when using them in iterative functions, but I don't see any rhyme or reason as to why it may be more important to use one over the other in a recursive function.
What a strange question.
The two are completely different, and their correct use in a recursive function is just as important as in an iterative one. You might even say more important: after all, in an iterative function, you return the result once only; but in a recursive function, you must return something at every step, otherwise the calling step has nothing to work on.
To illustrate: if you are doing mergesort, for example, the recursive function at each stage must return the sorted sublist. If it simply prints it, without returning it, then the caller will not get the sublist to sort, so cannot then merge the two sorted sublists into a single sorted list for passing further up the stack.
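A minimal mergesort sketch (just an illustration, not from the original post) makes the point concrete: replace either return in mergesort with a print, and merge() receives None.

def merge(left, right):
    # Merge two already-sorted lists into one sorted list.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

def mergesort(xs):
    if len(xs) <= 1:
        return xs  # the base case must also return, not print
    mid = len(xs) // 2
    # Each recursive call must RETURN its sorted sublist; if it merely
    # printed it, merge() would receive None and fail.
    return merge(mergesort(xs[:mid]), mergesort(xs[mid:]))

print(mergesort([3, 1, 2]))  # [1, 2, 3]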
I might add that, from a functional programming perspective, print is a side effect, whereas return is the function's value.
Consider programming as an extension of mathematics. Your function takes a set of inputs, performs an action on them, and returns the computation. print in this case is not a computation; it causes an interaction with the system's IO to provide output to the user.
As for return and print in a recursive function: return is the only required operation. Recursion requires inputs, an optional computation, and a test. The test determines whether the function will be called again with the modified inputs, or whether the modified inputs are the final solution to the overall problem. Nowhere in this process is print required, and per functional purists it really has no place in a recursive function (unless its computation IS to print).
The difference between print and return in a recursive function is similar to the difference in an iterative function: print is direct output to the user, and return is the result of the function. You have to return at every step, or the calling step receives nothing to work with.
For example:
def factorial(n):
    if n == 1:
        return 1
    else:
        return n * factorial(n-1)
If you used print instead of return, each recursive call would hand back None, and the multiplication n * factorial(n-1) would fail with a TypeError.
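To see the failure, here is the same function with print in place of return; calling bad_factorial(3) raises a TypeError because the inner call returns None:

def bad_factorial(n):
    if n == 1:
        print(1)                         # prints, but returns None
    else:
        print(n * bad_factorial(n - 1))  # TypeError: int * None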
I am looking at a debilitating performance problem in Python while testing code out in the IDLE GUI.
For a recursive function:
def f(input1, input2):
    newinput1 = g(input1, input2)
    return f(newinput1, input2)
If I call the function f(20, A+10), where A is a constant, does each recursive call of f() get input2 = "A+10" as a string that is reinterpreted, an expression that needs to be recalculated, or a number that is the result of A+10?
I found this in the help file, but I need something more precise to understand what is going on:
"Abstractions tend to create indirections and force the interpreter to work more. If the levels of indirection outweigh the amount of useful work done, your program will be slower. You should avoid excessive abstraction, especially under the form of tiny functions or methods (which are also often detrimental to readability)."
What exactly is going on in Python?
When you call a function as follows:
f(20, A+10)
Python evaluates 20 to 20 and A+10 to whatever that works out to. Let's say A is currently 20, so A+10 works out to 30. The names input1 and input2 are then bound to the values 20 and 30 in the environment of the call to f. Python will not need to reevaluate A+10 when the value is used, and it will not record anything about how the value 30 was obtained. In particular, if you call
f(20, A)
input2 will be bound to the current value of A, but it will not retain any ties to A. Reassigning input2 inside f will not affect A.
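A quick demonstration of both points (a plain, non-recursive f, just to show the evaluation semantics):

A = 20

def f(input1, input2):
    input2 = input2 + 1  # rebinds the local name only
    return input1, input2

print(f(20, A + 10))  # A + 10 is evaluated once, to 30, before the call -> (20, 31)
print(A)              # reassigning input2 inside f did not touch A: still 20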
Say I have a method that creates a dictionary from the given parameters:
def newDict(a, b, c, d):  # in reality this method is a bit more complex; I've shortened it for the sake of simplicity
    return {"x": a,
            "y": b,
            "z": c,
            "t": d}
And I have another method that calls newDict each time it is executed. At the end, when I look at my cProfile output, I see something like this:
17874 calls (17868 primitive) 0.076 CPU seconds
and of course, my newDict method is called 1785 times. Now, my question is whether I can memorize the newDict method so as to cut down the time spent in these calls. (Just to be clear: the variables change in almost every call, though I'm not sure whether that has an effect on memorizing the function.)
Sub-question: I believe that 17k calls is too many and the code is not efficient. But by looking at the stats, can you also tell whether this is a normal result, or whether I have too many calls and the code is slow?
You mean memoize, not memorize.
If the values are almost always different, memoizing won't help; it will slow things down.
Without seeing your full code and knowing what it's supposed to do, how can we tell whether 17k calls is a lot or a little?
If by memorizing you mean memoizing, use functools.lru_cache. It's a function decorator.
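For example, a minimal use (note that repeated calls then hand back the very same dict object, which matters for the answer below):

from functools import lru_cache

@lru_cache(maxsize=None)  # cache one result per distinct argument tuple
def newDict(a, b, c, d):
    return {"x": a, "y": b, "z": c, "t": d}

d1 = newDict(1, 2, 3, 4)  # computed on the first call
d2 = newDict(1, 2, 3, 4)  # served from the cache
assert d1 is d2           # the same object, so mutating d1 also "changes" d2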
The purpose of memoizing is to save a result of an operation that was expensive to perform so that it can be provided a second, third, etc., time without having to repeat the operation and repeatedly incur the expense.
Memoizing is normally applied to a function that (a) performs an expensive operation, (b) always produces the same result given the same arguments, and (c) has no side effects on the program state.
Memoizing is typically implemented within such a function by 'saving' the result along with the values of the arguments that produced that result. This is a special form of the general concept of a cache. Each time the function is called, the function checks its memo cache to see if it has already determined the result that is appropriate for the current values of the arguments. If the cache contains the result, it can be returned without the need to recompute it.
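A bare-bones sketch of that pattern as a decorator (assuming hashable, positional-only arguments):

def memoize(fn):
    cache = {}  # maps argument tuples to saved results

    def wrapper(*args):
        if args not in cache:  # compute only on a cache miss
            cache[args] = fn(*args)
        return cache[args]

    return wrapper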
Your function appears to be intended to create a new dict each time it is called. There does not appear to be a sensible way to memoize this function: you always want a new dict returned to the caller so that its use of the dict it receives does not interfere with some other call to the function.
The only way I can see memoizing being useful would be if (1) the computation of one or more of the values placed into the result is expensive (in which case I would probably define a function that computes that value and memoize that function), or (2) the newDict function is intended to return the same collection of values for a particular set of argument values. In the latter case I would not use a dict, but would instead use a non-modifiable object (e.g., a class that behaves like a dict but protects against modification of its contents).
Regarding your sub-question, the questions you need to ask are (1) whether the number of times newDict is called is appropriate, and (2) whether the execution time of each call to newDict can be reduced. These are two separate, independent questions that need to be addressed individually.
BTW, the function definition in the original post had a typo in it -- a stray 'd' between the return keyword and the open brace.