I am using lmfit to do small-angle X-ray scattering pattern fitting. To this end, I use the Model class to wrap my functions and to build composite models, which works well. However, I wrote all my functions with 'q' as the independent variable (the convention in the discipline). Now I want to combine some of those q-functions with some of the built-in models, and it clashes because the independent variable for those is 'x'. I have tried something like modelBGND = lmfit.models.ConstantModel(independent_vars=['q']), but it gives the error:
ValueError: Invalid independent variable name ('q') for function
constant
Of course this can be solved by either rewriting the built-in functions in terms of 'q', or by recasting all my previously written functions in terms of 'x'. I am just curious whether there is a more straightforward approach.
Sorry, I don't think that is possible.
I think you will have to rewrite the functions to use q instead of x. That is, lmfit.Model uses function inspection to determine the names of the function arguments, and most of the built-in models really do require the first positional argument to be named x.
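If you prefer not to touch your existing q-functions, one workaround (a minimal sketch, not an official lmfit feature) is to wrap the built-in lineshape functions in thin wrappers whose first argument is named q, and build Models from those wrappers. The lineshape import below comes from lmfit.lineshapes; everything else is illustrative:

import numpy as np
import lmfit
from lmfit.lineshapes import gaussian  # built-in lineshape that expects 'x'

def constant_q(q, c=0.0):
    # constant background written in terms of q
    return c * np.ones_like(q)

def gaussian_q(q, amplitude=1.0, center=0.0, sigma=1.0):
    # re-expose the built-in gaussian lineshape with q as the first argument
    return gaussian(q, amplitude, center, sigma)

modelBGND = lmfit.Model(constant_q, independent_vars=['q'])
modelPeak = lmfit.Model(gaussian_q, independent_vars=['q'])
composite = modelPeak + modelBGND  # both components now share 'q'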
I've been using Python for scientific purposes for some years now. I recently became more familiar with writing classes, but I feel like I'm missing something regarding the standard way to instantiate them.
Say I define a class MyClass.
class MyClass:
def __init__(self):
pass
Then I know that I can map x to an instance of MyClass simply with
x = MyClass()
This works well and exactly as I expect.
However, it seems to me that when I use code from the standard library or from numpy or scipy, I don't create objects in the same way: as far as I can tell, I generally don't use the name of a class to instantiate it. From what I understand, this implies that I use neither class methods nor the default constructor of a class, but rather other functions defined outside the class.
For example, numpy's random module uses a class Generator to generate random numbers. However, numpy explicitly recommends not to use the class constructor to get a Generator instance, and to use instead the default_rng function from the random module. So if I want to generate random numbers, I use
rng = numpy.random.default_rng()
to create a Generator instance. This is done without explicitly using the name of the class.
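For comparison, the explicit construction that default_rng() stands in for would look roughly like this (PCG64 is numpy's default bit generator):

import numpy

rng_factory = numpy.random.default_rng()                      # no class name needed
rng_explicit = numpy.random.Generator(numpy.random.PCG64())   # spells out both classes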
It seems to me that most of the code that I use is written in the latter way. Why is that so? Is it somehow considered bad practice to call class constructors directly? Is it considered better practice to have separate functions in a module that create class instances? Is it only because some preprocessing must usually be done before creating an instance of a class? (I guess not, because in that case, why not do that in the initialization of the class?)
No, it is not bad practice to use the normal constructor, but sometimes it can be useful to have an alternative constructor.
Reasons for using a function as an alternative constructor to create an object:
(not a complete list and not in any order)
Decouple the creation of an object from its implementation.
Decoupling is often aimed for in OOP.
Hide complexity
The constructor could have many parameters, but often a default object is needed.
Easier to read/write and understand
numpy.random.default_rng() vs numpy.random.Generator(numpy.random.PCG64())
A factory that creates and returns a (possibly different) object, based on sometimes complex conditions.
e.g. Python's open() returns different objects for text files and for binary files (see the short example after this list).
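For instance, the same open() call returns different wrapper types depending on the mode:

f_text = open("notes.txt", "w")      # io.TextIOWrapper
f_binary = open("image.bin", "wb")   # io.BufferedWriter
print(type(f_text).__name__, type(f_binary).__name__)
f_text.close()
f_binary.close()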
Where to implement these?
In some other languages, these would be implemented as class methods of the class they instantiate, or even of a new class.
This could be done in Python too, but such helpers are often shorter and more convenient to use when implemented as functions at module level.
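As a toy illustration (all names here are made up), the two styles next to the plain constructor:

class Rocket:
    def __init__(self, stages, fuel_kg):
        self.stages = stages
        self.fuel_kg = fuel_kg

    @classmethod
    def small(cls):
        # alternative constructor implemented on the class itself
        return cls(stages=1, fuel_kg=500)

def default_rocket():
    # module-level factory: hides the parameter choices from the caller
    return Rocket(stages=2, fuel_kg=10_000)

r1 = Rocket(3, 50_000)   # ordinary constructor
r2 = Rocket.small()      # class-method alternative constructor
r3 = default_rocket()    # module-level factory, like numpy.random.default_rng()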
I think the np.array call to create an np.ndarray is probably one of the most common ways in which an object is created by calling another function. Here is an explanation of that:
What is the difference between ndarray and array in numpy?
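A quick way to see that the factory and the class are related but used differently:

import numpy as np

a = np.array([1, 2, 3])       # recommended factory function
b = np.ndarray(shape=(3,))    # low-level constructor; contents are uninitialized
print(type(a) is type(b))     # True: both are numpy.ndarray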
I cannot answer for all cases in which we use a function to "wrap" the construction of an object, but I have used such functions to simplify object creation in many situations, and it results in cleaner code. I can speak to those situations.
For example, the underlying class definition may expose a lot of parameters. It may not make sense to ask the user to provide values for all of them in, say, 99.9% of cases. These "spurious" parameters may be fixed, or may be inferred from other parameter values in most such situations (e.g., parameter b is 2x parameter a in most cases). Explicitly providing values for such parameters makes the code unwieldy in those 99.9% of cases, so a wrapper function is written to make it cleaner.
It is possible to use default parameters to deal with many such situations, but it may not make sense to push the inference of parameter values into the class's __init__ itself. For example, while something like b = 2 * a if b is None else b seems reasonable to put in __init__, where a and b are parameters, it may not be so simple in practice (e.g., b may have a complex relationship with a, c, d, f, etc., or it may be a class object itself), or there may be 1000 such parameter inferences to make. So it is logical to separate such "glue" code (which is a customization for ease of usage) into another function and keep the base code (which implements a specific functionality) clean and to the point.
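A small sketch of that pattern (all names here are hypothetical):

class Simulation:
    def __init__(self, a, b, grid, solver):
        self.a, self.b, self.grid, self.solver = a, b, grid, solver

def make_simulation(a, b=None, grid=None, solver="direct"):
    # glue code: infer the parameters that follow from a in the common case
    if b is None:
        b = 2 * a
    if grid is None:
        grid = [a * i for i in range(10)]
    return Simulation(a, b, grid, solver)

sim = make_simulation(a=3.0)  # the common case needs only one argument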
Do we want to write another class wrapper instead of a function wrapper? In this case, the new class wrapper would present a simplified interface. But writing a class wrapper in this situation is unnecessary, since a class implies many things while a function implies just procedural execution.
Note that this happens mostly in library-type code, which has the largest number of use cases and where you want to make usage as easy as possible for most people. Such issues do not exist for most "user" code, where we simply write classes for a specific application. So in practice, when we write applications, we should create classes directly using constructors when possible.
There is also the popular Factory design pattern that @ekhumoro referenced above, which is very similar to this. But based on the textbook definition, the Factory pattern seems to be restricted to super/sub classes (I could be wrong, and this might be useless semantics).
I'm using a software package for optimization called CVX. It has these "atoms" which take a CVX expression and construct a new CVX expression. One example is the trace atom to compute the trace of a matrix.
I thought I would need code like the following to create a CVX variable (an n-by-n matrix) and compute its trace:
import cvxpy

n = 5  # example dimension
X = cvxpy.Variable((n, n))
tr = cvxpy.atoms.affine.trace.trace(X)
This does work but what also works is simply
X = cvxpy.Variable((n,n))
tr = cvxpy.trace(X)
Why does the second option work? In general, when there is a class with nested methods, how am I able to call an inner method directly in Python?
I wouldn't generalize the behavior; almost certainly this is done by design. The library developers likely didn't want users to have to be so verbose, tracing down the module hierarchy, so they gave you this shortcut. It is also possible that the trace exposed directly on the cvxpy package is not literally the same object as the one in cvxpy.atoms.affine.trace, in which case their behavior could differ.
Be very careful because the side effects may not be the same.
To answer your question very directly I'd suggest that the second option works because somebody thought to make their API easier OR it just happens to work the way you're expecting.
To your second question: nested methods aren't really what is going on here. cvxpy is a package, atoms and affine are subpackages, trace is a module inside affine, and the trace you call is defined in that module; the dotted name just walks down that hierarchy.
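If you want to confirm that the short form is just the same object re-exported at the package level (rather than a second implementation), an identity check settles it; this assumes nothing beyond the two spellings shown in the question:

import cvxpy

print(cvxpy.trace is cvxpy.atoms.affine.trace.trace)
# True here would mean the package's __init__ simply re-exports the nested name,
# so both spellings call exactly the same code.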
I am trying to register a python function and its gradient as a tensorflow operation.
I found many useful examples e.g.:
Write Custom Python-Based Gradient Function for an Operation? (without C++ Implementation)
https://programtalk.com/python-examples/tensorflow.python.framework.function.Defun/
Nonetheless I would like to register attributes in the operation and use these attributes in the gradient definition by calling op.get_attr('attr_name').
Is this possible without going down to C implementation?
Could you give me an example?
Unfortunately, I don't believe it is possible to add attributes without using a C++ implementation of the operation. One feature that may help, though, is that you can define 'private' attributes by prepending an underscore to the name. I'm not sure this is well documented or what the long-term guarantees are, but you can try setting '_my_attr_name' and you should be able to retrieve it later with op.get_attr('_my_attr_name').
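A rough sketch of that trick, assuming TF 1.x graph mode and the private Operation._set_attr helper; the attribute name _scale, the py_func-based op, and the use of these internals are all assumptions for illustration, not documented API:

import numpy as np
import tensorflow as tf
from tensorflow.core.framework import attr_value_pb2

def forward(x):
    return np.square(x).astype(np.float32)

def backward(op, dy):
    scale = op.get_attr("_scale")  # read the 'private' attribute back
    return dy * scale

tf.RegisterGradient("MyPyFuncGrad")(backward)

g = tf.Graph()
with g.as_default():
    x = tf.constant([1.0, 2.0, 3.0])
    with g.gradient_override_map({"PyFunc": "MyPyFuncGrad"}):
        y = tf.py_func(forward, [x], tf.float32)
    # leading underscore: the attribute is not checked against the op definition
    y.op._set_attr("_scale", attr_value_pb2.AttrValue(f=2.0))
    dy_dx = tf.gradients(y, x)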
I want to run parameter studies in different Modelica building libraries (Buildings, IDEAS) with Python, for example to change the infiltration rate.
I tried: simulateModel and simulateExtendedModel(..."zone.n50", [value])
My questions: Why is it not possible to translate the model and then change the parameter? I get: Warning: Setting zone.n50 has no effect in model. After translation you can only set literal start-values and non-evaluated parameters.
It is also not possible to run simulateExtendedModel. When I go to the command line in Dymola and query zone.n50, I get the actual value (the one I defined in Python), but in the result file (and the plotted variable) it is always the standard n50 value. So my question: how can I change values before running (and translating?) the simulation?
The value for the parameter is also not visible in the variable browser.
Kind regards
It might be a structural parameter; these are evaluated during translation as well. It should work if you explicitly set annotation(Evaluate=false) for the parameter that you want to study.
Is it not visible in the variable browser or is it just greyed out and constant? If it is not visible at all you should check if it is protected.
Some parameters cannot be changed after compilation, even with Evaluate=False. This is the case for parameters that influence the structure of the model, for example parameters that influence a discretization scheme and therefore influence the number of equations.
Changing such parameters requires recompiling the model. You can still do this in a parametric study, though; I think you can use ModelicaRes to achieve this (http://kdavies4.github.io/ModelicaRes/modelicares.exps.html).
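For the non-structural case, a rough sketch of a sweep through the Dymola-Python interface; the model path and parameter name are placeholders, and the keyword arguments follow the simulateExtendedModel signature as I remember it, so check them against your Dymola version:

from dymola.dymola_interface import DymolaInterface

dymola = DymolaInterface()
try:
    for n50 in [0.5, 1.0, 2.0]:
        result = dymola.simulateExtendedModel(
            "MyLibrary.MyZoneModel",        # placeholder model path
            stopTime=86400,
            initialNames=["zone.n50"],
            initialValues=[n50],
            finalNames=["zone.n50"],        # read back to verify the value was applied
            resultFile="zone_n50_" + str(n50).replace(".", "_"),
        )
        print("n50 =", n50, "->", result)
finally:
    dymola.close()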
In Python, how can I calculate LCOM (lack of cohesion) for C++ files (or any other file types) using SciTools Understand API?
For an assignment, we're asked to calculate LCOM ourselves instead of using SciTools's Understand.
To calculate LCOM4, I need the following metrics:
number of functions/methods in a class (given by Understand as "CountDeclFunction")
number of method pairs in class with at least one instance variable that they commonly use or define in their body.
number of method pairs in class that have at least one instance method that they commonly call in their body.
Any suggestion is much appreciated.
From the metrics listed on https://scitools.com/support/metrics-reports/, I believe you have to develop your own metrics to complement what Understand provides.
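A rough sketch (not a drop-in solution) of gathering those three ingredients with the Understand Python API; the kind and reference strings below ("Member Function", "Member Object", "Use, Set", "Call") are assumptions you should verify against your own database with ent.kind() and ref.kind():

import itertools
import understand

db = understand.open("myproject.und")

for cls in db.ents("Class ~Unknown ~Unresolved"):
    methods = [r.ent() for r in cls.refs("Define, Declare", "Member Function", True)]

    # for each method, the instance variables it uses/sets and the sibling methods it calls
    uses = {m: {r.ent().longname() for r in m.refs("Use, Set", "Member Object", True)} for m in methods}
    calls = {m: {r.ent().longname() for r in m.refs("Call", "Member Function", True)} for m in methods}

    shared_var_pairs = 0
    shared_call_pairs = 0
    for m1, m2 in itertools.combinations(methods, 2):
        if uses[m1] & uses[m2]:
            shared_var_pairs += 1   # pair shares at least one instance variable
        if calls[m1] & calls[m2]:
            shared_call_pairs += 1  # pair shares at least one commonly called method

    print(cls.longname(), len(methods), shared_var_pairs, shared_call_pairs)

db.close()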