I have a number of free functions in a couple of Python modules and I need to create a UML Class Diagram to represent my entire program.
Can I represent free functions in a Class Diagram somehow, or do I need to create a Utility Class so I can represent them?
You will need some class in order to represent a "free function", but you are quite free in how to do that. What I usually do is create a stereotyped class, and it would be fine to use «utility» for that. Anything else would work too, but of course you need to document it in your domain.
Usually a stereotype is bound to a profile, but most tools allow you to use freely defined stereotypes. Though that is not 100% UML compliant, it is quite a common practice.
Even though UML was conceived at a time when object orientation was hyped, that doesn't mean it cannot be used for functions. What many don't realize is that a Behavior in UML is a Class. Therefore, any Behavior can be shown in a class diagram; just put the metaclass in guillemets above the name, e.g. «activity». If you plan to describe the function with an activity diagram, that makes perfect sense. However, if you plan to describe it in (pseudo) code or in natural language, you can use «function behavior», which is defined as a behavior without side effects. Or, if it can have side effects, just use «opaque behavior».
I have a Python file, rules.py, that contains a lot of functional utilities used throughout the application. The functions aren't contained within classes, and they are called as needed from external files, classes and functions.
I started creating a UML class diagram (in Lucidchart) with an empty attributes block, filling in the methods block (lower half) with the functions.
Is there a standardized, non-class UML diagram that I should or could use for documenting long files full of functions? What is the standard I should use to document files full of functions?
UML is a collection of many different diagram types, and many of them can be applied to less 'object-oriented' code. See the second answer here: https://softwareengineering.stackexchange.com/questions/302811/what-are-functional-programmers-using-in-place-of-uml
However, in truth, UML is mainly focused on object-oriented software. If you want to use it in this sort of situation you can, but I doubt there's any sort of 'standard' for it. Just use the parts of UML that make sense for what you're modelling.
I want to know how abstract data types work in Python, because my teacher gave us a project and said that we should use them. We have to write 3 minor functions that we will use in the other 4 main functions (the most important ones).
What I want to understand is this:
The teacher said that if we used, for example, lists in our minor functions, the code should still run correctly if he changes the internals of the minor functions to tuples or dictionaries (for example). I don't understand how that is supposed to work. Can you explain it to me and give a simple example?
In object-oriented programming, an abstract class is like a normal class that cannot be instantiated.
It's a way for the class designer to provide a blueprint of a class, so that its methods have to be implemented by the developer writing a class that inherits from it.
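For example, here is a minimal sketch using Python's abc module (the class and method names are just illustrative):

from abc import ABC, abstractmethod

class Shape(ABC):                 # abstract class: cannot be instantiated
    @abstractmethod
    def area(self):
        """Subclasses must implement this."""

class Circle(Shape):              # concrete subclass providing the implementation
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

# Shape()          -> TypeError: can't instantiate the abstract class
# Circle(2).area() -> works, because area() is implemented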
Now, for abstract data types, according to Wikipedia:
An abstract data type is defined as a mathematical model of the data objects that make up a data type as well as the functions that operate on these objects. There are no standard conventions for defining them. A broad division may be drawn between "imperative" and "functional" definition styles.
As you can see, abstract pretty much means a blueprint, not an actual implementation, although in Java, for example, an abstract class can have method bodies, i.e. implementations of its methods; it just cannot be instantiated.
Furthermore, in Python an abstract data type is usually something you define yourself.
Take, for example, the dictionary: as an abstract data type it is just a mapping from keys to values, and it could be implemented with a list of key–value pairs or with a hash table, even though in Python it happens to appear as a built-in type.
Abstraction is the technique by which you create abstract data types; it is best viewed as a concept rather than as a particular data type.
More useful information is available on GeeksforGeeks.
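As a rough sketch of what the teacher probably means (all names here are made up): the minor function exposes a small interface, and the main function relies only on that interface, so the internal representation can change without breaking the caller:

# Version 1: the minor function builds its result as a list
def make_points():
    return [(0, 0), (1, 2), (3, 4)]

# Version 2: the same minor function rewritten to use a tuple instead
# def make_points():
#     return ((0, 0), (1, 2), (3, 4))

# A "main" function that only iterates over whatever make_points() returns.
# It keeps working whether the internals use a list, a tuple or dict.values().
def total_x(points):
    return sum(x for x, _ in points)

print(total_x(make_points()))  # 4 with either version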
I'm writing an interface to matplotlib, which requires that lists of floats be treated as corresponding to a colour map, while other types of input are treated as specifying a particular colour.
To do this, I planned to use matplotlib.colors.colorConverter, which is an instance of a class that converts the other types of input to matplotlib RGBA colour tuples. However, it will also convert floats to a grayscale colour map. This conflicts with the existing functionality of the package I'm working on and I think that would be undesirable.
My question is: is it appropriate/Pythonic to use an isinstance() check prior to using colorConverter to make sure that I don't incorrectly handle lists of floats? Is there a better way that I haven't thought of?
I've read that I should generally code to an interface, but in this case, the interface has functionality that differs from what is required.
It's a little subjective, but I'd say: in general it's not a good idea, but here, where you're distinguishing between a container and an instance of a class, it is appropriate (especially when, say, those classes may themselves be iterable, like tuples or strings, and doing it the duck-typing way would get quite tricky).
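For example, a check along these lines seems reasonable here (the function name is just a placeholder):

from matplotlib.colors import colorConverter

def interpret_colour_arg(value):
    """Treat a list of floats as colour-map data; anything else as one colour."""
    if isinstance(value, list) and all(isinstance(v, float) for v in value):
        return ("colormap", value)                    # leave the floats alone
    return ("colour", colorConverter.to_rgba(value))  # delegate everything else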
Aside: coding to an interface is generally recommended, but it's far more applicable to the Java-style static languages than Python, where interfaces don't really exist, unless you count abstract base classes and the abc module etc. (much deeper discussion in What's the Python version for “Code against an interface, not an object”?)
Hard to say without more details, but it sounds like you're closer to building a facade here than anything, and as such you should be free to use your own (neater / tighter / different) API, insulating users from your underlying implementation (Matplotlib).
Why not write two separate functions, one that treats its input as a color map, and another that treats its input as a color? This would be the simplest way to deal with the problem; it would both avoid surprises and leave you room to expand functionality in the future.
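Something along these lines, with hypothetical names, keeps the two meanings apart at the call site:

from matplotlib.colors import colorConverter

def as_colormap_values(floats):
    """Callers use this when they mean color-map data."""
    return list(floats)

def as_single_color(spec):
    """Callers use this when they mean one particular color."""
    return colorConverter.to_rgba(spec)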
This is a somewhat basic question about the correct order of class inheritance.
Basically, I'm trying to write a numerical simulation to solve a physical model; the details are not important (I happen to be writing this in Python). It is a well-known algorithm solved by iterating over a volume of space.
The classes that I think I need are:
Setup: A class that defines all of the simulation parameters, like volume size, and has methods for checking for correct parameter type, calculating derived parameters etc.
Solver: Contains the actual algorithm for solving
Output: Contains handles for all the plot output and has access to save file etc.
I also need a run method which can run the solver and periodically (with periods defined in Setup) run some of the output functions.
In a high-quality program, which class would inherit from which? (My guess: Output inherits from Solver, which inherits from Setup.)
Where does the run method belong? Maybe there should be some extra base class like Interface that the user interacts with and includes the run method?
There is a concept that encourages the use of composition over inheritance (http://en.wikipedia.org/wiki/Composition_over_inheritance), so I would say that if you really don't need inheritance, don't use it (your classes can be independent objects, or even plain functions, which in Python are objects too).
If you model this with objects, run() should be in Solver. Recall that the concept of an interface is not needed in Python the way it is in other languages, so you can either use objects, or functions containing the algorithms you need.
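As a rough sketch of the composition approach (the responsibilities follow your description, but the names and signatures are just placeholders):

class Setup:
    def __init__(self, volume_size, output_period):
        self.volume_size = volume_size
        self.output_period = output_period

class Output:
    def save(self, state):
        print("saving", state)        # stand-in for plotting / writing files

class Solver:
    def __init__(self, setup):
        self.setup = setup            # composed, not inherited

    def step(self, state):
        return state + 1              # stand-in for the real iteration

    def run(self, steps, output):
        state = 0
        for i in range(steps):
            state = self.step(state)
            if i % self.setup.output_period == 0:
                output.save(state)
        return state

setup = Setup(volume_size=100, output_period=10)
Solver(setup).run(steps=50, output=Output())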
Are you coming from a Java background by any chance?
First off, you've given no indication that any of your classes should inherit from another. For that matter, you probably don't need as many classes as you think you do.
Solver #Contains the actual algorithm for solving
If it's only one function you might as well just leave it as a free function.
Output #Contains handles for all the plot output and has access to save file etc.
If the functions don't have shared state, it could just as easily be a module.
As for the run method, just stick it wherever it is most convenient. The nice thing about Python is that you can start prototyping without any classes, and just refactor into a class whenever you find yourself passing the same set of data around a lot.
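For instance, a first prototype might be nothing more than free functions in a module (all names invented); the moment you notice the same params dict threading through every call is the moment to consider wrapping it in a class:

def save_plot(state, outfile):
    print("would save", state, "to", outfile)   # stand-in for real output

def solve(params):
    state = 0
    for i in range(params["steps"]):
        state += 1                              # stand-in for the real update
        if i % params["output_period"] == 0:
            save_plot(state, params["outfile"])
    return state

solve({"steps": 50, "output_period": 10, "outfile": "run.png"})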
Summary
What are the pros and cons of splitting pure functions into passive objects that describe the algorithms and active objects that can execute those algorithms? Note that the situation is greatly simplified by the fact that the functions have no side effects.
Detail
The portion of the code I'm writing (in Python 3) will largely adhere to functional programming.
There is some (immutable) data. There are some algorithms. And I need to apply those algorithms to the data, and get the result.
The algorithms could be represented as regular functions, which would be transformed using standard operations (e.g., I may compose two functions, then freeze some parameters using functools.partial, then pass the resulting function to another function as an argument). Many of the lower-level functions would be memoized for performance reasons.
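For example, the kind of plumbing I have in mind looks roughly like this (the function names are invented purely to illustrate):

from functools import lru_cache, partial

@lru_cache(maxsize=None)            # memoize a lower-level pure function
def scale(factor, x):
    return factor * x

def compose(f, g):
    return lambda x: f(g(x))

double = partial(scale, 2)                        # freeze one parameter
shift_then_double = compose(double, lambda x: x + 1)

print(shift_then_double(3))                       # prints 8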
But an idea occurred to me that perhaps I should instead represent algorithms as passive objects. Such objects wouldn't be able to execute anything themselves. When I'm ready to execute, I'll feed the algorithm object and all the inputs it expects into a special "computation" object. This would match my mental model of an algorithm far better, but I'm concerned that I might be missing some problems with this approach.
Algorithm objects could be implemented in a variety of ways; perhaps even multiple implementations could be allowed. Let's say my algorithms are instances of an abstract class Algorithm; then its subclasses could represent:
strings of text in a domain-specific language that I'll create
some kind of execution trees that I'll construct
even regular Python functions
I have never done this before, so I wanted to get some feedback on this idea. Does it offer any real design advantages, apart from my subjective feeling that it's more "natural"? Does it lead to any problems?
I don't think the design offers any major advantage or disadvantage.
Assuming that any computation object can run any Algorithm, your class Algorithm is presumably going to have a function called something like execute that knows how to run the algorithm. Name that function __call__, and now your Algorithm class is exactly like a Python callable object (including functions).
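For instance, a minimal sketch (the class names are only placeholders):

class Algorithm:
    def __call__(self, *args, **kwargs):
        raise NotImplementedError

class Doubler(Algorithm):
    def __call__(self, x):
        return 2 * x

# An Algorithm instance is now used exactly like a function:
compute = Doubler()
print(compute(21))   # prints 42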
For your strings of DSL code: under your design you'd represent them as a subclass of Algorithm that overrides execute to run an interpreter. Under the other design you'd just do something like:
def createDSLAlgorithm(code):
    # Capture the DSL source in a closure; calling the result runs the interpreter.
    def coderunner(*args, **kwargs):
        return DSLInterpreter().interpret(code, *args, **kwargs)
    return coderunner
And you can do something similar to create a function that, when called, will execute a specified expression tree.
Of course I might be missing something that you're planning to put into your Algorithm design that's not possible for functions. Not all Python functions have mutable attributes, for example. But since user-defined functions can be closures, can have attributes, and any object can "behave like a function" just by implementing __call__, I suspect it's different names for the same thing.
Choosing your own names, of course, is a small advantage if it aids code readability. And it might feel a bit more natural to attach attributes to "objects" than it does to attach them to "functions", if your computation objects are going to interrogate certain known attributes of Algorithms in order to help decide what to do when computing them (for example whether or not to memoize).