When to create a class file and when a function file in Python programming? - python

I am a beginner in Python and I get a bit confused while practicing. Please help me: how can I determine when I need to create a class file and when I should go for a function file?

Create a function. Functions do specific things, classes are specific things.
1. Classes often have methods, which are functions that are associated with a particular class, and do things associated with the thing that the class is - but if all you want is to do something, a function is all you need.
2. Essentially, a class is a way of grouping functions (as methods) and data (as properties) into a logical unit revolving around a certain kind of thing. If you don't need that grouping, there's no need to make a class.

I would briefly say that:
Classes are the smallest component in Object Oriented Programming, so use them whenever you want to benefit from OOP. I mean inheritance, encapsulation, polymorphism, abstraction...
Functions are helpful when you want to take repetitive code out of the main code and call the same block of code over and over with just changing the input.
You should omit the word "file" from your question. There is no class file or function file. If you have a file with some Python code inside, it is called a module. Classes and functions are defined inside modules.
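For illustration, here is a minimal sketch of the difference (the names are made up): a standalone function just does something, while a class groups data and the behaviour that belongs to it.
# A plain function: it just does something with its inputs.
def circle_area(radius):
    return 3.14159 * radius ** 2

# A class: it groups data (the radius) with the behaviour that belongs to it.
class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

print(circle_area(2.0))      # 12.56636
print(Circle(2.0).area())    # same result, via an object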

Related

Selecting executed method of class at runtime in python?

This question is very generic, but I don't think it is opinion based. It is about software design, and the example prototype is in Python:
I am writing a program whose goal is to simulate some behaviour (the details don't matter). The data the simulation works on is fixed, but I want to choose the simulated behaviour at startup; it can't be changed at runtime.
Example:
Simulation behaviour is defined like:
usedMethod = static
The program then looks something like this:
while True:
    result = static(obj)   # static is the method specified in the behaviour
    # do something with result
The question is: what is the best way to deal with such exchangeable, separately defined functions? Another run of the simulation could then look like this
while True:
    result = dynamic(obj)
if dynamic is specified as usedMethod. The first thing that came to my mind was an if-else block where I check which method is to be used and then execute it. This solution would not be very good, because every time I add new behaviour I have to change the if-else block, and the if-else block itself might also cost performance, which matters too - the simulations should be fast.
So a solution I could think of is using a function reference (the inputs and outputs of all usedMethods are well defined, so that should not be a problem). I would then initialize that reference once at startup, where the used method is defined.
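For plain functions, that idea would look roughly like this (the names and the lookup dict are just placeholders):
def static(obj):
    return 'static result for %r' % (obj,)    # placeholder behaviour

def dynamic(obj):
    return 'dynamic result for %r' % (obj,)   # placeholder behaviour

# chosen once at startup, e.g. from the behaviour configuration
used_methods = {'static': static, 'dynamic': dynamic}
used_method = used_methods['static']

result = used_method('some data')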
The problem I currently have is that the used method is not a function per se but a method of a class, and it depends heavily on the internal members of that class, so the code looks more like this:
balance = BalancerClass()
while True:
    result = balance.static(obj)
    ...
    balance.doSomething(input)
So my question is, what is a good solution to deal with this problem?
I thought about inheriting from the BalancerClass (which would then be an abstract class - I don't know if this concept exists in Python) and adding a derived class for every used method. Then I would create the derived object specified in the simulation behaviour at startup.
In my eyes, this is a good solution, because it separates the exchangeable methods from the base class itself. Every used method is managed by its own class, so it can add new internal behaviour if needed.
Furthermore, the doSomething method shouldn't change, so it is implemented in the base class, but it depends on the internal members that the derived class changes.
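Roughly, the design I have in mind looks like this (the class and method names are only placeholders):
from abc import ABC, abstractmethod

class BalancerBase(ABC):
    @abstractmethod
    def compute(self, obj):
        """The exchangeable simulation behaviour."""

    def doSomething(self, value):
        # shared behaviour, implemented once in the base class
        self.last_input = value

class StaticBalancer(BalancerBase):
    def compute(self, obj):
        return 'static result'

class DynamicBalancer(BalancerBase):
    def compute(self, obj):
        return 'dynamic result'

# chosen once at startup from the behaviour configuration
balancers = {'static': StaticBalancer, 'dynamic': DynamicBalancer}
balance = balancers['static']()
result = balance.compute('some data')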
I don't know in general if this software design is good to solve my problem or if I am missing a very basic and easy concept.
If you have another or better solution, please tell me, ideally with its advantages and disadvantages. Could you also point out advantages and disadvantages of my solution that I didn't think of?
I may be wrong, but what you are looking for boils down to either dependency injection or the strategy design pattern, both of which solve the problem of executing dynamic code at runtime via a common interface without worrying about the actual implementations. There are also simpler ways, just as you described: creating an abstract class (interface) and having all the classes implement it.
I am giving brief examples of each here for your reference.
Dependency Injection (from Wikipedia):
In software engineering, dependency injection is a technique whereby one object supplies the dependencies of another object. A "dependency" is an object that can be used, for example as a service. Instead of a client specifying which service it will use, something tells the client what service to use. The "injection" refers to the passing of a dependency (a service) into the object (a client) that would use it. The service is made part of the client's state.
Passing the service to the client, rather than allowing a client to build or find the service, is the fundamental requirement of the pattern.
Python does not have such a concept built into the language itself, but there are packages out there that implement this pattern.
Here is a nice article about this in Python (all credit to the original author):
Dependency Injection in Python
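A minimal hand-rolled sketch of the idea, without any third-party package (the class names are made up):
class FileStorage:
    def save(self, data):
        print('saving %r to a file' % (data,))

class Client:
    def __init__(self, storage):
        # the dependency is injected: Client neither builds nor looks it up
        self.storage = storage

    def run(self, data):
        self.storage.save(data)

client = Client(FileStorage())   # any object with a save() method would do
client.run({'x': 1})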
Strategy Pattern: This is an alternative to inheritance and an example of composition, which basically means that instead of inheriting from a base class we pass the required object to the constructor of the class we want the functionality in. For example:
Suppose you want a common add() operation, but it can be implemented in different ways (add two numbers or add two strings):
class XYZ:
    def __init__(self, adder):
        self.adder = adder
The only condition is that all adders passed to the XYZ class should share a common interface.
Here is a more detailed example:
Strategy Pattern in Python
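Putting it together with the XYZ class from above, usage might look like this (the adder classes are made up for illustration):
class NumberAdder:
    def add(self, a, b):
        return a + b

class StringAdder:
    def add(self, a, b):
        return str(a) + str(b)

numbers = XYZ(NumberAdder())
print(numbers.adder.add(2, 3))    # 5

strings = XYZ(StringAdder())
print(strings.adder.add(2, 3))    # '23'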
Interfaces:
Interfaces are the simplest: they define a set of common attributes and methods (with or without a default implementation). Any class can then implement an interface with its own functionality or some shared common functionality. In Python, interfaces are implemented via the abc module.
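A small sketch using abc (the shapes are just an illustration):
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self):
        ...

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side * self.side

print(Square(3).area())   # 9
# Shape() itself cannot be instantiated because area() is abstract.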

Can global variables for a package in Python be considered evil?

To start, sorry for my bad English.
Recently I read a book called "Elegant Objects" by Yegor Bugayenko. One of the topics in this book is dedicated to using objects instead of public constants. The author writes that public constants are pure evil, and there are a lot of reasons for this. For example, using public constants breaks encapsulation: if we want to change a constant, we need to know how the classes of our package use it. If two classes in a project use this constant in their own way, we need to change the code of both classes when we change the constant.
The author suggests that instead of using global constants we should create objects, which allows us to change the code in only one place. I will give an example below.
I have already read similar topics on Stack Overflow, but did not find an answer about best practices or use cases. Is using objects better than creating a global variable in some file, for example settings.py?
Is this
class WWStartDate:
    date_as_string = '01.09.1939'

    def as_timestamp(self):
        ...  # returns date as timestamp

    def as_date_time(self):
        ...  # returns date as datetime object
better than this, stored in some file inside the package, for example conf.py:
DATE_STRING = '01.09.1939'
if we are talking about using this date in several classes of our package?
After reading this book I decided that objects are much better, but I see a lot of cases where framework or library developers force us to use global variables, so it is not as simple as it seems. Why, for example, does Django use this approach? I am talking about the settings.py file.
I think that by creating a class for a constant to offer itself in different forms to different (other) classes you are overengineering your constants. Why does the constant have to know how it's being used? Does math.pi know how it is being used? It actually has some methods because it's a float object. Nothing specific to the pi constant. Besides, when you do it, you couple the constant to other classes, and when they change their uses you may have to update the constant class. Now that's a bad idea. You want them decoupled. Each class (and only the class) should know what it is doing with the constant.
This is not to say that a wrapping class can't be useful sometimes, especially if you group related constants. But then you can use Enums too.
And globals in a module are only globals in that module. When you import with "from mod import X" you bind a new local name to the same object; when you "import mod" you access it as a qualified name, mod.X. But even if you rebind mod.X via qualification (which you shouldn't, because you say it is constant), a module that did "from mod import X" does not see the change, because its name still points at the old object. They are using different namespaces. This is not the usual "global variable" concept we mean when we say globals are evil.
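If the related constants do need grouping, an Enum keeps them together without tying them to their users; a tiny sketch (the values are only illustrative):
from enum import Enum

class HistoricDate(Enum):
    WW2_START = '01.09.1939'
    WW2_END = '02.09.1945'

print(HistoricDate.WW2_START.value)   # '01.09.1939'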

Why use Python classes over modules with functions?

I'm teaching myself Python (3.x) and I'm trying to understand the use case for classes. I'm starting to understand what they actually do, but I'm struggling to understand why you would use a class as opposed to creating a module with functions.
For example, how does:
class cls1:
    def func1(self, arguments...):
        # do some stuff

obj1 = cls1()
obj2 = cls1()
obj1.func1(arg1, arg2...)
obj2.func1(arg1, arg2...)
differ from:
# module.py contents
def func1(arguments...):
    # do some stuff

# elsewhere:
import module
x = module.func1(arg1, arg2...)
y = module.func1(arg1, arg2...)
This is probably very simple but I just can't get my head around it.
So far, I've had quite a bit of success writing python programs, but they have all been pretty procedural, and only importing basic module functions. Classes are my next biggest hurdle.
You use a class if you need multiple instances of it and you want those instances not to interfere with each other.
A module behaves like a singleton class, so you can have only one instance of it.
EDIT: for example if you have a module called example.py:
x = 0

def incr():
    global x
    x = x + 1

def getX():
    return x
if you try to import this module twice:
>>> import example as ex1
>>> import example as ex2
>>> ex1.incr()
>>> ex1.getX()
1
>>> ex2.getX()
1
This is because the module is only imported once, so ex1 and ex2 point to the same module object.
As long as you're only using pure functions (functions that work only on their arguments, always return the same result for the same argument set, don't depend on any global/shared state and don't change anything - neither their arguments nor any global/shared state - in other words, functions that don't have any side effects), classes are indeed of rather limited use. But that's functional programming, and while Python can technically be used in a functional style, it's possibly not the best choice here.
As soon as you have to share state between functions, and especially if some of these functions are supposed to change this shared state, you do have a use for OO concepts. There are mainly two ways to share state between functions: passing the state from function to function, or using globals.
The second solution - global state - is known to be troublesome, first because it makes understanding of the program flow (hence debugging) harder, but also because it prevents your code from being reentrant, which is a definitive no-no for quite a lot of now common use cases (multithreaded execution, most server-side web application code etc). Actually it makes your code practically unusable or near-unusable for anything except short simple one-shot scripts...
The first solution - passing the state around - most often implies using half-informal complex data structures (dicts with a given set of keys, often holding other dicts, lists, lists of dicts, sets, etc.), correctly initialising them and passing them from function to function - and of course having a set of functions that work on a given data structure. In other words, you are actually defining new complex data types (a data structure and a set of operations on that data structure), only using the lowest-level tools the language provides.
Classes are actually a way to define such a data type at a higher level, grouping together the data and the operations. They also offer a lot more, especially polymorphism, which makes for more generic, extensible code, and also easier unit testing.
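A tiny illustration of the difference (the account example is made up):
# shared state passed around as a plain dict...
def deposit(account, amount):
    account['balance'] += amount

account = {'balance': 0}
deposit(account, 10)

# ...versus the same idea as a class: data and operations grouped together
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

acct = Account()
acct.deposit(10)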
Consider that you have a file or a database with products, and each product has a product id, price, availability, discount, published-on-web status, and more values. And you have a second file with thousands of products that contains new prices, availability and discounts. You want to update the values and keep track of how many products will change, among other stats. You can do it with procedural or functional programming, but you will find yourself trying to discover tricks to make it work, and most likely you will get lost in many different lists and sets.
On the other hand, with object-oriented programming you can create a class Product with instance variables for the product id, the old price, the old availability, the old discount and the old published status, and some instance variables for the new values (new price, new availability, new discount, new published status). Then all you have to do is read the first file/database and create a new instance of the class Product for every product. Then you can read the second file and find the new values for your product objects. In the end, every product of the first file/database will be an object, labelled and carrying both the old values and the new values. This way it is easier to track the changes, make statistics and update your database.
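A rough sketch of such a Product class (the field names are assumptions based on the description above):
class Product:
    def __init__(self, product_id, price, availability, discount, published):
        self.product_id = product_id
        # old values, read from the first file/database
        self.price = price
        self.availability = availability
        self.discount = discount
        self.published = published
        # new values, filled in later from the second file
        self.new_price = None
        self.new_availability = None
        self.new_discount = None
        self.new_published = None

    def price_changed(self):
        return self.new_price is not None and self.new_price != self.price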
One more example: if you use tkinter, you can create a class for a top-level window, and every time you want to show an information window or an about window (with a custom background colour and dimensions) you simply create a new instance of this class.
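For instance, a minimal sketch (colours and sizes are arbitrary):
import tkinter as tk

class InfoWindow(tk.Toplevel):
    def __init__(self, master, text, bg='lightyellow', size='300x100'):
        super().__init__(master, bg=bg)
        self.geometry(size)
        tk.Label(self, text=text, bg=bg).pack(expand=True)

root = tk.Tk()
InfoWindow(root, 'About this program')   # one instance per window needed
root.mainloop()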
For simple things, classes are not needed. But for more complex things, classes can sometimes make the solution easier.
I think the best answer is that it depends on what your intended object is supposed to be/do. In general, though, there are some differences between a class and an imported module which give each of them different features in the current module. The most important thing is that a class defines objects: instances have a lot of options to act like objects that modules don't have, for example special attributes like __getattr__, __setattr__, __iter__, etc., and the ability to create many instances and even control the way they are created. As for modules, the documentation describes their use case perfectly:
If you quit from the Python interpreter and enter it again, the
definitions you have made (functions and variables) are lost.
Therefore, if you want to write a somewhat longer program, you are
better off using a text editor to prepare the input for the
interpreter and running it with that file as input instead. This is
known as creating a script. As your program gets longer, you may want
to split it into several files for easier maintenance. You may also
want to use a handy function that you’ve written in several programs
without copying its definition into each program.
To support this, Python has a way to put definitions in a file and use
them in a script or in an interactive instance of the interpreter.
Such a file is called a module; definitions from a module can be
imported into other modules or into the main module (the collection of
variables that you have access to in a script executed at the top
level and in calculator mode).

Watching which function is called in Python

What is the easiest way to record function calls for debugging in Python? I'm usually interested in particular functions or all functions from a given class. Or sometimes even all functions called on a particular object attribute. Seeing the call arguments would be useful, too.
I can imagine writing decorators for all that, but then I'd still have to modify the source code in different places. And writing a class decorator which modifies all methods isn't that straightforward.
Is there a solution where I don't have to modify my source code? Ideally something which doesn't slow down Python too much.
You ought to be able to implement something that does what you want using either sys.setprofile() or perhaps sys.settrace(). They both let you define a function to be called when specific "events" occur, like function calls, and they pass additional information which can be used to determine the function/method being called and to examine its arguments.
If you look around, there's probably sample usage code to use as a good starting point.
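A minimal sketch with sys.setprofile(), assuming you just want each call printed (the report_calls name is made up):
import sys

def report_calls(frame, event, arg):
    if event == 'call':
        code = frame.f_code
        # at a 'call' event, frame.f_locals holds the call arguments
        print('called %s in %s, args: %r'
              % (code.co_name, code.co_filename, frame.f_locals))

sys.setprofile(report_calls)

def add(a, b):
    return a + b

add(1, 2)             # this call is reported by report_calls()
sys.setprofile(None)  # stop profiling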
Besides decorators, for Python >= 3.0 you could use the __getattribute__ method of a class, which is called every time you access any attribute (including methods) of an object.
You can look at chapters 31 and 37 of Lutz's "Learning Python" for more about it.
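A small sketch of that approach (the class is made up; note that it reports attribute access, which includes method lookups):
class Traced:
    def __getattribute__(self, name):
        value = object.__getattribute__(self, name)   # avoid infinite recursion
        if callable(value):
            print('about to call %s()' % name)
        return value

    def greet(self):
        return 'hello'

t = Traced()
t.greet()   # prints "about to call greet()" before running the method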

Adding data members to Python classes from outside the function definition

I am quite new to Python and I can't seem to understand this. Consider this simple Python code:
class point:
    i = 34

test = point()
test.y = 45
print(test.y)
As you can see, I created an instance of point called test but then did test.y = 45, even though y is not a data member of the point class. No error was thrown, and the y attribute seems to have been added automatically.
Why did this happen? Isn't this a misfeature? Or am I missing something very basic?
The same thing cannot be done with C++ and it would throw a compiler error. Any reason for this strange feature?
Because this is Python, not C++ (or Java). Calling it a misfeature is a fundamental misunderstanding of how Python works.
In Python you don't declare variables, or attributes. There's no such thing as "a data member of the point class". Your i is just a class-level variable, but it would be the same wherever you associate that attribute with the class. You can dynamically add attributes to classes, instances, modules, whatever you like. That's what it is to be a dynamically typed language.
In fact, doing it like this is the only way to define instance variables. As I said, your i above is a class attribute, shared by all members of the class. The only way to get an instance-level variable is to "dynamically" add it, usually in the __init__ method, but you can do it wherever you like.
It is just a common thing in scripting languages. Python lets you do it, Ruby as well, and indeed you never have to pre-declare your local variables either - so why not in your classes? Not only that, you can choose whether the newly inserted variables/functions affect only one object or all instances of the class.
People who do a lot of TDD and unit testing love that "misfeature". Actually, I would even go as far as to say that C++'s static typing does not reduce my programs' bugs nearly as much as a language that eases my unit tests by giving me exactly that.
However, if you are concerned, you can always use __slots__:
https://stackoverflow.com/a/3603624/253098
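A minimal sketch of what that looks like:
class Point:
    __slots__ = ('i',)   # only 'i' may be set on instances

    def __init__(self):
        self.i = 34

p = Point()
p.i = 35     # fine
p.y = 45     # AttributeError: 'Point' object has no attribute 'y'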
This is just how Python works. You can create new attributes on classes or instances at will. It has the disadvantage that many errors can't be caught at compile time, but it has the advantage of allowing more flexible dynamic programming.
If you are surprised by this behavior, you should read the Python tutorial to familiarize yourself with Python basics.
