BaseClass inheritance from a separate file - python

I am automating an environment that consists of multiple independent applications. At some point I have decided that it will make the most sense to define each application as a class and save it as a separate file.
What I have at the moment:
A directory with *.py files where each file defines a single class for a single application. So, for example, I have application App1. It is saved in the file App1.py and looks something like this:
import config  # holds the per-application settings (links, retry counts, ...)

class App1:
    def __init__(self):
        self.link = config.App1_link
        self.retry = config.App1_retry

    def access(self):
        pass

    def logOut(self):
        pass
So each class defines all the operations App1 can perform, like starting/logging in to the application, performing an operation, logging out from the application, etc.
Then, in a separate file, I create application objects and call the objects' methods one by one to create a multi-step scenario.
What I want to achieve
It all works fine, but there is room for improvement. Many of the class methods are more or less similar (for example, methods like login/logout to/from the application) and clearly could be defined in some kind of base class I can then inherit from. However, since I have all the applications defined in separate files (which seems the logical solution at the moment), I don't know where to define such a base class and how to inherit from it. I had two options in mind, but I'm not sure which one (if any) is the proper way to do it.
Option 1: Define a base class in a separate file and then import it into each of the application files. I'm not sure if that even makes sense.
Option 2: Create one very long file with all the application classes and include the base class in the same file. Note: such a file would be really huge and hard to maintain, since each application has its own methods and some of them are fairly big by themselves.
So are these the only options I have, and if they are, which one of them makes more sense?

This is exactly what importing is for. Define your base class in a separate module, and import it wherever you need it. Option 1 makes excellent sense.
Note that Python's module naming convention is to use lowercase names; that way you can distinguish between the class and the module much more easily. I'd rename App1.py to app1.py.
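As a rough sketch of what option 1 can look like (the file name base_app.py, the class name BaseApp, and the logIn method are made-up names for illustration), the shared behaviour lives in its own module and every application module imports it:

# base_app.py -- hypothetical module holding the behaviour shared by all applications
class BaseApp:
    def __init__(self, link, retry):
        self.link = link
        self.retry = retry

    def logIn(self):
        # generic login logic used by every application
        pass

    def logOut(self):
        # generic logout logic used by every application
        pass


# app1.py -- each application module imports the base class and adds its own methods
import config  # the settings module already used in the question

from base_app import BaseApp


class App1(BaseApp):
    def __init__(self):
        super().__init__(config.App1_link, config.App1_retry)

    def access(self):
        # App1-specific behaviour
        pass

The scenario files keep creating App1() exactly as before; only the shared methods move into base_app.py.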

Related

What is the right way to use service methods in Django?

As an example, let's say I am building a REST API using Django Rest Framework. Now, as part of the application, a few methods are common across all views. My approach is that in the root directory I have created a services.py file. Inside that module is a class (CommonUtils) containing all the common utility methods. In that same services.py module I have instantiated an object of CommonUtils.
Now across the application, in the different views.py files I am importing the object from the module and calling the methods on that object. So, essentially I am using a singleton object for the common utility methods.
I feel like this is not a good design approach. So, I want to get an explanation of why this approach is not a good idea, and what would be the best practice or best approach to achieve the same thing, i.e. use a set of common utility methods across all views.py files.
Thanks in advance.
Is this the right design? Why? How to do better?
I feel like this is not a good design approach. So, I want to get an explanation of why this approach is not a good idea, and what would be the best practice or best approach to achieve the same thing, i.e. use a set of common utility methods across all views.py files.
Like @Dmitry Belaventsev wrote above, there is no general rule to solve this problem. This is a typical case of cross-cutting concerns.
Now across the application, in the different views.py files I am importing the object from the module and calling the methods on that object. So, essentially I am using a singleton object for the common utility methods.
Yes, your implementation is actually a singleton and there is nothing wrong with it. You should ask yourself what you want to achieve or what you really need. There are a lot of solutions, and you can start with the most basic one:
A simple function in a Python module:
# file is named utils.py and lives in the root directory

def helper_function_one(param):
    return transcendent_all_evil_of(param)


def helper_function_two(prename, lastname):
    return 'Hello {} {}'.format(prename, lastname)
In Python it is not uncommon to use just plain functions in a module. You can upgrade them to methods (and a class) later if this is really necessary and you need the advantages of classes and objects.
You can also use a class with static methods:
# utils.py
class Utils:
    @staticmethod
    def helper_one():
        print('do something')
But as you can see, this is no different from the solution with plain functions apart from the extra layer of the class, which adds no further value.
You could also write a singleton class, but in my opinion this is not very pythonic, because you get the same result with a simple object instance in a module.
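A minimal sketch of that last point, reusing the services.py / CommonUtils names from the question (the greet method is just an example): because a module is only executed once per process, an instance created at module level already behaves like a singleton.

# services.py
class CommonUtils:
    def greet(self, prename, lastname):
        return 'Hello {} {}'.format(prename, lastname)


# created once, at import time; every importer gets this same object
common_utils = CommonUtils()

Any views.py can then do from services import common_utils and call methods on the shared instance, with no extra singleton machinery.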

How to structure methods of classes that inherit from one BaseClass?

I have a lot of different child classes that inherit from one base class. However, all the different child classes implement very similar methods. So if I want to change code in the child classes, I have to change it multiple times.
To me this sounds like bad practice and I would like to implement it correctly. But after a lot of googling I still haven't found a coherent explanation of how this should be done.
Here is an example of what I mean:
from abc import ABC, abstractmethod
import logging.config
import os


class BaseModel(ABC):
    def __init__(self):
        # initialize logging
        logging.config.fileConfig(os.path.join(os.getcwd(),
                                               '../myconfig.ini'))
        self.logger = logging.getLogger(__name__)

    @abstractmethod
    def prepare_data(self):
        """
        Prepares the needed data.
        """
        self.logger.info('Data preparation started.\n')
So this is my base class. Multiple other classes now inherit the __init__ and prepare_data methods from it. The prepare_data method is very similar for every class.
class Class_One(BaseModel):
    def __init__(self):
        super().__init__()

    def prepare_data(self):
        super().prepare_data()
        # Some code that this method does


class Class_Two(BaseModel):
    def __init__(self):
        super().__init__()

    def prepare_data(self):
        super().prepare_data()
        # Some code that this method does
        # Code is almost the same as for Class_One


class Class_Three(BaseModel):
    def __init__(self):
        super().__init__()

    def prepare_data(self):
        super().prepare_data()
        # Some code that this method does
        # Code is almost the same as for Class_One and Class_Two

# etc.
I suppose you could refactor the methods into another file and then call them in each class. I would love to know how to do this correctly. Thanks a lot in advance!
I'm afraid there's no generic one-size-fits-all magic answer - it all really depends on the "almost" part AND on the forces that will drive change in those parts of the code. IOW, one can only really answer for a concrete example...
This being said, there are a couple of lessons learned from experience, which are mostly summarized in the famous (but unfortunately often misunderstood) GoF "Design Patterns" book. If you take the time to read the first part of the book, you understand that most (if not all) of the patterns in the catalog are based on the same principle: separate the variant from the invariant. Once you can tell one from the other in your code (warning: there's a trap here and beginners almost always fall into it), which pattern to apply is usually obvious (sometimes to the point that you only realize you used this or that pattern after you refactored your code).
Now as I said, there IS a trap: accidental duplication. Just because two pieces of code look similar doesn't mean they are duplicates - quite often, they are only "accidentally" similar now, but the forces that will make one or the other change are mostly unrelated. If you try to immediately refactor this code, you'll soon find yourself making the "generic" case more and more complicated to support changes that are actually unrelated, and end up with an overcomplicated, undecipherable mess that only makes your code unmaintainable. So the trick here is to carefully examine the whole context, ask yourself what would drive change in one or the other "similar" parts, and if in doubt, wait until you know more. If it happens that each time you change A you have to make the exact same change in B for the exact same reasons, then you DO have a real duplicate.
For more practical, short-term advice based on what we can guess from your way-too-abstract example (and from experience), there are at least two patterns that are most often involved in factoring out duplication in a class hierarchy: the template method and the strategy.
NB: I said "unfortunately often misunderstood" because most people seem to jump to the patterns catalog and try to force-fit all of them into their code (whether they make sense for the problem at hand or not), usually by copy-pasting the canonical textbook _implementation_ (usually Java or C++ based) instead of understanding the _concept_ and implementing it in a way that's both idiomatic and adapted to the concrete use case (example: when functions are first-class objects, you don't necessarily need a Strategy class with an abstract base and concrete subclasses - most often plain old callback functions JustWork(tm)).
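As a rough sketch of the template method idea applied to the example above (assuming the shared part of prepare_data really is invariant; the _load_raw_data, _clean and _validate step names are made up for illustration):

from abc import ABC, abstractmethod


class BaseModel(ABC):
    def prepare_data(self):
        # the invariant skeleton lives once, in the base class
        self._load_raw_data()
        self._clean()        # the varying step, supplied by each child class
        self._validate()

    def _load_raw_data(self):
        print('loading raw data')

    def _validate(self):
        print('validating prepared data')

    @abstractmethod
    def _clean(self):
        """The only step each child class has to implement."""


class Class_One(BaseModel):
    def _clean(self):
        print('Class_One-specific cleaning')

With a strategy instead, the varying step would be passed in as a plain callable (e.g. something like BaseModel(clean=some_function)), which is often enough in Python since functions are first-class objects.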
EDIT totally unrelated but this:
def __init__(self):
    # initialize logging
    logging.config.fileConfig(os.path.join(os.getcwd(),
                                           '../myconfig.ini'))
    self.logger = logging.getLogger(__name__)
is NOT how to use logging. Library code can use loggers, but must not configure anything - that is the application's (your main script / function / whatever) responsibility, the rationale being that the proper logging config depends on the context: which type of application is using the lib (a CLI app, a local GUI app and a backend web app don't have the same needs at all) and which kind of environment it runs in (a local dev env will want many more logs than a production one, for example).
Also, with the logger created with __name__ in your base class module, all child classes will send their logs to the same logger, which is certainly not what you want (you want them to have their own package / module specific loggers so you can fine-tune the config per package / module).
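A minimal sketch of that split (the file names are made up): every module asks for its own logger and never configures anything, and only the entry point loads the config.

# class_one.py -- library code: module-specific logger, no configuration
import logging

logger = logging.getLogger(__name__)


class Class_One:
    def prepare_data(self):
        logger.info('Data preparation started.')


# main.py -- application entry point: the only place that configures logging
import logging.config

logging.config.fileConfig('myconfig.ini')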
And finally, this:
os.path.join(os.getcwd(), '../myconfig.ini')
certainly doesn't work as you expect - your cwd can be just about anything at this point and you have no way of knowing it in advance. If you want to reference a path relative to the current file's directory, you want os.path.dirname(os.path.realpath(__file__)). And of course adding system-specific path stuff (i.e. "../") in an os.path.join() call totally defeats the whole point of using os.path.
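A short sketch of building the config path relative to the module's own file instead of the current working directory (myconfig.ini is the file from the question):

import os

# directory containing this .py file, regardless of where the process was started
_THIS_DIR = os.path.dirname(os.path.realpath(__file__))

# one directory up from this file, then myconfig.ini -- no hard-coded '../'
CONFIG_PATH = os.path.join(_THIS_DIR, os.pardir, 'myconfig.ini')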

Style question: keep static methods in class or outside?

I have a class and some methods that I would like to turn into a library. The library relies on a single class, Class. Class has quite a few static methods. These could be moved outside of the Class file.
If I want to turn this into a package, how should I place everything? Should I have one file which has the class, and the methods with static decorators? Or should I move the static methods to a separate file? I know both are functionally equivalent, but I was wondering about generally accepted practice.
In which case it is better to do one versus the other?
A library (package) != a file (module) != a class. That is, you can also have one file with a class and a bunch of functions, which is probably the best fit in your case.
Static methods make sense when you either have a great number of functions and need namespacing, or you plan to exploit inheritance and dynamic binding. In your case, they don't provide much value.
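A minimal sketch of the suggested layout (all names here are hypothetical): one module containing the class plus the former static methods as plain module-level functions.

# mylib/core.py -- the single module of the package
def scale(value, factor):
    """Former static method, now a plain module-level function."""
    return value * factor


class Thing:
    def __init__(self, value):
        self.value = value

    def doubled(self):
        # methods can simply call the module-level helpers
        return scale(self.value, 2)

Callers then do from mylib.core import Thing, scale and get both the class and the helpers from one place.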

Class names and imports as variables in Python

I want a separate Python script where I can define the default .py files that have to be created at the start of a project, depending on which models I want. That way, when I start a new project, I don't have to copy the code from a different project and adjust class names, etc. So, for instance, I want to automatically create a model_1.py as:
class Model1(object):
    code
and a model_2.py as:
class Model2(object):
    code
I want these to be created from another file, where I define which models have to be created. So for instance:
models = ['Model1', 'Model2']
Is it possible to have the class name as a variable? So something like:
class models[0]()
Moreover, is something similar possible for the import part? So
from model_type_x.test import *
where model_type_x is a variable?
What other possibilities are there? Let Python create a text file and turn this into a .py file?
There is a module named cookiecutter for exactly this. You can define templates for your projects and have them configured through a prompt when you create a new project.
First of all, Python files are simply text files. You just have to save them with a .py extension.
What you're trying to achieve is more or less out of the scope of Python itself: Python by itself doesn't generate code. If you want to generate code, you can use templates in any language that you like; it doesn't really matter much which, since at that stage the code is just text and isn't going to get executed.
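As a minimal sketch of that idea using only the standard library (the template text and file-naming scheme are made up to match the example in the question):

from string import Template

# template for one model module; $class_name is filled in per model
MODEL_TEMPLATE = Template(
    'class $class_name(object):\n'
    '    pass\n'
)

models = ['Model1', 'Model2']

for index, name in enumerate(models, start=1):
    # writes model_1.py, model_2.py, ... next to this script
    with open('model_{}.py'.format(index), 'w') as f:
        f.write(MODEL_TEMPLATE.substitute(class_name=name))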
Class names and import names cannot be variables; they are syntactic sugar for defining types or importing code.
If you want to import using a variable name, you can import modules as follows:
__import__(module_name)
Here module_name is a variable, so you can import modules at runtime with this if you can work out what they are called or how they are going to be imported... Even though it's possible, I do not recommend this method, as it's pretty ugly and pretty much useless, since we usually know beforehand what we're importing. You can always use "*", but that's also not a particularly good idea, because some things inside a module won't get exported and it's usually better to state explicitly what you're importing.
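If you really do need to import by a name held in a variable, importlib.import_module is the cleaner spelling of the same idea (the module and class names below are just the ones from the question):

import importlib

module_name = 'model_1'                      # any string computed at runtime
module = importlib.import_module(module_name)

# attributes are then looked up with getattr instead of a fixed name
Model1 = getattr(module, 'Model1')
instance = Model1()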
class models[0]()
This is clearly not possible; the class keyword is used to define a type. What you can do, on the other hand, is this:
locals()[models[0]] = type(models[0], parent_tuple, attributes)
But accessing locals() and using the type constructor to define a new type manually, instead of the class keyword that makes things much easier to read... there's just no point in doing it that way.
The real question here is... What are you trying to achieve? Chances are that you're not looking for the right solution to a problem you don't have.

Python: add a parent class to a class after initial evaluation

General Python Question
I'm importing a Python library (call it animals.py) with the following class structure:
class Animal(object): pass
class Rat(Animal): pass
class Bat(Animal): pass
class Cat(Animal): pass
...
I want to add a parent class (Pet) to each of the species classes (Rat, Bat, Cat, ...); however, I cannot change the actual source of the library I'm importing, so it has to be a run time change.
The following seems to work:
import animals

class Pet(object): pass

for klass in (animals.Rat, animals.Bat, animals.Cat, ...):
    klass.__bases__ = (Pet,) + klass.__bases__
Is this the best way to inject a parent class into an inheritance tree in Python without making modification to the source definition of the class to be modified?
Motivating Circumstances
I'm trying to graft persistence onto a large library that controls lab equipment. Messing with it is out of the question. I want to give ZODB's Persistent a try. I don't want to write the mixin/facade wrapper library because I'm dealing with 100+ classes and lots of imports in my application code that would need to be updated. I'm testing options by hacking on my entry point only: setting up the DB, patching as shown above (but pulling the species classes with introspection on the animals module instead of explicitly listing them), then closing out the DB as I exit.
Mea Culpa / Request
This is an intentionally general question. I'm interested in different approaches to injecting a parent and comments on the pros and cons of those approaches. I agree that this sort of runtime chicanery would make for really confusing code. If I settle on ZODB I'll do something explicit. For now, as a regular user of Python, I'm curious about the general case.
Your method is pretty much how to do it dynamically. The real question is: what does this new parent class add? If you are trying to insert your own methods into a method chain that already exists in those classes, and they were not written properly, you won't be able to; if you are adding original methods (e.g. an interface layer), then you could possibly just use functions instead.
I am one who embraces Python's dynamic nature, and would have no problem using the code you have presented. Make sure you have good unit tests in place (dynamic or not ;), and that modifying the inheritance tree actually lets you do what you need, and enjoy Python!
You should try really hard not to do this. It is strange, and will likely end in tears.
As @agf mentions, you can use Pet as a mixin. If you tell us more about why you want to insert a parent class, we can help you find a nicer solution.
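A minimal sketch of the mixin alternative (the persist method is hypothetical, and PersistentRat / PersistentCat are new names defined in your own code, so the library classes stay untouched):

import animals  # the library from the question, left unmodified


class Pet:
    """Mixin holding the behaviour you want to graft onto the library classes."""

    def persist(self):
        print('saving', self.__class__.__name__)


# your own combinations of the mixin and the untouched library classes
class PersistentRat(Pet, animals.Rat):
    pass


class PersistentCat(Pet, animals.Cat):
    pass

The cost, as noted in the question, is that existing code has to be pointed at the new names, whereas patching __bases__ leaves all the imports in your application code untouched.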
