A little question concerning app architecture:
I have a Python script running as a daemon.
Inside it I have many objects, all inheriting from one class (let's call it 'entity').
I also have one main object, let's call it 'topsys'.
Entities are identified by the pair (id, type) (where type is roughly the class), and they are connected in many wicked ways. They are also created and deleted all the time, and they need to access other entities.
So I need a kind of storage, basically a dictionary of dictionaries (one for each type), holding all entities.
And the question is: which is better, attaching this dictionary to 'topsys' as an object property, or to the class entity as a property of the class? I would opt for the second (so entities do not need to know about the existence of 'topsys'), but I don't feel good about using properties directly on classes. Or maybe there is another way?
There's not enough detail here to be certain of what's best, but in general I'd store the actual object registry as a module-level (global) variable in the module defining the base class, and have a method on the base class to access it.
_entities = []

class entity(object):
    @staticmethod
    def get_entity_registry():
        return _entities
Alternatively, hide _entities entirely and expose a few methods, e.g. get_object_by_id and register_object, so you can change the storage of _entities itself more easily later on.
By the way, a tip in case you're not there already: you'll probably want to look into weakrefs when creating object registries like this.
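For instance, here is a minimal sketch combining both ideas: a weakref-based registry plus a couple of accessor functions. The ident attribute and the function names are just illustrative, not something from your code.

import weakref

# Module-level registry; weak references mean entities that are deleted
# elsewhere drop out of the registry automatically.
_entities = weakref.WeakValueDictionary()

def register_object(obj):
    _entities[(obj.ident, type(obj).__name__)] = obj

def get_object_by_id(ident, type_name):
    # returns None if the entity was never registered or has been deleted
    return _entities.get((ident, type_name))

class entity(object):
    def __init__(self, ident):
        self.ident = ident
        register_object(self)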
There is no problem with using properties on classes. Classes are just objects, too.
In your case, with so little information available, I would go for a class property too, because not creating dependencies is great and will be one less worry later on.
Related
Are there any conventions on how to implement services in Django? Coming from a Java background, we create services for business logic and we "inject" them wherever we need them.
Not sure if I'm using Python/Django the wrong way, but I need to connect to a 3rd party API, so I'm using an api_service.py file to do that. The question is: I want to define this service as a class, and in Java I could inject this class wherever I need it and it would act more or less like a singleton. Is there something like this I can use with Django, or should I build the service as a singleton and get the instance somewhere, or even have just separate functions and no classes?
TL;DR: It's hard to tell without more details, but chances are you only need a mere module with a couple of plain functions, or at most a couple of simple classes.
Longest answer:
Python is not Java. You can of course (technically I mean) use Java-ish designs, but this is usually not the best thing to do.
Your description of the problem to solve is a bit too vague to come up with a concrete answer, but we can at least give you a few hints and pointers (no pun intended):
1/ Everything is an object
In Python, everything (well, everything you can find on the RHS of an assignment, that is) is an object, including modules, classes, functions and methods.
One of the consequences is that you don't need any complex framework for dependency injection - you just pass the desired object (module, class, function, method, whatever) as argument and you're done.
Another consequence is that you don't necessarily need classes for everything - a plain function or module can be just enough.
A typical use case is the strategy pattern, which, in Python, is most often implemented using a mere callback function (or any other callable FWIW).
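As a minimal sketch of the strategy pattern done with plain callables (the sorting example below is made up):

def by_price(item):
    return item["price"]

def cheapest_first(items, key_func):
    # the "strategy" is just whatever callable the caller passes in
    return sorted(items, key=key_func)

items = [{"name": "b", "price": 3}, {"name": "a", "price": 1}]
print(cheapest_first(items, by_price))
# a lambda works just as well: cheapest_first(items, lambda i: i["name"])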
2/ A Python module is a singleton
As stated above, at runtime a python module is an object (of type module) whose attributes are the names defined at the module's top-level.
Except for some (pathological) corner cases, a Python module is only imported once for a given process and is guaranteed to be unique. Combined with the fact that Python's "global" scope is really only "module-level" global, this makes modules proper singletons, so this design pattern is actually already built in.
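A minimal sketch of this, with made-up module and function names; any code that imports config sees the same module object and hence the same state:

# config.py - the module itself is the singleton
_settings = {}

def set_option(name, value):
    _settings[name] = value

def get_option(name, default=None):
    return _settings.get(name, default)

# elsewhere in the project:
# import config
# config.set_option("debug", True)
# config.get_option("debug")  # True - same module object everywhere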
3/ A Python class is (almost) a singleton
Python classes are objects too (instances of type type, directly or indirectly), and Python has classmethods (methods that act on the class itself instead of acting on the current instance) and class-level attributes (attributes that belong to the class object itself, not to its instances). So if you write a class that only has classmethods and class attributes, you technically have a singleton - and you can use this class either directly or through instances without any difference, since classmethods can be called on instances too.
The main difference here wrt/ "modules as singletons" is that with classes you can use inheritance...
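A minimal sketch of such a "class as singleton", assuming all you need is shared state plus behaviour (the names are made up):

class Registry(object):
    # class-level attribute: shared by everyone using the class
    _items = {}

    @classmethod
    def register(cls, name, obj):
        cls._items[name] = obj

    @classmethod
    def get(cls, name):
        return cls._items.get(name)

Registry.register("answer", 42)
print(Registry.get("answer"))    # 42
print(Registry().get("answer"))  # classmethods work on instances too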
4/ Python has callables
Python has the concept of "callable" objects. A "callable" is an object whose class implements the __call__() operator, and each such object can be called as if it were a function.
This means that you can not only use functions as objects but also use objects as functions - IOW, the "functor" pattern is builtin. This makes it very easy to "capture" some context in one part of the code and use this context for computations in another part.
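A minimal sketch of that "functor" idea - the context captured at construction time is used later, when the object is called:

class Multiplier(object):
    def __init__(self, factor):
        # context captured here...
        self.factor = factor

    def __call__(self, value):
        # ...and used here, when the object is called like a function
        return value * self.factor

double = Multiplier(2)
print(double(21))                    # 42
print(list(map(double, [1, 2, 3])))  # [2, 4, 6]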
5/ A Python class is a factory
Python has no new keyword. Python classes are callables, and instantiation is done by just calling the class.
This means that you can actually use a class or function the same way to get an instance, so the "factory" pattern is also builtin.
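A minimal sketch - the class itself and a plain function can be used interchangeably as factories (all names here are invented):

class Connection(object):
    def __init__(self, host):
        self.host = host

def connection_from_url(url):
    # an alternate factory: the caller still "just calls it" to get an instance
    return Connection(url.split("://", 1)[-1])

c1 = Connection("example.com")                   # calling the class
c2 = connection_from_url("http://example.com")   # calling a function
print(c1.host, c2.host)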
6/ Python has computed attributes
Besides the most obvious application (replacing a public attribute with a getter/setter pair without breaking client code), this - combined with other features like callables etc. - can prove very powerful. As a matter of fact, that's how functions defined in a class become methods.
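A minimal sketch using the builtin property decorator - client code keeps using plain attribute access:

class Temperature(object):
    def __init__(self, celsius):
        self._celsius = celsius

    @property
    def fahrenheit(self):
        # computed on attribute access, no getter call at the call site
        return self._celsius * 9.0 / 5.0 + 32

t = Temperature(20)
print(t.fahrenheit)  # 68.0 - looks like a plain attribute to the caller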
7/ Python is dynamic
Python's objects are (usually) dict-based (there are exceptions but those are few and mostly low-level C-coded classes), which means you can dynamically add / replace (and even remove) attributes and methods (since methods are attributes) on a per-instance or per-class basis.
While this is not a feature you want to use without reason, it's still a very powerful one, as it allows you to dynamically customize an object (remember that classes are objects too), allowing for more complex object and class creation schemes than what you can do in a static language.
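A minimal sketch of adding a method to a class at runtime (only do this when you have a real reason to; the names are made up):

class Robot(object):
    def __init__(self, name):
        self.name = name

def greet(self):
    return "hello, I am %s" % self.name

# attach a new method to the class after the fact;
# existing and future instances pick it up immediately
Robot.greet = greet
print(Robot("R2").greet())  # hello, I am R2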
But Python's dynamic nature goes even further - you can use class decorators and/or metaclasses to tailor the creation of a class object (you may want to have a look at the Django models source code for a concrete example), or even just dynamically create a new class using its metaclass and a dict of functions and other class-level attributes.
Here again, this can really make seemingly complex issues a breeze to solve (and avoid a lot of boilerplate code).
Actually, Python exposes and lets you hook into most of its internals (object model, attribute resolution rules, import mechanism etc.), so once you understand the whole design and how everything fits together, you really have a handle on most aspects of your code at runtime.
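As a small illustration, here is a sketch of creating a class dynamically with the three-argument form of type(), which is what metaclasses build on (the class and attribute names are made up):

def area(self):
    return self.width * self.height

# type(name, bases, namespace) builds a new class object on the fly
Rect = type("Rect", (object,), {"width": 2, "height": 3, "area": area})
print(Rect().area())  # 6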
Python is not Java
Now I understand that all of this looks a bit like a vendor's catalog, but the point is to highlight how Python differs from Java and why canonical Java solutions - or (at least) canonical Java implementations of those solutions - usually don't port well to the Python world. It's not that they don't work at all, just that Python usually has more straightforward (and much simpler, IMHO) ways to implement common (and less common) design patterns.
wrt/ your concrete use case: you will have to post a much more detailed description, but "connecting to a 3rd party API" (I assume a REST API?) from a Django project is so trivial that it really doesn't warrant much design consideration by itself.
In Python you can structure a program the same way as in Java. You don't have to be as strongly typed, but you can be. I use type hints when creating common classes and libraries that are used across multiple scripts.
Here you can read about Python typing
You can do the same here in Python. Define your class in a package (folder) called services.
Then, if you want a singleton, you can do it like this:
class Service(object):
    instance = None

    def __new__(cls):
        if cls.instance is not None:
            return cls.instance
        else:
            inst = cls.instance = super(Service, cls).__new__(cls)
            return inst
And now you import it wherever you want in the rest of the code
from services import Service
Service().do_action()
Adding to the answers given by bruno desthuilliers and TreantBG.
There are certain questions that you can ask about the requirements.
For example, one question could be: does the API being called change for different types of objects?
If the API doesn't change, you will probably be okay with keeping it as a method in some file or class.
If it does change, such that you are calling API 1 for some scenarios, API 2 for others, and so on, you will likely be better off moving/abstracting this logic out into its own class (from a code organisation point of view), as in the sketch below.
PS: Python allows you to be as flexible as you want when it comes to code organisation. It's really up to you to decide how you want to organise the code.
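A minimal sketch of what that abstraction could look like; all the class, function and scenario names here are invented for illustration:

class WeatherAPI1(object):
    def fetch(self, city):
        return "api1 data for %s" % city

class WeatherAPI2(object):
    def fetch(self, city):
        return "api2 data for %s" % city

def get_weather_service(scenario):
    # callers never branch on the concrete API - they just call fetch()
    return WeatherAPI2() if scenario == "premium" else WeatherAPI1()

print(get_weather_service("premium").fetch("Oslo"))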
So I'm wondering, where does Kivy (or Python, for that matter) hold all these instances of the class? And how do I reference 'homebase' from a class?
For example, self from within a method refers directly to the instance of the class. This is really helpful, because I can directly modify the object itself.
But if I'm writing a method in a different class, I don't know how to find the list/dict that contains all the other instances created from the .kv code, because I didn't write that code.
Another way to put it: If the code were like a filing cabinet, I'm wondering in what folder might I be able to pull out each individual instance that's been instantiated from my code?
Especially given Kivy's proclivity to instantiate objects on my behalf from the .kv file, I wonder where it puts all those instances of the class.
Use locals() and globals() to access all accessible instances; however, it's kind of tricky because they return everything, even strings, ints, etc. What you can do is probe the dictionary recursively, and before each new recursive call do this:
if not isinstance(key, (int, str, ...)):
    dig_deeper()
which should provide you with a way to dig through all instances and basically find them. However, if there was something like:
FloatLayout()
or
Color()
or anything not stored elsewhere as a variable, you might not be able to access it. Color(), however, will be stored somewhere in the canvas, so that should be easy if you know where to look.
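Here is a sketch of that recursive "dig deeper" idea written as a generic helper; the name and the set of containers it follows (dicts, lists, tuples, sets) are my own choices, not anything Kivy-specific:

def find_instances(obj, wanted_type, _seen=None):
    _seen = _seen if _seen is not None else set()
    found = []
    if id(obj) in _seen:          # avoid infinite loops on cycles
        return found
    _seen.add(id(obj))
    if isinstance(obj, wanted_type):
        found.append(obj)
    if isinstance(obj, dict):
        for value in obj.values():
            found.extend(find_instances(value, wanted_type, _seen))
    elif isinstance(obj, (list, tuple, set)):
        for value in obj:
            found.extend(find_instances(value, wanted_type, _seen))
    return found

# e.g. find_instances(globals(), MyWidgetClass)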
This is a typical pattern that I run into in Python, but it probably applies to most other multi-paradigm languages.
I write a bunch of functions. Some of these are like load_data() and some are like do_something_with_data(); that is, the latter acts on the data that is read in by the first function. Let's say it takes a minute to read in the data.
After a while I refactor the code so that these are both methods within a class. While this seems neater, it is also harder to develop with. That is, if I fix a bug in do_something_with_data(), the object that is already instantiated is not fixed. I have to re-instantiate it, which might take a minute or so since it has to re-read the data.
object = my_object(); object.load_data(); object.do_something_with_data()
I am wondering if there is a good pattern for handling this issue. Can you update an object's methods without refreshing the data? Should I write a method that takes an old object and copies in all the data fields from an object that has been saved? Other ideas?
Methods are looked up on the class. On module reload, existing instances end up referencing a stale class: their __class__ still points to the object that was the old module.classname, which is not the same object as the new module.classname.
You have two options:
Update the old class to have your new method:
existing_instance.__class__.methodname = module.classname.methodname.__func__
Replace the class references on the existing objects:
existing_instance.__class__ = module.classname
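Roughly, the second option looks like this in practice; mymodule and MyClass are hypothetical stand-ins for your own module and class:

import importlib
import mymodule  # hypothetical module containing MyClass

obj = mymodule.MyClass()
obj.load_data()                 # the slow part, done once

# ... edit do_something_with_data() on disk, then:
importlib.reload(mymodule)

# point the existing instance at the freshly reloaded class;
# its data attributes are untouched, only method lookup changes
obj.__class__ = mymodule.MyClass
obj.do_something_with_data()    # now uses the new code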
I'm hoping someone may be able to help me out with a design issue I'm dealing with. It's specifically in the game development domain, but I think it's really a broader issue that has probably been solved in an accepted way. I'm working in Python.
I have a GameObject class that holds the position of the object (and other general state attributes) and a reference to my Engine object, which holds information about the game world at large. GameObjects can be categorized further: they can be VisibleGameObjects, PhysicalGameObjects (collidable), or both, in concrete form. For example, I could have an invisible boundary, which is physical, but does not have a visible representation.
VisibleGameObjects implement a draw() method that handles drawing functionality, delegating this through its parent's Engine reference. PhysicalGameObjects have bounding boxes, and define logic to handle collisions, also requiring access to GameObject attributes (acceleration, velocity, etc.)
The problem is, what happens when I'd like to define a concrete object that needs to inherit the behavior of both a VisibleGameObject and a PhysicalGameObject (which both share the parent GameObject)? It's my understanding that this kind of diamond inheritance is a big-bad idea.
How can I refactor this to essentially bolt on the specific behaviors to a concrete child class (drawable, collidable) that depend on the state of the parent abstract class?
EDIT: My one thought was to assign them to concrete instances of GameObjects as components, favoring a has-a relationship over an is-a relationship. Even that doesn't seem so clean, however; checking whether an object is collidable by searching a "components" list for a collidable component doesn't seem great either.
It seems like you're looking for a trait.
Unfortunately, Python doesn't support traits natively, although there are multiple modules that try to implement the model.
My suggestion (unless you want to depend on the mentioned modules) would be to write abstract behaviour classes that expose the behaviour you want but don't inherit from the main class, leaving that to a third class which inherits from both the main class and the behaviour class.
It's probably less confusing with an example:
Create a Visible abstract class that does not inherit from GameObject but exposes all the intended behaviour/functions (as if it inherited from GameObject). Then have VisibleGameObject inherit from both GameObject and Visible.
Obviously, you can only get away with writing Visible like this in a dynamic language like Python - otherwise the compiler would complain that it couldn't access nonexistent fields.
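A minimal sketch of that layout; the class names beyond those in the question and the x/y attributes are just illustrative:

class GameObject(object):
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

class Visible(object):
    # behaviour class: written as if it inherited from GameObject,
    # i.e. it assumes self.x / self.y exist on the concrete class
    def draw(self):
        print("drawing at", self.x, self.y)

class Physical(object):
    def collides_with(self, other):
        return (self.x, self.y) == (other.x, other.y)

class VisibleGameObject(GameObject, Visible):
    pass

class Boundary(GameObject, Physical):        # physical, not visible
    pass

class Crate(GameObject, Visible, Physical):  # both behaviours
    pass

Crate(1, 2).draw()
print(isinstance(Crate(0, 0), Physical))     # cheap "is it collidable?" check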
Let's say that I have a Python module to control a videoconference system. In that module I have some global variables and functions to control the states of the videoconference, the calls, a phone book, etc.
To start the control system, the module self-executes a function to initialize the videoconference (Ethernet connection, polling states and so on).
Now, if I need to start controlling a second videoconference system, I'm not sure how to approach the problem. I thought about making the videoconference module a class and creating two instances (one for each videoconference system), then initializing both, but the problem is that I don't really need two instances of a videoconference class, since I won't do anything with those objects: I only need to initialize the systems, and after that I don't need to call or keep them for anything else.
example code:
Videoconference.py
class Videoconference:
    def __init__(self):
        self.state = 0
        # Initialization code
Main.py
from Videoconference import Videoconference
vidC1 = Videoconference()
vidC2 = Videoconference()
# vidC1 and vidC2 will never be used again
So, the question is: should I convert the videoconference module into a class and create instances (like in the example), even if I'm not going to use them for anything apart from the initialization process? Or is there another solution that doesn't involve creating a class?
Perhaps this is a matter of preference, but I think having a class in the above case would be the safer bet. Often I'll write a function and, when it gets too complicated, think that I should have created a class (and often do so), but I've never created a class that was so simple that I thought "this is too easy, why didn't I just write a function?".
Even if you have one object instead of two, it often helps readability to create a class. For example:
vid = VideoConference()
# vid.initialize_old_system() # Suppose you have an old system that you no longer use
# But want to keep its method for reference
vid.initialize_new_system()
vid.view_call_history(since=yesterday)
This sounds like the perfect use case for a VideoConferenceSystem object. You say you have globals (ew!) that govern state (yuck!) and functions that are called for control.
Sounds to me like you've got the chance to convert all of that into an object with attributes that hold state and methods to mutate it. Sounds like you should be refactoring more than just the initialization code, so those vidC1 and vidC2 objects are useful.
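Something along these lines; every attribute and method name here is invented purely for illustration:

class VideoConferenceSystem(object):
    def __init__(self, host):
        # state that used to live in module globals now lives per instance
        self.host = host
        self.state = "idle"
        self.call_history = []
        self._connect()

    def _connect(self):
        # placeholder for the real Ethernet / polling initialization
        self.state = "connected"

    def start_call(self, number):
        self.call_history.append(number)
        self.state = "in_call"

systems = [VideoConferenceSystem("10.0.0.1"), VideoConferenceSystem("10.0.0.2")]
systems[0].start_call("555-0001")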
I think you're approaching this problem the right way in your example. In this way, you can have multiple video conferences, each of which may have different attribute states (e.g. vidC1.conference_duration, etc.).