Python: Passing parameters by reference down a chain

I am trying to pass a variable (a MySQL connection class instance) down into a method and then into a class. The issue is that it needs to be done 'by reference', as the main class can change the value. The variable in the final class does not update, though:
application:

def __init__(self, quart_instance) -> None:
    self._db_object : mysql_connection.MySQLConnection = None

def initialise_app(self):
    self.view_blueprint = health_view.create_blueprint(self._db_object)

Health View:

def create_blueprint(db_connector : mysql_connection.MySQLConnection):
    view = View(db_connector)

class View:
    def __init__(self, db_connector):
        self._db_connector = db_connector
When the application performs the database connection in the background, I was expecting self._db_connector in the view to update. Any help would be appreciated as I am very confused.

Don't confuse changing the state of an object with changing the value of a variable; the former is visible through all references to that object, the latter only affects that particular variable.
For this to work, the application's _db_object and the view's db_connector must refer to the same object at all times.
There are essentially two solutions:
Give MySQLConnection a default state, so you can create one immediately to pass along to View rather than starting with None and modifying it later, or
Wrap MySQLConnection in another object that you can do the same with
Both options have benefits and drawbacks.
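For example, here is a minimal sketch of the second option, assuming a hypothetical ConnectionHolder wrapper; in the real code the placeholder string would be a mysql_connection.MySQLConnection:

class ConnectionHolder:
    """Stable object shared by the application and every view."""
    def __init__(self):
        self.connection = None   # filled in once the connect completes

class View:
    def __init__(self, db_holder):
        self._db_holder = db_holder   # same object the application keeps

    def handle_request(self):
        # Read the connection at call time, not at construction time.
        if self._db_holder.connection is None:
            raise RuntimeError("database not connected yet")
        return self._db_holder.connection

holder = ConnectionHolder()
view = View(holder)
holder.connection = "pretend-MySQLConnection"   # later, after connecting
print(view.handle_request())                    # the view sees the update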

Related

How to let a nested class implicitly know about the attributes of an outer class instance that created it?

I am trying to create a client class that is able to be instantiated with connection information and other attributes about how the client is connecting and interacting with a service. The client class would have inner classes that represent objects in the service. These objects could be instantiated in a couple of different ways:
The outer, client class has a factory method that creates the inner class, like client.make_someobject() - this would pass an instance of itself to the new object, so the object would know about and can use the connection information without the caller explicitly passing the connection in.
An existing object in the service can be pulled in by writing client.SomeObject(some_id)
My question is mostly related to the second scenario. When creating an instance of an inner class directly, without a factory method that can just pass in self, how could I ensure that the new instance of the inner class knows about the attributes of the outer, client class?
Illustrative example:
class Client():
    def __init__(self, client_attr):
        self.client_attr = client_attr

    def make_serviceobject(self):
        return ServiceObject._make_serviceobject(self)

class ServiceObject():
    def __init__(self, id, client=None):
        self.id = id
        if client:
            self.client = client
        # ...

    @classmethod
    def _make_serviceobject(cls, client):
        id = 'some_id'
        return cls(id, client=client)

my_client = Client(some_attr)

# now, how can this new ServiceObject know about the my_client attributes and methods?
my_existing_resource = my_client.ServiceObject(some_id)

# I am trying to avoid this:
my_existing_resource = my_client.ServiceObject(some_id, client=my_client)
You're pretty close, but there are a few issues here:
Class methods (where the class itself is passed as the first parameter, conventionally named cls, rather than an instance conventionally named self) need to be decorated with @classmethod. However, there isn't really a reason to set up a hidden class method within the inner class; since it's logic that only gets used by the outer class, and is accessible (via the Client.make_serviceobject interface) from the outer class, it logically belongs in the outer class. (There also isn't a reason to use classmethod anyway - as opposed to staticmethod - because there is no hope of this working polymorphically - the inner class is specific to this outer class.)
my_client.ServiceObject (that is, the class name of the inner class, looked up via an outer class instance) just gets you that class itself, rather than anything associated with or bound to the outer class instance.
It makes no sense to offer a default None for the inner class' client (i.e., containing instance), because a) there will never be a None value (we only intend to create the instance from the outer class) and b) the outer class instance is presumably necessary for the inner class' functionality (otherwise, why do any of this setup work at all rather than just having two separate classes?)
You apparently want to supply the ID from the calling code, so the internal code can't just make one up.
To fix these problems, we simply:
Make the outer class' interface do the work needed to create an instance of the inner class, and have it accept a parameter for the ID.
Have it do so directly, via the constructor.
Have calling code use that interface.
I would also mark the inner class' name to indicate that it isn't intended to be dealt with directly.
It looks like:
class Client():
    def __init__(self, client_attr):
        self.client_attr = client_attr

    def make_serviceobject(self, id):
        # any logic necessary to compute the ID goes here.
        return _ServiceObject(id, self)

class _ServiceObject():
    def __init__(self, id, client):
        self.id = id
        self.client = client
        # ...

my_client = Client(some_attr)
my_existing_resource = my_client.make_serviceobject('some_id')
# assert my_existing_resource.client == my_client

Python - Using classes as default values of another class' attribute - NameError

I'm creating a structure of classes for a wrapper of an API I'm currently writing.
I have multiple classes defined inside my models file. I want to assign instances of other classes as the default values of some class attributes. When I do this, I get a NameError, because sometimes I try to use classes that are defined below the current class, so Python does not know about these classes yet. I've tried multiple solutions but none of them seem to work. Does anybody know an alternative or has experience with this?
The classes I've defined:

class RateResponse(BaseModel):
    def __init__(self,
                 provider=Provider()
                 ):
        self.provider = provider

class Provider(ObjectListModel):
    def __init__(self):
        super(Provider, self).__init__(list=[], listObject=ProviderItem)

    @property
    def providerItems(self):
        return self.list

class ProviderItem(BaseModel):
    def __init__(self,
                 code=None,
                 notification=Notification(),
                 service=Service()
                 ):
        self.code = code
        self.notification = notification
        self.service = service
As you can see above, I'm initialising the attribute 'provider' on the class RateResponse with an empty object of the class Provider, which is defined below it. I'm getting a NameError on this line because it's defined below RateResponse.
provider=Provider()
NameError: name 'Provider' is not defined
The simple solution to above would be to shift the places of the classes. However, this is only a snippet of my file that is currently 400 lines long, all with these types of classes and initializations. It would be impossible to order them all correctly.
I've looked up some solutions where I thought I could return an empty object of a class by a string. I thought the function would only evaluate after all the classes were defined, but I was wrong. This is what I tried:
def getInstanceByString(classStr):
    return globals()[classStr]()

class RateResponse(BaseModel):
    def __init__(self,
                 provider=getInstanceByString('Provider')
                 ):
        self.provider = provider
But to no avail. Does anybody have experience with this? Is this even possible within Python? Is my structure just wrong? Any help is appreciated. Thanks.
This code might not mean what you want it to mean:
class RateResponse(BaseModel):
    def __init__(self,
                 provider=Provider()
                 ):
        ...
This code says that when this class is defined, you want to make an instance of Provider, which will be the default value for the provider parameter.
You may have meant that the default argument should be a new instance of Provider for each client that makes an instance of RateResponse.
You can use the Mutable Default Argument pattern to get the latter:
class RateResponse(BaseModel):
    def __init__(self, provider=None):
        if provider is None:
            provider = Provider()
        ...
However, if you really do want a single instance when the client wants the default you could add a single instance below the Provider definition:
class Provider(ObjectListModel):
    ...

Singleton_Provider = Provider()
Then the RateResponse class could still use the current pattern, but instead perform this assignment inside the if:
if provider is None:
    provider = Singleton_Provider
At the time that the assignment is performed, the Singleton_Provider will have been created.
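Put together, a minimal sketch of that arrangement; BaseModel and ObjectListModel are simplified stand-ins for the classes in the question, just enough to make the example run:

class BaseModel:                    # stand-in for the question's base class
    pass

class ObjectListModel:              # stand-in, just enough to run the example
    def __init__(self, list=None, listObject=None):
        self.list = list if list is not None else []
        self.listObject = listObject

class RateResponse(BaseModel):      # defined first, as in the question
    def __init__(self, provider=None):
        if provider is None:
            # the name is only looked up when __init__ runs, so it is fine
            # that Singleton_Provider is created further down the file
            provider = Singleton_Provider
        self.provider = provider

class ProviderItem(BaseModel):
    pass

class Provider(ObjectListModel):
    def __init__(self):
        super().__init__(list=[], listObject=ProviderItem)

Singleton_Provider = Provider()

response = RateResponse()
print(response.provider is Singleton_Provider)   # True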

Is it good practice to keep a reference to the current instance of a class in a class variable?

I have a class that will always have only one instance at a time. I'm just starting OOP in Python and I was wondering which is the better approach: to assign an instance of this class to a variable and operate on that variable, or rather have this instance referenced in a class variable instead. Here is an example of what I mean:
Referenced instance:

class Transaction(object):
    current_transaction = None
    in_progress = False

    def __init__(self):
        self.__class__.current_transaction = self
        self.__class__.in_progress = True
        self.name = 'abc'
        self.value = 50

    def update(self):
        do_smth()

Transaction()

if Transaction.in_progress:
    Transaction.current_transaction.update()
    print Transaction.current_transaction.name
    print Transaction.current_transaction.value
Instance in a variable:

class Transaction(object):
    def __init__(self):
        self.name = 'abc'
        self.value = 50

    def update(self):
        do_smth()

current_transaction = Transaction()
in_progress = True

if in_progress:
    current_transaction.update()
    print current_transaction.name
    print current_transaction.value
It's possible to see that you've encapsulated too much in the first case just by comparing the overall readability of the code: the second is much cleaner.
A better way to implement the first option is to use class methods: decorate all your methods with @classmethod and then call with Transaction.method().
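A minimal sketch of that class-method variant; start() is a made-up name for whatever begins the transaction, and the print stands in for the question's do_smth():

class Transaction(object):
    in_progress = False
    name = None
    value = None

    @classmethod
    def start(cls):
        cls.in_progress = True
        cls.name = 'abc'
        cls.value = 50

    @classmethod
    def update(cls):
        print('updating %s / %s' % (cls.name, cls.value))   # stands in for do_smth()

Transaction.start()
if Transaction.in_progress:
    Transaction.update()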
There's no practical difference in code quality for these two options. However, assuming that the class is final, that is, without derived classes, I would go for a third choice: use the module as a singleton and kill the class. This would be the most compact and most readable choice. You don't need classes to create singletons.
I think the first version doesn't make much sense, and the second version of your code would be better in almost all situations. It can sometimes be useful to write a Singleton class (where only one instance ever exists) by overriding __new__ to always return the saved instance (after it's been created the first time). But usually you don't need that unless you're wrapping some external resource that really only ever makes sense to exist once.
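For reference, a minimal sketch of that __new__-based singleton; SharedResource is a made-up name for such an external resource wrapper, not code from the question:

class SharedResource(object):
    _instance = None

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super(SharedResource, cls).__new__(cls)
        return cls._instance

a = SharedResource()
b = SharedResource()
print(a is b)   # True: every "construction" returns the saved instance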
If your other code needs to share a single instance, there are other ways to do so (e.g. a global variable in some module or a constructor argument for each other object that needs a reference).
Note that if your instances have a very well defined life cycle, with specific events that should happen when they're created and destroyed, and unknown code running and using the object in between, the context manager protocol may be something you should look at, as it lets you use your instances in with statements:
with Transaction() as trans:
    trans.whatever()   # the Transaction will be notified if anything raises
    other_stuff()      # an exception that is not caught within the with block
    trans.foo()        # (so it can do a rollback if it wants to)
foo()                  # the Transaction will be cleaned up (e.g. committed) when the indented with block ends
Implementing the context manager protocol requires an __enter__ and __exit__ method.
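A minimal sketch of what those two methods could look like here; the prints stand in for hypothetical commit and rollback logic:

class Transaction(object):
    def __enter__(self):
        # runs at the start of the with block
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # runs when the with block ends, normally or via an exception
        if exc_type is None:
            print('commit')      # stand-in for the real commit logic
        else:
            print('rollback')    # stand-in for the real rollback logic
        return False             # don't suppress the exception

    def whatever(self):
        print('doing work inside the transaction')

with Transaction() as trans:
    trans.whatever()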

What is the pythonic way of saving data between function calls?

The context for me is a single int's worth of info I need to retain between calls to a function which modifies that value. I could use a global, but I know that's discouraged. For now I've used a default argument in the form of a list containing the int and taken advantage of mutability so that changes to the value are retained between calls, like so:
def increment(val, saved=[0]):
    saved[0] += val
    # do stuff
This function is being attached to a button via tkinter, like so:
button0 = Button(root, text="demo", command=lambda: increment(val))
which means there's no return value I can assign to a local variable outside the function.
How do people normally handle this? I mean, sure, the mutability trick works and all, but what if I needed to access and modify that value from multiple functions?
Can this not be done without setting up a class with static methods and internal attributes, etc?
Use a class. Use an instance member for keeping the state.
class Incrementable:
    def __init__(self, initial_value=0):
        self.x = initial_value

    def increment(self, val):
        self.x += val
        # do stuff
You can add a __call__ method for simulating a function call (e.g. if you need to be backward-compatible). Whether or not it is a good idea really depends on the context and on your specific use case.
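For example, a minimal sketch of that __call__ variant, so an instance can be passed anywhere a plain callable (such as a tkinter command) is expected:

class Incrementable:
    def __init__(self, initial_value=0):
        self.x = initial_value

    def __call__(self, val):
        self.x += val
        # do stuff

increment = Incrementable()
increment(5)            # looks just like the old function call
increment(3)
print(increment.x)      # 8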
Can this not be done without setting up a class with static methods and internal attributes, etc?
It can, but solutions not involving classes/objects with attributes are not "pythonic". It is so easy to define classes in python (the example above is only 5 simple lines), and it gives you maximal control and flexibility.
Using python's mutable-default-args "weirdness" (I'm not going to call it "a feature") should be considered a hack.
If you don't want to set up a class, your only[1] other option is a global variable. You can't save it to a local variable because the command runs from within mainloop, not within the local scope in which it was created.
For example:
button0 = Button(root, text="demo", command=lambda: increment_and_save(val))

def increment_and_save(val):
    global saved
    saved = increment(val)
[1] Not literally true, since you can use all sorts of other ways to persist data, such as a database or a file, but I assume you want an in-memory solution.
Aren't you mixing up model and view?
The UI elements, such as buttons, should just delegate to your data model. As such, if you have a model with a persistent state (i.e. class with attributes), you can just implement a class method there that handles the required things if a button is clicked.
If you try to bind stateful things to your presentation (UI), you will consequently lose the desirable separation between said presentation and your data model.
In case you want to keep your data model access simple, you can think about a singleton instance, such that you don't need to carry a reference to that model as an argument to all UI elements (plus you don't need a global instance, even though this singleton holds some kind of globally available instance):
def singleton(cls):
    instance = cls()
    instance.__call__ = lambda: instance
    return instance

@singleton
class TheDataModel(object):
    def __init__(self):
        self.x = 0

    def on_button_demo(self):
        self.x += 1

if __name__ == '__main__':
    # If an element needs a reference to the model, just get
    # the current instance from the decorated singleton:
    model = TheDataModel
    print('model', model.x)
    model.on_button_demo()
    print('model', model.x)
    # In fact, it is a global instance that is available via
    # the class name; even across imports in the same session
    other = TheDataModel
    print('other', other.x)
    # Consequently, you can easily bind the model's methods
    # to the action of any UI element
    button0 = Button(root, text="demo", command=TheDataModel.on_button_demo)
But, and I have to point this out, be cautious when using singleton instances, as they easily lead to bad design. Set up a proper model and just make the access to the major model compound accessible as a singleton. Such unified access is often referred to as context.
We can make it context-oriented by using context managers. The example is not specific to UI elements, but it illustrates the general scenario.
class MyContext(object):
    # This is my container:
    # have whatever state on it,
    # support different operations
    def __init__(self):
        self.val = 0

    def increment(self, val):
        self.val += val

    def get(self):
        return self.val

    def __enter__(self):
        # do on creation
        return self

    def __exit__(self, type, value, traceback):
        # do on exit
        self.val = 0

def some_func(val, context=None):
    if context:
        context.increment(val)

def some_more(val, context=None):
    if context:
        context.increment(val)

def some_getter(context=None):
    if context:
        print(context.get())

with MyContext() as context:
    some_func(5, context=context)
    some_more(10, context=context)
    some_getter(context=context)

Calling type(dict) functions within classes on class variables (Python 3.4)

I am creating a class and trying to define class variables that correspond to a function like .keys() or .values() that are called on another class variable.
For example:
class DATA(object):
    def __init__(self, id, database = {}):
        self.id = id
        self.database = database
        self.addresses = database.keys()
        self.data = database.values()
This does not seem to work, as when I create an instance of the class
foo = DATA(0,{"a":1,"b":2})
and then ask for:
print(foo.addresses)
>>> []
and it gives back an empty list.
Note:
On my actual program I start out with an empty dictionary for any class instance, then later on I use a function to add to the dictionary. In this case calling the ".database" still works but ".addresses" does not.
Can anyone help me with this problem?
I'm not sure that this is the problem, but using a mutable such as {} as a default argument often leads to bugs. See: "Least Astonishment" and the Mutable Default Argument
This is safer:
def __init__(self, id, database=None):
    if database is None:
        self.database = {}
    else:
        self.database = database
I don't understand the purpose of DATA.addresses and DATA.data. Could you use functions with the property decorator instead, to avoid redundancy?
@property
def addresses(self):
    return self.database.keys()

@property
def data(self):
    return self.database.values()
The issue is that you're calling keys right in your __init__ method, and saving the result. What you want to do instead is to call keys only when you want to access it.
Now, depending on the requirements of your class, you may be able to do this in a few different ways.
If you don't mind changing the calling code quite a bit, you could make it very simple: just use foo.database.keys() rather than foo.addresses. The latter doesn't need to exist, since all the information it contains is already available via the methods of the database attribute.
Another approach is to save the bound instance method database.keys to an instance variable of your DATA object (without calling it):
class DATA(object):
    def __init__(self, database=None):
        if database is None:
            database = {}
        self.database = database
        self.addresses = database.keys  # don't call keys here!
In the calling code, instead of foo.addresses you'd use foo.addresses() (a function call, rather than just an attribute lookup). This looks like a method call on the DATA instance, though it isn't really. It's calling the already bound method on the database dictionary. This might break if other code might replace the database dictionary completely (rather than just mutating it in place).
A final approach is to use a property to request the keys from the database dict when a user tries to access the addresses attribute of a DATA instance:
class DATA(object):
    def __init__(self, database=None):
        if database is None:
            database = {}
        self.database = database
        # don't save anything as "addresses" here

    @property
    def addresses(self):
        return self.database.keys()
This may be best, since it lets the calling code treat addresses just like an attribute. It will also work properly if you completely replace the database object in some other code (e.g. foo.database = {"foo":"bar"}). It may be a bit slower though, since there'll be an extra function call that the other approaches don't need.
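For completeness, a short usage sketch of the property version above (Python 3 semantics, where keys() returns a live view):

foo = DATA()
foo.database["a"] = 1
print(list(foo.addresses))      # ['a'], reflects the current dict

foo.database = {"foo": "bar"}   # even replacing the dict entirely still works
print(list(foo.addresses))      # ['foo']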
