I'm working on a game as a Python learning project, and I'm trying to move away from having everything in a single file. What is the best way to split functions out into separate modules so they can still access important objects? For example, I have an object that controls the window and recalculates the sizes of UI elements when the user resizes the window. I'd like to be able to use the instance of this object without having to pass it into every function as an argument. An example:
function.py
import test

def add_foo():
    newfoo.add()    # I'd like this to use the newfoo instance created in test.py

test.py
import function

class foo:
    def __init__(self):
        self.x = 1

    def add(self):
        self.x += 10

newfoo = foo()

if __name__ == '__main__':
    newfoo.add()
    print(newfoo.x)
    function.add_foo()
    print(newfoo.x)
Am I just thinking about this the wrong way? The obvious solution here is to add an argument to function.add_foo() so I can pass in the newfoo instance and use it. But I find that a pretty cumbersome way to deal with an object that essentially needs to be universally accessible to every module and function, since it changes any time the user resizes the window.
It's also possible I should be structuring my programs in a different way entirely.
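For concreteness, the pass-the-instance-everywhere version I'm trying to avoid would look something like this (a single-file sketch; the WindowManager name and its methods are made up):

class WindowManager:
    # stand-in for my window/UI-sizing object
    def __init__(self):
        self.width, self.height = 800, 600

    def recalculate(self, width, height):
        self.width, self.height = width, height

def draw_hud(window_manager):            # every function grows an extra parameter...
    print(window_manager.width, window_manager.height)

def on_resize(window_manager, w, h):     # ...and has to thread it through
    window_manager.recalculate(w, h)

wm = WindowManager()
on_resize(wm, 1024, 768)
draw_hud(wm)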
Is there a way to get a reference to the local variables defined in a different module?
For example, I have two files: framework.py and user_code.py:
framework.py:
from kivy.app import App

class BASE_A:
    pass

class MyApp(App):
    def on_start(self):
        '''Here I'd like to get a reference to sub-classes of BASE_A and
        instantiated objects of these sub-classes, defined in the file
        "user_code.py", such as a1, a2, as well as the class A itself,
        without explicitly passing them to MyApp's instance.
        '''
user_code.py:
from framework import MyApp, BASE_A

class A(BASE_A):
    pass

app = MyApp()
a1 = A()
a2 = A()

app.run()
What I'd like to do is to somehow get a reference to the objects a1 and a2, as well as the class A, that were all defined in user_code.py. I'd like to use them in the method on_start, which is invoked in app.run().
Is it possible, for example, to get a reference to the scope in which the MyApp object was defined (user_code.py)?
Some background for anyone who's interested:
I know it's a bit of an odd question, but the reason is:
I'm writing a Python framework for creating custom-made GUI control programs for self-made instruments, based on Arduino. It's called Instrumentino (hosted on GitHub), and I'm currently developing version 2.
For people to use the framework, they need to define a system description file (user_code.py in the example) where they declare what parts they're using in their system (python objects), as well as what type of actions the system should perform (python classes).
What I'm trying to achieve is to automatically identify these objects and classes in MyApp's on_start without asking the user to explicitly pass them, in order to make the user code cleaner. That means avoiding code such as:
app.add_object(a1)
app.add_object(a2)
app.add_class(A)
New-style classes in Python have a method named __subclasses__, which returns a list of all direct subclasses that have been defined so far. You can use that to get hold of the A class in your example: just call BASE_A.__subclasses__() (if you're using Python 2, you'll also need to change BASE_A to inherit from object). See this question and its answers for more details (especially the functions to recursively get all subclasses).
As for getting access to the instances, you should probably add some code to the base class, perhaps saving the instances created by __new__ into some kind of data structure (e.g. a weakset). See this question and its answers for more on that part. Actually, now that I think about it, if you put your instances into a centralized data structure somewhere (i.e. not in an attribute of each subclass), you might not need the function to search for the classes, since you can just inspect the type of each instance and find the subclasses that are being used.
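A minimal sketch of both ideas together (the WeakSet bookkeeping in BASE_A is an assumption about how you might wire it up yourself, not something Kivy provides):

import weakref

class BASE_A(object):
    _instances = weakref.WeakSet()       # centralized registry of live instances

    def __init__(self):
        BASE_A._instances.add(self)      # every subclass instance registers itself

class A(BASE_A):
    pass

a1, a2 = A(), A()

print(BASE_A.__subclasses__())                    # [<class '__main__.A'>]
print(len(BASE_A._instances))                     # 2 -- the two A instances
print({type(obj) for obj in BASE_A._instances})   # the subclasses actually in use

Note that a subclass which defines its own __init__ needs to call super().__init__() for the registration to happen.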
Your question is a bit illogical.
Since Python executes the code sequentially, b is not defined before a is initialized.
If you can set b before a, then:
b = None  # global variable

class A():
    global b
    def __init__(self):
        '''Here I'd like to get a reference to b (of type B) without passing it as an argument'''

class B_BASE():
    pass

class B(B_BASE):
    def __init__(self):
        pass

if __name__ == '__main__':
    b = B()
    a = A()
I wouldn't recommend doing this, because it isn't clean. Since you have a dependency on b in A, you should pass it as a parameter to A's constructor.
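A minimal sketch of that explicit-passing style, using the same class names as above:

class B_BASE():
    pass

class B(B_BASE):
    def __init__(self):
        pass

class A():
    def __init__(self, b):
        # the dependency is handed in, not looked up globally
        self.b = b

if __name__ == '__main__':
    b = B()
    a = A(b)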
I have the following program:
class MyClass(object):
    def __init__(self):
        pass

    def hello(self):
        print("hello")

if __name__ == "__main__":
    a = MyClass()
    a.hello()
It could also be launched as
class MyClass(object):
    def __init__(self):
        self.hello()

    def hello(self):
        print("hello")

if __name__ == "__main__":
    MyClass()
Is there a reason to prefer one style over the other, Python-wise (outside of personal preference)? In other words: is there an advantage or disadvantage to handling the program's general flow and logic within __init__ vs. having that flow outside of the class?
The code is intended to run as a standalone script, pretty much like in the examples above, just more complicated.
I've invented another example that shows the motivation for having a class at all, in your second style. Obviously this is massively over-engineered for what it does. Let's just suppose that aside from your call, there's some other code somewhere that will use the class to do something other than just call one function, so that this complex interface is justified.
class MyClass(object):
    def __init__(self):
        self.get_config()
        self.validate_config()
        self.hello()

    def get_config(self):
        self.message = "hello"

    def validate_config(self):
        # I'm not claiming this is good practice, just
        # an example of multiple methods that share state
        if not self.message:
            raise Exception()

    def hello(self):
        print(self.message)

if __name__ == "__main__":
    MyClass()
So what do we have here? Basically you're using the call to MyClass not for the purpose of creating an object (although it does do that, and you discard it), but in order to execute the object's __init__ method. It works, of course, but it's not "natural". The natural thing is that if you want to run "some stuff", you call a function containing that stuff. You don't especially want it to return some object you don't even care about. If the stuff needs to maintain some state to do its work, then the function can handle that; you don't need it to show you the state at the end:
class MyClass(object):
    def __init__(self):
        self.get_config()
        self.validate_config()

    def get_config(self):
        self.message = "hello"

    def validate_config(self):
        # I'm not claiming this is good practice, just
        # an example of multiple methods that share state
        if not self.message:
            raise Exception()

    def hello(self):
        print(self.message)

def main():
    MyClass().hello()

if __name__ == "__main__":
    main()
This looks more like your first style than your second, so in that sense I prefer the first.
Having a main function might make it a tiny bit easier to use your module under some kind of harness, like a Python prompt or a test harness. It's not necessary, though. If someone wants to do the same as happens when __name__ == '__main__', then in the worst case they can copy that code.
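For example, with a main() function the behaviour can be exercised from a test file or an interactive prompt without re-running the whole script (the module name myscript and the use of pytest here are assumptions):

import myscript

def test_main_prints_hello(capsys):
    myscript.main()                      # run the same flow the __main__ block would
    assert capsys.readouterr().out.strip() == "hello"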
You can always ask yourself: Is this still the behaviour I want if I instantiate 100 instances of this class?
It is tempting to load everything this object is supposed to do into a single method such as __init__(self), to make the main block as short as possible. As long as this stays your private script, there is no point in preferring one over the other.
However, once the code is shared, or the class is imported by another script, you increase reusability by making __init__() execute only a few necessary initialization commands.
I appreciate your example is only to illustrate your point, and not because you want to print "hello" and need a class to do so ;)
The if __name__ == "__main__": in the file suggests that the file is also suitable for importing. Do you want the class to always print "hello" when instantiated? Normally, behavior is in methods and the __init__ only contains initialization.
Well, you solve different things here. Usually, the purpose of creating an object is not to have it fire off something and then disappear (that's more a function's job). Instead, you usually want the object to stay around so you can interact with it. As such, you would definitely want to save the object reference in a variable:
obj = MyClass()
Next, you should think about whether something should always be called whenever an object is created. If that's the case, then yes, you probably should put it into __init__. If that is not true for even just a minor case, then you should probably separate it, so that initialization and that action can be done separately.
Your example is obviously very simple, so it doesn't make much sense in the first place. But usually, I would see hello as a public API which I would likely call when I have an object of that type. On the other hand, I wouldn't expect it to be called automatically just because I create the object.
There might be cases where something like this is useful and desired, but in general, you would prefer calling the method explicitly.
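A minimal sketch of that explicit style:

class MyClass(object):
    def hello(self):
        print("hello")

obj = MyClass()   # construction only sets up the object
obj.hello()       # the action is a deliberate, separate call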
I'm new to programming, and have recently learned python and the basics of object oriented programming. I'm aware that having lots of global variables is generally a bad idea, and that I can put them all into a class instead. Is this the right way to do it?
class GameState(object):
    def __init__(self):
        self.variable1 = 1
        self.variable2 = 2
        self.list = [3, 4, 5]

g_state = GameState()
And, if I wish to access the variables within g_state, what is the best way to go about doing it?
Pass g_state into the functions/classes that need access?
Implement getters and call those?
Use g_state.variable1 directly?
Or is there a better way?
EDIT: To be more specific, I'm trying to write a game in python using pygame, and was thinking of putting my gamestate variables into a class so as to not have a bunch of global variables lying around. I'm unsure of how to access those variables with good design so I don't run into trouble later.
You are right that having too many global variables is not a good idea. A polluted global namespace can lead to errors.
However, don't put them into a class just for the sake of it. If you really have that many variables, maybe you should consider splitting your program into multiple modules.
Also, please understand that you can't really create global variables in Python the way you can in JavaScript. Your variables are always scoped to the module.
Let me illustrate with an example. Module a.py:
A = 42
Module b.py:
import a
print(A)
What do you get? A NameError. Why? Because the variable A is not global; it lives in module a. You need to use a.A to reference it.
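So the working version of b.py is simply:

import a

print(a.A)   # 42 -- the module itself is the namespace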
There is no need to stuff variables into a class. They live in modules, which act as namespaces, and there is nothing wrong with that.
There are two ways to access the variables: through one global object, or by passing an instance to a function. The first one is, in general, a bad idea too. The second one is better, but do not create a single object holding all variables! (see the first comment).
There are more things to consider if you pass around an object. A good idea is to implement things as member functions where suitable.
class VariableBox(object):
    def __init__(self):
        self.variable1 = 1
        self.variable2 = 2
        self.list = [3, 4, 5]

    def do_something(self):
        self.variable1 = self.variable2 + 42
        return self.variable1
Creating a class simply for storing variables is not necessary. A class would only be needed if you really do need multiple instances of that class, each with unique values.
But for a single global state, a dictionary object can suffice for this purpose. You can store it in a module specifically intended for config and state if you want:
conf.py
GAME_STATE = {
    'level': 0,
    'score': 0,
    'misc': [1, 2, 3],
}
main.py
import conf
conf.GAME_STATE['score'] = 100
So your other modules can just import the conf.py module and access the state dict. You can store whatever types you need in this object. It also gives you a convenient location to add functionality for serializing these values out to disk and reading them back on future runs of the program, and to keep them alongside configuration options.
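For instance, a minimal sketch of that save/load idea added to conf.py (the function names and file path are just illustrative):

# conf.py
import json

GAME_STATE = {
    'level': 0,
    'score': 0,
    'misc': [1, 2, 3],
}

def save(path='game_state.json'):
    with open(path, 'w') as f:
        json.dump(GAME_STATE, f)

def load(path='game_state.json'):
    # update in place so existing references to GAME_STATE stay valid
    with open(path) as f:
        GAME_STATE.update(json.load(f))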
NO!!! Building a VariableBox will NOT help you!
Simply use the variable you want, wherever it applies. If you have too many global variables, the real problem is deciding what should be considered global and what should belong to specific structures, or perhaps a difficulty in naming the variables, or in using collections instead of var1, var2, var3, ....
Classes are designed for building objects, i.e., for creating things that differ in their specifics but share the same basis. A valuable class is one that defines an entity and the main behaviors of that entity.
EDIT:
Python does not provide visibility constraints, so you won't be able to protect data by simply stuffing it into a class; the attributes can be accessed from anywhere an instance of the class is available.
Whether you create getters or simply work with an instance of the class directly is just a matter of deciding which you prefer. For the sake of keeping things clear, it may be better to write a controller for your game that acts as the interface between game assets and gameplay.
For example, during execution, you could have:
class Controller:
    def __init__(self):
        self.turn = 0
        self. ...

    def begin(self):
        self.turn += 1
        self.opening_scene()

class Gameplay:
    def __init__(self, num_players, turn, ...):
        self.turn = turn  # if it happens you want to use this value in the game
        self.num_players = num_players

# Main loop
controller = Controller()
controller.num_players = int(raw_input("Number of players: "))
gameplay = Gameplay(controller.num_players, controller.turn)

while True:
    if gameplay.action == ...:
        ...
    elif ...:
        ...
    elif *next turn*:
        controller.next_turn()  # to set things up for the next turn
    else:
        ...
Inside Controller you may aggregate correlated info, so you won't have an endless list of parameters in the upcoming functions.
Anyway, I'm not capable of telling you which is best to use; there are lots of people who study these modularity issues, and I'm not one of them, so this is just my point of view on what could work out nicely for your app.
I'm trying to code up an application to help me keep track of my students. Basically a customized notebook/gradebook. I hacked something together last summer that worked for this past year, but I need something better.
I'm going to pull each student's record from a database, display it on my main page, and have elements clickable to open a frame so that I can edit them. I need to pass information between these two frames, and I feel like an idiot because I can't seem to figure out how to adapt the examples I've come across showing lambdas and same-class information passing.
On my main window I have a StaticText that looks like this
self.q1a_lbl = wx.StaticText(id=wxID_MAINWINDOWQ1A_LBL, label=u'87%',
                             name=u'q1a_lbl', parent=self.alg_panel,
                             pos=wx.Point(115, 48), size=wx.Size(23, 17), style=0)
self.q1a_lbl.SetToolTipString(u'Date \n\nNotes')
self.q1a_lbl.Bind(wx.EVT_LEFT_UP, self.OnQ1a_lblLeftUp)
Then I have the function:
def OnQ1a_lblLeftUp(self, event):
    import quiz_notes
    quiz_notes.create(self).Show(True)
Which works graphically, but I'm not really doing anything other than opening a window when the text is clicked on. Then I have another Frame with
import wx

def create(parent):
    return quiz_notes(parent)

[wxID_QUIZ_NOTES, wxID_QUIZ_NOTESCANCEL_BTN, wxID_QUIZ_NOTESDATEPICKERCTRL1,
 wxID_QUIZ_NOTESENTER_BTN, wxID_QUIZ_NOTESPANEL1, wxID_QUIZ_NOTESTEXTCTRL1,
] = [wx.NewId() for _init_ctrls in range(6)]

class quiz_notes(wx.Frame):
    def _init_ctrls(self, prnt):
        ...and so on
I would like to pass at least a couple of variables. Eventually, when I start integrating the database into it, I would just pass a tuple, or a reference to it. In C I'd just use a pointer. Anyway, I want to make changes and then go back to my main window. In short, what's the best way to work with data between these two classes?
There are no pointers in Python, but there are mutable structures. You can share state between objects by handing the same state object to multiple instances.
What that state object should be depends entirely on what you're trying to do (I still have no idea). It could be anything mutable: a module, class, instance, dict, or list.
In this example the shared state is a list in a global variable:
# a list is mutable
state = 'Hello World'.split()

class Class1:
    def hi(self):
        print ' '.join(state)
        # do something to the shared state
        state[0] = 'Bye'

class Class2:
    def hi(self):
        print ' '.join(state)

x = Class1()
y = Class2()
x.hi()
y.hi()
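Applied to the two-frames situation, the same idea without a global means handing one mutable object to both classes (a plain-Python sketch; the wx details are omitted and the attribute names are made up):

class MainWindow:
    def __init__(self, shared):
        self.shared = shared           # e.g. a dict holding the student record

    def open_quiz_notes(self):
        return QuizNotes(self.shared)  # hand the *same* object to the child frame

class QuizNotes:
    def __init__(self, shared):
        self.shared = shared

    def save(self, notes):
        self.shared['notes'] = notes   # the main window sees this change too

record = {'score': '87%', 'notes': ''}
main = MainWindow(record)
quiz = main.open_quiz_notes()
quiz.save('Retake allowed')
print(record['notes'])                 # -> 'Retake allowed'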
Check out Mike Driscoll's blog post on using PubSub in wxPython.
It uses the PubSub bundled with wxPython - just be aware that PubSub is also a stand-alone library, and the latest stand-alone version's API is different from the one included in wx (a better API, if I may say so).
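For reference, here is a minimal sketch of the publish/subscribe pattern using the stand-alone PyPubSub package (pip install pypubsub); the topic name and handler are made up for this example, and the version bundled with older wxPython uses a slightly different API:

from pubsub import pub

def on_grade_updated(student, grade):
    # any frame can subscribe and react; no direct references between frames needed
    print(student, grade)

pub.subscribe(on_grade_updated, 'grade.updated')

# elsewhere, e.g. in the quiz_notes frame after the user edits a grade:
pub.sendMessage('grade.updated', student='example student', grade='87%')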
I'm developing a PyQt4 application, and it's getting pretty hard for me to navigate through all of the code at once. I know about the import foo statement, but I can't figure out how to make it import a chunk of code directly into my script, like the Bash source foo statement.
I'm trying to do this:
# File 'functions.py'
class foo(asd.fgh):
    def __init__(self):
        print 'foo'
Here is the second file.
# File 'main.py'
import functions

class foo(asd.fgh):
    def qwerty(self):
        print 'qwerty'
I want to include code or merge class declarations from two separate files. In PHP there is include_once('foo.php'), and as I mentioned previously, Bash has source 'foo.sh', but can I accomplish this with Python?
Thanks!
For some reason, my first thought was multiple inheritance. But why not try normal inheritance?
class foo(functions.foo):
    # All of the methods that you want to add go here.
    pass
Is there some reason that this won't work?
Since you just want to merge class definitions, why don't you do:
# main.py
import functions

# All of the old stuff that was in main.foo is now in this class
class FooBase(asd.fgh):
    def qwerty(self):
        print 'qwerty'

# Now create a class that has methods and attributes of both classes
class foo(FooBase, functions.foo):  # Methods from FooBase take precedence
    pass
or
class foo(functions.foo, FooBase):  # Methods from functions.foo take precedence
    pass
This takes advantage of Python's capability for multiple inheritance to create a new class with methods from both sources.
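A self-contained toy illustrating that precedence rule (the class names are made up for the demonstration):

class Base1:
    def greet(self):
        return "from Base1"

class Base2:
    def greet(self):
        return "from Base2"

class Combined(Base1, Base2):
    pass

print(Combined().greet())   # "from Base1" -- the leftmost base wins
print(Combined.__mro__)     # the full method resolution order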
You want execfile(). Although you really don't, since redefining a class, uh... redefines it.
Monkey patching in Python doesn't work in nearly the same way. This is normally considered poor form, but if you want to do it anyway, you can do this:
# File 'functions.py'
class foo(asd.fgh):
    def __init__(self):
        print 'foo'
The imported module remains unchanged. In the importing module, we do things quite differently:
# File 'main.py'
import functions

def qwerty(self):
    print 'qwerty'

functions.foo.qwerty = qwerty
Note that there is no additional class definition, just a bare function. We then add the function as an attribute of the class.