I'm working on a text-based game. I've tried to keep it as organized and professional as possible by following conventions.
I have a Map class, shown below:
import logging

# local imports
import Npc

class Map:
    def __init__(self, name, npcs = []):
        self.name = name
        connections = []
        if all(isinstance(item, Npc) for item in npcs):
            self.npcs = npcs
        else:
            raise Exception("An NPC was not an instance of NPC")

    def addConnection(self, connection):
        if(connection == self):
            return
        self.name = connection.name
        self.connections.append(connection)
My Main class creates two instances of this Map class, named forest and village.
The point of this code is to add village into the connections array of forest:
village = Map("Village")
forest = Map("Forest")
forest.addConnection(village)
It seems simple enough. But for some reason, when forest.addConnection(village) is run, or even if I do forest.connections.append(village), the Map instance "village" gets added to the connections array of both forest and village.
According to the debugger, after forest.addConnection(village) is run,
my two objects look as shown:
village (Map)
|------> name="village"
|------> connections = [village]
forest (Map)
|------> name="forest"
|------> connections = [village]
Why is this happening? Nowhere in my code do I add anything to village's connections array. Is there something about object-oriented programming in Python I'm not understanding? Should I make village and forest classes that inherit/extend the Map class?
Thanks in advance for all the help.
Try to avoid calling a constructor as a default argument of a function.
This is the cause of your issue.
Example:
>>> class Map():
...     def __init__(self, a=list()):  # __init__(self, a=[]) produces the same result
...         print(a)
...         a.append("hello")
...
>>> b = Map()
[]
>>> b = Map()
['hello']
>>> b = Map()
['hello', 'hello']
>>> b = Map()
['hello', 'hello', 'hello']
>>> b = Map()
['hello', 'hello', 'hello', 'hello']
So instead of doing:
def __init__(self, name, npcs = []):
    self.name = name
    ...
do
def __init__(self, name, npcs = None):
    if npcs is None:
        npcs = []
    self.name = name
    ...
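As a quick sanity check, the same toy example with the None default now gives each instance its own fresh list:
>>> class Map():
...     def __init__(self, a=None):
...         if a is None:
...             a = []
...         print(a)
...         a.append("hello")
...
>>> b = Map()
[]
>>> b = Map()
[]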
Found the issue. @iElden got me looking in the right place.
In the constructor, I set connections = [], not self.connections = [].
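For anyone else hitting this, a minimal sketch of the corrected class (combining the self.connections fix with the None-default idiom from the answer; the NPC validation and the name reassignment in addConnection are left out for brevity):
class Map:
    def __init__(self, name, npcs=None):
        self.name = name
        self.npcs = [] if npcs is None else npcs
        self.connections = []  # instance attribute, not a bare local variable

    def addConnection(self, connection):
        if connection is self:
            return
        self.connections.append(connection)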
Thanks for the responses!
So Python isn't my strong suit and I've encountered what I view to be a strange issue. I've narrowed the problem down to a few lines of code, simplifying it to make asking this question easier. I have a list of objects of this class:
class FinalRecord():
    ruid = 0
    drugs = {}
I create them in the shell like this:
finalRecords = []
fr = FinalRecord()
fr.ruid = 7
finalRecords.append(fr)
fr2 = FinalRecord()
fr2.ruid = 10
finalRecords.append(fr2)
As soon as I want to change the drugs dict on one object, it changes it for the other one too
finalRecords[0].drugs["Avonex"] = "Found"
I print out this:
finalRecords[1].drugs
and it shows:
{'Avonex':'Found'}
I'm expecting it to actually be empty. I clearly don't completely understand how Python is working with these objects; can anyone help me out here?
The reason for this is that drugs is a class attribute, so if you change it through one object it will in fact change for all the others.
If you are looking to not have this behaviour, then you are looking for instance attributes. Set drugs in your __init__ like this:
class FinalRecord():
    def __init__(self):
        self.ruid = 0
        self.drugs = {}
Take note of the use of self, which is a reference to your object.
Here is some info on class vs instance attributes
So, full demo illustrating this behaviour:
>>> class FinalRecord():
...     def __init__(self):
...         self.ruid = 0
...         self.drugs = {}
...
>>> obj1 = FinalRecord()
>>> obj2 = FinalRecord()
>>> obj1.drugs['stuff'] = 2
>>> print(obj1.drugs)
{'stuff': 2}
>>> print(obj2.drugs)
{}
You define drugs as a class attribute, not an instance attribute. Because of that, you are always modifying the same object. You should instead define drugs in the __init__ method. I would also suggest using ruid as an argument:
class FinalRecord():
    def __init__(self, ruid):
        self.ruid = ruid
        self.drugs = {}
It could then be used as this:
fr = FinalRecord(7)
finalRecords.append(fr)
fr2 = FinalRecord(10)
finalRecords.append(fr2)
Or more simply:
finalRecords.append(FinalRecord(7))
finalRecords.append(FinalRecord(10))
Here I am pulling JSON data from different websites, and I intend to embed some of the retrieved information into instance variables. The trouble is, each retrieved JSON payload keeps the info I want under different keys and list positions, so each dictionary address is unique per website.
I am struggling to find a way of creating a new instance of a class and passing it different dictionary keys to look up when new data is retrieved.
Something like this would be too easy I feel...
import requests, json

class B(object):
    def __init__(self, name, url, size, price):
        self.name = name
        self.url = url
        self.size_address = size
        self.price_address = price
        self.size = 0
        self.price = 0
        self.data = {}

    def retrieve(self):
        # Data grab from web
        try:
            grab = requests.get(self.url, timeout=10)
            self.data = grab.json()
        except:
            raise RuntimeError(self.name + ' Error')

    def size(self):
        self.size = self.data[self.size_address]
        print self.size

    def price(self):
        self.price = self.data[self.price_address]
        print self.price
>>> a = B('Dave', 'www.dave.com/api', ['names'][0]['size'], ['names'][0]['prices'][0])
>>> a.size()
42030.20
I've had a look at abstract methods as well as binding functions written outside of the class definition. Binding functions looked promising, but I couldn't create the same function for each class with different variables because I would be using the same name.
Hopefully someone can point me in the right direction.
One way to address this is through the decorator design pattern (thanks to martineau for the correction), employing method injection.
Suppose you start off with the following class:
class Info(object):
    def __init__(self, data):
        self.data = data
This class takes a dictionary data, and stores it. The problem is, the key might change among instances.
E.g.,
a1 = Info({'a': 3, 'b': 4})
a2 = Info({'c': 5, 'b': 4})
and we then realize that for a1, the key is a, but for a2, the key is c.
So, we can write a function like so:
import types

def patch_it(info, key):
    def get_it(info):
        return info.data[key]
    info.get_it = types.MethodType(get_it, info)
which takes an Info object and a key, and injects into it a method get_it that, when called, returns the dictionary's value for that key.
Here's an example:
>>> a1 = Info({'a': 3, 'b': 4})
>>> patch_it(a1, 'a')
>>> a1.get_it()
3
>>> a2 = Info({'c': 5, 'b': 4})
>>> patch_it(a2, 'c')
>>> a2.get_it()
5
Note
For the specific example above, this is overkill; why not just add a member to Info specifying what the key is? The point is that this technique allows you to manipulate Info objects in arbitrarily complex ways, independently from the class itself.
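For completeness, a minimal sketch of that simpler alternative, just storing the key on the instance (the names here are illustrative):
class Info(object):
    def __init__(self, data, key):
        self.data = data
        self.key = key  # remember which key holds the value we care about

    def get_it(self):
        return self.data[self.key]

a1 = Info({'a': 3, 'b': 4}, 'a')
a2 = Info({'c': 5, 'b': 4}, 'c')
print(a1.get_it())  # 3
print(a2.get_it())  # 5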
Hello Stack Overflow!
I am executing a simple command in a program that compiles a report of all the books contained in a library. The library contains a list of shelves, and each shelf contains a dictionary of books. However, despite my best efforts, I am always duplicating all my books and placing them on every shelf, instead of only on the shelf I've instructed the program to place the book on.
I expect I have missed out on some kind of fundamental rule with object creation and organization.
I believe the culprits are the enshelf and unshelf methods in the book class.
Thank you so much for your time,
Jake
Code below:
class book():
    shelf_number = None
    def __init__(self, title, author):
        super(book, self).__init__()
        self.title = title
        self.author = author

    def enshelf(self, shelf_number):
        self.shelf_number = shelf_number
        SPL.shelves[self.shelf_number].books[hash(self)] = self

    def unshelf(self):
        del SPL.shelves[self.shelf_number].books[hash(self)]
        return self

    def get_title(self):
        return self.title

    def get_author(self):
        return self.author

class shelf():
    books = {}
    def __init__(self):
        super(shelf, self).__init__()

    def get_books(self):
        temp_list = []
        for k in self.books.keys():
            temp_list.append(self.books[k].get_title())
        return temp_list

class library():
    shelves = []
    def __init__(self, name):
        super(library, self).__init__()
        self.name = name

    def make_shelf(self):
        temp = shelf()
        self.shelves.append(temp)

    def remove_shelf(shelf_number):
        del shelves[shelf_number]

    def report_all_books(self):
        temp_list = []
        for x in range(0,len(self.shelves)):
            temp_list.append(self.shelves[x].get_books())
        print(temp_list)
#---------------------------------------------------------------------------------------
#----------------------SEATTLE PUBLIC LIBRARY ------------------------------------------
#---------------------------------------------------------------------------------------
SPL = library("Seattle Public Library")
for x in range(0,3):
    SPL.make_shelf()
b1 = book("matterhorn","karl marlantes")
b2 = book("my life","bill clinton")
b3 = book("decision points","george bush")
b1.enshelf(0)
b2.enshelf(1)
b3.enshelf(2)
print(SPL.report_all_books())
b1.unshelf()
b2.unshelf()
b3.unshelf()
OUTPUT:
[['decision points', 'my life', 'matterhorn'], ['decision points', 'my life', 'matterhorn'], ['decision points', 'my life', 'matterhorn']]
None
[Finished in 0.1s]
..instead of [["decision points"],["my life"],["matterhorn"]]
Use dict.pop() instead of del.
Add self.books = {} to shelf's __init__. Don't declare books outside of the __init__, because if you do so, all of the instances of that class are going to refer to the same thing. Instead, this makes each instance have its own dictionary, which is of course what you want since a book can't be in two shelves at once.
Do the same for library and its shelves, and for book and its shelf_number (a combined sketch follows the bonuses below).
Pass a library instance as an argument to enshelf and unshelf. When you refer to SPL from within your objects' methods, Python finds that there is no local SPL defined, so it searches for one outside of the local scope; but if you were to try to assign something to SPL or do some other sort of mutative business, you would get an UnboundLocalError.
Bonuses:
class book(object), class shelf(object), and class library(object). (Won't fix your problem, but you should do that anyway.)
You don't need to hash the keys before using them, they will be hashed (if they are hashable, but if you're hashing them, then they are).
There is no need to call super() unless you are inheriting from something, in which case you can delegate a method call to a parent or sibling using it - but you aren't doing that.
get_books() can be implemented as nothing more than return [self.books[k].get_title() for k in self.books.iterkeys()]
Likewise for report_all_books(): return [shlf.get_books() for shlf in self.shelves]. Note that I am not iterating over the indices, but rather over the elements themselves. Try for c in "foobar": print(c) in the interactive shell if you want to see for yourself.
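Putting the main fixes together, here is a minimal sketch (class names are capitalized and books are keyed by title, which differs from the original, and the library is passed in explicitly as suggested):
class Book(object):
    def __init__(self, title, author):
        self.title = title
        self.author = author
        self.shelf_number = None

    def enshelf(self, library, shelf_number):
        self.shelf_number = shelf_number
        library.shelves[shelf_number].books[self.title] = self

    def unshelf(self, library):
        del library.shelves[self.shelf_number].books[self.title]
        self.shelf_number = None
        return self

class Shelf(object):
    def __init__(self):
        self.books = {}  # each shelf gets its own dictionary

    def get_books(self):
        return [b.title for b in self.books.values()]

class Library(object):
    def __init__(self, name):
        self.name = name
        self.shelves = []  # each library gets its own list

    def make_shelf(self):
        self.shelves.append(Shelf())

    def report_all_books(self):
        return [s.get_books() for s in self.shelves]

SPL = Library("Seattle Public Library")
for _ in range(3):
    SPL.make_shelf()
b1 = Book("matterhorn", "karl marlantes")
b1.enshelf(SPL, 0)
print(SPL.report_all_books())  # [['matterhorn'], [], []]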
I've got a bad smell in my code. Perhaps I just need to let it air out for a bit, but right now it's bugging me.
I need to create three different input files to run three Radiative Transfer Modeling (RTM) applications, so that I can compare their outputs. This process will be repeated for thousands of sets of inputs, so I'm automating it with a python script.
I'd like to store the input parameters as a generic Python object that I can pass to three other functions, each of which will translate that general object into the specific parameters needed to run the RTM software it is responsible for. I think this makes sense, but feel free to criticize my approach.
There are many possible input parameters for each piece of RTM software. Many of them overlap. Most of them are kept at sensible defaults, but should be easily changed.
I started with a simple dict
config = {
    'day_of_year': 138,
    'time_of_day': 36000,        # seconds
    'solar_azimuth_angle': 73,   # degrees
    'solar_zenith_angle': 17,    # degrees
    ...
}
There are a lot of parameters, and they can be cleanly categorized into groups, so I thought of using dicts within the dict:
config = {
    'day_of_year': 138,
    'time_of_day': 36000,      # seconds
    'solar': {
        'azimuth_angle': 73,   # degrees
        'zenith_angle': 17,    # degrees
        ...
    },
    ...
}
I like that. But there are a lot of redundant properties. The solar azimuth and zenith angles, for example, can each be found if the other is known, so why hard-code both? So I started looking into Python's built-in property. That lets me do nifty things with the data if I store it as object attributes:
class Configuration(object):
    day_of_year = 138
    time_of_day = 36000         # seconds
    solar_azimuth_angle = 73    # degrees

    @property
    def solar_zenith_angle(self):
        return 90 - self.solar_azimuth_angle
...
config = Configuration()
But now I've lost the structure I had from the second dict example.
Note that some of the properties are less trivial than my solar_zenith_angle example, and might require access to other attributes outside of the group of attributes it is a part of. For example I can calculate solar_azimuth_angle if I know the day of year, time of day, latitude, and longitude.
What I'm looking for:
A simple way to store configuration data whose values can all be accessed in a uniform way, are nicely structured, and may exist either as attributes (real values) or properties (calculated from other attributes).
A possibility that is kind of boring:
Store everything in the dict of dicts I outlined earlier, and have other functions run over the object and calculate the calculable values? This doesn't sound fun. Or clean. To me it sounds messy and frustrating.
An ugly one that works:
After a long time trying different strategies and mostly getting no where, I came up with one possible solution that seems to work:
My classes: (smells a bit func-y, er, funky. def-initely.)
class SubConfig(object):
    """
    Store logical groupings of object attributes and properties.

    The parent object must be passed to the constructor so that we can still
    access the parent object's other attributes and properties. Useful if we
    want to use them to compute a property in here.
    """
    def __init__(self, parent, *args, **kwargs):
        super(SubConfig, self).__init__(*args, **kwargs)
        self.parent = parent

class Configuration(object):
    """
    Some object which holds many attributes and properties.

    Related configuration settings are grouped in SubConfig objects.
    """
    def __init__(self, *args, **kwargs):
        super(Configuration, self).__init__(*args, **kwargs)
        self.root_config = 2

        class _AConfigGroup(SubConfig):
            sub_config = 3

            @property
            def sub_property(self):
                return self.sub_config * self.parent.root_config

        self.group = _AConfigGroup(self)  # Stinky?!
How I can use them: (works as I would like)
config = Configuration()
# Inspect the state of the attributes and properties.
print("\nInitial configuration state:")
print("config.rootconfig: %s" % config.root_config)
print("config.group.sub_config: %s" % config.group.sub_config)
print("config.group.sub_property: %s (calculated)" % config.group.sub_property)
# Inspect whether the properties compute the correct value after we alter
# some attributes.
config.root_config = 4
config.group.sub_config = 5
print("\nState after modifications:")
print("config.rootconfig: %s" % config.root_config)
print("config.group.sub_config: %s" % config.group.sub_config)
print("config.group.sub_property: %s (calculated)" % config.group.sub_property)
The behavior: (output of execution of all of the above code, as expected)
Initial configuration state:
config.rootconfig: 2
config.group.sub_config: 3
config.group.sub_property: 6 (calculated)
State after modifications:
config.rootconfig: 4
config.group.sub_config: 5
config.group.sub_property: 20 (calculated)
Why I don't like it:
Storing configuration data in class definitions inside of the main object's __init__() doesn't feel elegant. Especially having to instantiate them immediately after definition like that. Ugh. I can deal with that for the parent class, sure, but doing it in a constructor...
Storing the same classes outside the main Configuration object doesn't feel elegant either, since properties in the inner classes may depend on the attributes of Configuration (or their siblings inside it).
I could deal with defining the functions outside of everything, so that inside the class I'd have things like
@property
def solar_zenith_angle(self):
    return calculate_zenith(self.solar_azimuth_angle)
but I can't figure out how to do something like
@property
def solar.zenith_angle(self):
    return calculate_zenith(self.solar.azimuth_angle)
(when I try to be clever about it I always run into <property object at 0xXXXXX>)
So what is the right way to go about this? Am I missing something basic or taking a very wrong approach? Does anyone know a clever solution?
Help! My python code isn't beautiful! I must be doing something wrong!
Phil,
Your hesitation about func-y config is very familiar to me :)
I suggest you store your config not as a Python file but as a structured data file. I personally prefer YAML because it looks clean, just as you designed it in the very beginning. Of course, you will need to provide formulas for the auto-calculated properties, but it is not too bad unless you put in too much code. Here is my implementation using the PyYAML lib.
The config file (config.yml):
day_of_year: 138
time_of_day: 36000 # seconds
solar:
    azimuth_angle: 73 # degrees
    zenith_angle: !property 90 - self.azimuth_angle
The code:
import yaml

yaml.add_constructor("tag:yaml.org,2002:map", lambda loader, node:
    type("Config", (object,), loader.construct_mapping(node))())
yaml.add_constructor("!property", lambda loader, node:
    property(eval("lambda self: " + loader.construct_scalar(node))))

config = yaml.load(open("config.yml"))
print "LOADED config.yml"
print "config.day_of_year:", config.day_of_year
print "config.time_of_day:", config.time_of_day
print "config.solar.azimuth_angle:", config.solar.azimuth_angle
print "config.solar.zenith_angle:", config.solar.zenith_angle, "(calculated)"
print
config.solar.azimuth_angle = 65
print "CHANGED config.solar.azimuth_angle = 65"
print "config.solar.zenith_angle:", config.solar.zenith_angle, "(calculated)"
The output:
LOADED config.yml
config.day_of_year: 138
config.time_of_day: 36000
config.solar.azimuth_angle: 73
config.solar.zenith_angle: 17 (calculated)
CHANGED config.solar.azimuth_angle = 65
config.solar.zenith_angle: 25 (calculated)
The config can be of any depth and properties can use any subgroup values. Try this for example:
a: 1
b:
    c: 3
    d: some text
    e: true
    f:
        g: 7.01
x: !property self.a + self.b.c + self.b.f.g
Assuming you already loaded this config:
>>> config
<__main__.Config object at 0xbd0d50>
>>> config.a
1
>>> config.b
<__main__.Config object at 0xbd3bd0>
>>> config.b.c
3
>>> config.b.d
'some text'
>>> config.b.e
True
>>> config.b.f
<__main__.Config object at 0xbd3c90>
>>> config.b.f.g
7.01
>>> config.x
11.01
>>> config.b.f.g = 1000
>>> config.x
1004
UPDATE
Let us have a property config.b.x which uses both self, parent and subgroup attributes in its formula:
a: 1
b:
    x: !property self.parent.a + self.c + self.d.e
    c: 3
    d:
        e: 5
Then we just need to add a reference to parent in subgroups:
import yaml

def construct_config(loader, node):
    attrs = loader.construct_mapping(node)
    config = type("Config", (object,), attrs)()
    for k, v in attrs.iteritems():
        if v.__class__.__name__ == "Config":
            setattr(v, "parent", config)
    return config

yaml.add_constructor("tag:yaml.org,2002:map", construct_config)
yaml.add_constructor("!property", lambda loader, node:
    property(eval("lambda self: " + loader.construct_scalar(node))))
config = yaml.load(open("config.yml"))
And let's see how it works:
>>> config.a
1
>>> config.b.c
3
>>> config.b.d.e
5
>>> config.b.parent == config
True
>>> config.b.d.parent == config.b
True
>>> config.b.x
9
>>> config.a = 1000
>>> config.b.x
1008
Well, here's an ugly way to at least make sure your properties get called:
class ConfigGroup(object):
    def __init__(self, config):
        self.config = config

    def __getattribute__(self, name):
        v = object.__getattribute__(self, name)
        if hasattr(v, '__get__'):
            return v.__get__(self, ConfigGroup)
        return v

class Config(object):
    def __init__(self):
        self.a = 10
        self.group = ConfigGroup(self)
        self.group.a = property(lambda group: group.config.a*2)
Of course, at this point you might as well forego property entirely and just check if the attribute is callable in __getattribute__.
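For what it's worth, a minimal sketch of that callable-based variant (plain lambdas stored on the group instead of property objects; the names here are purely illustrative):
class ConfigGroup(object):
    def __init__(self, config):
        self.config = config

    def __getattribute__(self, name):
        v = object.__getattribute__(self, name)
        # Plain data passes straight through; any callable is treated as a computed attribute.
        if not name.startswith('__') and callable(v):
            return v(self)
        return v

class Config(object):
    def __init__(self):
        self.a = 10
        self.group = ConfigGroup(self)
        self.group.double_a = lambda group: group.config.a * 2

config = Config()
print(config.group.double_a)  # 20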
Or you could go all out and have fun with metaclasses:
def config_meta(classname, parents, attrs):
    defaults = {}
    groups = {}
    newattrs = {'defaults': defaults, 'groups': groups}
    for name, value in attrs.items():
        if name.startswith('__'):
            newattrs[name] = value
        elif isinstance(value, type):
            groups[name] = value
        else:
            defaults[name] = value

    def init(self):
        for name, value in defaults.items():
            self.__dict__[name] = value
        for name, value in groups.items():
            group = value()
            group.config = self
            self.__dict__[name] = group

    newattrs['__init__'] = init
    return type(classname, parents, newattrs)

class Config2(object):
    __metaclass__ = config_meta
    a = 10
    b = 2

    class group(object):
        c = 5

        @property
        def d(self):
            return self.c * self.config.a
Use it like this:
>>> c2 = Config2()
>>> c2.a
10
>>> c2.group.d
50
>>> c2.a = 6
>>> c2.group.d
30
Final edit (?): if you don't want to have to "backtrack" using self.config in subgroup property definitions, you can use the following instead:
class group_property(property):
    def __get__(self, obj, objtype=None):
        return super(group_property, self).__get__(obj.config, objtype)

    def __set__(self, obj, value):
        super(group_property, self).__set__(obj.config, value)

    def __delete__(self, obj):
        return super(group_property, self).__delete__(obj.config)

class Config2(object):
    ...

    class group(object):
        ...

        @group_property
        def e(config):
            return config.group.c * config.a
group_property receives the base config object instead of the group object, so paths always start from the root. Therefore, e is equivalent to the previously defined d.
BTW, supporting nested groups is left as an exercise for the reader.
Wow, I just read an article about descriptors on r/python today, but I don't think hacking descriptors is going to give you what you want.
The only thing I know of that handles sub-configurations like that is flatland. Here's how it would work in flatland, anyhow. You could do:
class Configuration(Form):
    day_of_year = Integer
    time_of_day = Integer

    class solar(Form):
        azimuth_angle = Integer
        zenith_angle = Integer
Then load the dictionary in
config = Configuration({
    'day_of_year': 138,
    'time_of_day': 36000,      # seconds
    'solar': {
        'azimuth_angle': 73,   # degrees
        'zenith_angle': 17,    # degrees
        ...
    },
    ...
})
I love flatland, but I'm not sure you gain much by using it.
You could add a metaclass or decorator to your class definition.
Something like:
def instantiate(klass):
    return klass()

class Configuration(object):
    @instantiate
    class solar(object):
        @property
        def azimuth_angle(self):
            return self.azimuth_angle
That might be better. Then create a nice __init__ on Configuration that can load all the data from a dictionary. I dunno maybe someone else has a better idea.
Here's something a little more complete (without as much magic as LaC's answer, but slightly less generic).
def instantiate(clazz): return clazz()

# dummy functions for testing
calc_zenith_angle = calc_azimuth_angle = lambda(x): 3

class Solar(object):
    def __init__(self):
        if getattr(self, 'azimuth_angle', None) is None and getattr(self, 'zenith_angle', None) is None:
            raise AttributeError("must have either azimuth_angle or zenith_angle provided")
        if getattr(self, 'zenith_angle', None) is None:
            self.zenith_angle = calc_zenith_angle(self.azimuth_angle)
        elif getattr(self, 'azimuth_angle', None) is None:
            self.azimuth_angle = calc_azimuth_angle(self.zenith_angle)

class Configuration(object):
    day_of_year = 138
    time_of_day = 3600

    @instantiate
    class solar(Solar):
        azimuth_angle = 73
        #zenith_angle = 17 #not defined

# if you don't want auto-calculation to be done automagically
class ConfigurationNoAuto(object):
    day_of_year = 138
    time_of_day = 3600

    @instantiate
    class solar(Solar):
        azimuth_angle = 73

        @property
        def zenith_angle(self):
            return calc_zenith_angle(self.azimuth_angle)

config = Configuration()
config_no_auto = ConfigurationNoAuto()
>>> config.day_of_year
138
>>> config_no_auto.day_of_year
138
>>> config_no_auto.solar.azimuth_angle
73
>>> config_no_auto.solar.zenith_angle
3
>>> config.solar.zenith_angle
3
>>> config.solar.azimuth_angle
73
I think I would rather subclass dict so that it fell back to a default if no data was available. Something like this:
class fallbackdict(dict):
    ...

defaults = { 'pi': 3.14 }
x_config = fallbackdict(defaults)
x_config.update({
    'planck': 6.62606957e-34
})
The other aspect can be addressed with callables. Whether this is elegant or ugly depends on whether datatype declarations are useful:
pi: (float, 3.14)
calc = lambda v: v[0](v[1])
x_config.update({
    'planck': (float, 6.62606957e-34),
    'calculated': (lambda x: 1.0 - calc(x_config['planck']), None)
})
Depending on the circumstances, the lambda might be broken out if it is used many times.
Don't know if it is better, but it mostly preserves the dictionary style.
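A possible fleshing-out of that idea, purely as a sketch: the __missing__ hook below is one way to get the fallback-to-defaults behaviour, and the (type, value) tuple convention follows the snippet above, but neither is necessarily what the original answer had in mind.
class fallbackdict(dict):
    """A dict that falls back to a defaults mapping for keys it doesn't hold."""
    def __init__(self, defaults, *args, **kwargs):
        super(fallbackdict, self).__init__(*args, **kwargs)
        self.defaults = defaults

    def __missing__(self, key):
        return self.defaults[key]

calc = lambda v: v[0](v[1])  # apply the declared type to the raw value

defaults = {'pi': (float, 3.14)}
x_config = fallbackdict(defaults)
x_config.update({
    'planck': (float, 6.62606957e-34),
    'calculated': (lambda x: 1.0 - calc(x_config['planck']), None),
})

print(calc(x_config['pi']))             # 3.14, taken from the defaults
print(calc(x_config['planck']))         # 6.62606957e-34
print(x_config['calculated'][0](None))  # 1.0 (the tiny Planck term vanishes in float precision)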