I need to append a dictionary to a dictionary of dictionaries - python

OK, I have a dictionary called food. food has two elements, both of which are dictionaries themselves: veg and dairy. veg has two elements, root : Turnip and stem : Asparagus. dairy has cheese : Cheddar and yogurt : Strawberry. I also have a new dictionary, fruit, which has red : Cherry and yellow : Banana.
food['veg']['root'] == 'Turnip'
food['dairy']['cheese'] == 'Cheddar'
and so on, and
fruit['red'] == 'Cherry'
Now I would like to add the "fruit" dictionary to the "food" dictionary in its entirety, so that I will have:
food['fruit']['red'] == 'Cherry'
I know that I could do something like this:
food['fruit'] = fruit
But that seems clumsy. I would like to do something like
food.Append(fruit)
But that doesn't do what I need.
(Edited to removed the initial capitals from variable names, since that seemed to be causing a distraction.)

Food['Fruit'] = Fruit is the right and proper way (apart from the capitalized names).
As @kindall wisely notes in the comments, there can be several names referencing the same dictionary, which is why one can't build a function that maps the object to its name and uses that as the new key in your food dict.
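A minimal end-to-end sketch with the question's data (names lowercased per the edit), just to make the accepted approach concrete:

food = {'veg': {'root': 'Turnip', 'stem': 'Asparagus'},
        'dairy': {'cheese': 'Cheddar', 'yogurt': 'Strawberry'}}
fruit = {'red': 'Cherry', 'yellow': 'Banana'}

food['fruit'] = fruit  # attach the whole sub-dictionary under the key 'fruit'
assert food['fruit']['red'] == 'Cherry'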

I know what you are trying to do. You are trying to keep your code DRY (Don't Repeat Yourself). Sadly Python, and many other languages, are very bad at this.
The main issue, though, is that you are mixing the names of your variables with the values in your program. It is a common faux pas in many programming languages.
If you really wanted to do this, the thing you have bound to the fruit=... variable would need to know its name.
You are trying to say, in python "graft this dict onto this other dict".
Python demands that you say it like "graft this dict onto this other dict, attaching it at 'Fruit'".
The only way around this would be for the name "Fruit" to already exist somewhere. But it doesn't: it only exists as a variable name, which is not a value your program can inspect.
There are two solutions:
You could avoid ever creating a fruit=... variable, and instead graft the dict directly onto the "Fruit" key. You would still type "Fruit" once, but rather than avoiding typing somedict["Fruit"] like you want, we're avoiding typing the variable name. This is accomplished by programming in an anonymous style:
somedict["Fruit"] = {'red':'apple', 'yellow':'banana'}
Sadly, python will not let you do this if your construction requires statements; you can only get away with this if you just have a literal. The other solution is:
You could create a function which does this for you:
graft(somedict, "Fruit", {'red':'apple', 'yellow':'banana'})
Sadly this too would not work if your construction required any kind of statement. You could create a variable x={'red':...} or something, but that defeats the whole purpose. The third way, which you shouldn't use, is locals(), but that still requires you to refer to the name.
In conclusion, IF you require significant for loops and functions and if statements, what you are trying to do is impossible in python, unless you change the entire way you construct your fruit=... dictionary. It would be possible with combinations of multiline lambdas, dictionary comprehensions (dict((k,v) for k,v in ...)), inline IFTRUE if EXPR else IFFALSE, etc. The danger of this style is that it very quickly becomes hard to read.
It is possible if you are able to express your dictionary as a literal dictionary, or a dictionary comprehension, or the output of a function which you have already written. In fact it's fairly easy (see other answers). Unfortunately the original question does not say how you are building these dictionaries.
Assuming those answers don't answer your question (that is, you are building these in a really complicated manner), you can write "meta" code: code that will make your dictionary for you, and abuse reflection. However the best solution in python is to just try to make your dictionary with iteration. For example:
foods = {
    "Fruit": {...},
    "Meats": makeMeats(),
}
for name, data in ...:
    foods[name] = someProcessing(data)
foods.update(dataFromSomeFile)  # almost the same as the last two lines

Is there any reason you're using the square brackets? Because you can represent this data structure with nested literal dictionaries:
food = {
    'veg': {
        'red': 'tomato',
        'green': 'lettuce',
    },
    'fruit': {
        'red': 'cherry',
        'green': 'grape',
    },
}

You can't do append because a dictionary is not a list: it is a mapping from keys to values, not a sequence you can append to. What you want to do is update the dictionary with a new key/value pair. You need to use:
Food['Fruit'] = Fruit
or, alternatively:
Food.update({'Fruit': Fruit})
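And if you ever need to attach several sub-dictionaries at once, update can merge them in a single call (the Grain dict here is a made-up example):

Food.update({'Fruit': Fruit, 'Grain': {'cereal': 'Oats'}})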
An unrelated note: it's not Python coding style to write variables with capitals. Fruit would be written as fruit instead.

The basic problem you have is that your dictionary does not know its own name. Which is normal; generally, any number of names can be bound to a Python object, and no single name is in any way privileged over any others. In other words, in a = {}; b = a, both a and b are names for the same dictionary, and a is not the "real" name of the dictionary just because it was assigned first. And in fact, a dictionary has no way to even know what name appears on the left side of the assignment.
So one alternative is to have the dictionary contain its own name as a key. For example:
fruit = {"_name": "fruit"}
fruit["red"] = "cherry"
food[fruit["_name"]] = fruit
Well, that didn't help much, did it? It did in a way: the attachment to the food dictionary no longer uses a string literal, so at least Python will give you an error message if you mistype the name. But you're actually typing "fruit" even more than before: you now have to type it when you create the dictionary in addition to when you attach it to another dictionary. And there is a fair bit more typing in general.
Also, having the name as an item in the dictionary is kind of inconvenient; when you are iterating over the dictionary, you have to write code to skip it.
You could write a function to do the attachment for you:
def attach(main, other):
    main[other["_name"]] = other
Then you don't have to repeat yourself when you attach the sub-dictionary to the main one:
fruit = {"_name": "fruit"}
fruit["red"] = "cherry"
attach(food, fruit)
And of course, now you can actually create a dictionary subclass that knows its own name and can attach a named subdictionary. As a bonus, we can make the name an attribute of the dictionary rather than storing it in the dictionary, which will keep the actual dictionary cleaner.
class NamedDict(dict):
    def __init__(self, name="", seq=(), **kwargs):
        dict.__init__(self, seq, **kwargs)
        self.__name__ = name
    def attach(self, other):
        self[other.__name__] = other
food = NamedDict("food")
fruit = NamedDict("fruit")
fruit["red"] = "cherry"
food.attach(fruit)
But we still have one repeat, when the NamedDict is initially defined: food = NamedDict("food") for example. How do we dispense with that?
It is possible, though unwieldy and probably not worth the trouble. Python has two kinds of objects that have an "intrinsic" name: classes and functions. In other words:
class Foo:
    pass
The above not only creates a variable named Foo in the current namespace, the class's name is also conveniently stored in the class's __name__ attribute. (Functions do something similar.) By abusing classes and metaclasses, we can exploit the underlying machinery to completely avoid repeating ourselves—with the minor drawback of having to write our dictionaries as though they were classes!
class NamedDict(dict):
    class __metaclass__(type):
        def __new__(meta, name, bases, attrs):
            if "NamedDict" not in globals():  # we're defining the base class
                return type.__new__(meta, name, bases, attrs)
            else:
                attrs.pop("__module__", None)  # Python adds this; do not want!
                return meta.NamedDict(name, **attrs)

        class NamedDict(dict):
            def __init__(self, name, seq=(), **kwargs):
                dict.__init__(self, seq, **kwargs)
                self.__name__ = name
            def attach(self, other):
                self[other.__name__] = other

        __call__ = NamedDict
Now, instead of defining our dictionaries the usual way, we declare them as subclasses of NamedDict. Thanks to the metaclass, subclassing the outer NamedDict class actually creates instances of the inner NamedDict class (which is the same as before). The attributes of the subclass we define, if any, become items in the dictionary, like keyword arguments of dict().
class food(NamedDict): pass
class fruit(NamedDict): red = "cherry"
# or, defining each item separately:
class fruit(NamedDict): pass
fruit["red"] = "cherry"
food.attach(fruit)
As a bonus, you can still define a NamedDict the "regular" way, by instantiating it as a class:
fruit = NamedDict("fruit", red="cherry")
Be warned, though: the "class that's really a dictionary" is a pretty non-standard idiom for Python and I would suggest that you not ever actually do this, as other programmers will not find it at all clear. Still, this is how it can be done in Python.

You are probably looking for defaultdict. It works by inserting a new empty dictionary whenever you look up a first-level key that is missing.
Example shown below:
from collections import defaultdict

basic_foods = {'Veg': {'Root': 'Turnip'}, 'Dairy': {'Cheese': 'Cheddar'}}
foods = defaultdict(dict, basic_foods)
foods['Fruit']['Red'] = "Cherry"
print(foods['Fruit'])  # {'Red': 'Cherry'}

Related

Dynamically update all instances of multiple input function

I'm creating a program with a class that has 3 input attributes. The program calls a function that creates many of these objects with their inputs being given based on some other criteria not important to this question.
As I further develop my program, I may want to add more and more attributes to the class. This means that I have to go and find all instances of the function I am using to create these objects, and change the input arguments.
For example, my program may have many of these:
create_character(blue, pizza, running)
where inputs correspond to character's favorite color, food, and activity. Later, I may want to add a fourth input, such as favorite movie, or possibly a fifth or sixth or ninety-ninth input.
Do professional programmers have any advice for structuring their code so that they don't have to go through and individually change each line that the create_character function is called so that it now has the new, correct number of inputs?
Find and replace seems fine, but it makes errors possible and is also tedious. I'm anticipating calling this function at least 50 times.
I can think of a few options for how you could design your class to make it easier to add new kinds of "favorite" things later.
The first approach is to make most (or all) of the arguments optional. That is, you should specify a default value for each one (which might be None if there's not a real value that could apply as a default). This way, when you add an extra argument, the existing places that call the function without the new argument will still work, they'll just get the default value.
Another option would be to use a container (like a dictionary) to hold the values, rather than using a separate variable or argument for each one. For instance, in your example you could represent the character's favorites using a dictionary like favorites = {'color': blue, 'food': pizza, 'activity': running} (assuming those values are defined somewhere), and then you could pass the dictionary around instead of the separate items. If you use the get method of the dictionary, you can also make this type of design use default values (favorites.get('movie') will return None if you haven't updated the code that creates the dictionary to add a 'movie' key yet).
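A minimal sketch of that dictionary-based design; the create_character signature and key names here are assumptions based on the question:

def create_character(favorites):
    # .get() supplies a default when a key hasn't been added yet, so old
    # call sites keep working as new kinds of favorites are introduced
    color = favorites.get('color', 'unspecified')
    food = favorites.get('food', 'unspecified')
    activity = favorites.get('activity', 'unspecified')
    movie = favorites.get('movie', 'unspecified')  # added later; old callers unaffected
    print(color, food, activity, movie)

create_character({'color': 'blue', 'food': 'pizza', 'activity': 'running'})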
You can take advantage of argument/keyword-argument unpacking to support dynamically changing function parameters, and also factory functions/classes that generate the function you need:

def create_character(required1, required2, *opt_args, **kwargs):
    """create_character must always be called with required1 and required2,
    but can receive an *opt_args sequence holding an arbitrary number of
    positional args. kwargs holds a dict of optional keyword args."""
    for i, pos_arg in enumerate(opt_args):
        # pos_arg walks the opt_args sequence
        print("position: {}, value: {}".format(i + 3, pos_arg))
    for keyword, value in kwargs.items():  # .items() yields (key, value) pairs
        print("Keyword was: {}, Value was: {}".format(keyword, value))

pos_args = (1, 2, 3)
create_character('this is required', 'this is also required', *pos_args)
""" position: 3, value: 1
    position: 4, value: 2
    position: 5, value: 3 """

a_dict = {
    'custom_arg1': 'custom_value1',
    'custom_arg2': 'custom_value2',
    'custom_arg3': 'custom_value3'
}
create_character('this is required', 'this is also required', **a_dict)
""" Keyword was: custom_arg2, Value was: custom_value2
    Keyword was: custom_arg3, Value was: custom_value3
    Keyword was: custom_arg1, Value was: custom_value1 """
I really like the list or dictionary input method, but it was still messy and allowed for the possibility of error. What I ended up doing was this:
I changed the class object to have no inputs. Favorites were first assigned with random, default, or unspecified options.
After the class object was created, I then edited its attributes, like so:

class character(object):
    def __init__(self):
        # favorites start out with default/unspecified values
        self.favorite_movie = "unspecified"
        self.favorite_activity = "unspecified"

new_character = character()
new_character.favorite_movie = "Dr. Strangelove"
I think that the downside to this approach is that it should be slower than inputting the variables directly. The upside is that this is easy to change in the future. Perhaps when the program is finished, it will make more sense to then convert to @Blckknight's method, and give the input as a list or dictionary.

What is difference between str.format_map(mapping) and str.format

I don't understand the str.format_map(mapping) method. I only know it is similar to str.format(*args, **kwargs) method and you can also pass a dictionary as an argument (please see my example).
Example:
print ("Test: argument1={arg1} and argument2={arg2}".format_map({'arg1':"Hello",'arg2':123}))
Can someone explain to me the difference between the str.format_map(mapping) and str.format(*args, **kwargs) methods, and why I would need the str.format_map(mapping) method?
str.format(**kwargs) makes a new dictionary in the process of calling. str.format_map(kwargs) does not. In addition to being slightly faster, str.format_map() allows you to use a dict subclass (or other object that implements mapping) with special behavior, such as gracefully handling missing keys. This special behavior would be lost otherwise when the items were copied to a new dictionary.
See: https://docs.python.org/3/library/stdtypes.html#str.format_map
Here's another thing you can't do with .format(**kwargs):
>>> class UserMap:
...     def __getitem__(self, key):
...         return input(f"Enter a {key}: ")
>>> mad_lib = "I like to {verb} {plural noun} and {plural noun}.".format_map(UserMap())
Enter a verb: smell
Enter a plural noun: oranges
Enter a plural noun: pythons
>>> mad_lib
'I like to smell oranges and pythons.'
Calling .format(**UserMap()) wouldn't even work because in order to unpack the **kwargs into a dictionary, Python needs to know what all of the keys in the mapping are, which isn't even defined.
Another one:
>>> class NumberSquarer:
...     def __getitem__(self, key):
...         return str(int(key.lstrip('n'))**2)
>>> "{n17} is a big number, but {n20} is even bigger.".format_map(NumberSquarer())
'289 is a big number, but 400 is even bigger.'
It would be impossible to unpack **NumberSquarer() since it has infinitely many keys!
str.format(**mapping), when called, creates a new dictionary, whereas str.format_map(mapping) doesn't. In addition, format_map(mapping) lets you handle missing keys, which is useful when working with a dict subclass, for example:
class Foo(dict):  # inheriting from dict
    def __missing__(self, key):
        return key  # substitute the key name itself for any missing value

print('({x},{y})'.format_map(Foo(x='2')))  # missing key y -> prints (2,y)
print('({x},{y})'.format_map(Foo(y='3')))  # missing key x -> prints (x,3)

Is there a reason not to send super().__init__() a dictionary instead of **kwds?

I just started building a text based game yesterday as an exercise in learning Python (I'm using 3.3). I say "text based game," but I mean more of a MUD than a choose-your-own adventure. Anyway, I was really excited when I figured out how to handle inheritance and multiple inheritance using super() yesterday, but I found that the argument-passing really cluttered up the code, and required juggling lots of little loose variables. Also, creating save files seemed pretty nightmarish.
So, I thought, "What if certain class hierarchies just took one argument, a dictionary, and just passed the dictionary back?" To give you an example, here are two classes trimmed down to their init methods:
class Actor:
    def __init__(self, in_dict, **kwds):
        super().__init__(**kwds)
        self._everything = in_dict
        self._name = in_dict["name"]
        self._size = in_dict["size"]
        self._location = in_dict["location"]
        self._triggers = in_dict["triggers"]
        self._effects = in_dict["effects"]
        self._goals = in_dict["goals"]
        self._action_list = in_dict["action list"]
        self._last_action = ''
        self._current_action = ''  # both ._last_action and ._current_action get updated by .update_action()

class Item(Actor):
    def __init__(self, in_dict, **kwds):
        super().__init__(in_dict, **kwds)
        self._can_contain = in_dict["can contain"]  # boolean entry
        self._inventory = in_dict["can contain"]    # either a list or dict entry

class Player(Actor):
    def __init__(self, in_dict, **kwds):
        super().__init__(in_dict, **kwds)
        self._inventory = in_dict["inventory"]  # entry should be a Container object
        self._stats = in_dict["stats"]
Example dict that would be passed:
playerdict = {'name': '', 'size': '0', 'location': '', 'triggers': None, 'effects': None, 'goals': None, 'action list': None, 'inventory': Container(), 'stats': None}
(The None's get replaced by {} once the dictionary has been passed.)
So, in_dict gets passed to the previous class instead of a huge payload of **kwds.
I like this because:
It makes my code a lot neater and more manageable.
As long as the dicts have at least some entry for the key called, it doesn't break the code. Also, it doesn't matter if a given argument never gets used.
It seems like file IO just got a lot easier (dictionaries of player data stored as dicts, dictionaries of item data stored as dicts, etc.)
I get the point of **kwds (EDIT: apparently I didn't), and it hasn't seemed cumbersome when passing fewer arguments. This just appears to be a comfortable way of dealing with a need for a large number of attributes at the creation of each instance.
That said, I'm still a major python noob. So, my question is this: Is there an underlying reason why passing the same dict repeatedly through super() to the base class would be a worse idea than just toughing it out with nasty (big and cluttered) **kwds passes? (e.g. issues with the interpreter that someone at my level would be ignorant of.)
EDIT:
Previously, creating a new Player might have looked like this, with an argument passed for each attribute.
bob = Player('bob', Location = 'here', ... etc.)
The number of arguments needed blew up, and I only included the attributes that really needed to be present to not break method calls from the Engine object.
This is the impression I'm getting from the answers and comments thus far:
There's nothing "wrong" with sending the same dictionary along, as long as nothing has the opportunity to modify its contents (Kirk Strauser) and the dictionary always has what it's supposed to have (goncalopp). The real answer is that the question was amiss, and using in_dict instead of **kwds is redundant.
Would this be correct? (Also, thanks for the great and varied feedback!)
I'm not sure I understand your question exactly, because I don't see how the code looked before you made the change to use in_dict. It sounds like you have been listing out dozens of keywords in the call to super (which is understandably not what you want), but this is not necessary. If your child class has a dict with all of this information, it can be turned into kwargs when you make the call with **in_dict. So:
class Actor:
    def __init__(self, **kwds):
        ...  # body elided

class Item(Actor):
    def __init__(self, **kwds):
        self._everything = kwds
        super().__init__(**kwds)
I don't see a reason to add another dict for this, since you can just manipulate and pass the dict created for kwds anyway
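A hypothetical usage of that pattern (the keyword names are invented for illustration):

sword = Item(name='sword', size='1', location='armory')
print(sword._everything)  # {'name': 'sword', 'size': '1', 'location': 'armory'}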
Edit:
As for the question of the efficiency of using the ** expansion of the dict versus listing the arguments explicitly, I did a very unscientific timing test with this code:
import time

def some_func(**kwargs):
    for k, v in kwargs.items():
        pass

def main():
    name = 'felix'
    location = 'here'
    user_type = 'player'
    kwds = {'name': name,
            'location': location,
            'user_type': user_type}

    start = time.time()
    for i in range(10000000):
        some_func(**kwds)
    end = time.time()
    print('Time using expansion:\t{0}s'.format(end - start))

    start = time.time()
    for i in range(10000000):
        some_func(name=name, location=location, user_type=user_type)
    end = time.time()
    print('Time without expansion:\t{0}s'.format(end - start))

if __name__ == '__main__':
    main()
Running this 10,000,000 times gives a slight (and probably statistically meaningless) advantage passing around a dict and using **.
Time using expansion:	7.9877269268s
Time without expansion:	8.06108212471s
If we print the IDs of the dict objects (kwds outside and kwargs inside the function), you will see that Python creates a new dict for the function to use in either case. Be careful interpreting the IDs, though: the kwargs dict from one call is freed before the next call begins, so CPython often reuses the same memory address, which can make it look as though the function kept a single dict forever. In fact a fresh kwargs dict is built on every call, no matter how you call it. (See also the well-known SO question about how mutable default parameters are handled in Python, which is somewhat related.)
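A quick probe of that point, assuming CPython (sometimes-identical id() values are an artifact of memory reuse, not of a shared dict):

def probe(**kwargs):
    kwargs['tag'] = True  # mutate the dict the function received
    return id(kwargs)

d = {'name': 'felix'}
probe(**d)
print(d)  # {'name': 'felix'} -- the caller's dict is untouched, so the
          # function must have been handed a freshly built kwargs dict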
So from a performance perspective, you can pick whichever makes sense to you. It should not meaningfully impact how python operates behind the scenes.
I've done that myself where in_dict was a dict with lots of keys, or a settings object, or some other "blob" of something with lots of interesting attributes. That's perfectly OK if it makes your code cleaner, particularly if you name it clearly like settings_object or config_dict or similar.
That shouldn't be the usual case, though. Normally it's better to explicitly pass a small set of individual variables. It makes the code much cleaner and easier to reason about. It's possible that a client could pass in_dict = None by accident and you wouldn't know until some method tried to access it. Suppose Actor.__init__ didn't peel apart in_dict but just stored it like self.settings = in_dict. Sometime later, Actor.method comes along and tries to access it, then boom! Dead process. If you're calling Actor.__init__(var1, var2, ...), then the caller will raise an exception much earlier and provide you with more context about what actually went wrong.
So yes, by all means: feel free to do that when it's appropriate. Just be aware that it's not appropriate very often, and the desire to do it might be a smell telling you to restructure your code.
This is not Python-specific, but the greatest problem I can see with passing arguments like this is that it breaks encapsulation. Any class may modify the arguments, and it's much harder to tell which arguments each class expects, making your code difficult to understand and harder to debug.
Consider explicitly consuming the arguments in each class, and calling the super's __init__ on the remaining. You don't need to make them explicit:
class ClassA(object):
    def __init__(self, arg1, arg2=""):
        pass

class ClassB(ClassA):
    def __init__(self, arg3, arg4="", *args, **kwargs):
        ClassA.__init__(self, *args, **kwargs)

ClassB(3, 4, 1, 2)
You can also leave the variables uninitialized and use methods to set them. You can then use different methods in the different classes, and all subclasses will have access to the superclass methods.

creating variables from external data in python script

I want to read an external data source (excel) and create variables containing the data. Suppose the data is in columns and each column has a header with the variable name.
My first idea is to write a function so I can easily reuse it. Also, I could easily give some additional keyword arguments to make the function more versatile.
The problem I'm facing is that I want to refer to the data in python (interactively) via the variable names. I don't know how to do that (with a function). The only solution I see is returning the variable names and the data from my function (eg as lists), and do something like this:
def get_data(my_excel):
    (...)
    return names, values

names, values = get_data(my_excel)
for n, v in zip(names, values):
    exec(''.join([n, '= v']))
Can I get the same result directly?
Thanks,
Roel
Use a dictionary to store your mapping from names to values instead of creating local variables.
def get_data(excel_document):
    mapping = {}
    mapping['name1'] = 'value1'
    # ...
    return mapping

mapping = get_data(my_excel)
for name, value in mapping.items():  # .items() yields (name, value) pairs
    ...  # use them
If you really want to populate variables from the mapping, you can modify globals() (or locals()), but it is generally considered bad practice.
mapping = get_data(my_excel)
globals().update(mapping)
If you just want to set local variables for each name in names, you might try:

for n, v in zip(names, values):
    locals()[n] = v

but note that this is only reliable at module scope, where locals() is the same dict as globals(); inside a function, CPython does not guarantee that writes to locals() create real local variables.
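A small sketch of that pitfall under CPython:

def f():
    locals()['n'] = 5
    print(n)  # NameError: the write to locals() did not create a real local

f()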
If you'd rather have a single object for accessing the data, which is much cleaner, simply use a dict, and return that from your function.
def get_data():
    (...)
    return dict(zip(names, values))
To access the value of the name "a", simply use get_data()["a"].
Finally, if you want to access the data as attributes of an object, you can update the __dict__ of an object (unexpected behaviour may occur if any of your column names are equal to any special python methods).
class Data(object):
    def __init__(self, my_excel):
        (...)
        self.__dict__.update(zip(names, values))

data = Data("test.xls")
print(data.a)
The traditional approach would be to stuff the key/value pairs into a dict so that you can easily pass the whole structure around to other functions. If you really want to store them as attributes instead of dict keys, consider creating a class to hold them:
class Values(object): pass

store = Values()
for key, value in zip(names, values):
    setattr(store, key, value)
That keeps the variables in their own namespace, separate from your running code. That's almost always a Good Thing. What if you get a spreadsheet with a header called "my_excel"? Suddenly you've lost access to your original my_excel object, which would be very inconvenient if you needed it again.
But in any case, you should never use exec unless you know exactly what you're doing. And even then, don't use exec. For instance, suppose I know how your code works and send you a spreadsheet with "os.system('echo rm -rf *')" in a cell. You probably don't really want to execute that.

PyYAML parse into arbitary object

I have the following Python 2.6 program and YAML definition (using PyYAML):
import yaml

x = yaml.load("""
product:
    name : 'Product X'
    sku : 123
    features :
      - size : '10x30cm'
        weight : '10kg'
""")

print type(x)
print x
Which results in the following output:
<type 'dict'>
{'product': {'sku': 123, 'name': 'Product X', 'features': [{'weight': '10kg', 'size': '10x30cm'}]}}
Is it possible to create an object with fields from x?
I would like to do the following:
print x.features[0].size
I am aware that it is possible to create and instance from an existing class, but that is not what I want for this particular scenario.
Edit:
Updated the confusing part about a 'strongly typed object'.
Changed access to features to an indexer, as suggested by Alex Martelli.
So you have a dictionary with string keys and values that can be numbers, nested dictionaries, lists, and you'd like to wrap that into an instance which lets you use attribute access in lieu of dict indexing, and "call with an index" in lieu of list indexing -- not sure what "strongly typed" has to do with this, or why you think .features(0) is better than .features[0] (such a more natural way to index a list!), but, sure, it's feasible. For example, a simple approach might be:
def wrap(datum):
    # don't wrap strings
    if isinstance(datum, basestring):
        return datum
    # don't wrap numbers, either
    try: return datum + 0
    except TypeError: pass
    return Fourie(datum)

class Fourie(object):
    def __init__(self, data):
        self._data = data
    def __getattr__(self, n):
        return wrap(self._data[n])
    def __call__(self, n):
        return wrap(self._data[n])
So x = wrap(x['product']) should give you your wish (why you want to skip that level when your overall logic would obviously require x.product.features(0).size, I have no idea, but clearly that skipping's better applied at the point of call rather than hard-coded in the wrapper class or the wrapper factory function I've just shown).
Edit: as the OP says he does want features[0] rather than features(0), just change the last two lines to
    def __getitem__(self, n):
        return wrap(self._data[n])
i.e., define __getitem__ (the magic method underlying indexing) instead of __call__ (the magic method underlying instance-call).
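A hypothetical end-to-end usage, combining the __getitem__ version of the wrapper with the dict parsed from the YAML above:

product = wrap(x['product'])
print product.name              # Product X
print product.features[0].size  # 10x30cm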
The alternative to "an existing class" (here, Fourie) would be to create a new class on the fly based on introspecting the wrapped dict -- feasible, too, but seriously dark-gray, if not actually black, magic, and without any real operational advantage that I can think of.
If the OP can clarify exactly why he may be hankering after the meta-programming peaks of creating classes on the fly, what advantage he believes he might be getting that way, etc, I'll show how to do it (and, probably, I'll also show why the craved-for advantage will not in fact be there;-). But simplicity is an important quality in any programming endeavor, and using "deep dark magic" when plain, straightforward code like the above works just fine, is generally not the best of ideas!-)
