Can PyYAML dump dict items in non-alphabetical order?

I'm using yaml.dump to output a dict. It prints out each item in alphabetical order based on the key.
>>> d = {"z":0,"y":0,"x":0}
>>> yaml.dump( d, default_flow_style=False )
'x: 0\ny: 0\nz: 0\n'
Is there a way to control the order of the key/value pairs?
In my particular use case, printing in reverse would (coincidentally) be good enough. For completeness though, I'm looking for an answer that shows how to control the order more precisely.
I've looked at using collections.OrderedDict but PyYAML doesn't (seem to) support it. I've also looked at subclassing yaml.Dumper, but I haven't been able to figure out if it has the ability to change item order.

If you upgrade PyYAML to version 5.1 or later, it supports dumping without sorting the keys:
yaml.dump(data, sort_keys=False)
As shown in help(yaml.Dumper), sort_keys defaults to True:
Dumper(stream, default_style=None, default_flow_style=False,
canonical=None, indent=None, width=None, allow_unicode=None,
line_break=None, encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None, sort_keys=True)
(These are passed as kwargs to yaml.dump)
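For example, with PyYAML 5.1+ on Python 3.7+ (where plain dicts preserve insertion order), the question's dict dumps in the order it was built:

```python
import yaml

# Requires PyYAML >= 5.1; on Python 3.7+ plain dicts keep insertion order
d = {"z": 0, "y": 0, "x": 0}
print(yaml.dump(d, sort_keys=False))
# z: 0
# y: 0
# x: 0
```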

There's probably a better workaround, but I couldn't find anything in the documentation or the source.
Python 2 (see comments)
I subclassed OrderedDict and made it return a list of unsortable items:
from collections import OrderedDict
import yaml

class UnsortableList(list):
    def sort(self, *args, **kwargs):
        pass

class UnsortableOrderedDict(OrderedDict):
    def items(self, *args, **kwargs):
        return UnsortableList(OrderedDict.items(self, *args, **kwargs))

yaml.add_representer(UnsortableOrderedDict, yaml.representer.SafeRepresenter.represent_dict)
And it seems to work:
>>> d = UnsortableOrderedDict([
... ('z', 0),
... ('y', 0),
... ('x', 0)
... ])
>>> yaml.dump(d, default_flow_style=False)
'z: 0\ny: 0\nx: 0\n'
Python 3 or 2 (see comments)
You can also write a custom representer, but I don't know if you'll run into problems later on, as I stripped out some style checking code from it:
import yaml
from collections import OrderedDict

def represent_ordereddict(dumper, data):
    value = []
    for item_key, item_value in data.items():
        node_key = dumper.represent_data(item_key)
        node_value = dumper.represent_data(item_value)
        value.append((node_key, node_value))
    return yaml.nodes.MappingNode(u'tag:yaml.org,2002:map', value)

yaml.add_representer(OrderedDict, represent_ordereddict)
But with that, you can use the native OrderedDict class.

For Python 3.7+, dicts preserve insertion order. Since PyYAML 5.1, you can disable the sorting of keys (#254). Unfortunately, key sorting still defaults to True.
>>> import yaml
>>> yaml.dump({"b":1, "a": 2})
'a: 2\nb: 1\n'
>>> yaml.dump({"b":1, "a": 2}, sort_keys=False)
'b: 1\na: 2\n'
My project oyaml is a monkeypatch/drop-in replacement for PyYAML. It will preserve dict order by default in all Python versions and PyYAML versions.
>>> import oyaml as yaml # pip install oyaml
>>> yaml.dump({"b":1, "a": 2})
'b: 1\na: 2\n'
Additionally, it will dump collections.OrderedDict instances as normal mappings, rather than as Python objects.
>>> from collections import OrderedDict
>>> d = OrderedDict([("b", 1), ("a", 2)])
>>> import yaml
>>> yaml.dump(d)
'!!python/object/apply:collections.OrderedDict\n- - - b\n - 1\n - - a\n - 2\n'
>>> yaml.safe_dump(d)
RepresenterError: ('cannot represent an object', OrderedDict([('b', 1), ('a', 2)]))
>>> import oyaml as yaml
>>> yaml.dump(d)
'b: 1\na: 2\n'
>>> yaml.safe_dump(d)
'b: 1\na: 2\n'

One-liner to rule them all:
yaml.add_representer(dict, lambda self, data: yaml.representer.SafeRepresenter.represent_dict(self, data.items()))
That's it. Finally. After all those years and hours, the mighty represent_dict has been defeated by giving it the dict.items() instead of just dict
Here is how it works:
This is the relevant PyYaml source code:
if hasattr(mapping, 'items'):
    mapping = list(mapping.items())
    try:
        mapping = sorted(mapping)
    except TypeError:
        pass
for item_key, item_value in mapping:
To prevent the sorting, we just need an Iterable[Pair] object that does not have .items().
dict_items is a perfect candidate for this.
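A quick check of that property (no PyYAML needed):

```python
# dicts have .items(), so represent_mapping converts and sorts them;
# the dict_items view itself has no .items(), so the sorting branch is skipped
d = {"z": 0, "y": 0, "x": 0}
assert hasattr(d, "items")
assert not hasattr(d.items(), "items")
```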
Here is how to do this without affecting the global state of the yaml module:
# Using a custom Dumper class to prevent changing the global state
class CustomDumper(yaml.Dumper):
    # Super neat hack to preserve the mapping key order. See https://stackoverflow.com/a/52621703/1497385
    def represent_dict_preserve_order(self, data):
        return self.represent_dict(data.items())

CustomDumper.add_representer(dict, CustomDumper.represent_dict_preserve_order)

yaml.dump(component_dict, Dumper=CustomDumper)

This is really just an addendum to @Blender's answer. If you look in the PyYAML source, at the representer.py module, you find this method:
def represent_mapping(self, tag, mapping, flow_style=None):
    value = []
    node = MappingNode(tag, value, flow_style=flow_style)
    if self.alias_key is not None:
        self.represented_objects[self.alias_key] = node
    best_style = True
    if hasattr(mapping, 'items'):
        mapping = mapping.items()
        mapping.sort()
    for item_key, item_value in mapping:
        node_key = self.represent_data(item_key)
        node_value = self.represent_data(item_value)
        if not (isinstance(node_key, ScalarNode) and not node_key.style):
            best_style = False
        if not (isinstance(node_value, ScalarNode) and not node_value.style):
            best_style = False
        value.append((node_key, node_value))
    if flow_style is None:
        if self.default_flow_style is not None:
            node.flow_style = self.default_flow_style
        else:
            node.flow_style = best_style
    return node
If you simply remove the mapping.sort() line, then it maintains the order of items in the OrderedDict.
Another solution is given in this post. It's similar to @Blender's, but works for safe_dump. The common element is converting the dict to a list of tuples, so the if hasattr(mapping, 'items') check evaluates to false.
Update:
I just noticed that The Fedora Project's EPEL repo has a package called python2-yamlordereddictloader, and there's one for Python 3 as well. The upstream project for that package is likely cross-platform.

There are two things you need to do to get this as you want:
you need to use something else than a dict, because it doesn't keep the items ordered
you need to dump that alternative in the appropriate way.¹
import sys
import ruamel.yaml
from ruamel.yaml.comments import CommentedMap
d = CommentedMap()
d['z'] = 0
d['y'] = 0
d['x'] = 0
ruamel.yaml.round_trip_dump(d, sys.stdout)
output:
z: 0
y: 0
x: 0
¹ This was done using ruamel.yaml, a YAML 1.2 parser, of which I am the author.

If safe_dump (i.e. dump with Dumper=SafeDumper) is used, then calling yaml.add_representer has no effect. In that case, it is necessary to call the add_representer method explicitly on the SafeRepresenter class:
yaml.representer.SafeRepresenter.add_representer(
    OrderedDict, ordered_dict_representer
)
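A minimal end-to-end sketch, assuming `ordered_dict_representer` is an insertion-order mapping representer like the ones shown elsewhere in this thread:

```python
import yaml
from collections import OrderedDict

def ordered_dict_representer(dumper, data):
    # Emit the OrderedDict as a plain YAML mapping, in insertion order;
    # data.items() has no .items() attribute, so represent_mapping won't sort it
    return dumper.represent_mapping('tag:yaml.org,2002:map', data.items())

yaml.representer.SafeRepresenter.add_representer(OrderedDict, ordered_dict_representer)

d = OrderedDict([('z', 0), ('y', 0), ('x', 0)])
print(yaml.safe_dump(d))
# z: 0
# y: 0
# x: 0
```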

I was also looking for an answer to the question "how to dump mappings with the order preserved?" I couldn't follow the solution given above as I am new to PyYAML and Python. After spending some time on the PyYAML documentation and other forums, I found this.
You can use the tag
!!omap
to dump the mappings while preserving the order. If you want to play with the order, I think you have to go for key:value pairs.
The links below can help for better understanding.
https://bitbucket.org/xi/pyyaml/issue/13/loading-and-then-dumping-an-omap-is-broken
http://yaml.org/type/omap.html

The following setting makes sure the content is not sorted in the output:
yaml.sort_base_mapping_type_on_output = False

Related

Is it possible to remove unnecessary nested structure in a yaml file?

I need to set a param that is deep inside a yaml object like below:
executors:
  hpc01:
    context:
      cp2k:
        charge: 0
Is it possible to make it more clear, for example
executors: hpc01: context: cp2k: charge: 0
I am using ruamel.yaml in Python to parse the file and it fails to parse the example. Is there some yaml dialect can support such style, or is there better way to write such configuration in standard yaml spec?
Since all JSON is valid YAML...
executors: {"hpc01" : {"context": {"cp2k": {"charge": 0}}}}
should be valid...
a little proof:
from ruamel.yaml import YAML

a = YAML().load('executors: {"hpc01" : {"context": {"cp2k": {"charge": 0}}}}')
b = YAML().load('''executors:
  hpc01:
    context:
      cp2k:
        charge: 0''')
if a == b:
    print("equal")
will print: equal.
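The same equivalence can be checked with plain PyYAML's safe_load, if that's what you have installed:

```python
import yaml

a = yaml.safe_load('executors: {"hpc01" : {"context": {"cp2k": {"charge": 0}}}}')
b = yaml.safe_load("""\
executors:
  hpc01:
    context:
      cp2k:
        charge: 0
""")
assert a == b  # flow style and block style parse to the same mapping
```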
What you are proposing is invalid YAML, since the colon + space is parsed as a value indicator. Since
YAML can have mappings as keys for other mappings, you would get all kinds of interpretation issues, such as
should
a: b: c
be interpreted as a mapping with key a: b and value c or as a mapping with key a and value b: c.
If you want to write everything on one line, and don't want the overhead of YAML's flow-style, I suggest
you use the fact that the value indicator expects a space after the colon and do a little post-processing:
import sys
import ruamel.yaml
yaml_str = """\
before: -1
executors:hpc01:context:cp2k:charge: 0
after: 1
"""
COLON = ':'

def unfold_keys(d):
    if isinstance(d, dict):
        replace = []
        for idx, (k, v) in enumerate(d.items()):
            if COLON in k:
                for segment in reversed(k.split(COLON)):
                    v = {segment: v}
                replace.append((idx, k, v))
            else:
                unfold_keys(v)
        for idx, oldkey, kv in replace:
            del d[oldkey]
            v = list(kv.values())[0]
            # v.refold = True
            d.insert(idx, list(kv.keys())[0], v)
    elif isinstance(d, list):
        for elem in d:
            unfold_keys(elem)
    return d
yaml = ruamel.yaml.YAML()
data = unfold_keys(yaml.load(yaml_str))
yaml.dump(data, sys.stdout)
which gives:
before: -1
executors:
  hpc01:
    context:
      cp2k:
        charge: 0
after: 1
Since ruamel.yaml parses mappings in the default mode to CommentedMap instances which have .insert() method,
you can actually preserve the position of the "unfolded" key in the mapping.
You can of course use another character (e.g. underscore). You can also reverse the process by uncommenting the line # v.refold = True and provide another recursive function that walks over the data and checks on that attribute and does the reverse
of unfold_keys(), just before dumping.

How to not remove duplicates automatically when using method json.loads? [duplicate]

I need to parse a JSON file which, unfortunately for me, does not follow the prototype. I have two issues with the data, but I've already found a workaround for one, so I'll just mention it at the end; maybe someone can help there as well.
So i need to parse entries like this:
"Test":{
"entry":{
"Type":"Something"
},
"entry":{
"Type":"Something_Else"
}
}, ...
The default JSON parser updates the dictionary and therefore uses only the last entry. I HAVE to somehow store the other one as well, and I have no idea how to do this. I also HAVE to store the keys in the several dictionaries in the same order they appear in the file; that's why I am using an OrderedDict to do so. It works fine, so if there is any way to extend this to handle the duplicate entries, I'd be grateful.
My second issue is that this very same json file contains entries like that:
"Test":{
{
"Type":"Something"
}
}
The json.load() function raises an exception when it reaches that line in the JSON file. The only way I worked around this was to manually remove the inner brackets myself.
Thanks in advance
You can use JSONDecoder.object_pairs_hook to customize how JSONDecoder decodes objects. This hook function will be passed a list of (key, value) pairs that you usually do some processing on, and then turn into a dict.
However, since Python dictionaries don't allow for duplicate keys (and you simply can't change that), you can return the pairs unchanged in the hook and get a nested list of (key, value) pairs when you decode your JSON:
from json import JSONDecoder

def parse_object_pairs(pairs):
    return pairs

data = """
{"foo": {"baz": 42}, "foo": 7}
"""

decoder = JSONDecoder(object_pairs_hook=parse_object_pairs)
obj = decoder.decode(data)
print obj
Output:
[(u'foo', [(u'baz', 42)]), (u'foo', 7)]
How you use this data structure is up to you. As stated above, Python dictionaries won't allow for duplicate keys, and there's no way around that. How would you even do a lookup based on a key? dct[key] would be ambiguous.
So you can either implement your own logic to handle a lookup the way you expect it to work, or implement some sort of collision avoidance to make keys unique if they're not, and then create a dictionary from your nested list.
Edit: Since you said you would like to modify the duplicate key to make it unique, here's how you'd do that:
from collections import OrderedDict
from json import JSONDecoder

def make_unique(key, dct):
    counter = 0
    unique_key = key
    while unique_key in dct:
        counter += 1
        unique_key = '{}_{}'.format(key, counter)
    return unique_key

def parse_object_pairs(pairs):
    dct = OrderedDict()
    for key, value in pairs:
        if key in dct:
            key = make_unique(key, dct)
        dct[key] = value
    return dct

data = """
{"foo": {"baz": 42, "baz": 77}, "foo": 7, "foo": 23}
"""

decoder = JSONDecoder(object_pairs_hook=parse_object_pairs)
obj = decoder.decode(data)
print obj
Output:
OrderedDict([(u'foo', OrderedDict([(u'baz', 42), ('baz_1', 77)])), ('foo_1', 7), ('foo_2', 23)])
The make_unique function is responsible for returning a collision-free key. In this example it just suffixes the key with _n where n is an incremental counter - just adapt it to your needs.
Because the object_pairs_hook receives the pairs exactly in the order they appear in the JSON document, it's also possible to preserve that order by using an OrderedDict, I included that as well.
Thanks a lot @Lukas Graf, I got it working as well by implementing my own version of the hook function:
def dict_raise_on_duplicates(ordered_pairs):
    count = 0
    d = collections.OrderedDict()
    for k, v in ordered_pairs:
        if k in d:
            d[k + '_dupl_' + str(count)] = v
            count += 1
        else:
            d[k] = v
    return d
Only thing remaining is to automatically get rid of the double brackets and I am done :D Thanks again
If you would prefer to convert those duplicated keys into an array, instead of having separate copies, this could do the work:
def dict_raise_on_duplicates(ordered_pairs):
    """Convert duplicate keys to JSON array."""
    d = {}
    for k, v in ordered_pairs:
        if k in d:
            if type(d[k]) is list:
                d[k].append(v)
            else:
                d[k] = [d[k], v]
        else:
            d[k] = v
    return d
And then you just use:
dict = json.loads(yourString, object_pairs_hook=dict_raise_on_duplicates)
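For example, applied to a document with repeated keys:

```python
import json

def dict_raise_on_duplicates(ordered_pairs):
    """Convert duplicate keys to JSON array."""
    d = {}
    for k, v in ordered_pairs:
        if k in d:
            if type(d[k]) is list:
                d[k].append(v)
            else:
                d[k] = [d[k], v]
        else:
            d[k] = v
    return d

obj = json.loads('{"a": 1, "a": 2, "b": 3}',
                 object_pairs_hook=dict_raise_on_duplicates)
print(obj)  # {'a': [1, 2], 'b': 3}
```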

More pythonic way to replace keywords in a string?

I am attempting to wrap an API with the following function. The API has end points that look similar to this:
/users/{ids}
/users/{ids}/permissions
The idea is that I'll be able to pass a dictionary to my function that contains a list of ids and those will be formatted as the API expects:
users = {'ids': [1, 2, 3, 5]}
call_api('/users/{ids}/permissions', users)
Then in call_api, I currently do something like this
def call_api(url, data):
    for k, value in data.items():
        if "{" + k + "}" in url:
            url = url.replace("{" + k + "}", ';'.join(str(x) for x in value))
            data.pop(k, None)
This works, but I can't imagine that if statement is efficient.
How can I improve it and have it work in both Python 2.7 and Python 3.5?
I've also been told that changing the dictionary while iterating is bad, but in my tests I've never had an issue. I am popping the value because I later check whether there are unexpected parameters (i.e. anything left in data). Is what I'm doing now the right way?
Instead of modifying a dictionary as you iterate over it, creating another object to hold the unused keys is probably the way to go. In Python 3.4+, at least, removing keys during iteration will raise a
RuntimeError: dictionary changed size during iteration.
def call_api(url, data):
    unused_keys = set()
    for k, value in data.items():
        key_pattern = "{" + k + "}"
        if key_pattern in url:
            formatted_value = ';'.join(map(str, value))
            url = url.replace(key_pattern, formatted_value)
        else:
            unused_keys.add(k)
Also, if you think that you're more likely to run into an unused key, reversing the conditions might be the way to go.
Here is one way to do it. First, the string is parsed for format keys. It then records all keys not used in the URL and sets them aside. Lastly, it formats the URL with the given parameters from the dict. The function returns the unused keys and the formatted URL. If you wish, you can remove the unused keys from the dict by iterating over them and deleting them from the dict.
Here's some documentation with examples regarding the format syntax.
import string

users = {'ids': [1, 2, 3, 5]}

def call_api(url, data):
    data_set = set(data)
    formatter = string.Formatter()
    used_set = {f[1] for f in formatter.parse(url) if f[1] is not None}
    unused_set = data_set - used_set
    formatted = url.format(**{k: ";".join(str(x) for x in v)
                              for k, v in data.items()})
    return unused_set, formatted

print(call_api('/users/{ids}/permissions', users))
You could use re.subn which returns the number of replacements made:
import re

def call_api(url, data):
    for k, value in list(data.items()):
        url, n = re.subn(r'\{%s\}' % k, ';'.join(str(x) for x in value), url)
        if n:
            del data[k]
Note that for compatibility with both python2 and python3, it is also necessary to create a copy of the list of items when destructively iterating over the dict.
EDIT:
It seems the main bottleneck is checking that the key is in the url. The in operator is easily the most efficient way to do this, and is much faster than a regex for the simple pattern that is being used here. Recording the unused keys separately is also more efficient than destructive iteration, but it doesn't make as much difference (relatively speaking).
So: there's not much wrong with the original solution, but the one given by #wegry is the most efficient.
The formatting keys can be found with a RegEx and then compared to the keys in the dictionary. Your string is already setup to use str.format, so you apply a transformation to the values in data, and then apply that transformation.
import re
from toolz import valmap

def call_api(url, data):
    unused = set(data) - set(re.findall(r'\{(\w+)\}', url))
    url = url.format_map(valmap(lambda v: ';'.join(map(str, v)), data))
    return url, unused
The usage looks like:
users = {'ids': [1, 2, 3, 5], 'unused_key': 'value'}
print(call_api('/users/{ids}/permissions', users))
# ('/users/1;2;3;5/permissions', {'unused_key'})
This isn't going to time that well, but it's concise. As noted in one of the comments, it seems unlikely that this method would be a bottleneck.

Dumping a dictionary to a YAML file while preserving order

I've been trying to dump a dictionary to a YAML file. The problem is that the program that imports the YAML file needs the keywords in a specific order. This order is not alphabetically.
import yaml
import os
baseFile = 'myfile.dat'
lyml = [{'BaseFile': baseFile}]
lyml.append({'Environment':{'WaterDepth':0.,'WaveDirection':0.,'WaveGamma':0.,'WaveAlpha':0.}})
CaseName = 'OrderedDict.yml'
CaseDir = r'C:\Users\BTO\Documents\Projects\Mooring code testen'
CaseFile = os.path.join(CaseDir, CaseName)
with open(CaseFile, 'w') as f:
    yaml.dump(lyml, f, default_flow_style=False)
This produces a *.yml file which is formatted like this:
- BaseFile: myfile.dat
- Environment:
    WaterDepth: 0.0
    WaveAlpha: 0.0
    WaveDirection: 0.0
    WaveGamma: 0.0
But what I want is that the order is preserved:
- BaseFile: myfile.dat
- Environment:
    WaterDepth: 0.0
    WaveDirection: 0.0
    WaveGamma: 0.0
    WaveAlpha: 0.0
Is this possible?
yaml.dump has a sort_keys keyword argument that is set to True by default. Set it to False to not reorder:
with open(CaseFile, 'w') as f:
    yaml.dump(lyml, f, default_flow_style=False, sort_keys=False)
Use an OrderedDict instead of dict. Run the below setup code at the start. Now yaml.dump, should preserve the order. More details here and here
def setup_yaml():
    """ https://stackoverflow.com/a/8661021 """
    represent_dict_order = lambda self, data: self.represent_mapping('tag:yaml.org,2002:map', data.items())
    yaml.add_representer(OrderedDict, represent_dict_order)

setup_yaml()
Example: https://pastebin.com/raw.php?i=NpcT6Yc4
PyYAML supports representer to serialize a class instance to a YAML node.
yaml.YAMLObject uses metaclass magic to register a constructor, which transforms a YAML node to a class instance, and a representer, which serializes a class instance to a YAML node.
Add following lines above your code:
def represent_dictionary_order(self, dict_data):
    return self.represent_mapping('tag:yaml.org,2002:map', dict_data.items())

def setup_yaml():
    yaml.add_representer(OrderedDict, represent_dictionary_order)

setup_yaml()
Then you can use OrderedDict to preserve the order in yaml.dump():
import yaml
from collections import OrderedDict
def represent_dictionary_order(self, dict_data):
    return self.represent_mapping('tag:yaml.org,2002:map', dict_data.items())

def setup_yaml():
    yaml.add_representer(OrderedDict, represent_dictionary_order)

setup_yaml()
dic = OrderedDict()
dic['a'] = 1
dic['b'] = 2
dic['c'] = 3
print(yaml.dump(dic))
# {a: 1, b: 2, c: 3}
Your difficulties are a result of assumptions on multiple levels that are incorrect and, depending on your YAML parser, might not be transparently resolvable.
In Python's dict the keys are unordered (at least for Python < 3.6). And even though the keys have some order in the source file, as soon as they are in the dict they aren't:
d = {'WaterDepth': 0., 'WaveDirection': 0., 'WaveGamma': 0., 'WaveAlpha': 0.}
for key in d:
    print key
gives:
WaterDepth
WaveGamma
WaveAlpha
WaveDirection
If you want your keys ordered you can use the collections.OrderedDict type (or my own ruamel.ordereddict type which is in C and more than an order of magnitude faster), and you have to add the keys ordered, either as a list of tuples:
from ruamel.ordereddict import ordereddict
# from collections import OrderedDict as ordereddict  # < this will work as well
d = ordereddict([('WaterDepth', 0.), ('WaveDirection', 0.), ('WaveGamma', 0.), ('WaveAlpha', 0.)])
for key in d:
    print key
which will print the keys in the order they were specified in the source.
The second problem is that even if a Python dict has some key ordering that happens to be what you want, the YAML specification does explicitly say that mappings are unordered and that is the way e.g. PyYAML implements the dumping of Python dict to YAML mapping (And the other way around).
Also, if you dump an ordereddict or OrderedDict you normally don't get the plain YAML mapping that you indicate you want, but some tagged YAML entry.
As losing the order is often undesirable, in your case because your reader assumes some order, in my case because that made it difficult to compare versions because key ordering would not be consistent after insertion/deletion, I implemented round-trip consistency in ruamel.yaml so you can do:
import sys
import ruamel.yaml as yaml
yaml_str = """\
- BaseFile: myfile.dat
- Environment:
    WaterDepth: 0.0
    WaveDirection: 0.0
    WaveGamma: 0.0
    WaveAlpha: 0.0
"""
data = yaml.load(yaml_str, Loader=yaml.RoundTripLoader)
print(data)
yaml.dump(data, sys.stdout, Dumper=yaml.RoundTripDumper)
which gives you exactly your output result. data works as a dict (and so does `data['Environment']`), but underneath they are smarter constructs that preserve order, comments, YAML anchor names, etc. You can of course change these (adding/deleting key-value pairs), which is easy, but you can also build them from scratch:
import sys
import ruamel.yaml as yaml
from ruamel.yaml.comments import CommentedMap
baseFile = 'myfile.dat'
lyml = [{'BaseFile': baseFile}]
lyml.append({'Environment': CommentedMap([('WaterDepth', 0.), ('WaveDirection', 0.), ('WaveGamma', 0.), ('WaveAlpha', 0.)])})
yaml.dump(lyml, sys.stdout, Dumper=yaml.RoundTripDumper)
Which again prints the contents with keys in the order you want them.
I find the latter less readable than starting from a YAML string, but it does construct the lyml data structure somewhat faster.
oyaml is a Python library which preserves dict ordering when dumping.
It is specifically helpful in more complex cases where the dictionary is nested and may contain lists.
Once installed:
import oyaml as yaml
with open(CaseFile, 'w') as f:
    f.write(yaml.dump(lyml))

python: getting sub-dicts in dicts dynamically?

Say I want to write a function which will return an arbitrary value from a dict, like: mydict['foo']['bar']['baz'], or return an empty string if it doesn't. However, I don't know if mydict['foo'] will necessarily exist, let alone mydict['foo']['bar']['baz'].
I'd like to do something like:
def safe_nested(dict, element):
    try:
        return dict[element]
    except KeyError:
        return ''
But I don't know how to approach writing code that will accept the lookup path in the function. I started going down the route of accepting a period-separated string (like foo.bar.baz) so this function could recursively try to get the next sub-dict, but this didn't feel very Pythonic. I'm wondering if there's a way to pass in both the dict (mydict) and the sub-structure I'm interested in (['foo']['bar']['baz']), and have the function try to access this or return an empty string if it encounters a KeyError.
Am I going about this in the right way?
You should use the standard defaultdict: https://docs.python.org/2/library/collections.html#collections.defaultdict
For how to nest them, see: defaultdict of defaultdict, nested or Multiple levels of 'collection.defaultdict' in Python
I think this does what you want:
from collections import defaultdict
mydict = defaultdict(lambda: defaultdict(lambda: defaultdict(str)))
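With that in place, missing paths quietly resolve to the empty string the question asked for (note that each lookup also creates the intermediate entries as a side effect):

```python
from collections import defaultdict

# Three levels deep; the innermost factory is str, so missing leaves are ''
mydict = defaultdict(lambda: defaultdict(lambda: defaultdict(str)))
mydict['foo']['bar']['baz'] = 42

print(mydict['foo']['bar']['baz'])        # 42
print(repr(mydict['no']['such']['key']))  # ''
```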
You might also want to check out addict.
>>> from addict import Dict
>>> addicted = Dict()
>>> addicted.a = 2
>>> addicted.b.c.d.e
{}
>>> addicted
{'a': 2, 'b': {'c': {'d': {'e': {}}}}}
It returns an empty Dict, not an empty string, but apart from that it looks like it does what you ask for in the question.
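If you'd rather keep a plain dict and pass the lookup path to a function, as the question sketches, one possibility is a reduce over the keys (safe_nested here is a hypothetical helper, not from any library):

```python
from functools import reduce

def safe_nested(d, *keys):
    # Walk d[keys[0]][keys[1]]... and return '' if any step is missing,
    # or if a non-dict value is indexed along the way
    try:
        return reduce(lambda acc, k: acc[k], keys, d)
    except (KeyError, TypeError):
        return ''

mydict = {'foo': {'bar': {'baz': 42}}}
print(safe_nested(mydict, 'foo', 'bar', 'baz'))  # 42
print(repr(safe_nested(mydict, 'foo', 'nope')))  # ''
```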
