I have a dictionary that links various species in a parent-daughter decay chain. For example:
d = {
    'A': {'daughter': ['B']},
    'B': {'daughter': ['C']},
    'C': {'daughter': ['D']},
    'D': {'daughter': [None]},
    'E': {'daughter': ['F']},
    'F': {'daughter': ['G']},
    'G': {'daughter': ['H']},
    'H': {'daughter': [None]}
}
In this dictionary, the top level key is the 'parent' and the 'daughter' (i.e. what the parent decays to in the chain) is defined as a key:value item in the dictionary attached to the parent key. When None is given for the daughter, that is considered to be the end of the chain.
I want a function to return a sub-dictionary containing the items in the chain according to the user's input for the starting parent. I would also like to know the position of each item in the chain; in the sub-dictionary this can be a second field ('position').
For example, if the user wants to start the chain at 'A', I would like the function to return:
{'A': {'position': 1, 'daughter': ['B']},
 'B': {'position': 2, 'daughter': ['C']},
 'C': {'position': 3, 'daughter': ['D']},
 'D': {'position': 4, 'daughter': [None]}}
Similarly, if the starting value was 'E' I would like it to return:
{'E': {'position': 1, 'daughter': ['F']},
 'F': {'position': 2, 'daughter': ['G']},
 'G': {'position': 3, 'daughter': ['H']},
 'H': {'position': 4, 'daughter': [None]}}
This is relatively easy when the linking is one-to-one i.e. one item decays into another, into another etc.
If I now use a more complex example, as below, you can see that 'B' actually decays into both 'C' and 'D' and from there onwards the chains are separate.
A => B => C => E => G and A => B => D => F => H
d = {
    'A': {'daughter': ['B']},
    'B': {'daughter': ['C', 'D']},
    'C': {'daughter': ['E']},
    'D': {'daughter': ['F']},
    'E': {'daughter': ['G']},
    'F': {'daughter': ['H']},
    'G': {'daughter': [None]},
    'H': {'daughter': [None]}
}
In this case I would like a function to return the following output. You'll notice that, because of the divergence of the two chains, the position values are close to the level in the hierarchy (e.g. C = 3 and D = 4) but not exactly the same. I don't want to follow the C chain all the way down and then repeat for the D chain.
{'A': {'position': 1, 'daughter': ['B']},
 'B': {'position': 2, 'daughter': ['C', 'D']},
 'C': {'position': 3, 'daughter': ['E']},
 'D': {'position': 4, 'daughter': ['F']},
 'E': {'position': 5, 'daughter': ['G']},
 'F': {'position': 6, 'daughter': ['H']},
 'G': {'position': 8, 'daughter': [None]},
 'H': {'position': 9, 'daughter': [None]}
}
Any thoughts? The function should be able to cope with more than one branching point in the chain.
Mark
If you don't want to go all the way down from C, then breadth-first search may help.
def bfs(d, start):
    answer = {}
    queue = [start]
    head = 0
    while head < len(queue):
        # Fetch the next element from the queue
        now = queue[head]
        answer[now] = {
            'position': head + 1,
            'daughter': d[now]['daughter']
        }
        # Add daughters to the queue
        for nxt in d[now]['daughter']:
            if nxt is None:
                continue
            queue.append(nxt)
        head += 1
    return answer
d = {
    'A': {'daughter': ['B']},
    'B': {'daughter': ['C', 'D']},
    'C': {'daughter': ['E']},
    'D': {'daughter': ['F']},
    'E': {'daughter': ['G']},
    'F': {'daughter': ['H']},
    'G': {'daughter': [None]},
    'H': {'daughter': [None]}
}
print(bfs(d, 'A'))
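The same breadth-first idea can also be written with collections.deque, which pops from the front of the queue in O(1). This is only a sketch equivalent to bfs() above; the name bfs_chain and the explicit position counter are my own choices, not part of the original answer:
from collections import deque

def bfs_chain(d, start):
    # Sketch: same breadth-first traversal as bfs() above, but using a deque
    # so processed items are popped from the front instead of indexed by head.
    answer = {}
    queue = deque([start])
    position = 1
    while queue:
        now = queue.popleft()
        answer[now] = {'position': position, 'daughter': d[now]['daughter']}
        position += 1
        for nxt in d[now]['daughter']:
            if nxt is not None:
                queue.append(nxt)
    return answer

# bfs_chain(d, 'A') visits A, B, C, D, E, F, G, H in that order and assigns
# positions 1 through 8.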
I'm creating a simple container system, in which my objects (all children of a class called GeneralWidget) are grouped in containers, which are in another set of containers, and so on until everything is in one global container.
I have a custom class called GeneralContainer, in which I had to override the __str__ method to provide a descriptive name for my container, so I know what kind of objects or containers are stored inside it.
I am currently writing another class called ObjectTracker in which all positions of my objects are stored. When a new object is created, it passes a list containing its name (in its __init__ method) to its "parent" in my hierarchy, which adds itself to the list and passes it on. At some point this list of all objects above the newly created GeneralWidget instance will reach the global container (containing all containers and widgets), which can access the ObjectTracker object in my main().
This is the background of my problem. My ObjectTracker has a dictionary in which every "first level container" is a key, and all objects inside such a container are stored in dictionaries as well. So I have many nested dictionaries.
As I don't know how many levels of containers there will be, I need a dynamic syntax that is independent of the number of dictionaries I have to pass through until I get to the place in the big dictionary that I want. A (static) call inside my ObjectRepository class would need to look something like this:
self._OBJECTREPOSITORY[firstlevelcontainer12][secondlevel8][lastlevel4] = myNewObject
with firstlevelcontainer12 containing secondlevel8 which contains lastlevel4 in which the new object should be placed
But I know neither what the containers will be called nor how many there will be, so I decided to use exec() and compose a string with all the names in it. Here is my actual code, the definition of ObjectTracker:
class ObjectTracker:
    def __init__(self):
        self._NAMEREPOSITORY = {}

    def addItem(self, pathAsList):
        usableList = list(reversed(pathAsList))
        string = "self._NAMEREPOSITORY"
        for thing in usableList:
            if usableList[-1] != [thing]:
                string += "[" + str(thing) + "]"
            else:
                string += "] = " + str(thing)
        print(string)
        exec(string)
The problem is that I have overridden the __str__ method of GeneralContainer and GeneralWidget to give back a descriptive name. This came in VERY handy on many occasions, but now it has become a big problem: the code above only works if the custom name is the same as the name of the instance (of course, I get why!).
The question is: does a built-in function exist to do the following?
>>> alis = ExampleClass()
>>> DOESTHISEXIST(alis)
'alis'
If not, how can I write a custom one without destroying my well-working naming system?
Note: Since I'm not exactly sure what you want, I'll attempt to provide a general solution.
First off, avoid eval/exec like the plague. There are serious problems one encounters when using them, and there's almost always a better way. One such way is what I propose below.
You seem to want a way to find a certain point in a nested dictionary given a list of specific keys. This can be done quite easily by using a for-loop to traverse said dictionary. For example:
>>> def get_value(dictionary, keys):
        value = dictionary
        for key in keys:
            value = value[key]
        return value
>>> d = {'a': 1, 'b': {'c': 2, 'd': 3, 'e': {'f': 4, }, 'g': 5}}
>>> get_value(d, ('b', 'e', 'f'))
4
>>>
If you need to assign to a specific part of a certain nested dictionary, this can also be done using the above code:
>>> dd = get_value(d, ('b', 'e')) # grab a dictionary object
>>> dd
{'f': 4}
>>> dd['h'] = 6
>>> # the d dictionary is changed.
>>> d
{'a': 1, 'b': {'c': 2, 'd': 3, 'e': {'f': 4, 'h': 6}, 'g': 5}}
>>>
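If you also want to create or assign entries at an arbitrary depth (closer to what addItem was meant to do), a small companion to get_value could look like the sketch below; the name set_value and the choice to create missing intermediate dictionaries with setdefault are my assumptions, not something built in:
def set_value(dictionary, keys, value):
    # Sketch: walk down to the second-to-last key, creating missing levels,
    # then assign `value` under the final key.
    *parents, last = keys
    target = dictionary
    for key in parents:
        target = target.setdefault(key, {})
    target[last] = value

# set_value(d, ('b', 'e', 'h'), 6) has the same effect as
# get_value(d, ('b', 'e'))['h'] = 6, but also creates any missing levels.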
Below is a formalized version of the function above, with error testing and documentation (in a custom style):
NO_VALUE = object()

def traverse_mapping(mapping, keys, default=NO_VALUE):
    """
    Description
    -----------
    Given a - often nested - mapping structure and a list of keys, use the
    keys to recursively traverse the given dictionary and retrieve a certain
    key's value.

    If the function reaches a point where the mapping can no longer be
    traversed (i.e. the current value retrieved from the current mapping
    structure is itself not a mapping type) or a given key is found to
    be non-existent, a default value can be provided to return. If no
    default value is given, exceptions will be allowed to raise as normal
    (a TypeError or KeyError respectively.)

    Examples (In the form of a Python IDLE session)
    -----------------------------------------------
    >>> d = {'a': 1, 'b': {'c': 2, 'd': 3, 'e': {'f': 4, }, 'g': 5}}
    >>> traverse_mapping(d, ('b', 'e', 'f'))
    4
    >>> inner_d = traverse_mapping(d, ('b', 'e'))
    >>> inner_d
    {'f': 4}
    >>> inner_d['h'] = 6
    >>> d
    {'a': 1, 'b': {'c': 2, 'd': 3, 'e': {'f': 4, 'h': 6}, 'g': 5}}
    >>> traverse_mapping(d, ('b', 'e', 'x'))
    Traceback (most recent call last):
      File "<pyshell#14>", line 1, in <module>
        traverse_mapping(d, ('b', 'e', 'x'))
      File "C:\Users\Christian\Desktop\langtons_ant.py", line 33, in traverse_mapping
        value = value[key]
    KeyError: 'x'
    >>> traverse_mapping(d, ('b', 'e', 'x'), default=0)
    0
    >>>

    Parameters
    ----------
    - mapping : mapping
        Any map-like structure which supports key-value lookup.

    - keys : iterable
        An iterable of keys to be used in traversing the given mapping.
    """
    value = mapping
    for key in keys:
        try:
            value = value[key]
        except (TypeError, KeyError):
            if default is not NO_VALUE:
                return default
            raise
    return value
I think you might be looking for vars().
a = 5
# prints the value of a
print(vars()['a'])
# prints all the currently defined variables
print(vars())
# this will throw an error since b is not defined
print(vars()['b'])
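As far as I know there is no built-in that maps an object back to its variable name, but on top of vars() you can scan a namespace for names bound to a given object. A rough sketch (the name find_names is hypothetical, and an object can have zero, one, or many names):
class ExampleClass:
    pass

def find_names(obj, namespace):
    # Return every name in the given namespace dict that is bound to exactly this object.
    return [name for name, value in namespace.items() if value is obj]

alis = ExampleClass()
print(find_names(alis, vars()))  # ['alis']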
I have the following (very simplified) dict. The get_details function is an API call that I would like to avoid doing twice.
ret = {
    'a': a,
    'b': [{
        'c': item.c,
        'e': item.get_details()[0].e,
        'h': [func_h(detail) for detail in item.get_details()],
    } for item in items]
}
I could of course rewrite the code like this:
b = []
for item in items:
    details = item.get_details()
    b.append({
        'c': item.c,
        'e': details[0].e,
        'h': [func_h(detail) for detail in details],
    })

ret = {
    'a': a,
    'b': b
}
but would like to use the first approach since it seems more pythonic.
You could use an intermediary generator to extract the details from your items. Something like this:
ret = {
    'a': a,
    'b': [{
        'c': item.c,
        'e': details[0].e,
        'h': [func_h(detail) for detail in details],
    } for (item, details) in ((item, item.get_details()) for item in items)]
}
I don't find the second one particularly un-pythonic; you have a complex initialization, and you shouldn't expect it to boil down to a single simple expression. That said, you don't need the temporary list b; you can work directly with ret['b']:
ret = {
    'a': a,
    'b': []
}

for item in items:
    details = item.get_details()
    d = details[0]
    ret['b'].append({
        'c': item.c,
        'e': d.e,
        'h': map(func_h, details)
    })
This is also a case where I would choose map over a list comprehension. (If this were Python 3, you would need to wrap that in an additional call to list.)
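A tiny illustration with stand-in values, since func_h and details come from the asker's code:
func_h = str            # stand-in for the real func_h
details = [1, 2, 3]     # stand-in for item.get_details()

print(map(func_h, details))        # Python 3: a lazy <map object ...>, not a list
print(list(map(func_h, details)))  # ['1', '2', '3'] -- wrap in list() to materialise it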
I wouldn't try too hard to be more pythonic if it means looking like your first approach. I would take your second approach a step further, and just use a separate function:
ret = {
    'a': a,
    'b': get_b_from_items(items)
}
I think that's as clean as it can get. Use comments/docstrings to indicate what 'b' is, test the function, and then the next person who comes along can quickly read and trust your code. I know you know how to write the function, but for the sake of completeness, here's how I would do it:
# and add this in where you want it
def get_b_from_items(items):
    """Return a list of (your description here)."""
    result = []
    for item in items:
        details = item.get_details()
        result.append({
            'c': item.c,
            'e': details[0].e,
            'h': [func_h(detail) for detail in details],
        })
    return result
That is plenty pythonic (note the docstring-- very pythonic), and very readable. And of course, it has the advantage of being slightly more granularly testable, complex logic abstracted away from the higher level logic, and all the other advantages of using functions.
Introduction
The following dictionary has three levels of keys and then a value.
d = {
    1: {
        'A': {
            'i': 100,
            'ii': 200
        },
        'B': {
            'i': 300
        }
    },
    2: {
        'A': {
            'ii': 500
        }
    }
}
Examples of entries that need to be added:
d[1]['B']['ii'] = 600 # OK
d[2]['C']['iii'] = 700 # Keyerror on 'C'
d[3]['D']['iv'] = 800 # Keyerror on 3
Problem Statement
I wanted to create code that would create the necessary nested keys and avoid any key errors.
Solution 1
The first solution I came up with, was:
def NewEntry_1(d, lv1, lv2, lv3, value):
    if lv1 in d:
        if lv2 in d[lv1]:
            d[lv1][lv2][lv3] = value
        else:
            d[lv1][lv2] = {lv3: value}
    else:
        d[lv1] = {lv2: {lv3: value}}
Seems legit, but embedding this in other pieces of code made it mind-boggling. I explored Stack Overflow for other solutions and read up on the get() and setdefault() functions.
Solution 2
There is plenty of material to find about get() and setdefault(), but not so much on nested dictionaries. Ultimately I was able to come up with:
def NewEntry_2(d, lv1, lv2, lv3, value):
    return d.setdefault(lv1, {}).setdefault(lv2, {}).setdefault(lv3, value)
It is one line of code so it is not really necessary to make it a function. Easily modifiable to include operations:
d[lv1][lv2][lv3] = d.setdefault(lv1, {}).setdefault(lv2,{}).setdefault(lv3, 0) + value
Seems perfect?
Question
When adding large quantities of entries and doing many modifications, is option 2 better than option 1? Or should I define function 1 and call it? The answers I'm looking for should take into account speed and/or the potential for errors.
Examples
NewEntry_1(d, 1, 'B', 'ii', 600)
# output = {1: {'A': {'i': 100, 'ii': 200}, 'B': {'i': 300, 'ii': 600}}, 2: {'A': {'ii': 500}}}
NewEntry_1(d, 2, 'C', 'iii', 700)
# output = {1: {'A': {'i': 100, 'ii': 200}, 'B': {'i': 300, 'ii': 600}}, 2: {'A': {'ii': 500}, 'C': {'iii': 700}}}
NewEntry_1(d, 3, 'D', 'iv', 800)
# output = {1: {'A': {'i': 100, 'ii': 200}, 'B': {'i': 300, 'ii': 600}}, 2: {'A': {'ii': 500}, 'C': {'iii': 700}}, 3: {'D': {'iv': 800}}}
More background
I'm a business analyst exploring using Python to create a Graph DB that would help me with very specific analysis. The dictionary structure is used to store the influence one node has on one of its neighbors:
lv1 is Node From
lv2 is Node To
lv3 is Iteration
value is Influence (in %)
In the first iteration Node 1 has direct influence on Node 2. In the second iteration Node 1 influences all the Nodes that Node 2 is influencing.
I'm aware of packages that can help me with this (networkx), but I'm trying to understand Python/GraphDB before I start using them.
As for the nested dictionaries, you should take a look at defaultdict. Using it will save you a lot of the function-calling overhead. The nested defaultdict construction resorts to lambda functions for their default factories:
from collections import defaultdict

d = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))  # new, shiny, empty
d[1]['B']['ii'] = 600 # OK
d[2]['C']['iii'] = 700 # OK
d[3]['D']['iv'] = 800 # OK
Update: A useful trick for creating an arbitrarily deep nested defaultdict is the following:
def tree():
    return defaultdict(tree)

d = tree()
# now any depth is possible
# d[1][2][3][4][5][6][7][8] = 9
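For instance, the question's three-level updates could be written against such a tree like this (a sketch; the helper to_plain_dict for converting back to ordinary dicts is my own addition, not part of the answer above):
from collections import defaultdict

def tree():
    return defaultdict(tree)

d = tree()
d[1]['B']['ii'] = 600                              # intermediate levels appear automatically
d[2]['C']['iii'] = d[2]['C'].get('iii', 0) + 700   # accumulate, as in the question

def to_plain_dict(t):
    # Recursively convert a nested defaultdict back into ordinary dicts.
    return {k: to_plain_dict(v) if isinstance(v, defaultdict) else v
            for k, v in t.items()}

print(to_plain_dict(d))  # {1: {'B': {'ii': 600}}, 2: {'C': {'iii': 700}}}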
I have multiple dictionaries. There is a great deal of overlap between the dictionaries, but they are not identical.
a = {'a':1,'b':2,'c':3}
b = {'a':1,'c':3, 'd':4}
c = {'a':1,'c':3}
I'm trying to figure out how to break these down into the most primitive pieces and then reconstruct the dictionaries in the most efficient manner. In other words, how can I deconstruct and rebuild the dictionaries while typing each key/value pair the minimum number of times (ideally once)? It also means creating the minimum number of sets that can be combined to create all the sets that exist.
In the above example, it could be broken down into:
c = {'a': 1, 'c': 3}
a = {**c, 'b': 2}
b = {**c, 'd': 4}
I'm looking for suggestions on how to approach this in Python.
In reality, I have roughly 60 dictionaries and many of them have overlapping values. I'm trying to minimize the number of times I have to type each k/v pair to minimize potential typo errors and make it easier to cascade update different values for specific keys.
An ideal output would be the most basic dictionaries needed to construct all dictionaries as well as the formula for reconstruction.
Here is a solution. It isn't the most efficient in any way, but it might give you an idea of how to proceed.
a = {'a':1,'b':2,'c':3}
b = {'a':1,'c':3, 'd':4}
c = {'a':1,'c':3}
class Cover:
    def __init__(self, *dicts):
        # Our internal representation is a link to any complete subset,
        # and then a dictionary of the remaining elements
        mtx = [[-1, {}] for d in dicts]
        for i, dct in enumerate(dicts):
            for j, odct in enumerate(dicts):
                if i == j: continue  # we're always a subset of ourself
                # if every key of odct is also in dct, build dct from odct
                if all(k in dct for k in odct.keys()):
                    mtx[i][0] = j
                    dif = {key: value for key, value in dct.items() if key not in odct}
                    mtx[i][1].update(dif)
                    break
        for i, m in enumerate(mtx):
            if m[1] == {}: m[1] = dict(dicts[i].items())
        self.mtx = mtx

    def get(self, i):
        r = {key: val for key, val in self.mtx[i][1].items()}
        if self.mtx[i][0] >= 0:  # if we found a subset, add it (-1 means there is none)
            r.update(self.mtx[self.mtx[i][0]][1])
        return r
cover = Cover(a,b,c)
print(a,b,c)
print('representation',cover.mtx)
# prints [[2, {'b': 2}], [2, {'d': 4}], [-1, {'a': 1, 'c': 3}]]
# The "-1" In the third element indicates this is a building block that cannot be reduced; the "2"s indicate that these should build from the 2th element
print('a',cover.get(0))
print('b',cover.get(1))
print('c',cover.get(2))
The idea is very simple: if any of the maps is a complete subset of another, replace the duplication with a reference. The compression could certainly backfire for certain input combinations, and can easily be improved upon.
Simple improvements
Changing the first line of get, or using the more concise dictionary addition code hinted at in the question, might immediately improve readability (see the sketch after this list).
We don't check for the largest subset, which may be worthwhile.
The implementation is naive and makes no optimizations
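A minimal sketch of that first improvement, assuming the Cover class above (one possible rewrite, not the only one):
    # drop-in replacement for Cover.get(), copying the stored fragment with dict()
    def get(self, i):
        parent, fragment = self.mtx[i]
        r = dict(fragment)               # copy this entry's residual key/value pairs
        if parent >= 0:                  # -1 marks a building block with no parent
            r.update(self.mtx[parent][1])
        return r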
Larger improvements
One could also implement a hierarchical implementation in which "building block" dictionaries formed the root nodes and the tree was descended to build the larger dictionaries. This would only be beneficial if your data was hierarchical to start.
(Note: tested in python3)
Below is a script that generates a script to reconstruct the dictionaries.
For example consider this dictionary of dictionaries:
>>>dicts
{'d2': {'k4': 'k4', 'k1': 'k1'},
'd0': {'k2': 'k2', 'k4': 'k4', 'k1': 'k1', 'k3': 'k3'},
'd4': {'k4': 'k4', 'k0': 'k0', 'k1': 'k1'},
'd3': {'k0': 'k0', 'k1': 'k1'},
'd1': {'k2': 'k2', 'k4': 'k4'}}
For clarity, we continue with sets, since the key-value association can be handled elsewhere.
sets= {k:set(v.keys()) for k,v in dicts.items()}
>>>sets
{'d2': {'k1', 'k4'},
'd0': {'k1', 'k2', 'k3', 'k4'},
'd4': {'k0', 'k1', 'k4'},
'd3': {'k0', 'k1'},
'd1': {'k2', 'k4'}}
Now compute the distances (the number of keys to add and/or remove to go from one dict to another):
import numpy as np
import pandas as pd

df = pd.DataFrame(dicts)
charfunc = df.notnull()
distances = pd.DataFrame((charfunc.values.T[..., None] != charfunc.values).sum(1),
                         df.columns, df.columns)
>>>>distances
d0 d1 d2 d3 d4
d0 0 2 2 4 3
d1 2 0 2 4 3
d2 2 2 0 2 1
d3 4 4 2 0 1
d4 3 3 1 1 0
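As a quick sanity check of the table (assuming the sets dictionary built above), the distance between two dicts is just the size of the symmetric difference of their key sets:
# d0 has keys {'k1','k2','k3','k4'} and d4 has {'k0','k1','k4'}; they differ in 3 keys
assert len(sets['d0'] ^ sets['d4']) == 3  # matches distances.loc['d0', 'd4']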
Then the script that writes the script. The idea is to begin with the shortest set, and then at each step construct the nearest set from those already built:
script = open('script.py', 'w')
dicoto = df.count().argmin()  # the shortest set
script.write('res={}\nres[' + repr(dicoto) + ']=' + str(sets[dicoto]) + '\ns=[\n')
done = []
todo = df.columns.tolist()
while True:
    done.append(dicoto)
    todo.remove(dicoto)
    if not todo: break
    table = distances.loc[todo, done]
    ito, ifrom = np.unravel_index(table.values.argmin(), table.shape)
    dicofrom = table.columns[ifrom]
    setfrom = sets[dicofrom]
    dicoto = table.index[ito]
    setto = sets[dicoto]
    toadd = setto - setfrom
    toremove = setfrom - setto
    script.write(('(' + repr(dicoto) + ',' + str(toadd) + ',' + str(toremove) + ','
                  + repr(dicofrom) + '),\n').replace('set', ''))
script.write("""]
for dt,ta,tr,df in s:
    d=res[df].copy()
    d.update(ta)
    for k in tr: d.remove(k)
    res[dt]=d
""")
script.close()
and the produced file script.py
res={}
res['d1']={'k2', 'k4'}
s=[
('d0',{'k1', 'k3'},(),'d1'),
('d2',{'k1'},{'k2'},'d1'),
('d4',{'k0'},(),'d2'),
('d3',(),{'k4'},'d4'),
]
for dt,ta,tr,df in s:
    d=res[df].copy()
    d.update(ta)
    for k in tr: d.remove(k)
    res[dt]=d
Test :
>>> %run script.py
>>> res==sets
True
With random dicts like these, the script size is about 80% of the size of the sets for big dicts (Nd=Nk=100). But with more overlap, the ratio would certainly be better.
Complement: a script to generate such dicts.
from pylab import *
import pandas as pd
Nd=5 # number of dicts
Nk=5 # number of keys per dict
index=['k'+str(j) for j in range(Nk)]
columns=['d'+str(i) for i in range(Nd)]
charfunc=pd.DataFrame(randint(0,2,(Nk,Nd)).astype(bool),index=index,columns=columns)
dicts={i : { j:j for j in charfunc.index if charfunc.ix[j,i]} for i in charfunc.columns}
I was hoping someone could provide me with some basic code help in Scala. I've written some demo code in Python.
Consider a list of elements, where an element can hold either an integer or a list of other elements. I'd like to recursively examine this structure and modify it while keeping the overall structure.
To represent this in python, I've made each 'element' a dictionary with one key ('i' for item). The value corresponding to that key is either an int, or a list of similar dicts. Thus,
lst = [{'i': 1}, {'i': 2}, {'i': [{'i': 5}, {'i': 6}]}, {'i': 3}]
def recurse(x):
    if isinstance(x, list):
        return [recurse(a) for a in x]
    else:
        if isinstance(x['i'], list):
            return dict(i=[recurse(a) for a in x['i']])
        else:
            return dict(i=(x['i'] + 1))

print "Input:"
for i in lst:
    print i
print "\nResult:\n%s" % recurse(lst)
>>>
Input:
{'i': 1}
{'i': 2}
{'i': [{'i': 5}, {'i': 6}]}
{'i': 3}
Result:
[{'i': 2}, {'i': 3}, {'i': [{'i': 6}, {'i': 7}]}, {'i': 4}]
I understand it's a bit of a weird way to go about doing things, but the data I have been provided is structured like that. I think my issue is that python lets you return different types from the same function, while I don't believe Scala does.
Also for the record, the Scala elements are represented as Elem(4), or Elem(List(Elem(3)..., so I assume pattern matching can come into it somewhat.
I would rather not call that a list of lists, as that does not tell what those lists contain. The structure is a tree, more precisely a leafy tree, where there is data only in the leaves. That would be:
sealed trait Tree[+A]
case class Node[+A](children: Tree[A]*) extends Tree[A]
case class Leaf[+A](value: A) extends Tree[A]
then add a method map to apply a function on each value in the tree
sealed trait Tree[+A] {
  def map[B](f: A => B): Tree[B]
}

case class Node[+A](children: Tree[A]*) extends Tree[A] {
  def map[B](f: A => B) = Node(children.map(_.map(f)): _*)
}

case class Leaf[+A](value: A) extends Tree[A] {
  def map[B](f: A => B) = Leaf(f(value))
}
Then your input is :
val input = Node(Leaf(1), Leaf(2), Node(Leaf(5), Leaf(6)), Leaf(3))
And if you call input.map(_ + 1) you get your output.
The result display is somewhat ugly because of the varargs Tree[A]*. You can improve it by adding, in Node: override def toString = "Node(" + children.mkString(", ") + ")"
You may prefer to have the map method in one place only, either outside the classes or directly in Tree. Here it is as a single method in Tree:
def map[B](f: A => B): Tree[B] = this match {
  case Node(children @ _*) => Node(children.map(_.map(f)): _*)
  case Leaf(v) => Leaf(f(v))
}
Working the untyped way, as in Python, is not very Scala-like but can be done:
def recurse(x: Any): Any = x match {
  case list: List[_] => list.map(recurse(_))
  case value: Int => value + 1
}
(This puts the values directly in the list. Your map (dictionary) with key "i" complicates it and forces you to accept a compiler warning, as we would have to force a cast that could not be checked, namely that the map accepts strings as keys: case map: Map[String, _].)
Using case Elem(content: Any) sounds to me like it gives no extra safety compared to putting the values directly in a List, while being much more verbose; and it offers none of the safety and clarity of calling it a tree and distinguishing nodes from leaves, without being noticeably terser.
Well, here's something that works but is a little bit ugly:
def make(o: Any) = Map('i' -> o) // make :: Any => Map[Char, Any]
val lst = List(make(1), make(2), make(List(make(5), make(6))), make(3)) // List[Any]

def recurce(x: Any): Any = x match {
  case l: List[_] => l map recurce
  case m: Map[Char, _] =>
    val a = m('i')
    a match {
      case n: Int => make(n + 1)
      case l: List[_] => make(l map recurce)
    }
}
Example :
scala> recurce(lst)
res9: Any = List(Map((i,2)), Map((i,3)), Map((i,List(Map(i -> 6), Map(i -> 7)))), Map((i,4)))
This solution is more type-safe, but without resorting to a tree (which isn't a bad solution, but someone already made it :). It will be a list of either an Int or a list of Ints. As such, it can only have two levels -- if you want more, make a tree.
val lst = List(Left(1), Left(2), Right(List(5, 6)), Left(3))

def recurse(lst: List[Either[Int, List[Int]]]): List[Either[Int, List[Int]]] = lst match {
  case Nil => Nil
  case Left(n) :: tail => Left(n + 1) :: recurse(tail)
  case Right(l) :: tail => Right(l map (_ + 1)) :: recurse(tail)
}