passing all dictionary <key, value> pairs dynamically as function arguments - python

I have a dictionary, for instance:
x = {'a': 1, 'b': 2, 'c': 3}
What I would like to do is send all of its keys and values to some function, for instance:
func1(a=1, b=2, c=3)
(For another dictionary y = {'z': 8, 'x': 9, 'w': 11, 'p': 88},
the function call would be:
func1(z=8, x=9, w=11, p=88).)
Is this possible?
Thank you.

This is a built-in feature of Python; consider the following:
x = {'a': 1, 'b': 2, 'c': 3}
func1(**x)
is the same as:
func1(a=1, b=2, c=3)
I recommend you read the documentation on defining functions.
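For completeness, here is a minimal runnable sketch; func1 here is just a stand-in for whatever function you actually want to call:
def func1(a, b, c):
    print(a, b, c)

x = {'a': 1, 'b': 2, 'c': 3}
func1(**x)   # equivalent to func1(a=1, b=2, c=3); prints: 1 2 3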

These examples might be useful, although it really depends on what the final function is:
How to pass dictionary items as function arguments in python?
https://www.geeksforgeeks.org/python-passing-dictionary-as-arguments-to-function/
How to pass dictionary as an argument of function and how to access them in the function

Related

What does defaultdict(list, {}) or dict({}) do?

I saw this online and I'm confused about what the second argument does:
defaultdict(list, {})
Looking at what I get on the console, it seems to simply create a defaultdict where values are lists by default. If so, is this exactly equivalent to running defaultdict(list)?
From what I read online:
The first argument provides the initial value for the default_factory attribute; it defaults to None. All remaining arguments are treated the same as if they were passed to the dict constructor, including keyword arguments.
which also makes me wonder about the difference between:
my_dict = dict({})
my_dict = dict()
The argument to the dict constructor provides the initial values, so passing {} creates an empty dictionary.
It's the same with defaultdict, except that the first argument is the default factory used to create values for missing keys.
dict({...}) just makes a dict:
>>> dict({'a': 1, 'b': 2})
{'a': 1, 'b': 2}
Which is equal to this:
>>> dict(a=1, b=2)
{'a': 1, 'b': 2}
or
>>> {'a': 1, 'b': 2}
{'a': 1, 'b': 2}
The same applies to defaultdict.
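A small sketch confirming that the two spellings behave the same; the empty dict only supplies initial items, of which there are none here:
from collections import defaultdict

d1 = defaultdict(list, {})   # empty initial mapping
d2 = defaultdict(list)       # no initial mapping at all

d1['missing'].append(1)      # default_factory supplies a new list for the missing key
d2['missing'].append(1)

print(d1 == d2)              # True -- the extra {} contributed nothing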

Python: Cast Dictionary to Tuple implicit by calling a method

I have a problem with code from the Mininet open source project, which I abstracted to the following scenario:
def bar(*a, **b):
    b.update({'a': (a,)})
    foo(**b)

def foo(*a, **c):
    print(c)
    print(a)

if __name__ == '__main__':
    bar(2, 3, 4, x=3, y=5)
The method bar is only a helper that builds the call to foo.
If I run it, print(a) prints only an empty tuple, because the values are still stored in the dictionary b. If I change *a to a, it works fine.
And here is my question:
Is there a way, without modifying the method foo, to get data into *a while keeping the call to foo as in line 3 (foo(**b))? That is, only b in the method bar is allowed to be modified.
I don't think it's possible to satisfy both of your conditions.
If the method foo stays the same, the parameter *a represents positional arguments with no name (like bar(2, 3, 4) in your example), while the call foo(**b) in line 3 passes keyword arguments (like bar(x=3, y=5) in your example), so they will end up in **c, the keyword-argument parameter of foo.
OK, I want you to understand the concept instead of just getting the answer, because Stack Overflow is about explaining concepts rather than only writing code. Let's go over a few points:
*args and **kwargs are used to accept an arbitrary number of arguments.
The ** syntax requires a mapping (such as a dictionary); each
key-value pair in the mapping becomes a keyword argument,
while
*args receives a tuple containing the positional arguments.
Now let's take a simple example to understand your problem:
def check(*a, **b):
    print(a)
    print(b)
Here *a collects the positional arguments into a tuple, while **b collects the keyword arguments into a dict.
One more point before jumping into the problem:
the * idiom is also used to unpack lists, tuples and sets, and in an assignment it collects the remaining items:
a, *b = (1, 2, 3, 4, 5)
print(a)
print(b)
output:
1
[2, 3, 4, 5]
Now if I call the function like this:
def check(*a, **b):
    print("value_a {}".format(a))
    print("value_b {}".format(b))

print(check({'y': 5, 'x': 3, 'a': 5, 'r': 56}))
then *a takes the whole dict as a single positional argument, because there is no keyword argument:
value_a ({'x': 3, 'y': 5, 'a': 5, 'r': 56},)
But let's add a few keyword arguments:
print(check({'y': 5, 'x': 3, 'a': 5, 'r': 56}, key_s=5, values=10))
output:
value_a ({'a': 5, 'y': 5, 'r': 56, 'x': 3},)
value_b {'key_s': 5, 'values': 10}
Now let's use * unpacking on the dict:
print(check(*{'y': 5, 'x': 3, 'a': 5, 'r': 56}))
output will be:
value_a ('r', 'x', 'y', 'a')
value_b {}
None
Because, as shown above, * unpacks only the keys of the dict and passes them as positional arguments, not keyword arguments.
Now let's use ** unpacking, which unpacks the dict so that each key-value pair becomes a keyword argument:
print(check(**{'y': 5, 'x': 3, 'a': 5,'r':56}))
output:
value_a ()
value_b {'y': 5, 'r': 56, 'a': 5, 'x': 3}
I think everything is clear now, so let's get back to your problem.
As you said, if you use:
def check(a, **b):
    print("value_a {}".format(a))
    print("value_b {}".format(b))

print(check(**{'y': 5, 'x': 3, 'a': 5, 'r': 56}))
you get a result because the parameter a takes the value of the 'a' key from the dict, and the rest of the values are collected by **b.
Now, if you want to use *a and **b, you have to provide:
positional arguments for *a,
keyword arguments for **b.
Since you said you don't want to modify foo, you can try something like this:
def bar(*a, **b):
    foo((a,), **b)

def foo(*a, **c):
    print(c)
    print(a)

if __name__ == '__main__':
    bar(2, 3, 4, x=3, y=5)
output:
{'y': 5, 'x': 3}
(((2, 3, 4),),)
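If the real goal is simply to forward bar's positional arguments to foo unchanged (which may or may not match the original Mininet intent), the usual idiom is to re-unpack them with * instead of wrapping them in an extra tuple; a small sketch:
def foo(*a, **c):
    print(c)
    print(a)

def bar(*a, **b):
    foo(*a, **b)   # positionals stay positional, keywords stay keywords

if __name__ == '__main__':
    bar(2, 3, 4, x=3, y=5)
    # prints:
    # {'x': 3, 'y': 5}
    # (2, 3, 4)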

Get the name of the instance of an object in python, when __str__ overridden?

I'm creating a simple container system, in which my objects (all children of a class called GeneralWidget) are grouped in containers, which are in another set of containers, and so on until everything is in one global container.
I have a custom class called GeneralContainer, in which I had to override the __str__ method to provide a descriptive name for my container, so I know what kind of objects or containers are stored inside it.
I am currently writing another class called ObjectTracker, in which all positions of my objects are stored. When a new object is created, its __init__ method passes a list containing its name to its "parent" in my hierarchy, which adds itself to the list and passes it on. At some point this list of everything above the newly created GeneralWidget instance reaches the global container (containing all containers and widgets), which can access the ObjectTracker object in my main().
This is the background of my problem. My ObjectTracker has a dictionary in which every "first level container" is a key, and all objects inside such a container are stored in dictionaries as well, so I have many nested dictionaries.
As I don't know how many levels of containers there will be, I need a dynamic syntax that is independent of the number of dictionaries I have to pass through until I get to the place in the big dictionary that I want. A (static) call inside my ObjectRepository class would need to look something like this:
self._OBJECTREPOSITORY[firstlevelcontainer12][secondlevel8][lastlevel4] = myNewObject
with firstlevelcontainer12 containing secondlevel8, which contains lastlevel4, in which the new object should be placed.
But I know neither what the containers will be called, nor how many there will be, so I decided to use exec() and compose a string with all the names in it. Here is my actual code, the definition of ObjectTracker:
class ObjectTracker:
    def __init__(self):
        self._NAMEREPOSITORY = {}

    def addItem(self, pathAsList):
        usableList = list(reversed(pathAsList))
        string = "self._NAMEREPOSITORY"
        for thing in usableList:
            if usableList[-1] != [thing]:
                string += "[" + str(thing) + "]"
            else:
                string += "] = " + str(thing)
        print(string)
        exec(string)
The problem is that I have overridden the __str__ methods of the classes GeneralContainer and GeneralWidget to give back a descriptive name. This came in very handy on many occasions, but now it has become a big problem: the code above only works if the custom name is the same as the name of the instance (of course, I get why!).
The question is: does a built-in function exist to do the following?
>>> alis = ExampleClass()
>>> DOESTHISEXIST(alis)
'alis'
If not, how can I write a custom one without destroying my well-working naming system?
Note: Since I'm not exactly sure what you want, I'll attempt to provide a general solution.
First off, avoid eval/exec like the plague. There are serious problems one encounters when using them, and there's almost always a better way. Here is the approach I propose:
You seem to want a way to reach a certain point in a nested dictionary, given a list of specific keys. This can be done quite easily by traversing the dictionary with a for loop. For example:
>>> def get_value(dictionary, keys):
        value = dictionary
        for key in keys:
            value = value[key]
        return value
>>> d = {'a': 1, 'b': {'c': 2, 'd': 3, 'e': {'f': 4, }, 'g': 5}}
>>> get_value(d, ('b', 'e', 'f'))
4
>>>
If you need to assign to a specific part of a certain nested dictionary, this can also be done using the above code:
>>> dd = get_value(d, ('b', 'e')) # grab a dictionary object
>>> dd
{'f': 4}
>>> dd['h'] = 6
>>> # the d dictionary is changed.
>>> d
{'a': 1, 'b': {'c': 2, 'd': 3, 'e': {'f': 4, 'h': 6}, 'g': 5}}
>>>
Below is a formalized version of the function above, with error testing and documentation (in a custom style):
NO_VALUE = object()

def traverse_mapping(mapping, keys, default=NO_VALUE):
    """
    Description
    -----------
    Given a - often nested - mapping structure and a list of keys, use the
    keys to traverse the given dictionary and retrieve a certain key's value.

    If the function reaches a point where the mapping can no longer be
    traversed (i.e. the current value retrieved from the current mapping
    structure is itself not a mapping type) or a given key is found to
    be non-existent, a default value can be provided to return. If no
    default value is given, exceptions will be allowed to raise as normal
    (a TypeError or KeyError respectively).

    Examples (In the form of a Python IDLE session)
    -----------------------------------------------
    >>> d = {'a': 1, 'b': {'c': 2, 'd': 3, 'e': {'f': 4, }, 'g': 5}}
    >>> traverse_mapping(d, ('b', 'e', 'f'))
    4
    >>> inner_d = traverse_mapping(d, ('b', 'e'))
    >>> inner_d
    {'f': 4}
    >>> inner_d['h'] = 6
    >>> d
    {'a': 1, 'b': {'c': 2, 'd': 3, 'e': {'f': 4, 'h': 6}, 'g': 5}}
    >>> traverse_mapping(d, ('b', 'e', 'x'))
    Traceback (most recent call last):
      File "<pyshell#14>", line 1, in <module>
        traverse_mapping(d, ('b', 'e', 'x'))
      File "C:\Users\Christian\Desktop\langtons_ant.py", line 33, in traverse_mapping
        value = value[key]
    KeyError: 'x'
    >>> traverse_mapping(d, ('b', 'e', 'x'), default=0)
    0
    >>>

    Parameters
    ----------
    - mapping : mapping
        Any map-like structure which supports key-value lookup.
    - keys : iterable
        An iterable of keys to be used in traversing the given mapping.
    - default : any, optional
        A value to return if traversal fails; if omitted, the exception
        is raised instead.
    """
    value = mapping
    for key in keys:
        try:
            value = value[key]
        except (TypeError, KeyError):
            if default is not NO_VALUE:
                return default
            raise
    return value
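Applying this idea back to the original ObjectTracker, here is a possible exec-free rewrite of addItem. This is only a sketch: it assumes pathAsList is ordered innermost-first (as in the question) and that the object to store is passed in explicitly as an extra argument.
class ObjectTracker:
    def __init__(self):
        self._NAMEREPOSITORY = {}

    def addItem(self, pathAsList, item):
        """Walk the nested dicts named by pathAsList (outermost container first
        after reversal), creating levels as needed, and store item at the end."""
        *parents, last = list(reversed(pathAsList))
        node = self._NAMEREPOSITORY
        for key in parents:
            node = node.setdefault(key, {})   # create missing levels on the fly
        node[last] = item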
I think you might be looking for vars().
a = 5
# prints the value of a
print(vars()['a'])
# prints all the currently defined variables
print(vars())
# this will throw an error since b is not defined
print(vars()['b'])
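Building on that, here is a hedged sketch of how one might recover the variable names bound to a given object by scanning a namespace; ExampleClass and names_of are made-up names, and the lookup only sees whatever namespace you hand it (here, globals()):
class ExampleClass:
    pass

def names_of(obj, namespace):
    """Return every name in the given namespace that is bound to obj."""
    return [name for name, value in namespace.items() if value is obj]

alis = ExampleClass()
print(names_of(alis, globals()))   # ['alis']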

What is the difference between **kwargs and dict in Python 3.2?

It seems that many aspects of Python duplicate functionality. Is there some difference between **kwargs and dict beyond the redundancy I am seeing?
There is a difference between argument unpacking (which is where many people use **kwargs) and passing a dict as one of the arguments:
Using argument unpacking:
# Prepare function
def test(**kwargs):
    return kwargs

# Invoke function
>>> test(a=10, b=20)
{'a': 10, 'b': 20}
Passing a dict as an argument:
# Prepare function
def test(my_dict):
    return my_dict

# Invoke function
>>> test(dict(a=10, b=20))
{'a': 10, 'b': 20}
The differences are mostly:
readability (you can simply pass keyword arguments even if they weren't explicitly defined),
flexibility (you can support some keyword arguments explicitly and collect the rest with **kwargs),
argument unpacking helps you avoid unexpected changes to the object "containing" the arguments, because the callee gets its own fresh dict (see the sketch below); this is less important, as Python generally assumes developers know what they are doing.
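A minimal sketch of that last point; **kwargs hands the callee a fresh dict, so in-place changes do not leak back to the caller:
def via_kwargs(**kwargs):
    kwargs['added'] = True   # modifies only the callee's local copy
    return kwargs

def via_dict(d):
    d['added'] = True        # modifies the caller's dict in place
    return d

original = {'a': 1}
via_kwargs(**original)
print(original)   # {'a': 1} -- unchanged
via_dict(original)
print(original)   # {'a': 1, 'added': True} -- mutated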
It is right that in most cases you can just interchange dicts and **kwargs.
For example:
my_dict = {'a': 5, 'b': 6}

def printer1(adict):
    return adict

def printer2(**kwargs):
    return kwargs
#evaluate:
>>> printer1(my_dict)
{'a': 5, 'b': 6}
>>> printer2(**my_dict)
{'a': 5, 'b': 6}
However, with **kwargs you have more flexibility if you combine it with other arguments:
def printer3(a, b=0, **kwargs):
    return a, b, kwargs
#evaluate
>>> printer3(**my_dict)
(5, 6, {})

Python "extend" for a dictionary

What is the best way to extend a dictionary with another one while avoiding the use of a for loop? For instance:
>>> a = { "a" : 1, "b" : 2 }
>>> b = { "c" : 3, "d" : 4 }
>>> a
{'a': 1, 'b': 2}
>>> b
{'c': 3, 'd': 4}
Result:
{ "a" : 1, "b" : 2, "c" : 3, "d" : 4 }
Something like:
a.extend(b) # This does not work
a.update(b)
Latest Python Standard Library Documentation
A beautiful gem in this closed question:
The "oneliner way", altering neither of the input dicts, is
basket = dict(basket_one, **basket_two)
Learn what **basket_two (the **) means here.
In case of conflict, the items from basket_two will override the ones from basket_one. As one-liners go, this is pretty readable and transparent, and I have no compunction against using it any time a dict that's a mix of two others comes in handy (any reader who has trouble understanding it will in fact be very well served by the way this prompts him or her towards learning about dict and the ** form;-). So, for example, uses like:
x = mungesomedict(dict(adict, **anotherdict))
are reasonably frequent occurrences in my code.
Originally submitted by Alex Martelli
Note: In Python 3, this will only work if every key in basket_two is a string.
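A quick illustration of that restriction; the dicts here are made up, and the exact wording of the error message varies between Python versions:
basket_one = {'apples': 2}
basket_two = {1: 'banana'}               # non-string key
merged = dict(basket_one, **basket_two)  # raises TypeError: keywords must be strings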
Have you tried using dictionary unpacking inside a dict display:
a = {'a': 1, 'b': 2}
b = {'c': 3, 'd': 4}
c = {**a, **b}
# c = {"a": 1, "b": 2, "c": 3, "d": 4}
Another way is to use dict(iterable, **kwargs):
c = dict(a, **b)
# c = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
In Python 3.9+ you can merge two dicts using the union operator |:
# use the merging operator |
c = a | b
# c = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
a.update(b)
Will add keys and values from b to a, overwriting if there's already a value for a key.
As others have mentioned, a.update(b) for some dicts a and b will achieve the result you've asked for in your question. However, I want to point out that I have often seen the extend method of mapping/set objects defined so that, in the syntax a.extend(b), a's values should NOT be overwritten by b's values. a.update(b) overwrites a's values, and so isn't a good choice for extend.
Note that some languages call this method defaults or inject, as it can be thought of as a way of injecting b's values (which might be a set of default values) into a dictionary without overwriting values that might already exist.
Of course, you could simply note that a.extend(b) is nearly the same as b.update(a); a = b. To remove the assignment, you could do it thus:
def extend(a, b):
    """Create a new dictionary with a's properties extended by b,
    without overwriting.

    >>> extend({'a': 1, 'b': 2}, {'b': 3, 'c': 4})
    {'a': 1, 'c': 4, 'b': 2}
    """
    return dict(b, **a)
Thanks to Tom Leys for that smart idea using a side-effect-less dict constructor for extend.
Notice that since Python 3.9 a much easier syntax was introduced (Union Operators):
d1 = {'a': 1}
d2 = {'b': 2}
extended_dict = d1 | d2
# {'a': 1, 'b': 2}
Note: if the first dict shares keys with the second dict, order matters!
d1 = {'b': 1}
d2 = {'b': 2}
d1 | d2
# {'b': 2}
Relevant PEP
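The same PEP also added the in-place form |=, which behaves like update(); a small sketch (requires Python 3.9+):
d1 = {'a': 1}
d2 = {'a': 2, 'b': 3}
d1 |= d2          # in-place merge, equivalent to d1.update(d2)
print(d1)         # {'a': 2, 'b': 3}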
You can also use Python's collections.ChainMap, which was introduced in Python 3.3.
from collections import ChainMap
c = ChainMap(a, b)
c['a'] # returns 1
This has a few possible advantages, depending on your use-case. They are explained in more detail here, but I'll give a brief overview:
A chainmap only uses views of the dictionaries, so no data is actually copied. This results in faster chaining (but slower lookup)
No keys are actually overwritten so, if necessary, you know whether the data comes from a or b.
This mainly makes it useful for things like configuration dictionaries.
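To make the "views, not copies" point concrete, here is a small sketch; dict(c) is one way to materialise an independent merged dict if that is what you actually need:
from collections import ChainMap

a = {'a': 1, 'b': 2}
b = {'c': 3, 'd': 4}
c = ChainMap(a, b)

print(c['a'])    # 1
a['a'] = 99      # changes to the underlying dicts show through the view
print(c['a'])    # 99
print(dict(c))   # flatten into a plain, independent dict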
In terms of efficiency, it seems faster to use the unpacking operation than the update method; the original answer included a benchmark image, which is not reproduced here.
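For anyone who wants to check this on their own machine, here is a hedged timeit sketch; the dict sizes are arbitrary and the actual numbers depend on the Python version:
import timeit

setup = "a = {str(i): i for i in range(100)}; b = {str(i): i for i in range(100, 200)}"

unpack = timeit.timeit("{**a, **b}", setup=setup, number=100_000)
update = timeit.timeit("c = dict(a); c.update(b)", setup=setup, number=100_000)

print(f"unpacking:   {unpack:.3f} s")
print(f"copy+update: {update:.3f} s")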
