I want to call a module function with getattr (or maybe something else?) from a string like this:
import bar
funcStr = "myFunc(\"strParam\", 123, bar.myenum.val1, kwarg1=\"someString\", kwarg2=456, kwarg3=bar.myenum.val2)"
[function, args, kwargs] = someParsingFunction(funcStr)
# call module function
getattr(bar, function)(*args, **kwargs)
How can I extract the args and kwargs from the string, so I can pass them to getattr?
I tried a literal_eval approach with Python's ast module, but ast is not able to evaluate the enums of the module bar. All other examples on SO pass a kwargs map containing only strings, and they never actually parse the arguments out of a string. Or is there another way to directly call the function from the string?
EDIT: A Python script reads the function string from a file, so using eval here is not advised.
EDIT2: Using Python 3.6.3
EDIT3: Thanks to the first two answers, I came up with two ideas. After parsing the args and kwargs out of the input string, there are two possibilities for getting the right type for each argument.
We could use ast.literal_eval(<value of the argument>). For arguments with a standard type, like kwarg2, it will return the needed value. If this raises, which will happen for the enums, we fall back to getattr on the bar module to resolve the enum. If that raises as well, the string is not valid. A sketch follows.
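A minimal sketch of this first idea (convert_arg is a hypothetical helper name; it assumes the enums live as dotted attributes on bar):

import ast
import bar

def convert_arg(raw):
    # Try a plain literal first (covers 123, 456, "someString", ...).
    try:
        return ast.literal_eval(raw)
    except (ValueError, SyntaxError):
        pass
    # Fall back to resolving a dotted name such as 'bar.myenum.val1'.
    parts = raw.split('.')
    if parts[0] != 'bar':
        raise ValueError('neither a literal nor a bar attribute: %r' % raw)
    obj = bar
    for name in parts[1:]:
        obj = getattr(obj, name)  # AttributeError here means the string is invalid
    return obj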
We could use the inspect module and iterate through the parameters of myFunc. For every arg and kwarg we check whether the value is already an instance of the corresponding myFunc parameter type; if not, we cast the arg/kwarg to that type, and if the cast fails we raise an exception because the given arg/kwarg does not fit any myFunc parameter. This solution is more flexible than the first one; see the sketch below.
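And a hedged sketch of this second idea, assuming myFunc carries type annotations (inspect.signature is the Python 3 way to read them):

import inspect

def cast_args(func, raw_args):
    # Hypothetical helper: coerce each raw string to the annotated
    # type of the corresponding parameter of func.
    params = list(inspect.signature(func).parameters.values())
    converted = []
    for raw, param in zip(raw_args, params):
        target = param.annotation
        if target is inspect.Parameter.empty or isinstance(raw, target):
            converted.append(raw)  # nothing to convert
        else:
            # e.g. int('123') -> 123; an Enum may need bar.myenum['val1'] instead
            converted.append(target(raw))
    return converted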
Both solutions feel more like workarounds. First tests seem to work; I will post my results here later.
Does this help?
funcStr = r"""myFunc(\"strParam\", 123, bar.myenum.val1, kwarg1=\"someString\", kwarg2=456, kwarg3=bar.myenum.val2)"""

def someParsingFunction(s):
    func, s1 = s.split('(', 1)
    l = s1.replace('\\', '').strip(')').split(', ')
    arg_ = [x.strip('"') for x in l if '=' not in x]
    kwarg_ = {x.split('=')[0]: x.split('=')[-1] for x in l if '=' in x}
    return func, arg_, kwarg_

class bar:
    @staticmethod  # staticmethod, so the first parsed arg isn't swallowed as self
    def myFunc(*args, **kwargs):
        print(*args)
        print(kwargs)

[function, args, kwargs] = someParsingFunction(funcStr)
getattr(bar, function)(*args, **kwargs)
# strParam 123 bar.myenum.val1
# {'kwarg1': '"someString"', 'kwarg2': '456', 'kwarg3': 'bar.myenum.val2'}
Alternatively:

funcStr = r"""myFunc(\"strParam\", 123, bar.val1, kwarg1=\"someString\", kwarg2=456, kwarg3=bar.val2)"""

def someParsingFunction(s):
    func, s1 = s.split('(', 1)
    l = s1.replace('\\', '').strip(')').split(', ')
    arg_ = [x.strip('"') for x in l if '=' not in x]
    kwarg_ = {x.split('=')[0]: x.split('=')[-1] for x in l if '=' in x}
    return func, arg_, kwarg_

class Bar:
    def __init__(self):
        self.val1 = '111'

[function, args, kwargs] = someParsingFunction(funcStr)
bar = Bar()
obj_name = 'bar' + '.'
args = [getattr(bar, x.split(obj_name)[-1]) if x.startswith(obj_name) else x for x in args]
print(args)
def get_bar_args(arg_str):
    """
    example:
        arg_str = 'bar.abc.def'
    assumes the 'bar' module is imported
    """
    from functools import reduce
    return reduce(getattr, arg_str.split('.')[1:], bar)
def parseFuncString(func_str):
    '''
    example: func_str = "myFunc(\"strParam\", 123, bar.myenum.val1, kwarg1=\"someString\", kwarg2=456, kwarg3=bar.myenum.val2)"
    '''
    import re
    all_args_str = re.search(r"(.*)\((.*)\)", func_str)
    all_args = all_args_str.group(2).split(',')
    all_args = [x.strip() for x in all_args]
    kwargs = {kw.group(1): kw.group(2) for x in all_args if (kw := re.search(r'(^\w+)=(.*)$', x))}
    pargs = [x for x in all_args if not re.search(r'(^\w+)=(.*)$', x)]
    pargs = [get_bar_args(x) if x.startswith('bar.') else x for x in pargs]
    kwargs = {k: get_bar_args(v) if v.startswith('bar.') else v for k, v in kwargs.items()}
    print(f'{all_args=}\n{kwargs=}\n{pargs=}')
    func_name = func_str.split("(")[0]
    return func_name, pargs, kwargs

(Note: the walrus operator and the f'{x=}' debug syntax require Python 3.8+, newer than the 3.6.3 mentioned in the question.)
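Tying this back to the question, a usage sketch (assuming bar really is importable and funcStr is the question's string):

func_name, pargs, kwargs = parseFuncString(funcStr)
getattr(bar, func_name)(*pargs, **kwargs)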
I made up this simple, contrived example based on some code I ran into at work. I'm trying to better understand why slow_function_1 (with the way its decorators are stacked) would cache function results properly, but the decorator applied to slow_function_2 would not. In this example, I'm trying to access cache information after calling the functions; however, I consistently get the following error: AttributeError: 'function' object has no attribute 'cache_info'. I've searched high and low to try to fix this, but to no avail. This AttributeError is raised for both slow_function_1.cache_info() and slow_function_2.cache_info().
How do I view the cache between function calls? If anyone has any insight on the original problem of why slow_function_1 and slow_function_2 differ in caching behavior, I would appreciate that as well.
Thank you in advance!
import functools
import time

def format_args(func):
    def inner(*args, **kwargs):
        formatted_args = [tuple(x) if type(x) == list else x for x in args]
        return func(*formatted_args, **kwargs)
    return inner

def formatted_cache(func):
    def inner(*args, **kwargs):
        formatted_args = [tuple(x) if type(x) == list else x for x in args]
        return functools.lru_cache()(func)(*formatted_args, **kwargs)
    return inner

@format_args
@functools.lru_cache
def slow_function_1(a: list, b: bool):
    time.sleep(1)
    print("executing slow function 1")
    return sum(a)

@formatted_cache
def slow_function_2(a: list, b: bool):
    time.sleep(1)
    print("executing slow function 2")
    return functools.reduce((lambda x, y: x*y), a)

example_list = [1,2,3,4,5,6,7,8,9,10,11,12]
example_bool = True

slow_function_1(example_list, example_bool)
print(slow_function_1.cache_info())
slow_function_1(example_list, example_bool)
print(slow_function_1.cache_info())

slow_function_2(example_list, example_bool)
print(slow_function_2.cache_info())
slow_function_2(example_list, example_bool)
print(slow_function_2.cache_info())
Now that I've stared at it for a good while, I don't think it's really possible to do this with a single decorator. Note that formatted_cache as written never actually caches anything: it builds a brand-new functools.lru_cache object on every call, so no results persist between calls. You need one long-lived lru_cache object to hold the cache (and expose cache_info and all that), and you need a second function to format the arguments to be hashable before passing them to that lru_cache object. The decorator can't return both at once, and they can't be nested in each other to make one function with the best of both worlds.
def formatted_cache(func):
    # first we assume func only takes in hashable arguments,
    # so cachedfunc only takes in hashable arguments
    cachedfunc = functools.lru_cache(func)

    # inner formats lists to hashable tuples,
    # then passes them to cachedfunc
    def inner(*args, **kwargs):
        formatted_args = [tuple(x) if type(x) == list else x for x in args]
        return cachedfunc(*formatted_args, **kwargs)

    # oh no, we can only return one function, but neither is good enough
I think the only way to move forward is to accept that these have to be done in separate functions, because of lru_cache's limitation. It's not that awkward, actually; it's just a simple higher-order function, like map.
import functools
import time

def formatted_call(func, *args, **kwargs):
    formatted_args = [tuple(x) if type(x) == list else x for x in args]
    return func(*formatted_args, **kwargs)

@functools.lru_cache
def slow_function_2(a: list, b: bool):
    time.sleep(1)
    print("executing slow function 2")
    return functools.reduce((lambda x, y: x*y), a)

example_list = [1,2,3,4,5,6,7,8,9,10,11,12]
example_bool = True

formatted_call(slow_function_2, example_list, example_bool)
print(slow_function_2.cache_info())
formatted_call(slow_function_2, example_list, example_bool)
print(slow_function_2.cache_info())
I am struggling to pass a list of functions with a list of corresponding parameters. I also checked here, but it wasn't very helpful. For example (a naive approach which doesn't work):

def foo(data, functions_list, **kwarg):
    for func_i in functions_list:
        print func_i(data, **kwarg)

def func_1(data, par_1):
    return some_function_1(data, par_1)

def func_2(data, par_2_0, par_2_1):
    return some_function_2(data, par_2_0, par_2_1)

foo(data, [func_1, func_2], par_1='some_par', par_2_0=5, par_2_1=11)
Importantly, par_1 cannot be used in func_2, so each function consumes a unique set of parameters.
You could use each function's name as the keyword argument key. When indexing kwargs, you'd use func_i.__name__ as the key. Note that each function then receives its parameters as a single value, so a function taking several parameters (like func_2) would need to accept and unpack one sequence.
def foo(data, function_list, **kwargs):
    for func_i in function_list:
        print(func_i(data, kwargs[func_i.__name__]))
And now,
foo(data, [func_1, func_2], func_1='some_par', func_2=[5, 11])
You could use inspect.getargspec (I assume you're using Python 2; you shouldn't use that function in Python 3, where it is deprecated in favor of inspect.signature) to find out which argument names a function has, and build a new dictionary based on those:
import inspect

def foo(data, functions_list, **kwargs):
    for func_i in functions_list:
        newkwargs = {name: kwargs[name]
                     for name in inspect.getargspec(func_i).args
                     if name in kwargs}
        print(func_i(data, **newkwargs))

def func_1(data, par_1):
    return data, par_1

def func_2(data, par_2_0, par_2_1):
    return data, par_2_0, par_2_1

>>> data = 10
>>> foo(data, [func_1, func_2], par_1='some_par', par_2_0=5, par_2_1=11)
(10, 'some_par')
(10, 5, 11)
But a better way would be to simply associate parameters with functions in a way that doesn't rely on introspection, as sketched below.
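For example, a minimal sketch of such an explicit association (the pairing scheme here is my suggestion, not part of the question):

def foo(data, funcs_with_kwargs):
    # Each entry pairs a function with exactly the kwargs it needs.
    for func, kwargs in funcs_with_kwargs:
        print(func(data, **kwargs))

foo(10, [(func_1, {'par_1': 'some_par'}),
         (func_2, {'par_2_0': 5, 'par_2_1': 11})])
# (10, 'some_par')
# (10, 5, 11)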
If you want to keep the foo function with that exact same declaration, and you don't mind each function receiving the whole set of parameters, you can do it like this: just add a **kwargs parameter to each function.

def foo(data, functions_list, **kwargs):
    for my_function in functions_list:
        print(my_function(data, **kwargs))

def my_sum(a, b, **kwargs):
    return a + b

def my_sub(a, c, **kwargs):
    return a - c

foo(0, [my_sum, my_sub], b=3, c=10)
Python automatically routes the keyword arguments: each parameter name that matches gets bound (b for my_sum, c for my_sub), and the rest are absorbed by **kwargs.
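For the call above, this is what it prints (worked out from the definitions, so worth double-checking):

foo(0, [my_sum, my_sub], b=3, c=10)
# my_sum(0, b=3, c=10) -> 0 + 3 = 3     (c is absorbed by **kwargs)
# my_sub(0, b=3, c=10) -> 0 - 10 = -10  (b is absorbed by **kwargs)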
Another approach can be like this:

def foo(data, function_list, **kwargs):
    function_dict = {
        "func_1": func_1,
        "func_2": func_2
    }
    for func_i in function_list:
        print function_dict[func_i](data, **kwargs)

def func_1(data, **arg):
    filtered_argument = {key: value for key, value in arg.items() if key.startswith('par_1')}
    return list([data, filtered_argument])

def func_2(data, **arg):
    filtered_argument = {key: value for key, value in arg.items() if key.startswith('par_2_')}
    return list([data, filtered_argument])

data = [1,2,3]
foo(data, ['func_1', 'func_2'], par_1='some_par', par_2_0=5, par_2_1=11)

Output:

[[1, 2, 3], {'par_1': 'some_par'}]
[[1, 2, 3], {'par_2_0': 5, 'par_2_1': 11}]
I am sure you can improve on your current code, as it gets ugly this way.
I like @COLDSPEED's approach, but want to present yet another solution: always pass three values per function, namely the function, its args, and its kwargs.

Usage:

foo(
    func_1, ('some_par',), {},
    func_2, (5, 11), {},
)

Implementation (Python 3 syntax):

def foo(*args3):
    while args3:
        func, args, kwargs, *args3 = args3
        func(*args, **kwargs)
An approach would be making the third argument of foo a positional argument and passing in a list of args along with the functions list:

def foo(data, functions_list, args):
    for func, arg in zip(functions_list, args):
        print(func(data, arg))

def func1(data, par_1):
    return 'func1 called with {}'.format(par_1)

def func2(data, par_2):
    return 'func2 called with {}'.format(par_2)

foo('some_data', [func1, func2],
    [
        {'par_1_1': 11, 'par_1_2': 12},
        {'par_2_1': 21, 'par_2_2': 22}
    ])
zip() is used to pair each function with its corresponding args.
Output:
func1 called with {'par_1_1': 11, 'par_1_2': 12}
func2 called with {'par_2_1': 21, 'par_2_2': 22}
You can do it something like this: pack the parameters for each function into its own list item, and then let foo split them back out:

def foo(data, functions_list, kwarg):
    for func_i, args in zip(functions_list, kwarg):
        func_i(data, **args)

def func_1(data, par_1):
    print("func_1 %s %s" % (data, par_1))

def func_2(data, par_2_0, par_2_1):
    print("func_2 %s "
          "%s %s" % (data, par_2_0, par_2_1))

data = "Some Data"
foo(data, [func_1, func_2], [{"par_1": 'some_par'}, {"par_2_0": 5, "par_2_1": 11}])
This is my code:
def testit(func, *nkwargs, **kwargs):
    try:
        retval = func(*nkwargs, **kwargs)
        result = (True, retval)
    except Exception, diag:
        result = (False, str(diag))
    return result

def test():
    funcs = (int, long, float)
    vals = (1234, 12.34, '1234', '12.34')
    for eachFunc in funcs:
        print '-' * 20
        for eachVal in vals:
            retval = testit(eachFunc, eachVal)
            if retval[0]:
                print '%s(%s) =' % \
                      (eachFunc.__name__, `eachVal`), retval[1]
            else:
                print '%s(%s) = FAILED:' % \
                      (eachFunc.__name__, `eachVal`), retval[1]

if __name__ == '__main__':
    test()
What is the role of func in the third line? I think it is a variable, so how does it end up being used as a function name?
Python functions are first-class objects. This means you can assign such an object to a variable and pass it to another function. The very act of executing a def functionname(): ... statement assigns such an object to a name (functionname here):
>>> def foo(): return 42
...
>>> foo
<function foo at 0x107e9f0d0>
>>> bar = foo
>>> bar
<function foo at 0x107e9f0d0>
>>> bar()
42
So the expression
funcs = (int, long, float)
takes 3 built-in functions and puts those into a tuple. funcs[0]('10') would return the integer object 10, because funcs[0] is another reference to the int() function, so funcs[0]('10') would give you the exact same outcome as int('10').
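A quick interactive illustration of that equivalence (a Python 2 session, since the code uses long):

>>> funcs = (int, long, float)
>>> funcs[0] is int
True
>>> funcs[0]('10')
10
>>> int('10')
10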
Those function objects are passed to the testit() function in a loop with:
for eachFunc in funcs:
    print '-' * 20
    for eachVal in vals:
        retval = testit(eachFunc, eachVal)
So eachFunc is bound to the function objects that the funcs tuple references, one by one.
testit() takes that function object as the func parameter, then calls it:
def testit(func, *nkwargs, **kwargs):
    try:
        retval = func(*nkwargs, **kwargs)
so this calls int(), long() and float() on various test values, from the values tuple.
I would like to define some generic decorators to check arguments before calling some functions.
Something like:
@checkArguments(types = ['int', 'float'])
def myFunction(thisVarIsAnInt, thisVarIsAFloat):
    ''' Here my code '''
    pass
Side notes:
Type checking is just here to show an example
I'm using Python 2.7, but Python 3.0 would be interesting too
EDIT 2021: funny that type checking did not turn out to be anti-Pythonic in the long run, given type hinting and mypy.
From the Decorators for Functions and Methods:
Python 2
def accepts(*types):
    def check_accepts(f):
        assert len(types) == f.func_code.co_argcount
        def new_f(*args, **kwds):
            for (a, t) in zip(args, types):
                assert isinstance(a, t), \
                       "arg %r does not match %s" % (a, t)
            return f(*args, **kwds)
        new_f.func_name = f.func_name
        return new_f
    return check_accepts
Python 3
In Python 3 func_code has changed to __code__ and func_name has changed to __name__.
def accepts(*types):
    def check_accepts(f):
        assert len(types) == f.__code__.co_argcount
        def new_f(*args, **kwds):
            for (a, t) in zip(args, types):
                assert isinstance(a, t), \
                       "arg %r does not match %s" % (a, t)
            return f(*args, **kwds)
        new_f.__name__ = f.__name__
        return new_f
    return check_accepts
Usage:

@accepts(int, (int, float))
def func(arg1, arg2):
    return arg1 * arg2

func(3, 2)    # -> 6
func('3', 2)  # -> AssertionError: arg '3' does not match <type 'int'>
arg2 can be either int or float
On Python 3.3, you can use function annotations and inspect:
import inspect

def validate(f):
    def wrapper(*args):
        fname = f.__name__
        fsig = inspect.signature(f)
        vars = ', '.join('{}={}'.format(*pair) for pair in zip(fsig.parameters, args))
        params = {k: v for k, v in zip(fsig.parameters, args)}
        print('wrapped call to {}({})'.format(fname, params))
        for k, v in fsig.parameters.items():
            p = params[k]
            msg = 'call to {}({}): {} failed {})'.format(fname, vars, k, v.annotation.__name__)
            assert v.annotation(params[k]), msg
        ret = f(*args)
        print(' returning {} with annotation: "{}"'.format(ret, fsig.return_annotation))
        return ret
    return wrapper

@validate
def xXy(x: lambda _x: 10 < _x < 100, y: lambda _y: isinstance(_y, float)) -> ('x times y', 'in X and Y units'):
    return x * y

xy = xXy(12, 3.0)
print(xy)
If there is a validation error, prints:
AssertionError: call to xXy(x=12, y=3): y failed <lambda>)
If there is not a validation error, prints:
wrapped call to xXy({'y': 3.0, 'x': 12})
returning 36.0 with annotation: "('x times y', 'in X and Y units')"
You can use a function rather than a lambda to get a name in the assertion failure.
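For instance (a hedged sketch), replacing the x lambda with a named predicate makes the assertion message name the failed check:

def between_10_and_100(_x):
    return 10 < _x < 100

@validate
def xXy(x: between_10_and_100, y: lambda _y: isinstance(_y, float)) -> ('x times y', 'in X and Y units'):
    return x * y

# a failing call would now report: ... x failed between_10_and_100)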
As you certainly know, it's not Pythonic to reject an argument based only on its type. The Pythonic approach is rather "try to deal with it first". That's why I would rather write a decorator that converts the arguments:
def enforce(*types):
    def decorator(f):
        def new_f(*args, **kwds):
            # we need to convert args into something mutable
            newargs = []
            for (a, t) in zip(args, types):
                newargs.append(t(a))  # feel free to have a more elaborate conversion
            return f(*newargs, **kwds)
        return new_f
    return decorator
This way, your function is fed the type you expect, but if the parameter can quack like a float, it is accepted:

@enforce(int, float)
def func(arg1, arg2):
    return arg1 * arg2

print(func(3, 2))        # -> 6.0
print(func('3', 2))      # -> 6.0
print(func('three', 2))  # -> ValueError: invalid literal for int() with base 10: 'three'
I use this trick (with a proper conversion method) to deal with vectors. Many methods I write expect my MyVector class, since it has plenty of functionality, but sometimes you just want to write:

transpose((2, 4))
The package typeguard provides a decorator for this; it reads the type information from type annotations. It requires Python >= 3.5.2, though. I think the resulting code is quite nice.

import typeguard

@typeguard.typechecked
def my_function(this_var_is_an_int: int, this_var_is_a_float: float):
    ''' Here my code '''
    pass
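Usage would then look roughly like this (a sketch; recent typeguard versions raise typeguard.TypeCheckError, older ones a plain TypeError):

my_function(1, 2.0)  # passes the checks
my_function(1, 2)    # rejected: 2 is an int, not a float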
To enforce string arguments to a parser that would throw cryptic errors when provided with non-string input, I wrote the following, which tries to avoid allocation and function calls:
from functools import wraps

def argtype(**decls):
    """Decorator to check argument types.

    Usage:

    @argtype(name=str, text=str)
    def parse_rule(name, text): ...
    """
    def decorator(func):
        code = func.func_code
        fname = func.func_name
        names = code.co_varnames[:code.co_argcount]
        @wraps(func)
        def decorated(*args, **kwargs):
            for argname, argtype in decls.iteritems():
                try:
                    argval = args[names.index(argname)]
                except ValueError:
                    argval = kwargs.get(argname)
                if argval is None:
                    raise TypeError("%s(...): arg '%s' is null"
                                    % (fname, argname))
                if not isinstance(argval, argtype):
                    raise TypeError("%s(...): arg '%s': type is %s, must be %s"
                                    % (fname, argname, type(argval), argtype))
            return func(*args, **kwargs)
        return decorated
    return decorator
I have a slightly improved version of @jbouwmans' solution, using the Python decorator module, which makes the decorator fully transparent and keeps not only the signature but also the docstring in place, and might be the most elegant way of using decorators:
from decorator import decorator

def check_args(**decls):
    """Decorator to check argument types.

    Usage:

    @check_args(name=str, text=str)
    def parse_rule(name, text): ...
    """
    @decorator
    def wrapper(func, *args, **kwargs):
        code = func.func_code
        fname = func.func_name
        names = code.co_varnames[:code.co_argcount]
        for argname, argtype in decls.iteritems():
            try:
                argval = args[names.index(argname)]
            except IndexError:
                argval = kwargs.get(argname)
            if argval is None:
                raise TypeError("%s(...): arg '%s' is null"
                                % (fname, argname))
            if not isinstance(argval, argtype):
                raise TypeError("%s(...): arg '%s': type is %s, must be %s"
                                % (fname, argname, type(argval), argtype))
        return func(*args, **kwargs)
    return wrapper
I think the Python 3.5 answer to this question is beartype. As explained in this post, it comes with handy features. Your code would then look like this:

from beartype import beartype

@beartype
def sprint(s: str) -> None:
    print(s)
and results in
>>> sprint("s")
s
>>> sprint(3)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 13, in func_beartyped
TypeError: sprint() parameter s=3 not of <class 'str'>
All of these posts seem out of date; pint now provides this functionality built in. See here. Copied here for posterity:

Checking dimensionality: when you want pint quantities to be used as inputs to your functions, pint provides a wrapper to ensure units are of the correct type, or, more precisely, that they match the expected dimensionality of the physical quantity.

Similar to wraps(), you can pass None to skip checking of some parameters, but the return parameter type is not checked.

>>> mypp = ureg.check('[length]')(pendulum_period)

In the decorator format:

>>> @ureg.check('[length]')
... def pendulum_period(length):
...     return 2 * math.pi * math.sqrt(length / G)
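For the snippet above to actually run, you also need a unit registry and the names it references; a minimal hedged setup (G and the test values are my assumptions, not from the pint docs):

import math
from pint import UnitRegistry

ureg = UnitRegistry()
G = 9.80665 * ureg('m/s^2')  # standard gravity

@ureg.check('[length]')
def pendulum_period(length):
    return 2 * math.pi * math.sqrt(length / G)

print(pendulum_period(1 * ureg.meter))  # fine: the argument has dimension [length]
# pendulum_period(1 * ureg.second)      # raises a DimensionalityError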
You could try the pydantic validation decorator. From the pydantic documentation:

Data validation and settings management using Python type annotations. pydantic enforces type hints at runtime, and provides user-friendly errors when data is invalid. In benchmarks pydantic is faster than all other tested libraries.

from pydantic import validate_arguments, ValidationError

@validate_arguments
def repeat(s: str, count: int, *, separator: bytes = b'') -> bytes:
    b = s.encode()
    return separator.join(b for _ in range(count))

a = repeat('hello', 3)
print(a)
#> b'hellohellohello'

b = repeat('x', '4', separator=' ')
print(b)
#> b'x x x x'

try:
    c = repeat('hello', 'wrong')
except ValidationError as exc:
    print(exc)
    """
    1 validation error for Repeat
    count
      value is not a valid integer (type=type_error.integer)
    """
For me, the code shared above looks complicated. What I did to define a "generic decorator" for type checking: I used the *args, **kwargs feature. It's a little extra work when calling the function/method, but it is easy to manage.

An example type table for the test:

argument_types = {
    'name': str,
    'count': int,
    'value': float
}

Decorator definition:

from functools import wraps

def azure_type(func):
    @wraps(func)
    def type_decorator(*args, **kwargs):
        for key, value in kwargs.items():
            if key in argument_types:
                if type(value) != argument_types[key]:
                    # handle the mismatch here
                    return 'Error Message or whatever you would like to do'
        return func(*args, **kwargs)
    return type_decorator

A simple sample in code:

# ... all other definitions ...

@azure_type
def stt(name: str, value: float) -> int:
    # some calculation producing an int output
    count_output = int(value)  # placeholder for the real computation
    return count_output

# call the function with keyword arguments so the check can see them:
stt(name='ati', value=32.90)
I have a little problem. I use argparse to parse my arguments, and it's working very well.

To get the args, I do:

p_args = parser.parse_args(argv)
args = dict(p_args._get_kwargs())

But the problem with p_args is that I don't know how to get the arguments ordered by their position on the command line, because it's a dict. So is there any way to get the arguments in a tuple/list/ordered dict, in the order they appeared on the command line?
To keep arguments ordered, I use a custom action like this:
import argparse

class CustomAction(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        if 'ordered_args' not in namespace:
            setattr(namespace, 'ordered_args', [])
        previous = namespace.ordered_args
        previous.append((self.dest, values))
        setattr(namespace, 'ordered_args', previous)

parser = argparse.ArgumentParser()
parser.add_argument('--test1', action=CustomAction)
parser.add_argument('--test2', action=CustomAction)
To use it, for example:
>>> parser.parse_args(['--test2', '2', '--test1', '1'])
Namespace(ordered_args=[('test2', '2'), ('test1', '1')], test1=None, test2=None)
If you need to know the order in which the arguments appear in your parser, you can set up the parser like this:
import argparse

parser = argparse.ArgumentParser(description="A cool application.")
parser.add_argument('--optional1')
parser.add_argument('positionals', nargs='+')
parser.add_argument('--optional2')

args = parser.parse_args()
print args.positionals
Here's a quick example of running this code:
$ python s.py --optional1 X --optional2 Y 1 2 3 4 5
['1', '2', '3', '4', '5']
Note that args.positionals is a list with the positional arguments in order. See the argparse documentation for more information.
This is a bit fragile since it relies on understanding the internals of argparse.ArgumentParser, but in lieu of rewriting argparse.ArgumentParser.parse_known_args, here's what I use:
class OrderedNamespace(argparse.Namespace):
    def __init__(self, **kwargs):
        self.__dict__["_arg_order"] = []
        self.__dict__["_arg_order_first_time_through"] = True
        argparse.Namespace.__init__(self, **kwargs)

    def __setattr__(self, name, value):
        # print("Setting %s -> %s" % (name, value))
        self.__dict__[name] = value
        if name in self._arg_order and hasattr(self, "_arg_order_first_time_through"):
            self.__dict__["_arg_order"] = []
            delattr(self, "_arg_order_first_time_through")
        self.__dict__["_arg_order"].append(name)

    def _finalize(self):
        if hasattr(self, "_arg_order_first_time_through"):
            self.__dict__["_arg_order"] = []
            delattr(self, "_arg_order_first_time_through")

    def _latest_of(self, k1, k2):
        try:
            if self._arg_order.index(k1) > self._arg_order.index(k2):
                return k1
        except ValueError:
            if k1 in self._arg_order:
                return k1
        return k2
This works through the knowledge that argparse.ArgumentParser.parse_known_args first runs through the entire option list, setting default values for each argument. That means the user-specified arguments begin the first time __setattr__ hits an argument it has already seen.
Usage:
options, extra_args = parser.parse_known_args(sys.argv, namespace=OrderedNamespace())
You can check options._arg_order for the order of user specified command line args, or use options._latest_of("arg1", "arg2") to see which of --arg1 or --arg2 was specified later on the command line (which, for my purposes was what I needed: seeing which of two options would be the overriding one).
UPDATE: I had to add the _finalize method to handle the pathological case of sys.argv not containing any of the arguments in the list.
There is a module made especially to handle this:

https://github.com/claylabs/ordered-keyword-args
Without using the orderedkwargs module:

def multiple_kwarguments(first, **lotsofothers):
    print first
    for i, other in lotsofothers.items():
        print other
    return True

multiple_kwarguments("first", second="second", third="third", fourth="fourth", fifth="fifth")

Output (a plain dict does not preserve keyword order):

first
second
fifth
fourth
third
Using the orderedkwargs module:

from orderedkwargs import orderedkwargs

@orderedkwargs
def multiple_kwarguments(first, *lotsofothers):
    print first
    for i, other in lotsofothers:
        print other
    return True

multiple_kwarguments("first", second="second", third="third", fourth="fourth", fifth="fifth")

Output:

first
second
third
fourth
fifth

Note: a single asterisk is required on the extra parameter when using this module's decorator above the function.
I needed this because, for logging purposes, I like to print the arguments after they are parsed. The problem was that the arguments are not printed in order, which was really annoying.

The custom action class flat out did not work for me: I had other arguments that used a different action, such as 'store_true', and default arguments also don't work, since the custom action is not called when an argument is absent from the command line. What worked for me was creating a wrapper class like this:
import collections
from argparse import ArgumentParser

class SortedArgumentParser():
    def __init__(self, *args, **kwargs):
        self.ap = ArgumentParser(*args, **kwargs)
        self.args_dict = collections.OrderedDict()

    def add_argument(self, *args, **kwargs):
        self.ap.add_argument(*args, **kwargs)
        # Also store the dest kwarg (this assumes dest is always passed explicitly)
        self.args_dict[kwargs['dest']] = None

    def parse_args(self):
        # Returns a dictionary sorted in add_argument order
        unsorted_dict = self.ap.parse_args().__dict__
        for unsorted_entry in unsorted_dict:
            self.args_dict[unsorted_entry] = unsorted_dict[unsorted_entry]
        return self.args_dict
The pro is that add_argument keeps the exact same signature as the original ArgumentParser's. The con is that if you want other methods, you will have to write wrappers for all of them. Luckily, all I ever used were add_argument and parse_args, so this served my purposes pretty well. You would also need to do more work if you wanted to use parent ArgumentParsers. A usage sketch follows.
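A hypothetical usage sketch (the argument names are mine):

parser = SortedArgumentParser(description='demo')
parser.add_argument('--alpha', dest='alpha')
parser.add_argument('--beta', dest='beta')
# invoked as: prog --beta 2 --alpha 1
print(parser.parse_args())
# OrderedDict([('alpha', '1'), ('beta', '2')])  -- ordered by add_argument, not by command line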
This is my simple solution based on the existing ones:

import argparse

class OrderedNamespace(argparse.Namespace):
    def __init__(self, **kwargs):
        self.__dict__["_order"] = [None]
        super().__init__(**kwargs)

    def __setattr__(self, attr, value):
        super().__setattr__(attr, value)
        if attr in self._order:
            self.__dict__["_order"].clear()
        self.__dict__["_order"].append(attr)

    def ordered(self):
        if self._order and self._order[0] is None:
            self._order.clear()
        return ((attr, getattr(self, attr)) for attr in self._order)

parser = argparse.ArgumentParser()
parser.add_argument('--test1', default=1)
parser.add_argument('--test2')
parser.add_argument('-s', '--slong', action='store_false')
parser.add_argument('--test3', default=3)

args = parser.parse_args(['--test2', '2', '--test1', '1', '-s'], namespace=OrderedNamespace())
print(args)
print(args.test1)
for a, v in args.ordered():
    print(a, v)
Output:
OrderedNamespace(_order=['test2', 'test1', 'slong'], slong=False, test1='1', test2='2', test3=3)
1
test2 2
test1 1
slong False
It allows actions in add_argument() (such as store_false above), which is harder to support with the customized action class solution.