I have a method to validate input:
def validate_user_input(*args):
    for item in args:
        if not re.match('^[a-zA-Z0-9_-]+$', item):
And I'm calling it like this:
validate_user_input(var1, var2, ..., var7)
But those are generated from user input, and some of those can be missing. What would be the proper way to do that, without creating tons of if statements?
Variables are assigned from a JSON input like so, and the JSON input might not have some of the needed properties:
var1 = request.json.get('var1')
I assume they are <class 'NoneType'>
Here's the error: TypeError: expected string or buffer
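The error is reproducible in two lines: re.match requires a string (or bytes) as its second argument, and .get() returns None for a missing field:

```python
import re

value = None  # what request.json.get('varN') returns when the field is missing
try:
    re.match('^[a-zA-Z0-9_-]+$', value)
except TypeError as exc:
    print(exc)  # TypeError, matching the error in the question
```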
If your request.json object is a dict or dict-like, you can just pass a default value as the second argument to get.
If I understand correctly, you are generating the var_ variables via request.json.get('var_'), which will return either a string you want to validate or None if the field was missing.
If this is the case then you can just add a special case to validate_user_input for a None value:
def validate_user_input(*args):
    for item in args:
        if item is None:
            continue  # this is acceptable, don't do anything with it
        elif not re.match('^[a-zA-Z0-9_-]+$', item):
            ...
Or it may make more sense to store all of the values you are interested in in a dictionary:
wanted_keys = {'var1', 'var2', 'var3'}
## set intersection works in python3
present_keys = wanted_keys & request.json.keys()
## or for python 2 use a basic list comp
#present_keys = [key for key in request.json.keys() if key in wanted_keys]
actual_data = {key: request.json[key] for key in present_keys}
Then you would pass actual_data.values() as the argument list to validate_user_input.
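Putting the filtering and the validation together, a runnable sketch (a plain dict stands in for request.json, the sample values are invented, and validate_user_input is simplified to return a bool):

```python
import re

wanted_keys = {'var1', 'var2', 'var3'}
json_data = {'var1': 'alice_01', 'var3': 'bob-2'}  # stand-in for request.json

# keep only the wanted keys that are actually present
actual_data = {key: json_data[key] for key in wanted_keys & json_data.keys()}

def validate_user_input(*args):
    # simplified: True only if every argument matches the pattern
    return all(re.match('^[a-zA-Z0-9_-]+$', item) for item in args)

print(validate_user_input(*actual_data.values()))  # True
```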
If it really is possible that some var-variables are undefined when you call validate_user_input, why not just initialize them all (e.g. to the empty string '' so that your regex fails) before actually defining them?
An arbitrary typecasting function (shown below as cast) seems fairly straightforward:
print(type(variable))
variable = cast(variable,type) # where type is any type included in __builtins__
print(type(variable))
And the result:
>>> <original_type>
>>> <type>
Does such a function exist in python? I can't seem to find any reference to it if it does. If this function does not exist, please explain the rationale for why it does not.
As one example usage, I have a config with arbitrarily many values, and a schema with the desired type of each. I want to check that the specified value for each config variable can be cast to the corresponding type specified in the schema. Treating each as a dict below for convenience:
for variable in config.keys():
    val = config[variable]
    type_name = schema[variable]
    try:
        config[variable] = cast(val, type_name)
    except TypeError:
        print("Schema checking failed for variable {}".format(variable))
Ok, I think the comments have covered the matter in enough detail, so I'll just try to summarize my best understanding of them here. Most of this is by way of @juanpa.arrivillaga.
A standard Python casting operation like int(x) (more precisely, a type conversion operation) is actually a call to the __call__() method of an object. Types like int, float, str, etc. are all object classes and are all instances of the metaclass type. A call to one of these instances of type, e.g. int.__call__(), invokes the int constructor, which creates a new instance of that type and initializes it with the inputs to __call__().
In short, there is nothing special or different about the common python "type conversions" (e.g. int(x), str(40)) other than that the int and str objects are included in __builtins__.
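This can be verified directly in a couple of lines:

```python
# built-in types are themselves objects, instances of the metaclass `type`
assert isinstance(int, type)
assert isinstance(str, type)

# "casting" is just calling the type object, i.e. invoking type.__call__
x = int.__call__("42")  # exactly equivalent to int("42")
print(x, type(x))       # 42 <class 'int'>
```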
And to answer the original question, if type_name is an instance of the type class then the type_name.__call__() function simply declares and initializes a new instance of that type. Thus, one can simply do:
# convert x to type type_name
x = type_name(x)
However, this may raise an exception if x is not a valid input to the type_name constructor.
To cast a value to another type you can use the type itself: you can pass the type as an argument and call it inside a function, or you can look it up in the builtins module if you are sure the type is a builtin:
value = "1"
value = int(value) # set value to 1
value = 1
value = str(value) # set value to "1"
def cast(value, type_):
    return type_(value)

import builtins

def builtin_cast(value, name):
    # look the type up by name in the builtins module
    type_ = getattr(builtins, name, None)
    if isinstance(type_, type):
        return type_(value)
    raise ValueError(f"{name!r} is not a builtin type.")
value = cast("1", int) # set value to 1
value = cast(1, str) # set value to "1"
value = builtin_cast("1", "int") # set value to 1
value = builtin_cast(1, "str") # set value to "1"
In a few __init__ methods of different classes I have to use the following construct several times:
try:
    self.member_name = kwargs['member_name']
except:
    self.member_name = default_value
or as suggested by Moses Koledoye
self.member_name = kwargs.get('member_name', default_value)
I would like to have a method that inputs, say, the string 'member_name' and default_value and that the corresponding initialization gets produced. For example, if one inputs 'pi_approx' and 3.14 the resulting code is
self.pi_approx = kwargs.get('pi_approx', 3.14)
In this way I can replace a long sequence of these initializations by a loop along a list of all the required members and their default values.
This technique to emulate a switch statement is not the same thing, but kind of has a similar flavor.
I am not sure how to approach what I want to do.
Assuming that initializer(m_name, default_val) is the construction that gets replaced by
self.m_name = kwargs.get('m_name', default_val)
I would then use it by having lists member_names = [m_name1, m_name2, m_name3] and default_values = [def_val1, def_val2, def_val3] and calling
for m_name, d_val in zip(member_names, default_values):
    initializer(m_name, d_val)
This would replace a long list of try's and also make the code a bit more readable.
If your try/except was meant to handle KeyError, then you can use the get method of the kwargs dict which allows you to supply a default value:
self.member_name = kwargs.get('member_name', default)
Which can be extended to your list of attribute names using setattr:
for m_name, d_val in zip(member_names, default_values):
    setattr(self, m_name, kwargs.get(m_name, d_val))
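Put together, an __init__ built from such a list might look like this (the class name, attribute names, and defaults here are invented for illustration):

```python
class Config:
    # (name, default) pairs for every attribute we want to initialize
    DEFAULTS = [('pi_approx', 3.14), ('name', 'anonymous'), ('retries', 3)]

    def __init__(self, **kwargs):
        # each attribute takes the kwargs value if given, else its default
        for m_name, d_val in self.DEFAULTS:
            setattr(self, m_name, kwargs.get(m_name, d_val))

c = Config(pi_approx=3.14159)
print(c.pi_approx, c.name, c.retries)  # 3.14159 anonymous 3
```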
I am trying to invert a dictionary. I have a function called person_to_networks, which works fine, but my invert function gives me an error. It says 'function' object is not subscriptable.
The text file contains person's name, networks and friends' names. Here is the text
def person_to_networks(profiles_file):
    """
    (file open for reading) -> dict of {str: list of str}
    """
    # initialize data to be processed
This is the invert function:
def invert_networks_dict(person_to_networks):
    """
    (dict of {str: list of str}) -> dict of {str: list of str}

    Return a "network to people" dictionary based on the given
    "person to networks" dictionary.
    """
I appreciate your help in advance!
It looks like you are passing the person_to_network() function object as an argument to the invert_networks_dict() function. You are then treating the person_to_network variable inside the local scope of the function as both a dictionary object and a callable function object.
Consequently, you are raising an "object is not subscriptable" exception, as functions are not subscriptable in Python (e.g. they do not support the __getitem__() method like dictionaries do). More on what it means to be subscriptable here.
So I expect what you want to do is assign the dictionary returned from calling the person_to_network() function into a new variable.
def invert_networks_dict(person_to_networks):
    """
    (dict of {str: list of str}) -> dict of {str: list of str}

    Return a "network to people" dictionary based on the given
    'person to networks' dictionary.
    """
    networks_to_person = {}
    person_to_network_dict = person_to_networks(profiles_file)
    for person in person_to_network_dict:
        networks = person_to_network_dict[person]
        # etc
Note you don't need to pass the person_to_networks() function object to the invert_networks_dict() function. If Python doesn't find a variable match in the local function scope, it looks into any enclosing functions and then the global scope to find a match.
>>> def foo():
...     print("foo")
...
>>> def bar():
...     foo()
...
>>> bar()
foo
More on namespaces and scoping rules here.
Your first error is that you use the name person_to_networks both for the function and for the dictionary. You should give the dictionary a different name: when person_to_networks is used as an argument, Python sees that it has been defined as a function and tries to use it as such. In the code below I have renamed the dictionary from person_to_networks (the name of the function) to person_dict to show what needs to be done. In that case you would call the two functions separately and pass the resulting dictionary into the invert function. If you want to just call the invert function, pass in a reference to the person_to_networks() function, call it to obtain the dictionary, then use that dictionary.
Note that the way you have it makes person the dictionary and not the key from that dictionary.
Note the line I have marked with ** below: it would reset the networks_to_person[networks] value and wipe out the previous value, after which the if would append the same value again. Also, you should make the initial value [person] so that future values can be appended, and you may want to check that you have not already added person to that network's list.
def invert_networks_dict(person_to_networks):
    """
    (dict of {str: list of str}) -> dict of {str: list of str}

    Return a "network to people" dictionary based on the given
    "person to networks" dictionary.
    """
    networks_to_person = {}
    person_dict = person_to_networks(profiles_file)
    for person in person_dict:
        networks = person_dict[person]  # networks is a list of values under the person key
        # **networks_to_person[networks] = person**  # This should not be here
        for network in networks:  # Now put the person under each network in the networks list
            if network not in networks_to_person:  # Use each network as a key
                networks_to_person[network] = [person]
            else:
                networks_to_person[network].append(person)  # Note that append needs ()
Otherwise every key's value would have ended up as [person, person], assuming you had fixed the first value to be [person] and not person.
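As a side note, the two branches above can be collapsed with dict.setdefault; a minimal sketch of the inversion working on a plain dictionary:

```python
def invert_networks_dict(person_to_networks_dict):
    """Invert {person: [networks]} into {network: [people]}."""
    networks_to_person = {}
    for person, networks in person_to_networks_dict.items():
        for network in networks:
            # setdefault creates the empty list on first sight of a network
            networks_to_person.setdefault(network, []).append(person)
    return networks_to_person

print(invert_networks_dict({'Ann': ['chess', 'hiking'], 'Bob': ['chess']}))
# {'chess': ['Ann', 'Bob'], 'hiking': ['Ann']}
```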
I need a way to get a dictionary value if its key exists, or simply return None, if it does not.
However, Python raises a KeyError exception if you search for a key that does not exist. I know that I can check for the key, but I am looking for something more explicit. Is there a way to just return None if the key does not exist?
You can use dict.get()
value = d.get(key)
which will return None if key is not in d. You can also provide a different default value that will be returned instead of None:
value = d.get(key, "empty")
Wonder no more. It's built into the language.
>>> help(dict)
Help on class dict in module builtins:
class dict(object)
| dict() -> new empty dictionary
| dict(mapping) -> new dictionary initialized from a mapping object's
| (key, value) pairs
...
|
| get(...)
| D.get(k[,d]) -> D[k] if k in D, else d. d defaults to None.
|
...
Use dict.get
Returns the value for key if key is in the dictionary, else default. If default is not given, it defaults to None, so that this method never raises a KeyError.
You should use the get() method from the dict class
d = {}
r = d.get('missing_key', None)
This will result in r == None. If the key isn't found in the dictionary, the get function returns the second argument.
If you want a more transparent solution, you can subclass dict to get this behavior:
class NoneDict(dict):
    def __getitem__(self, key):
        return dict.get(self, key)
>>> foo = NoneDict([(1,"asdf"), (2,"qwerty")])
>>> foo[1]
'asdf'
>>> foo[2]
'qwerty'
>>> foo[3] is None
True
I usually use a defaultdict for situations like this. You supply a factory method that takes no arguments and creates a value when it sees a new key. It's more useful when you want to return something like an empty list on new keys (see the examples).
from collections import defaultdict
d = defaultdict(lambda: None)
print(d['new_key'])  # prints 'None'
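One caveat worth knowing: unlike dict.get, merely reading a missing key from a defaultdict inserts that key, which matters if you later iterate over or serialize the dictionary:

```python
from collections import defaultdict

d = defaultdict(lambda: None)
print(d['new_key'])    # None
print('new_key' in d)  # True -- the lookup itself created the entry

plain = {}
print(plain.get('new_key'))  # None
print('new_key' in plain)    # False -- get() never inserts
```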
A one line solution would be:
item['key'] if 'key' in item else None
This is useful when trying to add dictionary values to a new list and want to provide a default:
eg.
row = [item['key'] if 'key' in item else 'default_value']
As others have said above, you can use get().
But to check for a key, you can also do:
d = {}
if 'keyname' in d:
    # d['keyname'] exists
    pass
else:
    # d['keyname'] does not exist
    pass
You could use a dict object's get() method, as others have already suggested. Alternatively, depending on exactly what you're doing, you might be able use a try/except suite like this:
try:
    <do something with d[key]>
except KeyError:
    <deal with it not being there>
Which is considered to be a very "Pythonic" approach to handling the case.
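Filled in with concrete values (the multiplication is just a stand-in for "do something with d[key]"):

```python
d = {'a': 1}

def lookup(d, key):
    try:
        return d[key] * 10   # do something with the value
    except KeyError:
        return None          # deal with the key not being there

print(lookup(d, 'a'))        # 10
print(lookup(d, 'missing'))  # None
```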
For those using the dict.get technique for nested dictionaries, instead of explicitly checking for every level of the dictionary, or extending the dict class, you can set the default return value to an empty dictionary for every get except the last. Here's an example:
my_dict = {
    'level_1': {
        'level_2': {
            'level_3': 'more_data'
        }
    }
}
result = my_dict.get('level_1', {}).get('level_2', {}).get('level_3')
# result -> 'more_data'
none_result = my_dict.get('level_1', {}).get('what_level', {}).get('level_3')
# none_result -> None
WARNING: Please note that this technique only works if the expected key's value is a dictionary. If the key what_level did exist in the dictionary but its value was a string or integer etc., then it would've raised an AttributeError.
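If you need this often, the chain can be wrapped in a small helper that also survives intermediate values that are not dictionaries (deep_get is a hypothetical name, not a standard function):

```python
def deep_get(d, *keys):
    """Follow keys into nested dicts, returning None on any miss."""
    for key in keys:
        if not isinstance(d, dict):
            # an intermediate value was missing or not a dict
            return None
        d = d.get(key)
    return d

my_dict = {'level_1': {'level_2': {'level_3': 'more_data'}}}
print(deep_get(my_dict, 'level_1', 'level_2', 'level_3'))    # more_data
print(deep_get(my_dict, 'level_1', 'what_level', 'level_3'))  # None
```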
I was taken aback by the differences between Python 2 and Python 3, so I will answer based on what I ended up doing in Python 3. My objective was simple: check whether a JSON response in dictionary format reported an error. My dictionary is called token and the key I am looking for is "error". I look up the key "error", defaulting to None if it is absent, then check whether the value is None; if so, I proceed with my code. An else statement handles the case where the key "error" is present.
if token.get('error', None) is None:
    # do something
You can use a try-except block:

try:
    value = d['keyname']
except KeyError:
    value = None
d1 = {"One": 1, "Two": 2, "Three": 3}
d1.get("Four")

If you run this code there will be no KeyError, which means you can use dict.get() to avoid the error and keep executing your code.
If you have a more complex requirement that equates to a cache, this class might come in handy:
class Cache(dict):
    """ Provide a dictionary based cache

    Pass a function to the constructor that accepts a key and returns
    a value. This function will be called exactly once for any key
    required of the cache.
    """
    def __init__(self, fn):
        super().__init__()
        self._fn = fn

    def __getitem__(self, key):
        try:
            return super().__getitem__(key)
        except KeyError:
            value = self[key] = self._fn(key)
            return value
The constructor takes a function that is called with the key and should return the value for the dictionary. This value is then stored and retrieved from the dictionary next time. Use it like this...
def get_from_database(name):
    # Do expensive thing to retrieve the value from somewhere
    return value

answer = Cache(get_from_database)
x = answer[42]  # Gets the value from the database
x = answer[42]  # Gets the value directly from the dictionary
If a False result is all you need, there's also the hasattr built-in function; note that it checks for attributes, not dictionary keys, so for a plain dict it returns False even for keys that exist:

>>> e = dict()
>>> hasattr(e, 'message')
False
import datetime, json
x = {'alpha': {datetime.date.today(): 'abcde'}}
print(json.dumps(x))
The above code fails with a TypeError since keys of JSON objects need to be strings. The json.dumps function has a parameter called default that is called when the value of a JSON object raises a TypeError, but there seems to be no way to do this for the key. What is the most elegant way to work around this?
You can extend json.JSONEncoder to create your own encoder which will be able to deal with datetime.datetime objects (or objects of any type you desire) in such a way that a string is created which can be reproduced as a new datetime.datetime instance. I believe it should be as simple as having json.JSONEncoder call repr() on your datetime.datetime instances.
The procedure on how to do so is described in the json module docs.
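A minimal sketch of such an encoder, using isoformat() rather than repr() for readability; as noted in the question, this only helps for values, not keys:

```python
import datetime
import json

class DateTimeEncoder(json.JSONEncoder):
    """Encode date/datetime *values* as ISO-8601 strings."""
    def default(self, o):
        # default() is only consulted for objects json cannot serialize itself
        if isinstance(o, (datetime.date, datetime.datetime)):
            return o.isoformat()
        return super().default(o)

print(json.dumps({'when': datetime.date(2010, 9, 15)}, cls=DateTimeEncoder))
# {"when": "2010-09-15"}
```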
The json module checks the type of each value it needs to encode, and by default it only knows how to handle dicts, lists, tuples, strings, ints, floats, booleans and None :-)
Also of importance for you might be the skipkeys argument to the JSONEncoder.
After reading your comments I have concluded that there is no easy solution to have JSONEncoder encode the keys of dictionaries with a custom function. If you are interested you can look at the source: iterencode() calls _iterencode(), which calls _iterencode_dict(), which is where the TypeError gets raised.
Easiest for you would be to create a new dict with isoformatted keys like this:
import datetime, json

D = {datetime.datetime.now(): 'foo',
     datetime.datetime.now(): 'bar'}

new_D = {}
for k, v in D.items():
    new_D[k.isoformat()] = v

json.dumps(new_D)
Which returns '{"2010-09-15T23:24:36.169710": "foo", "2010-09-15T23:24:36.169723": "bar"}'. For niceties, wrap it in a function :-)
http://jsonpickle.github.io/ might be what you want. When facing a similar issue, I ended up doing:
to_save = jsonpickle.encode(THE_THING, unpicklable=False, max_depth=4, make_refs=False)
you can do
x = {'alpha': {datetime.date.today().strftime('%d-%m-%Y'): 'abcde'}}
If you really need to do it, you can monkeypatch json.encoder:
from _json import encode_basestring_ascii  # used when ensure_ascii=True (the default, where you want everything to be ascii)
from _json import encode_basestring  # used in any other case

def _patched_encode_basestring(o):
    """
    Monkey-patching Python's json serializer so it can serialize keys that are not strings!
    You can monkey-patch the ascii one the same way.
    """
    if isinstance(o, MyClass):
        return my_serialize(o)
    return encode_basestring(o)

json.encoder.encode_basestring = _patched_encode_basestring
JSON only accepts the data types mentioned here for encoding. As @supakeen mentioned, you can extend the JSONEncoder class in order to encode any values inside a dictionary, but not keys! If you want to encode keys, you have to do it on your own.
I used a recursive function in order to encode tuple-keys as strings and recover them later.
Here an example:
import copy
from typing import Any

def _tuple_to_string(obj: Any) -> Any:
    """Serialize tuple-keys to string representation. A tuple will obtain a leading '__tuple__' string and be decomposed into list representation.

    Args:
        obj (Any): Typically a dict, tuple, list, int, or string.

    Returns:
        Any: Input object with serialized tuples.
    """
    # deep copy object to avoid manipulation during iteration
    obj_copy = copy.deepcopy(obj)
    # if the object is a dictionary
    if isinstance(obj, dict):
        # iterate over every key
        for key in obj:
            # set for later to avoid modification in later iterations when this var does not get overwritten
            serialized_key = None
            # if key is a tuple
            if isinstance(key, tuple):
                # stringify the key
                serialized_key = f"__tuple__{list(key)}"
                # replace old key with encoded key
                obj_copy[serialized_key] = obj_copy.pop(key)
            # if the key was modified
            if serialized_key is not None:
                # do it again for the next nested dictionary
                obj_copy[serialized_key] = _tuple_to_string(obj[key])
            # else, just do it for the next dictionary
            else:
                obj_copy[key] = _tuple_to_string(obj[key])
    return obj_copy
This will turn a tuple of the form ("blah", "blub") into "__tuple__['blah', 'blub']" so that you can dump it using json.dumps() or json.dump(). You can use the leading "__tuple__" to detect such keys during decoding. Therefore, I used this function:
def _string_to_tuple(obj: Any) -> Any:
    """Convert serialized tuples back to original representation. Tuples need to have a leading "__tuple__" string.

    Args:
        obj (Any): Typically a dict, tuple, list, int, or string.

    Returns:
        Any: Input object with recovered tuples.
    """
    # deep copy object to avoid manipulation during iteration
    obj_copy = copy.deepcopy(obj)
    # if the object is a dictionary
    if isinstance(obj, dict):
        # iterate over every key
        for key in obj:
            # set for later to avoid modification in later iterations when this var does not get overwritten
            serialized_key = None
            # if key is a serialized tuple starting with the "__tuple__" affix
            if isinstance(key, str) and key.startswith("__tuple__"):
                # decode it to a tuple
                serialized_key = tuple(key.split("__tuple__")[1].strip("[]").replace("'", "").split(", "))
                # if every entry is a number in string representation
                if all(entry.isdigit() for entry in serialized_key):
                    # convert to integers
                    serialized_key = tuple(map(int, serialized_key))
                # replace old key with decoded key
                obj_copy[serialized_key] = obj_copy.pop(key)
            # if the key was modified
            if serialized_key is not None:
                # do it again for the next nested dictionary
                obj_copy[serialized_key] = _string_to_tuple(obj[key])
            # else, just do it for the next dictionary
            else:
                obj_copy[key] = _string_to_tuple(obj[key])
    # if a list was found instead
    elif isinstance(obj, list):
        for item in obj:
            _string_to_tuple(item)
    return obj_copy
Insert your custom logic for en-/decoding your instances by changing the

if isinstance(key, tuple):
    # stringify the key
    serialized_key = f"__tuple__{list(key)}"
in the _tuple_to_string function or the corresponding code block from the _string_to_tuple function, respectively:
if isinstance(key, str) and key.startswith("__tuple__"):
    # decode it to a tuple
    serialized_key = tuple(key.split("__tuple__")[1].strip("[]").replace("'", "").split(", "))
    # if every entry is a number in string representation
    if all(entry.isdigit() for entry in serialized_key):
        # convert to integers
        serialized_key = tuple(map(int, serialized_key))
Then, you can use it as usual:
>>> dct = {("L1", "L1"): {("L2", "L2"): "foo"}}
>>> json.dumps(_tuple_to_string(dct))
'{"__tuple__[\'L1\', \'L1\']": {"__tuple__[\'L2\', \'L2\']": "foo"}}'
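The key transformation itself round-trips; this minimal check mirrors the encode and decode expressions used in the two functions above:

```python
# mirror of the encode/decode expressions from the functions above
key = ('L1', 'L2')
encoded = f"__tuple__{list(key)}"
print(encoded)  # __tuple__['L1', 'L2']

decoded = tuple(encoded.split("__tuple__")[1].strip("[]").replace("'", "").split(", "))
print(decoded)  # ('L1', 'L2')
```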
Hope I could help!