EDIT 2: Since so many people are crying out against the bad design this use case can reveal, readers of this question and its answers should think twice before using it.
I'm trying to set a variable (not a property) by its name in Python:
foo = 'bar'
thefunctionimlookingfor('foo', 'baz')
print foo # should print baz
PS: a function to access a variable by its name (without eval) would be a plus!
EDIT: I do know dictionaries exist, and that this kind of usage is discouraged. I've chosen to use it for a very specific purpose (config file modification according to environment) that will keep my code easier to read.
When you want variably-named variables, it's time to use a dictionary:
data = {}
foo = 'bar'
data[foo] = 'baz'
print data['bar']
Dynamically setting variables in the local scope is not possible in Python 2.x without using exec, and not possible at all in Python 3.x. You can change the global scope by modifying the dictionary returned by globals(), but you actually shouldn't. Simply use your own dictionary instead.
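A minimal sketch of the dictionary approach next to the (discouraged) globals() route, using the question's names:

```python
# Preferred: your own dictionary for dynamically named values.
data = {}
data['foo'] = 'baz'

# Possible but discouraged: mutating the module's global namespace.
foo = 'bar'
globals()['foo'] = 'baz'  # rebinds the module-level name 'foo'

print(data['foo'], foo)  # baz baz
```

The dictionary keeps the dynamic names out of your real namespace, which is the whole point.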
You can do something like:
def thefunctionimlookingfor(a, b):
    globals()[a] = b
Usage:
>>> foo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'foo' is not defined
>>> thefunctionimlookingfor('foo', 'bar')
>>> foo
'bar'
But this is a terrible idea, as others have mentioned. Namespaces are a useful concept. Consider a redesign.
At the module level you can use setattr on the current module, which you can get from sys.modules:
import sys
setattr(sys.modules[__name__], 'name', 'value')
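For instance (a small sketch; the attribute name answer is arbitrary):

```python
import sys

# Fetch the current module object and set an attribute on it; at module
# level this is equivalent to assigning a global variable.
mod = sys.modules[__name__]
setattr(mod, 'answer', 42)
print(mod.answer)  # 42
```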
The locals() function returns a dictionary filled with the local variables. At module level, locals() and globals() are the same dictionary, so assigning through it works there; inside a function, CPython does not write changes back:
locals()['foo'] = 'baz'  # effective at module scope only
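One caveat worth demonstrating: in CPython, writes to the dict returned by locals() inside a function do not affect the real local variables, so this technique is reliable only at module scope.

```python
def try_locals():
    x = 1
    # In CPython, mutating the locals() dict inside a function does not
    # rebind the actual local variable.
    locals()['x'] = 99
    return x

print(try_locals())  # 1 in CPython
```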
Are you looking for functions like these? They allow modifying the local namespace of the calling frame. Note, though, that in CPython, writing to a function frame's f_locals historically has no effect on the real locals; only since PEP 667 (Python 3.13) is f_locals a write-through proxy, so set_var is unreliable on older versions.
import sys

def get_var(name):
    return sys._getframe(1).f_locals[name]

def set_var(name, value):
    sys._getframe(1).f_locals[name] = value

def del_var(name):
    del sys._getframe(1).f_locals[name]
In Python 3.6, the new Variable Annotations were introduced in the language.
But when the annotated type does not exist, two different things can happen:
>>> def test():
... a: something = 0
...
>>> test()
>>>
>>> a: something = 0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'something' is not defined
Why is the handling of a non-existent type different in these two cases? Wouldn't it potentially cause one to overlook undefined types in functions?
Notes
Tried with both Python 3.6 RC1 and RC2 - same behavior.
PyCharm highlights something as an "unresolved reference" both inside and outside the function.
The behaviour of the local variable (i.e. inside the function) is at least documented in the section Runtime Effects of Type Annotations:
Annotating a local variable will cause the interpreter to treat it as a local, even if it was never assigned to. Annotations for local variables will not be evaluated:
def f():
    x: NonexistentName  # No error.
And goes on to explain the difference for global variables:
However, if it is at a module or class level, then the type will be evaluated:
x: NonexistentName  # Error!

class X:
    var: NonexistentName  # Error!
The behaviour seems surprising to me, so I can only offer my guess as to the reasoning: if we put the code in a module, then Python wants to store the annotations.
# typething.py
def test():
    a: something = 0

test()

something = ...
a: something = 0
Then import it:
>>> import typething
>>> typething.__annotations__
{'a': Ellipsis}
>>> typething.test.__annotations__
{}
Why it's necessary to store it on the module object, but not on the function object - I don't have a good answer yet. Perhaps it is for performance reasons, since the annotations are made by static code analysis and those names might change dynamically:
...the value of having annotations available locally does not offset the cost of having to create and populate the annotations dictionary on every function call. Therefore annotations at function level are not evaluated and not stored.
The most direct answer to this (to complement wim's answer) comes from the issue tracker on GitHub where the proposal was discussed:
[..] Finally, locals. Here I think we should not store the types -- the value of having the annotations available locally is just not enough to offset the cost of creating and populating the dictionary on each function call.
In fact, I don't even think that the type expression should be evaluated during the function execution. So for example:
def side_effect():
    print("Hello world")

def foo():
    a: side_effect()
    a = 12
    return a

foo()
should not print anything. (A type checker would also complain that side_effect() is not a valid type.)
From the BDFL himself :-) neither is a dict created nor any evaluation performed.
Currently, function objects only store annotations as supplied in their definition:
def foo(a: int):
    b: int = 0

get_type_hints(foo)  # from typing
# {'a': <class 'int'>}
Creating another dictionary for the local variable annotations was apparently deemed too costly.
You can go to https://www.python.org/ftp/python/3.6.0/ and download the RC2 version to test annotations (the final 3.6.0, as wim said, was not yet released). I did, however, download it and try your code with the Python 3.6 interpreter, and no errors showed up.
You can also try writing the annotation as a string, so that its evaluation is deferred:
>>> a: 'something' = 0
This question already has answers here:
Can the python interpreter fail on redeclared functions?
(2 answers)
Closed 8 years ago.
For example:
def foo():
    print 'first foo'

def foo():
    print 'second foo'

foo()
silently produces: second foo
Today I copied and pasted a function definition in the same file and changed a few lines in the body of the second definition, but forgot to change the function name itself. I scratched my head for a long time looking at the output, and it took me a while to figure it out.
How can I force the interpreter to throw at least a warning on redefinition of a function? Thanks in advance.
How about using pylint?
pylint your_code.py
Let your_code.py be
1 def dup():
2 print 'a'
3 def dup():
4 print 'a'
5
6 dup()
pylint shows
C: 1,0: Missing docstring
C: 1,0:dup: Missing docstring
E: 3,0:dup: function already defined line 1 <--- HERE!!!!
C: 3,0:dup: Missing docstring
...
If you are using PyDev, you can find the duplication interactively: when you mouse over the second dup, it says Duplicated signature: dup.
This is one of Python's features. Functions are values just like integers, so you can pass them around and rebind them to names, just as you would in C++ using function pointers.
Look at this code:
def foo():   # we define a function and bind the function object to the name 'foo'
    print "this is foo"

foo()        # >>> this is foo

bar = foo    # we bind the name 'bar' to the same function object

def foo():   # a new function object is created and bound to the name 'foo'
    print "this is new foo"

foo()        # foo now points to the new object
             # >>> this is new foo

bar()        # but the old function object is still unmodified:
             # >>> this is foo
Thus interpreter works fine. In fact it is common to redefine functions when you are working with interactive interpreter, until you get it right. Or when you use decorators.
If you want to be warned about redefining something in python, you can use 'lint' tools, like pylint (see function-redefined (E0102))
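Silent redefinition is exactly what decorators rely on: @decorator is just sugar for rebinding the name. A small sketch (names here are illustrative):

```python
def shout(func):
    # Wrap func; the caller will rebind the original name to this wrapper.
    def wrapper():
        return func().upper()
    return wrapper

def greet():
    return "hi"

greet = shout(greet)  # the same rebinding that @shout would perform

print(greet())  # HI
```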
I think it is similar to what happens with variables (identifiers):
In [4]: a = 2
In [5]: a = 3
In [6]: a
Out[6]: 3
you don't see the interpreter whining about a being redefined.
EDIT: Somebody commented below, and I think it helps clarify my answer:
[this is due to] function objects are not treated differently from other objects, and
that names defined via def aren't treated differently from names
defined via other means
See the language reference about def being a reserved identifier.
You need to understand Python's philosophy of objects. Everything in Python is an object. When you create a function, you actually create an object of the class function and bind your function name to it.
When you redefine it, you simply replace the old object with a new one, just as you would when creating a new variable with the same name.
e.g.
>>> a=10
>>> print a
10
>>> a=20
>>> print a
20
In the same way, you can check the class of the function:
>>> def a():
... pass
...
>>> a.__class__
<type 'function'>
which indicates your function is actually an object, i.e. a name that can be rebound to any other object of the same class.
Well, you could check whether it already exists, like this:

def foo():
    pass

def check_foo(variable_dict):
    return 'foo' in variable_dict

>>> check_foo(locals())
True
>>> del foo
>>> check_foo(locals())
False
I have a class, and in that class I have a function. In the function, I have a string variable that holds the definitions of several Python functions.
From the function, I would like to create the functions that are defined in the variable, such that they are created in the global scope.
After this operation, I would like to be able to call the new functions from the global scope.
For example:
class MyClass:
    def create_functions(self):
        functions_to_create = """
def glob1():
    return "G1"

def glob2():
    return "G2"
"""
        # ----> HERE IS THE MISSING PART, LIKE RUNNING exec IN THE GLOBAL SCOPE <----

# The following function should then work:
def other_function_in_global_scope():
    print "glob1: %s, glob2: %s" % (glob1(), glob2())
What should be in the MISSING PART?
Thanks in advance!!!
In Python, callers can monkey-patch anything at any time, but if you just evaluate a bit of code in the global namespace, you run the risk of inadvertent symbol conflicts. I'd suggest instead that the customer provide a module, and that your code call functions in it if they are defined there (with default implementations otherwise).
That said, documentation suggests:
exec(functions_to_create, globals())
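A runnable sketch of that approach in Python 3 syntax (the function bodies are just the question's placeholders):

```python
class MyClass:
    def create_functions(self):
        functions_to_create = (
            "def glob1():\n"
            "    return 'G1'\n"
            "\n"
            "def glob2():\n"
            "    return 'G2'\n"
        )
        # Execute the source with the module's global dict as its namespace,
        # so the new functions land in the global scope.
        exec(functions_to_create, globals())

MyClass().create_functions()

def other_function_in_global_scope():
    return "glob1: %s, glob2: %s" % (glob1(), glob2())

print(other_function_in_global_scope())  # glob1: G1, glob2: G2
```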
A few things first: what is your reason for creating a function that creates other functions? What are you trying to do? There might be a better way. That said, here is another way to "create" a function that doesn't involve playing around with exec:
>>> def create_functions():
... global glob1
... def glob1():
... return "G1"
...
>>> glob1()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'glob1' is not defined
>>> create_functions()
>>> glob1()
'G1'
>>>
Edit
Injecting source code without exec (THIS IS NOT A GOOD IDEA AT ALL)
Have your customer submit their code, then just do a custom import.
The customer submits their code; save it as, say, custom.py.
In the code you want to let the customer hook into, do something like the following:
import os

if os.path.exists("custom.py"):
    import custom
    custom.inject()
That way they can give you their code, you call inject(), and they can change things.
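A sketch of this file-based variant; custom.py and its inject() are hypothetical names, and the file is written here only so the example runs:

```python
import importlib.util
import os

# Stand-in for the customer's submitted file (illustrative only).
with open("custom.py", "w") as f:
    f.write("def inject():\n    return 'customer code ran'\n")

result = None
if os.path.exists("custom.py"):
    # Load it explicitly by path rather than relying on sys.path.
    spec = importlib.util.spec_from_file_location("custom", "custom.py")
    custom = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(custom)
    result = custom.inject()

os.remove("custom.py")
print(result)  # customer code ran
```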
I'm using the Python execfile() function as a simple-but-flexible way of handling configuration files -- basically, the idea is:
# Evaluate the 'filename' file into the dictionary 'foo'.
foo = {}
execfile(filename, foo)
# Process all 'Bar' items in the dictionary.
for item in foo.values():
    if isinstance(item, Bar):
# process item
This requires that my configuration file has access to the definition of the Bar class. In this simple example, that's trivial; we can just define foo = {'Bar' : Bar} rather than an empty dict. However, in the real example, I have an entire module I want to load. One obvious syntax for that is:
foo = {}
exec 'from BarModule import *' in foo
execfile(filename, foo)
However, I've already imported BarModule in my top-level file, so it seems like I should be able to just directly define foo as the set of things defined by BarModule, without having to go through this chain of exec and import.
Is there a simple idiomatic way to do that?
Maybe you can use the __dict__ defined by the module:
>>> import os
>>> expr = 'getcwd()'  # avoid shadowing the builtin str
>>> eval(expr, os.__dict__)
Use the builtin vars() function to get the attributes of an object (such as a module) as a dict.
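For example, seeding the namespace dict from a module's contents (using os here as a stand-in for BarModule):

```python
import os

foo = {}
foo.update(vars(os))  # roughly "from os import *" into the dict

# Code evaluated with foo as its namespace can now use os's names directly:
print(eval('getcwd()', foo) == os.getcwd())  # True
```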
The typical solution is to use getattr:
>>> s = 'getcwd'
>>> getattr(os, s)()
I am editing PROSS.py to work with .cif files for protein structures. Inside the existing PROSS.py, there are the following functions (I believe that's the correct term if they're not associated with any class?), just existing within the .py file:
...
def unpack_pdb_line(line, ATOF=_atof, ATOI=_atoi, STRIP=string.strip):
...
...
def read_pdb(f, as_protein=0, as_rna=0, as_dna=0, all_models=0,
unpack=unpack_pdb_line, atom_build=atom_build):
I am adding an optons parser for command line arguments, and one of the options is to specify an alternate method to use besides unpack_pdb_line. So the pertinent part of the options parser is:
...
parser.add_option("--un", dest="unpack_method", default="unpack_pdb_line", type="string", help="Unpack method to use. Default is unpack_pdb_line.")
...
unpack=options.unpack_method
However, options.unpack_method is a string, and I need to use the function whose name the string holds. How do I use getattr etc. to convert the string into the actual function?
Thanks,
Paul
Usually you just use a dict and store (func_name, function) pairs:
unpack_options = { 'unpack_pdb_line' : unpack_pdb_line,
'some_other' : some_other_function }
unpack_function = unpack_options[options.unpack_method]
If you want to exploit the dictionaries (&c) that Python's already keeping on your behalf, I'd suggest:
import sys

def str2fun(astr):
    module, _, function = astr.rpartition('.')
    if module:
        __import__(module)
        mod = sys.modules[module]
    else:
        mod = sys.modules['__main__']  # or whatever's the "default module"
    return getattr(mod, function)
You'll probably want to check the function's signature (and catch exceptions to provide nicer error messages) e.g. via inspect, but this is a useful general-purpose function.
It's easy to add a dictionary of shortcuts as a fallback, if some known functions' full string names (including module/package qualifications) are unwieldy to express this way.
Note we don't use __import__'s result (it doesn't work right when the function is in a module inside some package, as __import__ returns the top-level name of the package... just accessing sys.modules after the import is more practical).
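Repeating the function here so the sketch is self-contained, usage looks like this:

```python
import os
import sys

def str2fun(astr):
    # Split "package.module.function" into its module and function parts.
    module, _, function = astr.rpartition('.')
    if module:
        __import__(module)
        mod = sys.modules[module]
    else:
        mod = sys.modules['__main__']
    return getattr(mod, function)

# A dotted name resolves through the imported module:
print(str2fun('os.path.join') is os.path.join)  # True
```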
vars()["unpack_pdb_line"]() will work too, or globals() or locals() will work in a similar way:
>>> def a():return 1
>>>
>>> vars()["a"]
<function a at 0x009D1230>
>>>
>>> vars()["a"]()
1
>>> locals()["a"]()
1
>>> globals()["a"]()
1
Cheers,
If you are taking input from a user, for the sake of security it is probably best to
use a hand-made dict which will accept only a well-defined set of admissible user inputs:
unpack_options = { 'unpack_pdb_line' : unpack_pdb_line,
'unpack_pdb_line2' : unpack_pdb_line2,
}
Ignoring security for a moment, let us note in passing that an easy way to
go from (strings of variable names) to (the value referenced by the variable name)
is to use the globals() builtin dict:
unpack_function=globals()['unpack_pdb_line']
Of course, that will only work if the variable unpack_pdb_line is in the global namespace.
If you need to reach into a package for a module, or into a module for a variable, then you could use this function:
import sys

def str_to_obj(astr):
    print('processing %s' % astr)
    try:
        return globals()[astr]
    except KeyError:
        try:
            __import__(astr)
            mod = sys.modules[astr]
            return mod
        except ImportError:
            module, _, basename = astr.rpartition('.')
            if module:
                mod = str_to_obj(module)
                return getattr(mod, basename)
            else:
                raise
You could use it like this:
str_to_obj('scipy.stats')
# <module 'scipy.stats' from '/usr/lib/python2.6/dist-packages/scipy/stats/__init__.pyc'>
str_to_obj('scipy.stats.stats')
# <module 'scipy.stats.stats' from '/usr/lib/python2.6/dist-packages/scipy/stats/stats.pyc'>
str_to_obj('scipy.stats.stats.chisquare')
# <function chisquare at 0xa806844>
It works for nested packages, modules, functions, or (global) variables.
function = eval_dottedname(name if '.' in name else "%s.%s" % (__name__, name))
Where eval_dottedname():
def eval_dottedname(dottedname):
    """
    >>> eval_dottedname("os.path.join") #doctest: +ELLIPSIS
    <function join at 0x...>
    >>> eval_dottedname("sys.exit") #doctest: +ELLIPSIS
    <built-in function exit>
    >>> eval_dottedname("sys") #doctest: +ELLIPSIS
    <module 'sys' (built-in)>
    """
    return reduce(getattr, dottedname.split(".")[1:],
                  __import__(dottedname.partition(".")[0]))
eval_dottedname() is the only one among all the answers that supports arbitrary names with multiple dots in them, e.g. 'datetime.datetime.now'. It doesn't work for nested modules that require an explicit import, but I can't even remember an example of such a module in the stdlib.
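In Python 3, reduce has moved to functools, so a self-contained version reads:

```python
from functools import reduce

def eval_dottedname(dottedname):
    # Import the leftmost module, then getattr through the remaining parts.
    return reduce(getattr, dottedname.split(".")[1:],
                  __import__(dottedname.partition(".")[0]))

import os
print(eval_dottedname("os.path.join") is os.path.join)  # True
```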