In other languages, a general guideline that helps produce better code is to always make everything as hidden as possible. If in doubt about whether a variable should be private or protected, it's better to go with private.
Does the same hold true for Python? Should I use two leading underscores on everything at first, and only make them less hidden (only one underscore) as I need them?
If the convention is to use only one underscore, I'd also like to know the rationale.
Here's a comment I left on JBernardo's answer. It explains why I asked this question and also why I'd like to know why Python is different from the other languages:
I come from languages that train you to think everything should be only as public as needed and no more. The reasoning is that this will reduce dependencies and make the code safer to alter. The Python way of doing things in reverse -- starting from public and going towards hidden -- is odd to me.
When in doubt, leave it "public" - I mean, do not add anything to obscure the name of your attribute. If you have a class with some internal value, do not bother hiding it. Instead of writing:
class Stack(object):
    def __init__(self):
        self.__storage = []  # Too uptight

    def push(self, value):
        self.__storage.append(value)
write this by default:
class Stack(object):
    def __init__(self):
        self.storage = []  # No mangling

    def push(self, value):
        self.storage.append(value)
This is for sure a controversial way of doing things. Python newbies hate it, and even some old Python guys despise this default - but it is the default anyway, so I recommend you follow it, even if you feel uncomfortable.
If you really want to send the message "Can't touch this!" to your users, the usual way is to precede the variable with one underscore. This is just a convention, but people understand it and take double care when dealing with such stuff:
class Stack(object):
    def __init__(self):
        self._storage = []  # The underscore says: internal, handle with care

    def push(self, value):
        self._storage.append(value)
This can be useful, too, for avoiding conflict between property names and attribute names:
class Person(object):
    def __init__(self, name, age):
        self.name = name
        self._age = age if age >= 0 else 0

    @property
    def age(self):
        return self._age

    @age.setter
    def age(self, age):
        if age >= 0:
            self._age = age
        else:
            self._age = 0
What about the double underscore? Well, we use the double underscore magic mainly to avoid accidental overriding of methods and name conflicts with superclasses' attributes. It can be pretty valuable if you write a class to be extended many times.
If you want to use it for other purposes, you can, but it is neither usual nor recommended.
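For illustration, here is a minimal sketch (the class names are invented) of how mangling keeps a superclass's attribute from clashing with an identically named attribute in a subclass:

class Counter(object):
    def __init__(self):
        self.__count = 0  # stored as _Counter__count

    def increment(self):
        self.__count += 1
        return self.__count

class NoisyCounter(Counter):
    def __init__(self):
        super(NoisyCounter, self).__init__()
        self.__count = 'noise'  # stored as _NoisyCounter__count - no clash

c = NoisyCounter()
print(c.increment())  # 1 - the superclass's counter still works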
EDIT: Why is this so? Well, the usual Python style does not emphasize making things private - on the contrary! There are many reasons for that - most of them controversial... Let us see some of them.
Python has properties
Today, most OO languages use the opposite approach: what should not be used should not be visible, so attributes should be private. Theoretically, this would yield more manageable, less coupled classes because no one would change the objects' values recklessly.
However, it is not so simple. For example, Java classes have many getters that only get the values and setters that only set the values. You need, let us say, seven lines of code to declare a single attribute - which a Python programmer would say is needlessly complex. Also, in practice all this code buys you little more than a public field, since anyone can still change the value through the getter and setter.
So why follow this private-by-default policy? Just make your attributes public by default. Of course, this is problematic in Java because if you decide to add some validation to your attribute, it would require you to change all:
person.age = age;
in your code to, let us say,
person.setAge(age);
setAge() being:
public void setAge(int age) {
    if (age >= 0) {
        this.age = age;
    } else {
        this.age = 0;
    }
}
So in Java (and other languages), the default is to use getters and setters anyway, because they are annoying to write but can spare you a lot of time if you find yourself in the situation I've described.
However, you do not need to do it in Python since Python has properties. If you have this class:
class Person(object):
    def __init__(self, name, age):
        self.name = name
        self.age = age
...and then you decide to validate ages, you do not need to change the person.age = age pieces of your code. Just add a property, as shown below:
class Person(object):
    def __init__(self, name, age):
        self.name = name
        self._age = age if age >= 0 else 0

    @property
    def age(self):
        return self._age

    @age.setter
    def age(self, age):
        if age >= 0:
            self._age = age
        else:
            self._age = 0
Since you can do that and still use person.age = age, why would you add private fields and getters and setters?
(Also, see Python is not Java and this article about the harms of using getters and setters.).
Everything is visible anyway - and trying to hide complicates your work
Even in languages with private attributes, you can access them through some reflection/introspection library. And people do it a lot, in frameworks and for solving urgent needs. The problem is that introspection libraries are just a complicated way of doing what you could do with public attributes.
Since Python is a very dynamic language, adding this burden to your classes is counterproductive.
The problem is not that it is possible to see - it is that you are required to see
For a Pythonista, encapsulation is not the inability to see the internals of classes, but the possibility of not having to look at them. Encapsulation is the property of a component such that the user can use it without concerning themselves with the internal details. If you can use a component without bothering yourself about its implementation, then it is encapsulated (in the opinion of a Python programmer).
Now, if you wrote a class that can be used without thinking about implementation details, there is no problem if someone wants to look inside the class for some reason. The point is: your API should be good, and the rest is details.
Guido said so
Well, this is not controversial: he said so, actually. (Look for "open kimono.")
This is culture
Yes, there are some reasons, but no critical reason. This is primarily a cultural aspect of programming in Python. Frankly, it could be the other way, too - but it is not. Also, you could just as easily ask the other way around: why do some languages use private attributes by default? For the same main reason as for the Python practice: because it is the culture of these languages, and each choice has advantages and disadvantages.
Since there already is this culture, you are well-advised to follow it. Otherwise, you will get annoyed by Python programmers telling you to remove the __ from your code when you ask a question on Stack Overflow :)
First - What is name mangling?
Name mangling is invoked when you are in a class definition and use __any_name or __any_name_, that is, two (or more) leading underscores and at most one trailing underscore.
class Demo:
    __any_name = "__any_name"
    __any_other_name_ = "__any_other_name_"
And now:
>>> [n for n in dir(Demo) if 'any' in n]
['_Demo__any_name', '_Demo__any_other_name_']
>>> Demo._Demo__any_name
'__any_name'
>>> Demo._Demo__any_other_name_
'__any_other_name_'
When in doubt, do what?
The ostensible use is to prevent subclassers from using an attribute that the class uses.
A potential value is in avoiding name collisions with subclassers who want to override behavior, so that the parent class functionality keeps working as expected. However, the example in the Python documentation is not Liskov substitutable, and no examples come to mind where I have found this useful.
The downsides are that it increases cognitive load for reading and understanding a code base, especially when debugging, where you see the double-underscore name in the source and a mangled name in the debugger.
My personal approach is to intentionally avoid it. I work on a very large code base. The rare uses of it stick out like a sore thumb and do not seem justified.
You do need to be aware of it so you know it when you see it.
PEP 8
PEP 8, the Python standard library style guide, currently says (abridged):
There is some controversy about the use of __names.
If your class is intended to be subclassed, and you have attributes that you do not want subclasses to use, consider naming them with double leading underscores and no trailing underscores.
Note that only the simple class name is used in the mangled name, so if a subclass chooses both the same class name and attribute name,
you can still get name collisions.
Name mangling can make certain uses, such as debugging and __getattr__(), less convenient. However, the name mangling algorithm is well documented and easy to perform manually.
Not everyone likes name mangling. Try to balance the need to avoid accidental name clashes with potential use by advanced callers.
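For a concrete (invented) illustration of that caveat, here is the collision you get when a subclass happens to reuse the parent's class name, say because it lives in a different module:

class A(object):
    def __init__(self):
        self.__x = 'parent'  # mangled to _A__x

class A(A):  # a subclass that is also named A
    def __init__(self):
        super(A, self).__init__()
        self.__x = 'child'  # also mangled to _A__x - collision!

print(A().__dict__)  # {'_A__x': 'child'} - the parent's value was overwritten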
How does it work?
If you prepend two underscores (without trailing double underscores) to a name in a class definition, the name will be mangled: an underscore and the class name are prepended to it on the object:
>>> class Foo(object):
...     __foobar = None
...     _foobaz = None
...     __fooquux__ = None
...
>>> [name for name in dir(Foo) if 'foo' in name]
['_Foo__foobar', '__fooquux__', '_foobaz']
Note that names only get mangled when the class definition itself is compiled; attributes added to the class afterwards are left alone:
>>> Foo.__test = None
>>> Foo.__test
>>> Foo._Foo__test
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: type object 'Foo' has no attribute '_Foo__test'
Also, those new to Python sometimes have trouble understanding what's going on when they can't manually access a name they see defined in a class definition. This is not a strong reason against it, but it's something to consider if you have a learning audience.
One Underscore?
If the convention is to use only one underscore, I'd also like to know the rationale.
When my intention is for users to keep their hands off an attribute, I tend to use only the one underscore, but that's because in my mental model, subclassers would have access to the name (which they always have, as they can easily spot the mangled name anyway).
If I were reviewing code that uses the __ prefix, I would ask why they're invoking name mangling, and if they couldn't do just as well with a single underscore, keeping in mind that if subclassers choose the same names for the class and class attribute there will be a name collision in spite of this.
I wouldn't say that practice produces better code. Visibility modifiers only distract you from the task at hand, and as a side effect force your interface to be used as you intended. Generally speaking, enforcing visibility prevents programmers from messing things up if they haven't read the documentation properly.
A far better solution is the route that Python encourages: your classes and variables should be well documented, and their behaviour clear. The source should be available. This is a far more extensible and reliable way to write code.
My strategy in Python is this:
Just write the damn thing, make no assumptions about how your data should be protected. This assumes that you write to create the ideal interfaces for your problems.
Use a leading underscore for stuff that probably won't be used externally, and isn't part of the normal "client code" interface.
Use double underscore only for things that are purely convenience inside the class, or will cause considerable damage if accidentally exposed.
Above all, it should be clear what everything does. Document it if someone else will be using it. Document it if you want it to be useful in a year's time.
As a side note, in those other languages you should actually be going with protected: you never know - your class might be inherited later, and you cannot foresee what it might be used for. It is best to only protect those variables that you are certain cannot or should not be used by foreign code.
You shouldn't start with private data and make it public as necessary. Rather, you should start by figuring out the interface of your object. I.e. you should start by figuring out what the world sees (the public stuff) and then figure out what private stuff is necessary for that to happen.
Other languages make it difficult to make private that which was once public: I'll break lots of code if I make my variable private or protected. But with properties in Python this isn't the case. Rather, I can maintain the same interface even while rearranging the internal data.
The difference between _ and __ is that Python actually makes an attempt to enforce the latter. Of course, it doesn't try really hard, but it does make it difficult. Having _ merely tells other programmers what the intention is; they are free to ignore it at their peril. But ignoring that rule is sometimes helpful. Examples include debugging, temporary hacks, and working with third-party code that wasn't intended to be used the way you use it.
There are already a lot of good answers to this, but I'm going to offer another one. This is also partially a response to people who keep saying that double underscore isn't private (it really is).
If you look at Java/C#, both of them have private/protected/public. All of these are compile-time constructs. They are only enforced at the time of compilation. If you were to use reflection in Java/C#, you could easily access a private method.
Now, every time you call a function in Python, you are inherently using reflection. These two pieces of code are equivalent in Python:
lst = []
lst.append(1)
getattr(lst, 'append')(1)
The "dot" syntax is only syntactic sugar for the latter piece of code. Mostly because using getattr is already ugly with only one function call. It just gets worse from there.
So with that, there can't be a Java/C# version of private, as Python doesn't compile the code. Python can't check whether a function is private or public at runtime, as that information is gone (and it has no knowledge of where the function is being called from).
Now with that information, the name mangling of the double underscore makes the most sense for achieving "private-ness". When a name starting with '__' is accessed on self inside a class, the mangling is simply applied right there. It's just more syntactic sugar - sugar that allows the equivalent of 'private' in a language that only uses reflection for data member access.
Disclaimer: I have never heard anybody from the Python development team say anything like this. The real reason for the lack of "private" is cultural, but you'll also notice that most scripting/interpreted languages have no private. A strictly enforceable private is not practical at anything except compile time.
First: Why do you want to hide your data? Why is that so important?
Most of the time you don't really want to do it; you do it because others are doing it.
If you really, really don't want people using something, add one underscore in front of it. That's it... Pythonistas know that things with one underscore are not guaranteed to work every time and may change without you knowing.
That's the way we live and we're okay with that.
Using two underscores will make your class so painful to subclass that even you will not want to work with it that way.
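A small sketch (names invented) of that pain: any subclass method that touches the parent's double-underscore attribute fails, because the name is mangled with the subclass's own name:

class Base(object):
    def __init__(self):
        self.__items = []  # stored as _Base__items

class Child(Base):
    def add(self, x):
        self.__items.append(x)  # looks up _Child__items

Child().add(1)  # AttributeError: 'Child' object has no attribute '_Child__items'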
The chosen answer does a good job of explaining how properties remove the need for private attributes, but I would also add that functions at the module level remove the need for private methods.
If you turn a method into a function at the module level, you remove the opportunity for subclasses to override it. Moving some functionality to the module level is more Pythonic than trying to hide methods with name mangling.
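For example, here is a small sketch (the names are invented) of moving a would-be private method to module level; a single leading underscore still signals "internal":

def _format_row(row):
    # Module-level helper: not exported, not mangled, no subclass surprises.
    return ', '.join(str(cell) for cell in row)

class Table(object):
    def __init__(self, rows):
        self.rows = rows

    def render(self):
        return '\n'.join(_format_row(row) for row in self.rows)

print(Table([[1, 2], [3, 4]]).render())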
The following code snippet illustrates all the different cases:
two leading underscores (__a)
single leading underscore (_a)
no underscore (a)
class Test:
    def __init__(self):
        self.__a = 'test1'
        self._a = 'test2'
        self.a = 'test3'

    def change_value(self, value):
        self.__a = value
        return self.__a
Printing all valid attributes of a Test object:
testObj1 = Test()
valid_attributes = dir(testObj1)
print(valid_attributes)
['_Test__a', '__doc__', '__init__', '__module__', '_a', 'a',
'change_value']
Here, you can see that the name of __a has been changed to _Test__a to prevent this variable from being overridden by any subclass. This concept is known as "name mangling" in Python.
You can still access it like this:
testObj2 = Test()
print(testObj2._Test__a)
test1
Similarly, in the case of _a, the variable is just there to notify the developer that it should be used as an internal variable of that class; the Python interpreter won't do anything even if you access it, but it is not good practice.
testObj3 = Test()
print(testObj3._a)
test2
The variable a can be accessed from anywhere; it behaves like a public class variable.
testObj4 = Test()
print(testObj4.a)
test3
Hope the answer helped you :)
At first glance it should be the same as for other languages (under "other" I mean Java or C++), but it isn't.
In Java, you make private all variables that shouldn't be accessible from outside. At the same time, in Python you can't achieve this, since there is no real "privateness" (as one of the Python principles says, "We're all adults"). So a double underscore means only "Guys, do not use this field directly". A single underscore carries the same meaning, while not causing any headaches when you have to inherit from the class in question (the inheritance trouble is just one example of the problems a double underscore can cause).
So, I'd recommend using a single underscore by default for "private" members.
"If in doubt about whether a variable should be private or protected, it's better to go with private." - yes, same holds in Python.
Some answers here talk about "conventions" but don't link to them. The authoritative guide for Python, PEP 8, states explicitly:
If in doubt, choose non-public; it's easier to make it public later than to make a public attribute non-public.
The distinction between public and private, and name mangling in Python have been considered in other answers. From the same link,
We don't use the term "private" here, since no attribute is really private in Python (without a generally unnecessary amount of work).
# Example program for Python name mangling
class Demo:
    __any_name = "__any_name"
    __any_other_name_ = "__any_other_name_"

[n for n in dir(Demo) if 'any' in n]
# Output: ['_Demo__any_name', '_Demo__any_other_name_']
I'd like to make a copy of a class, while updating all of its methods to refer to a new set of __globals__.
I was thinking of something like the code below. However, unlike types.FunctionType, the constructor for types.UnboundMethodType does not accept __globals__. Any suggestions on how to work around this?
def copy_class(old_class, new_module):
    """Copies a class, updating __globals__ of all methods to point to new_module"""
    new_dict = {}
    for name, entry in old_class.__dict__.items():
        if isinstance(entry, types.UnboundMethodType):
            entry = types.UnboundMethodType(name, None, old_class.__class__, globals=new_module.__dict__)
        new_dict[name] = entry
    return type(old_class.__name__, old_class.__bases__, new_dict)
The __dict__ values are functions, not unbound methods. The unbound method objects only get created on attribute access. If you are seeing unbound method objects in the __dict__, something weird happened with your class object before this function got to it.
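A quick demonstration (Python 2, to match the question's use of types.UnboundMethodType; the class is illustrative):

class C(object):
    def m(self):
        pass

print(type(C.__dict__['m']))  # <type 'function'> - a plain function
print(type(C.m))  # <type 'instancemethod'> - created on attribute access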
I don't know about you, but I generally don't like to use types for anything other than type checking (which I don't do very often ;-). I'd much rather inspect...
I have to preface this code by saying that I hope you have a really good reason for wanting to do this ;-) -- to me, it seems like just subclassing and overriding class properties should get the job done much more elegantly... However, if you really want to copy a class -- why not just execute its source again in the new namespace?
I've put together the following simple modules:
# test.py
# Just some test data
FOO = 1

class Bar(object):
    def subclass_method(self):
        print('Hello World!')

class Foo(Bar):
    def method(self):
        return FOO
And then something to do the heavy lifting:
import sys
import inspect

def copy_class(cls, new_globals):
    source = inspect.getsource(cls)
    globs = {}
    globs.update(sys.modules[cls.__module__].__dict__)
    globs.update(new_globals)
    exec(source, globs)
    return globs[cls.__name__]
# Check that it works...
import test

NewFoo = copy_class(test.Foo, {'FOO': 2})
print(NewFoo().method())
NewFoo().subclass_method()
print(test.Foo().method())
test.Foo().subclass_method()
This has some possibly desirable properties and some undesirable ones... First, it only works on classes that are inspectable. That's pretty much anything user-defined, so probably not too restrictive... It also might be a bit slower than other solutions that don't involve re-parsing the source string - but then again, it doesn't seem like this should be executed too frequently, so that's probably OK.
Now the "advantages"...
If a global is requested by a function but not supplied, this will use the global from the old namespace. If this behavior isn't desirable (i.e. you'd rather have the NameError), you can modify the function easily to remove it.
The "copy" doesn't inherit from the original. For most purposes, that probably doesn't matter, but it's a bit weird to have the copy of something inherit from the original ...
Some people might see the exec in here and immediately think "Oh no! exec!?!?! The world is about to end!!!". Frankly, that's a good default response. However, I'd argue that if you're copying a class that you plan to use later in the code, exec here is no more dangerous (after all, the class's code has already been executed once).
In statically typed languages, to solve the problem below I would use an interface (or an abstract class, etc.). But I wonder whether Python has a more "Pythonic" way of doing it.
So, consider the situation:
class MyClass(object):
    # ...
    def my_function(self, value_or_value_provider):
        if is_value_provider(value_or_value_provider):
            self._value_provider = value_or_value_provider
        else:
            self._value_provider = StandardValueProvider(value_or_value_provider)
            # `StandardValueProvider` just always returns the same value.
Above "value provider" is custom class, which has get_value() method. Of course, the idea is that it can be implemented by the user.
Now, the question is: what is the best way to implement is_value_provider()? I.e., what is the best way to distinguish between "single value" and "value provider"?
The first idea which came to my mind is to use inheritance: introduce a base class BaseValueProvider with an empty implementation, and say in the documentation that custom "value providers" must inherit from it. Then, in the is_value_provider() function, just check isinstance(objectToCheck, BaseValueProvider). What I don't like about this solution is that inheritance seems somehow redundant in this case (specifically in the case of Python), because we cannot even force those who derive from it to implement the get_value() method. Besides, for someone who wants to implement a custom "value provider", this solution implies a dependency on the module that exposes BaseValueProvider.
The other solution would be to use a "trait attribute", i.e. instead of checking the base class, check the existence of a particular attribute with the hasattr() function. We could check for the existence of the get_value() method itself. Or, if we are afraid that the name of the method is too common, we could check for a dedicated trait attribute, like is_my_library_value_provider. Then, say in the documentation that any custom "value provider" must have not only a get_value() method, but also is_my_library_value_provider. This second solution seems better, as it does not abuse inheritance and allows implementing custom "value providers" without depending on an additional library that provides the base class.
Could someone comment on which solution is preferable (or if there are other better ones), and why?
EDIT: Changed the example slightly to reflect the fact that the value provider is going to be stored and used later (probably multiple times).
I highly suggest using hasattr().
Your code will be highly readable, and via duck typing you can later on make it work with other types you have in mind.
Regarding getting confused with other objects that happen to have a get_value() function: Python idioms assume the coder is a responsible person who won't try to destroy the system they are writing code for, so a single hasattr(obj, "get_value") is enough. If the object has .get_value(), it can be assumed to be a value provider and not a value (otherwise, the value is the value itself; a .get_value() on a value that returns self would be rather useless).
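A minimal sketch of such a check, assuming the get_value() interface from the question:

def is_value_provider(obj):
    # Duck typing: having a get_value attribute is taken to mean "provider".
    return hasattr(obj, 'get_value')

class StandardValueProvider(object):
    def __init__(self, value):
        self._value = value

    def get_value(self):
        return self._value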
Specifically the ":int" part...
I assumed it somehow checked the type of the parameter at the time the function is called and perhaps raised an exception in the case of a violation. But the following runs without problems:
def some_method(param: str):
    print("blah")

some_method(1)

def some_method(param: int):
    print("blah")

some_method("asdfaslkj")
In both cases "blah" is printed - no exception raised.
I'm not sure what the name of the feature is so I wasn't sure what to google.
EDIT: OK, so it's http://www.python.org/dev/peps/pep-3107/. I can see how it'd be useful in frameworks that utilize metadata. It's not what I assumed it was. Thanks for the responses!
FOLLOW-UP QUESTION - Any thoughts on whether it's a good idea or bad idea to define my functions as def some_method(param:int) if I really only can handle int inputs - even if, as pep 3107 explains, it's just metadata - no enforcement as I originally assumed? At least the consumers of the methods will see clearly what I intended. It's an alternative to documentation. Think this is good/bad/waste of time? Granted, good parameter naming (unlike my contrived example) usually makes it clear what types are meant to be passed in.
It's not used for anything much - it's just there for experimentation (you can read them from within Python if you want, for example). They are called "function annotations" and are described in PEP 3107.
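For example, reading them back is just an attribute access (a small illustrative snippet):

def scale(vector: 'sequence of numbers', factor: float) -> list:
    return [x * factor for x in vector]

print(scale.__annotations__)
# {'vector': 'sequence of numbers', 'factor': <class 'float'>, 'return': <class 'list'>}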
I wrote a library that builds on them to do things like type checking (and more - for example, you can map more easily from JSON to Python objects), called pytyp (more info), but it's not very popular... (I should also add that the type-checking part of pytyp is not at all efficient - it can be useful for tracking down a bug, but you wouldn't want to use it across an entire program.)
[Update: I would not recommend using function annotations in general (i.e. with no particular use in mind, just as docs), because (1) they might eventually get used in a way that you didn't expect, and (2) the exact type of things is often not that important in Python (more exactly, it's not always clear how best to specify the type of something in a useful way - objects can be quite complex, and often only "parts" are used by any one function, with multiple classes implementing those parts in different ways...). This is a consequence of duck typing - see the "more info" link for related discussion on how Python's abstract base classes could be used to tackle this...]
Function annotations are what you make of them.
They can be used for documentation:
def kinetic_energy(mass: 'in kilograms', velocity: 'in meters per second'):
    ...
They can be used for pre-condition checking:
def validate(func, locals):
    for var, test in func.__annotations__.items():
        value = locals[var]
        msg = 'Var: {0}\tValue: {1}\tTest: {2.__name__}'.format(var, value, test)
        assert test(value), msg

def is_int(x):
    return isinstance(x, int)

def between(lo, hi):
    def _between(x):
        return lo <= x <= hi
    return _between

def f(x: between(3, 10), y: is_int):
    validate(f, locals())
    print(x, y)
>>> f(0, 31.1)
Traceback (most recent call last):
...
AssertionError: Var: y Value: 31.1 Test: is_int
Also see http://www.python.org/dev/peps/pep-0362/ for a way to implement type checking.
I'm not experienced in Python, but I assume the point is to annotate/declare the parameter type that the method expects. Whether or not the expected type is rigidly enforced at runtime is beside the point.
For instance, consider:
intToHexString(param:int)
Although the language may technically allow you to call intToHexString("Hello"), it's not semantically meaningful to do so. Having the :int as part of the method declaration helps to reinforce that.
It's basically just used for documentation. When someone examines the method signature, they'll see that param is labelled as an int, which tells them the author of the method expected an int to be passed.
Because Python programmers use duck typing, this doesn't mean you have to pass an int, but it tells you the code is expecting something "int-like". So you'll probably have to pass something basically "numeric" in nature, that supports arithmetic operations. Depending on the method it may have to be usable as an index, or it may not.
However, because it's syntax and not just a comment, the annotation is visible to any code that wants to introspect it. This opens up the possibility of writing a typecheck decorator that can enforce strict type checking on arbitrary functions; this allows you to put the type checking logic in one place, and have each method declare which parameters it wants strictly type checked (by attaching a type annotation) with a minimum on syntax, in a way that is visible to client programmers who are browsing method definitions to find out the interface.
Or you could do other things with those annotations. No standardized meaning has yet been developed. Maybe if someone comes up with a killer feature that uses them and has huge adoption, then it'll one day become part of the Python language, but I suspect the flexibility of using them however you want will be too useful to ever do that.
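A minimal sketch of such a decorator, built on the signature objects from PEP 362; everything here is illustrative, not a standard feature, and only annotations that are actual types are enforced:

import functools
import inspect

def typecheck(func):
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = func.__annotations__.get(name)
            # Skip documentation-style annotations; enforce only types.
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError('%s must be %s, got %r'
                                % (name, expected.__name__, value))
        return func(*args, **kwargs)
    return wrapper

@typecheck
def double(x: int) -> int:
    return 2 * x

double(3)  # fine
double('oops')  # raises TypeError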
You might also use the "-> returnValue" notation to indicate what type the function might return.
def mul(a: int, b: int) -> None:
    print(a*b)
Suppose I need to create my own small DSL that would use Python to describe a certain data structure. E.g. I'd like to be able to write something like
f(x) = some_stuff(a,b,c)
and have Python, instead of complaining about undeclared identifiers or attempting to invoke the function some_stuff, convert it to a literal expression for my further convenience.
It is possible to get a reasonable approximation to this by creating a class with properly redefined __getattr__ and __setattr__ methods and use it as follows:
e = Expression()
e.f[e.x] = e.some_stuff(e.a, e.b, e.c)
It would be cool though, if it were possible to get rid of the annoying "e." prefixes and maybe even avoid the use of []. So I was wondering, is it possible to somehow temporarily "redefine" global name lookups and assignments? On a related note, maybe there are good packages for easily achieving such "quoting" functionality for Python expressions?
I'm not sure it's a good idea, but I thought I'd give it a try. To summarize:
class PermissiveDict(dict):
    default = None

    def __getitem__(self, item):
        try:
            return dict.__getitem__(self, item)
        except KeyError:
            return self.default

def exec_with_default(code, default=None):
    ns = PermissiveDict()
    ns.default = default
    exec(code, ns)
    return ns
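Used like this (a tiny illustrative example), any name the executed code does not define simply evaluates to the default:

ns = exec_with_default('y = x + 1', default=10)
print(ns['y'])  # 11 - the unknown name x resolved to the default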
You might want to take a look at the ast or parser modules included with Python to parse, access and transform the abstract syntax tree (or parse tree, respectively) of the input code. As far as I know, the Sage mathematical system, written in Python, has a similar sort of precompiler.
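For instance, here is a quick, illustrative look at what ast gives you for the bracketed form of the example (the exact dump format varies between Python versions; note that the f(x) = ... form cannot be parsed at all, since a call is not a valid assignment target):

import ast

tree = ast.parse('f[x] = some_stuff(a, b, c)')
print(ast.dump(tree.body[0]))
# Assign(targets=[Subscript(value=Name(id='f'), ...)],
#        value=Call(func=Name(id='some_stuff'), ...))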
In response to Wai's comment, here's one fun solution that I've found. First of all, to explain once more what it does, suppose that you have the following code:
definitions = Structure()
definitions.add_definition('f[x]', 'x*2')
definitions.add_definition('f[z]', 'some_function(z)')
definitions.add_definition('g.i', 'some_object[i].method(param=value)')
where adding definitions implies parsing the left-hand sides and the right-hand sides and doing other ugly stuff. Now, one (not necessarily good, but certainly fun) approach here would allow you to write the above code as follows:
@my_dsl
def definitions():
    f[x] = x*2
    f[z] = some_function(z)
    g.i = some_object[i].method(param=value)
and have Python do most of the parsing under the hood.
The idea is based on the simple exec <code> in <environment> statement, mentioned by Ian, with one hackish addition. Namely, the bytecode of the function must be slightly tweaked and all local variable access operations (LOAD_FAST) switched to variable access from the environment (LOAD_NAME).
It is easier shown than explained: http://fouryears.eu/wp-content/uploads/pydsl/
There are various tricks you may want to apply to make it practical. For example, in the code presented at the link above you can't use builtin functions or language constructs like for loops and if statements within a @my_dsl function. You can make those work, however, by adding more behaviour to the Env class.
Update. Here is a slightly more verbose explanation of the same thing.