I was trying to create a Python wrapper for a Tk extension, so I looked at Tkinter.py to learn how to do it.
While looking at that file, I found that the following pattern appears many times: an internal method (hinted at by the leading "_" in the method name) is defined, and then a public method is defined that is simply bound to the internal method.
I want to know what the benefit of doing this is.
For example, in the code for class Misc:
def _register(self, func, subst=None, needcleanup=1):
    # docstring and implementation removed since they're not relevant
    ...
register = _register
Thank you.
Sometimes, you may want to change a method's behavior. For example, I could do this (hypothetically within the Misc class):
def _another_register(self, func, subst=None, needcleanup=1):
    ...

def change_register(self):
    self.register = self._another_register

def restore_register(self):
    self.register = self._register
This can be a pretty handy way to alter the behavior of certain pieces of code without subclassing (but it's generally not advisable to do this kind of thing except within the class itself).
From PEP 8:
In addition, the following special forms using leading or trailing underscores are recognized (these can generally be combined with any case convention):
_single_leading_underscore: weak "internal use" indicator. E.g. "from M import *" does not import objects whose name starts with an underscore.
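As a quick illustration of that last point, a minimal sketch (the module and function names here are made up for the example):

# mymodule.py (hypothetical module)
def public_helper():
    return "visible to star-imports"

def _internal_helper():
    return "skipped by star-imports"

# elsewhere, assuming mymodule.py is importable
from mymodule import *

public_helper()        # works
_internal_helper()     # NameError: the underscored name was not star-imported

# it is still reachable with an explicit import:
from mymodule import _internal_helper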
Well, I'm supposing there could be another internal callable that could have been used; it just didn't make it into the version you have. Generally, I think it is a good idea: you expose one symbol publicly, and internally it can be anything - a real method, a stubbed-out method, a debug version of the method, anything.
Related
In other languages, a general guideline that helps produce better code is to always make everything as hidden as possible. If in doubt about whether a variable should be private or protected, it's better to go with private.
Does the same hold true for Python? Should I use two leading underscores on everything at first, and only make them less hidden (only one underscore) as I need them?
If the convention is to use only one underscore, I'd also like to know the rationale.
Here's a comment I left on JBernardo's answer. It explains why I asked this question and also why I'd like to know why Python is different from the other languages:
I come from languages that train you to think everything should be only as public as needed and no more. The reasoning is that this will reduce dependencies and make the code safer to alter. The Python way of doing things in reverse -- starting from public and going towards hidden -- is odd to me.
When in doubt, leave it "public" - I mean, do not add anything to obscure the name of your attribute. If you have a class with some internal value, do not bother about it. Instead of writing:
class Stack(object):

    def __init__(self):
        self.__storage = []  # Too uptight

    def push(self, value):
        self.__storage.append(value)
write this by default:
class Stack(object):

    def __init__(self):
        self.storage = []  # No mangling

    def push(self, value):
        self.storage.append(value)
This is for sure a controversial way of doing things. Python newbies hate it, and even some old Python guys despise this default - but it is the default anyway, so I recommend you follow it, even if you feel uncomfortable.
If you really want to send the message "Can't touch this!" to your users, the usual way is to precede the variable with one underscore. This is just a convention, but people understand it and take double care when dealing with such stuff:
class Stack(object):

    def __init__(self):
        self._storage = []  # This is OK, but Pythonistas tend to be relaxed about it

    def push(self, value):
        self._storage.append(value)
This can be useful, too, for avoiding conflict between property names and attribute names:
class Person(object):

    def __init__(self, name, age):
        self.name = name
        self._age = age if age >= 0 else 0

    @property
    def age(self):
        return self._age

    @age.setter
    def age(self, age):
        if age >= 0:
            self._age = age
        else:
            self._age = 0
What about the double underscore? Well, we use the double underscore magic mainly to avoid accidental overriding of methods and name conflicts with superclasses' attributes. It can be pretty valuable if you write a class that is meant to be extended many times.
If you want to use it for other purposes, you can, but it is neither usual nor recommended.
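A minimal sketch of the kind of collision the mangling guards against (the class and method names here are made up):

# Base uses __step internally; a subclass can define its own __step
# without breaking the parent, because each name is mangled per class.
class Base(object):
    def run(self):
        return self.__step()      # compiled as self._Base__step

    def __step(self):
        return "Base step"

class Child(Base):
    def __step(self):             # becomes _Child__step, so no clash
        return "Child step"

print(Child().run())              # prints "Base step"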
EDIT: Why is this so? Well, the usual Python style does not emphasize making things private - on the contrary! There are many reasons for that - most of them controversial... Let us see some of them.
Python has properties
Today, most OO languages use the opposite approach: what should not be used should not be visible, so attributes should be private. Theoretically, this would yield more manageable, less coupled classes because no one would change the objects' values recklessly.
However, it is not so simple. For example, Java classes have many getters that only get the values and setters that only set the values. You need, let us say, seven lines of code to declare a single attribute - which a Python programmer would say is needlessly complex. Also, you end up writing a lot of code for what is effectively one public field, since in practice its value can still be changed through the getters and setters.
So why follow this private-by-default policy? Just make your attributes public by default. Of course, this is problematic in Java because if you decide to add some validation to your attribute, it would require you to change all:
person.age = age;
in your code to, let us say,
person.setAge(age);
setAge() being:
public void setAge(int age) {
    if (age >= 0) {
        this.age = age;
    } else {
        this.age = 0;
    }
}
So in Java (and other languages), the default is to use getters and setters anyway, because while they can be annoying to write, they can spare you a lot of time if you find yourself in the situation I've described.
However, you do not need to do it in Python since Python has properties. If you have this class:
class Person(object):

    def __init__(self, name, age):
        self.name = name
        self.age = age
...and then you decide to validate ages, you do not need to change the person.age = age pieces of your code. Just add a property (as shown below)
class Person(object):

    def __init__(self, name, age):
        self.name = name
        self._age = age if age >= 0 else 0

    @property
    def age(self):
        return self._age

    @age.setter
    def age(self, age):
        if age >= 0:
            self._age = age
        else:
            self._age = 0
Given that you can do it and still use person.age = age, why would you add private fields and getters and setters?
(Also, see Python is not Java and this article about the harms of using getters and setters.)
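For completeness, a quick usage sketch of the Person class above - callers keep using plain attribute assignment, and the validation happens transparently:

p = Person("Alice", 30)
p.age = -5        # goes through the age setter defined by the property
print(p.age)      # 0 - the negative value was clamped by the setter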
Everything is visible anyway - and trying to hide it complicates your work
Even in languages with private attributes, you can access them through some reflection/introspection library. And people do it a lot, in frameworks and for solving urgent needs. The problem is that introspection libraries are just a complicated way of doing what you could do with public attributes.
Since Python is a very dynamic language, adding this burden to your classes is counterproductive.
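For example, even the name-mangled attribute from the Stack example above is only one lookup away (a minimal sketch, reusing the double-underscore version of Stack from earlier):

s = Stack()                 # the __storage version from the first snippet
s.push(1)
print(s._Stack__storage)    # [1] - the "hidden" list, via its mangled name
print(vars(s))              # {'_Stack__storage': [1]} - nothing is really hidden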
The problem is not being able to see - it is being required to see
For a Pythonista, encapsulation is not the inability to see the internals of a class, but the possibility of not having to look at them. Encapsulation is the property of a component such that the user can use it without concerning themselves with the internal details. If you can use a component without bothering about its implementation, then it is encapsulated (in the opinion of a Python programmer).
Now, if you wrote a class that can be used without thinking about implementation details, there is no problem if someone wants to look inside the class for some reason. The point is: your API should be good, and the rest is details.
Guido said so
Well, this is not controversial: he said so, actually. (Look for "open kimono.")
This is culture
Yes, there are some reasons, but no critical reason. This is primarily a cultural aspect of programming in Python. Frankly, it could be the other way, too - but it is not. Also, you could just as easily ask the other way around: why do some languages use private attributes by default? For the same main reason as for the Python practice: because it is the culture of these languages, and each choice has advantages and disadvantages.
Since there already is this culture, you are well-advised to follow it. Otherwise, you will get annoyed by Python programmers telling you to remove the __ from your code when you ask a question in Stack Overflow :)
First - What is name mangling?
Name mangling is invoked when you are in a class definition and use __any_name or __any_name_, that is, two (or more) leading underscores and at most one trailing underscore.
class Demo:
    __any_name = "__any_name"
    __any_other_name_ = "__any_other_name_"
And now:
>>> [n for n in dir(Demo) if 'any' in n]
['_Demo__any_name', '_Demo__any_other_name_']
>>> Demo._Demo__any_name
'__any_name'
>>> Demo._Demo__any_other_name_
'__any_other_name_'
When in doubt, do what?
The ostensible use is to prevent subclassers from using an attribute that the class uses.
A potential value is in avoiding name collisions with subclassers who want to override behavior, so that the parent class functionality keeps working as expected. However, the example in the Python documentation is not Liskov substitutable, and no examples come to mind where I have found this useful.
The downsides are that it increases the cognitive load of reading and understanding a code base, especially when debugging, where you see the double-underscore name in the source and a mangled name in the debugger.
My personal approach is to intentionally avoid it. I work on a very large code base. The rare uses of it stick out like a sore thumb and do not seem justified.
You do need to be aware of it so you know it when you see it.
PEP 8
PEP 8, the Python standard library style guide, currently says (abridged):
There is some controversy about the use of __names.
If your class is intended to be subclassed, and you have attributes that you do not want subclasses to use, consider naming them with double leading underscores and no trailing underscores.
Note that only the simple class name is used in the mangled name, so if a subclass chooses both the same class name and attribute name,
you can still get name collisions.
Name mangling can make certain uses, such as debugging and __getattr__(), less convenient. However, the name mangling algorithm is well documented and easy to perform manually.
Not everyone likes name mangling. Try to balance the need to avoid accidental name clashes with potential use by advanced callers.
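Since the quote notes that the mangling algorithm is easy to perform manually, here is a small helper that reproduces it (an illustrative sketch; the authoritative rules live in the compiler):

def mangle(class_name, attr_name):
    """Reproduce Python's private-name mangling by hand (sketch)."""
    stripped = class_name.lstrip('_')
    # Mangle only names with two leading underscores, at most one trailing
    # underscore, defined inside a class whose name is not all underscores.
    if attr_name.startswith('__') and not attr_name.endswith('__') and stripped:
        return '_' + stripped + attr_name
    return attr_name

print(mangle('Demo', '__any_name'))          # _Demo__any_name
print(mangle('Demo', '__any_other_name_'))   # _Demo__any_other_name_
print(mangle('Demo', '__dunder__'))          # __dunder__  (left alone)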
How does it work?
If you prepend two underscores (without ending double underscores) to a name in a class definition, the name will be mangled, and an underscore followed by the class name will be prepended to it:
>>> class Foo(object):
... __foobar = None
... _foobaz = None
... __fooquux__ = None
...
>>> [name for name in dir(Foo) if 'foo' in name]
['_Foo__foobar', '__fooquux__', '_foobaz']
Note that names will only get mangled when the class definition is parsed:
>>> Foo.__test = None
>>> Foo.__test
>>> Foo._Foo__test
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: type object 'Foo' has no attribute '_Foo__test'
Also, those new to Python sometimes have trouble understanding what's going on when they can't manually access a name they see defined in a class definition. This is not a strong reason against it, but it's something to consider if you have a learning audience.
One Underscore?
If the convention is to use only one underscore, I'd also like to know the rationale.
When my intention is for users to keep their hands off an attribute, I tend to use only the one underscore, but that's because in my mental model, subclassers would have access to the name (which they always have, as they can easily spot the mangled name anyway).
If I were reviewing code that uses the __ prefix, I would ask why they're invoking name mangling, and if they couldn't do just as well with a single underscore, keeping in mind that if subclassers choose the same names for the class and class attribute there will be a name collision in spite of this.
I wouldn't say that practice produces better code. Visibility modifiers only distract you from the task at hand, and as a side effect force your interface to be used as you intended. Generally speaking, enforcing visibility prevents programmers from messing things up if they haven't read the documentation properly.
A far better solution is the route that Python encourages: your classes and variables should be well documented, and their behaviour should be clear. The source should be available. This is a far more extensible and reliable way to write code.
My strategy in Python is this:
Just write the damn thing; make no assumptions about how your data should be protected. This assumes that you write your code to create the ideal interface for your problem.
Use a leading underscore for stuff that probably won't be used externally, and isn't part of the normal "client code" interface.
Use double underscore only for things that are purely convenience inside the class, or will cause considerable damage if accidentally exposed.
Above all, it should be clear what everything does. Document it if someone else will be using it. Document it if you want it to be useful in a year's time.
As a side note, you should actually be going with protected in those other languages: you never know whether your class might be inherited later, or what it might be used for. It is best to only protect those variables that you are certain cannot or should not be used by foreign code.
You shouldn't start with private data and make it public as necessary. Rather, you should start by figuring out the interface of your object. I.e. you should start by figuring out what the world sees (the public stuff) and then figure out what private stuff is necessary for that to happen.
Other languages make it difficult to make private that which was once public; i.e. I'll break lots of code if I make my variable private or protected. But with properties in Python this isn't the case: I can maintain the same interface even while rearranging the internal data.
The difference between _ and __ is that Python actually makes an attempt to enforce the latter. Of course, it doesn't try really hard, but it does make it difficult. Having _ merely tells other programmers what the intention is; they are free to ignore it at their peril. But ignoring that rule is sometimes helpful. Examples include debugging, temporary hacks, and working with third-party code that wasn't intended to be used the way you use it.
There are already a lot of good answers to this, but I'm going to offer another one. This is also partially a response to people who keep saying that double underscore isn't private (it really is).
If you look at Java/C#, both of them have private/protected/public. All of these are compile-time constructs; they are only enforced at the time of compilation. If you were to use reflection in Java/C#, you could easily access a private method.
Now every time you call a function in Python, you are inherently using reflection. These pieces of code are the same in Python.
lst = []
lst.append(1)
getattr(lst, 'append')(1)
The "dot" syntax is only syntactic sugar for the latter piece of code - mostly because using getattr is already ugly with only one function call, and it just gets worse from there.
So with that, there can't be a Java/C# version of private, as Python doesn't compile the code. Python can't check whether a function is private or public at call time, because that information does not exist (and it has no knowledge of where the function is being called from).
Now with that information, the name mangling of the double underscore makes the most sense for achieving "private-ness". When a name that starts with '__' is used inside a class body, Python just performs the name mangling right there. It's just more syntactic sugar - syntactic sugar that allows the equivalent of 'private' in a language that only uses reflection for data member access.
Disclaimer: I have never heard anybody from the Python development team say anything like this. The real reason for the lack of "private" is cultural, but you'll also notice that most scripting/interpreted languages have no private. A strictly enforceable private is not practical at anything except compile time.
First: Why do you want to hide your data? Why is that so important?
Most of the time you don't really want to do it, but you do it because others are doing it.
If you really, really, really don't want people using something, add one underscore in front of it. That's it... Pythonistas know that things with one underscore are not guaranteed to work every time and may change without you knowing.
That's the way we live, and we're okay with that.
Using two underscores will make your class so painful to subclass that even you will not want to work that way.
The chosen answer does a good job of explaining how properties remove the need for private attributes, but I would also add that functions at the module level remove the need for private methods.
If you turn a method into a function at the module level, you remove the opportunity for subclasses to override it. Moving some functionality to the module level is more Pythonic than trying to hide methods with name mangling.
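A minimal sketch of what that looks like in practice (the class and helper names here are made up):

# Instead of a name-mangled "private" method on the class...
class Order(object):
    def __init__(self, prices):
        self.prices = prices

    def total(self):
        return _apply_tax(sum(self.prices))

# ...keep the helper as a module-level function. It stays reachable for
# testing, is clearly not part of the class's interface, and cannot be
# overridden by subclasses of Order.
def _apply_tax(amount, rate=0.2):
    return amount * (1 + rate)

print(Order([10, 20]).total())   # the subtotal with tax applied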
The following code snippet explains all the different cases:
two leading underscores (__a)
single leading underscore (_a)
no underscore (a)
class Test:

    def __init__(self):
        self.__a = 'test1'
        self._a = 'test2'
        self.a = 'test3'

    def change_value(self, value):
        self.__a = value
        return self.__a
Printing all valid attributes of the Test object:

testObj1 = Test()
valid_attributes = dir(testObj1)
print(valid_attributes)

['_Test__a', '__doc__', '__init__', '__module__', '_a', 'a', 'change_value']
Here, you can see that the name of __a has been changed to _Test__a to prevent this variable from being overridden by any subclass. This concept is known as "name mangling" in Python.
You can still access it like this:
testObj2 = Test()
print(testObj2._Test__a)

test1
Similarly, in the case of _a, the variable is there only to notify the developer that it should be used as an internal variable of the class; the Python interpreter won't do anything even if you access it, but it is not good practice.
testObj3 = Test()
print(testObj3._a)

test2
The variable a can be accessed from anywhere; it behaves like a public class attribute.

testObj4 = Test()
print(testObj4.a)

test3
Hope the answer helped you :)
At first glance it should be the same as for other languages (under "other" I mean Java or C++), but it isn't.
In Java you make private all variables that shouldn't be accessible from outside. At the same time, in Python you can't achieve this, since there is no "privateness" (as one of the Python principles says, "We're all adults"). So a double underscore means only "Guys, do not use this field directly". A single underscore has the same meaning, but it doesn't cause any headaches when you have to inherit from the class in question (just one example of a possible problem caused by a double underscore).
So, I'd recommend using a single underscore by default for "private" members.
"If in doubt about whether a variable should be private or protected, it's better to go with private." - yes, same holds in Python.
Some answers here mention 'conventions' but don't give links to those conventions. The authoritative guide for Python, PEP 8, states explicitly:
If in doubt, choose non-public; it's easier to make it public later than to make a public attribute non-public.
The distinction between public and private, and name mangling in Python have been considered in other answers. From the same link,
We don't use the term "private" here, since no attribute is really private in Python (without a generally unnecessary amount of work).
# Example program for Python name mangling
class Demo:
    __any_name = "__any_name"
    __any_other_name_ = "__any_other_name_"

print([n for n in dir(Demo) if 'any' in n])
# Output: ['_Demo__any_name', '_Demo__any_other_name_']
Is it Pythonic to store the expected exceptions of a function as attributes of the function itself? Or is it just a stinking bad practice?
Something like this
class MyCoolError(Exception):
    pass

def function(*args):
    """
    :raises: MyCoolError
    """
    # do something here
    if some_condition:
        raise MyCoolError

function.MyCoolError = MyCoolError
And then, in another module:
try:
    function(...)
except function.MyCoolError:
    ...
Pro: Anywhere I have a reference to my function, I also have a reference to the exception it can raise, and I don't have to import it explicitly.
Con: I "have" to repeat the name of the exception to bind it to the function. This could be done with a decorator, but that also adds complexity.
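For what it's worth, a rough sketch of what such a decorator could look like (purely illustrative; attach_exceptions is a made-up name):

def attach_exceptions(*exceptions):
    """Bind the given exception classes to the decorated function as attributes."""
    def decorator(func):
        for exc in exceptions:
            setattr(func, exc.__name__, exc)
        return func
    return decorator

@attach_exceptions(MyCoolError)
def function(*args):
    """:raises: MyCoolError"""
    raise MyCoolError   # whatever the real logic is, it may raise this

# callers can still write: except function.MyCoolError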
EDIT
Why am I doing this? Because I attach some methods in an irregular way to some classes, where I think a mixin is not worth it. Let's call it "tailored added functionality". For instance, let's say:
Class A uses methods fn1 and fn2
Class B uses methods fn2 and fn3
Class C uses fn4 ...
And so on, for about 15 classes.
So when I call obj_a.fn2(), I have to explicitly import the exception it may raise (and it is not in the module where classes A, B, or C live, but in another one where the shared methods live), which I find a little annoying. Apart from that, the standard style in the project I'm working on requires one import per line, so it gets pretty verbose.
In some code I have seen exceptions stored as class attributes, and I have found it pretty useful, like:
try:
    obj.fn()
except obj.MyCoolError:
    ...
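Concretely, that class-attribute pattern looks something like this (a sketch with made-up names):

class ParseError(Exception):
    pass

class Parser(object):
    ParseError = ParseError     # expose the exception on the class itself

    def fn(self):
        raise self.ParseError("bad input")

obj = Parser()
try:
    obj.fn()
except obj.ParseError:
    print("handled without a separate import of ParseError")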
I think it is not Pythonic. I also think that it does not provide much advantage over the standard way, which is simply to import the exception along with the function.
There is a reason (besides helping the interpreter) why Python programs use import statements to state where their code comes from: it helps in finding the code of the facilities you are using (e.g. your exception, in this case).
The whole idea smells of the declaration of exceptions, as it is possible in C++ and partly mandatory in Java. There are discussions among the language lawyers about whether this is a good idea or a bad one, and in the Python world the designers decided against it, so it is not Pythonic.
It also raises a whole bunch of further questions. What happens if your function A uses another function B which, later, is changed so that it can throw an exception (a valid thing in Python)? Are you then willing to change your function A to reflect that (or catch it in A)? Where would you want to draw the line - is using int(text) to convert a string to an int reason enough to "declare" that a ValueError can be thrown?
All in all I think it is not Pythonic, no.
Can you explain the concept of stubbing out functions or classes, taken from this article?

class Loaf:
    pass

This class doesn't define any methods or attributes, but syntactically, there needs to be something in the definition, so you use pass. This is a Python reserved word that just means "move along, nothing to see here". It's a statement that does nothing, and it's a good placeholder when you're stubbing out functions or classes.
thank you
stubbing out functions or classes
This refers to writing classes or functions but not yet implementing them. For example, maybe I create a class:
class Foo(object):
    def bar(self):
        pass

    def tank(self):
        pass
I've stubbed out the functions because I haven't yet implemented them. However, I don't think this is a great plan. Instead, you should do:
class Foo(object):
    def bar(self):
        raise NotImplementedError

    def tank(self):
        raise NotImplementedError
That way, if you accidentally call the method before it is implemented, you'll get an error instead of nothing happening.
A 'stub' is a placeholder class or function that doesn't do anything yet, but needs to be there so that the class or function in question is defined. The idea is that you can already use certain aspects of it (such as put it in a collection or pass it as a callback), even though you haven't written the implementation yet.
Stubbing is a useful technique in a number of scenarios, including:
Team development: Often, the lead programmer will provide class skeletons filled with method stubs and a comment describing what the method should do, leaving the actual implementation to other team members.
Iterative development: Stubbing allows for starting out with partial implementations; the code won't be complete yet, but it still compiles. Details are filled in over the course of later iterations.
Demonstrational purposes: If the content of a method or class isn't interesting for the purpose of the demonstration, it is often left out, leaving only stubs.
Note that you can stub functions like this:

def get_name(self) -> str: ...
def get_age(self) -> int: ...

(Yes, this is valid Python code!)

It can be useful to stub functions that are added dynamically to an object by a third-party library when you want to have type hints.

Happened to me... once :-)
Ellipsis ... is preferable to pass for stubbing.
pass means "do nothing", whereas ... means "something should go here" - it's a placeholder for future code. The effect is the same but the meaning is different.
Stubbing is a technique in software development. After you have planned a module or class, for example by drawing its UML diagram, you begin implementing it.
As you may have to implement a lot of methods and classes, you begin with stubs. This simply means that you only write the definition of a function down and leave the actual code for later. The advantage is that you won't forget methods and you can continue to think about your design while seeing it in code.
The reason for pass is that Python is indentation-dependent and expects one or more indented statements after a colon (such as after a class or function definition).
When you have no statements (as in the case of a stubbed out function or class), there still needs to be at least one indented statement, so you can use the special pass statement as a placeholder. You could just as easily put something with no effect like:
class Loaf:
    True
and that is also fine (but less clear than using pass in my opinion).
Is there a generally accepted way to comment functions in Python? Is the following acceptable?
#########################################################
# Create a new user
#########################################################
def add(self):
The correct way to do it is to provide a docstring. That way, help(add) will also spit out your comment.
def add(self):
    """Create a new user.

    Line 2 of comment...
    And so on...
    """
That's three double quotes to open the comment and another three double quotes to end it. You can also use any valid Python string. It doesn't need to be multiline, and double quotes can be replaced by single quotes.
See: PEP 257
Use docstrings.
This is the built-in suggested convention in PyCharm for describing a function using docstring comments:
def test_function(p1, p2, p3):
    """
    test_function does blah blah blah.

    :param p1: describe about parameter p1
    :param p2: describe about parameter p2
    :param p3: describe about parameter p3
    :return: describe what it returns
    """
    pass
Use a docstring, as others have already written.
You can even go one step further and add a doctest to your docstring, making automated testing of your functions a snap.
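For example, a docstring with an embedded doctest might look like this (a minimal sketch, using a simple standalone function for illustration):

def add(a, b):
    """Return the sum of a and b.

    >>> add(2, 3)
    5
    >>> add(-1, 1)
    0
    """
    return a + b

if __name__ == "__main__":
    import doctest
    doctest.testmod()   # runs and checks the examples embedded in docstrings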
Use a docstring:
A string literal that occurs as the first statement in a module, function, class, or method definition. Such a docstring becomes the __doc__ special attribute of that object.
All modules should normally have docstrings, and all functions and classes exported by a module should also have docstrings. Public methods (including the __init__ constructor) should also have docstrings. A package may be documented in the module docstring of the __init__.py file in the package directory.
String literals occurring elsewhere in Python code may also act as documentation. They are not recognized by the Python bytecode compiler and are not accessible as runtime object attributes (i.e. not assigned to __doc__), but two types of extra docstrings may be extracted by software tools:
String literals occurring immediately after a simple assignment at the top level of a module, class, or __init__ method are called "attribute docstrings".
String literals occurring immediately after another docstring are called "additional docstrings".
Please see PEP 258, "Docutils Design Specification" [2], for a detailed description of attribute and additional docstrings...
The principles of good commenting are fairly subjective, but here are some guidelines:
Function comments should describe the intent of a function, not the implementation
Outline any assumptions that your function makes with regards to system state. If it uses any global variables (tsk, tsk), list those.
Watch out for excessive ASCII art. Having long strings of hashes may seem to make the comments easier to read, but they can be annoying to deal with when comments change
Take advantage of language features that provide 'auto documentation', i.e., docstrings in Python, POD in Perl, and Javadoc in Java
I would go for a documentation practice that integrates with a documentation tool such as Sphinx.
The first step is to use a docstring:
def add(self):
    """ Method which adds stuff
    """
Read about using docstrings in your Python code.
As per the Python docstring conventions:
The docstring for a function or method should summarize its behavior and document its arguments, return value(s), side effects, exceptions raised, and restrictions on when it can be called (all if applicable). Optional arguments should be indicated. It should be documented whether keyword arguments are part of the interface.
There is no golden rule; rather, provide comments that mean something to the other developers on your team (if you have one), or even to yourself when you come back to the code six months down the road.
I would go a step further than just saying "use a docstring". Pick a documentation generation tool, such as pydoc or epydoc (I use epydoc in pyparsing), and use the markup syntax recognized by that tool. Run that tool often while you are doing your development, to identify holes in your documentation. In fact, you might even benefit from writing the docstrings for the members of a class before implementing the class.
While I agree that this should not be a comment, but a docstring as most (all?) answers suggest, I want to add numpydoc (a docstring style guide).
If you do it like this, you can (1) automatically generate documentation and (2) people will recognize the style and have an easier time reading your code.
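For reference, a small example of a function documented in the numpydoc style (a sketch):

def add(a, b):
    """Add two numbers.

    Parameters
    ----------
    a : int or float
        The first operand.
    b : int or float
        The second operand.

    Returns
    -------
    int or float
        The sum of ``a`` and ``b``.
    """
    return a + b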
You can use triple quotes to do it.
You can use single quotes:
def myfunction(para1, para2):
    '''
    The stuff inside the function
    '''
Or double quotes:
def myfunction(para1, para2):
    """
    The stuff inside the function
    """
The correct way is as follows:
import pandas as pd

def search_phone_state(phone_number_start, state, dataframe_path, separator):
    """
    returns records whose phone numbers begin with a phone_number_start and are from state
    """
    dataframe = pd.read_csv(filepath_or_buffer=dataframe_path, sep=separator, header=0)
    return dataframe[
        (pd.Series(dataframe["Phone"].values.tolist()).str.startswith(phone_number_start, na="False"))
        & (dataframe["State"] == state)
    ]
If you do:
help(search_phone_state)
It will print:
Help on function search_phone_state in module __main__:

search_phone_state(phone_number_start, state, dataframe_path, separator)
    returns records whose phone numbers begin with a phone_number_start and are from state
So I'm brand new to SQLAlchemy, and I'm trying to use the SQL Expression API to create a SELECT statement that specifies the exact columns to return. I found both a class and a function defined in the sqlalchemy.sql.expression module, and I'm not too sure which to use... Why do they have both a class and a function? When would you use one over the other? And would anyone be willing to explain why they need both in their library? It doesn't really make much sense to me, to be honest, other than just to confuse me. :) JK
Thanks for the help in advance!
Use the source.
Here's the implementation of the select function, from the source code:
def select(columns=None, whereclause=None, from_obj=[], **kwargs):
    """Returns a ``SELECT`` clause element.

    (... long docstring ...)
    """
    return Select(columns, whereclause=whereclause, from_obj=from_obj, **kwargs)
So, it is exactly the same.
The expression package provides Python functions to do everything. In some cases these functions return a class instance built verbatim from the function's arguments, and in other cases they compose an object from several components. The original idea was that the functions would be doing a lot more composition than they ended up doing in the end. In any case, the package prefers to stick to PEP 8 as far as classes being in CamelCase and functions being all lowercase, and wanted the front-end API to be all lowercase - so you have the public "constructor" functions.
The SQL expression language is very easy to grok if you start with the tutorial.
I think it's pretty much the same. The documentation says for select (the function):
The returned object is an instance of Select.
As you can pass the select function the same parameters that Select.__init__() accepts, I don't really see a difference. At first glance, the arguments of the class constructor seem to be a superset of the function's, but the function can be passed any of the constructor's keyword arguments.
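To make the equivalence concrete, here is a sketch using the older 0.x/1.3-style calling convention that matches the quoted source (the select() signature changed in SQLAlchemy 1.4, so treat this as illustrative, not current API documentation):

from sqlalchemy import Column, Integer, MetaData, String, Table, select
from sqlalchemy.sql.expression import Select

metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
)

# The lowercase function and the CamelCase class build equivalent statements.
stmt_from_function = select([users.c.id, users.c.name], users.c.id == 1)
stmt_from_class = Select([users.c.id, users.c.name], whereclause=users.c.id == 1)

print(stmt_from_function)   # both render the same SELECT ... WHERE users.id = :id_1
print(stmt_from_class)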