I'm a beginner in OOP and I want to create a program with three classes, A, B, and C. Each instance of a class is defined by a set of characteristics (Achar1, Achar2, etc.).
The program is supposed to create uses, each comprising an element of A, an element of B, and an element of C, with a begin and end date. There are subclasses of A and B, so that some elements of A can only be connected to certain elements of B. The main functionality of the program is to list elements of A, B, and C with their attributes, list uses, and suggest new uses that would employ the least-used instances, to avoid repetition.
My first instinct is to use three dictionaries A, B, and C and a dictionary for uses. This seems intuitive and easy to import/export into JSON or similar. I tried to rewrite it to employ classes, but I can't see any gain in doing so: it's hard (?) to iterate over instances of classes, and they basically are dictionaries anyway, since this data doesn't need much else.
What am I doing wrong? What can I gain from switching from dictionaries to objects?
Edit: the code in question (dict-based) is here. The first version, where I tried to make it object oriented is here.
What can you gain from switching from dictionaries to objects?
This is a pretty vast question.
It is true that, in Python, an object is semantically close to a dictionary, since an object is almost equivalent to its __dict__ attribute (I will not detail this "almost" here, since it is far off topic).
I see two main benefits from using classes instead of dictionaries: abstraction, and comfort.
Abstraction
When you are in the design phase, especially for short to medium length programs, you generally want to write the skeleton of your code while thinking.
In a situation where you need to model interactions, it's more natural to think in classes, because it's halfway between speaking and programming.
This makes it easier to understand your own problem. In addition, it greatly improves the readability of your code, because it seems natural, even when the reader does not know anything about your code.
This brings you concepts such as inheritance and polymorphism, that enrich the abstraction offered by OOP.
Comfort
One of Python's many strengths is its data model. The many magic methods and attributes allow you to use very simple syntax. In addition, Python can take care of some operations in your place.
Here are some comparisons between imperative and object-oriented programming in Python.
Of course, everything in Python is an object, so I will use dot calls (foo.bar()) even in imperative examples.
File reading
Imperative way
f1 = open(in_path, 'r')
f2 = open(out_path, 'w')
for line in f1:
    processed = process(line)
    f2.write(processed)
# Oops, I forgot to close my files...
Object-oriented way
with open(in_path, 'r') as f1, open(out_path, 'w') as f2:
    for line in f1:
        processed = process(line)
        f2.write(processed)
# I don't have to close my files, Python did it for me
Note that for line in f makes extensive use of Python's object-oriented data model. Imagine the pain if this syntax did not exist (well, just try it in C).
Emptiness test
Imperative way
if len(some_list) == 0:
print("empty list")
Object-oriented way
if not some_list:
    print("empty list")
Iteration over a sequence
Imperative way
i = 0
while i < len(some_sequence):
    print(some_sequence[i])
    i += 1
Object-oriented way
for elt in some_sequence:
    print(elt)
But the actual strength of this data model is that it lets you redefine a lot of magic attributes. For instance, you can make complex things comparable just by implementing __lt__, __le__ and so on, which will redefine the behaviour of <, <=, and so on. Thenceforth, built-in functions like min, max or sort will understand how to compare your objects.
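For example, here is a minimal sketch of that idea (the Card class and its fields are invented for illustration): once __eq__ and __lt__ are defined, built-ins like min, max and sorted just work.
import functools

@functools.total_ordering
class Card:
    """Toy class: comparable by rank thanks to __eq__ and __lt__."""
    def __init__(self, rank, suit):
        self.rank = rank
        self.suit = suit

    def __eq__(self, other):
        return self.rank == other.rank

    def __lt__(self, other):
        return self.rank < other.rank

hand = [Card(7, "hearts"), Card(2, "spades"), Card(11, "clubs")]
print(min(hand).rank)                   # 2
print([c.rank for c in sorted(hand)])   # [2, 7, 11]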
This said, the use of mere dictionaries can be more than enough in a large variety of cases.
And at the end of the day, OOP is only one paradigm, and imperative programming just works as fine.
tl;dr The answer is Polymorphism
Classes are not inherently better than dicts, just applicable to different problem domains. If you want to create 3 objects that behave like dicts, then just use dicts. So, if A, B, and C are dicts and that solves your problem, then you are done: dicts are the right choice.
If they are, actually, different things with different data structures and perhaps radically different implementations, then dicts aren't the right answer.
In this case, however, (and this is where classes help you), classes provide polymorphism to help you with your stated problem. This is where multiple objects, though belonging to different classes, can each respond to the same method, each in their own way.
Thus, you can implement a "use" method for classes A, B, and C. OBjects of those classes will now respond to that method, even though the rest of their implementations are wildly different.
This is most easily represented in the __str__ function of basic classes.
>>> x = 1
>>> x.__str__()
'1'
>>> x = "hello"
>>> x.__str__()
'hello'
No one will ever confuse the function of an integer (i.e. doing math) with the function of a string (i.e. storing text), but both of them respond to the same message __str__. Which is why the following works:
>>> y = 1
>>> z = "hello"
>>> y.__str__() + z.__str__()
'1hello'
So in your case, you can have three classes A, B, and C with wildly different implementations, but give them each one similar method "use". Then when you ask them all their "use", they will all answer the same way, and you can compile your list of uses.
Your code is replete with long "case" statements, here is a great discussion on how to use polymorphism to remove them:
Ways to eliminate switch in code
You should be able to drastically reduce the amount of code you have, especially all of the repetitive parts, by eliminating all of the control statements (ifs) and replacing them with sending messages to objects and letting the objects figure out what to do. Your objects should have methods such as "import", "export" and "add".
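To make that concrete, here is a minimal sketch (the attribute names and the describe method are invented for illustration; your real classes would carry the characteristics from the question): each class answers the same message in its own way, so no central if/elif dispatch is needed.
class A:
    def __init__(self, name, achar1):
        self.name = name
        self.achar1 = achar1

    def describe(self):
        # Each class knows how to present itself.
        return f"A {self.name} (achar1={self.achar1})"

class B:
    def __init__(self, name, bchar1):
        self.name = name
        self.bchar1 = bchar1

    def describe(self):
        return f"B {self.name} (bchar1={self.bchar1})"

class C:
    def __init__(self, name):
        self.name = name

    def describe(self):
        return f"C {self.name}"

elements = [A("a1", 3), B("b1", "x"), C("c1")]
for elt in elements:
    print(elt.describe())   # same call, different behaviour per class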
What if you just extend each class from dict and personalize their behavior?
class classA(dict):
    '''Some personalization here...'''
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
I mean... they will still be dictionaries, will work as dictionaries, but you can define a few methods of your own.
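For instance, a minimal sketch of that idea (the total helper is invented for illustration):
class classA(dict):
    '''Still a dict, plus any domain-specific helpers you like.'''
    def total(self):
        # Hypothetical helper: sum of all the values.
        return sum(self.values())

a = classA(foo=1, bar=2)
a["baz"] = 3          # normal dict behaviour is untouched
print(a.total())      # 6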
Edit:
Ok... I read your code and tried to do some OOP from it. Not really the best OOP ever, but you can get the idea. Code is cleaner, easier to debug, easier to extend, easier maintenance, and so on...
Here is the dropbox folder:
https://www.dropbox.com/sh/8clyys2gcnodzkq/AACXhPnCW5d9fx0XY6Y-y_6ca?dl=0
What do you think?
There are a lot of discussions of Python vs Ruby, and I find them all completely unhelpful, because they all turn around why feature X sucks in language Y, or claim that language Y doesn't have X, although in fact it does. I also know exactly why I prefer Python, but that's also subjective, and wouldn't help anybody choosing, as they might not have the same tastes in development as I do.
It would therefore be interesting to list the differences, objectively. So no "Python's lambdas sucks". Instead explain what Ruby's lambdas can do that Python's can't. No subjectivity. Example code is good!
Don't put several differences in one answer, please. And vote up the ones you know are correct, and down those you know are incorrect (or are subjective). Also, differences in syntax are not interesting. We know Python does with indentation what Ruby does with brackets and ends, and that @ is called self in Python.
UPDATE: This is now a community wiki, so we can add the big differences here.
Ruby has a class reference in the class body
In Ruby you have a reference to the class (self) already in the class body. In Python you don't have a reference to the class until after the class construction is finished.
An example:
class Kaka
  puts self
end
self in this case is the class, and this code would print out "Kaka". There is no way to print out the class name or in other ways access the class from the class definition body in Python (outside method definitions).
All classes are mutable in Ruby
This lets you develop extensions to core classes. Here's an example of a rails extension:
class String
  def starts_with?(other)
    head = self[0, other.length]
    head == other
  end
end
Python (imagine there were no ''.startswith method):
def starts_with(s, prefix):
    return s[:len(prefix)] == prefix
You could use it on any sequence (not just strings). In order to use it you should import it explicitly e.g., from some_module import starts_with.
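For example, assuming the starts_with function above (a quick illustrative sketch):
print(starts_with("hello world", "hello"))   # True
print(starts_with([1, 2, 3, 4], [1, 2]))     # True -- works on any sequence, not just strings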
Ruby has Perl-like scripting features
Ruby has first class regexps, $-variables, the awk/perl line by line input loop and other features that make it more suited to writing small shell scripts that munge text files or act as glue code for other programs.
Ruby has first class continuations
Thanks to the callcc statement. In Python you can create continuations by various techniques, but there is no support built in to the language.
Ruby has blocks
With the "do" statement you can create a multi-line anonymous function in Ruby, which will be passed in as an argument into the method in front of do, and called from there. In Python you would instead do this either by passing a method or with generators.
Ruby:
amethod { |here|
  many=lines+of+code
  goes(here)
}
Python (Ruby blocks correspond to different constructs in Python):
with amethod() as here: # `amethod()` is a context manager
    many=lines+of+code
    goes(here)
Or
for here in amethod(): # `amethod()` is an iterable
    many=lines+of+code
    goes(here)
Or
def function(here):
    many=lines+of+code
    goes(here)
amethod(function) # `function` is a callback
Interestingly, the convenience statement in Ruby for calling a block is called "yield", which in Python will create a generator.
Ruby:
def themethod
  yield 5
end
themethod do |foo|
  puts foo
end
Python:
def themethod():
    yield 5
for foo in themethod():
    print foo
Although the principles are different, the result is strikingly similar.
Ruby supports functional style (pipe-like) programming more easily
myList.map(&:description).reject(&:empty?).join("\n")
Python:
descriptions = (f.description() for f in mylist)
"\n".join(filter(len, descriptions))
Python has built-in generators (which are used like Ruby blocks, as noted above)
Python has support for generators in the language. In Ruby 1.8 you can use the generator module, which uses continuations to create a generator from a block. Or, you could just use a block/proc/lambda! Moreover, in Ruby 1.9 Fibers are, and can be used as, generators, and the Enumerator class is a built-in generator.
docs.python.org has this generator example:
def reverse(data):
    for index in range(len(data)-1, -1, -1):
        yield data[index]
Contrast this with the above block examples.
Python has flexible name space handling
In Ruby, when you import a file with require, all the things defined in that file will end up in your global namespace. This causes namespace pollution. The solution to that is Ruby's modules. But if you create a namespace with a module, then you have to use that namespace to access the contained classes.
In Python, the file is a module, and you can import its contained names with from themodule import *, thereby polluting the namespace if you want. But you can also import just selected names with from themodule import aname, another or you can simply import themodule and then access the names with themodule.aname. If you want more levels in your namespace you can have packages, which are directories with modules and an __init__.py file.
Python has docstrings
Docstrings are strings that are attached to modules, functions and methods and can be introspected at runtime. This helps for creating such things as the help command and automatic documentation.
def frobnicate(bar):
    """frobnicate takes a bar and frobnicates it
    >>> bar = Bar()
    >>> bar.is_frobnicated()
    False
    >>> frobnicate(bar)
    >>> bar.is_frobnicated()
    True
    """
Ruby's equivalent is similar to Javadoc, and is located above the method instead of within it. It can be retrieved at runtime from the files by using 1.9's Method#source_location (example use).
Python has multiple inheritance
Ruby does not ("on purpose" -- see Ruby's website, see here how it's done in Ruby). It does reuse the module concept as a type of abstract classes.
Python has list/dict comprehensions
Python:
res = [x*x for x in range(1, 10)]
Ruby:
res = (0..9).map { |x| x * x }
Python:
>>> (x*x for x in range(10))
<generator object <genexpr> at 0xb7c1ccd4>
>>> list(_)
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
Ruby:
p = proc { |x| x * x }
(0..9).map(&p)
Python 2.7+:
>>> {x:str(y*y) for x,y in {1:2, 3:4}.items()}
{1: '4', 3: '16'}
Ruby:
>> Hash[{1=>2, 3=>4}.map{|x,y| [x,(y*y).to_s]}]
=> {1=>"4", 3=>"16"}
Python has decorators
Things similar to decorators can also be created in Ruby, and it can also be argued that they aren't as necessary as in Python.
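For reference, a minimal sketch of what a Python decorator looks like (the timed decorator and slow_sum function are invented for illustration):
import functools
import time

def timed(func):
    """Report how long each call to the wrapped function takes."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.6f}s")
        return result
    return wrapper

@timed
def slow_sum(n):
    return sum(range(n))

slow_sum(1_000_000)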
Syntax differences
Ruby requires "end" or "}" to close all of its scopes, while Python uses white-space only. There have been recent attempts in Ruby to allow for whitespace only indentation http://github.com/michaeledgar/seamless
Ruby has the concepts of blocks, which are essentially syntactic sugar around a section of code; they are a way to create closures and pass them to another method which may or may not use the block. A block can be invoked later on through a yield statement.
For example, a simple definition of an each method on Array might be something like:
class Array
  def each
    for i in self
      yield(i) # If a block has been passed, control will be passed here.
    end
  end
end
Then you can invoke this like so:
# Add five to each element.
[1, 2, 3, 4].each{ |e| puts e + 5 }
> [6, 7, 8, 9]
Python has anonymous functions/closures/lambdas, but it doesn't quite have blocks since it's missing some of the useful syntactic sugar. However, there's at least one way to get it in an ad-hoc fashion. See, for example, here.
Python Example
Functions are first-class variables in Python. You can declare a function, pass it around as an object, and overwrite it:
def func(): print "hello"
def another_func(f): f()
another_func(func)
def func2(): print "goodbye"
func = func2
This is a fundamental feature of modern scripting languages. JavaScript and Lua do this, too. Ruby doesn't treat functions this way; naming a function calls it.
Of course, there are ways to do these things in Ruby, but they're not first-class operations. For example, you can wrap a function with Proc.new to treat it as a variable--but then it's no longer a function; it's an object with a "call" method.
Ruby's functions aren't first-class objects
Ruby functions aren't first-class objects. Functions must be wrapped in an object to pass them around; the resulting object can't be treated like a function. Functions can't be assigned in a first-class manner; instead, a function in its container object must be called to modify them.
def func; p "Hello" end
def another_func(f); method(f)[] end
another_func(:func) # => "Hello"
def func2; print "Goodbye!"
self.class.send(:define_method, :func, method(:func2))
func # => "Goodbye!"
method(:func).owner # => Object
func # => "Goodbye!"
self.func # => "Goodbye!"
Ultimately all answers are going to be subjective at some level, and the answers posted so far pretty much prove that you can't point to any one feature that isn't doable in the other language in an equally nice (if not similar) way, since both languages are very concise and expressive.
I like Python's syntax. However, you have to dig a bit deeper than syntax to find the true beauty of Ruby. There is zenlike beauty in Ruby's consistency. While no trivial example can possibly explain this completely, I'll try to come up with one here just to explain what I mean.
Reverse the words in this string:
sentence = "backwards is sentence This"
When you think about how you would do it, you'd do the following:
Split the sentence up into words
Reverse the words
Re-join the words back into a string
In Ruby, you'd do this:
sentence.split.reverse.join ' '
Exactly as you think about it, in the same sequence, one method call after another.
In python, it would look more like this:
" ".join(reversed(sentence.split()))
It's not hard to understand, but it doesn't quite have the same flow. The subject (sentence) is buried in the middle. The operations are a mix of functions and object methods. This is a trivial example, but one discovers many different examples when really working with and understanding Ruby, especially on non-trivial tasks.
Python has a "we're all adults here" mentality. Thus, you'll find that Ruby has things like constants while Python doesn't (although Ruby's constants only raise a warning). The Python way of thinking is that if you want to make something constant, you should put the variable names in all caps and not change it.
For example, Ruby:
>> PI = 3.14
=> 3.14
>> PI += 1
(irb):2: warning: already initialized constant PI
=> 4.14
Python:
>>> PI = 3.14
>>> PI += 1
>>> PI
4.1400000000000006
You can import only specific functions from a module in Python. In Ruby, you import the whole list of methods. You could "unimport" them in Ruby, but it's not what it's all about.
EDIT:
let's take this Ruby module :
module Whatever
  def method1
  end
  def method2
  end
end
if you include it in your code :
include Whatever
you'll see that both method1 and method2 have been added to your namespace. You can't import only method1. You either import them both or you don't import them at all. In Python you can import only the methods of your choosing. If this had a name, maybe it would be called selective importing?
From Ruby's website:
Similarities
As with Python, in Ruby,...
There’s an interactive prompt (called irb).
You can read docs on the command line (with the ri command instead of pydoc).
There are no special line terminators (except the usual newline).
String literals can span multiple lines like Python’s triple-quoted strings.
Brackets are for lists, and braces are for dicts (which, in Ruby, are called “hashes”).
Arrays work the same (adding them makes one long array, but composing them like this a3 = [ a1, a2 ] gives you an array of arrays).
Objects are strongly and dynamically typed.
Everything is an object, and variables are just references to objects.
Although the keywords are a bit different, exceptions work about the same.
You’ve got embedded doc tools (Ruby’s is called rdoc).
Differences
Unlike Python, in Ruby,...
Strings are mutable.
You can make constants (variables whose value you don’t intend to change).
There are some enforced case-conventions (ex. class names start with a capital letter, variables start with a lowercase letter).
There’s only one kind of list container (an Array), and it’s mutable.
Double-quoted strings allow escape sequences (like \t) and a special “expression substitution” syntax (which allows you to insert the results of Ruby expressions directly into other strings without having to "add " + "strings " + "together"). Single-quoted strings are like Python’s r"raw strings".
There are no “new style” and “old style” classes. Just one kind.
You never directly access attributes. With Ruby, it’s all method calls.
Parentheses for method calls are usually optional.
There’s public, private, and protected to enforce access, instead of Python’s voluntary underscore convention.
“mixin’s” are used instead of multiple inheritance.
You can add or modify the methods of built-in classes. Both languages let you open up and modify classes at any point, but Python prevents modification of built-ins — Ruby does not.
You’ve got true and false instead of True and False (and nil instead of None).
When tested for truth, only false and nil evaluate to a false value. Everything else is true (including 0, 0.0, "", and []).
It’s elsif instead of elif.
It’s require instead of import. Otherwise though, usage is the same.
The usual-style comments on the line(s) above things (instead of docstrings below them) are used for generating docs.
There are a number of shortcuts that, although they give you more to remember, you quickly learn. They tend to make Ruby fun and very productive.
What Ruby has over Python are its scripting language capabilities. Scripting language in this context meaning to be used for "glue code" in shell scripts and general text manipulation.
These are mostly shared with Perl. First-class built-in regular expressions, $-Variables, useful command line options like Perl (-a, -e) etc.
Together with its terse yet expressive syntax, it is perfect for these kinds of tasks.
Python to me is more of a dynamically typed business language that is very easy to learn and has a neat syntax. Not as "cool" as Ruby but neat.
What Python has over Ruby, to me, is the vast number of bindings for other libs. Bindings to Qt and other GUI libs, many game support libraries, and so on. Ruby has much less. While much-used bindings (e.g., to databases) are of good quality, I found niche libs to be better supported in Python, even if for the same library there is also a Ruby binding.
So, I'd say both languages have its use and it is the task that defines which one to use. Both are easy enough to learn. I use them side-by-side. Ruby for scripting and Python for stand-alone apps.
I don't think "Ruby has X and Python doesn't, while Python has Y and Ruby doesn't" is the most useful way to look at it. They're quite similar languages, with many shared abilities.
To a large degree, the difference is what the language makes elegant and readable. To use an example you brought up, both do theoretically have lambdas, but Python programmers tend to avoid them, and constructs made using them do not look anywhere near as readable or idiomatic as in Ruby. So in Python, a good programmer will want to take a different route to solving the problem than he would in Ruby, just because it actually is the better way to do it.
I'd like to suggest a variant of the original question, "What does Ruby have that Python doesn't, and vice versa?" which admits the disappointing answer, "Well, what can you do with either Ruby or Python that can't be done in Intercal?" Nothing on that level, because Python and Ruby are both part of the vast royal family sitting on the throne of being Turing approximant.
But what about this:
What can be done gracefully and well in Python that can't be done in Ruby with such beauty and good engineering, or vice versa?
That may be much more interesting than mere feature comparison.
Python has an explicit, built-in syntax for list comprehensions and generators, whereas in Ruby you would use map and code blocks.
Compare
list = [ x*x for x in range(1, 10) ]
to
res = (1..10).map{ |x| x*x }
"Variables that start with a capital letter becomes constants and can't be modified"
Wrong. They can.
You only get a warning if you do.
Somewhat more on the infrastructure side:
Python has much better integration with C++ (via things like Boost.Python, SIP, and Py++) than Ruby, where the options seem to be either write directly against the Ruby interpreter API (which you can do with Python as well, of course, but in both cases doing so is low level, tedious, and error prone) or use SWIG (which, while it works and definitely is great if you want to support many languages, isn't nearly as nice as Boost.Python or SIP if you are specifically looking to bind C++).
Python has a number of web application environments (Django, Pylons/Turbogears, web.py, probably at least half a dozen others), whereas Ruby (effectively) has one: Rails. (Other Ruby web frameworks do exist, but seemingly have a hard time getting much traction against Rails). Is this aspect good or bad? Hard to say, and probably quite subjective; I can easily imagine arguments that the Python situation is better and that the Ruby situation is better.
Culturally, the Python and Ruby communities seem somewhat different, but I can only hint at this as I don't have that much experience interacting with the Ruby community. I'm adding this mostly in the hopes that someone who has a lot of experience with both can amplify (or reject) this statement.
Shamelessly copy/pasted from: Alex Martelli's answer on the "What's better about Ruby than Python" thread from the comp.lang.python mailing list.
Aug 18 2003, 10:50 am, Erik Max Francis wrote:
"Brandon J. Van Every" wrote: What's better about Ruby than Python? I'm sure there's something. What is it?
Wouldn't it make much more sense to ask Ruby people this, rather than Python people?
Might, or might not, depending on one's purposes -- for example, if one's purposes include a "sociological study" of the Python community, then putting questions to that community is likely to prove more revealing of information about it, than putting them elsewhere:-).
Personally, I gladly took the opportunity to follow Dave Thomas' one-day Ruby tutorial at last OSCON. Below a thin veneer of syntax differences, I find Ruby and Python amazingly similar -- if I was computing the minimum spanning tree among just about any set of languages, I'm pretty sure Python and Ruby would be the first two leaves to coalesce into an intermediate node:-).
Sure, I do get weary, in Ruby, of typing the silly "end" at the end of each block (rather than just unindenting) -- but then I do get to avoid typing the equally-silly ':' which Python requires at the start of each block, so that's almost a wash:-). Other syntax differences such as '@foo' versus 'self.foo', or the higher significance of case in Ruby vs Python, are really just about as irrelevant to me. Others no doubt base their choice of programming languages on just such issues, and they generate the hottest debates -- but to me that's just an example of one of Parkinson's Laws in action (the amount of debate on an issue is inversely proportional to the issue's actual importance).
Edit (by AM 6/19/2010 11:45): this is also known as "painting the bikeshed" (or, for short, "bikeshedding") -- the reference is, again, to Northcote Parkinson, who gave "debates on what color to paint the bikeshed" as a typical example of "hot debates on trivial topics". (end-of-Edit).
One syntax difference that I do find important, and in Python's favor -- but other people will no doubt think just the reverse -- is "how do you call a function which takes no parameters". In Python (like in C), to call a function you always apply the "call operator" -- trailing parentheses just after the object you're calling (inside those trailing parentheses go the args you're passing in the call -- if you're passing no args, then the parentheses are empty). This leaves the mere mention of any object, with no operator involved, as meaning just a reference to the object -- in any context, without special cases, exceptions, ad-hoc rules, and the like. In Ruby (like in Pascal), to call a function WITH arguments you pass the args (normally in parentheses, though that is not invariably the case) -- BUT if the function takes no args then simply mentioning the function implicitly calls it. This may meet the expectations of many people (at least, no doubt, those whose only previous experience of programming was with Pascal, or other languages with similar "implicit calling", such as Visual Basic) -- but to me, it means the mere mention of an object may EITHER mean a reference to the object, OR a call to the object, depending on the object's type -- and in those cases where I can't get a reference to the object by merely mentioning it I will need to use explicit "give me a reference to this, DON'T call it!" operators that aren't needed otherwise. I feel this impacts the "first-classness" of functions (or methods, or other callable objects) and the possibility of interchanging objects smoothly. Therefore, to me, this specific syntax difference is a serious black mark against Ruby -- but I do understand why others would think otherwise, even though I could hardly disagree more vehemently with them:-).
Below the syntax, we get into some important differences in elementary semantics -- for example, strings in Ruby are mutable objects (like in C++), while in Python they are not mutable (like in Java, or I believe C#). Again, people who judge primarily by what they're already familiar with may think this is a plus for Ruby (unless they're familiar with Java or C#, of course:-). Me, I think immutable strings are an excellent idea (and I'm not surprised that Java, independently I think, reinvented that idea which was already in Python), though I wouldn't mind having a "mutable string buffer" type as well (and ideally one with better ease-of-use than Java's own "string buffers"); and I don't give this judgment because of familiarity -- before studying Java, apart from functional programming languages where all data are immutable, all the languages I knew had mutable strings -- yet when I first saw the immutable-string idea in Java (which I learned well before I learned Python), it immediately struck me as excellent, a very good fit for the reference-semantics of a higher level programming language (as opposed to the value-semantics that fit best with languages closer to the machine and farther from applications, such as C) with strings as a first-class, built-in (and pretty crucial) data type.
Ruby does have some advantages in elementary semantics -- for example, the removal of Python's "lists vs tuples" exceedingly subtle distinction. But mostly the score (as I keep it, with simplicity a big plus and subtle, clever distinctions a notable minus) is against Ruby (e.g., having both closed and half-open intervals, with the notations a..b and a...b [anybody wants to claim that it's obvious which is which?-)], is silly -- IMHO, of course!). Again, people who consider having a lot of similar but subtly different things at the core of a language a PLUS, rather than a MINUS, will of course count these "the other way around" from how I count them:-).
Don't be misled by these comparisons into thinking the two languages are very different, mind you. They aren't. But if I'm asked to compare "capelli d'angelo" to "spaghettini", after pointing out that these two kinds of pasta are just about undistinguishable to anybody and interchangeable in any dish you might want to prepare, I would then inevitably have to move into microscopic examination of how the lengths and diameters imperceptibly differ, how the ends of the strands are tapered in one case and not in the other, and so on -- to try and explain why I, personally, would rather have capelli d'angelo as the pasta in any kind of broth, but would prefer spaghettini as the pastasciutta to go with suitable sauces for such long thin pasta forms (olive oil, minced garlic, minced red peppers, and finely ground anchovies, for example - but if you sliced the garlic and peppers instead of mincing them, then you should choose the sounder body of spaghetti rather than the thinner evanescence of spaghettini, and would be well advised to forego the anchovies and add instead some fresh spring basil [or even -- I'm a heretic...! -- light mint...] leaves -- at the very last moment before serving the dish). Ooops, sorry, it shows that I'm traveling abroad and haven't had pasta for a while, I guess. But the analogy is still pretty good!-)
So, back to Python and Ruby, we come to the two biggies (in terms of language proper -- leaving the libraries, and other important ancillaries such as tools and environments, how to embed/extend each language, etc, etc, out of it for now -- they wouldn't apply to all IMPLEMENTATIONS of each language anyway, e.g., Jython vs Classic Python being two implementations of the Python language!):
1. Ruby's iterators and codeblocks vs Python's iterators and generators;
2. Ruby's TOTAL, unbridled "dynamicity", including the ability to "reopen" any existing class, including all built-in ones, and change its behavior at run-time -- vs Python's vast but bounded dynamicity, which never changes the behavior of existing built-in classes and their instances.
Personally, I consider 1 a wash (the differences are so deep that I could easily see people hating either approach and revering the other, but on MY personal scales the pluses and minuses just about even up); and 2 a crucial issue -- one that makes Ruby much more suitable for "tinkering", BUT Python equally more suitable for use in large production applications. It's funny, in a way, because both languages are so MUCH more dynamic than most others, that in the end the key difference between them from my POV should hinge on that -- that Ruby "goes to eleven" in this regard (the reference here is to "Spinal Tap", of course). In Ruby, there are no limits to my creativity -- if I decide that all string comparisons must become case-insensitive, I CAN DO THAT! I.e., I can dynamically alter the built-in string class so that
a = "Hello World"
b = "hello world"
if a == b
print "equal!\n"
else
print "different!\n"
end WILL print "equal". In python, there is NO way I can do that.
For the purposes of metaprogramming, implementing experimental frameworks, and the like, this amazing dynamic ability of Ruby is extremely appealing. BUT -- if we're talking about large applications, developed by many people and maintained by even more, including all kinds of libraries from diverse sources, and needing to go into production in client sites... well, I don't WANT a language that is QUITE so dynamic, thank you very much. I loathe the very idea of some library unwittingly breaking other unrelated ones that rely on those strings being different -- that's the kind of deep and deeply hidden "channel", between pieces of code that LOOK separate and SHOULD BE separate, that spells d-e-a-t-h in large-scale programming. By letting any module affect the behavior of any other "covertly", the ability to mutate the semantics of built-in types is just a BAD idea for production application programming, just as it's cool for tinkering.
If I had to use Ruby for such a large application, I would try to rely on coding-style restrictions, lots of tests (to be rerun whenever ANYTHING changes -- even what should be totally unrelated...), and the like, to prohibit use of this language feature. But NOT having the feature in the first place is even better, in my opinion -- just as Python itself would be an even better language for application programming if a certain number of built-ins could be "nailed down", so I KNEW that, e.g., len("ciao") is 4 (rather than having to worry subliminally about whether somebody's changed the binding of name 'len' in the builtins module...).
I do hope that eventually Python does "nail down" its built-ins. But the problem's minor, since rebinding built-ins is quite a deprecated as well as a rare practice in Python. In Ruby, it strikes me as major -- just like the too powerful macro facilities of other languages (such as, say, Dylan) present similar risks in my own opinion (I do hope that Python never gets such a powerful macro system, no matter the allure of "letting people define their own domain-specific little languages embedded in the language itself" -- it would, IMHO, impair Python's wonderful usefulness for application programming, by presenting an "attractive nuisance" to the would-be tinkerer who lurks in every programmer's heart...).
Alex
Some others from:
http://www.ruby-lang.org/en/documentation/ruby-from-other-languages/to-ruby-from-python/
(If I have misinterpreted anything, or any of these have changed on the Ruby side since that page was updated, someone feel free to edit...)
Strings are mutable in Ruby, not in Python (where new strings are created by "changes").
Ruby has some enforced case conventions, Python does not.
Python has both lists and tuples (immutable lists). Ruby has arrays corresponding to Python lists, but no immutable variant of them.
In Python, you can directly access object attributes. In Ruby, it's always via methods.
In Ruby, parentheses for method calls are usually optional, but not in Python.
Ruby has public, private, and protected to enforce access, instead of Python’s convention of using underscores and name mangling.
Python has multiple inheritance. Ruby has "mixins."
And another very relevant link:
http://c2.com/cgi/wiki?PythonVsRuby
Which, in particular, links to another good one by Alex Martelli, who's been also posting a lot of great stuff here on SO:
http://groups.google.com/group/comp.lang.python/msg/028422d707512283
I'm unsure of this, so I add it as an answer first.
Python treats unbound methods as functions
That means you can call a method either like theobject.themethod() or by TheClass.themethod(anobject).
Edit: Although the difference between methods and functions is small in Python, and non-existent in Python 3, it also doesn't exist in Ruby, simply because Ruby doesn't have functions. When you define functions, you are actually defining methods on Object.
But you still can't take the method of one class and call it as a function; you would have to rebind it to the object you want to call it on, which is much more obtuse.
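A quick sketch of the Python side (the class and method names are invented for illustration):
class Greeter:
    def greet(self):
        return f"hello from {self!r}"

g = Greeter()
print(g.greet())           # the usual bound call
print(Greeter.greet(g))    # same method, called as a plain function

# The function can be passed around like any other callable:
say = Greeter.greet
print(say(g))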
I would like to mention the Python descriptor API, which allows one to customize object-to-attribute "communication". It is also noteworthy that, in Python, one is free to implement an alternative protocol by overriding the default implementation of the __getattribute__ method.
Let me give more details about the aforementioned.
Descriptors are regular classes with __get__, __set__ and/or __delete__ methods.
When the interpreter encounters something like anObj.anAttr, the following is performed:
the __getattribute__ method of anObj is invoked
__getattribute__ retrieves the anAttr object from the class dict
it checks whether the anAttr object has __get__, __set__ or __delete__ callable objects
the context (i.e., the caller object and class, or the value instead of the latter if we have a setter) is passed to the callable object
the result is returned.
As was mentioned, this is the default behavior. One is free to change the protocol by re-implementing __getattribute__.
This technique is a lot more powerful than decorators.
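Here is a minimal descriptor sketch (the Positive descriptor and the Order class are invented for illustration); it implements the __get__/__set__ protocol described above:
class Positive:
    """Data descriptor that rejects non-positive values."""
    def __set_name__(self, owner, name):
        self.name = "_" + name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self.name)

    def __set__(self, obj, value):
        if value <= 0:
            raise ValueError(f"{self.name[1:]} must be positive")
        setattr(obj, self.name, value)

class Order:
    quantity = Positive()
    price = Positive()

    def __init__(self, quantity, price):
        self.quantity = quantity   # goes through Positive.__set__
        self.price = price

o = Order(3, 9.99)
print(o.quantity)   # 3
# Order(0, 5) would raise ValueError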
Ruby has builtin continuation support using callcc.
Hence you can implement cool things like the amb-operator
At this stage, Python still has better unicode support
Python has docstrings and Ruby doesn't... Or if it does, they are not accessible as easily as in Python.
P.S. If I'm wrong, pretty please, leave an example? I have a workaround that I could monkeypatch into classes quite easily, but I'd like to have a docstring kind of feature in a "native way".
Ruby has a line by line loop over input files (the '-n' flag) from the commandline so it can be used like AWK. This Ruby one-liner:
ruby -ne 'END {puts $.}'
will count lines like the AWK one-liner:
awk 'END{print NR}'
Ruby gets this feature from Perl, which took it from AWK as a way of getting sysadmins on board with Perl without having to change the way they do things.
Ruby has sigils and twigils, Python doesn't.
Edit: And one very important thing that I forgot (after all, the previous was just to flame a little bit :-p):
Python has a JIT compiler (Psyco), a slightly lower-level language for writing faster code (Pyrex), and the ability to add inline C++ code (Weave).
My Python's rusty, so some of these may be in Python and I just don't remember or never learned them in the first place, but here are the first few that I thought of:
Whitespace
Ruby handles whitespace completely differently. For starters, you don't need to indent anything (which means it doesn't matter if you use 4 spaces or 1 tab). It also does smart line continuation, so the following is valid:
def foo(bar,
cow)
Basically, if you end with an operator, it figures out what is going on.
Mixins
Ruby has mixins which can extend instances instead of full classes:
module Humor
  def tickle
    "hee, hee!"
  end
end
a = "Grouchy"
a.extend Humor
a.tickle » "hee, hee!"
Enums
I'm not sure if this is the same as generators, but as of Ruby 1.9, Ruby has enums, so
>> enum = (1..4).to_enum
=> #<Enumerator:0x1344a8>
Reference: http://blog.nuclearsquid.com/writings/ruby-1-9-what-s-new-what-s-changed
"Keyword Arguments"
Both of the items listed there are supported in Ruby, although you can't skip default values like that.
You can either go in order
def foo(a, b=2, c=3)
puts "#{a}, #{b}, #{c}"
end
foo(1,3) >> 1, 3, 3
foo(1,c=5) >> 1, 5, 3
c >> 5
Note that c=5 actually assigns the value 5 to the variable c in the calling scope, and passes 5 as the second positional argument, so it ends up in the parameter b.
or you can do it with hashes, which address the second issue
def foo(a, others)
  others[:b] = 2 unless others.include?(:b)
  others[:c] = 3 unless others.include?(:c)
  puts "#{a}, #{others[:b]}, #{others[:c]}"
end
foo(1,:b=>3) >> 1, 3, 3
foo(1,:c=>5) >> 1, 2, 5
Reference: The Pragmatic Programmer's Guide to Ruby
You can have code in the class definition in both Ruby and Python. However, in Ruby you have a reference to the class (self). In Python you don't have a reference to the class, as the class isn't defined yet.
An example:
class Kaka
  puts self
end
self in this case is the class, and this code would print out "Kaka". There is no way to print out the class name or in other ways access the class from the class definition body in Python.
Syntax is not a minor thing, it has a direct impact on how we think. It also has a direct effect on the rules we create for the systems we use. As an example we have the order of operations because of the way we write mathematical equations or sentences. The standard notation for mathematics allows people to read it more than one way and arrive at different answers given the same equation. If we had used prefix or postfix notation we would have created rules to distinguish what the numbers to be manipulated were rather than only having rules for the order in which to compute values.
The standard notation makes it plain what numbers we are talking about while making the order in which to compute them ambiguous. Prefix and postfix notation make the order in which to compute plain while making the numbers ambiguous. Python would already have multiline lambdas if it were not for the difficulties caused by the syntactic whitespace. (Proposals do exist for pulling this kind of thing off without necessarily adding explicit block delimiters.)
I find conditions where I want something to occur if a condition is false much easier to write with the unless statement in Ruby than with the semantically equivalent "if-not" construction in Ruby or other languages, for example. If most of the languages that people are using today are equal in power, how can the syntax of each language be considered a trivial thing? After specific features like blocks and inheritance mechanisms etc., syntax is the most important part of a language, hardly a superficial thing.
What is superficial are the aesthetic qualities of beauty that we ascribe to syntax. Aesthetics have nothing to do with how our cognition works, syntax does.
Surprised to see nothing mentioned of Ruby's "method missing" mechanism. I'd give the find_by_... methods in Rails as an example of the power of that language feature. My guess is that something similar could be implemented in Python, but to my knowledge it isn't there natively.
Another difference in lambdas between Python and Ruby is demonstrated by Paul Graham's Accumulator Generator problem. Reprinted here:
Write a function foo that takes a number n and returns a function that takes a number i, and returns n incremented by i.
Note: (a) that's number, not integer, (b) that's incremented by, not plus.
In Ruby, you can do this:
def foo(n)
  lambda {|i| n += i }
end
In Python, you'd create an object to hold the state of n:
class foo(object):
    def __init__(self, n):
        self.n = n
    def __call__(self, i):
        self.n += i
        return self.n
Some folks might prefer the explicit Python approach as being clearer conceptually, even if it's a bit more verbose. You store state like you do for anything else. You just need to wrap your head around the idea of callable objects. But regardless of which approach one prefers aesthetically, it does show one respect in which Ruby lambdas are more powerful constructs than Python's.
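For example, using the foo class above (a quick illustrative sketch):
acc = foo(10)      # start at 10
print(acc(1))      # 11
print(acc(0.5))    # 11.5 -- works for any number, as the problem requires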
Python has named optional arguments
def func(a, b=2, c=3):
    print a, b, c
>>> func(1)
1 2 3
>>> func(1, c=4)
1 2 4
AFAIK Ruby has only positional arguments, because b=2 in the function declaration is a default assignment that always applies.
Ruby has embedded documentation:
=begin
You could use rdoc to generate man pages from this documentation
=end
http://c2.com/cgi/wiki?PythonVsRuby
http://c2.com/cgi/wiki?SwitchedFromPythonToRuby
http://c2.com/cgi/wiki?SwitchedFromRubyToPython
http://c2.com/cgi/wiki?UsingPythonDontNeedRuby
http://c2.com/cgi/wiki?UsingRubyDontNeedPython
In Ruby, when you import a file with require, all the things defined in that file will end up in your global namespace.
With Cargo you can "require libraries without cluttering your namespace".
# foo-1.0.0.rb
class Foo
  VERSION = "1.0.0"
end
# foo-2.0.0.rb
class Foo
  VERSION = "2.0.0"
end
>> Foo1 = import("foo-1.0.0")
>> Foo2 = import("foo-2.0.0")
>> Foo1::VERSION
=> "1.0.0"
>> Foo2::VERSION
=> "2.0.0"
Why are there no ++ and -- operators in Python?
It's not because it doesn't make sense; it makes perfect sense to define "x++" as "x += 1, evaluating to the previous binding of x".
If you want to know the original reason, you'll have to either wade through old Python mailing lists or ask somebody who was there (eg. Guido), but it's easy enough to justify after the fact:
Simple increment and decrement aren't needed as much as in other languages. You don't write things like for(int i = 0; i < 10; ++i) in Python very often; instead you do things like for i in range(0, 10).
Since it's not needed nearly as often, there's much less reason to give it its own special syntax; when you do need to increment, += is usually just fine.
It's not a decision of whether it makes sense, or whether it can be done--it does, and it can. It's a question of whether the benefit is worth adding to the core syntax of the language. Remember, this is four operators--postinc, postdec, preinc, predec, and each of these would need to have its own class overloads; they all need to be specified, and tested; it would add opcodes to the language (implying a larger, and therefore slower, VM engine); every class that supports a logical increment would need to implement them (on top of += and -=).
This is all redundant with += and -=, so it would become a net loss.
This original answer I wrote is a myth from the folklore of computing: debunked by Dennis Ritchie as "historically impossible" as noted in the letters to the editors of Communications of the ACM July 2012 doi:10.1145/2209249.2209251
The C increment/decrement operators were invented at a time when the C compiler wasn't very smart and the authors wanted to be able to specify the direct intent that a machine language operator should be used which saved a handful of cycles for a compiler which might do a
load memory
load 1
add
store memory
instead of
inc memory
and the PDP-11 even supported "autoincrement" and "autoincrement deferred" instructions corresponding to *++p and *p++, respectively. See section 5.3 of the manual if horribly curious.
As compilers are smart enough to handle the high-level optimization tricks built into the syntax of C, they are just a syntactic convenience now.
Python doesn't have tricks to convey intentions to the assembler because it doesn't use one.
I always assumed it had to do with this line of the Zen of Python:
There should be one-- and preferably only one --obvious way to do it.
x++ and x+=1 do the exact same thing, so there is no reason to have both.
Of course, we could say "Guido just decided that way", but I think the question is really about the reasons for that decision. I think there are several reasons:
It mixes together statements and expressions, which is not good practice. See http://norvig.com/python-iaq.html
It generally encourages people to write less readable code
Extra complexity in the language implementation, which is unnecessary in Python, as already mentioned
Because, in Python, integers are immutable (int's += actually returns a different object).
Also, with ++/-- you need to worry about pre- versus post- increment/decrement, and it takes only one more keystroke to write x+=1. In other words, it avoids potential confusion at the expense of very little gain.
Clarity!
Python is a lot about clarity and no programmer is likely to correctly guess the meaning of --a unless s/he's learned a language having that construct.
Python is also a lot about avoiding constructs that invite mistakes and the ++ operators are known to be rich sources of defects.
These two reasons are enough not to have those operators in Python.
The decision that Python uses indentation to mark blocks rather than syntactical means such as some form of begin/end bracketing or mandatory end marking is based largely on the same considerations.
For illustration, have a look at the discussion around introducing a conditional operator (in C: cond ? resultif : resultelse) into Python in 2005.
Read at least the first message and the decision message of that discussion (which had several precursors on the same topic previously).
Trivia:
The PEP frequently mentioned therein is the "Python Enhancement Proposal" PEP 308. LC means list comprehension, GE means generator expression (and don't worry if those confuse you, they are not among the few complicated spots of Python).
My understanding of why Python does not have a ++ operator is the following: when you write a=b=c=1 in Python, you get three variables (labels) pointing at the same object (whose value is 1). You can verify this by using the id function, which will return an object's memory address:
In [19]: id(a)
Out[19]: 34019256
In [20]: id(b)
Out[20]: 34019256
In [21]: id(c)
Out[21]: 34019256
All three variables (labels) point to the same object. Now increment one of variable and see how it affects memory addresses:
In [22]: a = a + 1
In [23]: id(a)
Out[23]: 34019232
In [24]: id(b)
Out[24]: 34019256
In [25]: id(c)
Out[25]: 34019256
You can see that variable a now points to a different object than variables b and c. Because you've used a = a + 1, it is explicitly clear; in other words, you assign a completely different object to the label a. Imagine that you could write a++; it would suggest that you did not assign a new object to variable a but rather incremented the old one. All this stuff is, IMHO, for minimization of confusion. For better understanding, see how Python variables work:
In Python, why can a function modify some arguments as perceived by the caller, but not others?
Is Python call-by-value or call-by-reference? Neither.
Does Python pass by value, or by reference?
Is Python pass-by-reference or pass-by-value?
Python: How do I pass a variable by reference?
Understanding Python variables and Memory Management
Emulating pass-by-value behaviour in python
Python functions call by reference
Code Like a Pythonista: Idiomatic Python
It was just designed that way. Increment and decrement operators are just shortcuts for x = x + 1. Python has typically adopted a design strategy which reduces the number of alternative means of performing an operation. Augmented assignment is the closest thing to increment/decrement operators in Python, and they weren't even added until Python 2.0.
I'm very new to Python, but I suspect the reason is the emphasis on mutable and immutable objects within the language. Now, I know that x++ can easily be interpreted as x = x + 1, but it LOOKS like you're incrementing in place an object which could be immutable.
Just my guess/feeling/hunch.
To complement the already good answers on this page:
Let's suppose prefix increment (++i) were added; that would clash with the unary + and - operators.
Today, prefixing with ++ or -- does nothing, because it applies the unary plus operator twice (which does nothing) or the unary minus operator twice (which cancels itself):
>>> i=12
>>> ++i
12
>>> --i
12
So that would potentially break that logic.
Now, if one needs it for list comprehensions or lambdas, from Python 3.8 onward it's possible with the new := assignment operator (PEP 572).
Pre-incrementing a and assigning it to b:
>>> a = 1
>>> b = (a:=a+1)
>>> b
2
>>> a
2
Post-incrementing just needs to compensate for the premature add by subtracting 1:
>>> a = 1
>>> b = (a:=a+1)-1
>>> b
1
>>> a
2
I believe it stems from the Python creed that "explicit is better than implicit".
First, Python is only indirectly influenced by C; it is heavily influenced by ABC, which apparently does not have these operators, so it should not be any great surprise not to find them in Python either.
Secondly, as others have said, increment and decrement are supported by += and -= already.
Third, full support for a ++ and -- operator set usually includes supporting both the prefix and postfix versions of them. In C and C++, this can lead to all kinds of "lovely" constructs that seem (to me) to be against the spirit of simplicity and straight-forwardness that Python embraces.
For example, while the C statement while(*t++ = *s++); may seem simple and elegant to an experienced programmer, to someone learning it, it is anything but simple. Throw in a mixture of prefix and postfix increments and decrements, and even many pros will have to stop and think a bit.
The ++ class of operators are expressions with side effects. This is something generally not found in Python.
For the same reason an assignment is not an expression in Python, thus preventing the common if (a = f(...)) { /* using a here */ } idiom.
Lastly, I suspect that these operators are not very consistent with Python's reference semantics. Remember, Python does not have variables (or pointers) with the semantics known from C/C++.
As I understood it, it's so you won't think the value in memory is changed.
In C, when you do x++, the value of x in memory changes.
But in Python all numbers are immutable, so the address that x pointed to still holds x, not x+1. When you write x++ you would think that x itself changes; what really happens is that the reference x is rebound to a location in memory where x+1 is stored (or that object is created if it does not already exist).
Other answers have described why it's not needed for iterators, but sometimes it is useful when assigning to increase a variable in-line, you can achieve the same effect using tuples and multiple assignment:
b = ++a becomes:
a,b = (a+1,)*2
and b = a++ becomes:
a,b = a+1, a
Python 3.8 introduces the assignment operator :=, allowing us to achieve foo(++a) with
foo(a:=a+1)
foo(a++) is still elusive though.
Maybe a better question would be to ask why these operators exist in C. K&R calls increment and decrement operators 'unusual' (Section 2.8, page 46). The Introduction calls them 'more concise and often more efficient'. I suspect that the fact that these operations always come up in pointer manipulation also has played a part in their introduction.
In Python it has been probably decided that it made no sense to try to optimise increments (in fact I just did a test in C, and it seems that the gcc-generated assembly uses addl instead of incl in both cases) and there is no pointer arithmetic; so it would have been just One More Way to Do It and we know Python loathes that.
This may be because @GlennMaynard is looking at the matter in comparison with other languages, but in Python, you do things the Python way. It's not a 'why' question. It's there, and you can do things to the same effect with x += 1. In The Zen of Python, it is given: "there should only be one way to solve a problem." Multiple choices are great in art (freedom of expression) but lousy in engineering.
I think this relates to the concepts of mutability and immutability of objects. 2, 3, 4, 5 are immutable in Python. 2 has a fixed id for the lifetime of this Python process.
x++ would essentially mean an in-place increment like C. In C, x++ performs in-place increments. So, x=3, and x++ would increment 3 in the memory to 4, unlike python where 3 would still exist in memory.
Thus in python, you don't need to recreate a value in memory. This may lead to performance optimizations.
This is a hunch based answer.
I know this is an old thread, but the most common use case for ++i is not covered, that being manually indexing sets when there are no provided indices. This situation is why python provides enumerate()
Example : In any given language, when you use a construct like foreach to iterate over a set - for the sake of the example we'll even say it's an unordered set and you need a unique index for everything to tell them apart, say
i = 0
stuff = {'a': 'b', 'c': 'd', 'e': 'f'}
uniquestuff = {}
for key, val in stuff.items():
    uniquestuff[key] = '{0}{1}'.format(val, i)
    i += 1
In cases like this, Python provides the enumerate built-in, e.g.
for i, (key, val) in enumerate(stuff.items()):
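For completeness, here is the full rewrite of the loop above using enumerate (same stuff/uniquestuff names as before), so the manual counter disappears:
stuff = {'a': 'b', 'c': 'd', 'e': 'f'}
uniquestuff = {}
for i, (key, val) in enumerate(stuff.items()):
    uniquestuff[key] = '{0}{1}'.format(val, i)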
In addition to the other excellent answers here, ++ and -- are also notorious for undefined behavior. For example, what happens in this code?
foo[bar] = bar++;
It's so innocent-looking, but it's wrong C (and C++), because you don't know whether the first bar will have been incremented or not. One compiler might do it one way, another might do it another way, and a third might make demons fly out of your nose. All would be perfectly conformant with the C and C++ standards.
(EDIT: C++17 has changed the behavior of the given code so that it is defined; it will be equivalent to foo[bar+1] = bar; ++bar; — which nonetheless might not be what the programmer is expecting.)
Undefined behavior is seen as a necessary evil in C and C++, but in Python, it's just evil, and avoided as much as possible.
I am a Python newbie, and I am not sure why Python implemented len(obj), max(obj), and min(obj) as static-looking functions (I come from Java) rather than obj.len(), obj.max(), and obj.min().
What are the advantages and disadvantages (other than the obvious inconsistency) of having len() and friends over the method calls?
Why did Guido choose this over the method calls? (This could have been changed in Python 3 if needed, but it wasn't, so there have to be good reasons... I hope.)
Thanks!!
The big advantage is that built-in functions (and operators) can apply extra logic when appropriate, beyond simply calling the special methods. For example, min can look at several arguments and apply the appropriate inequality checks, or it can accept a single iterable argument and proceed similarly; abs, when called on an object without an __abs__ special method, could try comparing said object with 0 and using the object's change-sign method if needed (though it currently doesn't); and so forth.
So, for consistency, all operations with wide applicability must always go through built-ins and/or operators, and it's those built-ins responsibility to look up and apply the appropriate special methods (on one or more of the arguments), use alternate logic where applicable, and so forth.
An example where this principle wasn't correctly applied (but the inconsistency was fixed in Python 3) is "step an iterator forward": in 2.5 and earlier, you needed to define and call the non-specially-named next method on the iterator. In 2.6 and later you can do it the right way: the iterator object defines __next__, the new next built-in can call it and apply extra logic, for example to supply a default value (in 2.6 you can still do it the bad old way, for backwards compatibility, though in 3.* you can't any more).
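For instance, the next built-in can supply a default value instead of letting StopIteration propagate, which the special method alone cannot do; a quick sketch:
it = iter([])
print next(it, 'empty')    # prints empty instead of raising StopIteration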
Another example: consider the expression x + y. In a traditional object-oriented language (able to dispatch only on the type of the leftmost argument -- like Python, Ruby, Java, C++, C#, &c) if x is of some built-in type and y is of your own fancy new type, you're sadly out of luck if the language insists on delegating all the logic to the method of type(x) that implements addition (assuming the language allows operator overloading;-).
In Python, the + operator (and similarly of course the builtin operator.add, if that's what you prefer) tries x's type's __add__, and if that one doesn't know what to do with y, then tries y's type's __radd__. So you can define your types that know how to add themselves to integers, floats, complex, etc etc, as well as ones that know how to add such built-in numeric types to themselves (i.e., you can code it so that x + y and y + x both work fine, when y is an instance of your fancy new type and x is an instance of some builtin numeric type).
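Here is a minimal sketch of that mechanism (the Money class is made up for illustration): defining both __add__ and __radd__ makes x + y and y + x work when y is an instance of your own type.
class Money(object):
    def __init__(self, amount):
        self.amount = amount
    def __add__(self, other):        # Money + number
        return Money(self.amount + other)
    def __radd__(self, other):       # number + Money: int gives up, Python tries this
        return Money(other + self.amount)

print (Money(4) + 3).amount    # 7, via Money.__add__
print (3 + Money(4)).amount    # 7, via Money.__radd__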
"Generic functions" (as in PEAK) are a more elegant approach (allowing any overriding based on a combination of types, never with the crazy monomaniac focus on the leftmost arguments that OOP encourages!-), but (a) they were unfortunately not accepted for Python 3, and (b) they do of course require the generic function to be expressed as free-standing (it would be absolutely crazy to have to consider the function as "belonging" to any single type, where the whole POINT is that can be differently overridden/overloaded based on arbitrary combination of its several arguments' types!-). Anybody who's ever programmed in Common Lisp, Dylan, or PEAK, knows what I'm talking about;-).
So, free-standing functions and operators are just THE right, consistent way to go (even though the lack of generic functions, in bare-bones Python, does remove some fraction of the inherent elegance, it's still a reasonable mix of elegance and practicality!-).
It emphasizes the capabilities of an object, not its methods or type. Capabilities are declared by "helper" methods such as __iter__ and __len__, but they do not make up the interface. The interface is the built-in functions, and, beside them, the built-in operators like + and [] for indexing and slicing.
Sometimes it is not a one-to-one correspondence: for example, iter(obj) returns an iterator for an object and will work even if __iter__ is not defined. If it is not defined, iter() checks whether the object defines __getitem__ and, if so, returns an iterator that accesses the object index-wise (like an array).
This goes together with Python's duck typing: we care only about what we can do with an object, not whether it is of a particular type.
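A small sketch of that fallback (Squares is a made-up class): it defines only __getitem__, yet iteration works on it.
class Squares(object):
    def __getitem__(self, index):   # old sequence protocol: called with 0, 1, 2, ...
        if index >= 5:
            raise IndexError        # signals the end of the sequence
        return index * index

for n in Squares():   # no __iter__ defined; iter() falls back to __getitem__
    print n           # 0 1 4 9 16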
Actually, those aren't "static" methods in the way you are thinking about them. They are built-in functions that really just alias to certain methods on python objects that implement them.
>>> class Foo(object):
... def __len__(self):
... return 42
...
>>> f = Foo()
>>> len(f)
42
These are always available to be called, whether or not the object actually implements the underlying special method. The point is to have some consistency: instead of one class having a method called length() and another called size(), the convention is to implement __len__ and let callers always access it through the more readable len(obj) instead of obj.methodThatDoesSomethingCommon().
I thought the reason was so these basic operations could be done on iterators with the same interface as containers. However, it actually doesn't work with len:
def foo():
    for i in range(10):
        yield i

print len(foo())
... fails with TypeError. len() won't consume and count an iterator; it only works with objects that have a __len__ method.
So, as far as I'm concerned, len() shouldn't exist. It's much more natural to say obj.len than len(obj), and much more consistent with the rest of the language and the standard library. We don't say append(lst, 1); we say lst.append(1). Having a separate global method for length is an odd, inconsistent special case, and eats a very obvious name in the global namespace, which is a very bad habit of Python.
This is unrelated to duck typing; you can say getattr(obj, "len") to decide whether you can use len on an object just as easily--and much more consistently--than you can use getattr(obj, "__len__").
All that said, as language warts go--for those who consider this a wart--this is a very easy one to live with.
On the other hand, min and max do work on iterators, which gives them a use apart from any particular object. This is straightforward, so I'll just give an example:
import random
def foo():
    for i in range(10):
        yield random.randint(0, 100)

print max(foo())
However, there are no __min__ or __max__ methods to override its behavior, so there's no consistent way to provide efficient searching for sorted containers. If a container is sorted on the same key that you're searching, min/max are O(1) operations instead of O(n), and the only way to expose that is by a different, inconsistent method. (This could be fixed in the language relatively easily, of course.)
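To illustrate the point about sorted containers (the list below is just an example): the endpoints are already known, but min and max still scan every element, because there is no hook to tell them the data is sorted.
data = range(1000000)     # already sorted
data[0], data[-1]         # O(1): first and last element
min(data), max(data)      # O(n): both walk the whole sequence anyway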
To follow up with another issue with this: it prevents use of Python's method binding. As a simple, contrived example, you can do this to supply a function to add values to a list:
def add(f):
    f(1)
    f(2)
    f(3)

lst = []
add(lst.append)
print lst
and this works for any bound method. You can't do that with min, max or len, though, since they're not methods of the object they operate on. Instead, you have to resort to something like functools.partial -- the kind of clumsy, second-class workaround that is common in other languages.
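The workaround alluded to looks roughly like this (lst is the list from the snippet above):
from functools import partial

get_len = partial(len, lst)   # a callable tied to lst, roughly like a bound method
print get_len()               # same as len(lst)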
Of course, this is an uncommon case; but it's the uncommon cases that tell us about a language's consistency.
There is a lot of discussion of Python vs Ruby, and I find all of it completely unhelpful, because it all turns around why feature X sucks in language Y, or claims that language Y doesn't have X, although in fact it does. I also know exactly why I prefer Python, but that's also subjective, and it wouldn't help anybody choosing, as they might not have the same tastes in development as I do.
It would therefore be interesting to list the differences, objectively. So no "Python's lambdas sucks". Instead explain what Ruby's lambdas can do that Python's can't. No subjectivity. Example code is good!
Don't put several differences in one answer, please. And vote up the ones you know are correct, and down those you know are incorrect (or are subjective). Also, differences in syntax are not interesting. We know Python does with indentation what Ruby does with brackets and ends, and that @ is called self in Python.
UPDATE: This is now a community wiki, so we can add the big differences here.
Ruby has a class reference in the class body
In Ruby you have a reference to the class (self) already in the class body. In Python you don't have a reference to the class until after the class construction is finished.
An example:
class Kaka
  puts self
end
self in this case is the class, and this code would print out "Kaka". There is no way to print out the class name or in other ways access the class from the class definition body in Python (outside method definitions).
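For contrast, a quick Python sketch: the class body does execute code at definition time, but the class object itself is not bound until the body has finished, so you cannot refer to it from inside.
class Kaka(object):
    print "class body is executing"   # runs fine during class creation
    # print Kaka                      # would raise NameError: Kaka is not defined yet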
All classes are mutable in Ruby
This lets you develop extensions to core classes. Here's an example of a rails extension:
class String
  def starts_with?(other)
    head = self[0, other.length]
    head == other
  end
end
Python (imagine there were no ''.startswith method):
def starts_with(s, prefix):
    return s[:len(prefix)] == prefix
You could use it on any sequence (not just strings). To use it, you would import it explicitly, e.g., from some_module import starts_with.
Ruby has Perl-like scripting features
Ruby has first class regexps, $-variables, the awk/perl line by line input loop and other features that make it more suited to writing small shell scripts that munge text files or act as glue code for other programs.
Ruby has first class continuations
Thanks to the callcc statement. In Python you can create continuations by various techniques, but there is no support built in to the language.
Ruby has blocks
With the "do" statement you can create a multi-line anonymous function in Ruby, which will be passed in as an argument into the method in front of do, and called from there. In Python you would instead do this either by passing a method or with generators.
Ruby:
amethod { |here|
  many=lines+of+code
  goes(here)
}
Python (Ruby blocks correspond to different constructs in Python):
with amethod() as here:  # `amethod()` is a context manager
    many=lines+of+code
    goes(here)
Or
for here in amethod():  # `amethod()` is an iterable
    many=lines+of+code
    goes(here)
Or
def function(here):
    many=lines+of+code
    goes(here)

amethod(function)  # `function` is a callback
Interestingly, the convenience statement in Ruby for calling a block is called "yield", which in Python will create a generator.
Ruby:
def themethod
  yield 5
end

themethod do |foo|
  puts foo
end
Python:
def themethod():
    yield 5

for foo in themethod():
    print foo
Although the principles are different, the result is strikingly similar.
Ruby supports functional style (pipe-like) programming more easily
myList.map(&:description).reject(&:empty?).join("\n")
Python:
descriptions = (f.description() for f in mylist)
"\n".join(filter(len, descriptions))
Python has built-in generators (which are used like Ruby blocks, as noted above)
Python has support for generators in the language. In Ruby 1.8 you can use the generator module, which uses continuations to create a generator from a block; or you can just use a block/proc/lambda. Moreover, in Ruby 1.9 Fibers are, and can be used as, generators, and the Enumerator class is a built-in generator.
docs.python.org has this generator example:
def reverse(data):
    for index in range(len(data)-1, -1, -1):
        yield data[index]
Contrast this with the above block examples.
Python has flexible name space handling
In Ruby, when you import a file with require, everything defined in that file ends up in your global namespace. This causes namespace pollution. The solution to that is Ruby's modules. But if you create a namespace with a module, then you have to use that namespace to access the contained classes.
In Python, the file is a module, and you can import its contained names with from themodule import *, thereby polluting the namespace if you want. But you can also import just selected names with from themodule import aname, another or you can simply import themodule and then access the names with themodule.aname. If you want more levels in your namespace you can have packages, which are directories with modules and an __init__.py file.
Python has docstrings
Docstrings are strings that are attached to modules, functions and methods and can be introspected at runtime. This helps in creating such things as the help command and automatic documentation.
def frobnicate(bar):
    """frobnicate takes a bar and frobnicates it

    >>> bar = Bar()
    >>> bar.is_frobnicated()
    False
    >>> frobnicate(bar)
    >>> bar.is_frobnicated()
    True
    """
Ruby's equivalent is similar to Javadoc and is located above the method instead of within it. It can be retrieved at runtime from the source files using Ruby 1.9's Method#source_location (example use).
Python has multiple inheritance
Ruby does not ("on purpose" -- see Ruby's website; see here how it's done in Ruby). Instead it reuses the module concept as a kind of abstract class (mixins).
Python has list/dict comprehensions
Python:
res = [x*x for x in range(1, 10)]
Ruby:
res = (1..9).map { |x| x * x }
Python:
>>> (x*x for x in range(10))
<generator object <genexpr> at 0xb7c1ccd4>
>>> list(_)
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
Ruby:
p = proc { |x| x * x }
(0..9).map(&p)
Python 2.7+:
>>> {x:str(y*y) for x,y in {1:2, 3:4}.items()}
{1: '4', 3: '16'}
Ruby:
>> Hash[{1=>2, 3=>4}.map{|x,y| [x,(y*y).to_s]}]
=> {1=>"4", 3=>"16"}
Python has decorators
Things similar to decorators can also be created in Ruby, and it can also be argued that they aren't as necessary as in Python.
Syntax differences
Ruby requires "end" or "}" to close all of its scopes, while Python uses white-space only. There have been recent attempts in Ruby to allow for whitespace only indentation http://github.com/michaeledgar/seamless
Ruby has the concepts of blocks, which are essentially syntactic sugar around a section of code; they are a way to create closures and pass them to another method which may or may not use the block. A block can be invoked later on through a yield statement.
For example, a simple definition of an each method on Array might be something like:
class Array
  def each
    for i in self
      yield(i)   # If a block has been passed, control will be passed here.
    end
  end
end
Then you can invoke this like so:
# Add five to each element.
[1, 2, 3, 4].each{ |e| puts e + 5 }
> [6, 7, 8, 9]
Python has anonymous functions/closures/lambdas, but it doesn't quite have blocks since it's missing some of the useful syntactic sugar. However, there's at least one way to get it in an ad-hoc fashion. See, for example, here.
Python Example
Functions are first-class variables in Python. You can declare a function, pass it around as an object, and overwrite it:
def func(): print "hello"
def another_func(f): f()
another_func(func)
def func2(): print "goodbye"
func = func2
This is a fundamental feature of modern scripting languages. JavaScript and Lua do this, too. Ruby doesn't treat functions this way; naming a function calls it.
Of course, there are ways to do these things in Ruby, but they're not first-class operations. For example, you can wrap a function with Proc.new to treat it as a variable--but then it's no longer a function; it's an object with a "call" method.
Ruby's functions aren't first-class objects
Ruby functions aren't first-class objects. Functions must be wrapped in an object to pass them around, and the resulting object can't be treated like a function. Functions can't be reassigned in a first-class manner; instead, a method on their container object must be called to modify them.
def func; p "Hello" end
def another_func(f); method(f)[] end
another_func(:func) # => "Hello"
def func2; print "Goodbye!" end
self.class.send(:define_method, :func, method(:func2))
func # => "Goodbye!"
method(:func).owner # => Object
func # => "Goodbye!"
self.func # => "Goodbye!"
Ultimately all answers are going to be subjective at some level, and the answers posted so far pretty much prove that you can't point to any one feature that isn't doable in the other language in an equally nice (if not similar) way, since both languages are very concise and expressive.
I like Python's syntax. However, you have to dig a bit deeper than syntax to find the true beauty of Ruby. There is zenlike beauty in Ruby's consistency. While no trivial example can possibly explain this completely, I'll try to come up with one here just to explain what I mean.
Reverse the words in this string:
sentence = "backwards is sentence This"
When you think about how you would do it, you'd do the following:
Split the sentence up into words
Reverse the words
Re-join the words back into a string
In Ruby, you'd do this:
sentence.split.reverse.join ' '
Exactly as you think about it, in the same sequence, one method call after another.
In python, it would look more like this:
" ".join(reversed(sentence.split()))
It's not hard to understand, but it doesn't quite have the same flow. The subject (sentence) is buried in the middle. The operations are a mix of functions and object methods. This is a trivial example, but one discovers many different examples when really working with and understanding Ruby, especially on non-trivial tasks.
Python has a "we're all adults here" mentality. Thus, you'll find that Ruby has things like constants while Python doesn't (although Ruby's constants only raise a warning). The Python way of thinking is that if you want to make something constant, you should put the variable names in all caps and not change it.
For example, Ruby:
>> PI = 3.14
=> 3.14
>> PI += 1
(irb):2: warning: already initialized constant PI
=> 4.14
Python:
>>> PI = 3.14
>>> PI += 1
>>> PI
4.1400000000000006
You can import only specific functions from a module in Python. In Ruby, you import the whole list of methods. You could "unimport" them in Ruby afterwards, but that's not really the point.
EDIT:
let's take this Ruby module :
module Whatever
  def method1
  end

  def method2
  end
end
If you include it in your code:
include Whatever
you'll see that both method1 and method2 have been added to your namespace. You can't import only method1; you either import both or you import neither. In Python you can import only the methods of your choosing. If this needed a name, maybe it would be called selective importing?
From Ruby's website:
Similarities
As with Python, in Ruby,...
There’s an interactive prompt (called irb).
You can read docs on the command line (with the ri command instead of pydoc).
There are no special line terminators (except the usual newline).
String literals can span multiple lines like Python’s triple-quoted strings.
Brackets are for lists, and braces are for dicts (which, in Ruby, are called “hashes”).
Arrays work the same (adding them makes one long array, but composing them like this a3 = [ a1, a2 ] gives you an array of arrays).
Objects are strongly and dynamically typed.
Everything is an object, and variables are just references to objects.
Although the keywords are a bit different, exceptions work about the same.
You’ve got embedded doc tools (Ruby’s is called rdoc).
Differences
Unlike Python, in Ruby,...
Strings are mutable.
You can make constants (variables whose value you don’t intend to change).
There are some enforced case-conventions (ex. class names start with a capital letter, variables start with a lowercase letter).
There’s only one kind of list container (an Array), and it’s mutable.
Double-quoted strings allow escape sequences (like \t) and a special “expression substitution” syntax (which allows you to insert the results of Ruby expressions directly into other strings without having to "add " + "strings " + "together"). Single-quoted strings are like Python’s r"raw strings".
There are no “new style” and “old style” classes. Just one kind.
You never directly access attributes. With Ruby, it’s all method calls.
Parentheses for method calls are usually optional.
There’s public, private, and protected to enforce access, instead of Python’s _voluntary_ underscore __convention__.
“mixin’s” are used instead of multiple inheritance.
You can add or modify the methods of built-in classes. Both languages let you open up and modify classes at any point, but Python prevents modification of built-ins — Ruby does not.
You’ve got true and false instead of True and False (and nil instead of None).
When tested for truth, only false and nil evaluate to a false value. Everything else is true (including 0, 0.0, "", and []).
It’s elsif instead of elif.
It’s require instead of import. Otherwise though, usage is the same.
The usual-style comments on the line(s) above things (instead of docstrings below them) are used for generating docs.
There are a number of shortcuts that, although they give you more to remember, you quickly learn. They tend to make Ruby fun and very productive.
What Ruby has over Python are its scripting language capabilities. Scripting language in this context meaning to be used for "glue code" in shell scripts and general text manipulation.
These are mostly shared with Perl: first-class built-in regular expressions, $-variables, useful Perl-like command line options (-a, -e), etc.
Together with its terse yet expressive syntax, it is perfect for these kinds of tasks.
Python to me is more of a dynamically typed business language that is very easy to learn and has a neat syntax. Not as "cool" as Ruby but neat.
What Python has over Ruby, to me, is the vast number of bindings for other libraries: bindings to Qt and other GUI toolkits, many game-support libraries, and so on. Ruby has far fewer. While heavily used bindings, e.g. to databases, are of good quality, I found niche libraries to be better supported in Python even when a Ruby binding also exists for the same library.
So, I'd say both languages have their uses, and it is the task that defines which one to use. Both are easy enough to learn. I use them side by side: Ruby for scripting and Python for stand-alone apps.
I don't think "Ruby has X and Python doesn't, while Python has Y and Ruby doesn't" is the most useful way to look at it. They're quite similar languages, with many shared abilities.
To a large degree, the difference is what the language makes elegant and readable. To use an example you brought up, both do theoretically have lambdas, but Python programmers tend to avoid them, and constructs made using them do not look anywhere near as readable or idiomatic as in Ruby. So in Python, a good programmer will want to take a different route to solving the problem than he would in Ruby, just because it actually is the better way to do it.
I'd like to suggest a variant of the original question, "What does Ruby have that Python doesn't, and vice versa?" which admits the disappointing answer, "Well, what can you do with either Ruby or Python that can't be done in Intercal?" Nothing on that level, because Python and Ruby are both part of the vast royal family sitting on the throne of being Turing approximant.
But what about this:
What can be done gracefully and well in Python that can't be done in Ruby with such beauty and good engineering, or vice versa?
That may be much more interesting than mere feature comparison.
Python has an explicit, built-in syntax for list comprehensions and generators, whereas in Ruby you would use map and code blocks.
Compare
list = [ x*x for x in range(1, 10) ]
to
res = (1..9).map{ |x| x*x }
"Variables that start with a capital letter becomes constants and can't be modified"
Wrong. They can.
You only get a warning if you do.
Somewhat more on the infrastructure side:
Python has much better integration with C++ (via things like Boost.Python, SIP, and Py++) than Ruby, where the options seem to be either write directly against the Ruby interpreter API (which you can do with Python as well, of course, but in both cases doing so is low level, tedious, and error prone) or use SWIG (which, while it works and definitely is great if you want to support many languages, isn't nearly as nice as Boost.Python or SIP if you are specifically looking to bind C++).
Python has a number of web application environments (Django, Pylons/Turbogears, web.py, probably at least half a dozen others), whereas Ruby (effectively) has one: Rails. (Other Ruby web frameworks do exist, but seemingly have a hard time getting much traction against Rails). Is this aspect good or bad? Hard to say, and probably quite subjective; I can easily imagine arguments that the Python situation is better and that the Ruby situation is better.
Culturally, the Python and Ruby communities seem somewhat different, but I can only hint at this as I don't have that much experience interacting with the Ruby community. I'm adding this mostly in the hopes that someone who has a lot of experience with both can amplify (or reject) this statement.
Shamelessly copy/pasted from: Alex Martelli's answer to the "What's better about Ruby than Python" thread on the comp.lang.python mailing list.
Aug 18 2003, 10:50 am, Erik Max Francis wrote:
"Brandon J. Van Every" wrote:
What's better about Ruby than Python? I'm sure there's something. What is it?
Wouldn't it make much more sense to ask Ruby people this, rather than Python people?
Might, or might not, depending on
one's purposes -- for example, if
one's purposes include a "sociological
study" of the Python community, then
putting questions to that community is
likely to prove more revealing of
information about it, than putting
them elsewhere:-).
Personally, I gladly took the
opportunity to follow Dave Thomas'
one-day Ruby tutorial at last OSCON.
Below a thin veneer of syntax
differences, I find Ruby and Python
amazingly similar -- if I was
computing the minimum spanning tree
among just about any set of languages,
I'm pretty sure Python and Ruby would
be the first two leaves to coalesce
into an intermediate node:-).
Sure, I do get weary, in Ruby, of
typing the silly "end" at the end of
each block (rather than just
unindenting) -- but then I do get to
avoid typing the equally-silly ':'
which Python requires at the
start of each block, so that's almost a wash:-). Other syntax
differences such as '#foo' versus
'self.foo', or the higher significance
of case in Ruby vs Python, are really
just about as irrelevant to me.
Others no doubt base their choice of
programming languages on just such
issues, and they generate the hottest
debates -- but to me that's just an
example of one of Parkinson's Laws in
action (the amount on debate on an
issue is inversely proportional to the
issue's actual importance).
Edit (by AM 6/19/2010 11:45): this is also known as "painting the
bikeshed" (or, for short,
"bikeshedding") -- the reference is,
again, to Northcote Parkinson, who
gave "debates on what color to paint
the bikeshed" as a typical example of
"hot debates on trivial topics".
(end-of-Edit).
One syntax difference that I do find
important, and in Python's favor --
but other people will no doubt think
just the reverse -- is "how do you
call a function which takes no
parameters". In Python (like in C),
to call a function you always apply
the "call operator" -- trailing
parentheses just after the object
you're calling (inside those trailing
parentheses go the args you're passing
in the call -- if you're passing no
args, then the parentheses are empty).
This leaves the mere mention of
any object, with no operator involved, as meaning just a reference
to the object -- in any context,
without special cases, exceptions,
ad-hoc rules, and the like. In Ruby
(like in Pascal), to call a function
WITH arguments you pass the args
(normally in parentheses, though that
is not invariably the case) -- BUT if
the function takes no args then simply
mentioning the function implicitly
calls it. This may meet the
expectations of many people (at least,
no doubt, those whose only previous
experience of programming was with
Pascal, or other languages with
similar "implicit calling", such as
Visual Basic) -- but to me, it means
the mere mention of an object may
EITHER mean a reference to the object,
OR a call to the object, depending on
the object's type -- and in those
cases where I can't get a reference to
the object by merely mentioning it I
will need to use explicit "give me a
reference to this, DON'T call it!"
operators that aren't needed
otherwise. I feel this impacts the
"first-classness" of functions (or
methods, or other callable objects)
and the possibility of interchanging
objects smoothly. Therefore, to me,
this specific syntax difference is a
serious black mark against Ruby -- but
I do understand why others would think
otherwise, even though I could hardly
disagree more vehemently with them:-).
Below the syntax, we get into some
important differences in elementary
semantics -- for example, strings in
Ruby are mutable objects (like in
C++), while in Python they are not
mutable (like in Java, or I believe
C#). Again, people who judge
primarily by what they're already
familiar with may think this is a plus
for Ruby (unless they're familiar with
Java or C#, of course:-). Me, I think
immutable strings are an excellent
idea (and I'm not surprised that Java,
independently I think, reinvented that
idea which was already in Python),
though I wouldn't mind having a
"mutable string buffer" type as well
(and ideally one with better
ease-of-use than Java's own "string
buffers"); and I don't give this
judgment because of familiarity --
before studying Java, apart from
functional programming languages where
all data are immutable, all the languages I knew had mutable strings
-- yet when I first saw the immutable-string idea in Java (which I
learned well before I learned Python),
it immediately struck me as excellent,
a very good fit for the
reference-semantics of a higher level
programming language (as opposed to
the value-semantics that fit best with
languages closer to the machine and
farther from applications, such as C)
with strings as a first-class,
built-in (and pretty crucial) data
type.
Ruby does have some advantages in
elementary semantics -- for example,
the removal of Python's "lists vs
tuples" exceedingly subtle
distinction. But mostly the score (as
I keep it, with simplicity a big plus
and subtle, clever distinctions a
notable minus) is against Ruby (e.g.,
having both closed and half-open
intervals, with the notations a..b and
a...b [anybody wants to claim that
it's obvious which is which?-)], is
silly -- IMHO, of course!). Again,
people who consider having a lot of
similar but subtly different things at
the core of a language a PLUS, rather
than a MINUS, will of course count
these "the other way around" from how
I count them:-).
Don't be misled by these comparisons
into thinking the two languages are
very different, mind you. They aren't. But if I'm asked to compare
"capelli d'angelo" to "spaghettini",
after pointing out that these two
kinds of pasta are just about
undistinguishable to anybody and
interchangeable in any dish you might
want to prepare, I would then
inevitably have to move into
microscopic examination of how the
lengths and diameters imperceptibly
differ, how the ends of the strands
are tapered in one case and not in the
other, and so on -- to try and explain
why I, personally, would rather have
capelli d'angelo as the pasta in any
kind of broth, but would prefer
spaghettini as the pastasciutta to go
with suitable sauces for such long
thin pasta forms (olive oil, minced
garlic, minced red peppers, and finely
ground anchovies, for example - but if
you sliced the garlic and peppers
instead of mincing them, then you
should choose the sounder body of
spaghetti rather than the thinner
evanescence of spaghettini, and would
be well advised to forego the anchovies
and add instead some fresh spring
basil [or even -- I'm a heretic...! --
light mint...] leaves -- at the very
last moment before serving the dish).
Ooops, sorry, it shows that I'm
traveling abroad and haven't had pasta
for a while, I guess. But the analogy
is still pretty good!-)
So, back to Python and Ruby, we come
to the two biggies (in terms of
language proper -- leaving the
libraries, and other important
ancillaries such as tools and
environments, how to embed/extend each
language, etc, etc, out of it for now
-- they wouldn't apply to all IMPLEMENTATIONS of each language
anyway, e.g., Jython vs Classic Python
being two implementations of the
Python language!):
Ruby's iterators and codeblocks vs Python's iterators and generators;
Ruby's TOTAL, unbridled "dynamicity", including the ability
to "reopen" any existing class,
including all built-in ones, and
change its behavior at run-time -- vs
Python's vast but bounded
dynamicity, which never changes the
behavior of existing built-in
classes and their instances.
Personally, I consider 1 a wash (the
differences are so deep that I could
easily see people hating either
approach and revering the other, but
on MY personal scales the pluses and
minuses just about even up); and 2 a
crucial issue -- one that makes Ruby
much more suitable for "tinkering",
BUT Python equally more suitable for
use in large production applications.
It's funny, in a way, because both
languages are so MUCH more dynamic
than most others, that in the end the
key difference between them from my
POV should hinge on that -- that Ruby
"goes to eleven" in this regard (the
reference here is to "Spinal Tap", of
course). In Ruby, there are no limits
to my creativity -- if I decide that
all string comparisons must become
case-insensitive, I CAN DO THAT!
I.e., I can dynamically alter the
built-in string class so that
a = "Hello World"
b = "hello world"
if a == b
print "equal!\n"
else
print "different!\n"
end WILL print "equal". In python, there is NO way I can do that.
For the purposes of metaprogramming,
implementing experimental frameworks,
and the like, this amazing dynamic
ability of Ruby is extremely
appealing. BUT -- if we're talking
about large applications, developed by
many people and maintained by even
more, including all kinds of libraries
from diverse sources, and needing to
go into production in client sites...
well, I don't WANT a language that is
QUITE so dynamic, thank you very much.
I loathe the very idea of some library
unwittingly breaking other unrelated
ones that rely on those strings being
different -- that's the kind of deep
and deeply hidden "channel", between
pieces of code that LOOK separate and
SHOULD BE separate, that spells
d-e-a-t-h in large-scale programming.
By letting any module affect the
behavior of any other "covertly", the
ability to mutate the semantics of
built-in types is just a BAD idea for
production application programming,
just as it's cool for tinkering.
If I had to use Ruby for such a large
application, I would try to rely on
coding-style restrictions, lots of
tests (to be rerun whenever ANYTHING
changes -- even what should be totally
unrelated...), and the like, to
prohibit use of this language feature.
But NOT having the feature in the
first place is even better, in my
opinion -- just as Python itself would
be an even better language for
application programming if a certain
number of built-ins could be "nailed
down", so I KNEW that, e.g.,
len("ciao") is 4 (rather than having
to worry subliminally about whether
somebody's changed the binding of name
'len' in the builtins module...).
I do hope that eventually Python does
"nail down" its built-ins.
But the problem's minor, since
rebinding built-ins is quite a
deprecated as well as a rare practice
in Python. In Ruby, it strikes me as
major -- just like the too powerful
macro facilities of other languages
(such as, say, Dylan) present similar
risks in my own opinion (I do hope
that Python never gets such a powerful
macro system, no matter the allure of
"letting people define their own
domain-specific little languages
embedded in the language itself" -- it
would, IMHO, impair Python's wonderful
usefulness for application
programming, by presenting an
"attractive nuisance" to the would-be
tinkerer who lurks in every
programmer's heart...).
Alex
Some others from:
http://www.ruby-lang.org/en/documentation/ruby-from-other-languages/to-ruby-from-python/
(If I have misinterpreted anything or any of these have changed on the Ruby side since that page was updated, someone feel free to edit...)
Strings are mutable in Ruby, not in Python (where new strings are created by "changes").
Ruby has some enforced case conventions, Python does not.
Python has both lists and tuples (immutable lists). Ruby has arrays corresponding to Python lists, but no immutable variant of them.
In Python, you can directly access object attributes. In Ruby, it's always via methods.
In Ruby, parentheses for method calls are usually optional, but not in Python.
Ruby has public, private, and protected to enforce access, instead of Python’s convention of using underscores and name mangling.
Python has multiple inheritance. Ruby has "mixins."
And another very relevant link:
http://c2.com/cgi/wiki?PythonVsRuby
Which, in particular, links to another good one by Alex Martelli, who's been also posting a lot of great stuff here on SO:
http://groups.google.com/group/comp.lang.python/msg/028422d707512283
I'm unsure of this, so I add it as an answer first.
Python treats unbound methods as functions
That means you can call a method either like theobject.themethod() or by TheClass.themethod(anobject).
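A concrete example with a built-in type:
s = "hello"
print s.upper()       # bound method call
print str.upper(s)    # the same call, going through the class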
Edit: Although the difference between methods and functions is small in Python, and non-existent in Python 3, it also doesn't exist in Ruby, simply because Ruby doesn't have functions. When you define functions, you are actually defining methods on Object.
But you still can't take the method of one class and call it as a function; you would have to rebind it to the object you want to call it on, which is much more obtuse.
I would like to mention the Python descriptor API, which lets you customize object-to-attribute "communication". It is also noteworthy that, in Python, you are free to implement an alternative protocol by overriding the default behaviour provided through the default implementation of the __getattribute__ method.
Let me give more details about the aforementioned.
Descriptors are regular classes with __get__, __set__ and/or __delete__ methods.
When the interpreter encounters something like anObj.anAttr, the following happens:
the __getattribute__ method of anObj is invoked
__getattribute__ retrieves the anAttr object from the class dict
it checks whether the anAttr object has __get__, __set__ or __delete__ callable objects
the context (i.e., the caller object or class, plus the value in the case of a setter) is passed to the callable
the result is returned.
As was mentioned, this is the default behavior. One is free to change the protocol by re-implementing __getattribute__.
This technique is a lot more powerful than decorators.
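A minimal descriptor sketch (Positive and Account are made-up names): __get__ and __set__ intercept attribute access on instances of the owning class.
class Positive(object):
    """Data descriptor that rejects negative values."""
    def __init__(self, name):
        self.name = name
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__[self.name]
    def __set__(self, obj, value):
        if value < 0:
            raise ValueError("must be non-negative")
        obj.__dict__[self.name] = value

class Account(object):
    balance = Positive('balance')   # class attribute; instance access goes through the descriptor

acct = Account()
acct.balance = 10      # calls Positive.__set__
print acct.balance     # calls Positive.__get__ -> 10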
Ruby has built-in continuation support using callcc.
Hence you can implement cool things like the amb operator.
At this stage, Python still has better Unicode support.
Python has docstrings and Ruby doesn't... or if it does, they are not as easily accessible as in Python.
P.S. If I'm wrong, pretty please, leave an example? I have a workaround that I could monkey-patch into classes quite easily, but I'd like to have a docstring kind of feature in a "native way".
Ruby has a line by line loop over input files (the '-n' flag) from the commandline so it can be used like AWK. This Ruby one-liner:
ruby -ne 'END {puts $.}'
will count lines like the AWK one-liner:
awk 'END{print NR}'
Ruby gets this feature from Perl, which in turn took it from AWK as a way of getting sysadmins on board with Perl without having to change the way they do things.
Ruby has sigils and twigils, Python doesn't.
Edit: And one very important thing that I forgot (after all, the previous was just to flame a little bit :-p):
Python has a JIT compiler (Psyco), a slightly lower-level language for writing faster code (Pyrex), and the ability to add inline C++ code (Weave).
My Python's rusty, so some of these may exist in Python and I just don't remember, or never learned them in the first place, but here are the first few that I thought of:
Whitespace
Ruby handles whitespace completely differently. For starters, you don't need to indent anything (which means it doesn't matter whether you use 4 spaces or 1 tab). It also does smart line continuation, so the following is valid:
def foo(bar,
cow)
Basically, if you end with an operator, it figures out what is going on.
Mixins
Ruby has mixins which can extend instances instead of full classes:
module Humor
  def tickle
    "hee, hee!"
  end
end

a = "Grouchy"
a.extend Humor
a.tickle   # => "hee, hee!"
Enums
I'm not sure if this is the same as generators, but as of Ruby 1.9, Ruby has enumerators, so
>> enum = (1..4).to_enum
=> #<Enumerator:0x1344a8>
Reference: http://blog.nuclearsquid.com/writings/ruby-1-9-what-s-new-what-s-changed
"Keyword Arguments"
Both of the items listed there are supported in Ruby, although you can't skip default values like that.
You can either go in order
def foo(a, b=2, c=3)
  puts "#{a}, #{b}, #{c}"
end

foo(1,3)    >> 1, 3, 3
foo(1,c=5)  >> 1, 5, 3
c           >> 5
Note that c=5 actually assigns the variable c in the calling scope the value 5, and passes that 5 as the second positional argument, so the parameter b gets the value 5 (while c inside foo keeps its default of 3).
or you can do it with hashes, which address the second issue
def foo(a, others)
  others[:b] = 2 unless others.include?(:b)
  others[:c] = 3 unless others.include?(:c)
  puts "#{a}, #{others[:b]}, #{others[:c]}"
end

foo(1,:b=>3) >> 1, 3, 3
foo(1,:c=>5) >> 1, 2, 5
Reference: The Pragmatic Progammer's Guide to Ruby
You can have code in the class definition in both Ruby and Python. However, in Ruby you have a reference to the class (self). In Python you don't have a reference to the class, as the class isn't defined yet.
An example:
class Kaka
  puts self
end
self in this case is the class, and this code would print out "Kaka". There is no way to print out the class name or in other ways access the class from the class definition body in Python.
Syntax is not a minor thing, it has a direct impact on how we think. It also has a direct effect on the rules we create for the systems we use. As an example we have the order of operations because of the way we write mathematical equations or sentences. The standard notation for mathematics allows people to read it more than one way and arrive at different answers given the same equation. If we had used prefix or postfix notation we would have created rules to distinguish what the numbers to be manipulated were rather than only having rules for the order in which to compute values.
The standard notation makes it plain what numbers we are talking about while making the order in which to compute them ambiguous. Prefix and postfix notation make the order in which to compute plain while making the numbers ambiguous. Python would already have multiline lambdas if it were not for the difficulties caused by the syntactic whitespace. (Proposals do exist for pulling this kind of thing off without necessarily adding explicit block delimiters.)
I find conditions where I want something to happen when a condition is false much easier to write with Ruby's unless statement than with the semantically equivalent if-not construction in Python or other languages, for example. If most of the languages people use today are equal in power, how can the syntax of each language be considered a trivial thing? After specific features like blocks, inheritance mechanisms, etc., syntax is the most important part of a language, hardly a superficial thing.
What is superficial are the aesthetic qualities of beauty that we ascribe to syntax. Aesthetics have nothing to do with how our cognition works; syntax does.
Surprised to see no mention of Ruby's method_missing mechanism. I'd give the find_by_... methods in Rails as an example of the power of that language feature. My guess is that something similar could be implemented in Python, but to my knowledge it isn't there natively.
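For what it's worth, Python's __getattr__ hook, which is only consulted when normal attribute lookup fails, gets you a rough analogue; a sketch with made-up names (Table, find_by_...):
class Table(object):
    def __init__(self, rows):
        self.rows = rows                    # list of dicts
    def __getattr__(self, name):            # called only for missing attributes
        if name.startswith('find_by_'):
            column = name[len('find_by_'):]
            def finder(value):
                return [r for r in self.rows if r.get(column) == value]
            return finder
        raise AttributeError(name)

users = Table([{'name': 'ada'}, {'name': 'bob'}])
print users.find_by_name('ada')    # [{'name': 'ada'}]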
Another difference in lambdas between Python and Ruby is demonstrated by Paul Graham's Accumulator Generator problem. Reprinted here:
Write a function foo that takes a number n and returns a function that takes a number i, and returns n incremented by i.
Note: (a) that's number, not integer, (b) that's incremented by, not plus.
In Ruby, you can do this:
def foo(n)
lambda {|i| n += i }
end
In Python, you'd create an object to hold the state of n:
class foo(object):
    def __init__(self, n):
        self.n = n
    def __call__(self, i):
        self.n += i
        return self.n
Some folks might prefer the explicit Python approach as being clearer conceptually, even if it's a bit more verbose. You store state like you do for anything else. You just need to wrap your head around the idea of callable objects. But regardless of which approach one prefers aesthetically, it does show one respect in which Ruby lambdas are more powerful constructs than Python's.
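For what it's worth, Python 3's nonlocal keyword also makes a closure-based version possible; a sketch:
def foo(n):
    def accumulate(i):
        nonlocal n       # rebind the enclosing n (Python 3 only)
        n += i
        return n
    return accumulate

acc = foo(10)
print(acc(1))     # 11
print(acc(2.5))   # 13.5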
Python has named optional arguments
def func(a, b=2, c=3):
    print a, b, c
>>> func(1)
1 2 3
>>> func(1, c=4)
1 2 4
AFAIK Ruby has only positional arguments, because b=2 in the function declaration is a default assignment that always applies; you can't pick which defaults to override by name.
Ruby has embedded documentation:
=begin
You could use rdoc to generate man pages from this documentation
=end
http://c2.com/cgi/wiki?PythonVsRuby
http://c2.com/cgi/wiki?SwitchedFromPythonToRuby
http://c2.com/cgi/wiki?SwitchedFromRubyToPython
http://c2.com/cgi/wiki?UsingPythonDontNeedRuby
http://c2.com/cgi/wiki?UsingRubyDontNeedPython
In Ruby, when you import a file with
require, all the things defined in
that file will end up in your global
namespace.
With Cargo you can "require libraries without cluttering your namespace".
# foo-1.0.0.rb
class Foo
VERSION = "1.0.0"
end
# foo-2.0.0.rb
class Foo
VERSION = "2.0.0"
end
>> Foo1 = import("foo-1.0.0")
>> Foo2 = import("foo-2.0.0")
>> Foo1::VERSION
=> "1.0.0"
>> Foo2::VERSION
=> "2.0.0"