I would like to know if there are any documented performance differences between a Python interpreter installed from an RPM (or via yum) and a Python interpreter compiled from source (with well-chosen compilation flags set in advance).
I am using a Red Hat 6.3 machine as a Django/Apache/mod_wsgi production server. I have already compiled everything properly, in different setups and in different orders. However, I usually keep the build/dev dependencies on such a machine. For various ego-related (and more or less practical) reasons, I would like to use Python 2.7.3. By default, Red Hat comes with Python 2.6.6. I think I could live with it, but it would hurt me somehow (I would have to drop and find replacements for a few libraries, and my ego).
However, besides my ego and dependencies, I would like to know what the impact would be in terms of performance for a Django server.
If you compile with the exact same flags that were used to compile the RPM version, you will get a binary that's exactly as fast. And you can get those flags by looking at the RPM's spec file.
However, you can sometimes do better than the pre-built version. For example, you can let the compiler optimize for your specific CPU, instead of for "general 386 compatible" (or whatever the RPM was optimized for). Of course if you don't know what you're doing (or are doing it on purpose), it's always possible to build something slower than the pre-built version, too.
Meanwhile, 2.7.3 is faster in a few areas than 2.6.6. Most of them usually won't affect you, but if they do, they'll probably be a big win.
Finally, for the vast majority of Python code, the speed of the Python interpreter itself isn't relevant to your overall performance or scalability. (And when it is, you probably want to try PyPy, Jython, or IronPython to replace CPython.) This is especially true for a WSGI service. If you're not doing anything slow, Apache will probably be the bottleneck. If you are doing anything slow, it's probably something I/O bound and well outside of Python's control (like reading files).
Ultimately, the only way you can know how much gain you get is by trying it both ways and performance testing. But if you just want a rule of thumb, I'd say expect a 0% gain, and be pleasantly surprised if you get lucky.
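If you do decide to measure it, here's a minimal sketch of the kind of comparison I mean: run the same script under each interpreter build and compare the timings. The workload function below is just a stand-in; substitute something representative of your actual request handling.

# bench.py -- run under each interpreter build and compare:
#   /usr/bin/python bench.py           (the RPM build)
#   /usr/local/bin/python2.7 bench.py  (your own build)
import timeit

def workload():
    # Placeholder CPU-bound work; replace with something realistic.
    return sum(i * i for i in range(10000))

# Take the best of several runs to reduce noise:
print(min(timeit.repeat(workload, number=1000, repeat=5)))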
I want to get my Python script working on a bare-metal device like a microcontroller, WITHOUT the need for an interpreter. I know there are already JIT compilers for Python, like PyPy, and interpreters, like CPython.
However, the existing interpreters I've seen (such as CPython) take up a lot of memory (in the MB range).
Is there an AOT compiler for Python (i.e. one compiling directly to native hardware, through an intermediary like LLVM)?
I assume such a compiler would enable Python to run much faster compared to existing implementations AND with a lower memory footprint. If there is one, I wonder why that solution hasn't been popularized.
As you already mentioned, Cython is an option (however, it is true that the result is large, since the C runtime needs to implement the Python functionality together with your program).
With regard to LLVM, there was a project by Google named Unladen Swallow. However, that project has mostly been abandoned. You can find some information about it here.
Basically it was an attempt to bring LLVM optimizations into the runtime of CPython, i.e. JIT-compiling Python code.
Another old alternative was Shed Skin, which compiles Python to C++. Some information about it can be found here.
Yet another option, similar to Shed Skin, is to restrict yourself to a subset of the Python language and use MicroPython.
An alternative would be to use GraalVM with its Truffle-based Python implementation, compiled ahead of time.
It's basically Python running on an ahead-of-time-compiled runtime for the JVM.
The project looks promising. You can check this link:
https://www.graalvm.org/22.2/graalvm-as-a-platform/language-implementation-framework/AOT/
I recently came across Codon:
Codon is a high-performance Python compiler that compiles Python code to native machine code without any runtime overhead. Typical speedups over Python are on the order of 10-100x or more, on a single thread. Codon's performance is typically on par with (and sometimes better than) that of C/C++.
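To give a feel for what these compilers expect from you: Shed Skin, MicroPython and Codon all work best on statically-typeable code with no dynamic tricks. A hypothetical illustration of the kind of function such ahead-of-time tools can translate well:

# Plain Python that an AOT compiler can type-infer or check easily:
# every variable keeps one concrete type, and there is no runtime
# code generation (no eval/exec, no monkey-patching).
def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(30))  # runs unchanged under CPython too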
I have been looking for the freeze.py utility which is supposed to come bundled with Python 3 in a Python 3.3 Windows install (albeit with distribute and pip installed) and haven't found it. The utility can be downloaded directly out of the Python svn repository here, but I'm wondering: does freeze come with a standard Windows Python 3 install?
It looks like Windows binary installations of Python don't come with the freeze tool. And there's apparently a good reason for this. According to the freeze README in the source tree:
Under Windows 95 or NT, you must use the -p option and point it to the top of the Python source tree.
If you read the whole section, it comes down to this: On Windows, freeze only works if you've built Python from source, and have the resulting tree sitting around to be used for freezing. So, there's no good reason to give you freeze in binary installations.
Meanwhile, I probably should have asked this in the first place, but… are you sure you want freeze in the first place?
The freeze utility is very out of date (you might have guessed that from the README talking about requiring VC++ 5.0, Windows 95 or NT 4.0, etc.). It also never worked that well on Windows (as you can tell from the documentation describing it as a utility "… to compile executables for Unix systems"). And there's just a lot of things it can't handle, or handles badly. At this point it should probably be considered more as example code than as a useful tool.
There are a number of third-party alternatives out there: cx_freeze, py2exe, PyInstaller, etc. If you search PyPI for "freeze" (and other terms that seem reasonable), you will find a bunch of these alternatives. If your goal is to create a standalone executable out of your Python script (which, btw, freeze can never do on Windows anyway), experiment with a few of these and pick the one you like best.
If your goal is something different, the right tool will be different—you might be better off using venv or just zipping up a user site-packages directory or creating a local PyPI server.
In the comments, you said:
What I was actually looking for is a tool to convert Python code to C code. Apparently, that's impossible.
It's not impossible, it's just not what freeze (or its successors/competitors) does. Cython compiles almost a strict superset of Python to C code, although it's C code that uses Python runtime objects (except where you explicitly statically declare variables and functions with C types). If C++ is an acceptable alternative to C, Shed Skin compiles a restricted subset of Python 2.6 (using native C++ objects, and using type inference so you don't have to statically declare your types).
The question is why you want to compile Python code to C.
If you're looking to optimize some slow code, Cython is great at speeding up small pieces of bottleneck code. It takes a bit of effort (deciding what to move to Cython, what static type declarations to put in, etc.), but the curve of payoff to effort is pretty solid. Shed Skin takes a lot less effort—if it works, it just speeds up everything, automatically—but it also means you can't write a lot of idiomatic Python code in the first place. But really, before looking at either, you should consider PyPy, a complete implementation of Python 2.7.3 (and hopefully 3.3 soon) in a JIT-compiling interpreter, that often offers similar speedups, with pretty much no tradeoffs at all. Or, alternatively, you may just need to rewrite slow code to take advantage of already-optimized libraries (numpy instead of mapping over lists, itertools instead of explicit loops, lxml instead of html.parse, …).
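To give a taste of what that "bit of effort" looks like, here's a minimal sketch using Cython's pure-Python mode (this assumes Cython is installed; the function name is made up). The same file runs under plain CPython, but when compiled with Cython the declarations below become C types and the loop loses its Python object overhead:

# sumsq.py -- valid Python, and also compilable with Cython
# (e.g. cythonize -i sumsq.py). Under plain CPython the cython.*
# declarations are harmless no-ops.
import cython

@cython.locals(n=cython.long, i=cython.long, total=cython.double)
def sum_of_squares(n):
    total = 0.0
    for i in range(n):
        total += i * i
    return total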
If you're looking to write Python code that can interact directly with C code, without all the headaches of ctypes (or manually building Python bindings), Cython scores again. Cython code can effectively natively call both Python code and C code, and the compiler makes it all work like magic.
If you're looking to get C code that you can read, maintain, and improve on… there, you're out of luck. And this one may actually be impossible. Idiomatic Python code is just so different from idiomatic C code that it's hard to imagine how you could translate one into the other.
If you're wondering what the underlying problem is:
As far as I can tell, freeze makes a lot of assumptions about how things are laid out. It should be enough to have any Python installation that can build C extension modules and embedding apps, but it's not, because freeze goes under the covers and expects that building to work in specific ways. A standard binary installation on almost every *nix platform ends up looking like what freeze expects,* but a standard binary installation on Windows looks completely different.
It's not impossible to hack things up using Windows symlinks (at least if you have Vista or later and a drive with a modern version of NTFS) to get everything organized the way freeze expects (I found a blog where someone did that with 2.7.1…), but really, I don't think it's worth trying. It will be a lot of work (especially if you're just learning this stuff), and there's no guarantee you won't immediately run into another problem.
* This isn't actually true. On a Mac, both Apple's pre-installed Python and the binary installers at python.org actually give you the files organized as a Mac framework—but they provide a bunch of symlinks that simulate the traditional layout, which is good enough. On most linux distros, and many other platforms, the binary python package doesn't include any of the development files at all—but once you install an add-on binary package named something like python-devel, then you've got the right layout. Anyway, none of this matters to you, because if you wanted to learn about dpkg dependencies or framework builds you wouldn't be using Windows, right?
I am looking to bring speed improvements to an existing application, and I'm looking for advice on my possible options. The application is written in Python, uses wxPython, and is packaged with py2exe (I only target Windows platforms). Parts of the application are computationally intensive and run too slowly in interpreted Python. I am not familiar with C, so porting parts of the code over is not really an option for me.
So my question is basically do I have a clear picture of my options as I outline below, or am I approaching this from the wrong direction?
Running with PyPy: Today I started experimenting with PyPy, and the results are exciting: I can run large parts of the code under the PyPy interpreter and I'm seeing 5x+ speed improvements with no code changes. However, if I understand correctly, (a) PyPy with wxPython support is still a work in progress, and (b) I cannot compile it down to an exe for distribution anyway. So unless I'm mistaken, this seems like a no-go for me? There's no way to package things up so parts of it are executed with PyPy?
Converting code to RPython, translating with PyPy: So the next option seems to be actually rewriting parts of the code to PyPy's restricted language, which seems like a pretty large job. But if I do that, parts of the code can then be compiled to an executable (?) and then I can access the code through ctypes (?).
Other restricted options: Shedskin seems to be a popular alternative here; does this fit my requirements better? Other options seem to be Cpython, Psyco, and Unladen, but they are all superseded or no longer maintained.
Using PyPy indeed rules out py2exe and similar tools, at least until one is ported (AFAIK there is no active work on that). Still, as PyPy binaries do not need to be installed, you might get away with a more complicated distribution that includes both your Python source code and a PyPy binary+stdlib, and uses a small wrapper (batch file, executable) to ease launching. I can't comment on whether wxPython on PyPy is mature enough to be used, but perhaps someone on pypy-dev, wxpython-dev or either one's IRC channel can give a recommendation if you describe your situation.
Translating your code into RPython does not seem viable to me. The translation toolchain is not really a tool for general purpose development, and producing a C dll for embedding/ctypes seems nontrivial. Also, RPython code really is low-level, making your Python code restricted enough may amount to rewriting half of it.
As for other restricted options: You seem to mix up CPython (the original Python interpreter written in C) with Cython (a compiler for a Python-like language that emits C code suitable for CPython extension modules). Both projects are active. I'm not very familiar with Shedskin, but it seems to be a tool for developing whole programs, with little or no interaction with non-restricted Python code. Cython seems a much better fit: Although it requires manual type annotations and lower-level code to achieve really good performance, it's trivial to use from Python: The very purpose of the project is producing extension modules.
I would definitely look into Cython; I've been playing with it some and have seen speedups of ~100x over pure Python. Use the profile module to find the bottlenecks first. Usually the loops offer the biggest chances to increase speed when going to Cython. You should also look into whether you can use array/vector operations in NumPy instead of loops; if so, that can also give extreme performance boosts. For instance:
a = range(1000000)
for i in range(len(a)):
    a[i] += 5
is slow, real slow. On the other hand:
import numpy

a = numpy.arange(1000000)
a = a + 5
is fast, real fast.
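As a concrete starting point for the "profile first" advice, here is a minimal sketch with the standard-library profiler (cProfile here; the plain profile module has the same interface):

import cProfile
import pstats

def slow_part():
    total = 0
    for i in range(1000000):
        total += i * i
    return total

# Profile the call and dump the statistics to a file:
cProfile.run("slow_part()", "profile.out")
# Show the ten most expensive functions by cumulative time:
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)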
Correction: Shed Skin can be used to generate extension modules, as well as whole programs.
What exactly is the sole purpose of Python being an interpreter?
1. It doesn't provide executable files (how would a commercial software developer use it?).
2. If any part of the code has bugs, they don't show up unless Python reaches that line at run time. In large projects, not all of the code gets interpreted on every run, so there could be a lot of hidden bugs in the project.
3. Every system needs to have Python installed to run this software.
I am using py2exe, and I am puzzled just looking at the executable's file size (it is too large).
First, answers to your questions.
They can use it for parts of their system for which they don't mind the source being visible (e.g. extensions) or they can Open Source their application. They can also use it to develop backend services for something which they're providing as a service (e.g. Youtube). They can also use it for internal tools which they don't plan to release(e.g. with Google).
That's why you need to write tests, exercise discipline and measure test coverage regularly. You sacrifice the compiler's ability to check for things, and some speed, in exchange for the advantages I've detailed below.
Yes, but it's not too hard to bundle Python along with your app. The entire interpreter plus libraries is not that big. Python is pretty much a standard on most UNIX environments today, so this is usually not a practical problem. The same issue exists with (say) Java (you need the JVM installed).
py2exe bundles all the modules into a single executable. It will be big. If you want to do compiled programs that are lean, don't use Python. Wrong fit.
Now, a few reasons on why "interpreted".
Faster development time. Programmer time is costlier than computer time so we should optimise for that.
No compilation cycle. Very easy to make incremental changes and check. Quick turnaround.
Introspection and dynamic typing allows certain kinds of coding not possible with some compiled languages like C.
Cross platform. If you have an interpreter for your platform, the program will run there even if it was written on a different platform.
You bring up a few different issues, here are some responses:
1) Technically, Python isn't interpreted (usually) - it is compiled to bytecode and that bytecode is run on a virtual machine.
So Python doesn't provide executables because it runs bytecode, not machine code.
You could just as well ask why Java doesn't produce executables.
The standard advantages of virtual machines apply: A big one being a simplified cross-platform development experience.
You could distribute just the .pyc (compiled bytecode) files if you don't want your source to be available. See this reference.
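For example, the standard library can produce those .pyc files for you (a minimal sketch; the module and directory names are placeholders):

import py_compile
import compileall

# Compile a single module to bytecode:
py_compile.compile("mymodule.py")

# Or compile a whole source tree in one go:
compileall.compile_dir("myproject/")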
2) Here, you are talking about dynamic vs. static languages. There are tradeoffs, of course. One disadvantage of dynamic languages, as you mention, is that you get more run-time errors rather than compile-time errors.
There are, of course, corresponding advantages. I'll point you to some resources discussing both sides:
Dynamic type languages versus static type languages
What do people find so appealing about dynamic languages?
http://research.microsoft.com/en-us/um/people/emeijer/Papers/RDL04Meijer.pdf
3) Quite right. Just as you need the Java VM installed to run Java, perl to run Perl, etc.
Regarding your last point:
The whole idea of running in a VM is that you can install that VM once, then run many different apps. By bundling the whole VM with every app (as py2exe does), you are going against that concept. So yes, you have to pay the cost in terms of size.
The sole purpose of Python is to provide a beautiful language to program in.
Your points #1 and #3 are similar, and the answer is that professional programmers use py2exe/PyInstaller etc. to bundle and distribute their programs; in the case of frameworks/libraries they don't even need to do that.
Your point #2 is also valid for statically compiled languages: the fact that something compiles correctly in C++ doesn't mean it won't crash at run time or that the business logic is correct. You need to test each part of your code anyway, so with good unit tests and functional tests Python is on par with other languages at finding bugs, and since it doesn't need to be compiled and is dynamic, it gives better productivity.
IMO
Python is not an interpreter, but an interpreted language.
This question is more about interpreted languages vs. compiled languages, which actually has no answer other than the usual "it depends on your needs".
See Noufal Ibrahim's answer for details, but I'm not sure this question is a good fit for SO.
(1) You can provide installers for Python code (which may install the Python environment). This doesn't prevent you from having commercial code. Note that you can also have Java (also "interpreted" or JIT-compiled) commercial or desktop code and require your users to install a JRE.
(2) Any language, even a compiled and strongly typed one, may produce errors that only show up when you get to that given code (e.g. division by zero). I guess you may be referring to strongly vs. loosely typed languages. It's not just a matter of compilation: strongly typed languages generally make it easier to find "structural" bugs (e.g. trying to use a string as a number) during the compilation process. In contrast, loosely typed languages often lead to shorter code, which may be easier to manage. What to use really depends on the goal of your application.
Interactivity is good. I find it encourages making small, easily testable functions that build together to make an application.
Unless you are writing simple, statically linked applications, you will usually have some run-time baggage that must be included or installed (MFC, .NET, etc.) for compiled languages. Look at the WinSxS folder. Sure, you get to "share" that stuff most of the time, but there is still a lot of space taken up by "needed", if hidden, requirements.
And as far as bugs, run-time bugs will be the same no matter what. Any good programmer would test as much as possible when making changes. This should catch what would be compile time bugs in other languages as well as testing run-time behavior.
A Python .exe necessarily has to include a complete copy of the Python interpreter. As you say, since it's interpreted, it won't touch every line of code until that line is actually run. Some parts may actually invoke a Python parse/compile sequence (e.g. eval(), some regexes, etc.). These would fail in a compiled .exe unless the full interpreter was available.
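You can see that parse/compile machinery at work from Python itself; a tiny illustration:

# eval() and compile() invoke the full parser/compiler at run time,
# which is why a frozen executable has to ship the whole interpreter:
code = compile("x * 2", "<string>", "eval")
print(eval(code, {"x": 21}))  # prints 42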
Suppose I've developed a general-purpose end user utility written in Python. Previously, I had just one version available which was suitable for Python later than version 2.3 or so. It was sufficient to say, "download Python if you need to, then run this script". There was just one version of the script in source control (I'm using Git) to keep track of.
With Python 3, this is no longer necessarily true. For the foreseeable future, I will need to simultaneously develop two different versions, one suitable for Python 2.x and one suitable for Python 3.x. From a development perspective, I can think of a few options:
Maintain two different scripts in the same branch, making improvements to both simultaneously.
Maintain two separate branches, and merge common changes back and forth as development proceeds.
Maintain just one version of the script, plus check in a patch file that converts the script from one version to the other. When enough changes have been made that the patch no longer applies cleanly, resolve the conflicts and create a new patch.
I am currently leaning toward option 3, as the first two would involve a lot of error-prone tedium. But option 3 seems messy and my source control system is supposed to be managing patches for me.
For distribution packaging, there are more options to choose from:
Offer two different download packages, one suitable for Python 2 and one suitable for Python 3 (the user will have to know to download the correct one for whatever version of Python they have).
Offer one download package, with two different scripts inside (and then the user has to know to run the correct one).
One download package with two version-specific scripts, and a small stub loader that can run in both Python versions, that runs the correct script for the Python version installed.
Again I am currently leaning toward option 3 here, although I haven't tried to develop such a stub loader yet.
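For what it's worth, I imagine the stub would look something like this (the file names are hypothetical):

#!/usr/bin/env python
# stub.py -- runs under both Python 2 and 3, and delegates to the
# version-specific script sitting next to it.
import os
import subprocess
import sys

script = "mytool_py3.py" if sys.version_info[0] >= 3 else "mytool_py2.py"
here = os.path.dirname(os.path.abspath(__file__))
sys.exit(subprocess.call([sys.executable, os.path.join(here, script)] + sys.argv[1:]))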
Any other ideas?
Edit: my original answer was based on the state of 2009, with Python 2.6 and 3.0 as the current versions. Now, with Python 2.7 and 3.3, there are other options. In particular, it is now quite feasible to use a single code base for Python 2 and Python 3.
See Porting Python 2 Code to Python 3
Original answer:
The official recommendation says:
For porting existing Python 2.5 or 2.6 source code to Python 3.0, the best strategy is the following:
1. (Prerequisite:) Start with excellent test coverage.
2. Port to Python 2.6. This should be no more work than the average port from Python 2.x to Python 2.(x+1). Make sure all your tests pass.
3. (Still using 2.6:) Turn on the -3 command line switch. This enables warnings about features that will be removed (or change) in 3.0. Run your test suite again, and fix code that you get warnings about until there are no warnings left, and all your tests still pass.
4. Run the 2to3 source-to-source translator over your source code tree. (See 2to3 - Automated Python 2 to 3 code translation for more on this tool.) Run the result of the translation under Python 3.0. Manually fix up any remaining issues, fixing problems until all tests pass again.
It is not recommended to try to write source code that runs unchanged under both Python 2.6 and 3.0; you'd have to use a very contorted coding style, e.g. avoiding print statements, metaclasses, and much more. If you are maintaining a library that needs to support both Python 2.6 and Python 3.0, the best approach is to modify step 3 above by editing the 2.6 version of the source code and running the 2to3 translator again, rather than editing the 3.0 version of the source code.
Ideally, you would end up with a single version, that is 2.6 compatible and can be translated to 3.0 using 2to3. In practice, you might not be able to achieve this goal completely. So you might need some manual modifications to get it to work under 3.0.
I would maintain these modifications in a branch, like your option 2. However, rather than maintaining the final 3.0-compatible version in this branch, I would consider applying the manual modifications before the 2to3 translation, and putting this modified 2.6 code into your branch. The advantage of this method is that the difference between this branch and the 2.6 trunk would be rather small, consisting only of the manual changes, not the changes made by 2to3. This way, the separate branches should be easier to maintain and merge, and you should be able to benefit from future improvements to 2to3.
Alternatively, take a bit of a "wait and see" approach. Proceed with your porting only so far as you can go with a single 2.6 version plus 2to3 translation, and postpone the remaining manual modification until you really need a 3.0 version. Maybe by this time, you don't need any manual tweaks anymore...
For development, option 3 is too cumbersome. Maintaining two branches is the easiest way, although the way to do that will vary between VCSes. Many DVCSes will be happier with separate repos (with a common ancestry to help merging), and a centralized VCS will probably be easier to work with using two branches. Option 1 is possible, but you may miss something to merge, and it is a bit more error-prone IMO.
For distribution, I'd use option 3 as well if possible. All three options are valid anyway, and I have seen variations on these models from time to time.
I don't think I'd take this path at all. It's painful whichever way you look at it. Really, unless there's strong commercial interest in keeping both versions simultaneously, this is more headache than gain.
I think it makes more sense to just keep developing for 2.x for now, at least for a few months, up to a year. At some point in time it will simply be time to declare a final, stable version for 2.x and develop the next ones for 3.x+.
For example, I won't switch to 3.x until some of the major frameworks go that way: PyQt, matplotlib, numpy, and some others. And I don't really mind if at some point they stop 2.x support and just start developing for 3.x, because I'll know that in a short time I'll be able to switch to 3.x too.
I would start by migrating to 2.6, which is very close to Python 3.0. You might even want to wait for 2.7, which will be even closer to Python 3.0.
And then, once you have migrated to 2.6 (or 2.7), I suggest you simply keep just one version of the script, with things like "if PY3K: ... else: ..." in the rare places where it is mandatory. Of course it's not the kind of code we developers like to write, but then you don't have to worry about managing multiple scripts, branches, patches, or distributions, which would be a nightmare.
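For example, a minimal sketch of that pattern, runnable under both interpreters:

import sys

PY3K = sys.version_info[0] >= 3

if PY3K:
    text_type = str
    def iteritems(d):
        return iter(d.items())
else:
    text_type = unicode  # only defined on Python 2
    def iteritems(d):
        return d.iteritems()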
Whatever you choose, make sure you have thorough tests with 100% code coverage.
Good luck!
Whichever option is chosen for development, most potential issues could be alleviated with thorough unit testing to ensure that the two versions produce matching output. That said, option 2 seems most natural to me: applying changes from one source tree to another is a task (most) version control systems were designed for, so why not take advantage of the tools they provide to ease this.
For distribution, it is difficult to say without knowing your audience. Power Python users would probably appreciate not having to download two copies of your software, yet for a more general user base it should probably "just work".