What makes C faster than Python? [closed] - python

I know this is probably a very obvious answer and that I'm exposing myself to less-than-helpful snarky comments, but I don't know the answer so here goes.
If Python compiles to bytecode at runtime, is it just that initial compilation step that takes longer? If so, wouldn't that just be a small upfront cost (i.e. if the code runs over a long period of time, do the differences between C and Python diminish)?

It's not merely the fact that Python code is interpreted which makes it slower, although that definitely sets a limit to how fast you can get.
If the bytecode-centric perspective were right, then to make Python code as fast as C all you'd have to do is replace the interpreter loop with direct calls to the functions, eliminating any bytecode, and compile the resulting code. But it doesn't work like that. You don't have to take my word for it, either: you can test it for yourself. Cython converts Python code to C, but a typical Python function converted and then compiled doesn't show C-level speed. All you have to do is look at some typical C code thus produced to see why.
The real challenge is dynamic dispatch (late binding, if you prefer that jargon): whereas a + b can compile down to a single op in C when a and b are both known to be integers or floats, in Python the interpreter has to do a lot more work to compute a + b (fetch the objects the names are bound to, dispatch through __add__ / __radd__, and so on).
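To see that work concretely, here is a simplified sketch of what evaluating a + b involves at the object level (CPython's real dispatch happens in C and handles more cases, but the shape is the same):

a, b = 1, 2.5

result = type(a).__add__(a, b)        # int.__add__(1, 2.5) -> NotImplemented
if result is NotImplemented:
    result = type(b).__radd__(b, a)   # float.__radd__(2.5, 1) handles it

print(result)  # 3.5 -- several lookups and calls, versus one add instruction in C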
This is why, to make Cython reach C speeds, you have to specify the types in the critical path; it is how Shedskin makes Python code fast, using (Cartesian product) type inference to generate C++ from it; and it is how PyPy can be fast: its JIT watches how the code behaves at runtime and specializes on things like types. Each approach eliminates dynamism, whether at compile time or at runtime, so that it can generate code which knows what it's doing.
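As an illustration, a minimal Cython sketch (the function names are mine): the untyped version still goes through the generic object protocol, while the typed one lets Cython emit a plain C loop.

# sketch.pyx -- compile with Cython; illustrative only
def py_sum(n):
    # untyped: every += goes through the Python object protocol
    total = 0
    for i in range(n):
        total += i
    return total

def c_sum(int n):
    # typed: Cython can generate a straight C loop here
    cdef int i, total = 0
    for i in range(n):
        total += i
    return total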

Bytecode is not native to the CPU, so it needs interpretation (by an interpreter, which is itself native CPU code). The advantage of bytecode is that it enables optimizations and pre-computations and saves space. A C compiler produces machine code, and machine code needs no interpretation: it is native to the CPU.
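You can see the bytecode the interpreter has to walk through using the standard-library dis module:

# Disassemble a function into CPython bytecode
import dis

def add(a, b):
    return a + b

dis.dis(add)  # prints LOAD_FAST, an add opcode, RETURN_VALUE
              # (exact opcode names vary between CPython versions)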

Related

Best language for battery modelling? [closed]

I'm interested to learn whether there's a general consensus around using one language or environment for building physics-based computational models of batteries.
The modelling typically involves mathematically representing electrochemical, mechanical and thermal phenomena, solving partial differential equations and outputting plots of different variables in two and three dimensions.
So far, I've seen various academic research groups using MATLAB, but from other questions here, I can see that Fortran and Python have been suggested for relatively generic physics modelling. (See here: https://goo.gl/3ACddi)
I have a preference for a free (as in beer & speech) environment, wherever possible, but I recognise that some proprietary environments may have built-in toolboxes that are useful. Additionally, I would like the environment to allow the code to be easily parallelized so that it can run across many cores.
This is a broad question, but I'll share what I've experienced so far; maybe it's of some use. Keep in mind that this is all my personal opinion.
MATLAB: It's widely used in academic environments. One reason is that MathWorks follows a smart business strategy where educational licenses are very cheap compared to the retail price, so many students and professors get used to MATLAB even if there might be something better for them out there.
MATLAB has the advantage of being very easy to code in: it will often take you only a short time to get a first prototype running. This comes at the expense of performance (C/C++ and Python are often a bit faster than MATLAB). Another downside is that MATLAB was not meant to compete with C/C++ and the like; you don't even have namespaces in MATLAB, so writing frameworks in it is a whole lot more tiresome (if not impossible) and inefficient than in C/C++. For instance, if you create a function in your workspace called max which does absolutely nothing, you have no way to call MATLAB's built-in max function as long as yours is in the workspace.
C++: I'm studying engineering, and here C++ is the favourite choice for physical simulations. Compared to other languages it's really fast, and since the programmer is responsible for memory management, he or she can squeeze out the last 10% of performance by writing efficient, case-specific memory-handling code. There's also a ton of open-source libraries out there, for example Eigen, a library for matrix and vector computations.
C: Some people (hello, Linus) are convinced that C++ is not a good language and prefer plain C, since it is a bit faster and the library "bloat" (coming, in C++, from the standard library, Boost and the like) is smaller. Further arguments against C++ are that it seduces the programmer into creating classes for every little thing and into using polymorphism out of laziness. Both can have a negative impact on performance, but whether that makes it worth refusing to work with C++ at all is up to you to decide. As a side note: the complete Linux kernel is written in C, not C++, and many tools like Git are also written in plain C.
Python: Another language suitable for rapid prototyping, since you don't need a compile step and the syntax is optimized to be easy and intuitive to use. A debugger is often unnecessary, since you can simply use the interpreter to inspect variables and their values, much like in MATLAB. But unlike MATLAB, Python lets you create objects with methods and everything else you would expect from C++. (I know MATLAB has since added classes, but I refuse to call them equivalent to C++/Python ones.) Python is also widely used for academic purposes, and there are open-source libraries for machine learning, artificial intelligence and just about everything else. There are even libraries that let you use fractions without approximation, i.e. 1/6 is stored as two integers, numerator and denominator, not as a double; a short example follows below. The open-source community is putting great effort into porting many of MATLAB's features to Python, which is why you'll find many open-source enthusiasts using it.
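For instance, the standard-library fractions module gives exact rational arithmetic:

# Exact rational arithmetic: no floating-point rounding involved
from fractions import Fraction

x = Fraction(1, 6)
print(x.numerator, x.denominator)  # 1 6 -- stored as two integers
print(x * 3)                       # 1/2, still exact
print(0.1 + 0.2)                   # 0.30000000000000004 with plain doubles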
You see, some languages are good for rapid prototyping, meaning scenarios where you want a proof of concept: MATLAB is useful because you don't have to compile anything and can quickly visualize results, and Python is worth noting here too. But once you need to deploy the code on actual hardware, or want to sell a finished product with a user interface and everything, you'd probably go with something like C/C++ or Python, but not MATLAB.

Tips on writing optimised project in Python with C/C++ extensions? [closed]

I have a project written in IDL (Interactive Data Language) that is used to produce a near-real-time model of the ionosphere by assimilating a heap of different data inputs. IDL is not a great language for this, but the project is written in it mainly because of legacy code. It is written in OO style despite the relatively limited object environment in IDL.
The scope of the next generation of this project is much larger and will require much more computing grunt. IDL has limited support for multi-threading and no support for parallel execution on distributed-memory systems. The original plan was to write the next-generation code in C++, using MPI to parallelize; however, I have recently started learning Python and am very impressed with the ease of use and the ability to rapidly develop and maintain code. I am now considering writing the high-level parts of this project in Python and using C extensions when/if required to optimise the core number-crunching parts.
Since I'm new to Python, it won't be immediately obvious to me where Python is likely to be slow compared with a C version (and I'll probably also do things sub-optimally in Python until I learn its idiosyncrasies). So I'm thinking of basically planning out the whole project as if it were to be done entirely in Python, writing the code, profiling and optimising repeatedly until I can't make any more improvements, and then looking to replace the slowest parts with C extensions.
Is this a good approach? Does anyone have any tips for developing this kind of project? I'll be looking to utilise as many existing well optimised libraries as possible (such as scaLAPACK) which may also reduce the need to roll my own C based extensions for the number crunching.
Python is especially slow when you do a lot of looping, above all nested loops:
for i in x:
    for j in y:
        ...
When it comes to computationally intensive problems, 99% of them can be solved by doing vectorized calculations with numpy instead of looping, e.g.:
import numpy as np

x = np.arange(1000)        # numbers from 0 to 999
y = np.arange(1000, 2000)  # numbers from 1000 to 1999

# slow:
for i in range(len(x)):
    y[i] += x[i]

# fast:
y += x
For many scientific problems there are binary libraries, written in Fortran or C(++), that are available from Python. This makes life really easy.
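A typical example (a small sketch): solving a linear system with numpy/scipy hands the work to compiled LAPACK/BLAS routines rather than Python loops.

# The heavy lifting happens in compiled LAPACK code, not in Python
import numpy as np
from scipy import linalg

A = np.random.rand(500, 500)
b = np.random.rand(500)
x = linalg.solve(A, b)            # wraps a LAPACK solver
print(np.allclose(A.dot(x), b))   # True, up to floating-point tolerance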
If you reach a point where that is not possible, I'd use Cython to implement the core parts in C without actually writing C.

Suitable Language for RSA implementation [closed]

I want to implement an RSA cryptosystem algorithm for a university project and I'm trying to decide which programming language to use. I am very familiar with C, so it would be a convenient choice. However, the algorithm will have to deal with very large numbers (it will include a primality subroutine), and I have heard that using Python will result in a better implementation. Is that right?
Thank you in advance.
Of course you could use any language to implement RSA, even Assembler. The question is probably not about a "better" implementation, but maybe about what's easier to grasp when looking at the resulting code a couple of weeks later.
Let's recap what you would need for an RSA implementation:
large integer support
modular exponentiation
modular inverse
primality testing (for key generation)
The more support the language of your choice has for these, the cleaner and easier to understand will be the result. Lower-level languages like C(++) won't have native support for large integers, but a library like gmp will provide you with everything that's necessary. Java has the BigInteger class for that.
But still, the result will probably not be as easy to understand as an implementation in a language with built-in big-integer support, such as Python, Ruby or Haskell, where the resulting code will look pretty much like a textbook description of the algorithms used. On the downside, these tend to be slower than, for example, the highly optimized gmp code.
But since performance is probably not what you are after at this point, I would recommend using a higher-level language: you don't have to deal with low-level maintenance and can concentrate on the task at hand, so pick the one you like best or have experience in. If you want to draw on your familiarity with C, no problem: use an arbitrary-precision library such as gmp and you're good to go, too.
For the missing parts that are probably not built into the language by default, you can use the following as references:
modular exponentiation is built into Python (pow with 3 arguments); for other languages you may try the "square-and-multiply" method
the modular inverse can be obtained from the extended Euclidean algorithm; gmp and its Python binding gmpy2 have these algorithms built in
for primality testing I'd recommend Miller-Rabin, which is also not too hard to implement, but you can also find ready-made implementations, for example in PyCrypto (a small sketch of the first two pieces follows below)
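To make that concrete, here is a minimal toy sketch of textbook RSA (the primes are far too small for real use, and the Miller-Rabin step is omitted):

# Toy textbook RSA -- illustrative only, NOT for real use
def egcd(a, b):
    # extended Euclidean algorithm: returns (g, x, y) with a*x + b*y == g
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    # modular inverse of a modulo m; exists only when gcd(a, m) == 1
    g, x, _ = egcd(a, m)
    if g != 1:
        raise ValueError("no modular inverse")
    return x % m

p, q = 61, 53                      # toy primes; real keys use huge random primes
n, phi = p * q, (p - 1) * (q - 1)
e = 17                             # public exponent, coprime to phi
d = modinv(e, phi)                 # private exponent

message = 42
cipher = pow(message, e, n)        # built-in modular exponentiation
assert pow(cipher, d, n) == message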
Although you probably know this already, for the sake of completeness let me warn you that such a "textbook RSA" implementation is not secure to use in production; a lot of things have not been addressed. There's RSA blinding to prevent side-channel attacks; for RSA to be secure as an encryption scheme you will also need to implement some form of padding; it's crucial to use a cryptographically secure random generator for your keys; and so on.
I don't know whether Python will result in a "better" implementation, since "better" is rather subjective here. You can find numerical libraries for both that allow you to deal with large numbers easily. Python has the advantage (imo) of being generally more human-readable, with libraries such as numpy that are easy to read and use, which often leads to easier debugging.
Using a scripting language, or any language more high-level than C (e.g. C# or Java), will most likely be easier, since you don't have to deal with memory management and other tasks not really related to your project.

Optimizing Python Code [closed]

I've been working on one of the coding challenges on InterviewStreet.com and I've run into a bit of an efficiency problem. Can anyone suggest where I might change the code to make it faster and more efficient?
Here's the code
Here's the problem statement if you're interested
If your question is about optimising python code generally (which I think it should be ;) then there are all sorts of interesting things you can do, but first:
You probably shouldn't be obsessively optimising python code! If you're using the fastest algorithm for the problem you're trying to solve and python doesn't do it fast enough you should probably be using a different language.
That said, there are several approaches you can take (because sometimes, you really do want to make python code faster):
Profile (do this first!)
There are lots of ways of profiling python code, but there are two that I'll mention: the cProfile (or profile) module, and PyCallGraph.
cProfile
This is what you should actually use, though interpreting the results can be a bit daunting.
It works by recording when each function is entered or exited, and what the calling function was (and tracking exceptions).
You can run a function in cProfile like this:
import cProfile
cProfile.run('myFunction()', 'myFunction.profile')
Then to view the results:
import pstats
stats = pstats.Stats('myFunction.profile')
stats.strip_dirs().sort_stats('time').print_stats()
This will show you in which functions most of the time is spent.
PyCallGraph
PyCallGraph provides the prettiest and maybe the easiest way of profiling python programs, and it's a good introduction to understanding where the time in your program is spent. Be aware, however, that it adds significant execution overhead.
To run pycallgraph:
pycallgraph graphviz ./myprogram.py
Simple! You get a png graph image as output (perhaps after a while...)
Use Libraries
If you're trying to do something in python that a module already exists for (maybe even in the standard library), then use that module instead!
Most of the standard library modules are written in C, and they will execute hundreds of times faster than equivalent python implementations of, say, bisection search.
Make the Interpreter do as Much of Your Work as You Can
The interpreter will do some things for you, like looping. Really? Yes! You can use the built-in map, reduce, and filter functions to significantly speed up tight loops:
consider:
for x in xrange(0, 100):
    doSomethingWithX(x)
vs:
map(doSomethingWithX, xrange(0,100))
Well, obviously this could be faster because the interpreter only has to deal with a single statement rather than two, but that's a bit vague; in fact, it is faster for two reasons:
all flow control (have we finished looping yet...) is done in the interpreter
the doSomethingWithX function name is only resolved once
In the for loop, each time around the loop python has to check exactly where the doSomethingWithX function is! Even with caching this is a bit of an overhead (a sketch of working around such lookups follows below).
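A common micro-optimisation along these lines is to bind a frequently used global to a local name, for example via a default argument, which is evaluated once when the def statement runs:

import math

def slow(values):
    # "math" (and then ".sqrt") is looked up on every iteration
    return [math.sqrt(v) for v in values]

def fast(values, _sqrt=math.sqrt):
    # _sqrt was bound once, at function-definition time
    return [_sqrt(v) for v in values]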
Remember that Python is an Interpreted Language
(Note that this section really is about tiny tiny optimisations that you shouldn't let affect your normal, readable coding style!)
If you come from a background of programming in a compiled language, like C or Fortran, then some things about the performance of different python statements might be surprising:
try:ing is cheap, ifing is expensive
If you have code like this:
if somethingcrazy_happened:
    uhOhBetterDoSomething()
else:
    doWhatWeNormallyDo()
And doWhatWeNormallyDo() would throw an exception if something crazy had happened, then it would be faster to arrange your code like this:
try:
    doWhatWeNormallyDo()
except SomethingCrazy:
    uhOhBetterDoSomething()
Why? Well, the interpreter can dive straight in and start doing what you normally do; in the first case, the interpreter has to do a symbol lookup each time the if statement is executed, because the name could refer to something different since the last time the statement was executed! (And a name lookup, especially if somethingcrazy_happened is global, can be nontrivial.)
You mean Who??
Because of the cost of name lookups, it can also be better to cache global values within functions, and to bake simple boolean tests into functions, like this:
Unoptimised function:
def foo():
    if condition_that_rarely_changes:
        doSomething()
    else:
        doSomethingElse()
Optimised approach: instead of using a variable, exploit the fact that the interpreter does a name lookup on the function anyway!
When the condition becomes true:
foo = doSomething # now foo() calls doSomething()
When the condition becomes false:
foo = doSomethingElse # now foo() calls doSomethingElse()
PyPy
PyPy is a python implementation written in python. Surely that means it will run code infinitely slower? Well, no: PyPy uses a Just-In-Time (JIT) compiler to run python programs.
If you don't use any external libraries (or the ones you do use are compatible with PyPy), then this is an extremely easy way to (almost certainly) speed up repetitive tasks in your program.
Basically the JIT can generate code that will do what the python interpreter would, but much faster, since it is generated for a single case, rather than having to deal with every possible legal python expression.
Where to look Next
Of course, the first place you should have looked was to improve your algorithms and data structures, and to consider things like caching, or even whether you need to be doing so much in the first place, but anyway:
This page of the python.org wiki provides lots of information about how to speed up python code, though some of it is a bit out of date.
Here's the BDFL himself on the subject of optimising loops.
There are quite a few things, even from my own limited experience that I've missed out, but this answer was long enough already!
This is all based on my own recent experiences with some python code that just wasn't fast enough, and I'd like to stress again that I don't really think any of what I've suggested is actually a good idea; sometimes, though, you have to...
First off, profile your code so you know where the problems lie. There are many examples of how to do this, here's one: https://codereview.stackexchange.com/questions/3393/im-trying-to-understand-how-to-make-my-application-more-efficient
You do a lot of indexed access as in:
for pair in range(i-1, j):
    if coordinates[pair][0] >= 0 and coordinates[pair][1] >= 0:
Which could be written more plainly as:
for coord in coordinates[i-1:j]:
    if coord[0] >= 0 and coord[1] >= 0:
List comprehensions are cool and "pythonic", but this code would probably run faster if you didn't create 4 lists:
N = int(raw_input())
coordinates = []
coordinates = [raw_input() for i in xrange(N)]
coordinates = [pair.split(" ") for pair in coordinates]
coordinates = [[int(pair[0]), int(pair[1])] for pair in coordinates]
I would instead roll all those together into one pass over the input, or, if you're really dead set on list comprehensions, encapsulate the multiple transformations into a function which operates on each raw_input() line; a sketch follows.
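One way to do that consolidation (a sketch in the same Python 2 style as the original code):

# one pass over the input, no intermediate throwaway lists
N = int(raw_input())
coordinates = [[int(v) for v in raw_input().split(" ")]
               for _ in xrange(N)]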
This answer shows how I locate code to optimize.
Suppose there is some line of code you could replace, and it is costing, say, 40% of the time.
Then it resides on the call stack 40% of the time.
If you take 10 samples of the call stack, it will appear on 4 of them, give or take.
It really doesn't matter how many samples show it.
If it appears on two or more, and if you can replace it, you will save whatever time it costs.
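For the arithmetic behind that: with a line on the stack 40% of the time, the chance that at least 2 of 10 independent samples land on it is about 95%:

# probability of seeing the line in >= 2 of 10 samples when cost = 40%
from math import comb

p = 0.4
prob = 1 - sum(comb(10, k) * p**k * (1 - p)**(10 - k) for k in (0, 1))
print(round(prob, 3))  # 0.954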
Most of the InterviewStreet problems seem to be tested in a way that verifies you have found an algorithm with the right big-O complexity, rather than that you have coded the solution in the most optimal way possible. In other words, if you are failing some of the test cases by running out of time, the problem is likely that you need a solution with lower algorithmic complexity rather than micro-optimizations to the algorithm you have. This is why they generally state that N can be quite large.

Best language for Molecular Dynamics Simulator, to be run in production. (Python+Numpy?) [closed]

I need to build a heavy-duty molecular dynamics simulator. I am wondering if Python+numpy is a good choice. This will be used in production, so I want to start with a good language. I am also wondering whether I should instead start with a functional language, e.g. Scala. Do we have enough library support for scientific computation in Scala? Or is there any other language/paradigm combination you think is good, and why? If you have actually built something in the past and are speaking from experience, please mention it, as it will help me collect data points.
Thanks much!
The high-performing MD implementations tend to be decidedly imperative (as opposed to functional), with big arrays of data trumping object-oriented design. I've worked with LAMMPS, and while it has its warts, it does get the job done. A perhaps more appealing option is HOOMD, which has been optimized from the beginning for Nvidia GPUs with CUDA. HOOMD doesn't have all the features of LAMMPS, but the interface seems a bit nicer (it's scriptable from Python) and it's very high performance.
I've actually implemented my own MD code a couple times (Java and Scala) using a high level object oriented design, and have found disappointing performance compared to the popular MD implementations that are heavily tuned and use C++/CUDA. These days, it seems few scientists write their own MD implementations, but it is useful to be able to modify existing ones.
I believe that most highly performant MD codes are written in native languages like Fortran, C or C++. Modern GPU programming techniques are also finding favour more recently.
A language like Python allows for much more rapid development than native code; the flip side is that performance is typically worse than that of compiled native code.
A question for you. Why are you writing your own MD code? There are many many libraries out there. Can't you find one to suit your needs?
Why would you do this? There are many good, freely available molecular dynamics packages you could use: LAMMPS, Gromacs, NAMD, and HALMD all come immediately to mind (along with less freely available ones like CHARMM, AMBER, etc.). Modifying any of these to suit your purpose is going to be vastly easier than writing your own, and any of these packages, with thousands of users and dozens of contributors, is going to be better than whatever you'd write yourself.
Python+numpy is going to be fine for prototyping, but it's going to be vastly slower (yes, even with numpy linked against fast libraries) than C/C++/Fortran, which is what all the others use. Unless you're using a GPU, in which case all the hard work is done in kernels written in C/C++ anyway.
Another alternative if you want to use Python is to take a look at OpenMM:
https://simtk.org/home/openmm
It's a molecular dynamics API that has many of the basic elements you need (integrators, thermostats, barostats, etc.) and supports running on the CPU via OpenCL and on the GPU via CUDA and OpenCL. It has a Python wrapper that I've used before, which basically mimics the underlying C API calls. It's been incorporated into Gromacs and MDLab, so you have some examples of how to integrate it if you're really dead set on building something from (semi-)scratch.
However, as others have said, I highly recommend taking a look at NAMD, Gromacs, HOOMD, LAMMPS, DL_POLY, etc. to see if one of them fits your needs before you embark on reinventing the wheel.
