I am considering programming the network-related features of my application in Python instead of using the C/C++ API. The intended use of networking is to pass text messages between two instances of my application, similar to a game passing player positions as often as possible over the network.
Although the Python socket module seems sufficient and mature, I want to check whether it has limitations that could become a problem at a later stage of development.
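For the kind of message passing described above, the standard socket module is enough on its own. Here is a minimal sketch (the port number and message format are arbitrary placeholders) showing one instance sending a short text message over UDP and another receiving it:

    import socket

    # Receiving instance: bind a UDP socket on a placeholder port.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 9999))

    # Sending instance: fire off a short text message (e.g. an encoded player position).
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"player:12.5,7.0", ("127.0.0.1", 9999))

    # Back on the receiver: pick the message up.
    data, addr = receiver.recvfrom(1024)
    print(data.decode(), "received from", addr)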
What do you think of the Python socket module?
Is it reliable and fast enough for production-quality software?
Are there any known limitations which could become a problem if my app needs more complex networking beyond regular client-server messaging?
Thanks in advance,
Paul
Check out Twisted, a Python networking engine. It has built-in support for TCP, UDP, SSL/TLS, multicast, Unix sockets, and a large number of protocols (including HTTP, NNTP, IMAP, SSH, IRC, FTP, and others).
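To give a feel for the framework, here is a minimal sketch of a Twisted TCP echo server (the port number is an arbitrary placeholder):

    from twisted.internet import reactor, protocol

    class Echo(protocol.Protocol):
        def dataReceived(self, data):
            # Write back whatever the peer sent us.
            self.transport.write(data)

    class EchoFactory(protocol.Factory):
        def buildProtocol(self, addr):
            return Echo()

    reactor.listenTCP(8000, EchoFactory())
    reactor.run()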
Python is a mature language that can do almost anything that you can do in C/C++ (even direct memory access if you really want to hurt yourself).
You'll find that you can write beautiful code in it in a very short time, that this code is readable from the start, and that it will stay readable (you will still know what it does even after returning to it a year later).
The drawback of Python is that your code will be somewhat slow. "Somewhat" as in "might be too slow for certain cases". So the usual approach is to write as much as possible in Python because it will make your app maintainable. Eventually, you might run into speed issues. That would be the time to consider rewriting a part of your app in C.
The main advantages of this approach are:
You already have a running application. Translating the code from Python to C is much simpler than writing it from scratch.
You already have a running application. After translating a small part from Python to C, you only have to test that small part, and you can use the rest of the app (which didn't change) to do so.
You don't pay a price upfront. If Python is fast enough for you, you'll never have to do the optional optimization.
Python is much, much more powerful than C: a single line of Python can do the work of tens or even hundreds of lines of C.
To answer #1: I know that, among other things, EVE Online (the MMO) uses a variant of Python for its server code.
The Python that EVE Online uses is Stackless Python (http://www.stackless.com/), and as far as I understand they use it for the way it implements lightweight threading through tasklets and the like. But since Python itself can handle something like an MMO with 40k people online, I think it can do almost anything.
This is a bad answer and not really an answer to your question, rather an addition to the previous answer.
Alan.
I have been playing around with the Twisted framework for about a week now (more out of curiosity than because I have to use it), and it's been a lot of fun doing event-driven asynchronous network programming.
However, there is something that I fail to understand. The Twisted documentation starts off with:
Twisted is a framework designed to be very flexible and let you write powerful servers.
My question is: why do we need such an event-driven library to write powerful servers when there are already very efficient implementations of various servers out there?
Surely there must have been more than a couple of concrete implementations that the Twisted developers had in mind while writing this event-driven I/O library. What were those? Why exactly was Twisted made?
In a comment on another answer, you say "Every library is supposed to have ...". "Supposed" by whom? Having use cases is certainly a nice way to nail down your requirements, but it's not the only way. It also doesn't make sense to talk about the use cases for all of Twisted at once. There is no use case that justifies every single API in Twisted. There are hundreds or thousands of different use cases, each of which justifies a lesser or greater subdivision of Twisted. These came and went over the years of Twisted's development, and no attempt has been made to keep a list of them. I can say that I worked on part of Twisted Names so that I would have a topic for a paper I was presenting at the time. I implemented the vt102 parser in Twisted Conch because I am obsessed with terminals and wanted a fun project involving them. And I implemented the IMAP4 support in Twisted Mail because I worked at a company developing a mail server which required tighter control over the mail store than any other IMAP4 server at the time offered.
So, as you can see, different parts of Twisted were written for widely differing reasons (and I've only given examples of my own reasons, not the reasons of any other developers).
The initial reason for a program being written often doesn't matter much in the long run though. Now the code is written: Twisted Names now runs the DNS for many domain names on the internet, the vt102 parser helped me get a job, and the company that drove the IMAP4 development is out of business. What really matters is what useful things you can do with the code now. As MattH points out, the resulting plethora of functionality has resulted in a library that (perhaps uniquely) addresses a wide array of interesting problems.
Why do we need such an event-driven library to write powerful servers when there are already very efficient implementations of various servers out there?
So, paraphrasing: you can't imagine why anyone would need a toolkit when die-cast products already exist?
I'm guessing you've never needed to knock up a protocol gateway, e.g.
- write a daemon to MD5 local files on demand over a Unix socket
- interrogate a piece of software using UDP and expose statistics over HTTP.
I wrote a little proof-of-concept for the second example for a question here on SO in a handful of minutes. I couldn't have done that without Twisted.
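That proof-of-concept is not reproduced here, but a gateway along those lines could be sketched in Twisted roughly like this (the port numbers and the statistics format are assumptions for illustration):

    from twisted.internet import reactor
    from twisted.internet.protocol import DatagramProtocol
    from twisted.web.resource import Resource
    from twisted.web.server import Site

    class StatsCollector(DatagramProtocol):
        """Collects statistics datagrams sent by some other piece of software."""
        def __init__(self):
            self.latest = b"no data yet"

        def datagramReceived(self, datagram, addr):
            self.latest = datagram

    class StatsPage(Resource):
        """Exposes the most recent statistics over HTTP."""
        isLeaf = True

        def __init__(self, collector):
            Resource.__init__(self)
            self.collector = collector

        def render_GET(self, request):
            request.setHeader(b"content-type", b"text/plain")
            return self.collector.latest

    collector = StatsCollector()
    reactor.listenUDP(9999, collector)                    # placeholder UDP port
    reactor.listenTCP(8080, Site(StatsPage(collector)))   # placeholder HTTP port
    reactor.run()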
Have you looked at: ProjectsUsingTwisted?
More on 'why' (disclaimer: I'm not a developer of Twisted proper): it's necessary to consider how long Twisted has been around. When Twisted was written, there was no sufficiently powerful non-blocking, event-driven network library built around the reactor pattern (almost everyone was using threads back then). Twisted's initial use case was a large multiplayer game, although the specifics of that game seem to be somewhat lost in time.
Since those origins, as MattH's link suggests, a very large number of network servers written in Python have been based on Twisted.
This PyCon talk by the creator of Twisted should give you answers.
It changed my opinion of Twisted. Before, I viewed it as a massive piece of software full of interfaces and weird names, two things that many developers dislike but that are actually just superficial. Now that I've seen the history behind it and the amazing number of use cases, I respect it a lot. Life is short, you need Twisted :)
I recall when I first read Pragmatic Programmer that they suggested using scripting languages to make you a more productive programmer.
I am in a quandary putting this into practice.
I want to know specific ways that using Python or Ruby can make me a more productive .NET developer.
One specific way per answer, and even better if you can say whether I could use Python, Ruby, or both for it.
See standard format below.
IronPython / IronRuby
IronPython in Action will do a better job of explaining this (and exactly how best to use IronPython) than can possibly be accommodated in an SO answer. I'm biased -- I was a tech reviewer and am a friend of one of the authors -- but I objectively think it's a great book. (No idea if IronRuby is blessed with a similarly wonderful book yet.)
Since you want "one specific way per answer" (which is incompatible with SO, which STRONGLY discourages a poster from posting 25 different answers if they have 25 "specific ways" to describe...!-): prototyping in order to explore some specific assembly, or collection thereof, that you're unfamiliar with (to check whether you've understood its docs correctly and how to perform certain tasks) is an order of magnitude more productive in IronPython than in C#, because you can explore interactively, and compilation is instantaneous and as-needed. (I have not tried IronRuby, but I'll assume it can work in a roughly equivalent way and at a similar speed.)
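As an illustration of that interactive exploration (the assembly and types here are just stand-ins, not anything specific to the book), an IronPython session might look like this:

    # Hypothetical IronPython session exploring a .NET assembly interactively.
    import clr
    clr.AddReference("System.Xml")
    from System.Xml import XmlDocument

    doc = XmlDocument()
    doc.LoadXml("<root><item>42</item></root>")
    print(doc.SelectSingleNode("//item").InnerText)   # -> 42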
Less Code
I think productivity is a direct result of how proficient you are in a specific language. That said, the terseness of a language like Python can save some time in getting certain things done.
If I compare how much less code I have to write for simple administration scripts (e.g. cleaning up old files) compared to .NET code, there is a certain productivity gain. (Plus it is more fun, which also helps get the job done.)
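As a rough illustration of the kind of clean-up script meant here (the directory and age threshold are made-up values):

    import os
    import time

    LOG_DIR = "/var/log/myapp"                 # placeholder directory
    CUTOFF = time.time() - 30 * 24 * 3600      # anything older than 30 days

    for name in os.listdir(LOG_DIR):
        path = os.path.join(LOG_DIR, name)
        if os.path.isfile(path) and os.path.getmtime(path) < CUTOFF:
            os.remove(path)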
Advanced Text Processing
These are traditional strengths of awk and Perl. You can just glue together a bunch of regular expressions to create a simple data-mining tool on the go.
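A sketch of that regex-gluing style, assuming a made-up log format:

    import re

    # Pull timestamps and error codes out of a (hypothetical) log file.
    line_re = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}).*error=(\d+)")

    with open("app.log") as fh:
        for line in fh:
            match = line_re.search(line)
            if match:
                print(match.group(1), match.group(2))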
Learning a new language gives you knowledge that you can bring back to any programming language. Here are some things you'd learn.
Add functionality to your objects on the fly.
Mix in modules.
Pass a chunk of code around.
Figure out how to do more with less code: ruby -e "puts 'hello world'"
C# can do some of these things, but a fresh perspective might bring you one step closer to automating your breakfast.
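For the Python side, a small sketch of the first and third points above (the Sensor class is just an illustration):

    class Sensor:
        pass

    def describe(self):
        return "reading: %s" % self.value

    # Add functionality to your objects on the fly.
    Sensor.describe = describe
    s = Sensor()
    s.value = 42
    print(s.describe())

    # Pass a chunk of code around.
    def apply_twice(func, x):
        return func(func(x))

    print(apply_twice(lambda n: n + 1, 40))   # -> 42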
Embedding a script engine
Use IronPython as a scripting engine inside your .NET application, for example to let end users of your application change customizable parts with a full-fledged language such as Python.
A possible example might be exposing custom logic to end users of a workflow engine.
Quick Prototyping - Both
In the simplest cases, firing up a Python interpreter and writing a line or two is way faster than creating a new project in Visual Studio.
And you can use Ruby too. Or Lua, or even Perl, whatever. The point is implicit typing and the light-weight feel.
Cross platform
Compared to .NET, a simple Python script is more easily ported to other platforms such as Linux. Although it is possible to achieve the same with the likes of Mono, it is simpler to run a Python script file on different platforms.
Processing received Email
Python has built-in support for POP3 and IMAP, where the standard .NET framework doesn't. Useful for automating email-triggered tasks.
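A minimal sketch using the standard-library poplib module (the server name and credentials are placeholders):

    import poplib
    from email.parser import BytesParser

    conn = poplib.POP3_SSL("pop.example.com")
    conn.user("me@example.com")
    conn.pass_("secret")

    # Print the subject of every message in the mailbox.
    message_count = len(conn.list()[1])
    for i in range(1, message_count + 1):
        raw = b"\n".join(conn.retr(i)[1])
        msg = BytesParser().parsebytes(raw)
        print(msg["Subject"])

    conn.quit()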
I want to start writing an HTTP proxy that will modify responses according to some rules/filters I will configure. However, before I start coding it, I want to make sure I'm making the right choice in going with Python. Later, this tool would have to be able to process a lot of requests, so I would like to know that I can count on it to perform when push comes to shove.
As long as the bulk of the processing uses Python's built-in modules, it should be fine as far as performance goes. The biggest strength of Python is its clear syntax and ease of testing/maintainability. If you find that one section of your code is slowing down the process, you can rewrite that section as a C extension module while keeping the bulk of your control code in Python.
However, if you're looking to write the most optimized Python code you can, you may want to check out this SO post.
Yes, I think you will find Python to be perfectly adequate for your needs. There's a huge number of web frameworks, WSGI libraries, etc. to choose from, or learn from when building your own.
There's an interesting post on the Python History blog about how Python was supporting high performance websites in 1996.
This will depend on the library you use more than the language itself. The Twisted framework is known to scale well.
Here's a proxy server example in Python/Twisted to get you started.
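That example isn't reproduced here, but a bare-bones forwarding proxy with twisted.web looks roughly like this (the port is a placeholder; the response-rewriting rules you describe would still need to be layered on top):

    from twisted.internet import reactor
    from twisted.web import http, proxy

    class ProxyFactory(http.HTTPFactory):
        protocol = proxy.Proxy

    reactor.listenTCP(8080, ProxyFactory())
    reactor.run()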
Bottom line: choose your third-party tools wisely and I'm sure you'll be fine.
Python performs pretty well for most tasks, but you'll need to change the way you program if you're used to other languages. See Python is not Java for more info.
If plain old CPython doesn't give the performance you need, you have other options as well.
As has been mentioned, you can extend it in C (using a tool like SWIG or Pyrex). I also hear good things about PyPy, but bear in mind that it is implemented in RPython, a restricted subset of Python. Lastly, a lot of people use Psyco to speed up performance.
I have a program in C that communicates via UDP with another program (in Java) and then does process manipulation (start/stop) based on the UDP packet exchange.
This C program is now legacy code, and I want to convert it to Python. Do you think Python would be a good choice for the tasks mentioned?
Yes, I do think that Python would be a good replacement. I understand that the Twisted Python framework is quite popular.
I'd say that if:
Your C code contains no platform specific requirements
You are sure speed is not going to be an issue going from C to Python
You have a desire to not compile anymore
You would like to try utilising exception handling
You want to dabble in OO
You might choose to run on many platforms without porting
You are curious about dynamic typing
You want memory handled for you
You know, or want to learn, Python
Then sure, why not.
There doesn't seem to be any technical reason you shouldn't use python here, so it's a preference in this case.
Remember as well that you can leave parts of your program in C, turn them into Python modules, and build Python code around them; you don't need to rewrite everything up front.
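One low-friction way to do that is ctypes; here is a sketch under the assumption that you compile the existing C into a shared library (the library and function names are made up):

    import ctypes

    # e.g. built from the legacy sources with: gcc -shared -fPIC -o legacy.so legacy.c
    lib = ctypes.CDLL("./legacy.so")
    lib.process_packet.argtypes = [ctypes.c_char_p, ctypes.c_int]
    lib.process_packet.restype = ctypes.c_int

    status = lib.process_packet(b"START proc-42", 13)
    print("C code returned", status)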
Assuming that you have control over the environment in which this application will run, and that the performance difference between an interpreted language (Python) and a compiled one (C) can be ignored, I believe Python is a great choice for this.
If I was faced with a similar situation I'd ask myself a couple of questions:
Is there anything more important I could be working on?
Does Python bring anything to the table that is currently handled poorly by the current application?
Will this allow me to add functionality that was previously too difficult to implement?
Is this going to disrupt service in any way?
If I can't answer those satisfactorily, then I'd put off the rewrite.
Yes, I think Python is a good choice, if all your platforms support it. Since this is a network program, I'm assuming the network is your runtime bottleneck? That's likely to still be the case in Python. If you really do need to speed it up, you can include your long-since-debugged, speedy C as Python modules.
If this is an embedded program, then it might be a problem to port it since Python programs typically rely on the Python runtime and library, and those are fairly large. Especially when compared to a C program doing a well-defined task. Of course, it's likely you've already considered that aspect, but I wanted to mention it in the context of the question anyway, since I feel it's an important aspect when doing this type of comparison.
My company is using Python for a relatively simple embedded project. Is anyone else out there using Python on embedded platforms? Overall it's working well for us, quick to develop apps, quick to debug. I like the overall "conciseness" of the language.
The only real problem I have in day-to-day work is that the lack of static checking versus a regular compiler means problems only show up at run time; e.g. a simple accidental concatenation of a string and an int in a print statement can bring the whole application down.
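For readers unfamiliar with the failure mode being described, a tiny illustration:

    count = 5
    try:
        print("items: " + count)      # TypeError, but only discovered at run time
    except TypeError as exc:
        print("caught:", exc)
    print("items: %d" % count)        # fine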
We use Python on quite a lot of embedded boards with ARM processors and 16 MB of RAM (running Linux).
It works really well, and it is really easy to produce custom code quickly; that is one of Python's strong points.
As for reliability of the code: we try to have 100% test coverage. Writing tests with Python is very quick, and it gives you a wonderful feeling of confidence. We use Twisted trial to run the tests and report on coverage, but there are many other tools available.
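For illustration, a test case runnable with trial might look like this (the function under test is hypothetical):

    from twisted.trial import unittest

    def parse_reading(raw):
        """Hypothetical helper from the code under test."""
        return int(raw.strip())

    class ParseReadingTests(unittest.TestCase):
        def test_strips_whitespace(self):
            self.assertEqual(parse_reading(" 42\n"), 42)

        def test_rejects_garbage(self):
            self.assertRaises(ValueError, parse_reading, "not a number")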
In my experience, Python plus tests is more reliable and much quicker to write than the alternatives.
The only downsides for embedded work are that sometimes Python can be slow and sometimes it uses a lot of memory (relatively speaking). This hasn't caused us a show-stopping problem yet, and Python is quite easy to profile for both speed and memory if it becomes a problem.
pychecker is also a very useful tool which will catch quite a lot of common errors.
BTW, see this blog post: "Type inference for Python" for an interesting discussion of type inference and static typing, including links to some Guido van Rossum blog posts describing adding optional static typing to Python.
I agree with Bruce Eckel that one is better off practicing "strong testing" than relying on strong typing. I think that applies equally well to embedded development.
Personally, I've worked on some of the software that runs in the device used by BusRadio. It's an example of an embedded project built on Twisted and Python. The device is an embedded XScale processor running a debian-derived distribution, so it might not meet certain definitions of "embedded", but it is pretty dang small: it fits into the dashboard of a school bus.
There were some interesting issues with using Python with large libraries - the interpreter can take quite a while to start up and load all the code for Twisted on a really slow chip, and some things needed special-case optimizations. However, at no point was the dynamic nature of Python a problem. The software in question certainly wasn't perfect, but at least when using Twisted, a simple programming error will not "bring the whole application down". A traceback will get logged, and processing continues.
So, if you're in an embedded environment sufficiently unconstrained that you can use Python in the first place, it's no different than developing "regular" programs (games, desktop applications, web apps). You don't need static typing there, and you don't need it here either.
At my previous employer I had wanted to spend some time playing with building embedded systems in tinypy, which is a "minimalist implementation of Python in 64k of code". (But I never got to it and I no longer have time.)
Telit makes GSM/GPRS modem modules that include an embedded Python interpreter.
I haven't tried them myself, so I don't know how the Python interpreter compares or differs from a PC implementation, such as which included modules, RAM and ROM memory limits, execution speed, etc.
However, as user foresightyj pointed out in a comment, they appear to use Python 1.5.x, which is a truly ancient version, so I would have trouble taking them seriously. Python developers would not enjoy downgrading to such an ancient version and giving up so many modern Python features. I would also be concerned about security issues with such an old version.
I've been working on microwave telecommunication equipment based on an old, slow PowerPC with 16 MB of RAM.
I was able to port the Python 2.6.1 interpreter to VxWorks, in order to have the command-line interpreter available directly from the target shell, and to execute Python scripts uploaded to the target's flash.
We used those scripts to perform automated tests on the target or to execute diagnostic procedures.
Here are some details on the whole procedure: HOW TO: Port Python to VxWorks
"The only real problem I have in day-to-day work is that the lack of static checking versus a regular compiler means problems only show up at run time; e.g. a simple accidental concatenation of a string and an int in a print statement can bring the whole application down."
Unit tests are your only safety against these things.
Indeed, Python is often used as a 'support language' when you need to write some kind of tests; for example, I was involved in a project whose (Python-based) test framework code base was (is?) almost as big as that of the main product.
Python 'agents' work on QNX and VxWorks, and most of the problems we had were with properly porting the threading- and network-related parts of our code.
It might be worth taking a look at the OpenMoko project; a lot of embedded development in Python is done there.
Things to watch out for:
- support for Python/C extension modules might behave quite strangely depending on the platform/OS
- most embedded platforms offer quite outdated versions of Python
- finally, you will find out that there is a difference between 'proper' embedded software, in which every bit counts, and 'modern' embedded software that runs on >412 MHz XScale CPUs with more than 128 MB of RAM, and then Python just doesn't match the hardware that you would like to target :(
We use Python here at the university for embedded applications based on the Gumstix hardware platform. Although it is more capable than traditional embedded systems, we find the mix of small form factor, low(ish) power consumption, and the ease of transferring code between development on desktop machines and the target hardware invaluable.
Python is also a great language to teach the students, and with the Gumstix it's great that they can get code working on a low-power system, rather than facing the headache and heartbreak that come with using dedicated languages such as nesC.
My team wrote embedded software in C++ and Python. We decided to write the basic classes and heavy computational routines in C++, and the logic in Python, with the Boost libraries as glue. Using Boost is never easy, but the results are excellent: fast and easy to modify. Using Python to represent the custom requirements, we are able to satisfy customers' needs in real time, changing the code using injection techniques. Something really exciting! (OK, I'm a geek ;)
We started prototyping in Python, but we quickly realized that it was clearly too slow, so we decided to structure the program in different computational layers in order to reach the speed requirements. C++ was the best solution.
In order to use Python and C++ together, we had to keep strict control over typing.
I worked for a company which used Python on an embedded product based around an Atmel AVR32 and running embedded Linux. The firmware was initially developed on a PC (due to lack of a working hardware prototype), then later moved to the embedded hardware running on the cross-compiled Python interpreter.
The ability to debug and modify source code "live" on the device was a big plus during development, and saved a lot of time. The big disadvantages were speed and memory usage of the Python interpreter.
Following the first release of production firmware, we ported critical sections of code over to C/C++. The porting effort was quite straightforward and resulted in an improvement of several orders of magnitude in speed-critical code (as you would expect).
Incidentally, most of the design and production test code was written in Python, mainly running inside a test harness on a PC.
In my experience, Python has been traditionally used in desktop environments more than in the embedded field. There are two reasons, related to the fact that Python is interpreted:
C/C++ languages have higher performance than Python (and this is important in embedded systems with a slow microcontroller)
C/C++ languages have more deterministic response times (and this is important in real-time embedded systems controlling something).
Of course, as embedded systems become faster and time-to-market becomes shorter, Python will be adopted more in the embedded sector.
I have a Python server (using Twisted) and some helper scripts running under XP Embedded, and it's been working great.
Recent developments
MicroPython is a lean and fast implementation of the Python 3 programming language that is optimised to run on a microcontroller.
The European Space Agency (ESA) is funding further development of MicroPython. It is doing so to assess the suitability of the language for space-based applications, in particular for payloads.
WiPy 1.0 & 2.0, LoPy & SiPy are wireless MicroPython platforms sold by Pycom.
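For a flavour of what MicroPython code looks like on such boards, here is a minimal blink sketch (the pin number is board-specific and chosen here only as an example):

    import time
    import machine

    led = machine.Pin(2, machine.Pin.OUT)   # the on-board LED pin varies by board

    while True:
        led.value(not led.value())          # toggle the LED
        time.sleep(0.5)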
Isn't the EVE Online client a showpiece of real-time, high-performance Python?
I'm using a Gatetel GT-HE910 series module, which embeds the Telit modem and includes 3G, GPS, AD, IO and Python 2.7. This is used for a remote data acquisition application. Python is fairly slow on these modules, but we only need an update every 15 minutes or in an alarm condition, so they work well.
http://www.gatetel.com/#!gt-series/cscb
"The only real problem I have in day-to-day work is that the lack of static checking versus a regular compiler means problems only show up at run time; e.g. a simple accidental concatenation of a string and an int in a print statement can bring the whole application down."
To me this is a huge deal. Problems you could find and fix at compile time now only surface at run time. Not knowing the data type, and having to write additional functions just to check the data type, is a hassle. There is no need to do that in C. And how would you declare 'volatile' in Python?
"The only downsides for embedded work are that sometimes Python can be slow and sometimes it uses a lot of memory (relatively speaking). This hasn't caused us a show-stopping problem yet, and Python is quite easy to profile for both speed and memory if it becomes a problem."
This is also huge. For embedded systems or an RTOS, timing constraints are very important.
Python is not necessarily quicker to code in. It really depends on what language you are comfortable with. Honestly, it takes me a day to write functions and unnecessary object-oriented scaffolding for something I can do in two hours in C.
Testing is inconvenient: I have to write the code, run py_compile, copy the .pyc to the target, and then run the program, and then Python quits complaining that a variable is not defined, or there is a type error, or some other petty thing like that.
My suggestion: a C toolchain is available for any target. C is fast, hardware-oriented, challenging and fun. Stick with C for embedded systems. There is no need to install and configure Python packages just to run your program.