I'm rewriting my server and deciding between using Node.js and Python.
I prefer JavaScript (as I'm extremely well versed in it), but this article is giving me pause. I'm curious whether anyone has run into problems, and also whether there are any platform-related virtues of one over the other.
Specifically, does either of them fail to support, limit, or excel at:
MySQL calls
ImageMagick interaction
calls out to the system for file-system manipulation
calls to the web via wget/cURL
anything else you can think of that normal CGI processes have to deal with.
I don't want to start an argument about the virtues of PHP or .NET; I have made a definitive decision to move to either Python or Node.js. I was totally settled on Node.js until I read the above article, so really I'm just looking for specific problems/virtues that people have had with these two tools.
Thanks in advance.
There are two issues here:
The choice of language. You'll need to decide for yourself if you prefer Python or JavaScript, and which one offers the libraries you want. I can't help you with that part of the decision.
The choice of IO model.
Unlike what the article suggests, a single-threaded non-blocking IO model isn't bad in principle. Personally I like this model a lot, since it removes the complexities of multi-threading while still working on a shared-memory model.
Another advantage of this model is that because you don't need a thread per request, you can have many concurrent open requests.
One disadvantage is that, without language support, you need to explicitly queue continuations instead of writing the code in a simple imperative manner. C# 5 attacks this problem with its async/await feature, and I wouldn't be surprised if node.js offered something similar in the future.
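To illustrate the difference in Python terms (purely a sketch; Python later grew a comparable facility in asyncio, and the step names here are made up): with explicit continuations every step has to schedule the next one, while async/await lets the same flow read imperatively.

```python
import asyncio

# Callback style: each step has to queue the next continuation explicitly.
def step_one(loop):
    loop.call_later(0.1, step_two, loop)   # "when the (fake) IO finishes, run step_two"

def step_two(loop):
    print("done (callback style)")
    loop.stop()

# async/await style: the same flow reads like plain imperative code.
async def whole_flow():
    await asyncio.sleep(0.1)               # yields to the event loop instead of blocking it
    print("done (async/await style)")

loop = asyncio.new_event_loop()
step_one(loop)
loop.run_forever()                         # runs until step_two calls loop.stop()
loop.close()

asyncio.run(whole_flow())
```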
The article mainly talks about the second disadvantage: If you block the main thread, you block the whole server.
One of his examples is simply abuse: He implements a busy wait, instead of subscribing to an event. With correct programming this simply shouldn't happen.
The other example has more of a point: if you have CPU-intensive calculations, you'd better not do them on the main thread. The simple solution to this is spinning off a worker thread that does the calculation without touching memory used by the main thread, and once it's done it calls a callback on the main thread. I'm not sure if node.js offers this, though. But since many server applications aren't CPU bound, this often isn't a problem at all.
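As a sketch of that worker idea in Python (not node.js; the function and numbers are made up), the standard library lets you push the CPU-bound part into a separate process and await the result on the event loop, which plays the role of the callback on the main thread:

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def heavy_calculation(n):
    # CPU-bound work; it runs in a separate process and never blocks the event loop
    return sum(i * i for i in range(n))

async def handle_request(pool):
    loop = asyncio.get_running_loop()
    result = await loop.run_in_executor(pool, heavy_calculation, 10_000_000)
    print("result ready:", result)          # back on the main thread, like a callback

async def main():
    with ProcessPoolExecutor() as pool:
        await handle_request(pool)

if __name__ == "__main__":                   # guard needed for process pools on some platforms
    asyncio.run(main())
```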
In general that article is very low quality, and tells more about the author than about node.js. You shouldn't let it influence your decision.
I've been developing a web server application with Django for the last year. The stack is Django + Apache + MySQL. This stack was perfectly suited to our needs. Recently the need for some real-time capabilities came up, along with the opportunity to change/rewrite a lot of the server application. To my surprise, Django is not the best option here.
I've been reading a lot (push, WebSockets, gunicorn... lots of stuff: http://curella.org/blog/django-push-using-server-sent-events-and-websocket/) but I haven't managed to decide if I want to go with Django or if I should propose a new software stack that is more suitable for the job. Going with Django seems a little bit unnatural. Can any experienced developers point me in the right direction?
So, the basic question is: what are some alternatives to the current software stack I have for building a real time web application?
Thanks
PS: Not a native English speaker. :)
EDIT: The alternatives need to allow secure connections.
EDIT 2: The web applications we develop are games.
Look into http://www.tornadoweb.org/ + http://www.mongodb.org/ + https://github.com/bitly/asyncmongo + http://socket.io/
I think that combination is a good choice for building a real-time application.
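As a rough illustration of the Tornado side only (just a sketch; the URL path and port are arbitrary), a WebSocket handler that pushes messages straight back to the browser looks roughly like this:

```python
import tornado.ioloop
import tornado.web
import tornado.websocket

class EchoSocket(tornado.websocket.WebSocketHandler):
    def on_message(self, message):
        # in a real game you'd broadcast updated game state here instead of echoing
        self.write_message("you said: " + message)

if __name__ == "__main__":
    app = tornado.web.Application([(r"/ws", EchoSocket)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
```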
You need to weigh your decisions against your goals.
You want a product
Then write your code in the language/framework that you are most familiar with; only when you have a product and it has hit limitations should you consider switching tools.
You want to learn something new
Try out the new language or framework, but it may take a lot longer to produce a product, and you may find that the new tool isn't any better than the old one.
Half and Half
Try starting two projects, and building the same thing in each project, just using the other tool. This will take even longer, but you should be able to see which tool you prefer / is best suited to your task very quickly.
Alternatives
I'm not quite sure what you mean by Django being 'unnatural'. Django can do server/client communication: just use some AJAX to talk back to the Django server, then call another Pythonic library to process the request. Python alternatives to Django include the wonderful Flask and web.py, though neither will do client/server communication unless you program them to (like Django). If you're not into Python you might try Ruby on Rails. For the client side you will need to know something about JavaScript, so go and learn up on that (CoffeeScript and jQuery can ease JS pain, but do learn JavaScript first).
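For example, the AJAX-polling idea in Flask terms might look like the sketch below (the endpoint names and in-memory state are made up; a real game server would need persistence and authentication):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
game_state = {"players": {}, "tick": 0}      # hypothetical in-memory state

@app.route("/api/state")
def get_state():
    # the browser polls this with AJAX and redraws from the JSON it gets back
    return jsonify(game_state)

@app.route("/api/move", methods=["POST"])
def post_move():
    move = request.get_json()
    game_state["players"][move["player"]] = move["position"]
    game_state["tick"] += 1
    return jsonify(ok=True)

if __name__ == "__main__":
    app.run(port=5000)
```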
Of course you could bite the bullet and go with node.js as the server base, apparently it's pretty good (I've not tried it yet) - and written in JS.
EDIT:
In light of your comments, take a closer look at
django-websockets,
node.js + push server
websockets
Noting also that this question is a potential duplicate.
For real-time web applications I suggest that you go with WebSockets: they can be secured, and response times are very fast, since once the connection is made there is no per-request handshake overhead anymore. If you are proficient in Python you can build the server with Twisted and the clients in Python and/or JavaScript using Autobahn. P.S. Here is a really great tutorial for Twisted.
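A minimal echo-server sketch with Twisted + Autobahn might look like this (the port is arbitrary; for the secure-connection requirement you would serve wss:// via reactor.listenSSL with a TLS context instead of listenTCP):

```python
from autobahn.twisted.websocket import WebSocketServerProtocol, WebSocketServerFactory
from twisted.internet import reactor

class EchoServerProtocol(WebSocketServerProtocol):
    def onMessage(self, payload, isBinary):
        # echo every frame straight back to the client
        self.sendMessage(payload, isBinary)

if __name__ == "__main__":
    factory = WebSocketServerFactory("ws://127.0.0.1:9000")
    factory.protocol = EchoServerProtocol
    reactor.listenTCP(9000, factory)
    reactor.run()
```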
I'm just finishing a relatively big project in scala and will start another related one soon.
I haven't chosen the language yet, and I would like my decision to be based more on the features of the language and the available libraries than on interoperability concerns, which is the reason for asking this.
My requirements are (top is more important):
interoperability between various programming languages/platforms (probable ones are JVM, Haskell, Python, C/C++)
easy to prototype/refactor
easy to program
performant without much concern for optimization on my part (this may exclude using files)
One of the easiest ways to communicate between programs written in various languages and distributed across various platforms is to use a message passing library.
ZeroMQ is one of my favourites due to its simplicity, speed, and the availability of bindings for a significant number of languages: http://www.zeromq.org/bindings:_start
You could also use ActiveMQ, RabbitMQ, or whatever else you come across that has bindings in several languages.
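As a sketch of how little code a cross-language exchange takes (using pyzmq here; both ends sit in one Python process only for brevity, and either side could be any language with a ZeroMQ binding):

```python
import zmq

context = zmq.Context()

# --- "server" side (could just as well be C++, Haskell, Java, ...) ---
server = context.socket(zmq.REP)
server.bind("tcp://127.0.0.1:5555")

# --- "client" side ---
client = context.socket(zmq.REQ)
client.connect("tcp://127.0.0.1:5555")

client.send_string("ping")
print(server.recv_string())        # -> "ping"
server.send_string("pong")
print(client.recv_string())        # -> "pong"
```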
I do almost all of my communication via Redis; it's amazingly simple to move data between languages accurately and quickly. It's a simple key/value store that allows me to do this in Python:
import redis
r = redis.Redis()
r.set("a", 33)
And then, almost the same code in Java (minus the initialization, because Java is verbose):
r.get("a"); // in java
+1 to message passing, especially if the library will defer delivery when the recipient is unavailable. If you decide to use messaging, you will need to define a messaging protocol. One good choice is Representational State Transfer (ReST) which, despite its name, is a stateless, message-based interaction protocol based on HTTP. It requires extremely careful API definition, which is, in itself, a Very Good Thing.
Hope that this helps.
There are lots of ways, but they split into three main options:
Use some kind of centralised communication node (Message Queue, Key-Value store, maybe database);
Cross-platform distributed object technology (like CORBA);
HTTP using whatever web-services approach you like (most people not brainwashed by the Enterprise Borg prefer RESTful web services of various sorts), directly between components.
I would ignore 2 (it never turns out to be that easy).
As to 1, note that databases should generally not be used as a fake message-passing platform. Only use this if it really is all about storing datasets. Note also that http://redis.io is a message queue AND key-value store.
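To show the message-queue side of Redis mentioned above, here is a quick pub/sub sketch with redis-py (the channel name and payload are made up, and the producer would normally be a separate process, possibly in another language):

```python
import redis

r = redis.Redis()

# consumer side: subscribe to a channel
p = r.pubsub()
p.subscribe("jobs")
p.get_message(timeout=1.0)                  # drain the subscribe confirmation

# producer side: push a message onto the channel
r.publish("jobs", "resize image 42")

# consumer picks it up
for message in p.listen():
    if message["type"] == "message":
        print("got job:", message["data"])  # b'resize image 42'
        break
```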
I've been reading up a lot lately on comparisons between Python and a bunch of the more traditional professional languages - C, C++, Java, etc. - mainly trying to find out if it's as good as those would be for my own purposes. I can't get this thought out of my head that it isn't good for 'real' programming tasks beyond automation and macros.
Anyway, the general idea I got from about two hundred forum threads and blog posts is that for general, non-professional-level programs, scripts, and apps, and as long as it's a single programmer (you) writing it, a given program can be written quicker and more efficiently with Python than it could be with pretty much any other language. But once it's big enough to require multiple programmers, or more complex than a regular person (read: non-professional) would have any business making, it pretty much becomes instantly inferior to a million other languages.
Is this idea more or less accurate?
(I'm learning Python as my first language and want to be able to make any small app that I want, but I plan on learning C too, because I want to get into driver writing eventually. So I've been trying to research each one's strengths and weaknesses as much as I can.)
Anyway, thanks for any input
An open source project I work on for VCS integration (RabbitVCS) is written entirely in Python/PyGTK and includes:
Two file browser extensions
A text editor extension
A backend VCS status cache running asynchronously, using DBUS for the interface
A fairly comprehensive set of dialogs, including VCS log browsers, a repository browser and a merge wizard (maybe that one isn't such a selling point).
There's no standalone app, but we're thinking about it.
Because we're always adding new features, and are currently trying to adapt to new VCSes, Python is ideal for the ability to quickly refactor entire layers of code without breaking our mental flow. I've also found that the syntax itself makes a real difference with complicated merging of version-controlled branches, but that might come down to the ability to read it quickly.
Recently we've begun adding support for a new VCS, requiring:
refactoring current code to separate VCS specific actions and information from common/generic information
refactoring the UI layer to accommodate the new functionality
Most of what we've achieved has been possible because of the availability of C/Python bindings (eg. PySVN, Nautilus-Python, etc). But when it hasn't been available... well, it's not that hard to roll your own (as a developer did for the new VCS). When the bindings lack functionality... it's not that hard to add it.
The real drawbacks so far have been:
Threading mishaps. Lesson learnt: forget about threads; use multiple processes where possible, or your toolkit's threading method (e.g. PyGTK, wxPython and Twisted all have their own ways of dealing with concurrency). A sketch of the process-based approach follows this list.
(C) Extensions. They cause threading mishaps (they almost invariably hold the GIL, preventing other Python threads from running). See above.
Needing to hack on C bindings when certain functionality is unavailable.
Profiling can be tricky when you're not just doing something based on a single function call.
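As promised above, a minimal sketch of the "processes instead of threads" advice, using only the standard library (the worker function and paths are made up; the point is that the heavy work never holds the GIL of the main GUI process):

```python
from multiprocessing import Pool

def scan_working_copy(path):
    # stand-in for a heavy VCS status check; runs in its own process
    return path, len(path)

if __name__ == "__main__":
    paths = ["/repo/a", "/repo/b", "/repo/c"]
    with Pool(processes=3) as pool:
        for path, status in pool.imap_unordered(scan_working_copy, paths):
            print(path, status)
```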
If you want to know about more specific aspects, ask away in the comments :)
Just like the title asks. I've been learning Python for a while now and I'd say I'm pretty decent with it. I'm looking for a medium or large project to keep me busy for quite a while. Your suggestions are greatly appreciated.
Find a local charitable organization with a lousy web presence. Solve their problem. Help other people. Learn more Python. Everyone wins.
You could invent a game and code it with Pygame. Games are always fun to code, and you still learn a lot when you write one.
What are you interested in doing? You could write a whole host of database programs for keeping track of recipes, CDs, contacts, self-tests, etc.
Basically, make code to load/save to a database and enforce some business rules, then expose it via a web service. Then make both a web front end and a desktop graphical front end (using Tk/wxWidgets/Qt; 4.5 will be LGPL, yay) that talk to the web service.
That should give you practice with creating and talking to web services (something more and more companies are doing) along with both of the main ways of creating a GUI.
You could try to replicate an application that impresses you, just for the sake of figuring out how it works behind the scenes.
If I had to do that, I'd probably try to clone the following webapps using Django:
BaseCamp
dPaste
Reddit
Mint.com
Here at stackoverflow there are already people asking for solutions to their problems:
e.g.: If you are interested in GUI programming: thumbnailctrl
Anything that hasn't been done to death... no need for yet another clone of popular app x
What I like to do (with my TI-83) is, instead of doing all my math by hand, program my calculator to do the problem, then do the rest of the problems with the new program. It's fun and you get your homework done, so you could do this in Python as a fun project (or several).
Consider something that does the following:
is multi-threaded, preferably including a need for synchronization
reads/writes data to a remote database (or even a local database)
reads from a web service and includes XML parsing
outputs XML/HTML
There are a number of example projects you could do, but if you accomplish all the above, then it will surely give sufficient exposure.
If I had the time to code something just for the fun and the experience, I would personally start an open source project for something that people need and which does not already exist.
You can search the Web for lists of missing open-source projects, or you can base it on your own experience (for example, I would personally love to have some way to synchronize my iPhone with Thunderbird + Lightning: I hear there's a solution through Google Calendars, but I would like a solution without external servers).
I think the best thing you can do now is spend time learning a new technology, preferably including a new programming language.
I want to have some work done on the network front, pinging numerous computers on a LAN and retrieving data about the response time. Which would be the most useful and productive to work with: Perl or Python?
I agree that it is pretty subjective which programming language you use. Essentially, I would rather get the job done as quickly and efficiently as possible while keeping it supportable, so that depends on your infrastructure...
Can I suggest that you look at Nagios rather than re-inventing the wheel yourself?
While Nagios might have a steeper learning curve in terms of configuration, it will be worth it in the long run, and if you can't find a plugin to suit your requirements, it is easy to write your own. Joel Spolsky has written an interesting article on this.
Well, I work in both Perl and Python, and my day job is supporting network monitoring software. Most of the important points have already been covered, but I'll consolidate/reiterate here:
Don't reinvent the wheel - there are dozens of network monitoring solutions that you can use to perform ping tests and analyze collected data. See for example
Nagios
Zenoss
OpenNMS
PyNMS
If you insist on doing this yourself, it can be done in either Perl or Python: use the one you know best. If you're planning on parsing a lot of text, it will be easier to do "quick and dirty" in Perl than it will be in Python. Both can do it, but Python's re module requires explicit calls and just isn't as terse as Perl's inline regex syntax.
Use libraries - many, many people have done this task before you, so look around for a suitable library like Net::Ping in Perl, or icmplib or this ping.py code in Python.
Use threads or asynchronous pings - otherwise pinging is going to take forever; for example, see this recipe using threads to run pings simultaneously. This is particularly easy to do in Python with either approach, so this is one place where Python will be easier to work with, IMO, than Perl.
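A rough sketch of the threaded approach with only the Python standard library (the host range, timeout, and ping flags are made up; a real script would parse the ping output for exact round-trip times):

```python
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

HOSTS = ["192.168.1.%d" % i for i in range(1, 21)]

def ping(host):
    start = time.time()
    # one echo request, 1-second timeout (Linux-style flags; "-n 1 -w 1000" on Windows)
    rc = subprocess.call(["ping", "-c", "1", "-W", "1", host],
                         stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return host, rc == 0, time.time() - start

with ThreadPoolExecutor(max_workers=20) as pool:
    for host, alive, elapsed in pool.map(ping, HOSTS):
        print("%-15s %-4s %.0f ms" % (host, "up" if alive else "down", elapsed * 1000))
```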
Go with Perl.
You'll have access to a nice ping module, Net::Ping, and storing the results in a database is pretty easy.
Either one should work just fine. If you don't have experience with either, flip a coin. No language is inherently productive; languages allow people to be productive. Different people will benefit differently from different languages.
In general, though, when you know your specific task and need to choose a tool, look for the libraries that would make your life easy. For Perl, check out the Comprehensive Perl Archive Network (CPAN). There are modules for just about every networking thing you might need.
Python probably has very similar tools and libraries; I just don't know what they are.
I know Perl better than Python, so my choice would fall on Perl. That said, I'd argue that for low-level tasks (like pinging computers on a network and things like that) they are roughly equivalent. Python may have better object-oriented support, but for scripting (which happens to be what you need) the power of Perl is quite obvious. The large pool of tested modules (some of them even object oriented) that you find on CPAN can usually do everything you need, and they can even scale well if you use them appropriately.
I don't know Python, so I can't comment on what it offers, and I agree with those who suggest Nagios or other existing systems.
However, if you decide to roll your own system with Perl, Consider using POE. POE is a cooperative multitasking and networking framework.
POE has a steep learning curve, but you will be repaid for your effort very quickly. POE will provide a solid foundation to build on. Much of the client code you will need is already available on CPAN.
Whichever you know better or are more comfortable using. They both can do the job and do it well, so it is your preference.
Right now I've experimented with the approach of creating some simple unit tests for network services using various TAP libraries (mainly bash + netcat + curl, and Perl). The advantage is that you write a single script that you can use for both unit and network testing.
The display is done via TAP::Harness::HTML.
I'd say that if you need something quick and dirty that's up and running by this afternoon, then Perl is probably the better language.
However, for developing a solid application that's easy to maintain and extend, and that you can build on over time, I'd go with Python.
This is of course assuming you know both languages more or less equally well.