While I've written unit tests for most of the code I've worked on, I only recently got my hands on a copy of TDD by Example by Kent Beck. I have always regretted certain design decisions I made since they prevented the application from being 'testable'. I read through the book and while some of it looks alien, I felt that I could manage it and decided to try it out on my current project, which is basically a client/server system where the two pieces communicate via USB, one on the gadget and the other on the host. The application is in Python.
I started off and very soon got entangled in a mess of rewrites and tiny tests which I later figured didn't really test anything. I threw away most of them and now have a working application for which the tests have all coagulated into just two.
Based on my experiences, I have a few questions which I'd like to ask. I gained some information from New to TDD: Are there sample applications with tests to show how to do TDD? but have some specific questions which I'd like answers to/discussion on.
Kent Beck uses a list which he adds to and strikes out from to guide the development process. How do you make such a list? I initially had a few items like "server should start up", "server should abort if channel is not available" etc. but they got mixed and finally now, it's just something like "client should be able to connect to server" (which subsumed server startup etc.).
How do you handle rewrites? I initially selected a half duplex system based on named pipes so that I could develop the application logic on my own machine and then later add the USB communication part. It then moved to being socket based, and then from using raw sockets to using the Python SocketServer module. Each time things changed, I found that I had to rewrite considerable parts of the tests, which was annoying. I'd figured that the tests would be a somewhat invariable guide during my development. They just felt like more code to handle.
I needed a client and a server to communicate through the channel to test either side. I could mock one of the sides to test the other but then the whole channel wouldn't be tested and I worry that I'd miss that. This detracted from the whole red/green/refactor rhythm. Is this just lack of experience or am I doing something wrong?
The "Fake it till you make it" left me with a lot of messy code that I later spent a lot of time to refactor and clean up. Is this the way things work?
At the end of the session, I now have my client and server running with around 3 or 4 unit tests. It took me around a week to do it. I think I could have done it in a day if I had written the unit tests after the code. I fail to see the gain.
I'm looking for comments and advice from people who have implemented large non-trivial projects completely (or almost completely) using this methodology. It makes sense to me to follow the way after I have something already running and want to add a new feature, but doing it from scratch seems too tiresome and not worth the effort.
P.S. : Please let me know if this should be community wiki and I'll mark it like that.
Update 0 : All the answers were equally helpful. I picked the one I did because it resonated with my experiences the most.
Update 1: Practice Practice Practice!
As a preliminary comment, TDD takes practice. When I look back at the tests I wrote when I began TDD, I see lots of issues, just like when I look at code I wrote a few years ago. Keep doing it, and just like you begin to recognize good code from bad, the same thing will happen with your tests - with patience.
How do you make such a list? I initially had a few items like "server should start up", "server should abort if channel is not available" etc. but they got mixed and finally now, it's just something like "client should be able to connect to server"
"The list" can be rather informal (that's the case in Beck's book) but when you move into making the items into tests, try to write the statements in a "[When something happens to this] then [this condition should be true on that]" format. This will force you to think more about what it is you are verifying, how you would verify it and translates directly into tests - or if it doesn't it should give you a clue about which piece of functionality is missing. Think use case / scenario. For instance "server should start up" is unclear, because nobody is initiating an action.
Each time things changed, I found that I had to rewrite considerable parts of the tests which was annoying. I'd figured that the tests would be a somewhat invariable guide during my development. They just felt like more code to handle.
First, yes, tests are more code, and require maintenance - and writing maintainable tests takes practice. I agree with S. Lott: if you need to change your tests a lot, you are probably testing "too deep". Ideally you want to test at the level of the public interface, which is not likely to change, and not at the level of the implementation detail, which could evolve. But part of the exercise is about coming up with a design, so you should expect to get some of it wrong and have to move/refactor your tests as well.
I could mock one of the sides to test the other but then the whole channel wouldn't be tested and I worry that I'd miss that.
Not totally sure about that one. From the sound of it, using a mock was the right idea: take one side, mock the other one, and check that each side works, assuming the other one is implemented properly. Testing the whole system together is integration testing, which you also want to do, but is typically not part of the TDD process.
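For illustration, a sketch of what mocking one side can look like with unittest.mock; the Client class and its handshake message are assumptions, not the actual protocol:

    import unittest
    from unittest.mock import Mock

    class ClientTest(unittest.TestCase):
        def test_client_sends_handshake_on_connect(self):
            channel = Mock()              # stands in for the real USB/socket channel
            client = Client(channel)      # hypothetical client taking an injected channel
            client.connect()
            # The client side works if it drives the channel correctly;
            # whether the real channel delivers the bytes is integration testing.
            channel.send.assert_called_once_with(b"HELLO")  # assumed handshake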
The "Fake it till you make it" left me
with a lot of messy code that I later
spent a lot of time to refactor and
clean up. Is this the way things work?
You should spend a lot of time refactoring while doing TDD. On the other hand, when you fake it, it's temporary, and your immediate next step should be to un-fake it. Typically you shouldn't have multiple tests passing because you faked it - you should be focusing on one piece at a time, and work on refactoring it ASAP.
I think I could have done it in a day if I were using the unit tests after code way. I fail to see the gain.
Again, it takes practice, and you should get faster over time. Also, sometimes TDD is more fruitful than at other times. I find that in some situations, when I know exactly the code I want to write, it's just faster to write a good part of the code and then write tests.
Besides Beck, one book I enjoyed is The Art of Unit Testing, by Roy Osherove. It's not a TDD book, and it is .Net-oriented, but you might want to give it a look anyways: a good part is about how to write maintainable tests, tests quality and related questions. I found that the book resonated with my experience after having written tests and sometimes struggled to do it right...
So my advice is, don't throw in the towel too fast, and give it some time. You might also want to give it a shot on something easier - testing server communication related things doesn't sound like the easiest project to start with!
Kent Beck uses a list ... finally now, it's just something like "client should be able to connect to server" (which subsumed server startup etc.).
Often a bad practice.
Separate tests for each separate layer of the architecture are good.
Consolidated tests tend to obscure architectural issues.
However, only test the public functions. Not every function.
And don't invest a lot of time optimizing your testing. Redundancy in the tests doesn't hurt as much as it does in the working application. If things change and one test works, but another test breaks, perhaps then you can refactor your tests. Not before.
2. How do you handle rewrites? ... I found that I had to rewrite considerable parts of the tests.
You're testing at too low a level of detail. Test the outermost, public, visible interface. The part that's supposed to be unchanging.
And
Yes, significant architectural change means significant testing change.
And
The test code is how you prove things work. It is almost as important as the application itself. Yes, it's more code. Yes, you must manage it.
3. I needed a client and a server to communicate through the channel to test either side. I could mock one of the sides to test the other but then the whole channel wouldn't be tested ...
There are unit tests. With mocks.
There are integration tests, which test the whole thing.
Don't confuse them.
You can use unit test tools to do integration tests, but they're different things.
And you need to do both.
4. The "Fake it till you make it" left me with a lot of messy code that I later spent a lot of time to refactor and clean up. Is this the way things work?
Yes. That's exactly how it works. In the long run, some people find this more effective than straining their brains trying to do all the design up front. Others don't like this and prefer to design everything first; you're free to do as much up-front design as you want.
I've found that refactoring is a good thing and design up front is too hard. Maybe it's because I've been coding for almost 40 years and my brain is wearing out.
5. I fail to see the gain.
All the true geniuses find that testing slows them down.
The rest of us can't be sure our code works until we have a complete set of tests that prove that it works.
If you don't need proof that your code works, you don't need testing.
Q. Kent Beck uses a list which he adds to and strikes out from to guide the development process. How do you make such a list? I initially had a few items like "server should start up", "server should abort if channel is not available" etc. but they got mixed and finally now, it's just something like "client should be able to connect to server" (which subsumed server startup etc.).
I start by picking anything I might check. In your example, you chose "server starts".
Server starts
Now I look for any simpler test I might want to write. Something with less variation, and fewer moving parts. I might consider "configured server correctly", for example.
Configured server correctly
Server starts
Really, though, "server starts" depends on "configured server correctly", so I make that link clear.
Configured server correctly
Server starts if configured correctly
Now I look for variations. I ask, "What could go wrong?" I could configure the server incorrectly. How many different ways that matter? Each of those makes a test. How might the server not start even though I configured it correctly? Each case of that makes a test.
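One convenient form for such a list is a set of empty test skeletons, one per item, so each variation stays a single test; all names below are illustrative:

    import unittest

    class ServerStartupTests(unittest.TestCase):
        def test_configured_server_correctly(self):
            self.fail("todo")

        def test_server_starts_if_configured_correctly(self):
            self.fail("todo")

        # Variations: each way the configuration can be wrong is its own test.
        def test_server_rejects_configuration_with_missing_port(self):
            self.fail("todo")

        def test_server_reports_error_when_channel_already_in_use(self):
            self.fail("todo")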
Q. How do you handle rewrites? I initially selected a half duplex system based on named pipes so that I could develop the application logic on my own machine and then later add the USB communication part. It then moved to being socket based, and then from using raw sockets to using the Python SocketServer module. Each time things changed, I found that I had to rewrite considerable parts of the tests, which was annoying. I'd figured that the tests would be a somewhat invariable guide during my development. They just felt like more code to handle.
When I change behavior, I find it reasonable to change the tests, and even to change them first! If I have to change tests that don't directly check the behavior I'm in the process of changing, though, that's a sign that my tests depend on too many different behaviors. Those are integration tests, which I think are a scam. (Google "Integration tests are a scam")
Q. I needed a client and a server to communicate through the channel to test either side. I could mock one of the sides to test the other but then the whole channel wouldn't be tested and I worry that I'd miss that. This detracted from the whole red/green/refactor rhythm. Is this just lack of experience or am I doing something wrong?
If I build a client, a server, and a channel, then I try to check each in isolation. I start with the client, and when I test-drive it, I decide how the server and channel need to behave. Then I implement the channel and server each to match the behavior I need. When checking the client, I stub the channel; when checking the server, I mock the channel; when checking the channel, I stub and mock both client and server. I hope this makes sense to you, since I have to make some serious assumptions about the nature of this client, server, and channel.
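To make the stub/mock distinction concrete, here is a sketch under the same assumptions (invented Client/Server classes and an invented PING/PONG protocol): a stub feeds canned channel behaviour to the code under test, while a mock lets you verify afterwards how the code drove the channel:

    import unittest
    from unittest.mock import Mock

    class ClientTests(unittest.TestCase):
        def test_client_returns_reply_read_from_channel(self):
            channel = Mock()
            channel.receive.return_value = b"PONG"   # stub: canned behaviour
            client = Client(channel)                 # hypothetical
            self.assertEqual(client.request(b"PING"), b"PONG")

    class ServerTests(unittest.TestCase):
        def test_server_writes_reply_to_channel(self):
            channel = Mock()
            server = Server(channel)                 # hypothetical
            server.handle(b"PING")
            channel.send.assert_called_once_with(b"PONG")  # mock: verify the call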
Q. The "Fake it till you make it" left me with a lot of messy code that I later spent a lot of time to refactor and clean up. Is this the way things work?
If you let your "fake it" code get very messy before cleaning it up, then you might have spent too long faking it. That said, I find that even though I end up cleaning up more code with TDD, the overall rhythm feels much better. This comes from practice.
Q. At the end of the session, I now have my client and server running with around 3 or 4 unit tests. It took me around a week to do it. I think I could have done it in a day if I were using the unit tests after code way. I fail to see the gain.
I have to say that unless your client and server are very, very simple, you need more than 3 or 4 tests each to check them thoroughly. I will guess that your tests check (or at least execute) a number of different behaviors at once, and that might account for the effort it took you to write them.
Also, don't measure the learning curve. My first real TDD experience consisted of re-writing 3 months' worth of work in nine 14-hour days. I had 125 tests that took 12 minutes to run. I had no idea what I was doing, and it felt slow, but it felt steady, and the results were fantastic. I essentially re-wrote in 3 weeks what originally took 3 months to get wrong. If I wrote it now, I could probably do it in 3-5 days. The difference? My test suite would have 500 tests that take 1-2 seconds to run. That came with practice.
As a novice programmer, the thing I found tricky about test-driven development was the idea that testing should come first.
To the novice, that’s not actually true. Design comes first. (Interfaces, objects and classes, methods, whatever’s appropriate to your language.) Then you write your tests to that. Then you write the code that actually does stuff.
It’s been a while since I looked at the book, but Beck seems to write as if the design of the code just sort of happens unconsciously in your head. For experienced programmers, that may be true, but for noobs like me, nuh-uh.
I found the first few chapters of Code Complete really useful for thinking about design. They emphasise the fact that your design may well change, even once you’re down at the nitty gritty level of implementation. When that happens, you may well have to re-write your tests, because they were based on the same assumptions as your design.
Coding is hard. Let’s go shopping.
For point one, see a question I asked a while back relating to your first point.
Rather than handle the other points in turn, I'll offer some global advice. Practice. It took me a good while and a few 'dodgy' (though personal) projects to actually get TDD. Just Google for much more compelling reasons on why TDD is so good.
Despite the tests driving the design of my code, I still get a whiteboard and scribble out some design. From this, at least you have some idea of what you are meant to be doing. Then I produce the list of tests per fixture that I think I need. Once you start working, more features and tests get added to the list.
One thing that stood out from your question is the act of rewriting your tests again. This sounds like you are testing behaviour rather than state. In other words, the tests sound too closely tied to your code, so a simple change that doesn't affect the output will break some tests. Unit testing (at least good unit testing) is a skill to master, too.
I recommend the Google Testing Blog quite heavily because some of the articles on there made my testing for TDD projects much better.
If the named pipes were put behind the right interface, changing how that interface is implemented (from named pipes to sockets to another sockets library) should only impact the tests for the component that implements that interface. So cutting things up more/differently would have helped. The interface the sockets sit behind will likely evolve too.
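In Python that interface could be as small as the following sketch (method names assumed); the client and server code against Channel, and only the concrete implementations and their tests change when the transport does:

    import abc
    import socket

    class Channel(abc.ABC):
        # Transport-neutral seam between application logic and I/O.
        @abc.abstractmethod
        def send(self, data: bytes) -> None: ...

        @abc.abstractmethod
        def receive(self) -> bytes: ...

    class SocketChannel(Channel):
        def __init__(self, sock: socket.socket):
            self._sock = sock

        def send(self, data: bytes) -> None:
            self._sock.sendall(data)

        def receive(self) -> bytes:
            return self._sock.recv(4096)

    # A NamedPipeChannel or UsbChannel implements the same two methods,
    # so swapping transports leaves the client/server tests untouched.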
I started doing TDD maybe 6 months ago? I am still learning myself. I can say that over time my tests and code have gotten much better, so keep it up. I really recommend the book xUnit Test Patterns as well.
How do you make such a list to add to and strike out from to guide the development process? I initially had a few items like "server should start up", "server should abort if channel is not available"
Items in TDD TODO lists are finer grained than that; they aim at testing one behavior of one method only, for instance:
test successful client connection
test client connection error type 1
test client connection error type 2
test successful client communication
test client communication fails when not connected
You could build a list of tests (positive and negative) for every example you gave. Moreover, when unit testing you do not establish any connection between the server and the client. You just invoke methods in isolation, ... This answers question 3.
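For example, two items from that list might become the following isolated tests; the Client class, its is_connected flag, and NotConnectedError are made-up names for illustration:

    import unittest
    from unittest.mock import Mock

    class ClientConnectionTests(unittest.TestCase):
        def test_successful_client_connection(self):
            client = Client(Mock())                  # hypothetical; no real server involved
            client.connect()
            self.assertTrue(client.is_connected)     # assumed attribute

        def test_client_communication_fails_when_not_connected(self):
            client = Client(Mock())
            with self.assertRaises(NotConnectedError):  # assumed exception
                client.send(b"data")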
How do you handle rewrites?
If the unit tests test behavior and not implementation, then they do not have to be rewritten. If the unit test code really creates a named pipe to communicate with the production code, then obviously the tests have to be modified when switching from pipe to socket.
Unit tests should stay away from external resources such as filesystems, networks and databases, because they are slow, can be unavailable ... see these Unit Testing rules.
This implies that the lowest-level functions are not unit tested; they will be covered by integration tests, where the whole system is tested end-to-end.
Related
I wrote a python program for the company I am employed at. We want to bring this code (along with other code) onto customers' servers. My boss wants the code secured so that the customer cannot read it. I know that other people have asked similar questions
How do I protect Python code?
and the answer so far has been "it cannot be done, use legal measures like NDAs". On the one hand this is hard to believe; on the other hand my boss still wants it, not just legal measures (and honestly, so do I). A colleague suggested the following, which could apply to code other than Python as well, and also to databases and whatnot, so it would be more all-purpose than solutions tailored just to Python:
Put all the code into a "box", encrypt the box and make it so that the box can only be decrypted into memory, where the code will be interpreted.
This sounds legit to me. It might be possible for the client to read the content while the program is in memory, but this sounds like a lot of work and not worth the effort for the client.
How could this approach be implemented in practice?
Sometimes you need to figure out what effect your boss wants instead of the boss' request.
Python made an early design choice to prevent information hiding in order to make better development tools available. This breaks most pure Python schemes.
Also, all these systems can be broken with different levels of effort. The most common mechanisms for protection:
Talk to the mothership: require an Internet connection and a system you guarantee to keep up (against DDOS attacks, etc.) such that the program needs to talk to the mothership to run. Sometimes logging in does something as simple as passing back the current date, so that using pirated software is possible but really annoying. Other configurations have significant code running only on the server.
Subvert Python: you can screw around with import hooks, decrypt some string and eval it, screw around with custom codecs, run code obfuscators, etc. The problem is that these always cause errors, the errors are hard to debug, and users can no longer give you a nice stacktrace of whatever other errors occurred.
Configure Customer Name: your configuration file carries the customer name along with some cryptographic check-sum over it.
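As a sketch of that last mechanism using only the standard library (the embedded secret is of course recoverable by a determined user, so this only raises the bar; all names here are made up):

    import hmac
    import hashlib

    SECRET = b"embedded-application-secret"

    def sign_customer(name: str) -> str:
        return hmac.new(SECRET, name.encode(), hashlib.sha256).hexdigest()

    def license_is_valid(name: str, checksum: str) -> bool:
        # The shipped config file carries the customer name plus this checksum.
        return hmac.compare_digest(sign_customer(name), checksum)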
So, the general answer is no. Use contracts and stay out of China.
box can only be decrypted into memory, where the code will be interpreted
If the key is located on the client's side, from the security perspective the encryption is pointless.
You may still obfuscate your code to make it less readable, but it is still only obfuscation.
this sounds like a lot of work and not worth the effort.
that may be a good estimation. I believe this is one of the reasons why SaaS and APIs are so popular these days.
I am working for a company that wants me to test and cover every piece of code I have.
My code works properly in the browser. There are no errors and no faults.
Given that my code works properly in the browser and my system responds properly, do I need to do testing? Is it compulsory?
Whether it's compulsory depends on the organization you work for. If others say it is, then it is. Just check how tests are normally written in the company and follow the existing examples.
(There’re a lot of ways Django-based website can be tested, different companies do it differently.)
Why write tests?
Regression testing. You checked that your code is working, but does it still work now? You or someone else may change something and break your code at some point. Running the test suite makes sure that what was written yesterday still works today; that the bug fixed last week wasn't accidentally re-introduced; that things don't regress.
Elegant code structuring. Writing tests for your code forces you to write code in certain way. For example, if you must test a long 140-line function definition, you’ll realize it’s much easier to split it into smaller units and test them separately. Often when a program is easy to test it’s an indicator that it was written well.
Understanding. Writing tests helps you understand what the requirements for your code are. Properly written tests will also help new developers understand what the code does and why. (Sometimes documentation doesn't cover everything.)
Automated tests can exercise your code under many different conditions quickly; sometimes it's not humanly possible to test everything by hand each time a new feature is added.
If there’s the culture of writing tests in the organization, it’s important that everyone follows it without exceptions. Otherwise people would start slacking and skipping tests, which would cause regressions and errors later on.
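As a concrete illustration, a minimal Django regression test could look like this sketch; the URL name "home" and the expected text are placeholders, not anything from your project:

    from django.test import TestCase
    from django.urls import reverse

    class HomePageTest(TestCase):
        def test_home_page_renders(self):
            # If a later change breaks the view or its template, this fails.
            response = self.client.get(reverse("home"))   # assumed URL name
            self.assertEqual(response.status_code, 200)
            self.assertContains(response, "Welcome")      # assumed page content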
First, I'm a newbie at both TDD and Python. I don't code for a living, but I do write a ton of code.
I'm in the process of moving a large project from Matlab to Python: 1) because the language has been limiting, and 2) because when analyzing edge cases and debugging, I was starting to break as much as I was fixing. So I decided to start from scratch, with TDD this time.
I get the TDD cycle (red, green, refactor). The question is: what does one test? For an individual unit/function, how many tests do you write? In this case, I already have the whole project in my head, even though there will be a few library and structural changes between the languages.
Also, I've heard ad nauseam to use asserts everywhere, so I am. But sometimes it seems I'm writing tests to verify my asserts. And it seems like a waste of time to write a valid input type test for every argument, one function at a time.
You've run into the very common realisation among programmers who try to embrace testing (good for you!) - that unit testing, for the most part, sucks. Or at the very least is the wrong tool for what you are trying to do.
And this is why I recommend for you to drop the heresy of unit testing and embrace the elegance and beauty of behaviour testing. There even is a fantastic library for it: behave.
This not only allows you to re-use your code, but also forces you to describe the logic of your application in plain English (which, when done right, is equivalent to proper documentation), which helps to narrow down logic holes and other design nonsense that may arise. It also removes the problem of "what to test", as with BDD you should test every way the application can behave.
In your specific case you will definitely also be interested in the scenario outline functionality, where you can write the testing code once, and then just write long list of examples to run by this code.
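A sketch of how that fits together with behave (the feature text is shown as a comment; the helper functions are hypothetical):

    # steps/connection_steps.py -- backing a feature file such as:
    #
    #   Scenario Outline: connecting to the server
    #     Given the server is <server_state>
    #     When the client tries to connect
    #     Then the connection should <outcome>
    #
    #     Examples:
    #       | server_state | outcome |
    #       | running      | succeed |
    #       | stopped      | fail    |

    from behave import given, when, then

    @given("the server is {server_state}")
    def step_server_state(context, server_state):
        context.server = start_test_server() if server_state == "running" else None  # hypothetical helper

    @when("the client tries to connect")
    def step_client_connects(context):
        try:
            context.connection = connect_client(context.server)  # hypothetical helper
        except ConnectionError:
            context.connection = None

    @then("the connection should {outcome}")
    def step_check_outcome(context, outcome):
        assert (context.connection is not None) == (outcome == "succeed")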
I have a test suite for my app. As the test suite grew organically, the tests accumulated a lot of repeated code which can be refactored.
However, I would like to ensure that the test suite's behaviour doesn't change with the refactor. How can I check that my tests are invariant under the refactor?
(I am using Python + unittest), but I guess the answer to this can be language agnostic.
The real test for the tests is the production code.
An effective way to check that a test code refactor hasn't broken your tests would be to do Mutation Testing, in which a copy of the code under test is mutated to introduce errors in order to verify that your tests catch the errors. This is a tactic used by some test coverage tools.
I haven't used it (and I'm not really a python coder), but this seems to be supported by the Python Mutant Tester, so that might be worth looking at.
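The idea is simple enough to hand-roll for a feel of it: deliberately break a copy of the code under test and check that the suite notices. Everything below is a toy illustration:

    # Production code.
    def clamp(value, low, high):
        return max(low, min(high, value))

    # A "mutant": one operator deliberately flipped (min -> max).
    def clamp_mutant(value, low, high):
        return max(low, max(high, value))

    def run_clamp_tests(fn):
        assert fn(5, 0, 10) == 5
        assert fn(-5, 0, 10) == 0
        assert fn(15, 0, 10) == 10

    run_clamp_tests(clamp)            # passes: the real code is fine
    try:
        run_clamp_tests(clamp_mutant)
    except AssertionError:
        print("mutant killed: the refactored tests still catch errors")
    else:
        print("mutant survived: the tests have a hole")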
Coverage.py is your friend.
Move all the tests you want to refactor over into "system tests" (or some such tag). Refactor the tests you want (you would be writing unit tests here, right?) and monitor the coverage:
After running your new unit tests but before running the system tests
After running both the new unit tests and the system tests.
In the ideal case, the coverage would be the same or higher, and you can then trash your old system tests.
FWIW, py.test provides a mechanism for easily tagging tests and running only specific tests, and it is compatible with unittest2 tests.
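A sketch of the before/after measurement driven from Python via coverage.py's API (the test module names are placeholders):

    import coverage
    import unittest

    def coverage_for(*test_modules):
        cov = coverage.Coverage()
        cov.start()
        suite = unittest.TestSuite(
            unittest.TestLoader().loadTestsFromName(m) for m in test_modules)
        unittest.TextTestRunner().run(suite)
        cov.stop()
        return cov.report()   # returns the total coverage percentage

    unit_only = coverage_for("tests.new_unit_tests")                         # placeholder
    combined = coverage_for("tests.new_unit_tests", "tests.system_tests")    # placeholders
    print("unit tests alone: %.1f%%, with system tests: %.1f%%" % (unit_only, combined))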
Interesting question - I'm always keen to hear discussions of the type "how do I test the tests?!". And good points from #marksweb above too.
It's always a challenge to check that your tests are actually doing what you want them to do and testing what you intend, but it's good to get this right and do it properly. I always try to follow the rule of thumb that testing should make up 1/3 of development effort in any project... regardless of the project time constraints, pressures and problems that inevitably crop up.
If you intend to continue and grow your project, have you considered refactoring as you say, but in a way that creates a proper test framework allowing test-driven development (TDD) of any future additions of functionality or general expansion of the project?
Although you have mentioned Python, I would like to comment on how refactoring is applied in Smalltalk. Most modern Smalltalk implementations include a "Refactoring Browser" integrated in a System Browser to restructure source code. The RB includes a rewrite framework to perform dynamically the transformations you asked about, preserving the system's behavior and stability. A way to use it is to open a scoped browser, apply refactorings and review/edit changes with a diff tool before committing. I don't know about the maturity of Python refactoring tools, but it took the Smalltalk community many iteration cycles (years) to have such an amazing piece of software.
Don Roberts and John Brant wrote one of the first refactoring browsers, which now serves as the standard for refactoring tools. There are videos demonstrating some of these features. For promoting a method into a superclass, in Pharo you just select the method, then the refactor and "pull up" menu items. The rule will detect the proposed duplicated sub-implementors and let you review them for deletion before execution. Refactorings are applied independently of the testing code.
In theory you could write a test for the test, mocking the actual object under test. But I guess that is just way too much work and not worth it.
So what you are left with are some strategies that will help, but won't make this fail-safe.
Work very carefully and slowly. Use the features of your IDE as much as possible in order to limit the chance of human error.
Work in pairs. A partner looking over your shoulder might just spot the glitch that you missed.
Copy the test, then refactor it. When done, introduce errors in the production code to ensure both tests find the problem in the same (or equivalent) ways. Only then remove the original test.
The last step can be done by tools, although I don't know the python flavors. The keyword to search for is 'mutation testing'.
Having said all that, I'm personally satisfied with steps 1+2.
I can't see an easy way to refactor a test suite, and depending on the extent of your refactor you're obviously going to have to change the test suite. How big is your test suite?
Refactoring properly takes time and attention to detail (and a lot of Ctrl+C Ctrl+V!). Whenever I've refactored my tests I don't try and find any quick ways of doing things, besides find & replace, because there is too much risk involved.
You're best off doing things properly and manually, albeit slowly, if you want to keep up the quality of your tests.
Don't refactor the test suite.
The purpose of refactoring is to make it easier to maintain the code, not to satisfy some abstract criterion of "code niceness". Test code doesn't need to be nice, it doesn't need to avoid repetition, but it does need to be thorough. Once you have a test that is valid (i.e. it really does test necessary conditions on the code under test), you should never remove it or change it, so test code doesn't need to be easy to maintain en masse.
If you like, you can rewrite the existing tests to be nice, and run the new tests in addition to the old ones. This guarantees that the new combined test suite catches all the errors that the old one did (and maybe some more, as you expand the new code in future).
There are two ways that a test can be deemed invalid -- you realise that it's wrong (i.e. it sometimes fails falsely for correct code under test), or else the interface under test has changed (to remove the API tested, or to permit behaviour that previously was a test failure). In that case you can remove a test from the suite. If you realise that a whole bunch of tests are wrong (because they contain duplicated code that is wrong), then you can remove them all and replace them with a refactored and corrected version. You don't remove tests just because you don't like the style of their source.
To answer your specific question: to test that your new test code is equivalent to the old code, you would have to ensure (a) all the new tests pass on your currently-correct-as-far-as-you-known code base, which is easy, but also (b) the new tests detect all the errors that the old tests detect, which is usually not possible because you don't have on hand a suite of faulty implementations of the code under test.
Test code can be the best low-level documentation of your API, since it does not go out of date as long as it passes and is correct. But messy test code doesn't serve that purpose very well, so refactoring is essential.
Your tested code might also change over time, and so will the tests. If you want that to go smoothly, code duplication must be minimized and readability is key.
Tests should be easy to read, should always test one thing at once, and should make the following explicit:
what are the preconditions?
what is being executed?
what is the expected outcome?
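Those three questions map directly onto the familiar arrange/act/assert layout, as in this sketch (Account is a made-up class):

    import unittest

    class WithdrawalTest(unittest.TestCase):
        def test_withdrawal_reduces_balance(self):
            # Preconditions: an account holding 100.
            account = Account(balance=100)   # hypothetical
            # What is being executed: a single withdrawal.
            account.withdraw(30)
            # Expected outcome: the balance drops by exactly that amount.
            self.assertEqual(account.balance, 70)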
If that is considered, it should be pretty safe to refactor the test code. One step at a time and, as #Don Ruby mentioned, let your production code be the test for the test.
For many refactorings you can often safely rely on advanced IDE tooling – provided you beware of side effects in the extracted code.
Although I agree that refactoring without proper test coverage should be avoided, I think writing tests for your tests is almost absurd in usual contexts.
We are maintaining a web application that is built on Classic ASP using VBScript as the primary language. We are in agreement that our backend (framework, if you will) is outdated and doesn't provide us with the proper tools to move forward in a quick manner. We have pretty much embraced the current web MVC pattern that is all over the place, and cannot implement it in a reasonable manner with the current technology. The big missing features are proper dispatching and templating with inheritance, amongst others.
Currently there are two paths being discussed:
Port the existing application to Classic ASP using JScript, which will allow us to hopefully go from there to .NET MSJscript without too much trouble, and eventually end up on the .NET platform (preferably the MVC stuff will be done by then; ASP.NET isn't much better than where we are now, in our opinion). This has been argued as the safer path with less risk than the next option, albeit it might take slightly longer.
Completely rewrite the application using some other technology; right now the leader of the pack is Python WSGI with a custom framework, ORM, and a good templating solution. There is wiggle room here even for Django and other pre-built solutions. This method would hopefully be the quickest solution, as we would probably run a beta beside the actual product, but it does have the potential for a big waste of time if we can't/don't get it right.
This does not mean that our logic is gone, as what we have built over the years is fairly stable, as noted just difficult to deal with. It is built on SQL Server 2005 with heavy use of stored procedures and published on IIS 6, just for a little more background.
Now, the question. Has anyone taken either of the two paths above? If so, was it successful, how could it have been better, etc. We aren't looking to deviate much from doing one of those two things, but some suggestions or other solutions would potentially be helpful.
Don't throw away your code!
It's the single worst mistake you can make (on a large codebase). See Things You Should Never Do, Part 1.
You've invested a lot of effort into that old code and worked out many bugs. Throwing it away is a classic developer mistake (and one I've done many times). It makes you feel "better", like a spring cleaning. But you don't need to buy a new apartment and all new furniture to outfit your house. You can work on one room at a time... and maybe some things just need a new paintjob. Hence, this is where refactoring comes in.
For new functionality in your app, write it in C# and call it from your classic ASP. You'll be forced to be modular when you rewrite this new code. When you have time, refactor parts of your old code into C# as well, and work out the bugs as you go. Eventually, you'll have replaced your app with all new code.
You could also write your own compiler. We wrote one for our classic ASP app a long time ago to allow us to output PHP. It's called Wasabi and I think it's the reason Jeff Atwood thought Joel Spolsky went off his rocker. Actually, maybe we should just ship it, and then you could use that.
It allowed us to switch our entire codebase to .NET for the next release while only rewriting a very small portion of our source. It also caused a bunch of people to call us crazy, but writing a compiler is not that complicated, and it gave us a lot of flexibility.
Also, if this is an internal only app, just leave it. Don't rewrite it - you are the only customer and if the requirement is you need to run it as classic asp, you can meet that requirement.
Use this as an opportunity to remove unused features! Definitely go with the new language. Call it 2.0. It will be a lot less work to rebuild the 80% of it that you really need.
Start by wiping your brain clean of the whole application. Sit down with a list of its overall goals, then decide which features are needed based on which ones are used. Then redesign it with those features in mind, and build.
(I love to delete code.)
It works out better than you'd believe.
Recently I did a large reverse-engineering job on a hideous old collection of C code. Function by function I reallocated the features that were still relevant into classes, wrote unit tests for the classes, and built up what looked like a replacement application. It had some of the original "logic flow" through the classes, and some classes were poorly designed [Mostly this was because of a subset of the global variables that was too hard to tease apart.]
It passed unit tests at the class level and at the overall application level. The legacy source was mostly used as a kind of "specification in C" to ferret out the really obscure business rules.
Last year, I wrote a project plan for replacing 30-year old COBOL. The customer was leaning toward Java. I prototyped the revised data model in Python using Django as part of the planning effort. I could demo the core transactions before I was done planning.
Note: It was quicker to build the model and admin interface in Django than to plan the project as a whole.
Because of the "we need to use Java" mentality, the resulting project will be larger and more expensive than finishing the Django demo. With no real value to balance that cost.
Also, I did the same basic "prototype in Django" for a VB desktop application that needed to become a web application. I built the model in Django, loaded legacy data, and was up and running in a few weeks. I used that working prototype to specify the rest of the conversion effort.
Note: I had a working Django implementation (model and admin pages only) that I used to plan the rest of the effort.
The best part about doing this kind of prototyping in Django is that you can mess around with the model, unit tests and admin pages until you get it right. Once the model's right, you can spend the rest of your time fiddling around with the user interface until everyone's happy.
Whatever you do, see if you can manage to follow a plan where you do not have to port the application all in one big bang. It is tempting to throw it all away and start from scratch, but if you can manage to do it gradually, the mistakes you make will not cost so much or cause so much panic.
Half a year ago I took over a large web application (fortunately already in Python) which had some major architectural deficiencies (templates and code mixed, code duplication, you name it...).
My plan is to eventually have the system respond to WSGI, but I am not there yet. I found the best way to do it, is in small steps. Over the last 6 month, code reuse has gone up and progress has accelerated.
General principles which have worked for me:
Throw away code which is not used or commented out
Throw away all comments which are not useful
Define a layer hierarchy (models, business logic, view/controller logic, display logic, etc.) for your application. This does not have to be a very clear-cut architecture, but rather should help you think about the various parts of your application and help you better categorize your code.
If something grossly violates this hierarchy, change the offending code. Move the code around, recode it at another place, etc. At the same time adjust the rest of your application to use this code instead of the old one. Throw the old one away if not used anymore.
Keep your APIs simple!
Progress can be painstakingly slow, but should be worth it.
I would not recommend JScript as that is definitely the road less traveled.
ASP.NET MVC is rapidly maturing, and I think that you could begin a migration to it, simultaneously ramping up on the ASP.NET MVC framework as its finalization comes through.
Another option would be to use something like ASP.NET w/Subsonic or NHibernate.
Don't try to go 2.0 (more features than currently exist or are scheduled); instead build your new platform with the intent of resolving the current issues with the code base (maintainability/speed/wtf) and go from there.
A good place to begin if you're considering the move to Python is to rewrite your administrator interface in Django. This will help you get some of the kinks worked out in terms of getting Python up and running with IIS (or to migrate it to Apache). Speaking of which, I recommend isapi-wsgi. It's by far the easiest way to get up and running with IIS.
I agree with Michael Pryor and Joel that it's almost always a better idea to continue evolving your existing code base rather than re-writing from scratch. There are typically opportunities to just re-write or re-factor certain components for performance or flexibility.