Steven Feuerstein presented a new approach to the development workflow in the best-practices presentation he gave in the Netherlands last week. He gave this presentation about three times, for different audiences. I have seen his approach before and I think the idea is pretty good. I don't know if or how we can use this approach in our current work. I don't think we should change our approach with a 'big bang'; instead, we should pick elements from the approach and apply them step by step to our current way of working.
First off, here's the approach Steven is talking about.
1) Make sure we have SQL access, Error management and Coding Conventions in place. If one of these is not available, it’s not really useful to continue :-).
Single Unit Preparation
2) Define Requirements.
Don't just let the customer (or a manager) come up with ideas of what the program should do; check those ideas to see whether it's possible to build them, whether we want to build them (ethical issues, company policy, etc.) and, maybe most importantly, whether we understand correctly what the specs mean.
3) Construct Header
Construct just the header of the program we are creating. Put in only the parameters that are supposed to go into it, define the return type and nothing more. Well, maybe add a stub body to get the program to compile.
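Steven's examples are PL/SQL, but the 'header first, stub body' idea is language-agnostic. Here is a minimal sketch in Python, using a hypothetical `between_str` function (a made-up example, not from the original post):

```python
def between_str(text: str, start: int, end: int) -> str:
    """Return the characters of `text` from position `start`
    through `end` (1-based, inclusive)."""
    # Stub body: just enough for the module to load ("compile").
    # The real implementation comes later, after the tests exist.
    raise NotImplementedError
```

The signature and the docstring pin down the contract agreed on in step 2; the body deliberately does nothing yet.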
4) Define Tests
Before we dive into building the actual code, we should think about tests that tell us when we're done building. How else would we know we are done? The only way to know is when the program passes the tests we defined.
5) Build Test Code
And now we can build code, but it is code to test the code we are about to build. This may feel strange: building code to test a program that doesn't even exist yet. And it is genuinely hard to write code that tests code. This is where Quest CodeTester can come in handy: in this step we can take the cases defined in step 4 and have CodeTester generate the test code for us.
The Build Cycle
6) Build/Document/Fix code
Finally, we get to build our code. Taking the defined specifications, testcases and our knowledge of the language, we can build our first implementation of the program.
After we are done coding the implementation, it’s time to run our tests against the program. Chances are that you won’t get all the testcases to run to success the first time. But seeing that at least some cases result in success is a good incentive to continue the building process.
Debug a testcase that failed. Find out where it went wrong and fix the error (step 6). Then test again to see that none of the cases that completed successfully before failed after the fix.
Loop through these steps until all cases succeed. That's an indicator that you're done and the code can be released to production. I deliberately say it's an indicator and not a guarantee.
The first thing users do when they use our program is break it. Or rather, they run into a bug, which is probably a more accurate way of putting it. What this actually means is that they are using our program in a way we didn't anticipate. That means we still have a bug in our code but, maybe more importantly, we also have a hole in our testcases. So instead of fixing the bug immediately, we first need to build one or more testcases that reflect the bug report. Using this approach we can assert that our fix for the bug doesn't introduce new bugs in testcases that used to complete successfully.
If a user logs an enhancement request, we follow mostly the same approach as with a bug report. We first build testcases that reflect the enhancement request and only then change the code to include the enhancement. Using the existing cases, we can again assert that our new code doesn't introduce any new bugs.
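The bug-report workflow can be sketched in Python with a hypothetical `between_str(text, start, end)` example (1-based, inclusive substring; an illustration of mine, not from the post). Suppose a user reports that a `start` below 1 returns the wrong result: the report is first captured as a testcase, and only then is the code changed, with the old cases guarding against regressions:

```python
def between_str(text: str, start: int, end: int) -> str:
    """Return text from `start` through `end` (1-based, inclusive)."""
    # Fix for the (hypothetical) bug report: a start below 1 used to
    # wrap around via Python's negative indexing; clamp it to 1 instead.
    if start < 1:
        start = 1
    if start > end:
        return ""
    return text[start - 1:end]

# The new regression case, added BEFORE the fix was written:
assert between_str("abcdefg", 0, 3) == "abc"
# The existing cases still pass after the fix:
assert between_str("abcdefg", 3, 5) == "cde"
assert between_str("abcdefg", 5, 3) == ""
```

The regression case stays in the suite forever, so this particular bug can never come back unnoticed.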
Why should we go through all this trouble instead of just 'getting the job done' the way we're used to? Just getting the job done doesn't work in the long term. The code may work correctly at this point in time (or at least we assume it does), but what if a bug is identified weeks, months or maybe even years from now? Chances are you won't remember everything you tested. Heck, I don't even remember my testcases from last week. But if we save the testcases we used while building the code, we can run them again when we need to fix a bug or build an enhancement. And when we want to refactor our code, because of new insights, a new version of the database or newly acquired knowledge of the language, we can make sure that the new implementation behaves the same to the outside world. Even though the signature of the program doesn't change, the body can be implemented in a better (faster) way.
Unit testing can help us in the development process, even though it might seem like more work at the beginning. But think of it this way: we test our code while we build it anyway, so why not record those tests so we can reuse them later on? It might seem we are spending more time, but I think it's probably about the same amount. In the long run, we will spend less time to execute more tests.