When is Software Design Complete Enough?
In this age of agile software development, it’s too easy to rush through design and begin coding too soon. In addition to the obvious problems (missed requirements, etc.), this can also have devastating effects on the development team when they start seeing that development has built something completely different from what they were expecting.
On the other side of the coin, we’ve all heard of projects stuck in “analysis paralysis”: that state of never really getting started on a software development project because there is always one more thing to learn and discover. There are a number of reasons for getting hung up like this, but fear of not capturing that last bit of information “crucial” to the success of the project is probably one of the biggest.
We don’t want to rush through analysis and begin building too fast, but it’s also pretty obvious that we don’t want to spend too much time on the design. These competing pressures need to be balanced against each other until an accurate, shared idea of what is going to be built emerges.
So how do we know what design is “complete enough” to get started? The quick answer is when you know what you don’t know, and know enough to build what the team expects.
If you think about it, there are relatively few apparent “unknowns” at the beginning of a project. That last project you were part of? It was probably introduced in big, general arm-waving terms and described with a high degree of confidence. It sounded relatively easy and everyone was eager to get started, right?
As it progressed, four previously unknown database integrations were “discovered”, six new screens had to be added, and the underlying logic was much more complicated than originally thought. Each question answered led to three new ones. Then you woke up one morning realizing that the simple project described six weeks ago is now a monster that will never get completed at this rate!
Breathe easy. You are now becoming informed…
With time, there is going to come a tipping point: a magical day when, instead of three new questions, there might only be two, or one. You’re starting to close in on the solution at this point, and there’s a light at the end of the tunnel.
Every project goes through this lifecycle during the analysis stage and it should be expected, embraced, and planned for. The phases of the lifecycle probably look something like this:
- Phase 1: Few Unknowns/Uninformed
- Phase 2: Many Unknowns/Becoming Informed
- Phase 3: Few Unknowns/Informed
PLANNING FOR UNKNOWNS
How can you plan for unknowns if you’re uninformed? How do you estimate time? These are good questions, and ones you should be asking. The process is pretty straightforward and involves taking an inventory of what you do know, quantifying it, and making educated guesses.
You’re probably familiar with the 80/20 rule, which (roughly) says, “80% of the results come from 20% of the effort…” During the statement of work, we’ve got a 30/20 rule (the 80/20 comes later): a few simple elements (20%) will give us a rough idea of how involved the project is (with 30% certainty).
Our inventory is part of the statement of work (SOW) and should include simple and obvious elements such as: the number of screens that interact with the user; the number and relative complexity (low/medium/high) of “critical user tasks”; how many tables’ and columns’ worth of data appear on each screen; and the general number and complexity of any reports to be included with the application.
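To make that a bit more concrete, here is a minimal sketch of what such an inventory might look like; the element names, counts, and complexity ratings below are purely hypothetical, not taken from any real SOW.

```python
# A hypothetical SOW inventory: each entry is a count plus a relative
# complexity rating (low/medium/high). The numbers are illustrative only.
sow_inventory = {
    "user_screens":        {"count": 12, "complexity": "medium"},
    "critical_user_tasks": {"count": 8,  "complexity": "high"},
    "data_tables_shown":   {"count": 20, "complexity": "low"},
    "reports":             {"count": 5,  "complexity": "medium"},
}
```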
Note that experience with the team, the development environment, the application’s functionality, and the business processes all helps shorten this phase and reduce the unknowns.
Once you have a handle on quantifying what you know, figuring out man-hours is relatively easy, especially if the tasks are broken down into small enough chunks (which they should be). Determining calendar days is harder and subject to a large number of variables. However, once you have a multiplier that accounts for those variables (team size, availability, and overhead, for example), you can simply multiply the number of man-hours by it and come up with the number of calendar days required with remarkable accuracy.
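Continuing the hypothetical inventory sketched above, the roll-up arithmetic might look something like this; the hours-per-item figures and the calendar multiplier are made-up placeholders that you would calibrate from your own team’s history.

```python
# Made-up calibration: man-hours per inventory item, by relative complexity.
HOURS_BY_COMPLEXITY = {"low": 4, "medium": 12, "high": 32}

# Made-up multiplier folding team size, availability, and overhead into
# calendar days per man-hour.
CALENDAR_DAYS_PER_MAN_HOUR = 0.25

def estimate(inventory):
    """Roll an SOW inventory up into total man-hours and calendar days."""
    man_hours = sum(
        item["count"] * HOURS_BY_COMPLEXITY[item["complexity"]]
        for item in inventory.values()
    )
    return man_hours, man_hours * CALENDAR_DAYS_PER_MAN_HOUR

# Using the hypothetical sow_inventory from the earlier sketch.
man_hours, calendar_days = estimate(sow_inventory)
print(f"Estimated effort: {man_hours} man-hours, roughly {calendar_days:.0f} calendar days")
```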
APPLICATION DESIGN & ANALYSIS
At some point, as analysis nears completion, your design is going to close in on that magical “good enough” where you can get started building the darn thing. But how do you know when you’re there?
First, it’s important to break down the effort into a number of categories. Things like screens & GUI, data integrations, workflow automation & logic, and reporting are all logical breakdowns of a project and often lend themselves to being distinct sections in the design document. These categories should be continuations of the initial tasks used to estimate the project in the statement of work.
Then I use three simple questions to test how complete each section or category is. Remember, we’re not talking about a document here; we are interested in understanding. A document captures information and can be used to demonstrate understanding, but it is no substitute for understanding.
All team members should answer the questions both for themselves and as part of the team.
- Do I feel comfortable that all functionality and business processes used by the customer have been discovered, understood, and documented?
- Do I understand and agree with all functionality that is to be built into the new application? Has everything been dispositioned in a way that is clear and unambiguous?
- Is my vision of the completed project the same as, and shared with, all members of the team, including customer representatives?
The idea behind these questions is twofold. First, we want to avoid too many surprises further down the development lifecycle. As these same people (analysts, developers, testers and QA, etc.) begin interacting with the application, they should be experiencing a design they are already familiar with.
Second, once the design is “complete enough”, a whole slew of other tasks can get started. Accurate estimates (with 80% certainty) for the build, test, deploy, and maintenance phases can be made. Test scripts can be developed, and user documentation can be written.
This article was originally published on 2005-05-17 20:10:43, while I was a member of the University of Washington School of Medicine IT staff.