Jul 29, 2011

With the return of Atlantis on July 21st, the era of the Space Shuttle has come to an end. It certainly was a fascinating program that delivered quite a few highlights. However, it was not entirely successful.
Do you remember the praise given to the new concept of the shuttles 30 years ago? Here are a few things I remember:

  • The shuttles are cheaper, because they (and the booster rockets) are reusable
  • They are safe, because they partially work like an airplane
  • There will be weekly launches to space, because it’s quick to get the vehicles ready for the next flight

Reality was much different: two out of five orbiters were lost, killing their crews; a single launch cost on average about half a billion dollars; and it took more than a year to prepare a vehicle for launch.

How could that happen?
At first glance, the concept of the shuttles sounds pretty straightforward and quite suitable for achieving the original goals. As it turned out, however, the details of the implementation were not simple at all and eventually made the Space Shuttle probably the most complex machine humans have ever built. Just a few examples:

  • During countdown, Launch Control at Kennedy Space Center monitored 22,000 parameters to decide whether a shuttle was ready to go. It is no surprise that they often found something and delayed the launch.
  • The heat shield of the orbiters, with its thousands of tiles, was a weak (and expensive) point from the beginning. The Columbia accident further revealed a fundamental design flaw that could never be fixed. Once the issue was known, trying to prevent further accidents only made things even more expensive.
  • Even the Solid Rocket Boosters, which (apart from the main tank) were apparently the simplest part of the shuttles, caused the loss of Challenger.

Now compare this to the Russian space program. The Soyuz rockets have changed only slightly since the 1960s. There have been more than 850 launches, and fatal accidents are rare (partly thanks to a rescue system). As with the shuttles, launch dates are set months in advance – but did you ever hear of a delay?
The Russian space program almost works like Swiss clockwork.

So what can we learn for software projects?
The Russian spaceships are a typical 80% solution. They are cheap and reliable, but they are not reusable, and they can transport either crew or cargo, not both. The Soyuz rockets are not overly sophisticated, but they do the job.
The US Space Shuttles tried to get close to a 100% solution, combining different kinds of goals. The attempt to create something perfect finally led to a complex beast that was difficult and expensive to control.

I once attended a talk by Charles Simonyi, a longtime leading developer at Microsoft and the only space tourist who has been up there twice.
He said that the more he trained for the Russian spaceship, and the more he understood how it worked, the more secure he felt. It is so incredibly simple, it just has to work.
We should be able to say that about our software systems as well.

Jul 27, 2011

A while back, I gave training courses for the “Certified Professional for Requirements Engineering”. The course material included exercises based on the story of an Amazon-like webshop, where participants had to develop a vision, identify and specify some use cases, model the objects and dependencies, and so forth.
A typical question raised during the modelling exercise was the following: The shopping basket and the order – are these two separate entities or are they one entity with different states?
My answer always was that you can choose either of the options – whatever you and the team prefer and what fits best into the rest of your model.
Sometimes this question followed: “Yes, but what is the correct way?”
I always wondered what people made of one answer or the other.
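To make the two options concrete, here is a minimal sketch in Python (the class and field names are my own invustration and not part of the course material):

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# Option 1: two separate entities – checkout turns a basket into an order
@dataclass
class Basket:
    items: list = field(default_factory=list)

    def checkout(self) -> "Order":
        return Order(items=list(self.items))

@dataclass
class Order:
    items: list

# Option 2: one entity with different states
class PurchaseState(Enum):
    BASKET = auto()   # still being filled by the customer
    ORDERED = auto()  # submitted at checkout

@dataclass
class Purchase:
    items: list = field(default_factory=list)
    state: PurchaseState = PurchaseState.BASKET

    def checkout(self) -> None:
        self.state = PurchaseState.ORDERED
```

Both models express the same behaviour; which one fits better depends on the rest of the model – for example, on whether basket and order share most of their attributes and logic.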

To me these episodes indicate a more fundamental problem:
A team member is there to contribute to the work of the whole team. The more the rest of the team can make of an individual’s contribution, the more valuable that input is. However, some people don’t seem to worry about the usefulness of their results. They occupy themselves with trying to fulfil a specified function – they want to do something “right”, but they don’t consider whether it is also effective.

Reasons for such behavior may include:

  • it’s the character of the person, possibly formed by education
  • the person may be overwhelmed by her task and looking for a simple “recipe” she can follow
  • the environment makes people keep their backs covered

Whatever the reason, it hinders the productivity of the whole team and requires action.

Smart contributions
The contributions a problem-solving team requires are variable and depend on current needs. They can hardly be defined upfront with templates and checklists. Instead, each team member needs to listen, feel the vibes and be able to adjust.
Management needs to facilitate this by creating an atmosphere where people feel that their individual contributions matter and are valued. The team as a whole needs to be focussed on the outcome and allowed some room for finding a good solution. This includes making mistakes and learning from them.

The more people you have who are capable of and motivated to deliver smart input, the more productive the project will be – especially when it has to deal with many unknowns and frequent changes. Individual reason and experience are the most valuable contributions a team and a project can get.

Jul 24, 2011

Turning things into routine work is advantageous for management. It simply makes them predictable – you know in advance how the job gets done, what the quality of the result will be, and how you can measure progress on the way.
Wouldn’t it be great to run projects like this, and to get rid of all this uncertainty usually going along with them?
The idea is tempting, and traditional approaches for developing software (e.g. sequential processes or CMMI) try exactly that.

To know or not to know
Jobs done in routine are characterized by a high degree of existing knowledge.
As an example, let’s take the production of a hamburger at your favorite fast-food restaurant. The desired end product is known in detail, and the ingredients as well as the steps to prepare the burger are exactly defined. No problem, just routine repeated thousands of times.
In contrast, software development is usually characterized by a high degree of missing knowledge. The users’ needs are typically expressed in a more or less fuzzy way, details about features and technical implementation only develop over time, the team has to form and learn to cooperate, and eventually some surprising requirements are discovered.
A task associated with obstacles is called a problem. Software projects have many obstacles in terms of unknowns, variables and dependencies. They belong to the most difficult category of problems.

How do you handle that?
Can you treat problem solving the same way as routine work, or is it wise to do so?
Short answer: No.
Problem solving works in a fundamentally different way: one tries a step towards a supposed solution, compares the result with the desired target, learns from this, and defines another step towards a better solution. This continues iteratively until the achieved solution is considered good enough.
That means defining the perfect solution in advance and then just following an implementation plan is not possible, at least not for non-trivial problems.
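The iterative loop described above can be sketched in a few lines of code (the function and parameter names are illustrative, not a real framework):

```python
def solve_iteratively(candidate, improve, distance, good_enough=0.01, max_steps=100):
    """Try a step, compare with the target, learn, and step again
    until the solution is considered good enough."""
    for _ in range(max_steps):
        if distance(candidate) <= good_enough:  # close enough to the target?
            break
        candidate = improve(candidate)          # learn and take the next step
    return candidate

# Example: approximate the square root of 2, starting from a rough guess.
root = solve_iteratively(
    candidate=1.0,
    improve=lambda x: (x + 2 / x) / 2,  # Newton's method step
    distance=lambda x: abs(x * x - 2),  # how far is the candidate from the target?
)
```

The analogy to software: each iteration delivers a candidate (working software), the distance check is the feedback from users, and the improve step is what the team learns from that feedback.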

You can’t just absorb information and hope to synthesize it into a solution. What you need to know about a problem only becomes apparent as you are trying to solve it.
Richard MacCormac, 1976

For software projects this implies that an understanding of the requirements must go along with the development of the solution. It’s an iterative learning process – without working on the solution and trying it out the requirements will never be complete.
This phenomenon can be observed frequently. It is exactly why there are always change requests after a demo or a release: the learning process includes the customer.

The human factor
Understanding how human beings work together in a team also does not leave much room for treating this like routine work. Humans have highly individual ways of thinking and doing knowledge work, individual knowledge and experience, and individual strengths and weaknesses. These aspects not only have a significant influence on how people work together, they also affect the way the problem is solved and what the solution eventually looks like.
Processes in problem-solving teams are driven by evolution and dynamics – the traditional Tayloristic approach of assigning pure functions to people falls short of this.

So even if it may be desirable, software development cannot simply be declared routine work. A modern and productive management of software projects and organizations takes this into account. It advances the development and efficient use of new knowledge, and it advances teams by letting people perform and contribute beyond a mere functional role.

Jul 16, 2011

Large projects are typically so large because they try to solve many complex problems at a time.
People seem to feel that the natural way of getting a handle on such complexity is to increase the level of ceremony: The more is predefined and regulated, the more things seem controlled.
Agile methods in general take the opposite route. Therefore one can often hear and read that agile is a good thing for smaller projects, while larger and more complex ones require a more formal process, maybe even a waterfall.
Can that be substantiated? What is the most productive approach for larger projects?

Managing the unknown
Complex situations are best resolved by isolating problems from one another and tackling them separately. Examples of such a “divide and conquer” approach in a software system would be splitting it by functional areas (e.g. online and batch functions) or by functions per user group. This kind of structuring narrows the scope, allows people to focus and (because the problems are smaller) makes solutions more likely. Governance is required for scope management and to control the interfaces between the areas.
Of course there are always interdependencies, and decomposition may not be straightforward. But taking such decisions is a first step towards reducing the complexity and getting a large initiative on a defined (and therefore controlled) track.

What the process models are saying
The agile mindset in fact leads to exactly this kind of segmentation: it’s a logical consequence of short iterations that are required to result in executable and usable software. While the agile methods (pair programming, collective code ownership etc.) don’t scale well by headcount, the agile philosophy (identify value, decompose into smaller problems, go stepwise) is applicable and value-adding at any project size – and particularly helpful for large ones. Having smaller subprojects in turn allows more of the agile methods to be used again, and their positive impact on team productivity to be regained.

Traditional processes don’t forbid such decomposition, but normally they aim for one big unified solution. Instead, they segment the work into disciplines (e.g. requirements engineering, architecture) and suggest scaling by headcount.
There are three problems with that approach:
1. The Tayloristic separation of roles into disciplines makes communication and knowledge transfer within the team more difficult and therefore less effective. This problem only grows with team size.
2. Going for one big solution does not suit the challenging nature of large and complex problems. It is riskier, because it’s an “all or nothing” approach, and you may not know until rather late whether a solution will be found that does the job.
3. The governance is focussed on the process with its disciplines, not on the outcome.

Problem solving gone large
Large projects are more complex and therefore require a highly effective approach for problem solving and a proper model for governance. One can say:

  • The larger a project, the more it should follow the agile mindset. Isolating problems reduces the risk, increases manageability and leads to earlier results.
  • Governance needs to focus on the outcomes and their alignment.

Traditional processes appear so controllable because they define in detail how the whole job should get done. However, that does not help in developing a solution to a complex problem. Instead, the segmentation into smaller sub-problems helps to gain the required control and to implement an effective governance.

Jul 4, 2011

People don’t really seem to know how to handle the two contradicting approaches in the world of software development – the traditional way with CMMI on the one side, and the agile methods on the other. For those who can’t (or don’t want to) commit to one camp or the other, the diplomatic answer is: do both! Does that make sense? Let’s check.

What is the core of the CMMI philosophy?

It’s a strong belief in the process.

CMMI itself is a model and agnostic of any specific process; it just defines requirements a process is supposed to fulfill. Implementing the Deming cycle of quality control (“Plan-Do-Check-Act”), CMMI requires that the process is defined upfront (“Plan”) and then checked for compliance during execution (“Do”). Reflection on effectiveness (“Check”) and actions to improve the process (“Act”) happen after the process has been executed. Improvements therefore aim to benefit the next project using the same process.

The ultimate aim of CMMI is to have standardized processes across the entire organization – i.e. there is one way of developing software within a company, used by all projects. Projects and people are expected to adhere to the defined standard or record and justify deviations.

CMMI considers the process as the key to make people more productive. In other words, it values the process more than the people.

What is the core of the agile philosophy?

It’s great care for the outcome.

Agile processes are highly iterative because they want to ensure frequent feedback (from the client) on the results (executable software) produced by the team. They are less concerned about how the outcome gets produced (the process). One could say: “We don’t really know what the right process is for this particular team and problem, so we don’t dictate anything and let them find out. We suggest some good practices (e.g. onsite customer, pair programming) and make sure they measure the outcomes frequently and reliably (backlog and sprints).”

Anything to combine?

The two philosophies couldn’t be any more different. While CMMI defines and measures how the outcome is produced, the agile methods simply measure the outcome directly. Some consider exactly this contrast a chance to combine the two.

From a knowledge-work perspective there is no point in doing that. The two follow different strategies for improving productivity – they don’t complement each other, they contradict each other.

A software project (domain, problem and team) is a highly complex system with many variables, unknowns and dependencies. Upfront definitions, as required by CMMI, are always at risk of being overtaken by reality. And inappropriate processes hamper productivity.
Complex environments are like feedback control systems: effects need to be checked after each operation, not only after the project. With control focused on the outcome, the process can and should be adaptive, and the team free to learn and develop its own way. This is why the Agile Manifesto values individuals and interactions over processes.

CMMI may have some interesting practices (to what extent its attempt to turn everything into routine work is applicable to software development will be the subject of a later post). However, focussing on the process creates functions where people’s reason and creativity are needed, and it distracts attention from what really requires it when solving a problem – the solution. The agile approach is therefore much more straightforward and naturally effective.

One thing the agile and knowledge working community still need to think about is sustainability in a post-CMMI world. How can an organization continuously improve and learn from successes and failures?