Towards the end of the last century, the software industry was in a state of turmoil. There were some significant software project failures, which were having a substantial reputational impact on the industry as a whole. One observation was that software projects over a certain size were more likely to fail.
In this chapter, we'll discuss the factors that led to a crisis and the subsequent major turning-point in the software industry. In the second part of this chapter, we'll introduce the Agile Manifesto and its origins and discuss how it revolutionized the way we think about building software. We'll explain the Manifesto's values and principles, what they each mean, and the impact they have on how software professionals work.
In this chapter, we will cover the following topics:
- Why the software industry needed to change
- The origins of the Agile Manifesto
- A detailed look at the values and principles of the manifesto and how they translate to today's context
- Adaptive versus predictive planning
- Incremental versus waterfall delivery
- Agile isn't a process, it's a mindset: one built on guiding values and principles, and one that requires solid technical practices
When I was outlining this book, I was in two minds about whether to include a section on the history of the software industry and how Agile came to be. I figured most of you would be young enough not to know any different; you're probably doing something Agile or closely related to Agile already. However, I decided to include this chapter for two reasons:
- I still see echoes of the past affecting us today
- "Why" we do something is important to inform "what" we do and "how" we do it
When working as a software developer in the 1990s and 2000s, I worked in a range of different organization types, from software houses to software consultancies, and from large organizations such as corporates and central government to small three-person startups. In my experience across these different organization types, we used two noticeably different styles of software delivery: treating software either as a product or as a project.
When building software products, we tended to form long-lived teams around them. The team would be responsible for the product throughout its life. During the initial build and subsequent revisions, they handled the entire Software Development Life Cycle (SDLC) end-to-end, including delivery into production. These teams were also often responsible for managing the production environment and providing support.
The scale of the product, and how widely it was adopted, determined whether it was managed by a single software team or by a network of teams all working for one product group. One thing that was noticeable about software product teams, apart from their long-lived nature, was that they often had good relationships with their customers as well as technology and operations staff. Sometimes they even had representatives of these groups within the teams.
Funding these products often took place on a yearly cycle, which coincided with setting the objectives for the product's development that year. A key thing to note is that the budget would often be allocated to fund the team(s) and the decisions on what features would be developed, enhanced, or fixed that year would be managed by the team itself.
When treating software delivery as a project, the approach was often very different. The project team would only form for the duration of the build of the software.
Once built, the software would often be handed over to a separate team, known as the Business As Usual (BAU) team in business parlance. They would manage the maintenance and support of the product. There was a two-fold intention in handing over to a separate BAU team:
- They would handle the bulk of changes to the organization, for example, by training all impacted staff on how to use the new software and associated business processes. Once they'd introduced the changes, the BAU team's aim would then be to create and support a stable business environment.
- The software delivery team would be free to move on to the next project. Software delivery teams were a scarce resource, and this was seen as a way to optimize software delivery.
Software projects required project managers, who often operated in a change management capacity as well, although sometimes separate change managers were allocated. In this way, they would also be responsible for seeing that the introduction of the new software platform to the organization and the transition to BAU went smoothly. The project team itself would often be managed by a separate unit, known as the Project Management Office (PMO), which reported directly to the business.
What I discovered from these two contrasting approaches caused a profound shift in my thinking: when delivering software as a product, there was much more opportunity to focus on value because the product team was long-lived and was able to adopt an iterative/incremental approach to delivery. We, therefore, had multiple opportunities to deliver, refine our strategy, and get things right.
However, with software delivery as a project, more often than not the software delivery team only had one opportunity to get it right. Successfully delivering a software project to meet expectations with only one shot just didn't happen that often.
Even when we did deliver, it was often as a result of a considerable effort on our part to get it across the line, including many lost evenings and weekends.
Subsequent revisions of the software were often handled as separate projects and likely by a different software team, often leading to a lack of continuity and knowledge sharing.
As a result, there was a distinct lack of trust between us, the software professionals, and the people we were building software for. Unfortunately, "us" often became "them and us," with unmet expectations so often the cause of the rift in the relationship.
We tried to solve this problem by making things more precise. Unfortunately, the version of precision the project mindset opted for had only three predictive aspects—scope, time, and budget. All three of these things are very difficult to quantify when tied to a software project where complexity, uncertainty, and sheer volume of work could and did amplify any errors in these calculations.
However, the single most significant problem when you tie down all three of these factors, scope, time and budget, is that something colloquially known as the Iron Triangle forms. Refer to the following figure:
When you set scope, date, and budget like that, there is little room to maneuver or deviate from the plan. To help mitigate risks, most project managers will create buffers in their schedules. However, the rigid nature of the triangle means that if and when overruns start to eat more and more into the buffers, something else has to give. And what usually occurs when a software development team is under pressure to deliver? One or more of the following qualities of your software product will start to suffer:
- Functionality: Whether it works as expected
- Reliability: How available it is
- Usability: How intuitive it is to use
- Scalability: How well it performs as load grows
- Maintainability: How easy it is to change
- Portability: How easy it is to move to a different platform
To understand why precision in predicting the outcome of a software project is so hard to achieve, we need to unpack things a little.
At the time, many felt that the functionality of the delivered software was the priority and would often seek to lock the scope of the project. We allowed for this by having some degree of variability in the budget and the schedule.
At the time, many project managers would work to the PMI or ISO 9000 guidelines on the definition of quality. Both of these had a reasonably straightforward quality definition requiring the scope of the project to be delivered in a fully functional form.
To meet the scope expectation, we had to estimate, to a fair degree of precision, how long it would take us. In this way, we would be able to determine the length of time needed and the number of people required for the team.
And it was at the estimate stage that we often set ourselves up to fail.
One thing that was increasingly obvious in the software industry was that our approaches to estimating or predicting the outcome of a project were out of whack. As a software team, we would often be passed detailed requirements to analyze; we would then do some initial design work and provide a work breakdown with estimates.
We'd be working on one project and then, at the same time, be asked to estimate on another. We were told the estimates would inform the decision on whether the project was viable and would be funded. Some teams tried to be sophisticated, using algorithmic estimating techniques such as the Constructive Cost Model (COCOMO), COCOMO II, Source Lines of Code (SLOC), and Function Points. Most estimating methods incorporated rule-of-thumb or experience-based factors, as well as factors for complexity and certainty.
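To give a flavor of what these algorithmic techniques looked like, here is a sketch of the Basic COCOMO model in Python. The coefficients are Boehm's published values for the three project classes; the 32 KLOC scenario is invented for illustration, and real estimates layered many more cost drivers on top of this.

```python
# Basic COCOMO (Boehm, 1981): effort in person-months = a * KLOC^b,
# schedule in calendar months = c * effort^d.
COEFFICIENTS = {
    # mode:          (a,   b,    c,   d)
    "organic":       (2.4, 1.05, 2.5, 0.38),  # small team, familiar problem
    "semi-detached": (3.0, 1.12, 2.5, 0.35),  # mixed experience, medium size
    "embedded":      (3.6, 1.20, 2.5, 0.32),  # tight, inflexible constraints
}

def basic_cocomo(kloc: float, mode: str = "organic"):
    """Return (effort person-months, duration months, average team size)."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b
    duration = c * effort ** d
    return effort, duration, effort / duration

effort, months, team = basic_cocomo(32, "semi-detached")
print(f"{effort:.0f} person-months over {months:.0f} months, "
      f"average team of ~{team:.0f}")
```

Notice that the inputs are themselves estimates (how many thousands of lines of code will this be?), which is one reason such models could only ever be as good as the guesses fed into them.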
Sometimes the estimates would be given by people who weren't going to work on the project. This was either because the project leadership group didn't want to disturb an existing project team, or because, without funding, the project teams weren't formed yet. This meant the involvement of someone like an architect or a technical leader who could break down the work and estimate based on the solution they devised. Most complicated problems have several solutions, so if those who had done the solution work breakdown weren't available to provide guidance on how to implement their solution, obviously this could cause trouble later on.
Either way, whoever provided the estimate would give it based on the best solution they could theorize with the information they were given. More often than not, the first estimate given would be used as a way to control the project, and it would be pretty much set in stone. This was a pattern I stumbled across many times. When this kept happening, we tried to improve the accuracy of our estimates by spending time doing more upfront work. But the reality is that to produce an exact estimate of effort, so that time frames and budgets can be drawn up, you would pretty much have to complete all the work in the first place.
It also became a bit of a standing joke that these estimates were not only painstakingly worked on over days or weeks, but as soon as we gave them to a project manager, they would double the figures. When we challenged that practice, they'd remind us that all software developers are optimistic and that buffers were needed.
But it's not that we're optimistic by nature; in fact, a lot of us had already factored in our own buffer allowances. The explanation is more straightforward: the work we do is novel. More often than not, we are working on something entirely different from what we've built before: different domain, different technology stack, different frameworks, and so on.
Some teams would combat this by offering two figures as part of their estimates: the first being the time estimate, the second being the level of certainty that they could complete it within that timeframe. If the task was straightforward, they would offer their estimate with 100% certainty. If the task was more complicated, they would lower the percentage accordingly: the higher the uncertainty, the lower the percentage.
This certainty factor could then be used to allocate a buffer. As certainty got higher, the buffer would get smaller, but even with 100% certainty, there would still be a buffer. After all, there is no such thing as an ideal day as much as we would like to think there is.
At the opposite end of the spectrum, the more the uncertainty in the estimate, the larger the buffer. At the extreme, it was not uncommon to have buffers of 200%.
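A scheme like this can be expressed as a simple formula. The following sketch is one possible mapping from a certainty percentage to a buffered estimate; the minimum buffer and the scaling factor are assumptions invented for the example, not an industry standard.

```python
# Illustrative certainty-to-buffer mapping: even a 100% certain task keeps
# a small buffer, and a 0% certain task attracts an extra 200% buffer,
# matching the extremes described in the text.
MIN_BUFFER = 0.10   # assumed: buffer applied even at full certainty
MAX_EXTRA = 2.00    # assumed: extra buffer at zero certainty (200%)

def buffered_estimate(days: float, certainty: float) -> float:
    """Pad an estimate in days according to certainty (0.0 to 1.0)."""
    buffer = MIN_BUFFER + MAX_EXTRA * (1.0 - certainty)
    return round(days * (1.0 + buffer), 1)

print(buffered_estimate(10, 1.0))   # straightforward task: 11.0 days
print(buffered_estimate(10, 0.5))   # uncertain task: 21.0 days
```

The exact numbers matter less than the shape: the padding grows rapidly as certainty falls, which is why uncertain projects ended up with eye-watering buffers.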
The chronic misestimation of development effort has led to some Dilbert-esque observations of the way we work.
For example, we would often refer ironically to a task as just a "small matter of programming," when someone with little or no understanding of what was involved was telling us it looked easy.
We also developed ironic laws about underestimation such as the Ninety-Ninety rule, which states the following:
"The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time." (Tom Cargill, Bell Labs)
This rule was later made popular by Jon Bentley's September 1985 Programming Pearls column in Communications of the ACM, where it is titled "Rule of Credibility".
The biggest problem with estimation is the amount of information we assume. We make assumptions on how to solve the business problem, the technologies we're going to use, and the capabilities of the people building it. So many factors.
The level of complexity and uncertainty impacts our ability to give an accurate estimate because there are so many variables at play. This, in turn, is amplified by the size of the piece of work. The result is something referred to as the Cone of Uncertainty:
Barry Boehm first described this concept in his book Software Engineering Economics, 1981; he called it the Funnel Curve. It was named The Cone of Uncertainty in the Software Project Survival Guide (McConnell 1997).
It shows us that the further we are away from completion, the larger the variance in the estimate we give. As we move closer to completion, the more accurate our estimate will become, to the point where we complete the work and know exactly how long it took us.
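The cone can be sketched numerically. The multipliers below are the commonly cited milestone ranges from McConnell's Software Project Survival Guide; treat them as rough rules of thumb rather than measured constants.

```python
# The Cone of Uncertainty as estimate ranges around a single point estimate.
CONE = [
    # (milestone, low multiplier, high multiplier)
    ("Initial concept",             0.25, 4.00),
    ("Approved product definition", 0.50, 2.00),
    ("Requirements complete",       0.67, 1.50),
    ("Product design complete",     0.80, 1.25),
    ("Detailed design complete",    0.90, 1.10),
    ("Software complete",           1.00, 1.00),
]

def estimate_range(point_estimate_months: float):
    """Show how the range around one point estimate narrows over time."""
    for milestone, low, high in CONE:
        print(f"{milestone:28} {point_estimate_months * low:5.1f} - "
              f"{point_estimate_months * high:5.1f} months")

estimate_range(12)
```

For a 12-month point estimate, the true figure at the initial-concept stage could plausibly sit anywhere between 3 and 48 months, a 16-fold spread, which is exactly why early estimates treated as commitments caused so much grief.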
So, while it was felt that better precision could be gained using a gated process such as Waterfall, that approach led to a tendency to bundle more of the things we wanted to get done together, which significantly increased the size of the work parcel. This, in turn, compounded the problem of getting an accurate estimate.
Sometimes, of course, we'd fail to deliver something of use to our customer and miss the point entirely.
I remember one project I worked on where we were given ideal working conditions. The teams were offsite, in our own offices, which meant we could be dedicated to the project we were working on without being disturbed or shoulder-tapped by others. It felt like the perfect setup for success.
We spent ten months painstakingly delivering precisely to requirements. Everything was built and tested according to the detailed designs we were given. We were even on budget and on time when we delivered. Unfortunately, when the software went live and was in the hands of the people using it, they reported back that it didn't do the job they needed it to do.
Why had we failed? We'd failed because we spent ten months building something in isolation from our customer. We hadn't involved them in the implementation, and too many assumptions had been made. Diagrammatically, this looked a little like the following:
We then spent the next six months reworking the software into something that was usable. Unfortunately, for the partner company working alongside our team, this meant a major variance in their contract, most of which they had to swallow. We eventually did deliver something to our customer that they wanted and needed but at a huge financial impact on us and our partner company.
This, unfortunately, is the path that predictive planning sets you on. You develop a fixed mindset around what is to be delivered because you know if you aren't dogmatic in your approach, you're likely to fail to meet the date, budget, or scope set in the contract.
One of the key characteristics to understand about the nature of predictive planning is that the minute someone says, "How much is this going to cost?", or "When can this be delivered?", they significantly constrain the value that will be created. And, at the end of the day, shouldn't it be about maximizing the value to your customer? Imagine if we spent one more month on a project and delivered twice as much value. Wouldn't that be something our customer would want over meeting a date or a particular set of features just because it's been speculatively laid out in a contract?
Instead, we focus on a date, an amount of money, and the functionality. And unfortunately, functionality doesn't always translate to value (as we saw in the previous section when we missed the point entirely).
This is why the predictive planning used in Waterfall-style deliveries has also become known as faith-driven development: it leaves so much to chance, usually right until the end of the project.
To focus on value delivery, we have to shift our mindset to use adaptive planning versus predictive planning, something that we will talk about later in this chapter.
It's not that the project mindset is bad, it's just that the mindset drives us to think that we need a big upfront design approach in order to obtain a precision estimate. As noted, this can lead to a number of issues, particularly when presented with a large chunk of work.
And it isn't that upfront design is bad; it's often needed. It's just the big part, when we try to do too much of it, that causes us problems.
There were a number of bad behaviors feeding the big thinking happening in the industry. One of these is the way that work is funded, as projects. For a particular project to get funded, it has to demonstrate its viability at the annual funding round.
Unfortunately, once people in the management seats saw a hard number, it often became set in stone as an expectation. During the execution of the plan, if new information was discovered that amounted to anything more significant than a small variance, there would be a tendency to avoid acting on it. The preference would be to stick to the plan rather than incorporate the change; you were seen as a better project manager for doing that.
So, we had a chicken-and-egg scenario; the project approach to funding meant that:
- The business needed to know the cost of something so they could allocate a budget.
- We needed to know the size of something and its technical scope and nature so that we could allocate the right team in terms of size and skill set.
- We had to do this while accepting:
- That our business didn't know exactly what it needed. Software is intangible by nature; most people don't know what they want or need until they see it and use it.
- The business itself wasn't stationary; just because we recorded requirements at a particular moment in time, didn't mean that the business would stop evolving around us.
So, if we can avoid the big part, we're able to reduce the level of uncertainty and subsequent variability in our estimates. We're also able to deliver in a timely fashion, which means there is less likelihood of requirements going out of date or the business changing its mind.
We did try to remedy this by moving to prototyping approaches. This enabled us to make iterative sweeps through the work, refining it with people who could actually use the working prototype and give us direct feedback.
Rapid Application Development, or RAD as it's commonly known, is one example of an early iterative process. Another was the Rational Unified Process (RUP).
Working on the principle that many people didn't know what they wanted until they saw it, we used RAD tools such as Visual Basic/Visual Studio to put semi-working prototypes together quickly.
But we still always managed to bite off more than we could chew. I suspect the main reason for this is that our customers still expected us to deliver something they could use. While prototypes gave the appearance of doing that and did enable us to get feedback early:
- At the end of the session, after getting the feedback we needed, we'd take the prototype away, much to our customer's disappointment; they had assumed our mockup was working software.
- We still hadn't delivered anything they could use in their day-to-day life to help them solve real-world problems.
To try to find a remedy to the hit-and-miss approach to software delivery, a group of 17 software luminaries came together in February 2001. The venue was a cabin in Utah, which they chose, as the story goes, so that they could ski, eat, and look for an alternative to the heavyweight, document-driven processes that seemed to dominate the industry.
Among them were representatives from Extreme Programming (XP), Scrum, Dynamic Systems Development Method (DSDM), Adaptive Software Development (ASD), Crystal, Feature-Driven Development, and Pragmatic Programming.
While at least one of them commented that they didn't expect anything substantive to come out of that weekend, what they did in fact formulate was the manifesto for Agile software development.
The manifesto documents four values and twelve principles that uncover "better ways of developing software by doing it and helping others do it".
The group formally signed the Manifesto and named themselves the Agile Alliance. We'll take a look at the Agile Values and Principles in the following sections.
Here is the Manifesto for Agile Software Development (http://agilemanifesto.org/):
To understand the four values, you first have to read and understand the subtext at the end:
That is, while there is value in the items on the right, we value the items on the left more.
Let's look at how this works by looking at each value in more detail:
- Individuals and interactions over processes and tools: In an Agile environment, we still have processes and tools, but we prefer to keep our use of them light, because we value communication between individuals. If we're to foster successful collaboration, we need common understanding between technical and non-technical people. Tools and processes have a tendency to obfuscate that.
A good example is the User Story, an Agile requirement gathering technique, usually recorded on an index card. It's kept deliberately small so that we can't add too much detail. The aim is to encourage, through conversation, a shared understanding of the task.
In the same way, we should look at all of the following Agile values:
- Working software over comprehensive documentation: As a software delivery team, our primary focus should be on delivering the software—fit for purpose, and satisfying our customer's need.
In the past, we've made the mistake of using documents to communicate to our customer what we're building. Of course, this led to much confusion and potential ambiguity. Our customer isn't an expert in building software and would, therefore, find it pretty hard to interpret our documentation and imagine what we might be building. The easiest way to communicate with them is via working software that they can interact with and use.
By getting something useful in front of our customer as soon as possible, we might discover if we're thinking what they're thinking. In this way, we can build out software incrementally while validating early and often with our customer that we're building the right thing.
- Customer collaboration over contract negotiation: We aim to build something useful for our customer and hopefully get the best value for them we can. Contracts can constrain this, especially when you start to test the assumptions that were made when the contract was drawn up. More often than not there are discoveries made along the way, or the realization that something was forgotten or that it won't work the way we were expecting. Having to renegotiate a contract, or worse still, recording variances to be carried out at a later stage, both slow down and constrain the team's ability to deliver something of value to the customer.
- Responding to change over following a plan: When considering this Agile Value, it is worth drawing a comparison with the military.
The military operates in a very fluid environment; while they will undoubtedly have a plan of attack, this is often based on incomplete information about the enemy's strength and whereabouts. The military very much has to deal with known knowns, known unknowns, and unknown unknowns.
This is what we call a planning-driven environment; they're planning constantly throughout the battle as new information becomes available.
So when going into battle, while they have group objectives, the military operate with a devolved power structure and delegated authority so that each unit can make decisions on the ground as new information is uncovered. In this way, they can respond to new information affecting the parameters of their mission, while still getting on with their overall objective. If the scope of their mission changes beyond recognition, they can use their chain of command to determine how they should proceed and re-plan if necessary.
In the same way, when we're building software, we don't want to blindly stick to a plan if the scope of our mission starts to change. The ability to respond to new information is what gives us our agility; sometimes we have to deviate from the plan to achieve the overall objective. This enables us to maximize the value delivered to our customer.
The signatories to the Manifesto all shared a common background in light software development methodologies. The principles they chose reflect this. Again the emphasis is on people-focused outcomes. Each of the following principles supports and elaborates upon the values:
- Our highest priority is to satisfy the customer through the early and continuous delivery of valuable software: In encouraging incremental delivery as soon and often as we can, we can start to confirm that we are building the right thing. Most people don't know what they want until they see it, and in my experience, use it. Taking this approach garners early feedback and significantly reduces any risk to our customer.
- Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage: Instead of locking scope and ignoring evolving business needs, adapt to new discoveries and re-prioritize work to deliver the most value possible for your customer. Imagine a game of soccer where the goal posts keep moving; instead of trying to stop them moving, change the way you play.
- Deliver working software frequently, from a couple of weeks to a couple of months, with a preference for the shorter timescale: The sooner we deliver, the sooner we get feedback. Not only from our customer that we're building the right thing, but also from our system that we're building it right. Once we get an end-to-end delivery taking place, we can start to iron out problems in our integration and deployment processes.
- Business people and developers must work together daily throughout the project: To get a good outcome, the customer needs to invest in the building of the software as much as the development team. One of the worst things you can hear from your customer as a software developer is, "You're the expert, you build it." It means that they are about to have very little involvement in the process of creating their software. And yes, while software developers are the experts at building software, and have a neat bunch of processes and tools that do just that, we're not the expert in our customer's domain and we're certainly not able to get inside their heads to truly understand what they need. The closer the customer works with the team, the better the result.
- Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done: A software development team is a well-educated bunch of problem solvers. We don't want to constrain them by telling them how to do their jobs; the people closest to solving the problem will get the best results. Even the military delegate authority to the people on the frontline because they know if the objective is clear, those people are the ones who can and will get the job done.
- The most efficient and effective method of conveying information to and within a development team is face-to-face conversation: Face-to-face conversation is a high-bandwidth activity that not only includes words but facial expressions and body language too. It's the fastest way to get information from one human being to another. It's an interactive process that can be used to quickly resolve any ambiguity via questioning. Couple face-to-face conversation with a whiteboard, and you have a powerhouse of understanding between two or more individuals. All other forms of communication pale in comparison.
- Working software is the primary measure of progress: When you think about a software delivery team, and what they are there to do, then there really is nothing else to measure their progress. This principle gives us further guidance around the Agile value working software over comprehensive documentation.
The emphasis is on working software because we don't want to give any false indicators of progress. For example, if we deliver software that isn't fully tested, then we know it isn't complete; it still has to go through several cycles of testing and fixing. It hasn't moved us any closer to completing that piece of work because it's still not done.
Done is in the hands of our customer, done is doing the job it was intended to do. Until that point, we aren't 100% sure we've built the right thing, and until that moment we don't have a clear indication of what we might need to redo.
Everything else the software team produces just supports the delivery of the software, from design documents to user guides.
- Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely: Putting a software delivery team under pressure to deliver happens all the time; it shouldn't, but it does. There are a number of consequences of doing this, some of which we discussed earlier in this chapter.
For example, put a team under pressure for long enough, and you'll seriously impact the quality of your product. The team will work long hours, make mistakes, take shortcuts, and so on to get things done for us. The result won't just affect quality, but also the morale of our team, and their productivity. I've seen this happen time and time again; it results in good people leaving along with all the knowledge they've accumulated.
This principle aims to prevent that scenario from happening. It means we have to be smart and find alternative ways of getting things done sooner: seeking value, prioritizing ruthlessly, delivering working software, focusing on quality, and allowing teams to manage their work in progress so they can avoid multitasking.
Studies have shown that multitasking causes context switching time losses of up to 20%. When you think about it, when you're solving complex problems, the deeper you are into the problem, the longer it takes to regain context when you pick it back up. It's like playing and switching between multiple games of chess. It's not impossible, but it definitely adds time.
I've also seen multitasking defined as messing up multiple things at once.
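The cost of juggling tasks can be modeled crudely. The sketch below assumes roughly 20% of capacity is lost for each additional concurrent task, in line with the often-quoted rule of thumb; the figures are illustrative, not a measured law.

```python
# Rough model of context-switching losses: each extra concurrent task
# costs an assumed fixed slice of overall capacity.
def effective_capacity(tasks: int, loss_per_extra_task: float = 0.20) -> float:
    """Fraction of working time actually spent on task work."""
    lost = loss_per_extra_task * (tasks - 1)
    return max(1.0 - lost, 0.0)

for n in range(1, 6):
    capacity = effective_capacity(n)
    print(f"{n} task(s): {capacity:.0%} productive, "
          f"{capacity / n:.0%} per task")
```

Under this model, someone on three projects at once is only 60% productive overall, and each project gets a mere 20% of their time, which matches the "messing up multiple things at once" experience rather well.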
- Continuous attention to technical excellence and good design enhances agility: By using solid technical practices and attention to detail when building software, we improve our ability to make enhancements and changes to our software.
For example, Test-Driven Development (TDD) is a practice which is as much about designing our software as it is testing it. It may seem counter-intuitive to use TDD at first, as we're investing time in a practice that seemingly adds to the development time initially. In the long term, however, the improved design of our software and the confidence it gives us to make subsequent changes enhances our agility.
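The TDD rhythm can be shown in miniature with plain asserts: write a failing test first (red), write just enough code to make it pass (green), then refactor with the test as a safety net. The function and scenario below are invented for the example.

```python
# Red: this test is written first, before add_line_item exists,
# and acts as an executable specification of the behavior we want.
def test_add_line_item():
    assert add_line_item(10.0, 2.5, 2) == 15.0   # 2 items at 2.50 each
    assert add_line_item(0.0, 9.99, 0) == 0.0    # zero quantity adds nothing

# Green: the simplest code that makes the test pass.
def add_line_item(total: float, price: float, quantity: int) -> float:
    return total + price * quantity

# The passing test now gives us the confidence to refactor freely.
test_add_line_item()
print("tests pass")
```

The design benefit comes from writing the call before the implementation: the test forces us to decide on a small, usable interface up front.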
Technical debt is a term first coined by Ward Cunningham. It describes the accumulation of poor design that crops up in code when decisions have been made to implement something quickly. Ward described it as Technical Debt because if you don't pay it back in time, it starts to accumulate. As it accumulates, subsequent changes to the software get harder and harder. What should be a simple change suddenly becomes a major refactor/rewrite to implement.
- Simplicity—the art of maximizing the amount of work not done—is essential: Building the simplest thing that fits the current need prevents defensive programming, also known as "future proofing." If we're not sure whether our customer needs something or not, we should talk to them. If we're building something we're not sure about, we may be solving a problem that we don't have yet.
Remember the You Ain't Gonna Need It (YAGNI) principle when deciding what to do. If you don't have a hard and fast requirement for it, don't do it.
Complexity is one of the biggest causes of bugs in our code. Anything we can do to simplify it helps us reduce bugs and makes our code easier for others to read, making it less likely that they'll introduce bugs too.
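To illustrate YAGNI, here's a hedged sketch contrasting a speculative, "future-proofed" design with the simplest thing that meets today's need. The discount requirement, the `DiscountRule` class, and the `apply_discount` function are all hypothetical:

```python
# Hypothetical requirement: "orders over $100 get a 10% discount". That's all.

# Speculative version: a configurable rule engine nobody asked for.
class DiscountRule:
    def __init__(self, threshold, rate, currency="USD", stackable=False):
        self.threshold = threshold
        self.rate = rate
        self.currency = currency    # no requirement for other currencies yet
        self.stackable = stackable  # no requirement for stacking rules yet

    def apply(self, total):
        return total * (1 - self.rate) if total > self.threshold else total

# YAGNI version: the simplest code that satisfies the current requirement.
def apply_discount(total):
    """Orders over $100 get a 10% discount."""
    return total * 0.9 if total > 100 else total
```

The second version is easier to read, test, and delete; if multi-currency or stackable discounts ever become a real requirement, we can grow the design then, with that requirement in front of us.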
- The best architectures, requirements, and designs emerge from self-organizing teams: People nearest to solving the problem are going to find the best solutions. Because of their proximity, they will be able to evolve their solutions so that all aspects of the problem are covered. People at a distance are too removed to make good decisions. Employ smart people, empower them, allow them to self-organize, and you'll be amazed by the results.
- At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly: This is one of the most important principles in my humble opinion and is also my favorite. A team that takes time to inspect and adapt their approach will identify actions that will allow them to make profound changes to the way they work. The regular interval, for example, every two weeks, gives the team a date in their diary to make time to reflect. This ensures that they create a habit that leads to a continuous improvement mindset. A continuous improvement mindset is what sets a team on the right path to being the best Agile team they can be.
The Agile Manifesto advocates incremental delivery using adaptive planning. In this section, we compare and contrast this approach with the more traditional approach of Waterfall delivery and predictive planning.
In the following section, we'll look at both approaches and some of the impacts on how we deliver software. Before we launch into the detail, here's a quick comparison of the two approaches:
The traditional delivery model known as Waterfall was first shown diagrammatically by Dr. Winston W. Royce, who captured what was happening in the industry in his paper Managing the Development of Large Software Systems (Proceedings WesCon, IEEE CS Press, 1970).
In it, he describes a gated process that moves in a linear sequence. Each step, such as requirements gathering, analysis or design, has to be completed before handover to the next step.
It was presented visually in Royce's paper in the following way:
The term Waterfall was coined from the observation that, just like a real waterfall, once you've moved downstream, it's much harder to return upstream. This approach is also known as a gated approach because each phase has to be signed off before you can move on to the next.
He further observed in his paper that, to de-risk this approach, there should be more than one pass through the process, with each iteration improving and building on what was learned in the previous one. In this way, you could deal with complexity and uncertainty.
For some reason, though, not many people in the industry got the memo. They continued to work in a gated fashion but, rather than making multiple passes, expected the project to be complete in just one cycle or iteration.
To control the project, a highly detailed plan would be created, which was used to predict when the various features would be delivered. The predictive nature of this plan was based entirely on the detailed estimates that were drawn up during the planning phase.
This led to multiple points of potential failure within the process, and usually with little time built into the schedule to recover. It felt almost de rigueur that at the end of the project some form of risk assessment would take place before finally deciding to launch with incomplete and inadequate features, often leaving everyone involved in the process stressed and disappointed.
The Waterfall process is a throwback to a time when software was built more like a traditional engineering project. It has also been nicknamed faith-driven development because it doesn't deliver anything until the very end of the project. Its risk profile, therefore, looks similar to the following figure:
No wonder all those business folks were nervous. Often their only involvement was at the beginning of Software Development Life Cycle (SDLC) during the requirements phase and then right at the end, during the delivery phase. Talk about a big reveal.
The key point in understanding a plan-driven approach is that scope is often nailed down at the beginning. To then deliver to scope requires precise estimates to determine the budget and resourcing.
The estimation needed for that level of precision is complicated and time-consuming to complete. This leads to more paperwork, more debate, in fact, more of everything. As the process gets bigger, it takes on its own gravity, attracting more things to it that also need to be processed.
The result is a large chunk of work with a very detailed delivery plan. However, as already discussed, large chunks of work carry more uncertainty and more variability, calling into question the ability to give a precise estimate in the first place.
And because so much effort was put into developing the plan, an irrational attachment to it develops. Instead of deviating from the plan when new information is uncovered, the project manager tries to control the variance by minimizing or deferring it.
Over time, and depending on the size of the project, this can result in a substantial deviation from reality by the time the software is delivered, as shown in the following diagram:
This led to much disappointment for people who had been waiting many months to receive their new software. The gap in functionality would often cause some serious soul-searching on whether the software could be released in its present state or whether it would need rework first.
No-one wants to waste money, so it was likely that the rollout would go ahead and a series of updates would follow that would hopefully fix the problems. This left the people using the software facing a sometimes unworkable process that would lead them to create a series of workarounds. Some of these would undoubtedly last for the lifetime of the software because they were deemed either too trivial or too difficult to fix.
Either way, a business implementing imperfect software that doesn't quite fit its process is faced with, often undocumented, additional costs as users try to work around the system.
For those of us who have tried building a large, complex project in a predictive, plan-driven way, there's little doubt that it often fails to deliver excellent outcomes for our customer. The findings of the Standish Group's annual CHAOS Report are a constant reminder: we're still better at delivering small software projects than large ones, and Waterfall or predictive approaches are more likely to result in a project being challenged or deemed a failure, regardless of size.
Incremental delivery seeks to de-risk the approach by delivering small chunks of discrete value early and often to get feedback and reduce uncertainty. This allows us to determine sooner rather than later, whether we're building the right thing.
As you can see from the following hypothetical risk profile, by delivering increments of ready-to-use working software, we reduce risk significantly after only two or three iterations:
This is combined with an approach to planning that allows us to quickly pivot or change direction based on new information.
With an adaptive plan, the focus is on prioritizing and planning for a fixed horizon, for example, the next three months. We then seek to re-plan once further information has been gathered. This allows us to be more flexible and ultimately deliver something that our customer is much more likely to need.
The following diagram shows that each iteration or increment in an adaptive planning approach allows an opportunity for a correction to the actual business needs:
The final thing I'd like you to consider in this chapter is that Agile isn't one particular methodology or another. Neither is it a set of technical practices, although these things do give an excellent foundation.
On top of these processes, tools, and practices, if we layer the values and principles of the manifesto, we start to evolve a more people-centric way of working. This, in turn, helps build software that is more suited to our customer's needs.
In anchoring ourselves to human needs while still producing something that is technically excellent, we are far more likely to make something that meets and goes beyond our customer's expectations. The trust and respect this builds will begin a powerful collaboration of technical and non-technical people.
Over time, as we practice the values and principles, we not only start to determine what works well and what doesn't, but we also start to see how we can bend the rules to create a better approach.
This is when we start to become truly Agile. When the things we do are still grounded in sound processes and tools, with good practices, but we begin to create whole new ways of working that suit our context and begin to shift our organizational culture.
When discussing the Agile Mindset, we often talk about the difference between "doing Agile" and "being Agile."
If we're "doing Agile", we are just at the beginning of our journey. We've probably learnt about the Manifesto. Hopefully, we've had some Agile or Scrum training and now our team, who are likely to have a mix of Agile backgrounds, are working out how to apply it. Right now we're just going through the motions, learning by rote. Over time, with the guidance of our Scrum Master or Agile Coach, we'll start to understand the meaning of the Manifesto and how it applies to our everyday work.
Over time, our understanding deepens, and we begin to apply the values and principles without thinking. Our tools and practices allow us to be productive, nimble, and yet still disciplined. Rather than seeing ourselves as engineers, we see ourselves as craftspeople. We act with pragmatism, we welcome change, and we seek to add business value at every step. Above all else, we're fully tuned to making software that people both need and find truly useful.
If we're not there now, don't worry; we're just not there yet. To give a taste of what it feels like to be on a team that's thinking with an Agile Mindset, here's an example scenario.
Imagine we're just about to release a major new feature when our customer comes to us with a last-minute request. They've spotted that something isn't working quite as they expected, and they believe we need to change the existing workflow. Their biggest fear is that it will prevent our users from doing a particular part of their job.
Our team would respond as a group. We'd welcome the change. We'd be grateful that our customer has highlighted this problem to us and that they found it before we released. We would know that incorporating a change won't be a big issue for us; our code, testing and deployment/release strategies are all designed to accommodate this kind of request.
We would work together (our customer is part of the team) to discover more about the missing requirement. We'd use our toolkit to elaborate the feature with our customer, writing out the User Stories (an Agile requirement gathering tool we'll discuss in Chapter 4, Gathering Agile User Requirements) and if necessary prototyping the user experience and writing scenarios for each of the Acceptance Criteria.
We'd then work to carry out the changes in our usual disciplined way, likely using TDD to design and unit/integration test our software as well as Behavior-Driven Development (BDD) to automate the acceptance testing.
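BDD tools such as Cucumber or behave express Acceptance Criteria as Given/When/Then scenarios; the same shape can be sketched in plain Python. The `Workflow` class and its submission rule below are hypothetical, invented to show how a scenario maps onto an executable acceptance test:

```python
# Hypothetical domain object for the changed workflow in the scenario above.
class Workflow:
    def __init__(self):
        self.steps_done = []

    def complete(self, step):
        self.steps_done.append(step)

    def can_submit(self):
        # Acceptance Criterion: users may only submit once review is complete.
        return "review" in self.steps_done

def test_user_can_submit_after_review():
    # Given a user partway through the workflow
    workflow = Workflow()
    # When they complete the review step
    workflow.complete("review")
    # Then submission becomes available
    assert workflow.can_submit()

def test_user_cannot_submit_before_review():
    # Given a fresh workflow with no steps completed
    workflow = Workflow()
    # Then submission is not yet available
    assert not workflow.can_submit()

test_user_can_submit_after_review()
test_user_cannot_submit_before_review()
```

Because each Acceptance Criterion becomes an automated test, the customer's last-minute change can be verified at the push of a button rather than through a manual test pass.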
To begin with, we may carry the work out as a Mob (see Chapter 12, Baking Quality into Our Software Delivery) or in pairs. We would definitely come together at the end to ensure we have collective ownership of the problem and the solution.
Once comfortable with the changes made, we'd prepare and release the new software and deploy it with the touch of a button. We might even have a fully automated deployment that deploys as soon as the code is committed to the main branch.
Finally, we'd run a retrospective to perform some root cause analysis using the 5-whys, or a similar technique, to try to discover why we missed the problem in the first place. The retrospective would result in actions that we would take, with the aim of preventing a similar problem occurring again.
In this chapter, we looked at two delivery styles, delivery as a software product and delivery as a software project.
We learned that delivery as a software project was hard to get right for multiple reasons, not least because giving our team only one shot at delivery gave them little or no chance of fine-tuning their approach. In a novel situation, with varying degrees of uncertainty, this could lead to a fair amount of stress.
We have a better chance of succeeding if we reduce variability, which comes from knowledge of the domain, the technology, and each of our team members' capabilities. It's therefore desirable to keep our teams together as they move from project to project.
We also learned that when a long-lived team works on a product, they have the opportunity to deliver incrementally. Delivering in smaller chunks makes us more likely to meet expectations, and because product teams are long-lived, they get multiple opportunities to fine-tune their delivery approach.
Those of us who build software understand well the complex nature of the work we do and the degree of variability that complexity introduces. Embrace that, and we'll learn to love the new control we can gain from focusing on incremental value delivery in an adaptive system.
In the next chapter, we'll look at the different Agile methods for software delivery and delve into the mechanics of three of them in particular. See you there.