Towards the end of the last century, the software industry was in a state of turmoil. There were some significant software project failures, which were having a substantial reputational impact on the industry as a whole. One observation was that software projects over a certain size were more likely to fail.
In this chapter, we'll discuss the factors that led to a crisis and the subsequent major turning point in the software industry. In the second part of this chapter, we'll introduce the Agile Manifesto, its origins, and how it revolutionized the way we think about building software. We'll explain the Manifesto's values and principles, what each of them means, and the impact they have on how software professionals work.
In this chapter, we will cover the following topics:
- Why the software industry needed to change
- The origins of the Agile Manifesto
- A detailed look at the values and principles of the manifesto and how they translate to today's context
- Adaptive versus predictive planning
- Incremental versus waterfall delivery
- Agile isn't a process, it's a mindset built on guiding values and principles, and it requires solid technical practices
When I was outlining this book, I was in two minds about whether to include a section on the history of the software industry and how Agile came to be. I figured most of you would be young enough not to know any different; you're probably doing something Agile or closely related to Agile already. However, I decided to include this chapter for two reasons:
- I still see echoes of the past affecting us today
- "Why" we do something is important to inform "what" we do and "how" we do it
When working as a software developer in the 1990s and 2000s, I worked in a range of different organization types, from software houses to software consultancies, from large organizations such as corporates and central government to small three-person startups. In my experience across these different organization types, we used two noticeably different styles of software delivery: software delivered either as a product or as a project.
When building software products, we tended to form long-lived teams around them. The team would be responsible for the product throughout its life. During the initial build and subsequent revisions, they handled the entire Software Development Life Cycle (SDLC) end-to-end, including delivery into production. These teams were also often responsible for managing the production environment and providing support.
The scale of the product, and how widely it was adopted, determined whether it was managed by a single software team or by a network of teams all working for one product group. One thing that was noticeable about software product teams, apart from their long-lived nature, was that they often had good relationships with their customers as well as technology and operations staff. Sometimes they even had representatives of these groups within the teams.
Funding these products often took place on a yearly cycle, which coincided with setting the objectives for the product's development that year. A key thing to note is that the budget would often be allocated to fund the team(s) and the decisions on what features would be developed, enhanced, or fixed that year would be managed by the team itself.
Once built, the software would often be handed over to a separate team, known as the Business As Usual (BAU) team in business parlance. They would manage the maintenance and support of the product. There was a two-fold intention in handing over to a separate BAU team:
- They would handle the bulk of changes to the organization, for example, by training all impacted staff on how to use the new software and associated business processes. Once they'd introduced the changes, the BAU team's aim would then be to create and support a stable business environment.
- The software delivery team would be free to move on to the next project. Software delivery teams were a scarce resource, and this was seen as a way to optimize software delivery.
Software projects required project managers, who often operated in a change management capacity as well, although sometimes separate change managers were allocated. In this way, they were also responsible for seeing that the introduction of the new software platform to the organization, and the transition to BAU, went smoothly. The project team itself would often be managed by a separate unit that reported directly to the business, known as the Project Management Office (PMO).
Using these two contrasting approaches led to a profound shift in my thinking. When delivering software as a product, there was much more opportunity to focus on value: the product team was long-lived and able to adopt an iterative, incremental approach to delivery. We therefore had multiple opportunities to deliver, refine our strategy, and get things right.
However, with software delivery as a project, more often than not the software delivery team only had one opportunity to get it right. Successfully delivering a software project to meet expectations with only one shot just didn't happen that often.
Even when we did deliver, it was often as a result of a considerable effort on our part to get it across the line, including many lost evenings and weekends.
Subsequent revisions of the software were often handled as separate projects and likely by a different software team, often leading to a lack of continuity and knowledge sharing.
As a result, there was a distinct lack of trust between us, the software professionals, and the people we were building software for. Unfortunately, "us" often became "them and us", with unmet expectations so often the cause of the rift in the relationship.
We tried to solve this problem by making things more precise. Unfortunately, the version of precision the project mindset opted for had only three predictive aspects: scope, time, and budget. All three are very difficult to quantify for a software project, where complexity, uncertainty, and the sheer volume of work could and did amplify any errors in these calculations.
However, the single most significant problem when you tie down all three of these factors, scope, time, and budget, is that something colloquially known as the Iron Triangle forms. Refer to the following figure:
When you fix scope, date, and budget like that, there is little room to deviate from the plan. To help mitigate risks, most will build buffers into their schedules. However, the rigid nature of the triangle means that if and when overruns start to eat more and more into those buffers, something else has to give. And what usually happens when a software development team is under pressure to deliver? One or more of the following qualities of your software product will start to suffer:
- Functionality: Whether it works as expected
- Reliability: How available it is
- Usability: How intuitive it is to use
- Scalability: How performant it is
- Maintainability: How easy it is to change
- Portability: How easy it is to move to a different platform
To understand why precision in predicting the outcome of a software project is so difficult, we need to unpack things a little.
At the time, many felt that the functionality of the delivered software was the priority and would often seek to lock the scope of the project. We allowed for this by having some degree of variability in the budget and the schedule.
At the time, many project managers would work to the PMI or ISO 9000 guidelines on the definition of quality. Both of these had a reasonably straightforward quality definition requiring the scope of the project to be delivered in a fully functional form.
To meet the scope expectation, we had to estimate, to a fair degree of precision, how long it would take us. In this way, we would be able to determine the length of time needed and the number of people required for the team.
And it was at the estimate stage that we often set ourselves up to fail.
One thing that was increasingly obvious in the software industry was that our approaches to estimating or predicting the outcome of a project were out of whack. As a software team, we were often handed detailed requirements to analyze, then did some initial design work and provided a work breakdown with estimates.
We'd be working on one project and at the same time be asked to estimate another. The estimates, we were told, would inform the decision on whether the project was viable and would be funded. Some teams tried to be sophisticated, using algorithmic estimating techniques such as the Constructive Cost Model (COCOMO), COCOMO II, Source Lines of Code (SLOC), and Function Points. Most estimating methods incorporated rule-of-thumb or experience-based factors, as well as factors for complexity and certainty.
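To give a flavor of how these algorithmic techniques worked, here is a minimal sketch of the Basic COCOMO model, using Boehm's published 1981 coefficients. Real-world use layered many more cost drivers on top of this, which is partly why the results still needed so much judgment:

```python
# Basic COCOMO (Boehm, 1981): estimate effort and schedule from size alone.
# Coefficients are Boehm's published values for each project class.
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),  # small teams, familiar problems
    "semi-detached": (3.0, 1.12, 2.5, 0.35),  # mixed experience and constraints
    "embedded":      (3.6, 1.20, 2.5, 0.32),  # tight hardware/regulatory limits
}

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, schedule in elapsed months)."""
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b       # person-months of work
    schedule = c * effort ** d   # calendar months to deliver it
    return effort, schedule

effort, months = basic_cocomo(32, "semi-detached")
print(f"{effort:.0f} person-months over {months:.0f} months")
```

Note how the exponent on size is greater than 1: the model itself encodes the observation that bigger projects cost disproportionately more per line of code.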
Sometimes the estimates would be given by people who weren't going to work on the project, either because the project leadership group didn't want to disturb an existing project team, or because, without funding, the project teams hadn't been formed yet. This meant involving someone like an architect or a technical leader who could break down the work and estimate based on the solution they devised. Most complicated problems have several possible solutions, so if those who had done the solution work breakdown weren't available to provide guidance on how to implement their solution, this could obviously cause trouble later on.
Either way, whoever provided the estimate would base it on the best solution they could theorize from the information they were given. More often than not, the first estimate given would be used as a way to control the project, and it would be pretty much set in stone. This was a pattern I stumbled across many times. When this kept happening, we tried to improve the accuracy of our estimates by spending more time on upfront work. But the reality is that to produce an exact estimate of effort, precise enough to fix time frames and budget, you would pretty much have to complete all the work in the first place.
It also became a bit of a standing joke that these estimates were not only painstakingly worked on over days or weeks, but as soon as we gave them to a project manager, they would double the figures. When we challenged that practice, they'd remind us that all software developers are optimistic and that buffers were needed.
But it's not that we're optimistic by nature; in fact, a lot of us had already factored in our own buffer allowances. The explanation is more straightforward: the work we do is novel. More often than not, we are working on something entirely different from what we've built before: different domain, different technology stack, different frameworks, and so on.
Some teams would combat this by offering two figures as part of their estimates: the first being the time estimate, the second being the level of certainty that they could complete the work within that time frame. If the task was straightforward, they would offer their estimate with 100% certainty; the more complicated the task and the higher the uncertainty, the lower the percentage.
This certainty factor could then be used to allocate a buffer. As certainty got higher, the buffer would get smaller, but even with 100% certainty, there would still be a buffer. After all, there is no such thing as an ideal day, as much as we would like to think there is.
At the opposite end of the spectrum, the more uncertainty in the estimate, the larger the buffer. At the extreme, it was not uncommon to have buffers of 200%.
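As a rough sketch of the two-figure approach, the exact mapping from certainty to buffer varied from team to team, so the numbers below are purely illustrative:

```python
def buffered_estimate(days, certainty):
    """Turn an estimate plus a certainty factor (0.0-1.0) into a buffered figure.
    Even at full certainty a small buffer remains (there's no such thing as an
    ideal day); at high uncertainty the buffer grows toward 200%."""
    base_buffer = 0.10                           # 10% even on a "sure thing"
    uncertainty_buffer = 2.0 * (1 - certainty)   # scales up to +200%
    return days * (1 + base_buffer + uncertainty_buffer)

print(round(buffered_estimate(10, 1.0), 1))  # straightforward task
print(round(buffered_estimate(10, 0.2), 1))  # highly uncertain task
```

A 10-day task at 100% certainty comes out at 11 days, while the same task at 20% certainty balloons to 27, which is exactly the kind of spread project managers were doubling figures to cover.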
For example, we would often refer ironically to a task as just a "small matter of programming" when someone with little or no understanding of what was involved was telling us it looked easy.
We also developed ironic laws about underestimation, such as the Ninety-Ninety rule, which states the following:
"The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time."
– Tom Cargill, Bell Labs
This rule was later made popular by Jon Bentley's September 1985 Programming PearlsÂ column in Communications of the ACM, where it is titled "Rule of Credibility".
The biggest problem with estimation is the amount of information we assume. We make assumptions about how to solve the business problem, the technologies we're going to use, the capabilities of the people building it, and so many other factors.
The level of complexity and uncertainty impacts our ability to give an accurate estimate because there are so many variables at play. This, in turn, is amplified by the size of the piece of work. The result is something referred to as the Cone of Uncertainty:
Barry Boehm first described this concept in his 1981 book Software Engineering Economics; he called it the Funnel Curve. It was named the Cone of Uncertainty in the Software Project Survival Guide (McConnell, 1997).
It shows us that the further we are away from completion, the larger the variance in the estimate we give. As we move closer to completion, the more accurate our estimate will become, to the point where we complete the work and know exactly how long it took us.
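McConnell's published multipliers for the cone make the effect concrete; the stage names and ranges below are his, applied here to a hypothetical 100-day estimate:

```python
# Cone of Uncertainty ranges (Software Project Survival Guide, McConnell 1997):
# (stage, low multiplier, high multiplier) on the eventual actual effort.
CONE = [
    ("Initial concept",          0.25, 4.00),
    ("Approved definition",      0.50, 2.00),
    ("Requirements complete",    0.67, 1.50),
    ("UI design complete",       0.80, 1.25),
    ("Detailed design complete", 0.90, 1.10),
    ("Software complete",        1.00, 1.00),
]

estimate_days = 100  # a hypothetical estimate, for illustration
for stage, low, high in CONE:
    print(f"{stage:26} {low * estimate_days:6.0f} to {high * estimate_days:6.0f} days")
```

At the initial-concept stage the honest answer spans a factor of sixteen (25 to 400 days), yet that is exactly the point at which projects were asked to commit to a single number.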
So while it was felt that better precision could be gained using a gated process such as Waterfall, that process encouraged bundling together more of the things we wanted done, which significantly increased the size of the work parcel. This, in turn, compounded the problem of getting an accurate estimate.
Sometimes, of course, we'd fail to deliver something of use to our customer and miss the point entirely.
I remember one project I worked on where we were given ideal working conditions. The teams were offsite, in our own offices, which meant we could be dedicated to the project we were working on without being disturbed or shoulder-tapped by others. It felt like the perfect setup for success.
We spent ten months painstakingly delivering precisely to requirements. Everything was built and tested according to the detailed designs we were given. We were even on budget and on time when we delivered. Unfortunately, when the software went live and was in the hands of the people using it, they reported back that it didn't do the job they needed it to do.
Why had we failed? We'd failed because we spent ten months building something in isolation from our customer. We hadn't involved them in the implementation, and too many assumptions had been made. Diagrammatically, this looked a little like the following:
We then spent the next six months reworking the software into something that was usable. Unfortunately, for the partner company working alongside our team, this meant a major variance in their contract, most of which they had to swallow. We eventually did deliver something to our customer that they wanted and needed, but at a huge financial impact on us and our partner company.
This, unfortunately, is the path that predictive planning sets you on. You develop a fixed mindset around what is to be delivered because you know if you aren't dogmatic in your approach, you're likely to fail to meet the date, budget, or scope set in the contract.
One of the key characteristics to understand about the nature of predictive planning is that the minute someone says, "How much is this going to cost?", or "When can this be delivered?", they significantly constrain the value that will be created. And, at the end of the day, shouldn't it be about maximizing the value to your customer? Imagine if we spent one more month on a project and delivered twice as much value. Wouldn't that be something our customer would want over meeting a date or a particular set of features just because it's been speculatively laid out in a contract?
Instead, we focus on a date, an amount of money, and the functionality. And unfortunately, functionality doesn't always translate to value (as we saw in the previous section when we missed the point entirely).
This is why the predictive planning used in Waterfall-style deliveries has also become known as faith-driven development: it leaves so much to chance, usually right until the end of the project.
To focus on value delivery, we have to shift our mindset from predictive planning to adaptive planning, something that we will talk about later in this chapter.
It's not that the project mindset is bad; it's that it drives us to think we need a big upfront design approach in order to obtain a precise estimate. As noted, this can lead to a number of issues, particularly when we're presented with a large chunk of work.
And it isn't that upfront design is bad; it's often needed. It's just the big part, when we try to do too much of it, that causes us problems.
There were a number of bad behaviors feeding the big thinking happening in the industry. One of these was the way work was funded: as projects. For a particular project to get funded, it had to demonstrate its viability at the annual funding round.
Unfortunately, once people in the management seats saw a hard number, it often became set in stone as an expectation. During the execution of the plan, if new information was discovered that amounted to anything more than a small variance, there would be a tendency to avoid acting on it. The preference would be to stick to the plan rather than incorporate the change; you were seen as a better project manager for doing that.
So, we had a chicken-and-egg scenario; the project approach to funding meant that:
- The business needed to know the cost of something so they could allocate a budget.
- We needed to know the size of something, and its technical scope and nature, so that we could allocate the right team in terms of size and skill set.
- We had to do this while accepting:
    - That our business didn't know exactly what it needed. The nature of software is intangible; most people don't know what they want or need until they see it and use it.
    - That the business itself wasn't stationary; just because we recorded requirements at a particular moment in time didn't mean that the business would stop evolving around us.
So, if we can avoid the big part, we're able to reduce the level of uncertainty and subsequent variability in our estimates. We're also able to deliver in a timely fashion, which means there is less likelihood of requirements going out of date or the business changing its mind.
We did try to remedy this by moving to prototyping approaches. This enabled us to make iterative sweeps through the work, refining it with people who could actually use the working prototype and give us direct feedback.
Rapid Application Development, or RAD as it's commonly known, is one example of an early iterative process. Another was the Rational Unified Process (RUP).
But we still always managed to bite off more than we could chew. I suspect the main reason for this is that our customers still expected us to deliver something they could use. While prototypes gave the appearance of doing that, and did enable us to get feedback early:
- At the end of the session, after getting the feedback we needed, and much to our customer's disappointment, we'd take the prototype away. They had assumed our mockup was working software.
- We still hadn't delivered anything they could use in their day-to-day lives to help them solve real-world problems.
To try to find a remedy to the hit-and-miss approach to software delivery, a group of 17 software luminaries came together in February 2001. The venue was a cabin in Utah, which they chose, as the story goes, so that they could ski, eat, and look for an alternative to the heavyweight, document-driven processes that seemed to dominate the industry.
Among them were representatives from Extreme Programming (XP), Scrum, Dynamic Systems Development Method (DSDM), Adaptive Software Development (ASD), Crystal, Feature-Driven Development, and Pragmatic Programming.
The manifesto documents four values and twelve principles that uncover "better ways of developing software by doing it and helping others do it".
The group formally signed the Manifesto and named themselves the Agile Alliance. We'll take a look at the Agile values and principles in the following sections.
Here is the Manifesto for Agile Software Development (http://agilemanifesto.org/):
That is, while there is value in the items on the right, we value the items on the left more.
Let's look at how this works by looking at each value in more detail:
- Individuals and interactions over processes and tools: In an Agile environment, we still have processes and tools, but we prefer to keep our use of them light, because we value communication between individuals. If we're to foster successful collaboration, we need common understanding between technical and non-technical people. Tools and processes have a tendency to obfuscate that.
A good example is the User Story, an Agile requirements-gathering technique, usually recorded on an index card. It's kept deliberately small so that we can't add too much detail; the aim is to encourage, through conversation, a shared understanding of the task.
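A typical card follows the well-known "As a..., I want..., so that..." template. Here's a made-up example of what might fit on one index card:

```
As a frequent flyer,
I want to rebook a canceled flight from my phone,
so that I don't have to queue at the transfer desk.
```

Everything the card leaves out, such as edge cases, screen layouts, and airline rules, is deliberately left for the conversation between the customer and the team.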
- Working software over comprehensive documentation: As a software delivery team, our primary focus should be on delivering the software: fit for purpose, and satisfying our customer's need.
In the past, we've made the mistake of using documents to communicate to our customer what we're building. Of course, this led to much confusion and potential ambiguity. Our customer isn't an expert in building software and would, therefore, find it pretty hard to interpret our documentation and imagine what we might be building. The easiest way to communicate with them is via working software that they can interact with and use.
By getting something useful in front of our customer as soon as possible, we might discover if we're thinking what they're thinking. In this way, we can build out software incrementally while validating early and often with our customer that we're building the right thing.
- Customer collaboration over contract negotiation: We aim to build something useful for our customer and hopefully get the best value for them we can. Contracts can constrain this, especially when you start to test the assumptions that were made when the contract was drawn up. More often than not there are discoveries made along the way, or the realization that something was forgotten or that it won't work the way we were expecting. Having to renegotiate a contract, or worse still, recording variances to be carried out at a later stage, both slow down and constrain the team's ability to deliver something of value to the customer.
- Responding to change over following a plan: When considering this Agile value, it is worth drawing a comparison with the military.
The military operates in a very fluid environment; while they will undoubtedly have a plan of attack, this is often based on incomplete information about the enemy's strength and whereabouts. The military very much has to deal with known knowns, known unknowns, and unknown unknowns.
Plan-driven versus planning-driven: Plan-driven means a fixed plan that everyone follows and adheres to; this is also known as predictive planning. Planning-driven is more responsive in nature: when new information comes to light, we adjust our plan. It's called planning-driven because we expect change, and so we're always in a state of planning. This is also known as adaptive planning.
So when going into battle, while they have group objectives, the military operate with a devolved power structure and delegated authority so that each unit can make decisions on the ground as new information is uncovered. In this way, they can respond to new information affecting the parameters of their mission, while still getting on with their overall objective. If the scope of their mission changes beyond recognition, they can use their chain of command to determine how they should proceed and re-plan if necessary.
In the same way, when we're building software, we don't want to blindly stick to a plan if the scope of our mission starts to change. The ability to respond to new information is what gives us our agility; sometimes we have to deviate from the plan to achieve the overall objective. This enables us to maximize the value delivered to our customer.
The signatories to the Manifesto all shared a common background in light software development methodologies. The principles they chose reflect this. Again the emphasis is on people-focused outcomes. Each of the following principles supports and elaborates upon the values:
- Our highest priority is to satisfy the customer through the early and continuous delivery of valuable software: In encouraging incremental delivery as soon and often as we can, we can start to confirm that we are building the right thing. Most people don't know what they want until they see it, and in my experience, use it. Taking this approach garners early feedback and significantly reduces any risk to our customer.
- Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage: Instead of locking scope and ignoring evolving business needs, adapt to new discoveries and re-prioritize work to deliver the most value possible for your customer. Imagine a game of soccer where the goal posts keep moving; instead of trying to stop them moving, change the way you play.
- Deliver working software frequently, from a couple of weeks to a couple of months, with a preference for the shorter timescale: The sooner we deliver, the sooner we get feedback. Not only from our customer that we're building the right thing, but also from our system that we're building it right. Once we get an end-to-end delivery taking place, we can start to iron out problems in our integration and deployment processes.
- Business people and developers must work together daily throughout the project: To get a good outcome, the customer needs to invest in the building of the software as much as the development team. One of the worst things you can hear from your customer as a software developer is, "You're the expert, you build it." It means that they are about to have very little involvement in the process of creating their software. And yes, while software developers are the experts at building software, and have a neat bunch of processes and tools that do just that, we're not the expert in our customer's domain and we're certainly not able to get inside their heads to truly understand what they need. The closer the customer works with the team, the better the result.
- Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done: A software development team is a well-educated bunch of problem solvers. We don't want to constrain them by telling them how to do their jobs; the people closest to solving the problem will get the best results. Even the military delegate authority to the people on the frontline because they know if the objective is clear, those people are the ones who can and will get the job done.
- The most efficient and effective method of conveying information to and within a development team is face-to-face conversation: Face-to-face conversation is a high-bandwidth activity that includes not only words but facial expressions and body language too. It's the fastest way to get information from one human being to another. It's an interactive process that can be used to quickly resolve any ambiguity via questioning. Couple face-to-face conversation with a whiteboard, and you have a powerhouse of understanding between two or more individuals. All other forms of communication pale in comparison.
- Working software is the primary measure of progress: When you think about a software delivery team and what they are there to do, there really is nothing else by which to measure their progress. This principle gives us further guidance around the Agile value of working software over comprehensive documentation.
The emphasis is on working software because we don't want to give any false indicators of progress. For example, if we deliver software that isn't fully tested, then we know that it isn't complete; it has to go through several cycles of testing and fixing. This hasn't moved us any closer to completing that piece of work because it's still not done. Done is in the hands of our customer; done is doing the job it was intended to do. Until that point, we aren't 100% sure we've built the right thing, and until that moment we don't have a clear indication of what we might need to redo. Everything else the software team produces, from design documents to user guides, just supports the delivery of the software.
- Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely: Putting a software delivery team under pressure to deliver happens all the time; it shouldn't, but it does. There are a number of consequences of doing this, some of which we discussed earlier in this chapter.
For example, put a team under pressure for long enough, and you'll seriously impact the quality of your product. The team will work long hours, make mistakes, take shortcuts, and so on to get things done. The result won't just affect quality, but also the morale of our team, and their productivity. I've seen this happen time and time again; it results in good people leaving, along with all the knowledge they've accumulated.
This principle aims to prevent that scenario, which means we have to be smart and use alternative ways of getting things done sooner: seeking value, ruthless prioritization, delivering working software, a focus on quality, and allowing teams to manage their work in progress so they can avoid multitasking.
Studies have shown that multitasking causes context switching time losses of up to 20%. When you think about it, when you're solving complex problems, the deeper you are into the problem, the longer it takes to regain context when you pick it back up. It's like playing and switching between multiple games of chess. It's not impossible, but it definitely adds time.
I've also seen multitasking defined as messing up multiple things at once.
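A rough rule of thumb, often attributed to Gerald Weinberg, is that each project beyond the first costs around 20% of your time in context switching. Sketched in code (the 20% figure is the rule of thumb, not a measured constant):

```python
# Rule of thumb often attributed to Gerald Weinberg: each project beyond
# the first loses roughly 20% of total time to context switching.
def time_per_project(projects, switching_loss=0.20):
    """Return the fraction of total time each project actually receives."""
    lost = switching_loss * (projects - 1)   # overhead from extra projects
    working = max(0.0, 1.0 - lost)           # time left for real work
    return working / projects                # split evenly across projects

for n in range(1, 5):
    print(f"{n} project(s): {time_per_project(n):.0%} of your time each")
```

By this reckoning, a developer split across four projects gives each one only a tenth of their time, far worse than the quarter a naive split would suggest.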
- Continuous attention to technical excellence and good design enhances agility: By using solid technical practices and attention to detail when building software, we improve our ability to make enhancements and changes to our software.
For example, Test-Driven Development (TDD) is a practice which is as much about designing our software as it is testing it. It may seem counter-intuitive to use TDD at first, as we're investing time in a practice that seemingly adds to the development time initially. In the long term, however, the improved design of our software and the confidence it gives us to make subsequent changes enhances our agility.
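As a minimal illustration of the TDD rhythm (the shipping_cost function and its pricing rules are invented for this example), we write tests describing the behavior we want first, then the simplest code that makes them pass:

```python
# A tiny red-green-refactor sketch using Python's built-in unittest module.
# In TDD, the test cases below are written first and fail ("red") until
# shipping_cost is implemented just well enough to pass ("green").
import unittest

def shipping_cost(weight_kg):
    """Flat rate up to 1 kg, then a per-kilo charge: the simplest
    implementation that satisfies the tests below."""
    if weight_kg <= 1:
        return 5.0
    return 5.0 + 2.0 * (weight_kg - 1)

class ShippingCostTest(unittest.TestCase):
    def test_light_parcel_pays_flat_rate(self):
        self.assertEqual(shipping_cost(0.5), 5.0)

    def test_heavy_parcel_pays_per_kilo(self):
        self.assertEqual(shipping_cost(3), 9.0)
```

Run the tests with `python -m unittest <filename>`. The design benefit comes from the pressure the tests exert: shipping_cost has to be small, deterministic, and callable in isolation before any UI or database exists.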
Technical debt is a term first coined by Ward Cunningham. It describes the accumulation of poor design that crops up in code when decisions are made to implement something quickly. Ward called it debt because, like a financial debt, if you don't pay it back in time, it starts to accumulate. As it accumulates, subsequent changes to the software get harder and harder; what should be a simple change suddenly becomes a major refactor or rewrite.
- Simplicity, the art of maximizing the amount of work not done, is essential: Building the simplest thing we can to fit the current need prevents defensive programming, also known as "future proofing." If we're not sure whether our customer needs something or not, talk to them. If we're building something we're not sure about, we may be solving a problem that we don't have yet. Remember the You Ain't Gonna Need It (YAGNI) principle when deciding what to do: if you don't have a hard and fast requirement for it, don't do it. One of the biggest causes of bugs is complexity in our code. Anything we can do to simplify it will help us reduce bugs and make our code easier to read for others, making it less likely that they'll introduce bugs too.
- The best architectures, requirements, and designs emerge from self-organizing teams: People nearest to solving the problem are going to find the best solutions. Because of their proximity, they will be able to evolve their solutions so that all aspects of the problem are covered. People at a distance are too removed to make good decisions. Employ smart people, empower them, allow them to self-organize, and you'll be amazed by the results.
- At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly: This is one of the most important principles in my humble opinion and is also my favorite. A team that takes time to inspect and adapt their approach will identify actions that will allow them to make profound changes to the way they work. The regular interval, for example, every two weeks, gives the team a date in their diary to make time to reflect. This ensures that they create a habit that leads to a continuous improvement mindset. A continuous improvement mindset is what sets a team on the right path to being the best Agile team they can be.
The Agile Manifesto advocates incremental delivery using adaptive planning. In this section, we compare and contrast this approach with the more traditional approach of Waterfall delivery and predictive planning.
In the following section, we'll look at both approaches and some of the impacts on how we deliver software. Before we launch into the detail, here's a quick comparison of the two approaches:
The traditional delivery model known as Waterfall was first shown diagrammatically by Dr. Winston W. Royce, who captured what was happening in the industry in his paper Managing the Development of Large Software Systems (Proceedings WesCon, IEEE CS Press, 1970).
In it, he describes a gated process that moves in a linear sequence. Each step, such as requirements gathering, analysis, or design, has to be completed before handover to the next step.
It was presented visually in Royce's paper in the following way:
The term Waterfall was coined from the observation that, just like a real waterfall, once you've moved downstream, it's much harder to return upstream. This approach is also known as a gated approach because each phase has to be signed off before you can move on to the next.
He further observed in his paper that to de-risk this approach, there should be more than one pass through the process, with each iteration improving and building on what was learned in the previous one. In this way, you could deal with complexity and uncertainty.
For some reason, not many people in the industry got the memo though. They continued to work in a gated approach but, rather than making multiple passes, expected the project to be complete in just one cycle or iteration.
To control the project, a highly detailed plan would be created, which was used to predict when the various features would be delivered. The predictive nature of this plan was based entirely on the detailed estimates that were drawn up during the planning phase.
This led to multiple points of potential failure within the process, and usually with little time built into the schedule to recover. It felt almost de rigueur that at the end of the project some form of risk assessment would take place before finally deciding to launch with incomplete and inadequate features, often leaving everyone involved in the process stressed and disappointed.
The Waterfall process is a throwback to when building software was treated like a traditional engineering project. It's also been nicknamed faith-driven development because it doesn't deliver anything until the very end of the project. Its risk profile, therefore, looks similar to the following figure:
No wonder all those business folks were nervous. Often their only involvement was at the beginning of the Software Development Life Cycle (SDLC), during the requirements phase, and then right at the end, during the delivery phase. Talk about a big reveal.
The key point in understanding a plan-driven approach is that scope is often nailed down at the beginning. Delivering to that scope then requires precise estimates to determine the budget and resourcing.
The estimation needed for that level of precision is complicated and time-consuming. This leads to more paperwork, more debate, and, in fact, more of everything. As the process gets bigger, it takes on its own gravity, attracting more things to it that also need to be processed.
The result is a large chunk of work with a very detailed delivery plan. However, as already discussed, large chunks of work carry more uncertainty and more variability, calling into question the ability to give a precise estimate in the first place.
And because so much effort has been put into developing the plan, an irrational attachment to it develops. Instead of deviating from the plan when new information is uncovered, the project manager tries to control the variance by minimizing or deferring it.
Over time, and depending on the size of the project, this can result in a substantial deviation from reality by the time the software is delivered, as shown in the following diagram:
This led to much disappointment for people who had been waiting many months to receive their new software. The gap in functionality would often cause some serious soul-searching on whether the software could be released in its present state or whether it would need rework first.
No one wants to waste money, so it was likely that the rollout would go ahead, followed by a series of updates that would hopefully fix the problems. This left the people using the software facing a sometimes unworkable process that would lead them to create a series of workarounds. Some of these would undoubtedly last for the lifetime of the software because they were deemed either too trivial or too difficult to fix.
Either way, a business implementing imperfect software that doesn't quite fit its process is faced with additional, often undocumented, costs as users try to work around the system.
For those of us who have tried building a large, complex project in a predictive, plan-driven way, there's little doubt it often fails to deliver excellent outcomes for our customer. The findings of the Standish Group's annual Chaos Report are a constant reminder: we're still better at delivering small software projects than large ones, and Waterfall or predictive approaches are more likely to result in a project being challenged or deemed a failure, regardless of its size.
Incremental delivery seeks to de-risk the approach by delivering small chunks of discrete value early and often, to get feedback and reduce uncertainty. This allows us to determine, sooner rather than later, whether we're building the right thing.
As you can see from the following hypothetical risk profile, by delivering increments of ready-to-use working software, we reduce risk significantly after only two or three iterations:
This is combined with an approach to planning that allows us to quickly pivot or change direction based on new information.
With an adaptive plan, the focus is on prioritizing and planning for a fixed horizon, for example, the next three months. We then seek to re-plan once further information has been gathered. This allows us to be more flexible and ultimately deliver something that our customer is much more likely to need.
The following diagram shows that each iteration or increment in an adaptive planning approach allows an opportunity for a correction to the actual business needs:
The final thing I'd like you to consider in this chapter is that Agile isn't one particular methodology or another. Neither is it a set of technical practices, although these things do give an excellent foundation.
On top of these processes, tools, and practices, if we layer the values and principles of the manifesto, we start to evolve a more people-centric way of working. This, in turn, helps build software that is more suited to our customer's needs.
In anchoring ourselves to human needs while still producing something that is technically excellent, we are far more likely to make something that meets and goes beyond our customer's expectations. The trust and respect this builds will form the basis of a powerful collaboration between technical and non-technical people.
Over time, as we practice the values and principles, we not only start to determine what works well and what doesn't, but we also start to see how we can bend the rules to create a better approach.
This is when we start to become truly Agile. When the things we do are still grounded in sound processes and tools, with good practices, but we begin to create whole new ways of working that suit our context and begin to shift our organizational culture.
If we're "doing Agile", we are just at the beginning of our journey. We've probably learned about the Manifesto. Hopefully, we've had some Agile or Scrum training and now our team, who are likely to have a mix of Agile backgrounds, are working out how to apply it. Right now we're just going through the motions, learning by rote. Over time, with the guidance of our Scrum Master or Agile Coach, we'll start to understand the meaning of the Manifesto and how it applies to our everyday work.
Over time our understanding deepens, and we begin to apply the values and principles without thinking. Our tools and practices allow us to be productive, nimble, and yet still disciplined. Rather than seeing ourselves as engineers, we see ourselves as craftsmen and craftswomen. We act with pragmatism, we welcome change, and we seek to add business value at every step. Above all else, we're fully tuned to making software that people both need and find truly useful.
If we're not there now, don't worry; we're just not there yet. To give a taste of what it feels like to be on a team thinking with an Agile mindset, here's an example scenario.
Imagine we're just about to release a major new feature when our customer comes to us with a last-minute request. They've spotted that something isn't working quite as they expected, and they believe we need to change the existing workflow. Their biggest fear is that it will prevent our users from being able to do a particular part of their job.
Our team would respond as a group. We'd welcome the change. We'd be grateful that our customer highlighted this problem to us and that they found it before we released. We would know that incorporating a change won't be a big issue for us; our code, testing, and deployment/release strategies are all designed to accommodate this kind of request.
We would work together (our customer is part of the team) to discover more about the missing requirement. We'd use our toolkit to elaborate the feature with our customer, writing out the User Stories (an Agile requirement-gathering tool we'll discuss in Chapter 4, Gathering Agile User Requirements) and, if necessary, prototyping the user experience and writing scenarios for each of the Acceptance Criteria.
We'd then work to carry out the changes in our usual disciplined way, likely using TDD to design and unit/integration test our software, as well as Behavior-Driven Development (BDD) to automate the acceptance testing.
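BDD expresses the customer's acceptance criteria as executable Given/When/Then scenarios. Here's a hedged sketch of the idea in plain Python, with step and workflow names invented for illustration (a real team might use a BDD framework such as behave or Cucumber instead):

```python
# A hypothetical BDD-style acceptance test. Each step function mirrors one
# clause of the scenario agreed with the customer, so the test reads like
# the requirement itself.

def given_a_user_with_a_draft_order():
    # Set up the starting state described by the scenario.
    return {"status": "draft", "items": ["widget"]}

def when_the_user_submits_the_order(order):
    # Exercise the behavior under test (the changed workflow).
    order["status"] = "submitted"
    return order

def then_the_order_enters_the_review_workflow(order):
    # Verify the outcome the customer expects.
    assert order["status"] == "submitted"

def test_submitting_a_draft_order():
    order = given_a_user_with_a_draft_order()
    order = when_the_user_submits_the_order(order)
    then_the_order_enters_the_review_workflow(order)
```

Because the scenario is automated, the customer's expectation is re-checked on every build, not just once before release.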
To begin with, we may carry the work out as a Mob (see Chapter 12, Baking Quality into Our Software Delivery) or in pairs. We would definitely come together at the end to ensure we have collective ownership of the problem and the solution.
Once comfortable with the changes made, we'd prepare and release the new software and deploy it with the touch of a button. We might even have a fully automated deployment that deploys as soon as the code is committed to the main branch.
Finally, we'd run a retrospective to perform some root cause analysis using the 5-whys, or a similar technique, to try to discover why we missed the problem in the first place. The retrospective would result in actions that we would take, with the aim of preventing a similar problem occurring again.
In this chapter, we looked at two delivery styles, delivery as a software product and delivery as a software project.
We learned that delivery as a software project was hard to get right for multiple reasons. And giving our team only one shot at delivery gave them little or no chance of fine-tuning their approach. In a novel situation, with varying degrees of uncertainty, this could lead to a fair amount of stress.
There is a better chance of succeeding if we reduce the variability, for example, by building up knowledge of the domain, the technology, and each team member's capabilities. So, it is desirable to keep our project teams together as they move from project to project.
What we learned was that when a long-lived team works on a product, they have the opportunity to deliver incrementally. If we deliver in smaller chunks, we're more likely to meet expectations successfully. And because product teams are long-lived, they have multiple opportunities to fine-tune their delivery approach.
Those who build software understand well the complex nature of the work we do and the degree of variability that complexity introduces. If we embrace that, we'll learn to love the new control we can gain from focusing on incremental value delivery in an adaptive system.
In the next chapter, we'll look at the different Agile methods for software delivery and delve into the mechanics of three of them in particular. See you there.