Minimum Valuable Increment


Overview

The Minimum Valuable Increment allows you to deliver your Strategic Area in small parts. By considering the minimum required to provide value, you gain the following benefits:

  • Clarity on outcomes and expected results, which avoids scope creep.

  • The ability to stop or pivot quickly if results aren’t as expected.

  • Reduced time to market.

  • Faster realisation of value from the work (for example, increased revenue or fewer support calls).

This does not mean releasing features with very little functionality just to test them. It means finding the balance between sufficient functionality to solve a problem and nice-to-haves.

MVI process overview

Each MVI incrementally contributes to the results of a Strategic Area and provides leading indicators that tell you whether your strategy is working. For example, your Strategic Area aims to improve a low Net Promoter Score (NPS) caused by a poorly designed user interface. Each MVI improves one screen, and you want to know what customers think of your changes without running the full NPS survey each time.

A canvas is provided to structure this activity. This is strongly recommended when:

  • The work is exploratory, and a lot of testing and learning is required.

  • There are several MVIs required before it is possible to measure the Strategic Area results. This is especially important if each MVI will take a long time.

If the work doesn’t meet these criteria, you may not find the canvas helpful; simply build the work and track the Strategic Area results at an appropriate time.

MVI canvas

The MVI Canvas helps you structure your discovery phase. During discovery, you may find that you have significant assumptions that prove to be wrong and decide to stop. For example, you are creating a new feature or product and discover that customers aren’t interested in it (desirability), or your solution is much more complex and costly than you thought (feasibility).

A summary of the canvas is provided below the canvas, followed by detailed instructions.

Minimum Valuable Increment Canvas

Feel free to recreate the canvas in a tool of your choice. Please attribute the author (Timothy Field) and the source of the canvas (this webpage), and add the Creative Commons BY-SA licence.

Process overview

Change purpose

The reason for this change and, most importantly, the problem it solves. When creating customer-focused change, validate that the problem is big enough rather than focusing on the solution.

Solution definition

There may be multiple ways to solve the problem. When working with software, you will often choose between:

  • More costly strategic solutions that are cheaper to maintain in the long run.

  • Cheaper tactical solutions that are faster to release.

Identify assumptions

Unvalidated assumptions, such as how easy it is to develop something, can result in much longer timescales. In the worst case, assuming customers want something (desirability) when they don’t can mean all your effort is wasted.

MVI build process overview

MVI Build Process
  • Test - Assumptions testing is run for the MVI.

  • Estimate - If testing is successful, a build estimate is produced, and there is a decision to proceed.

  • Build - The build phase is monitored. It can be tested internally (Alpha) or by customers (Beta) before it goes live.

  • Assess - The results of the live MVI are reviewed, and if necessary, the Strategic Area can be modified (pivot) or stopped.

Test results and build estimate

This stage allows you to assess your assumptions testing results to see if the change is still worth doing. Your tests may completely or partially fail. In this case, you may decide not to proceed. At this stage, you should have removed a lot of risk from the change and can estimate its size.

Build (not a canvas section)

This is where you build out a backlog and manage the change.

Assess (not a canvas section)

The results of your change in the live environment. For example, did it achieve the cost savings you were expecting?

Process in detail

1. Change Purpose

Problem definition

  • Using the problem statement format:

    • The problem of [description]

    • Affects [persona]

    • The impact is [ideally quantified]

  • The problem that this solves. This will be linked to the Strategic Area’s problem statement, but at a more granular level. For example, you have created a data quality strategy, and this MVI is for functionality to de-duplicate data. You should still consider research to quantify the problem statement. For example, on average, customers spend 1 hour per day on this activity. The research effort to produce this should be balanced against the size of the change.

Objectives and Key Results

  • Using the OKR format

    • Objective - a single sentence

    • Key Result - From [X] to [Y] by [Z] date

  • The objective should be an easy-to-understand sentence

  • Key Results can be “committed” where outcomes are guaranteed or “stretched” if they cannot be

The Strategic Area also has OKRs. However, the key results may take a long time to come back. Therefore, OKRs are also created for the MVI to provide faster feedback. Using the example in the problem definition section, we may have an OKR for the overall usage of all data quality features, whereas this OKR may measure the usage of a single data quality feature.
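The “From [X] to [Y]” format makes key-result progress straightforward to compute and report. A minimal sketch, with made-up numbers continuing the data quality example:

```python
def key_result_progress(start, target, current):
    """Fraction of a 'From [X] to [Y]' key result achieved so far."""
    return (current - start) / (target - start)

# Hypothetical key result: grow weekly usage of the de-duplication
# feature from 200 to 500 users by quarter end; currently at 350.
print(f"{key_result_progress(200, 500, 350):.0%}")  # → 50%
```

The same function works for decreasing targets (for example, reducing support calls from 80 to 20), because the sign of the denominator flips with the direction of travel.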

2. Solution definition

Solution options

There may be multiple options. Consider the impact of tactical solutions: even if they are cheaper, will they create technical debt or be hard to support?

Selected solution details

You may wish to build out more details about your chosen solution. This will help you better understand it in order to assess its size and risks.

User Experience (UX) concept

If you are working with any sort of visual interface, the UX concept can help bring understanding and alignment. Visualisations can be much easier to understand than text-based descriptions. The concept can be used to validate business and technology solutions and form the basis of user testing. It does not need to be high fidelity, especially if that would make its creation slow and changes difficult.

3. Identify assumptions

Identifying assumptions

Assumptions should be recorded where they are high impact:

  • Feasibility - can it be built and run for a sensible cost? Proven wrong, these assumptions will cause high development costs. For build, consider time-boxed spikes to test these. For run considerations, model out business support processes.

  • Desirability - do customers want this? If the problem isn’t big enough, they may not use this. Additionally, they may have a better alternative solution.

  • Viability - does the business case stack up? Customers may say they like something, but this is not the same as paying for it. Consider if the cost of development meets your expected revenue from sales or cost savings.

Once assumptions are recorded, tests should be created to resolve them. For example, you may not be sure whether a new server architecture will be performant (feasibility) and want to run a 3-day spike. Assumptions that cannot be easily tested upfront can be carried into the build phase. You won’t always have assumptions that need testing; in that case, move straight to estimating the solution.
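One simple way to decide which assumptions to test first is to score each by impact and uncertainty and test the riskiest first. A sketch of such an assumption register; the entries, scores, and the impact-times-uncertainty scoring scheme are all illustrative assumptions, not part of the canvas itself:

```python
# Hypothetical assumption register; impact and uncertainty scored 1 (low)
# to 5 (high). The product of the two gives a simple risk score.
assumptions = [
    {"assumption": "New server architecture is performant",
     "type": "feasibility", "impact": 5, "uncertainty": 4},
    {"assumption": "Customers will use de-duplication weekly",
     "type": "desirability", "impact": 4, "uncertainty": 5},
    {"assumption": "Support team can absorb the new workflow",
     "type": "viability", "impact": 3, "uncertainty": 2},
]

# Order by risk, highest first: these are tested in discovery; low-risk
# items can be carried into the build phase instead.
by_risk = sorted(assumptions,
                 key=lambda a: a["impact"] * a["uncertainty"],
                 reverse=True)
for a in by_risk:
    print(f'{a["impact"] * a["uncertainty"]:>2}  {a["type"]:<12} {a["assumption"]}')
```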

Assumptions testing estimate

Consider building a discovery plan here. For example, you have 5 days of testing required across one feasibility test and two desirability tests. The rough estimate is the total length of the discovery.

Decision to proceed

This is the first stage gate designed for senior leadership to decide whether to proceed or stop. They may stop if the assumptions look too risky, the solution selected looks weak, or the assumptions testing will be too expensive. Alternatively, they can ask the team to do further work to firm things up. For example, running the design past a systems expert.

4. Test results and build estimate

Build estimate

Having removed the significant assumptions, particularly those relating to feasibility (can we build it?), we can now estimate the build stage more accurately. The team should now create a detailed backlog. User stories and tasks are recommended for this.

Decision to proceed

This is the second stage gate designed for senior leadership to decide whether to proceed or stop. Typical reasons for stopping would be poor results on the assumptions testing and high build estimates.

Build (not a canvas section)

Progress is tracked here. The product burnup is a useful artefact for understanding the speed at which the team is delivering and the level at which scope is increasing. Delivery risks and issues should also be carefully managed.
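The burnup described above boils down to two series: cumulative completed work and total scope per sprint. A minimal sketch with illustrative numbers only, including a naive velocity-based forecast:

```python
import math

# Illustrative burn-up data: points finished per sprint, and total scope
# at the end of each sprint (scope creep shows as the total rising).
completed = [8, 10, 9, 12]
scope = [60, 60, 65, 65]

done = 0
for sprint, (points, total) in enumerate(zip(completed, scope), start=1):
    done += points
    print(f"Sprint {sprint}: {done}/{total} points ({done / total:.0%})")

# Naive forecast: remaining scope divided by average velocity.
velocity = done / len(completed)
remaining_sprints = math.ceil((scope[-1] - done) / velocity)
print(f"Forecast: ~{remaining_sprints} more sprints at current velocity")
```

Plotting the two series as lines makes both delivery speed and scope growth visible at a glance, which is the burnup’s main value over a burndown.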

We may also run additional testing such as:

  • Alpha testing - internal user testing to check quality and overall functionality.

  • Beta testing - testing with a subset of real users.

Assess (not a canvas section)

The results of the MVI’s OKRs should be reported back whenever they become available, such as in a weekly product meeting. If very poor results come back, you may reconsider your overall strategy rather than waiting for a formal review meeting. When enough MVIs are delivered, you will be able to assess the overall OKR results in the Strategic Area. It is very beneficial to set up automated data gathering; if data is hard to get, results tracking becomes difficult and people give up. The picture below represents how MVIs are delivered and results are tracked: