A software delivery plan is usually based on a mix of facts and assumptions that leads us down what we think is the correct approach. That is okay: you can’t be sure of anything until the product is in the hands of actual users and you receive feedback based on their experience.
There is hardly ever an established theory you can apply to guarantee a 100% success rate when you set out to build a new application or product. And if you don’t know what success means, you have no way of telling whether you are having a positive impact at all!
First, let us agree that we want to measure success by outcomes instead of outputs. With defined outcomes, we have a much better way of tracking what value or impact our services deliver for our users than we would by gauging success by how many features we ship or how many bugs we fix.
However, it is much easier to measure outputs than outcomes. Outputs are usually a tangible set of planned features that can be checked off once delivered, whereas outcomes capture the impact we make and how satisfied our users are. Hence, more effort goes into measuring success with outcomes than with outputs.
So, how can we set our outcome-based goals, figure out how to measure our impact, and know if we are being successful? Can we lean on science to accomplish all that, and to ensure we are working on the right things? Instead of following a set plan, could we take a more scientific approach in driving towards success?
Fortunately, there is already a scientific approach defined and used as a practice in software development. Perhaps it is exactly what you and your team are looking for to jolt you in the right direction. It is called Hypothesis Driven Development (HDD) and focuses on experimentation to guide you towards reaching your goals.
Basically, it leans on formulating a hypothesis and then proving or disproving it. You literally make an educated guess about a certain phenomenon and test it using the steps of the scientific method: Observation, Question, Hypothesis, Experiment, Results, and Conclusion.
It is worth calling out, as you might have already concluded, that HDD is well aligned with agile methodologies, where we rely on early and frequent feedback loops. It promotes experimentation, learning, and adaptation, as well as iterative product development. The scope of the method can be summarized in the following steps:
First, set your desired outcomes, i.e., what are the results you are trying to achieve?
Example: Improve user satisfaction by 25%
Next, define a hypothesis for how you think the desired results can be achieved. What are our best ideas that will drive us toward the outcomes?
Example: We believe that more than 50% of users will select a customizable theme for their account, improving their engagement by 10%
This next step is crucial for success: coming up with a way to prove or disprove our premise. We need to establish quantitative and actionable metrics that can validate or invalidate our hypothesis.
Example: Implement user engagement tracking to measure activity before and after the experiment ships
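As a rough sketch of what that before-and-after measurement could look like in code, assuming a simple cohort-based engagement metric (the cohort numbers, names, and data shapes here are entirely hypothetical, not a real analytics API):

```python
from dataclasses import dataclass

# Hypothetical aggregate from an analytics pipeline: one snapshot
# of a user cohort over a fixed time window.
@dataclass
class EngagementWindow:
    active_users: int   # users with at least one session in the window
    total_users: int    # all users in the cohort

def engagement_rate(window: EngagementWindow) -> float:
    """Fraction of the cohort that was active in the window."""
    return window.active_users / window.total_users

# Measure the same cohort before and after the experiment launches.
before = EngagementWindow(active_users=4_200, total_users=10_000)
after = EngagementWindow(active_users=4_830, total_users=10_000)

# Relative uplift: how much engagement changed against the baseline.
uplift = engagement_rate(after) / engagement_rate(before) - 1
print(f"engagement uplift: {uplift:.1%}")  # 15.0%
```

The point is not the arithmetic but that the metric is defined and instrumented before the experiment runs, so both windows are measured the same way.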
We now rely on our process to build and release our changes out in the wild. We implement our experiment(s) to enable us to observe our ideas in the hands of our users/customers.
If we managed to implement a change with metrics that can tell us how our experiment fared, we should be able to validate our hypothesis.
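A minimal validation check might then compare the measured results against the thresholds stated in the hypothesis. The figures below are hypothetical point estimates; in practice you would also want a significance test (for example, a two-proportion z-test) before declaring victory:

```python
# Hypothetical experiment results. The targets come straight from the
# hypothesis: ">50% of users select a customizable theme" and
# "engagement improves by 10%".
theme_adoption = 0.56      # fraction of users who selected a theme
engagement_uplift = 0.12   # relative change in engagement after launch

ADOPTION_TARGET = 0.50
UPLIFT_TARGET = 0.10

validated = (theme_adoption > ADOPTION_TARGET
             and engagement_uplift >= UPLIFT_TARGET)
print("hypothesis validated" if validated else "hypothesis invalidated")
```

Framing validation as an explicit, pre-agreed check like this keeps the team honest: the criteria are fixed before the results come in.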
If our hypothesis was validated, we just continue down that path, feeling reinforced that we are onto something. We now have new information that we can leverage to continue building an even better product.
If the hypothesis was proven invalid, don’t be disheartened; that is okay too. It still gives us valuable information: we need to come up with something else to make an impact and move us toward our outcomes. We adjust and tweak our approach.
In either scenario, we craft new hypotheses, run the next batch of experiments, and continue refining our product.
There are many more aspects of this methodology that we could explore in detail. How do we come up with a good hypothesis in the first place? What are the best metrics to apply? How do we validate accurately so we pivot in the right direction? Common sense and the principles of agile still apply. For example, don’t overcomplicate the process of framing the hypothesis. It should be a product of ideas crafted by the collective and informed mind - the same team that is discovering, brainstorming, refining the backlog, sprint planning, and delivering your product today. You vote on and prioritize the best ideas and learn as you go.
This is really just another way to implement iterative product development, keeping the emphasis on true value, through the lens of science. Exploring your assumptions and running experiments to prove them out should bring confidence that you are always moving in the right direction, rather than basing your approach on unproven assumptions. True value, after all, comes from knowing, not assuming, that you made a positive impact.
Before you put your lab coats on, and reach for your test tubes, think of this as a point of view, an approach to tweak in a way that works best for you. We always want to work on the “Right Problem”, and HDD might just be your ticket.
For those wanting to dig deeper and learn more about Hypothesis Driven Development, here is some good reading: