#strategy
#testing101
    The User Story Value Hypothesis

    I’ve been working on a concept for a series of webinars and workshops around Test Process Orchestration and the New Role of the QA Architect.

    Among the concepts I’ve been developing around these areas, there is one that is really important and that most of us fail to pay enough attention to, so I want to bring it to our attention now.

    I am talking about the User Story Value Hypothesis, or UVH for short.

    Why are we even developing this feature?

    This sounds like a simple question, maybe even a naive question, and because of this we sometimes skip it altogether.

    Needless to say, not asking the question and failing to make sure we have a clear answer is both wrong and dangerous.

    Logic dictates that if we are going to invest time and effort to define, write, test, deploy and operate a new feature, then we are expecting it to have a positive impact on the product.

    This desired impact can be an increase in user activity, a positive jump in the company’s revenue, or, in the case of maintenance operations, even preventing potential issues from happening.

    The problem is that sometimes, maybe even often, we assume that everyone on the team is aware of and aligned on the positive impact we are looking to achieve, while in practice this might not be the case.

    Why is this alignment important?

    After all, we are talking about the same feature, so why does it matter what we want to achieve with it once it is released?
    There are a number of good reasons; let’s review a couple of examples.

    If the product designer is targeting a given impact on the usability and accessibility of the product, while at the same time the developer believes she is aiming at improvements in response time, then the way the feature is developed may differ from what the designer intended.

    This may be even worse if the Operations people, who will deploy, run and monitor this new feature, are not aware of what its impact on the product should be at all. Lacking this knowledge, they may not deploy an important monitor in the system, leaving the Product Management team unable to evaluate whether the idea they brought forward, and the small project everyone worked on, is effective or whether some corrections are required.

    Trying to explain it in different terms, it’s like being in a family where someone says “let’s go out to have some fun today” thinking they are going to the beach, while the person packing believes they are going to a fancy restaurant for lunch, and the person driving takes everyone to the mountains and parks the car at the beginning of a nice trek in the woods.
    Everyone had good intentions, but they ended up fighting and arguing about who was to blame for the fiasco…

    Define where you are heading so everyone knows what to do

    As in the previous example of the family trip, the important thing is to ensure we are all aligned and pushing towards the same end result, each with his or her own job and responsibility.

    The idea is simple: if you are doing something, it is because you want to change a behavior, so clearly define and state what this behavior is and what change we are aiming to achieve.

    Sounds easy, right? Well, not always…
    So let’s review an example of how this can work.

    Let’s say you are working on an e-commerce site. You are adding a new feature that will allow your shoppers to see what their friends bought in the last month from your site.

    In this case you are looking to increase overall revenue, and you can define the UVH as “an increase of between 3% and 5% in shopping from users who enabled this feature”. This will allow us to understand, once the feature has been released, whether we are meeting our goal or not.

    Still, many times we need to be more specific, since the end result may depend on intermediate milestones along the way.

    For example, here we are talking about a feature that is not activated by default for your users, so you may also need to measure how well received and adopted this feature is. In this case you can add another UVH stating that within 3 months of launch, you expect more than 33% of users to have this feature active in their accounts.

    Why do we need this? Because it is a prerequisite for your Value Hypothesis to materialize, and so we need to measure it and ensure we are meeting targets on it as well.
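    To make the idea concrete, here is a minimal sketch of how a team could record and check these two hypotheses. The class name, field names and the exact wording of the verdicts are my own assumptions for illustration; only the target numbers come from the e-commerce example above.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ValueHypothesis:
        """A User Story Value Hypothesis: a metric plus the change we expect to see."""
        metric: str
        low: float   # lower bound of the expected change (as a fraction)
        high: float  # upper bound of the expected change (as a fraction)

        def evaluate(self, observed: float) -> str:
            """Compare the observed value against the target band."""
            if observed < self.low:
                return "below target: investigate and correct course"
            if observed > self.high:
                return "above target: understand why and refine the hypothesis"
            return "on target"

    # The two hypotheses from the e-commerce example
    revenue_uvh = ValueHypothesis(
        "shopping increase among users who enabled the feature", 0.03, 0.05)
    adoption_uvh = ValueHypothesis(
        "users with the feature active within 3 months of launch", 0.33, 1.0)

    print(revenue_uvh.evaluate(0.042))   # a 4.2% increase sits inside the 3%-5% band
    print(adoption_uvh.evaluate(0.28))   # 28% adoption misses the 33% prerequisite
    ```

    Writing the hypothesis down in a form like this, even in a shared document rather than code, forces the team to name the metric, the band, and the owner of the measurement before release.
    
    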

    Trying to be specific without being inflexible

    It is at this stage that I want to point something out, as it may be a problem that will arise if you try to work with the UVH approach.
    Different people respond to targets differently.

    For some of us targets are just beacons that show the direction we are headed. Yes, we want to increase sales and we are aiming at a 5% target, but when I see this I think to myself that if we do 4% or 6% it will also be OK.

    For other people numbers are almost sacred. For them hitting a 4.2% sales increase will be disappointing and be defined as a failure.

    And yet for others, specific numbers can become a blind target. For these people hitting 5% is a must, and they will do it even if it comes at the expense of internal fighting and/or the well-being of some of your employees. Why? Not because it was a do-or-die situation, but because this is the target that was defined for us…

    As you may understand from what I wrote, personally I believe targets are important as beacons. And we should use them to set a course and correct it when needed. But I also know other members of my team look at targets differently, and so I need to make sure to convey this message clearly as well.

    Visibility to help our stakeholders make the right decisions

    I have been writing for years that the goal of testing is to “provide visibility so that our stakeholders can make the right decisions”. The concept of the UVH is only an extension of the information arsenal that is at your disposal.

    I invite you all to start looking at the way your teams define their User Stories, and look for ways in which you can integrate the UVH concept in order to get better visibility into, and alignment on, the effect of our work on the business.

    I will also be happy to learn if you have other aspects to keep in mind, or lessons learned from your experience while working with this or other similar approaches.

    About Joel Montvelisky

    Joel Montvelisky is a Co-Founder and Chief Solution Architect at PractiTest. He has been in testing and QA since 1997, working as a tester, QA Manager and Director, and Consultant for companies in Israel, the US, and the EU.
    Joel is a Forbes council member, and a blogger. In addition, he's the founder and Chair of the OnlineTestConf, the co-founder of the State of Testing survey and report, and a Director at the Association of Software Testing. Joel is a conference speaker, presenting in various conferences and forums worldwide, among them the STAR Conferences, STPCon, JaSST, TestLeadership Conf, CAST, QA&Test, and more.
