How Data Teams Can Succeed Through Failure With Data Prototyping

If you’ve worked on or with a data team, you’ve probably struggled to quickly generate value from your data. The challenge of rapidly producing value from data is often frustrating for everyone involved – from executives down to data analysts – and it can negatively impact team and company culture, working dynamics and morale. Developing data products often takes longer than anyone would like, the end result isn’t perfect, and the value of the data might be different from what was originally promised. I believe the root cause of these frustrations is one core problem: data product teams aren’t failing enough.

It may sound counterintuitive (failure is often frustrating, after all). But, in my experience, data product teams take too long to sort through all their ideas and determine which are viable so they can focus their investment on only the most valuable projects. Not taking the time to narrow focus translates to delays, lack of team clarity about where things are headed, and competing priorities that rarely (if ever) deliver on time.

In software development, we’ve seen a shift from the waterfall methodology to Agile, with the goal of getting feedback earlier. UX teams use rapid prototyping as a key method to fail fast. However, data product teams have been slow to aggressively leverage these same techniques. In this article I’ll describe how we test at HealthSparq and share a model for rapid prototyping that data product teams can use to deliver value more quickly.

Getting started with rapid prototyping

Regardless of how your teams operate and are organized today, if your team is ready to adopt a rapid prototyping framework, I have some good news: UX teams have explored this area and have a lot of great models. Jared Spool’s books and articles are a gold standard on UX strategy and how to prototype. The process of planning, implementing, measuring and learning should be a model for all prototyping work and can be adapted to data products. Below you will find an outline of six steps for your team to follow as you begin practicing this way of working to develop products.

Step 1: Define the what

Before implementing a prototype test, you have to determine what you want to test. Do you already have a roadmap or backlog full of ideas? Are they ranked by priority? If so, great! This is where you’ll start. Pick your top priorities and start running through the rest of the steps below.

If your team doesn’t have a backlog or a prioritization process, rapid prototyping can help you prioritize ideas, but you’ll also need a roadmap planning process.

Step 2: Determine the how

Once you have candidates for rapid prototyping, you’ll need to identify the type of test you plan to run. There are four main types of prototype tests we run at HealthSparq: concept validation, data availability, viability and deployment/delivery tests. Here’s a little about each to help you decide how you’ll run your test:

  • Concept Test: A pretty basic but necessary test. This helps you answer: Is this even a good idea? Will our end users care? Will it generate value?
  • Data Availability Test: Sometimes you have a really cool idea, but you don’t know if it’s even possible to pursue due to the data needed. This test answers: Do we even have the data for that?
  • Viability Test: Beyond data, is your test even possible in the realm of your company (or even reality)? This is the test for you if you need to answer: How would we even do that?
  • Delivery/Deployment Test: Wow! You have a super cool idea! How do you get it out the door? This is the test if you need to discover what the path to deployment looks like for your idea.

These test types are the core of your process and what you will continue to cycle through as you move your products and ideas through rapid prototyping. Below you’ll find even more detail about the tests and what to consider as you run your ideas and prototypes through them.

Concept Validation Test. The concept or market validation process verifies that your end users care about the problem you’re solving and that it has tangible value. Concept validation is a standard part of most product management work, and the same tools can be leveraged for data products. The concept validation process may highlight that the concept isn’t as valuable as other work or might identify specific requirements. Concept validation naturally flows into requirements and discussions about data availability.

Things to consider in this test:

  • Who are the end users for this concept?
  • Do they find it valuable?
  • What is the estimated value?
  • Do they find it more valuable than other concepts?

Data Availability Test. Data products succeed or fail based on whether the necessary data is available and usable. If the idea you are testing proposes to use new data, then an availability test may be necessary. These validations focus on different areas depending on whether you need to acquire the data or understand it. In cases where the data is easy to acquire, you’ll likely focus on understanding the data; when the data is more restricted, you’ll also want to drill into your data acquisition plan. A quick profiling pass, sketched after the list below, can answer several of these questions in minutes.

Things to consider in this test:

  • Do we already have access to the data or will we need to acquire it?
  • Will there be licensing agreements or costs associated with getting the data?
  • Does the data have restrictions on its use due to contracts or legal obligations?
  • What is the data format and are transformations necessary?
  • How and how often will we ingest the data?
  • Are there data quality issues we’ll need to address?
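If a sample of the data is already in hand as a file, a quick profiling pass can turn several of the questions above into concrete answers within a single working session. Here’s a minimal sketch in Python, assuming a hypothetical CSV extract; the file name, expected columns and null threshold are illustrative, not part of our actual process:

```python
# A quick data-availability check: can we load a sample, and is it complete
# enough to prototype with? File name, columns, and threshold are illustrative.
import pandas as pd

EXPECTED_COLUMNS = {"member_id", "procedure_code", "billed_amount", "service_date"}
MAX_NULL_FRACTION = 0.10  # arbitrary bar for "usable enough to prototype"

def profile_extract(path: str) -> dict:
    """Load a sample extract and summarize whether it's usable for a prototype."""
    df = pd.read_csv(path)

    missing_cols = EXPECTED_COLUMNS - set(df.columns)
    null_fractions = df.isna().mean()            # share of nulls per column
    worst_null = float(null_fractions.max()) if len(df.columns) else 0.0
    duplicate_rows = int(df.duplicated().sum())  # exact duplicate records

    return {
        "rows": len(df),
        "missing_columns": sorted(missing_cols),
        "worst_null_fraction": worst_null,
        "duplicate_rows": duplicate_rows,
        "usable": not missing_cols and worst_null <= MAX_NULL_FRACTION,
    }

if __name__ == "__main__":
    print(profile_extract("claims_sample.csv"))  # hypothetical sample file
```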

Viability Test. For an organization that already has data products in the market and a robust data platform, viability tests will likely make up most of your tests. Viability tests can cover a wide variety of issues that may arise during development, and the type of viability test can vary based on your existing knowledge. In early-phase prototyping, an exploratory data analysis can help you understand the problem space and lead to new insights. With a more mature idea, you might focus on developing a lightweight model to measure performance (a minimal sketch of this follows the considerations list below).

In some cases, viability tests will require interaction with production systems. If that is the case, you have a few options, such as simulating the process so that you can still validate the basic concept you want to test. If you have an A/B testing framework, you could also deploy the prototype to a small portion of your production traffic to validate results. This allows you to test while limiting the impact on your production environment.
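If you do route a slice of production traffic to a prototype, the split can be as simple as deterministic hashing on a stable identifier. Here’s a minimal sketch; the member ID key, the salt and the 5% exposure are illustrative assumptions:

```python
# Deterministic bucketing for a limited prototype rollout. The member ID key,
# the salt, and the 5% exposure are illustrative assumptions.
import hashlib

ROLLOUT_FRACTION = 0.05  # expose the prototype to roughly 5% of traffic

def in_prototype_bucket(member_id: str, salt: str = "viability-test-1") -> bool:
    """Return True if this member should be served the prototype."""
    digest = hashlib.sha256(f"{salt}:{member_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return bucket < ROLLOUT_FRACTION

# Route each request to the prototype or the existing path.
if in_prototype_bucket("member-12345"):
    result = "prototype model"  # call the prototype here
else:
    result = "current system"   # fall back to the existing path
print(result)
```

Because the hash is deterministic, the same member always lands in the same bucket, which keeps the limited rollout consistent across requests.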

Things to consider in this test:

  • Do we already understand the data or should we perform an exploratory data analysis?
  • Is the proposed approach viable? Can we build a lightweight prototype of the concept to test its performance?
  • What are the performance requirements for a production system and how far off are we?
  • Can we leverage a simulation or A/B testing framework to validate results?
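One cheap way to answer the lightweight-prototype question above is a throwaway baseline: if a simple, untuned model on a sample can’t beat a naive benchmark, you’ve found a fast, inexpensive failure. Here’s a minimal sketch, assuming a hypothetical tabular sample with numeric features and a binary target column; the file name, column names and model choice are all illustrative:

```python
# A lightweight viability check: can a simple, untuned model beat a naive
# benchmark on a sample? Assumes numeric features and a binary "target" column;
# the file name, columns, and model choice are illustrative.
import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("prototype_sample.csv")  # hypothetical sample extract
X = df.drop(columns=["target"])
y = df["target"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

# Naive benchmark: always predict the majority class.
benchmark = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

# Cheap "real" model: logistic regression with no tuning.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

benchmark_auc = roc_auc_score(y_test, benchmark.predict_proba(X_test)[:, 1])
model_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

print(f"naive benchmark AUC: {benchmark_auc:.2f}")
print(f"simple model AUC:    {model_auc:.2f}")
# If the simple model barely improves on the benchmark, the idea may not be
# viable as framed -- a cheap failure worth documenting and sharing.
```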

Deployment & Delivery Test. Finally, if your data product team has a science experiment problem where they already run lots of prototypes but fails to release them to production, it might be the case that you need more prototypes about the delivery or deployment process. If you’re team hasn’t deployed something to production, can you run a prototype of that process to identify whether technical challenges are holding you back?

In other cases, you might need to prototype different delivery mechanisms. Perhaps you need to decide if an API or a data drop is the right approach for a process. That could be a good concept test, but another option would be to create a lightweight version to test your end users’ interest (a minimal sketch of such a stub follows the list below). I’d recommend checking out the Lean API methodology for further thoughts in this area.

Things to consider in this test:

  • How will we deliver the results?
  • Do we already have a well-established method to deliver results using our preferred method?
  • Do we already have a standard process to deploy to production?
  • If any of these are unclear, consider a deployment and delivery test.
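For example, before committing to a full API build-out, you could stand up a stub endpoint that serves precomputed results and watch whether end users actually call it. Here’s a minimal sketch using Flask; the endpoint path and payload are illustrative assumptions, not an actual interface:

```python
# A throwaway delivery prototype: serve precomputed results behind a stub API
# and watch whether end users actually call it. The endpoint path and payload
# are illustrative assumptions, not a real interface.
from flask import Flask, jsonify

app = Flask(__name__)

# Results a batch job might have produced earlier (hypothetical values).
PRECOMPUTED_SCORES = {
    "member-12345": {"score": 0.82, "generated_by": "prototype-v0"},
    "member-67890": {"score": 0.41, "generated_by": "prototype-v0"},
}

@app.route("/prototype/scores/<member_id>", methods=["GET"])
def get_score(member_id):
    result = PRECOMPUTED_SCORES.get(member_id)
    if result is None:
        return jsonify({"error": "no score for this member"}), 404
    return jsonify(result)

if __name__ == "__main__":
    app.run(port=8080)  # local only; this is not a production deployment
```

A data-drop prototype can be just as simple: publish a static file on a schedule and see whether anyone picks it up.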

Step 3: Run your test!

Now that you’ve determined what and how, it’s time to run the test! This step is really as simple as that. Give yourself a timeframe for running the test and stick to it. Remember, this process is all about rapid learning and weeding through ideas quickly to identify those that are best for your business. Don’t stray from your test or your schedule. You’ll get to run a new test once this entire process is through.

The final steps: Documenting and sharing your failures

You’ve run through your test and…failed. That’s great!

When working through the iteration cycle, many of the learnings will read like failures. Perhaps the concept wasn’t as valuable as assumed, or building the model was harder than anticipated. This is great news, since you learned something quickly rather than investing a huge effort in something that would have failed. Now you should evaluate why you failed. Common failures include an incorrect hypothesis, data quality issues, model challenges or lack of value. Regardless, there are two more things to consider to round out your testing process: capturing what you’ve learned and sharing the results.

Step 4: Sharing what you’ve learned

Once you’re iterating rapidly, prototyping and learning from the results, it will be important to share what you learned. You’ll want a method for sharing different kinds of results. In some cases, you’ll only need to share the results with your team. In other cases, you’ll want to share them with a wider group of stakeholders, including leadership or customers. Establishing a cadence that allows this communication will be important.

Step 5: Documenting learnings

Another key consideration is how to allow others in your organization to learn from your previous prototypes. This could include new team members, other divisions or even yourself in the future. If you don’t store this information somewhere searchable and open, the value of your prototypes will be limited. Airbnb’s Knowledge Repo is a great example of how to address this need.
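If a full tool like Knowledge Repo is more than you need on day one, even an append-only, searchable log of prototype outcomes is a big step up from nothing. Here’s a minimal sketch, assuming a shared JSON Lines file; the file location, fields and example values are illustrative:

```python
# Append prototype learnings to a shared, searchable JSON Lines log.
# The file location, fields, and example values are illustrative assumptions.
import json
from datetime import date
from pathlib import Path

LOG_PATH = Path("prototype_learnings.jsonl")

def record_learning(idea: str, test_type: str, outcome: str, learning: str) -> None:
    """Append one prototype result so future teammates can find it later."""
    entry = {
        "date": date.today().isoformat(),
        "idea": idea,
        "test_type": test_type,  # concept, data availability, viability, delivery
        "outcome": outcome,      # e.g. "failed", "promising", "shipped"
        "learning": learning,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(entry) + "\n")

record_learning(
    idea="procedure cost estimate improvements",
    test_type="data availability",
    outcome="failed",
    learning="billed_amount was missing for a large share of the sample extract",
)
```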

Step 6: Start the process all over again

That’s right, this process is all about rapid, iterative learning, and once you reach the end of the cycle, it’s time to start all over again! You’ll either run a new test on the same “what” you just took through this process, or you’ll move to the next item in your backlog or idea in your head. Once you’ve run through a few tests, you’ll start to feel the cadence of the process within your team and it will become a natural part of the way your team works through things.

By focusing on rapid prototyping, your team can identify which ideas are viable, better estimate the work to bring them to market, and more quickly deliver value to your users and organization. Identifying the areas of the iteration loop that you struggle with can also highlight where you might need to invest, whether that’s in data, infrastructure, talent or culture. If you have any questions or thoughts you want to discuss, feel free to reach out to me at keith.lomurray@healthsparq.com.