Six Principles for Creating Impactful Analytics Reporting
HealthSparq provides web-based SaaS products to our clients, so we rely on analytics tools to capture end users' interactions with our products and programs. Accurate, engaging reporting is key to helping our clients understand how our tools are performing and which of the member engagement actions they implement have the most impact. Needless to say, a robust analytics reporting program is central to our business and to successful client relationships.
Previously, we used the dashboarding capabilities of industry-leading web analytics tools to provide ongoing reporting to our clients. Unfortunately, this system came with several limitations: we could get raw numbers over to our clients, but the impact of the reports stopped there. We knew we could (and had to) do better, so our Analytics organization chose to bring reporting in-house. We empowered our analysts with state-of-the-art dashboarding and visualization software and developed data ingestion and ETL pipelines that opened up nearly limitless possibilities for reporting. This gave us the flexibility to create reports the way we wanted them to look, and we were prepared to own the process from start to finish.
This shift from limitations to possibilities brought new challenges. Feeling the weight of these possibilities, we moved past option paralysis by discussing many open-ended questions, such as:
- What should reports look like?
- Which metrics should they cover?
- Which visualizations should be used to communicate those metrics?
- How should raw data be processed to make insights easier to consume?
- How frequently should we deliver reports?
Our north star during these conversations was impact: ensuring that, whatever choices we made, they would make our reports more impactful for their consumers.
The new reports were first delivered to clients over a year ago, and since then we’ve learned that some choices we made were spot on, some were off, and (perhaps most importantly) we forgot to discuss some important questions from the get-go. Iterating on these learnings, we have improved our reporting since that initial release, and there is plenty of work on our roadmap to continue to make our reports better.
As we develop new reporting today, we keep in mind six guiding principles that summarize what we've learned and help us deliver impactful reporting from a report's very first release. If your organization is embarking on developing its own reporting, we share these principles with you and hope they make your roll-out as smooth as possible.
Tell a story
Start constructing your report with the story you want to tell in mind. Once the story is identified, you can begin to discuss the metrics and visualizations that best tell it.
One process that helps here is listing the specific questions you want to speak to in a report, then ordering them into a narrative with a logical flow. For example, the monthly report that accompanies our flagship product, HealthSparq One, begins by answering the important question: how many visits were there last month? The pages that follow dive deeper into questions about those visits: When did they happen? Were they from new or returning users? How long were they, on average? Were they on desktop or mobile? At its core, HealthSparq One is a search tool, so once we've spoken to those general questions we dive into questions about searches: How many searches were there last month? How many per visit? Which search terms were most popular?
Outlining, discussing and iterating on the narrative before beginning report development not only elevates the quality of the final report, it can also save loads of time. Including stakeholders and eventual end users at this step of the process is strongly encouraged.
For additional resources here, many of our analysts have read Cole Nussbaumer Knaflic’s storytelling with data multiple times, and highly recommend it!
Target a specific audience
Those who consume your reporting will vary in technical expertise and familiarity with the data you are presenting, so keep your target audience in mind and tailor the story to them. It could be that you develop multiple reports that speak to the same topic, differing only in the targeted audience.
For example, many of our products have three reports dedicated to them.
- One for the product owner/management team that provides details about how the product is doing and being used
- Another higher-level report for less involved internal stakeholders
- And a third, more polished and formalized version that is client-facing
We’ve also had the need to customize client-facing reporting depending on the report’s audience. For example, if a report is meant to be consumed by executives, we keep it short and begin with an introduction and executive summary to be mindful of their time and how often they switch contexts. If intended for analysts, we might augment visualizations with raw data and tables to empower analysts to dig deeper on their own. Or, if the audience is a wide range of stakeholders, we might add an appendix and include tables there so they are accessible but don’t disrupt the narrative.
There are many strategies for how to develop reporting depending on the intended audience, including which visuals to use, how many colors to use, and report length, to name a few. There is an entire chapter devoted to audience in Knaflic’s bestseller, mentioned previously (see chapter 4). However, the mere practice of identifying and being mindful of your audience during development goes a long way.
Follow a well-designed, consistent layout
Once you've outlined your story, identified your audience, and selected metrics and visuals that best complement that story, it's time to design a simple, attractive layout that is consistent throughout the report. Once established, use this layout for all of your reports. Doing so will help your consumers see how reports fit into the bigger picture and will make new reports feel familiar.
When developing our layout template, we partnered with our User Experience and Design team and incorporated a lot of ideas they use for our products. Jointly we developed a standardized reporting template for cover pages, section headers, pages with text only (like introduction pages), pages with two columns of text (like tables of contents), pages with one figure, and pages with multiple figures.
Each layout follows a carefully chosen design framework. We use a font family that is also used in our products for familiarity. Slide titles, logos, footers, body text and supporting text have predetermined sizes and weights. We’ve chosen default primary, secondary and accessory colors. At a higher level, each report begins with a cover, introduction and table of contents to prepare the reader for what they are about to see. Reports culminate in a glossary that defines necessary terms.
On occasion, we deviate from these predefined layouts and defaults when the need arises, but having them helps us move quickly when developing reports and ensures we stay true to a consistent look and feel with each report.
Provide context
We've found that unless you are in the weeds with data day in and day out, it is easy to forget how metrics should be interpreted or what they really say (wait, what does the bounce rate mean, again?). Help your consumers by including a glossary of definitions at the end of the report and by spelling out how to read each page (and, in some cases, what the take-home is) on the page itself. We accomplish this by including a context pane on the left-hand side of every page.
For example, in our report that trends the bounce rate, we use the context pane to remind the reader that a bounce session is a visit in which a meaningful interaction (such as performing a search or viewing benefit information) didn’t occur. We also call out the bounce rate for the reporting month and state whether it is a decrease or increase from the previous six-month average. Thus, if the reader hasn’t looked at the report in a while or doesn’t grasp the take home from the visualization itself, the context pane can help bring them up to speed.
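The dynamic bounce-rate context described above could be generated along these lines. This is a minimal sketch, not our production code; the function names and figures are illustrative:

```python
def bounce_rate(sessions: int, bounces: int) -> float:
    """Share of visits in which no meaningful interaction occurred."""
    return bounces / sessions

def context_sentence(current: float, six_month_avg: float) -> str:
    """Build the dynamic context-pane text for the bounce-rate slide."""
    direction = "a decrease" if current < six_month_avg else "an increase"
    return (
        f"This month's bounce rate was {current:.1%}, "
        f"{direction} from the six-month average of {six_month_avg:.1%}."
    )

# Hypothetical monthly figures: 3,200 bounce sessions out of 10,000 visits
sentence = context_sentence(bounce_rate(10_000, 3_200), 0.35)
```

Because the sentence is worded from the data itself, it stays accurate month after month with no manual editing.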
On some slides we use the context pane to remind the user what good or bad looks like. For example, a decreasing bounce rate is generally a good thing. We also use it to give historical context to some metrics. On our sessions trend slide we remind the reader that traffic is generally highest at the beginning of the year, during open enrollment and after engagement campaigns.
We’ve found the context pane to be a flexible, valuable element to our reports, and we continue to find new ways to use it to lead the reader through our intended narrative and drive the story home.
Provide comparisons
The latest data is more valuable when viewed through the lens of recent history. For every metric in our reports we provide historical trend lines and/or quick comparisons to recent history, making it easy to assess whether metrics are increasing or decreasing.
One question we’re frequently asked is along these lines: This metric is interesting…are you seeing the same for your other clients? This is an important question, since such a comparison is helpful in distinguishing signal from noise in recent trends. As a result, our reports include a peer group benchmark. This benchmark is hand-selected for each report to provide a meaningful comparison for the metric on display.
As an example of impact, together these comparisons make it easier for our clients to assess the effect of outreach initiatives. Suppose a health plan wants to promote colonoscopy screenings in conjunction with Colorectal Cancer Awareness Month (every March), with the aim of increasing the number of searches for colonoscopy screenings. After implementing the outreach, a high-level analysis of the campaign’s success can be derived simply by viewing our reporting the following month. Our slide dedicated to top searches will show the share of searches related to colonoscopies and will compare it with data from the previous month (to quantify the increase) and the peer group (serving as a control in this case). Granted, the plan will probably want to dig deeper into analysis of specific metrics related to the campaign, but our desire is that our standard reports can do a lot of the analytical heavy lifting for them, and these comparisons help achieve that.
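Mechanically, both comparisons reduce to simple relative deltas. A minimal sketch, with illustrative (not real) values for the share of searches related to colonoscopy screenings:

```python
def compare(current: float, previous: float, peer: float) -> dict:
    """Express a metric relative to last month and a peer-group benchmark."""
    return {
        "vs_previous_pct": round((current - previous) / previous * 100, 1),
        "vs_peer_pct": round((current - peer) / peer * 100, 1),
    }

# Hypothetical share (%) of searches related to colonoscopy screenings
deltas = compare(current=4.2, previous=2.1, peer=2.0)
```

A reader then sees at a glance that the metric doubled month over month and sits well above the peer group, which acts as the control.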
Automate everything
Regardless of how powerful a report is, without automation it will become a chore (potentially someone's full-time job) to maintain. We design all our reports with automation in mind: when reporting time comes each month, we click a button to kickstart the process, QA a few things, and then click send. Even the words that make up the context on each slide respond automatically to the data each time a report is generated, and no manual intervention is necessary to produce the report.
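At a high level, that one-click generation could be sketched as a loop that renders each slide, including its dynamic context text, from the month's data. All names and numbers here are hypothetical, for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Slide:
    title: str
    context: str  # text generated from the data, never hand-edited

def build_report(metrics: dict) -> list:
    """Render one slide per metric, wording the context from the numbers."""
    slides = []
    for name, values in metrics.items():
        current, avg = values["current"], values["six_month_avg"]
        direction = "up" if current >= avg else "down"
        slides.append(Slide(
            title=name.replace("_", " ").title(),
            context=f"{current:,} this month, {direction} from the "
                    f"six-month average of {avg:,}.",
        ))
    return slides

# Hypothetical monthly metrics feeding the report
report = build_report({"sessions": {"current": 48_210, "six_month_avg": 45_900}})
```

A real pipeline would pull the metrics from the warehouse and push the rendered slides to a delivery step, but the core idea is the same: the report is a pure function of the data.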
Our aspiration for automation does introduce some limitations. For example, the context we'd like to provide can't always be automated, so we sometimes have to settle for context that can be. However, the time automation saves empowers our analysts to tackle important tasks and initiatives that can't be automated, such as research, developing statistical models and ad-hoc reporting.
The decision we made more than 18 months ago to develop and produce reporting in-house has been worth it. We’ve enhanced the insights one can glean from our reports and leveled up the impact they can have for our clients. From the moment we embarked on this journey to today we have learned a lot about how to make meaningful reports. Many enhancements from those learnings have been implemented and our roadmap is filled with plans to implement many more.
These six guiding principles distill everything we've learned along the way and have given us a great foundation for amplifying the impact of our reporting; we believe they can do the same for any analytics team. We hope you've found them useful!