Spot by NetApp · Flexera · Ocean

Turning Recommendations Into Actions

Role Product Designer
Scope End-to-end design
Methods Pendo analysis · Session recordings · Usability testing · AI-assisted synthesis
Year 2024

Five percent. That's how many users were engaging with Recommendations.

Spot Ocean's dashboard already had a Recommendations section. DevOps, FinOps, and SecOps teams could use it to improve cluster cost, security, and configuration. In practice, almost no one did.

The number came from Pendo: 5% engagement. Not because the feature was hidden, but because when users found it, it didn't give them enough reason to act.

5%
Engagement rate on the existing Recommendations feature

After analyzing session recordings and usage data, a clearer picture emerged. The recommendations felt generic. There was no signal for what each one actually meant, how serious it was, or what you were supposed to do with it. The interface displayed information. It didn't invite action.

Before: Recommendations buried inside the main dashboard
No dedicated space. Recommendations compete for attention inside a dense dashboard with no way to manage them as a whole.
No priority signal. Every recommendation looks the same. There's no way to know what's urgent and what can wait.
No clear next step. Recommendations appear without instruction. Users don't know if they can act, how, or if it's even their responsibility.
No relevance signal. Nothing indicates which clusters or teams a recommendation applies to, so acting feels risky.

Users weren't ignoring recommendations. They didn't trust them.

Four friction points emerged from the research. Session recordings were synthesized with AI assistance to surface recurring patterns across sessions more quickly.

No relevance signal

Recommendations were generic. Users couldn't tell if they applied to their specific clusters or not.

No trust signal

There was nothing indicating the potential value or risk of acting. Without that, inaction felt safer.

No clear next step

Recommendations appeared without any instruction. Users didn't know if they could act, how, or whether it was even their responsibility.

No dedicated space

Recommendations competed for attention inside a dense dashboard. There was no place to manage them as a whole.

Research insights: four friction points

From display to management.

The core shift wasn't cosmetic: it was about rethinking what recommendations were, not just how they looked. Instead of a passive list on a dashboard, Recommendations would become a dedicated workspace where users actively manage their optimization backlog.

That meant a new standalone page. And a set of decisions that made every recommendation feel like something you could act on, not just read about.

Impact labeling

Every recommendation carries a clear impact level (High / Medium / Low), so users can immediately know what to prioritize.

Status-based organization

Recommendations are organized as a backlog: Open, Dismissed, Processed. This turns the list into something you can actually work through.

Card-based design with clear action types

Each recommendation is a card with a concise summary, impact level, and a defined action: navigating somewhere, triggering a change directly, or simply acknowledging it.

Contextual filters and free search

Filtering by category, cluster, status, and time lets teams managing many clusters cut through the noise and find what they need.
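
To make these decisions concrete, here is a minimal, purely illustrative sketch of how they could map onto a recommendation data model. The type names, fields, and filter function are assumptions for illustration, not Spot Ocean's actual schema or API.

```ts
// Illustrative sketch only — names and fields are assumptions, not the production schema.
type Impact = "High" | "Medium" | "Low";
type Status = "Open" | "Dismissed" | "Processed";

// Each card carries exactly one defined action type.
type Action =
  | { kind: "navigate"; target: string }   // deep-link to the relevant screen
  | { kind: "apply"; changeId: string }    // trigger the change directly
  | { kind: "acknowledge" };               // simply mark as seen

interface Recommendation {
  id: string;
  summary: string;      // concise, card-level description
  impact: Impact;       // drives prioritization at a glance
  status: Status;       // makes the list behave like a backlog
  category: string;     // e.g. cost, security, configuration
  clusterIds: string[]; // relevance signal: which clusters it applies to
  createdAt: string;    // supports time-based filtering and "new" indicators
  action: Action;       // one clear next step per card
}

// Contextual filtering: cut a large backlog down to what a team owns.
function filterRecommendations(
  items: Recommendation[],
  query: { status?: Status; category?: string; clusterId?: string }
): Recommendation[] {
  return items.filter(
    (r) =>
      (!query.status || r.status === query.status) &&
      (!query.category || r.category === query.category) &&
      (!query.clusterId || r.clusterIds.includes(query.clusterId))
  );
}
```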

New Recommendations page: card layout with impact labeling
1. Flexible filtering, search, and bulk actions let users quickly focus on relevant recommendations and manage them efficiently, even at scale.
2. Status-based organization turns a passive list into a backlog you can actually work through.
3. Impact labels make the prioritization decision for users before they even open a card.
4. Every recommendation ends with one clear next step, not just information.
Explore the full prototype →

What we got wrong the first time.

Two rounds of usability testing surfaced friction that wasn't visible in the designs alone. Edge cases were stress-tested with AI to identify failure states before reaching usability testing.

The Filter button was the most significant issue. Users didn't understand they needed to "activate" filters; they expected filtering to happen inline, the way it did in most of the tools they already used. We removed the button entirely and moved to direct filter interaction.

The Unread indicator also needed work. The icon wasn't clearly communicating "new since your last visit." We refined the visual and added a tooltip to make the behavior explicit before users had to discover it on their own.

Iteration: filter interaction before and after

The project didn't ship. The thinking did.

Priorities shifted after an acquisition and the feature never reached production. But the design was complete, and so was the measurement plan.

Before the acquisition, we had defined exactly what success would look like:

Are users actually using the page?

Page entries and session depth as a baseline engagement signal. Did we fix the 5%?

Are users focusing on what matters to them?

Filter and search usage rate. Are users cutting through the noise or still overwhelmed?

Is decision-making easier?

Action completion rate per impact level. Does surfacing impact actually change behavior?

Are the recommendations worth acting on?

Dismiss rate as a relevance signal. A high dismiss rate means we're showing the wrong things.
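
As a rough sketch of how these signals could be computed from product analytics events (the event shape and metric definitions below are assumptions, not the actual Pendo instrumentation):

```ts
// Illustrative sketch only — event shape and metric definitions are assumptions.
interface RecommendationEvent {
  userId: string;
  recommendationId: string;
  impact: "High" | "Medium" | "Low";
  outcome: "opened" | "actioned" | "dismissed";
}

// Action completion rate per impact level: does surfacing impact change behavior?
function completionRateByImpact(events: RecommendationEvent[]) {
  const totals = new Map<string, { actioned: number; total: number }>();
  for (const e of events) {
    const bucket = totals.get(e.impact) ?? { actioned: 0, total: 0 };
    bucket.total += 1;
    if (e.outcome === "actioned") bucket.actioned += 1;
    totals.set(e.impact, bucket);
  }
  return [...totals].map(([impact, { actioned, total }]) => ({
    impact,
    rate: total > 0 ? actioned / total : 0,
  }));
}

// Dismiss rate as a relevance signal: a high rate means the wrong things are shown.
function dismissRate(events: RecommendationEvent[]) {
  const dismissed = events.filter((e) => e.outcome === "dismissed").length;
  return events.length > 0 ? dismissed / events.length : 0;
}
```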

Defining what you'd measure before building is the work. The model we created became the foundation for future recommendation types across the product, without needing a structural redesign each time.
