Optimizely: Opal

Optimizely is a web-based platform used by marketers to test and optimize digital marketing experiences (e.g. websites or campaigns). Opal is Optimizely's AI assistant, built to accelerate that work.


I redesigned the interaction model for Opal, Optimizely’s AI assistant, to reduce credit anxiety and help users explore confidently under a usage-based pricing system.

Skills

UX Strategy

Interaction Design

Team

1 Principal PM
1 Senior Design Director
4 Designers (Me)

Timeline

10 weeks, Spring 2025

Overview

Designing Transparent & Trustworthy AI Interactions

👎 Problem

Opal’s current design lacked trust indicators and credit transparency, making users hesitant to engage and less likely to adopt it, especially given its upcoming usage-based pricing model.

Solution

A preview-based interaction model that lets users see a partial response for fewer credits before committing to a full output. This gives users greater control and confidence.

What I did

My Contribution

Business-Informed Solutions

I aligned design proposals with Optimizely’s commercialization strategy: balancing user value with business needs. This included advocating for a subscription-based preview tier to support first-time users while managing resource usage.

Design Sprints and Critiques

I participated in weekly design critiques with Optimizely’s Principal PM and Senior Director of Product Design, where I presented work-in-progress, defended design decisions, and rapidly iterated based on direct stakeholder feedback.

Navigating unclear expectations

Optimizely’s team had open-ended questions rather than defined goals. I helped frame ambiguous problems, synthesized scattered feedback, and proactively shaped a clear design direction grounded in product and UX principles.

Business Challenge

Open Questions Around Opal

1️⃣ Initial Question: Open Questions Behind Trust in AI

We partnered with Optimizely to explore open questions around their AI assistant, Opal. Optimizely's team surfaced questions about trust, transparency, and adoption, but had not yet defined what those qualities should look like in Opal's experience.

"When do users feel confident acting on AI output, and when do they seek verification or override it?"

"What cues or signals do users rely on to initially determine the trustworthiness of agentic AI output?"

Kickoff Meeting with Optimizely to understand open questions

2️⃣ Business Constraint: Usage-Based Pricing Model ➡️ leads to Credit Anxiety

Midway through the project, the Optimizely team revealed a critical constraint: Opal would soon adopt a usage-based pricing model, where each interaction would cost credits.


Optimizely feared this would trigger “credit anxiety,” where users fear wasting credits on low-value output. Users would not just question Opal’s trustworthiness; they might hesitate to explore and adopt Opal at all.

Research

Identifying Ways to Improve Transparency

🛑 Research Constraints

🚫 No Generative Research

“We already know this is a problem; we just don’t know about the design.” — Principal PM, Optimizely

Since the problem of low transparency and trust in AI output was already understood, we were asked to focus solely on how design could address it.

➡️ Without direct user input, we could not ground our decisions in user behaviors or quotes. Instead, we relied on competitive analysis and UX heuristics.

🔬 Limited Use Case: Market Research

We lacked access to Opal’s actual users and had limited time to ramp up on its other marketing tools (like web experimentation).

➡️ We focused on Opal’s chat interface, specifically the market research use case: the most defined and accessible interaction.

Heuristic Evaluation of Opal

We conducted a heuristic evaluation (Nielsen Norman’s Usability Heuristics) to identify gaps that could impact trust, clarity, and control. The top 2 violations identified were:

  1. Blank Canvas Problem

#6 Recognition Rather Than Recall

Opal's opening message is lengthy and does not highlight what sets it apart from other AI interfaces. It also provides no suggested prompts or scaffolding. Both increase users' cognitive load and friction.

  2. No Collaborative Process

#3 User Control and Freedom

#5 Error Prevention

There is no collaborative process, such as confirming intent or allowing user correction.

Research Insight 1

Opal lacked clear guidance and feedback mechanisms, making it hard for users to begin and adjust interactions. This uncertainty, combined with the fear of wasting limited credits, would lead to hesitation and lower engagement.

Competitive Analysis 1 of 2: Designing for Trust

We conducted competitive analysis across AI tools to identify trust signals, confidence cues, and oversight mechanisms. I organized the findings along the 3 stages of the user journey. I focused on Stages 1 and 2 (see image below), where users had not yet spent credits and were still deciding whether to engage. These stages would have the highest drop-off risk, because a lack of visibility and fear of wasting credits create barriers to trust and adoption.

Research Insight 2

Most trust cues appear after AI output is generated. This was too late in the journey to influence users to engage with Opal.

Competitive Analysis 2 of 2: Designing for Credits

Token Count Provides User Control

Platforms showed exact numbers (“5684 tokens”), remaining quota (“25 left”), and/or availability timelines (“More on August 9”). This helps users understand usage and make informed trade-offs.

Research Insight 3

Users need visibility and control when credits are on the line. Without clear indicators of how many credits an action will use or whether it can be reversed, users hesitate to explore and trust.

Design Concepts

Giving Users Control Before They Commit

🛑 Design Constraints

❓ Undefined & Opaque Credit System

There was no finalized logic for how credits would be calculated, charged, or displayed.

➡️ We had to design interactions that built user confidence, without knowing exactly how the credit system would work.

🎯 Optimizely's Business Goals

We had to ensure Opal was both user-friendly and commercially viable.

➡️ Our designs had to build trust while still encouraging credit use, the core of Optimizely's commercialization strategy.

Stage 1: Addressing the Blank Canvas Problem

Suggested Prompts

Provided to reduce cognitive load and clarify Opal's capabilities. Hover states give users more context, helping them choose the right prompt and provide necessary context.

Impact

Suggested prompts (without the hover state) have since been adopted by Optimizely and now appear on Opal's chat interface.

"your first idea around, like the suggested prompts is, like, we are like literally in a conversation about suggested prompts right now." — Principal PM, Optimizely (June 2025)

Stage 1.5: User Choice of Preview or Full Output

Users have control over output fidelity: Preview or Full Output

  1. Preview

Generates a short preview of the full output for fewer credits. It allows users to quickly assess the direction, structure, and relevance of the output before committing.

  2. Full Output

Generates the full output, which is complete and detailed, for more credits.
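The two output modes above imply a simple pricing rule: a preview costs a fraction of the full output and leaves the user free to back out. Since Optimizely's credit logic was not finalized during the project, the sketch below (in TypeScript) uses entirely hypothetical values and function names to illustrate the idea:

```typescript
// Illustrative sketch only: Optimizely's actual credit logic was undefined
// during the project, so the 20% preview rate and all names are assumptions.

type OutputMode = "preview" | "full";

interface CreditEstimate {
  mode: OutputMode;
  credits: number;     // hypothetical credit cost shown before generation
  refundable: boolean; // whether the user can still back out of a full spend
}

// A preview costs a fraction of the full output, letting users assess
// direction, structure, and relevance before committing full credits.
function estimateCredits(mode: OutputMode, fullCost: number): CreditEstimate {
  if (mode === "preview") {
    return { mode, credits: Math.ceil(fullCost * 0.2), refundable: true };
  }
  return { mode, credits: fullCost, refundable: false };
}

// Example: a request whose full output would cost 10 credits
const preview = estimateCredits("preview", 10); // 2 credits, refundable
const full = estimateCredits("full", 10);       // 10 credits, committed
```

Surfacing an estimate like this before generation is the design point: the user sees the trade-off up front rather than discovering the cost after the output appears.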

Stage 2: Previews to Build Confidence

We tested two preview-based flows to help users feel more in control before spending credits.

Flow A: Preview & Edit Opal's Output

  • Show users a partial preview of Opal’s response

  • Allow edits or refreshes before committing credits

  • Trigger full output only when the user is ready

Flow B: Preview & Edit User's Input

  • Users get lightweight suggestions to refine their prompt

  • Reduces likelihood of failed or low-relevance output

  • Encourages strategic, intentional usage

Business Strategy: Recommending a Subscription-Based Model

To complement our design proposals, we proposed a flat-rate subscription plan:

  • A limited number of preview edits per prompt

  • A monthly credit cap

This would let users explore Opal safely without worrying about unexpected costs, while helping Optimizely manage resource use.

Impact

Testimonials from the Optimizely Team

PC

Senior Design Director (Optimizely)

"For Preview, it's actually a very interesting concept … that's actually a very interesting way to solve the trust and token issue. Yeah, so I will definitely share that feedback back to the internal team. If there's a way that we could actually maybe use the cheaper model to read the answer or part of the response. I'm not sure how much we can manipulate LLM to do that, but it's actually very interesting."

"I discussed your team's deck containing your competitive analysis and research in a meeting about Opal and also provided my team with a link to that research in a work chat."

Final Presentation with the Optimizely Team

JZ

Principal Product Manager (Optimizely)

"There's utility in these [designs] because of credit management. Also, your first idea around, like the suggested prompts is, like, we are like literally in a conversation about suggested prompts right now."

"…the results of this have surpassed my expectations. So thank you so much. Can we get access to your Figma and this presentation? Would you guys be open to presenting to our Opal platform team?"