Labs

Tactile, responsive, and personalized mini textbooks

FrameLabs interface demo overlay

abstract

Added an AI layer on top of a previous product, rethinking its use, and developing it from concept through launch.

My Contribution
Designed a UI by simplifying interaction patterns and researching with prototypes. Developed the frontend and AI infrastructure.

Outcome
Built a multimodal reading interface that is fully responsive, balances concerns from students and teachers, and is designed for learning - but not necessarily for delivering solutions.

Role: Design Lead and Developer

Functions: Product Strategy, UX Research, UI/UX, Development

This project began as a weekend experiment researching how AI could be used in a previous product.

Good news was it worked! Bad news was we didn't know for what.

Whilst tinkering, we discovered:

  1. With a series of models, we could generate data compatible with our previous app
  2. Inside that app, users could further prompt and add content natively
FrameLabs interface screenshots: first and second views

Early generated pages relied on a no-code tool that required us to manually copy/paste a massive returned story data object - layouts, text, images, etc. - into our codebase. The only working UI component was at the bottom left, generating content that we slotted into new pages.
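For a sense of scale, the returned object looked roughly like the sketch below - the field names are illustrative, not the actual schema:

```typescript
// Illustrative sketch of the generated story payload (field names are hypothetical).
// Layouts, text, images, and sources all arrived in one large object per story.
interface StoryPage {
  layout: "hero" | "split" | "gallery"; // which page template to render
  title: string;
  blocks: { kind: "text" | "image"; content: string }[]; // ordered page content
  sources: string[]; // citations surfaced on the page
}

interface Story {
  id: string;
  topic: string;
  pages: StoryPage[]; // the full generated mini textbook
}
```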

Challenge

From a previous journalism product, understand how implementing AI changes its use, design an interface for that use, and build a consumer-ready product.

Goal

Find a fitting problem space, then launch and try to grow.

Step 1: Research

Starting from the bottom, we considered current product strengths and weaknesses, studied where AI was being used, and explored different potential uses.

Product Review

Reviving Frame's earlier article product, we revisited prior feedback.

Previous users liked the visual reading experience, but at times expressed the interface felt flat and unresponsive.

Secondary Research

Articles, anecdotes, peer-reviewed papers, industry reports, and competitor analyses all contributed to an understanding of who was using AI and how.

Insights here informed who we sought to have further conversations with.

Exploratory Conversations

Hypothesizing a few domains, we began having conversations with people, builders, and leaders in different industries.

Many offered enthusiasm, but only a few expressed needs that might be addressed.

We considered marketing materials, internal company trainings, and travel tours amongst a few other use cases - eventually settling broadly on education because:

Deliverables

AI is a great help for getting from A to B - and has a generous preference for doing the heavy lifting: providing the writing, the code, the deck, etc.

Still being understood is its efficacy when "B" is a deeper understanding - and, to be fair, what that even means and whether it's needed.

There was opportunity to create an interface designed around asking questions rather than providing deliverables.

Misalignment

AI was being deployed quickly in educational settings - and however useful tools were, they were similarly controversial. Teachers, students, administrators, parents, and others hold diverse opinions on AI and how it should be used.

Solutions in Ed Tech would likely be plural, with room for different innovations tailored to different concerns.

Preliminary product concept: quick, visual, interactive textbooks where everything on the page is responsive, providing custom learning journeys.

Step 2: Prototyping

Pushing on all fronts - frontend, backend, and AI architecture - I developed a working prototype to validate the concept and gather feedback.

With a cohort of high school students and teachers, we gathered early UX feedback and discussed why AI tools were or weren't working in their classrooms.

The prototypes were primitive, but students could create stories and give first thoughts on what the product needed to become useful to them.

FrameLabs interface screenshots: first, second, and third views of the prototype

Though functioning, the product was an early prototype. The initial generation phase was unreliable and took 4-6 minutes, and the UI was more of a sketch. Critically, though, users could create accounts and save stories.

Insights:

Gen Time

Generation time was inversely correlated to the expected capabilities of the app.

We needed to shorten the response time, or add features.
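One obvious lever for response time - sketched below with the actual model call abstracted away, so the names are illustrative - is generating an outline quickly and then expanding the pages in parallel, so wall-clock time tracks the slowest page rather than the sum of all pages.

```typescript
// Minimal sketch: a fast outline pass, then every page expanded concurrently.
// `generate` stands in for whichever model call produces each piece (hypothetical).
type Generate = (prompt: string) => Promise<string>;

async function buildStory(topic: string, generate: Generate): Promise<string[]> {
  // One quick pass to decide the page titles.
  const outline = (await generate(`List 5 page titles for a mini textbook on ${topic}`))
    .split("\n")
    .filter(Boolean);

  // Expand all pages at once instead of one after another.
  return Promise.all(outline.map((title) => generate(`Write the page "${title}" about ${topic}`)));
}
```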

Validating the Use

There was enough positive feedback to continue building, and while education was one use, it wasn't the only one.

A tension surfaced: narrowing the use case, design, and roadmap meant potentially closing off other uses.

Students ↔️ Teachers

Students and teachers had different - sometimes opposing - interests.

While our product didn’t fulfill students’ desire for quick solutions, its awkward inability to do so was uniquely appealing to educators.

Levels of Depth

Google, ChatGPT, deep research, NotebookLM, books, videos, etc. all respond at different depths over different lengths of time. The latest models can take minutes, but give comprehensive, well-formatted responses.

We wanted to design a surface that responded quickly, but could grow in any direction at any length, enabling users to shore up gaps as they go.
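A minimal sketch of that shape - each node renders its short answer immediately, and deeper branches are only generated when the reader asks for them (the structure below is illustrative, not our actual schema):

```typescript
// Hypothetical shape for a surface that answers fast but can grow in any direction.
interface ExplainerNode {
  question: string;
  summary: string;            // short answer shown right away
  children?: ExplainerNode[]; // deeper branches, generated lazily when the reader asks
}
```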

Step 3: Developing

The strengths of our interface were chunking information across pages, combining text and visuals, the playful ability to interact with anything on the page, and (to teachers) its inability to write essays.

Everything else remained in question:

  • Design of the UI/UX across devices
  • What types of questions this tool best answers, and how critical answering them is (an ongoing cat and mouse)
  • Quickest roadmap to a useful, usable product
  • Performance of the AI architecture

^ That is to say, pretty much everything. We continued prototyping with a cohort of students and teachers, and produced the following.

* Below it are deeper dives into the process of how we arrived here, addressing the above questions.

Cleaning the top region of the pages gave more breathing room for layouts. As users began moving through stories more naturally, we focused on improving microinteractions and animations.


Toggle states add cleanliness, pages have highlights for legibility, and sources moved outside text blocks amongst other improvements.

The primary signal of success for the UI above was first-time users picking it up intuitively, right off the bat. Beyond that, we started to look for a growing user base and returning users requesting features.

We’ve begun building a suite surrounding the core interface, improving its utility at different points throughout the education cycle (first lesson ↔️ test and review).

simplifying the story ui

The first UI, while functional, left users largely unclear on how to use it. That's okay.
The first thing to design was the UX for interacting beyond the initial generation.

For pace, I opted to skip Figma and prototype directly in code: quicker rounds of live user testing and steady dev progress, at the expense of a perhaps messier design process.

Desktop

Early examples of generations and layouts here.

As we began designing, a fun interplay evolved between the improving underlying AI infrastructure and the UI it served.

That, along with UXR feedback, incrementally shifted what we were designing for.

FrameLabs interface screenshots: five iterations of the desktop UI

Early progress included establishing consistent layouts and iterating on the UI. Much time was spent here on backend infrastructure - auth, accounts, storage, etc.

Example intermediate step in the right direction.

Generating suggestions was a good update, lowering the friction to prompt pages, but the first design was unclear.

Toying with nesting, consolidating controls, and UX writing.


Generating suggested branches was a good idea - helpful both contextually and in communicating the feature - but the suggestions weren't quite intuitive enough. Also evident here: ongoing battles with text formatting.
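Under the hood, suggested branches amount to asking the model for a handful of follow-ups grounded in the current page - roughly the sketch below, with the model call abstracted away and the names illustrative:

```typescript
// Illustrative sketch of suggested branches: a few follow-up prompts generated
// from the current page, so readers can tap instead of typing.
type Generate = (prompt: string) => Promise<string>;

async function suggestBranches(pageText: string, generate: Generate): Promise<string[]> {
  const raw = await generate(
    `Given this page, suggest 3 short follow-up questions a student might ask:\n${pageText}`
  );
  // One suggestion per line; each is rendered as a tappable branch.
  return raw.split("\n").map((s) => s.trim()).filter(Boolean).slice(0, 3);
}
```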

Current desktop UI whilst in a story.

A one-click action to the core function is highest in the hierarchy, giving users quick access to the feature.

With suggested options and a prompt input box, we finally saw users navigating through stories intuitively.

Still haven't quite figured out the UX writing.


Toggle states add cleanliness, pages have highlights for legibility, and sources moved outside text blocks amongst other improvements.

Mobile

The product wasn’t really useful on mobile with generation times north of 1-2 minutes; once we got under that mark, we started building a complementary mobile UX.

Still prioritizing a sense of tactility and responsiveness, but designing for shorter experiences and inquiries.

Cursor’s adaptation of the first desktop UI - a starting point.

It worked, but it was crowded and confusing, and the page layouts needed some love.

We initially brought in all of the desktop functionality; from here we began de-cluttering the UI, trimming and nesting features to provide a more focused UX.

Early mobile UI iterations followed desktop updates, while trying to strike a balance between utility and information overload for new users.

We wanted to give everything “juice” - pages, images, text boxes, controls - all responding, even if only subtly. Little wiggles, zooms, and highlights all contribute to a sense of liveliness in the pages.
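On the web, that kind of juice can be as small as the sketch below - a hover wiggle using the standard Web Animations API, with the selector and function names purely illustrative:

```typescript
// Minimal "juice" sketch: a small wiggle-and-zoom on hover so a page card feels alive.
function addJuice(card: HTMLElement): void {
  card.addEventListener("pointerenter", () => {
    card.animate(
      [
        { transform: "scale(1) rotate(0deg)" },
        { transform: "scale(1.03) rotate(-1deg)" },
        { transform: "scale(1.02) rotate(1deg)" },
        { transform: "scale(1) rotate(0deg)" },
      ],
      { duration: 250, easing: "ease-out" }
    );
  });
}

// Hypothetical selector for the story page cards.
document.querySelectorAll<HTMLElement>(".story-page").forEach(addJuice);
```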

The strict single-screen height of pages created dev challenges, requiring solutions that initially felt awkward - for example, the input field above. Text formatting was also a work in progress; here, for example, sources are a bit disruptive to the text’s legibility.

Current UI designed for briefer experiences.

We also organized the model network around UX improvements - here, for example, leveraging a context object to enable the one-click action, making the core conversation feature easier to use.
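The idea is roughly the sketch below: because the app already knows where the reader is, a single tap can mean "go one level deeper here" without anyone typing a prompt (field and function names are illustrative):

```typescript
// Hypothetical sketch of the context object behind the one-click action.
interface StoryContext {
  topic: string;
  currentPage: string;       // title of the page being read
  recentQuestions: string[]; // what the reader has already asked
}

function oneClickPrompt(ctx: StoryContext): string {
  return (
    `The reader is on "${ctx.currentPage}" in a story about ${ctx.topic}. ` +
    `They have already asked: ${ctx.recentQuestions.join("; ") || "nothing yet"}. ` +
    `Generate the next page, going one level deeper.`
  );
}
```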

Cleaning the top region of the pages gave more breathing room for layouts. As users began moving through stories more naturally, we focused on improving microinteractions and animations.

As the initial generation time shrank from minutes to seconds, the product's intent changed, guiding the UI's design.

A simplified mobile environment now successfully serves quick breakdowns, while still able to flex to deeper explanations.

The desktop UX called for a richer feature suite. UXR and development moved on from the core story-page interface to the functions surrounding it.