Labs
Tactile, responsive, and personalized mini textbooks
Abstract
Added an AI layer on top of a previous product, re-thinking its use, and developing it from concept through launch.
My Contribution
Designed a UI by simplifying interaction patterns and researching with prototypes. Developed the frontend and AI infrastructure.
Outcome
Built a multimodal interface for reading that is fully responsive, balances concerns from students and teachers, and is designed for learning rather than simply handing out solutions.
Role: Design Lead and Developer
Functions: Product Strategy, UX Research, UI/UX, Development
This project began as a weekend experiment researching how AI could be used in a previous product.
The good news was it worked! The bad news was we didn't know for what.
Whilst tinkering, we discovered:




Early generated pages used a no-code tool that required us to manually copy/paste a massive returned story data object (layouts, text, images, etc.) into our codebase. The only working UI component was at bottom left, generating content that we slotted into new pages.
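For a sense of scale, here's a hypothetical TypeScript sketch of that story object's shape (field names are assumptions for illustration, not the tool's actual schema):

```typescript
// Hypothetical shape of the returned story data object; illustrative only,
// not the no-code tool's actual schema.
interface StoryBlock {
  kind: "text" | "image";
  content: string;        // text body, or an image URL for image blocks
  sources?: string[];     // citations attached to a text block
}

interface StoryPage {
  layout: string;         // e.g. "split", "image-full"; layouts varied per page
  blocks: StoryBlock[];
}

interface Story {
  id: string;
  title: string;
  pages: StoryPage[];     // the whole object arrived as one blob, pasted in by hand
}
```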
Challenge
Starting from a previous journalism product, understand how implementing AI changes its use, design an interface for that use, and build a consumer-ready product.
Goal
Find a fitting problem space, then launch and try to grow.
Step 1: Research
Starting from the bottom, we considered current product strengths and weaknesses, studied where AI was being used, and explored different potential uses.
Product Review
Secondary Research
Exploratory Conversations
We considered marketing materials, internal company trainings, and travel tours, amongst a few other use cases, eventually settling broadly on education because:
Deliverables
Misalignment
Preliminary product concept: quick visual and interactive textbooks, where everything on the page was responsive, providing custom learning journeys.
Step 2: Prototyping
Pushing on all fronts (frontend, backend, and AI architecture), I developed a working prototype to validate the concept and gather feedback.
With a cohort of high school students and teachers, we gathered early UX feedback and discussed why AI tools were or weren't working in their classrooms.
The prototypes were primitive, but students could create stories and give first thoughts on what the product needed to become useful to them.





Though functioning, the product was an early prototype. The initial generation phase was unreliable and took 4-6 minutes, and the UI was more of a sketch. Critically, though, users could create accounts and save stories.
Insights:
Gen Time
Validating the Use
Students ↔️ Teachers
Levels of Depth
Step 3: Developing
The strengths of our interface were chunking info across pages, combining text and visuals, the invitation to interact with anything, and (to teachers) its inability to write essays.
Everything else remained in question:
^ That is to say, pretty much everything. We continued prototyping with a cohort of students and teachers, and produced the following.
* Below are deeper dives into how we arrived here, addressing the questions above.
Cleaning the top region of the pages gave more breathing room for layouts. As users began moving through stories more naturally, we focused on improving microinteractions and animations.

Toggle states add cleanliness, pages have highlights for legibility, and sources moved outside text blocks amongst other improvements.
The primary success metric for the UI above was first-time users picking it up intuitively, right off the bat. Beyond that, we started to look for a growing user base and returning users requesting features.
We’ve begun building a suite surrounding the core interface, improving its utility at different points throughout the education cycle (first lesson ↔️ test and review).
The first UI, while functional, left users largely unclear on how to use it. That's okay.
First to design was the UX for interacting beyond the initial generation.
For pace, I opted to skip Figma and prototype directly in code: quicker rounds of live user testing and steady dev progress, at the expense of a perhaps messier design process.
Desktop
Early examples of generations and layouts here.
As we began designing, a fun interplay evolved between the improving underlying AI infrastructure and the UI it served.
That, along with UXR feedback, incrementally shifted what we were designing for.






Early progress included establishing consistent layouts and iterating the UI. Much time was spent here on backend infrastructure: auth, accounts, storage, etc.
Example intermediate step in the right direction.
Generating suggestions was a good update, lowering the friction of prompting new pages, but the first design was unclear.
Toying with nesting, consolidating controls, and UX writing.

Generating suggested branches was a good idea, helpful both contextually and in communicating the feature, but they weren't quite intuitive enough. The battles with text formatting are also evident.
Current desktop UI whilst in a story.
A one-click action to the core function is highest in the hierarchy, giving users quick access to the feature.
Buoyed by suggested options and a prompt input box, we finally saw users navigating through stories intuitively.
Still haven't quite figured out the UX writing.

Mobile
The product wasn't really useful on mobile with gen times north of 1-2 minutes; once we got under that threshold, we started building a complementary UX.
Still prioritizing a sense of tactility and responsiveness, but designing for shorter experiences and inquiries.
Cursor's adaptation of the first desktop UI, a starting point.
It worked, but it was crowded and confusing, and the page layouts needed some love.
We initially brought in all of the desktop functionality; from there we began de-cluttering the UI, trimming and nesting features to provide a more focused UX.
Early mobile UI iterations followed the desktop updates, while trying to strike the balance between utility and information overload for new users.
We wanted to give everything "juice": pages, images, text boxes, controls, all responding to the user, even if subtly. Little wiggles, zooms, and highlights all contribute to a sense of liveliness in the pages.
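As a rough sketch of what that "juice" looks like in code, assuming a React + Framer Motion stack (the actual implementation isn't shown here, and the values are illustrative):

```tsx
import { motion } from "framer-motion";
import type { ReactNode } from "react";

// Wraps any page element so it subtly responds: a gentle zoom and wiggle
// on hover, a small press-down on tap. Tuning these values is what makes
// the pages feel alive.
export function JuicyBlock({ children }: { children: ReactNode }) {
  return (
    <motion.div
      whileHover={{ scale: 1.02, rotate: 0.4 }}
      whileTap={{ scale: 0.98 }}
      transition={{ type: "spring", stiffness: 300, damping: 18 }}
    >
      {children}
    </motion.div>
  );
}
```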
The strict single-screen height of pages posed dev challenges, requiring solutions that initially felt awkward, for example with the input field above. Text formatting was also a work in progress; here, for example, sources are a bit disruptive to the text's legibility.
Current UI designed for briefer experiences.
We also organized the model network around UX improvements, for example leveraging a context object to enable the one-click action, making the core conversation feature easier to use.
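As an illustration of the idea, a minimal TypeScript sketch of such a context object (names and fields are assumptions, not our actual schema):

```typescript
// Hypothetical context object assembled from the user's current position
// in a story; field names are illustrative assumptions.
interface StoryContext {
  storyTitle: string;
  currentPageText: string;  // what the user is looking at right now
  recentPages: string[];    // brief summaries of the pages just read
  readingLevel: "intro" | "standard" | "deep";
}

// Stand-in for the actual model call.
declare function callModel(prompt: string): Promise<string>;

// Because the context object carries everything the model needs, the UI can
// expose this as a one-click action with no typed prompt required.
async function explainThisPage(ctx: StoryContext): Promise<string> {
  const prompt =
    `The student is reading "${ctx.storyTitle}" at a ${ctx.readingLevel} level.\n` +
    `Current page:\n${ctx.currentPageText}\n\n` +
    `Explain this page in simpler terms.`;
  return callModel(prompt);
}
```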
As the initial generation time shrank from minutes to seconds, the product's intent changed, guiding the UI's design.
A simplified mobile environment now successfully serves quick breakdowns, while still able to flex to deeper explanations.
The desktop UX called for a richer feature suite. UXR and development moved on from the core story-page interface to the functions surrounding it.