Agentic workflows and AI architecture. Projects that lie outside the scope of design.

Caleb Kim

Builder

1 year of experience


Designing the sound and onboarding user experience of a Tamagotchi-style companion app that gets users to open their Bible every day.

The Project Scope

Entry and onboarding wireframes

Background music and interaction SFX

Community-driven feature testing through Discord

Onboarding flow I designed, paired with the background music I scored.

Understanding the Pain Points

Shepherd gamifies daily Bible reading through a Tamagotchi-style virtual companion. Users feed the companion by reading verses: skip days and it goes hungry; come back daily and it grows. It's a behavioral design bet that attachment to a character is a more durable motivator than streaks and badges.
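The feeding loop above can be sketched as a tiny state model. This is a hypothetical illustration of the mechanic as described, not Shepherd's actual implementation; the class name, thresholds, and mood labels are all assumptions.

```python
from datetime import date, timedelta

class Companion:
    """Illustrative sketch of the feed-or-go-hungry mechanic.
    State names and thresholds are assumptions, not Shepherd's real values."""

    def __init__(self):
        self.last_fed: date | None = None
        self.growth = 0  # grows with consecutive days of reading

    def feed(self, today: date) -> None:
        """Called when the user reads a verse."""
        if self.last_fed == today - timedelta(days=1):
            self.growth += 1  # came back the next day: companion grows
        elif self.last_fed != today:
            self.growth = max(0, self.growth - 1)  # lapsed: partial setback
        self.last_fed = today

    def mood(self, today: date) -> str:
        """Mood shown in the UI, based on days since last feeding."""
        if self.last_fed is None:
            return "new"
        if (today - self.last_fed).days <= 1:
            return "content"
        return "hungry"
```

The design point the sketch captures is that the penalty for lapsing is visible in the companion's state, not in a broken streak counter.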


The video below is a broader look at the product, showing features shipped across the team. The audio you hear is mine.


Redesigning with Intent


Because I didn't own the sprite, my audio work had to react to motion I couldn't change. Every sound had to earn its place against animations I inherited while still conveying the intended user feeling, which sharpened the criteria.

On the UX side, we wanted users to feel emotionally connected to the app and, above all, to feel comfortable.

My onboarding wireframes front-loaded the emotional hook: colors are warm, fonts are friendly, users meet the companion before any account setup or feature tour, and feeding happens in the very first session.

Entry and onboarding wireframes. The goal was getting a new user to their first feeding fast enough to form attachment before they decided whether to keep the app.

The Outcome


Feature requests I shipped from the community server, organized for my own tracking.


What shipped. A steady stream of Discord-sourced updates between May 2025 and January 2026: A/B testing, onboarding and user-journey refinements, new interaction SFX, and tuning of existing sounds users found off. Each shipped with a visible paper trail: request, response, reaction.

What I learned. Audio ended up being the part of the app I could most tangibly shape, even without owning the visual layer. When users describe the companion feeling alive, they're often reacting to sound they don't consciously notice, which is the work doing exactly what it should.