Helping users manage their lives, not just their tasks

AutoHome AI: A Mood-Aware AI Assistant for Everyday Routines

COMPANY

UI/UX Design for AI Products Certification: Stanford College of Engineering

ROLE

Sole Designer and Researcher

EXPERTISE

AI in UX/UI Products

YEAR

2025

AutoHome AI: Designing a Mood-Aware Household Assistant That Reduces Decision Fatigue

As part of Stanford’s course on UI/UX Design for AI Products, I designed AutoHome AI—a voice- and interface-based intelligent assistant that supports home management without overwhelming users. Built for people managing households on their own, including neurodiverse individuals, AutoHome reduces cognitive load through contextual nudges, emotional intelligence, and flexible automation.

  • Role: UX/Product Designer (solo)

  • Timeline: 2 months

  • Project Type: Real-world capstone project

  • Tools Used: Figma, Miro, Whimsical, Google Forms

  • Focus Areas: AI-first UX, Human-in-the-Loop Design, Conversational Design, Ethical AI

  • Deliverables: Research insights, emotional journey map, concept architecture, low-fidelity wireframes, AI risk audit

Problem Research
  • Conducted 3 interviews and 6 surveys to understand cognitive overload in home management.

  • Identified 3 key pain point clusters: Fatigue, Tool Overload, Emotional Triggers.

  • Users described juggling tools and feeling drained by mid-day, with low trust in current assistants.

Insight Synthesis
  • Created emotional journey maps highlighting stress peaks.

  • Mapped daily friction points to opportunity areas for intervention.


Design Framing
  • Defined the challenge: How might we reduce mental load while maintaining control?

  • Core design values: Emotional tone sensitivity, explainability, mood-aware adaptability, and human-in-the-loop control.

Concept Development

Designed 5 core features:

  • Smart Suggestions with inline feedback

  • Manual / Hybrid / Auto Modes

  • Mood Detection

  • Tone Personalization (Friendly, Neutral, Assertive)

  • "Why this?" explainability tooltips

Interaction Flow Architecture

To visualize how AutoHome AI processes user input and delivers adaptive suggestions, I created this high-level interaction model:

  • User Input: Triggers can include calendar actions, manual entries, or system events.

  • Mood Detection Layer: Optional layer that, when enabled, adapts response tone and task volume to the user's current state.

  • Context Processor: Synthesizes user data, preferences, and behavioral history.

  • AI Suggestion UI: Displays contextual suggestions with override options and tone tags.

  • Feedback Loop: User reactions (e.g., “Not in the Mood”) retrain future recommendations.

This flow demonstrates how AutoHome balances automation with human agency—always keeping the user in control.
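The flow above can be expressed as a minimal Python sketch. This is an illustrative model only, assuming a simple rule-based implementation; every function, field, and threshold here is a hypothetical stand-in for the real system, not part of the documented design.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str
    tone: str          # tone tag shown on the suggestion card
    reason: str        # surfaced via the "Why this?" tooltip
    overridable: bool = True  # user can always override

def detect_mood(signals: dict) -> str:
    """Optional mood layer: infers a coarse energy level from user signals.
    (Hypothetical rule: two or more skipped tasks reads as a low-energy day.)"""
    return "low" if signals.get("skipped_tasks", 0) >= 2 else "normal"

def process_context(event: str, history: list, mood: str) -> Suggestion:
    """Context processor: combines the trigger event, behavioral history,
    and mood into one adaptive suggestion."""
    if mood == "low":
        return Suggestion(
            text="Skip optional tasks today?",
            tone="friendly",
            reason=f"Triggered by '{event}' and a slower pattern than usual",
        )
    return Suggestion(
        text="Try this 10-min Chickpea Stir Fry?",
        tone="neutral",
        reason=f"Triggered by '{event}' and your recent dinner history",
    )

def record_feedback(weights: dict, suggestion: Suggestion, reaction: str) -> dict:
    """Feedback loop: user reactions nudge future recommendation weights."""
    delta = {"helpful": 1, "not_helpful": -1, "not_in_the_mood": -1}.get(reaction, 0)
    weights[suggestion.text] = weights.get(suggestion.text, 0) + delta
    return weights
```

A single pass through the pipeline would look like: detect the mood, produce a suggestion, then feed the user's reaction back into the weights so the next pass adapts.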

Prototyping
  • Created wireframes of the assistant dashboard, suggestion cards, task settings, and mode toggle.

  • Focused on visual hierarchy, tone shifts, and fallback responses.

Ethical Review
  • Identified risks such as over-automation, privacy erosion, and threats to psychological safety.

  • Mitigated through opt-ins, review logs, role-based household access, and explainable AI behaviors.

AutoHome AI is a contextual, emotionally intelligent assistant that helps users manage their daily lives without taking over. It offers smart suggestions, adapts to user energy levels, and uses transparent decision-making to build trust.

Core Functional Features:
  • Contextual suggestions with feedback buttons and fallback states

  • Tone customization for each user (Friendly, Neutral, Assertive)

  • Three-level automation mode: Manual, Hybrid, Auto

  • “Why this?” explanations on every assistant action

  • Mood-aware responses like “Skip optional tasks today?”

AutoHome functions as a system-level support layer, integrated into the user's life rhythm rather than acting as a separate app.

AutoHome AI is a context-aware, mood-sensitive household assistant that intelligently supports daily life without overwhelming users or replacing their agency. It combines conversational design, progressive automation, and explainability to build trust—especially for users who experience decision fatigue, fragmented workflows, or emotional burnout in managing household responsibilities.

Core Design Philosophy

The assistant is not just smart—it’s supportive. It augments human decision-making using AI principles like intelligence amplification, human-in-the-loop control, and emotional adaptability.

Key Solution Components
Smart Suggestions

Auto-generated nudges based on time, behavior patterns, and task history.

  • Example: “Try this 10-min Chickpea Stir Fry?”

  • Action options: Accept, Customize, Suggest Something Else, Save for Later

  • Feedback buttons: 👍 Helpful / 👎 Not Helpful

Mood & Energy Detection
  • Detects low-energy days via user input or pattern changes

  • Adjusts tone and frequency of suggestions

  • Example: “Noticed a slower morning—want to skip optional tasks today?”

Manual / Hybrid / Auto Modes
  • Manual Mode: User reviews and approves all actions

  • Hybrid Mode: AI handles low-risk tasks; user reviews others

  • Auto Mode: AI acts on trained preferences, always with explainability

  • Visual toggle on dashboard allows mode switching at any time
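The three-mode protocol reduces to one gating decision: does this action need user approval before the assistant executes it? A minimal sketch, assuming a coarse per-task risk label (the function name and risk levels are hypothetical illustrations):

```python
def requires_review(mode: str, task_risk: str) -> bool:
    """Decide whether the user must approve an action before the
    assistant executes it, based on the current automation mode."""
    if mode == "manual":
        return True                # user reviews and approves everything
    if mode == "hybrid":
        return task_risk != "low"  # AI handles low-risk tasks; user reviews the rest
    if mode == "auto":
        return False               # AI acts on trained preferences, with explanations
    raise ValueError(f"unknown mode: {mode}")
```

Because the gate is a pure function of (mode, risk), flipping the dashboard toggle changes behavior immediately without retraining anything.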

Tone Personalization

Users can set communication tone for suggestions:

  • Friendly (“Hey, how about a small win today?”)

  • Neutral (“Here’s a task that might fit your energy level”)

  • Assertive (“You have 2 pending tasks that need attention”)
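Tone personalization can be modeled as a template lookup keyed by the user's setting. A small sketch using the three example phrasings above (the template strings and parameter names are illustrative assumptions):

```python
# One message template per tone setting; placeholders are filled per suggestion.
TONE_TEMPLATES = {
    "friendly": "Hey, how about a small win today? {task}",
    "neutral": "Here's a task that might fit your energy level: {task}",
    "assertive": "You have {count} pending tasks that need attention.",
}

def render(tone: str, task: str = "", count: int = 0) -> str:
    """Render a suggestion message in the user's chosen tone."""
    return TONE_TEMPLATES[tone].format(task=task, count=count)
```

Keeping tone as a presentation-layer template means the underlying suggestion logic stays identical across Friendly, Neutral, and Assertive users.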

Task Settings Panel

Each task includes quick-access controls to adjust:

  • Priority level

  • Notification tone

  • Reorder logic (e.g., push to evening, or batch into a weekly slot)

“Why this?” Explanations

Every suggestion is accompanied by a tooltip or explanation showing:

  • Why the AI recommended it

  • What data triggered it (e.g., past behavior, time, calendar event)

  • Optional: edit the logic or turn off similar suggestions
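Structurally, each "Why this?" tooltip is just metadata attached to the suggestion. A minimal sketch of that payload, assuming hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    reason: str      # plain-language "why the AI recommended it"
    triggers: list   # data points that fired the suggestion
    editable: bool   # user may edit the logic or mute similar suggestions

def explain(triggers: list) -> Explanation:
    """Build the 'Why this?' payload from the triggering data points."""
    summary = " and ".join(triggers)
    return Explanation(
        reason=f"Suggested because of {summary}.",
        triggers=triggers,
        editable=True,
    )
```

Carrying the triggers alongside the prose reason lets the UI offer "turn off similar suggestions" directly from the tooltip.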

Progressive Disclosure & Feedback Loops
  • Users can start small and expand trust over time

  • Feedback buttons (“Not in the Mood”, “Helpful”, “Not Helpful”) train the system

  • Action logs allow for undo and review
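The undo-and-review log can be sketched as a small stack of (action, undo) pairs. This is an illustrative assumption about the mechanism, not the actual implementation:

```python
class ActionLog:
    """Reviewable log of assistant actions with undo support."""

    def __init__(self):
        self._entries = []  # list of (description, undo_callback) pairs

    def record(self, action: str, undo) -> None:
        """Log an action together with a callback that reverses it."""
        self._entries.append((action, undo))

    def review(self) -> list:
        """Return action descriptions for the user to inspect."""
        return [action for action, _ in self._entries]

    def undo_last(self) -> str:
        """Reverse and remove the most recent action; return its description."""
        action, undo = self._entries.pop()
        undo()
        return action
```

Pairing every automated action with its own undo callback is what makes Auto mode safe to trust incrementally: the user can always inspect and reverse what the assistant did.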

Simulated Outcomes
  • Estimated 30% reduction in support/helpdesk requests

  • 90% task completion success rate in prototype usability sessions

  • Positive emotional feedback on trust and tone customization

Ethical Design Impact
  • Created opt-in-only AI behavior with high transparency

  • Designed to support mental wellness without creating dependency

  • Established system that respects mood, context, and consent

Reflections & Future Improvements
  • Add multilingual and culturally localized tone options

  • Expand mood sensing using journaling or passive signals

  • Visualize AI decisions and create an “AI Action History” for transparency