AI-assisted chatbox - a prototype to help older adults live more independently

An AI assistant that empowers older adults, supports their families and reduces pressure on care providers.

The Challenge

Providing home care is complex. It involves older adults receiving care, their families and frontline providers, and each group has different needs:

  • Clients want dignity, clarity and choice in their care
  • Families want timely updates and peace of mind
  • Care providers want efficiency, up-to-date records and better coordination

The Proposed Solution

Experiment with designing and building an AI-assisted chatbox (prototype) that helps:

  • End-clients view and manage their schedules and request call-backs
  • Family members stay updated without compromising privacy
  • Care providers schedule more efficiently and effectively

The Expected Impact

  • More clarity and independence for end-clients
  • Timely updates and peace of mind for their families
  • Reduced admin and better coordination around scheduling

Where I started: human needs & problems to be solved

End-clients can sometimes feel unsure and wonder “Who’s coming today?” or “Can I change the time if I have a doctor’s appointment?”

Family members often call for updates so they can stay informed about their parent. Delays or gaps in communication cause anxiety for them and their loved one.

Care teams are stretched. Admin staff handle calls, manage schedules and log care plans manually, which is time-consuming and can lead to human error or missed communication.

End-clients want more clarity and independence so they have more control over their daily activities.

Why an AI chatbox?

  • It's always available, no waiting on hold
  • It speaks in clear, respectful language
  • It can learn and adapt to their needs and tone

Use cases:

  • View/change schedule
  • Cancel a visit
  • Request a call

Family members want more transparency and peace of mind, while respecting privacy.

Why an AI chatbox?

  • Reduces chase calls
  • Builds trust by respecting boundaries
  • Offers proactive summaries (e.g. changes in routines)

Use cases:

  • View schedule
  • Get alerts
  • Respond to issues
  • Make changes on behalf of their loved one

Care providers want reduced admin and more efficiency without compromising care outcomes.

Why an AI chatbox?

  • Reduces admin as it lets clients and families self-serve simple requests
  • Assigns carers based on skills and availability
  • Spots patterns (e.g. repeated missed visits) and flags risks

Use cases:

  • Assign and reassign carers
  • Update care plans
  • Flag patterns

Future journey for end-clients: more control, less anxiety

Older adults told us they often feel unsure: “Who’s coming today?” or “Can I change the time if I have a doctor’s appointment?”

A voice- or text-based AI assistant lets them:

  • See who’s visiting and when
  • Make small changes
  • Schedule a call
  • Set preferences about what’s shared with family

“Hi Mary. Your next visit is today at 12:30pm with Maria. Would you like to delay this by 30 minutes?”

Future journey for family: peace of mind

Family members often call for updates—but don’t want to intrude. Delays or gaps in communication cause anxiety.

A secure AI assistant lets them:

  • See real-time updates
  • Get carer notes (if allowed)
  • Schedule follow-ups

With built-in privacy logic: if David doesn’t want health notes shared, the assistant respects that.

“Maria visited today. All went well. Would you like to schedule a follow-up call?”

Future journey for care providers: less admin and better care outcomes

Teams are stretched. Admin staff handle calls, manage schedules, and log care plans manually—leaving room for error.

A conversational agent connected to calendars and care data can:

  • Let clients and families self-serve simple requests
  • Reassign carers based on skills and availability
  • Spot patterns (e.g. repeated missed visits)

This means more time can be spent improving care, not chasing spreadsheets.

Simple, guided conversational flows

An end-client wants to check their schedule for tomorrow and possibly make a change.

A family member wants to make sure the carer has visited their mother and that everything is okay.

An AI chatbox accessible via mobile phone makes self-service easier, though there may be digital barriers for some users, especially older adults.

End-client chat examples

  • End-client: “I want to know who is coming to visit me and when, and make a change if I need to.”
  • Chatbox: “Good morning John. Maria is due to visit today at 12:30pm. Would you like to change this time?”
  • End-client: “Yes, can she come later?”
  • Chatbox: “I can offer 2:00pm. Would you like me to confirm that?”
  • End-client: “Yes.”
  • Chatbox: “Done. Maria will visit at 2:00pm. Would you like me to notify your daughter?”

Logic & rules behind the chat

  • Retrieves data from the real-time care schedule (e.g. Google Calendar)
  • Offers only time slots where carers are available and match skill profile
  • Respects client privacy settings: only prompts to notify next of kin if consent is pre-set
  • Handles refusals or “I’m not sure” with fallback options (e.g. “Would you like me to call the office?”), as sketched below
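A minimal sketch of this scheduling logic in Python (all names, fields and data here are hypothetical illustrations, not the Jotform implementation):

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Carer:
    name: str
    skills: set[str]            # e.g. {"mobility support"}
    free_slots: list[datetime]  # from the real-time schedule


def offer_slots(client_needs: set[str], carers: list[Carer]) -> list[tuple[datetime, str]]:
    """Offer only slots where an available carer matches the client's skill profile."""
    offers = [
        (slot, carer.name)
        for carer in carers
        if client_needs <= carer.skills  # carer covers every required skill
        for slot in carer.free_slots
    ]
    return sorted(offers)


def next_of_kin_prompt(privacy: dict) -> str:
    """Only prompt to notify next of kin if consent is pre-set; otherwise fall back."""
    if privacy.get("notify_next_of_kin"):
        return "Would you like me to notify your daughter?"
    return "Would you like me to call the office?"
```

The fallback mirrors the last rule in the list: whenever the assistant cannot proceed, it offers a human route instead.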

Family member chats may go like this:

  • Family: “I want to know if Mum’s carer has been and if everything’s okay?”
  • Chatbox: “Maria visited your mother today at 2:00pm. All notes were positive. Would you like to see a summary?”
  • Family: “Yes.”
  • Chatbox: “Maria helped with lunch. No concerns raised. Would you like to speak with your mother or a care coordinator?”
  • Family: “No, thank you.”
  • Chatbox: “Okay. I’ll only notify you again if there’s a change or concern.”

Logic & rules behind the chat

  • Checks care record updates and notes from staff
  • Applies consent rules from the client (e.g. “share summary only, not medical details”)
  • Escalates only if predefined concern patterns appear (e.g. missed visit, unwell flag, repeated refusals); see the sketch below
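A comparable sketch of the consent and escalation rules (the consent levels and flag names are assumptions for illustration):

```python
# Hypothetical consent levels: "none", "summary_only", "full".
CONCERN_PATTERNS = {"missed_visit", "unwell_flag", "repeated_refusals"}


def family_update(notes: dict, consent: str) -> str:
    """Apply the client's consent rules before sharing anything with family."""
    if consent == "none":
        return "Your mother has asked for visit details not to be shared."
    if consent == "summary_only":
        return notes["summary"]  # e.g. "Maria helped with lunch. No concerns raised."
    return f"{notes['summary']} Notes: {notes['medical']}"


def should_escalate(flags: set[str]) -> bool:
    """Escalate only if a predefined concern pattern appears."""
    return bool(flags & CONCERN_PATTERNS)
```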

Building the prototype in practice – Jotform, a no-code tool

This AI assistant was prototyped using Jotform, a no-code tool that supports conditional logic, user input flows, and integrations with third-party tools.

Key features:

  • Secure role-based prompts (“Are you the client or a family member?”), with a routing sketch after this list
  • Dynamic schedule lookups
  • Voice/text input options
  • Consent-aware data sharing
  • Feedback prompts to improve tone and trust
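In Jotform the role-based branching is conditional logic rather than code; a rough Python equivalent of the routing (the flow names are placeholders):

```python
def route_conversation(role: str) -> str:
    """Branch the conversation by who is asking, mirroring the opening prompt."""
    flows = {
        "client": "schedule_view_and_change_flow",
        "family member": "consent_checked_update_flow",
    }
    # Unknown answers fall back to re-asking, keeping a human-first escape route.
    return flows.get(role.strip().lower(), "ask_role_again_or_call_office")
```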

Core Data (no-code inputs)

The working prototype depends on entering structured data directly into Jotform.

Examples of data to add (sketched as simple records after this list):

  • Care schedule: tells the AI who is visiting and when
  • Client profile: name, preferred tone, privacy settings, language
  • Care plan summaries: key care goals, medication, support preferences
  • Carer profile: skills, availability, compatibility
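For illustration, the same structured inputs could be sketched as records like these (the field names are assumptions, not Jotform's schema):

```python
from dataclasses import dataclass, field


@dataclass
class ClientProfile:
    name: str
    preferred_tone: str                          # e.g. "formal" or "conversational"
    language: str
    privacy: dict = field(default_factory=dict)  # e.g. {"share_with_family": "summary_only"}


@dataclass
class Visit:
    client: str
    carer: str
    start: str                                   # ISO timestamp, e.g. "2025-05-01T12:30"
    care_goals: list[str] = field(default_factory=list)


@dataclass
class CarerProfile:
    name: str
    skills: list[str]                            # e.g. ["dementia care", "mobility support"]
    availability: list[str]                      # free slots as ISO timestamps
```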

Optional additional data (some coding required)

To enhance interactivity and real-time responses, optional integrations with Zapier, Google Calendar or Google Sheets can be used. These let the assistant behave more dynamically and require basic automation logic, but not full-scale coding.

  • Google Calendar: real-time care schedule; the AI can check and update upcoming visits instantly (a lookup sketch follows this list)
  • Zapier: trigger workflows based on form input, e.g. if a client cancels a visit, the AI can notify the coordinator and update next of kin
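A hedged sketch of the Google Calendar lookup, assuming the care schedule lives in a shared calendar (the calendar ID and credentials file are placeholders):

```python
from datetime import datetime, timedelta, timezone

from google.oauth2.service_account import Credentials
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/calendar.readonly"]

# Placeholder credentials file and calendar ID; substitute your own.
creds = Credentials.from_service_account_file("service-account.json", scopes=SCOPES)
service = build("calendar", "v3", credentials=creds)

now = datetime.now(timezone.utc)
events = service.events().list(
    calendarId="care-schedule@example.com",
    timeMin=now.isoformat(),
    timeMax=(now + timedelta(days=1)).isoformat(),
    singleEvents=True,
    orderBy="startTime",
).execute()

# Each event summary might read "Maria -> John, home visit".
for event in events.get("items", []):
    print(event["start"].get("dateTime"), event.get("summary"))
```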

Training the prototype

Respectful AI isn't just about function; it's about how it speaks.

  • Adult-to-adult language
  • Clear options (“Would you like to change your visit time?”)
  • Privacy-first prompts (“Would you like your carer to speak with you before sharing?”)

The system learns tone preferences over time. For example, some clients may want direct answers. Others prefer softer phrasing.

Initial training - before publishing

To deliver the expected experience, Jotform AI Fields (or embedded GPT agents) must be primed with clear system instructions and user intents. This design-stage training is fixed when building the prototype and forms the baseline assistant personality and behaviour; a sketch of priming an embedded GPT agent follows the prompt list below.

Prompts

  • “You are a care assistant helping older clients view and change visits.”
  • “Use a respectful, clear tone. Avoid jargon. Confirm all actions.”
  • “If the client cancels a visit, offer rescheduling or notify coordinator.”
  • “If next of kin asks for update, check client privacy settings first.”
  • “Ask for short feedback occasionally to improve tone and clarity.”
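If the prototype embeds a GPT agent rather than Jotform's built-in AI fields, these instructions could be passed as a single system message. A minimal sketch using the OpenAI Python client (the model name is an assumption; any chat-capable model works):

```python
from openai import OpenAI

SYSTEM_PROMPT = (
    "You are a care assistant helping older clients view and change visits. "
    "Use a respectful, clear tone. Avoid jargon. Confirm all actions. "
    "If the client cancels a visit, offer rescheduling or notify the coordinator. "
    "If next of kin asks for an update, check client privacy settings first. "
    "Ask for short feedback occasionally to improve tone and clarity."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Who is visiting me today?"},
    ],
)
print(response.choices[0].message.content)
```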

Feedback & learning loop

Once in use, the assistant can be iteratively updated based on user responses and structured feedback. This is not self-learning unless you use advanced AI backends, but you can simulate learning through regular updates to prompts or rules. Responses are stored and used to adjust how the AI interacts next time. Over time, this supports a truly adaptive experience, even without persistent memory.

Sample Feedback Questions (asked occasionally, not every session; a logging sketch follows this list):

  • “Was this explanation clear and respectful?”
  • “Did I offer enough choices just now?”
  • “Would you like to change how I update your family?”
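One way to simulate this learning loop without a self-learning backend: log each answer, review it, and fold the outcome back into the next prompt. A sketch (the file name and fields are hypothetical):

```python
import json
from pathlib import Path

FEEDBACK_LOG = Path("feedback.json")


def record_feedback(client_id: str, question: str, answer: str) -> None:
    """Append structured feedback so a designer can review it between releases."""
    entries = json.loads(FEEDBACK_LOG.read_text()) if FEEDBACK_LOG.exists() else []
    entries.append({"client": client_id, "question": question, "answer": answer})
    FEEDBACK_LOG.write_text(json.dumps(entries, indent=2))


def tone_instruction(client_id: str, reviewed_prefs: dict) -> str:
    """Apply a designer-reviewed preference to the next conversation's system prompt."""
    tone = reviewed_prefs.get(client_id, {}).get("preferred_tone", "clear and respectful")
    return f"Use a {tone} tone with this client."
```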

Feedback & how to update the assistant

  • If users find the tone too casual: adjust the system prompt, e.g. “Use more formal phrasing for older clients”
  • If end-clients always pick phone calls: adjust the flow to offer “Speak to someone now” earlier
  • If families want shorter summaries: change the template and use bullet points for visit summaries
  • If there is confusion around visit changes: add a clarification, e.g. “This only changes today’s visit, not your routine.”

What you learn over time:

  • Preferred communication tone (formal vs informal)
  • Comfort with autonomy vs shared decision-making
  • Desired level of contact with family or care staff

Benefits to the End-Client:

  • Learns their preferred language style (e.g. more formal for some, conversational for others)
  • Adjusts how much it automates versus escalates to humans
  • Strengthens trust by giving users control over privacy and communication

This is a human-in-the-loop training process. Designers monitor feedback, update prompts/rules weekly or monthly, and test new flows.

Dependencies, constraints & risks

Dependencies:

  • Accurate, structured data: Carer availability, visit schedules, care plans and consent levels must be up to date.
  • Reliable calendar integration: Google Calendar or other care system must reflect changes immediately.
  • Staff engagement: Carers and coordinators need to log notes clearly and consistently.

Constraints:

  • Tool integration: Jotform, Zapier and Airtable work for prototyping but may not scale securely for production use.
  • Consent logic complexity: Fine-grained privacy controls are hard to manage across multiple users unless built into core systems.
  • Tech literacy varies: Must design for clients with varying comfort with digital tools.

Risks:

  • Client non-adoption: Some end-clients may not trust or understand the AI assistant. A human-first fallback is essential.
  • Privacy erosion: Poorly designed prompts or permissions could share sensitive information inappropriately.
  • Limited generalisability: May work well only with clients who are cognitively and technologically capable.
  • Data drift or delay: If carers forget to log visits or updates, the assistant will be inaccurate or misleading.

Concluding comments

This is still a prototype, but it shows what’s possible when you start with people, not tech. I hope this case study demonstrates how service design, systems thinking and human-centred AI can combine to meet real needs. By listening carefully, designing clearly and prototyping quickly, we can make complex systems more humane.