I think, overall, using AI is just another decision support tool. So we always want the health professionals to use their best judgment and use this sometimes to do the heavy lifting, but more to pre-crunch, pre-chew a lot of the data for you, give suggestions, summarize things, make life easier, bubble information up to your fingertips, but ultimately they make the decisions based on their training and their experience.
Erin Vallier:
Welcome to another episode of the Home Health 360 podcast, where we speak to home-based care professionals from around the globe. I’m your host, Erin Vallier, and today I am joined by Naomi Goldapple. Naomi has the unique experience of being both a startup entrepreneur and a seasoned business executive. She is currently the SVP of Data and Intelligence at AlayaCare, where she leads a team of data scientists and engineers to leverage emerging technologies to build production-ready products that will drive present and future value for AlayaCare clients. Naomi is a data intelligence professional with extensive experience leading innovation teams, specifically in the field of artificial intelligence and machine learning, which is our topic for today. Having worked in healthcare tech and home care for the past six years, she has become recognized as a thought leader on leveraging data for better decision making and creating operational efficiencies, with the goal of improving health outcomes for those who wish to recover and age in the place they call home.
Erin Vallier:
Welcome to the show, Naomi.
Naomi Goldapple:
Thanks so much. Really happy to be here.
Erin Vallier:
Oh, I’m so excited to have you on. Every time I talk to you, I learn something new and I come away inspired in some way. I’m very excited for this conversation. I want to start by just asking a general question about AI, because it feels like magic to some people and a little bit scary, especially as we’re applying it to healthcare. So I’m wondering: how do you explain AI’s role in home care to customers who’ve never used it and don’t know anything about it? Is there anything you can say to make them feel more comfortable when adopting these tools?
Naomi Goldapple:
Yeah, sure. ChatGPT has really blazed the trail for us, which has been really nice. I’ve noticed a huge difference over the past couple of years talking to health professionals about AI, compared to previous years, because almost everybody, unless you’ve been living under a rock, has at least asked ChatGPT one question, one prompt, over the past few years. So I feel like the general population is a lot more open to using AI, to using these large language models, probably more than they ever thought. It is definitely an imperfect technology. It does make mistakes, but it’s amazing how people are okay with that.
Naomi Goldapple:
So now, having said that, in health care we’re not okay with that, and we need AI to give very reliable results. So definitely one of the challenges is building the trust: making sure that the security and the privacy concerns are taken care of and that everybody feels comfortable. I think, overall, using AI is just another decision support tool. We always want the health professionals to use their best judgment and use this sometimes to do the heavy lifting, but more to pre-crunch, pre-chew a lot of the data for you, give suggestions, summarize things, make life easier, bubble information up to your fingertips. Ultimately, they make the decisions based on their training and their experience.
Erin Vallier:
Gotcha, I like that perspective. It’s not just a robot doing all the work for you. It’s helping you: pulling together all the information that would take you a lot of time to consolidate, making your job a lot faster by making some suggestions, and letting you make the final decision.
Naomi Goldapple:
Absolutely. In fact, we specify that it is a decision support tool. If it makes decisions on its own, then it becomes a device, and if it’s a device that makes decisions, then you actually need FDA approval, so we stay very far from that. It stays a decision support tool that you can use to augment the worker, but not to make decisions on the worker’s behalf. Then we get into messier territory.
Erin Vallier:
Gotcha, I want to expand on something that you just said about how we can use it. So are there some stories or examples of how AI is currently being used specifically in home care to improve the employee experience? And the folks I’m thinking of are like caregivers, administrators, schedulers, billers. What have you seen?
Naomi Goldapple:
Yeah, all of the above.
Naomi Goldapple:
I’ll work a little bit backwards, starting with what I see a lot of right now, which is very exciting, and which I know is something that we’re actively building: the AI scribe, the ability to use ambient listening.
Naomi Goldapple:
So if a caregiver is going to a client’s home, they can have the device listen to their conversation, with an opt-in to make sure everybody’s comfortable with that, and it can transcribe the visit. Afterwards it can take that transcription and automatically fill in whatever reports and forms need to be filled in. Not all of them, but it can fill in most of what’s needed, and the caregiver obviously has the last say: they can go in, they can edit, they can accept. That is a huge time saver. Instead of taking a lot of notes and having your nose in your tablet, you can be looking your client in the eye and having more interaction with them, knowing that the vital information you want to capture during the visit is being captured and nicely categorized into the proper forms at the same time. So that’s a huge cost savings, and you can really see tools popping out of the woodwork offering different flavors of this in the home care space, especially for different forms.
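For illustration, here is a minimal sketch of how such a scribe pipeline could hang together, assuming hypothetical `transcribe` and `call_llm` helpers and a made-up form schema; this is not AlayaCare’s actual implementation:

```python
import json

# Hypothetical stand-ins for a speech-to-text service and an LLM endpoint;
# a real deployment would use HIPAA-compliant, BAA-covered vendors for both.
def transcribe(audio_path: str) -> str:
    """Return a text transcript of the recorded (opt-in) visit audio."""
    raise NotImplementedError("plug in a speech-to-text service here")

def call_llm(prompt: str) -> str:
    """Return the model's completion for the given prompt."""
    raise NotImplementedError("plug in an LLM endpoint here")

# The form fields the agency already uses; the model only fills known fields.
VISIT_FORM_FIELDS = ["vitals", "mobility", "nutrition", "mood", "follow_up_items"]

def draft_visit_form(audio_path: str) -> dict:
    transcript = transcribe(audio_path)
    prompt = (
        "From this home-care visit transcript, fill in the following form "
        f"fields as JSON ({', '.join(VISIT_FORM_FIELDS)}). "
        "Use null for anything not discussed; do not invent information.\n\n"
        + transcript
    )
    draft = json.loads(call_llm(prompt))  # real code should validate this output
    # The caregiver reviews, edits, and accepts before anything is saved.
    return {field: draft.get(field) for field in VISIT_FORM_FIELDS}
```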
Erin Vallier:
Cool, that’s like the nurse’s dream.
Naomi Goldapple:
Oh yeah, absolutely.
Erin Vallier:
And quality management’s dream, because they’re always chasing the nurses to get their notes done.
Naomi Goldapple:
Yeah, and it’s a huge time saver.
Naomi Goldapple:
So instead of having to go to your car afterwards and fill in a bunch of information, a lot of the heavy lifting, 80% of it, can be done for you, and then you can get onto your next visit really quickly.
Naomi Goldapple:
While we’re talking about getting onto the next visit: another application that can be really great is for caregivers, who spend a lot of time on the road. They’re going visit to visit, and sometimes, to prepare for the next visit, they have to go find the information and see what happened since the last visit. Maybe it’s not somebody they see all the time, so they have to go into the file and get a little more of the medical history, see the latest progress notes, so that they can be well prepared. And we’ve seen the ability to ask an assistant for a client summary: give me a summary, or what changed since the last time I saw this client? That can just bubble up to their fingertips. They can see a summary right away, read it very quickly, or have it read to them, and then they’re ready to go visit their next client. So, again, huge time savers to bring that important information up to their fingertips.
Erin Vallier:
Absolutely, and it seems like something like that would help us deliver better outcomes, because things that we may have missed are going to be bubbled up to the surface.
Naomi Goldapple:
Absolutely, absolutely. And in terms of missing things...
Naomi Goldapple:
Another thing is actually using predictive models, so being able to pre-chew all of the data.
Naomi Goldapple:
For a clinician or a clinical supervisor, it’s hard to remember everything about every client or every patient and know what happens at every visit. Some are visited three times a day, and there’s a lot of information being collected at every single visit. That data can be captured and fed into a predictive model that will predict things like who is at risk of an adverse event or a hospitalization, and those alerts can be sent out to clinical supervisors, to the clinicians, to the caregivers, so they can mitigate those risks. Hopefully, sometimes not, but they at least have that information: to perhaps go see them right away, to make sure they see their physician right away, to call a family member, to remove a carpet if they’re prone to falls. It’s almost like a companion that can tap you on the shoulder and say: hey, have you thought about this? Did you know that this just happened? Maybe you can do something to prevent an adverse event.
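As a rough illustration of the idea, here is how a simple hospitalization-risk model might be trained and used with scikit-learn. The feature names, label, and threshold are invented for the example; a real model would be trained on the agency’s own historical data:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical visit-level features; real ones would come from falls,
# diagnoses, medical history, visit details, and so on.
FEATURES = ["age", "num_falls_90d", "num_meds", "visits_per_week", "days_since_md_visit"]

def train_risk_model(history: pd.DataFrame) -> GradientBoostingClassifier:
    """history: one row per client period, with FEATURES plus a
    'hospitalized_30d' label taken from historical outcomes."""
    X_train, X_test, y_train, y_test = train_test_split(
        history[FEATURES], history["hospitalized_30d"], test_size=0.2, random_state=0
    )
    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
    return model

def flag_high_risk(model, today: pd.DataFrame, threshold: float = 0.7) -> pd.DataFrame:
    """Score today's caseload and return the clients worth surfacing first."""
    scores = model.predict_proba(today[FEATURES])[:, 1]
    return (today.assign(risk=scores)
                 .query("risk >= @threshold")
                 .sort_values("risk", ascending=False))
```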
Erin Vallier:
That’s fantastic: real-time data fed up to an understaffed agency. Because, let’s face it, there’s a shortage of nurses, there’s a shortage of healthcare workers in general, and we all have to do more with less now. I feel like this would be an excellent tool to draw your attention to the people you really need to pay attention to, if you will, the ones who need immediate assistance. That’s just going to help deliver better care, streamline workflows, and save people a bunch of time. That’s awesome.
Naomi Goldapple:
Yeah, and it doesn’t even need to be real-time data. A lot of the clinical supervisors we work with will just look in the morning, so it’s a good way to start their day: who is on my list, what is my cohort of patients, and let me filter by who is at highest risk. That can help them allocate their resources for the day, so it can also help in that type of planning. Maybe it’s not something where you want to be poked every two minutes and get alert fatigue, but you want to use it for better decision-making and better resource allocation to those who need it the most.
Erin Vallier:
Some very exciting possibilities there. I want to shift to talk a little bit about security and privacy. You did mention that as one of the challenges and, understandably, healthcare organizations are really cautious when it comes to all of these things. It seems like healthcare organizations always have a target on their back, so I’m curious: how do we ensure that AI tools are secure, private and compliant? Is there anything specific that providers should be asking or looking for when they’re evaluating tools?
Naomi Goldapple:
When you’re building these tools, you always have to remember that in your EHR, not everybody has access to all information, right? There are certain permissions that different roles have. Those permissions should carry over into whatever AI tools you’re using. So, in the examples I gave before, if I’m a caregiver asking for a certain client’s health summary, but I actually don’t have access to that client because they’re not in my cohort, I should not be able to get that information. It should really follow all of the permission rules that are already in place. That’s definitely key.
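A minimal sketch of that principle, with an invented permission table standing in for the EHR’s real access rules:

```python
# Illustrative only: the AI layer re-checks the EHR's existing role/cohort
# permissions before it retrieves or summarizes anything.
EHR_COHORTS = {                       # hypothetical permission table
    "caregiver_17": {"client_204", "client_391"},
}

def get_client_summary(user_id: str, client_id: str, fetch, summarize) -> str:
    allowed = EHR_COHORTS.get(user_id, set())
    if client_id not in allowed:
        # Same answer the EHR itself would give: no access, no data.
        raise PermissionError(f"{user_id} has no access to {client_id}")
    record = fetch(client_id)     # pulls from the EHR via its own APIs
    return summarize(record)      # the LLM only ever sees permitted data
```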
Naomi Goldapple:
Also, you have to be careful with some of the tools that are free, some of the open tools. You don’t want to be sharing any PHI, any personal health information, with third-party tools that are going to use that information to train their models or improve their data, because you don’t want that to end up in the public arena. That’s important to take care of. You want to make sure the vendors you choose are HIPAA compliant, that the data is housed securely within your regions, and that all of the privacy controls are there. A lot of the usual diligence still applies, but I think the big difference is making sure you’re not passing information you shouldn’t be passing to these open tools.
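To make the point concrete, here is a deliberately naive sketch of scrubbing obvious identifiers before any text leaves your environment. Real de-identification is far more involved (names, addresses, dates) and is no substitute for a HIPAA-compliant, BAA-covered vendor that does not train on your data:

```python
import re

# Toy redaction patterns; production systems use proper de-identification tooling.
PATTERNS = {
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

print(scrub("Call Mrs. Lee at 555-123-4567 re: claim, SSN 123-45-6789."))
# -> Call Mrs. Lee at [PHONE] re: claim, SSN [SSN].
```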
Erin Vallier:
That makes sense, and it sounds like a lot to consider when selecting the right tool. The next question I want to ask is more about reliability because, as we know, AI is trained on historical data and it’s only as good as the data behind it. What are some key factors that determine accuracy in these tools for home care, and how do we make sure that our AI solutions provide really reliable insights?
Naomi Goldapple:
If we split this into traditional AI on one side and these large language models on the other, there’s a bit of a difference. In traditional AI, where you are using historical data to train your models for some sort of prediction or recommendation, there are many things that matter in your data sets. You want to make sure that you’re using enough data and that you have a balanced data set. Once you’ve trained your model to a certain level of accuracy, you have to monitor it to make sure that, as it ingests new data, the results aren’t starting to slide, that you’re not seeing decreased accuracy or precision because the data has really changed. So you have to monitor, and also retrain on new data sets if the distribution of the data has changed. You might be going into a completely new market with a very different data distribution than you had in a previous training, and therefore your results are going to be different. So you have to make sure to monitor them. Now, in the world of large language models, we know there was a lot of funny stuff at the beginning, when people were playing with ChatGPT and with Gemini and they were just giving some wrong answers. That definitely can happen. You’ve heard the term hallucinate: sometimes, if they don’t know the answer, they will just make it up, because these models aim to please. But the models have gotten a lot better. There is a lot more reasoning that goes into the models and a lot more fact-checking, so they’ve improved.
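One common way to watch for that kind of slide is a statistical drift check on incoming data. Here is a small sketch using a two-sample Kolmogorov-Smirnov test, with invented numbers simulating a new market whose patients skew younger; the threshold is illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

def check_drift(train_col: np.ndarray, live_col: np.ndarray,
                name: str, alpha: float = 0.01) -> bool:
    """Alert when a feature's live distribution diverges from training."""
    stat, p_value = ks_2samp(train_col, live_col)   # two-sample KS test
    drifted = p_value < alpha
    if drifted:
        print(f"DRIFT on '{name}' (KS={stat:.3f}, p={p_value:.4f}): "
              "consider retraining on recent data")
    return drifted

rng = np.random.default_rng(0)
train_age = rng.normal(78, 8, 5000)   # ages seen during training
live_age = rng.normal(71, 9, 500)     # a new market skews younger
check_drift(train_age, live_age, "age")
```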
Naomi Goldapple:
One of the ways you can make sure of that is to direct it to fetch the data from certain data sets. If you are asking it about certain patients, it’s only going to base its answer on the data that is brought back from the APIs, from your particular system. You’re not asking it to go out onto the interwebs and find this information; you’re saying: go here to fetch that information. You can also load up, for example, some of your standard operating procedures. You can ask it in general, tell me how to use a Hoyer lift, but if you would like it to pull back that information specifically from your organization’s operating procedures, then you can load up your knowledge bases and direct it to fetch the information only from there. That’s a technique called RAG, retrieval-augmented generation. It doesn’t just go out into the wild; it retrieves from certain knowledge bases that you direct it to. So that’s a good way to keep it under control.
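A bare-bones sketch of the RAG pattern she describes, with placeholder `embed` and `call_llm` functions standing in for real, compliant endpoints:

```python
import numpy as np

# Placeholders: swap in your actual embedding model and LLM endpoint.
def embed(text: str) -> np.ndarray:
    raise NotImplementedError("plug in an embedding model here")

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM endpoint here")

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer_from_knowledge_base(question: str, sop_passages: list[str]) -> str:
    """Retrieval-augmented generation: fetch the closest standard-operating-
    procedure passages, then let the model answer only from those."""
    q = embed(question)
    ranked = sorted(sop_passages, key=lambda p: cosine(q, embed(p)), reverse=True)
    context = "\n---\n".join(ranked[:3])   # top-3 passages by similarity
    return call_llm(
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
```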
Erin Vallier:
That’s fascinating. So it’s a lot of ongoing maintenance and some really good prompt engineering. I’m curious, though: is it a shared responsibility between the end user and the person who’s maintaining or providing that AI tool to make sure the models are constantly trained? Like, how does that really work?
Naomi Goldapple:
Your provider, if they’re providing you with predictive models that they have trained to a certain level of accuracy, absolutely, it’s up to them to make sure the models are being monitored, continuously updated, and retrained so that they stay there. However, they usually make a data contract with the providers. You say: okay, my model has been trained on this information, so you have to make sure to keep capturing it, because if you stop feeding that information, the results aren’t going to be as good. So we usually call it a data contract. For a patient risk model, you have to make sure that you capture falls and hospitalizations and your diagnoses and your medical history and the visit details, because we’ve trained the model with all that data. If you stop including it, it’s going to have a negative impact on the results of that model. We usually have a data contract to make sure that everybody has a shared understanding of what the inputs and the outputs will be.
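In code, a data contract can be as simple as a required-field check on incoming records; the field names below are illustrative:

```python
# Illustrative data contract: the fields the risk model was trained on and
# therefore expects the agency to keep capturing.
REQUIRED_FIELDS = {
    "falls", "hospitalizations", "diagnoses", "medical_history", "visit_details",
}

def validate_record(record: dict) -> list[str]:
    """Return the contract fields that are missing or empty in a record."""
    return sorted(f for f in REQUIRED_FIELDS if not record.get(f))

incoming = {"falls": 1, "diagnoses": ["CHF"], "visit_details": "..."}
missing = validate_record(incoming)
if missing:
    print("data contract violation, model quality will degrade:", missing)
# -> data contract violation, model quality will degrade:
#    ['hospitalizations', 'medical_history']
```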
Erin Vallier:
Gotcha. So it’s a shared responsibility: the provider continues to input, and the service provider of the tool continues to monitor and tweak and make sure that it’s reliable. Okay, that makes sense. And I know you’ve played a real pivotal role in the development of AlayaCare’s tools, so I’m curious: as you’ve been doing this development, what are the biggest challenges and learnings from this process?
Naomi Goldapple:
So definitely, adoption and change management is huge. When I think of nurses and clinicians, they’re suspicious, and they should be, because these are people’s lives. A lot of times they want to know: why is this patient higher risk than they were yesterday? What happened? They really want to be able to peek into these models. You know, sometimes they say that AI models are like a black box; you don’t know why they come up with these answers. That doesn’t really fly in this situation. You need to make your models very explainable so that you can gain their trust. That’s very important, and we do that in a few different ways. One is we give a lot of data to show the why. If you see that a patient is now higher risk than they were the day before, we have little icons to show why: maybe their medications were increased, maybe there was a fall, a little icon of somebody falling, of this and that. So they can get glimpses of what happened. And we also give a view of what happened over the past 28 days, so they can really see the trend and the key events that actually triggered a change in risk.
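For a simple linear risk model, “showing the why” can be as direct as reporting how each feature’s contribution moved between yesterday and today. A sketch with invented weights and features, not AlayaCare’s actual model:

```python
# Hypothetical linear weights over a few risk signals.
WEIGHTS = {"num_meds": 0.30, "falls_7d": 1.20, "missed_visits": 0.50}

def explain_change(yesterday: dict, today: dict) -> list[tuple[str, float]]:
    """Per-feature change in contribution to the risk score, largest first.
    These movers are what become the little icons in the UI."""
    deltas = {f: WEIGHTS[f] * (today[f] - yesterday[f]) for f in WEIGHTS}
    return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)

yesterday = {"num_meds": 6, "falls_7d": 0, "missed_visits": 1}
today = {"num_meds": 8, "falls_7d": 1, "missed_visits": 1}
for feature, delta in explain_change(yesterday, today):
    if delta:
        print(f"{feature}: contribution {'rose' if delta > 0 else 'fell'} "
              f"by {abs(delta):.2f}")
# falls_7d: contribution rose by 1.20
# num_meds: contribution rose by 0.60
```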
Naomi Goldapple:
We have another model that we worked on, which is really a visit optimizer. It gives the scheduler decision support to choose who is the best match for a vacant visit, or who can fill in for this call-off right away, and it will serve up: well, it should be Sally. And they’re like, oh, why should it be Sally? I thought Mary would have been the best choice. But we try to give as much data as possible: it should be Sally because look how many miles she has to travel, look how many times she has seen this patient before, look, she has the skills that match. We give all that information so they can really start to trust it and go: oh yeah, okay, I see, that makes a lot more sense.
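A toy version of that kind of match scoring, with made-up weights over the signals she mentions (distance, continuity, skills):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    miles_to_client: float
    prior_visits_with_client: int
    has_required_skills: bool

def match_score(c: Candidate) -> float:
    if not c.has_required_skills:
        return 0.0                                        # hard requirement
    continuity = min(c.prior_visits_with_client, 10) / 10  # cap the effect
    proximity = 1 / (1 + c.miles_to_client)                 # closer is better
    return 0.6 * continuity + 0.4 * proximity              # illustrative weights

candidates = [
    Candidate("Mary", miles_to_client=3.0, prior_visits_with_client=1,
              has_required_skills=True),
    Candidate("Sally", miles_to_client=1.5, prior_visits_with_client=8,
              has_required_skills=True),
]
best = max(candidates, key=match_score)
print(best.name, round(match_score(best), 2))   # Sally 0.64
```

The point of surfacing each term (miles, prior visits, skills) alongside the pick is exactly the trust-building she describes: the scheduler can see why Sally beat Mary.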
Naomi Goldapple:
And the last thing is really giving them the ability to communicate back. We have something called Notable that automatically reads all of the notes captured during the day by clinicians and caregivers: overview notes, progress notes, ADL notes, visit notes. Clinical supervisors don’t always have time to read all those notes, and sometimes there’s gold in those notes, right? So we built something where the large language model will automatically read those notes and pull out what’s most important. And sometimes the clinical supervisor will say: I don’t know if I agree with you. You said that was a situation of medication non-adherence, and I don’t know if I agree with that. So we give them the ability to X that out and say, actually, I think it was this. They can actually participate and help us make it better, and that gives them more sense of control: they don’t have to take 100% of what the model recommends, they can actually choose their own.
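A rough sketch of that loop, assuming a hypothetical `call_llm` endpoint (not the actual Notable implementation): the model ties each flagged finding to the exact quote that supports it, and supervisor accept/reject decisions are logged so the extraction can be evaluated and improved:

```python
import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM endpoint here")

FINDINGS = ["medication non-adherence", "fall risk", "nutrition concern", "mood change"]

def mine_notes(notes: list[str]) -> list[dict]:
    """Surface flagged findings from the day's notes, each tied to the
    exact supporting quote so the supervisor can verify it."""
    prompt = (
        "Read these care notes and return JSON: a list of {finding, quote} "
        f"objects, where finding is one of {FINDINGS} and quote is the exact "
        "supporting sentence. Return [] if nothing applies.\n\n" + "\n".join(notes)
    )
    return json.loads(call_llm(prompt))  # validate before trusting in production

def record_feedback(finding: dict, accepted: bool, correction: str | None = None):
    """Supervisors can X out or relabel a finding; logging the decision lets
    the team measure and improve extraction quality over time."""
    print(json.dumps({"finding": finding, "accepted": accepted,
                      "correction": correction}))
```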
Erin Vallier:
I like that. I think giving them the opportunity to help us train it is very important for increasing adoption because, like you said, it doesn’t always have the right answer. And giving visibility into where all of these answers are coming from, I think that’s super important. I have another question along this line, because we did develop our own in-house tools, but there are so many things that providers can choose from and sort of bolt on. I’m curious, from your perspective: how does building AI in-house compare to using a third-party solution, especially when it comes to security, accuracy, and all the workflow integrations?
Naomi Goldapple:
Yeah, that’s a great question. Obviously, my team and I build things in-house to integrate directly into the platform, and I think the key is really that you want to make those workflows as smooth as possible. You want to avoid clinicians or back-office workers having to log into different systems, so that it reduces the amount of time they have to spend doing whatever it is that they do. Security is something you have to take into consideration, especially with PHI data: with a third-party vendor, you are giving them access to that data. So you want to make sure they are HIPAA compliant, that they’re SOC 2 compliant, that they have all the controls.
Naomi Goldapple:
Obviously, in-house, you can control that a bit better yourself. The other thing is that when it’s in-house, you can really tailor it, or fine-tune it as they call it, to the home care domain, whereas the third-party tools can be used across many different industries and adapted. When you have it in-house, you can really tailor it to your specific needs. Then there’s how quickly you want it: for somebody to build it, it could take longer, while if you buy it off the shelf, you might be able to bolt it on a lot quicker. So it depends where you’re at and how quickly you want to get going.
Erin Vallier:
Yeah, sounds like there are some pros and cons to each. But I feel like, ultimately, if it can be included in a solution that you already have, embedded and fully integrated, it makes things a little bit easier and a little bit more secure, and maybe even provides better information down the line in terms of accuracy and all that stuff. What do you think about that?
Naomi Goldapple:
Yeah, sure, and it’s become so democratized with the advent of these LLMs. A lot of the models, a lot of the technology, is open source, and people can play around with it. It’s not as difficult as, let’s say, deep learning from a few years ago, when you wanted to start integrating deep learning models, and it can be easier for people to learn in-house to be able to do this. It’s worth testing and trying out, because these models are a lot of fun to work with and the costs are definitely coming down.
Naomi Goldapple:
The LLMs can be expensive. They take a lot of compute, so there are GPUs, the processors behind them, that can be quite expensive to use. But obviously, when you’re in-house, you can control things a bit more, and when it’s third party, you just have to pay whatever it is they want you to pay. So there are definitely pros and cons. One thing that’s guaranteed is that this technology is evolving extremely quickly, the improvements are amazing, and it’s a very exciting time to be integrating it. I think we’re very fortunate to be at this period of time, when we can really take advantage of these advancements.
Erin Vallier:
And that’s a nice segue to the last question I want to ask, because I want you to give us something to be excited about. Where do you see AI going in the home care space, and in healthcare in general?
Naomi Goldapple:
Oh my, it’s going in so many directions. But what I’m really excited about is voice. I really find there’s so much we can do with voice. I gave the example about the forms and being able to transcribe, but there’s even being able to dictate progress notes: to dictate a progress note and say, I want it in this particular tone, I want it in this particular language, and I want it sent to this family.
Naomi Goldapple:
This is all very exciting. And where the industry is going is not just these assistants, where you ask a question and it gives you an answer, but actually into actions, where it can take actions when you tell it to, and then even autonomously. We’ve heard about real autonomous AI agents. Having them do a lot of those repetitive tasks with intelligence is a very exciting frontier that we’re getting into: using these large language models so the agent can actually make some smart decisions and execute autonomously, while making sure it stays within the guardrails. This is within reach now. These are things that people are actively working on, and we already see a lot of them coming out, but within the next year there’s going to be a real paradigm shift in how we use software. I think this is a very exciting time.
Erin Vallier:
I would agree. Let’s do more with less. And I love ChatGPT; it makes my first draft of an email sound much more professional and kind sometimes.
Naomi Goldapple:
Yeah, my favorite these days is Gemini 2.5. Really, really enjoying that.
Erin Vallier:
Awesome. Thank you so much for coming on the show. This has been really informative, and I think it’s a really exciting time to be in the industry and watch what AI is going to do to revolutionize it. So thank you for coming on and sharing some of your wisdom with us.
Naomi Goldapple:
Absolute pleasure, thank you.
Erin Vallier:
Home Health 360 is presented by AlayaCare and hosted by Erin Vallier. First, we want to thank our amazing guests and listeners. Second, new episodes air every month, so be sure to subscribe today so you don’t miss an episode. And last but not least, if you like this episode and want to learn more about all things home-based care, you can explore all of our episodes at alayacare.com/episodes or visit us on your favorite podcast platform.
Episode Description
AI in home healthcare serves as a decision support system that helps professionals make better choices by analyzing data, surfacing key information, and providing suggestions while keeping final decisions in human hands.
• AI scribes can transcribe caregiver-client conversations and automatically populate required forms
• Digital assistants prepare caregivers for visits by providing quick client summaries and highlighting recent changes
• Predictive models identify patients at risk of adverse events, helping clinicians prioritize care
• AI tools must maintain the same security permissions as existing systems and comply with HIPAA regulations
• Explainable AI models help build clinician trust by showing why specific recommendations were made
• Voice technology represents the exciting future of AI in healthcare, moving from assistive to autonomous capabilities