The evolution of artificial intelligence technologies has led to rapid changes in many industries. In healthcare, everything from billing to diagnostics to conversing with a hospital about what ails you has now been augmented by AI.
In tech-savvy, life science-rich San Diego, regional hospitals are leading the AI frontier by developing or adopting the latest advancements that promise to improve speed and quality of care and reduce costs for patients and providers alike.
AI Communications Tools
“Scripps Health is utilizing artificial intelligence in a number of ways,” said Shane Thielman, corporate senior vice president and chief information officer for Scripps Health.
Scripps is currently piloting an AI tool that helps draft physician responses to patient inquiries. The replies are generated by Microsoft Azure OpenAI Service’s large language model from a query that includes the patient’s message, current medications and recent results, among other clinical data.
“The AI tool crafts a draft reply that reads like natural language,” Thielman said, adding that the AI tool is not designed to predict treatments, provide evidence-based decision support, diagnose or prescribe treatment.
“It serves to assist clinicians and the AI-generated, HIPAA-compliant drafts must always be reviewed by a provider for accuracy and appropriateness,” he said. “The goals of this are to allow clinicians to focus more time on patient care, while potentially reducing physician burnout that is commonly associated with administrative tasks.”
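As a rough illustration of how a drafting workflow like this can be wired together, the sketch below assembles a prompt from the patient’s message and selected chart data and requests a draft from an Azure OpenAI chat model. It is a minimal, hypothetical example rather than Scripps’ implementation: the deployment name, record fields and system instructions are assumptions, and any real deployment would add HIPAA-compliant data handling and mandatory clinician review.

```python
# Minimal sketch of an LLM-drafted patient reply using the Azure OpenAI
# chat completions API (openai Python package, v1.x). The deployment name,
# record fields and instructions are hypothetical, not Scripps' own.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="<azure-openai-key>",
    api_version="2024-02-01",
    azure_endpoint="https://<your-resource>.openai.azure.com",
)

def draft_reply(patient_message: str, medications: list[str], recent_results: list[str]) -> str:
    """Combine the portal message with selected chart data and request a draft."""
    chart_context = (
        f"Current medications: {', '.join(medications)}\n"
        f"Recent results: {'; '.join(recent_results)}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical deployment name
        messages=[
            {
                "role": "system",
                "content": (
                    "Draft a plain-language reply to the patient's portal message. "
                    "Do not diagnose, prescribe or predict treatment; flag anything "
                    "that needs clinician input."
                ),
            },
            {"role": "user", "content": f"{chart_context}\n\nPatient message: {patient_message}"},
        ],
        temperature=0.3,
    )
    # The output is only a draft; a provider reviews it before anything is sent.
    return response.choices[0].message.content
```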
Scripps ambulatory care physicians are also using an AI note-taking tool called Dragon Ambient eXperience that translates conversations between patients and doctors – from both in-person and telemedicine visits – into clinical notes by combining natural language processing and machine learning “to capture and put into context every word in the patient encounter,” Thielman said. “It creates the clinical documentation automatically in the medical record, to be reviewed and signed by the physician. Today, there is a quality review process to validate accuracy, but in the near future the notes will be completed in real time, fully automated, for signature by the physician.”
Palomar Health Chief Medical Information Officer Bret E. Ginther, M.D., said that large language model virtual assistants and chatbots offer hospitals a “cost-effective option for patients and families to access services at any time of day or night.”
“Well implemented, these tools can increase value without courting unrealistic expectations or a poor patient experience,” he said, but he also pointed out that AI tools “do have an implementation cost, lack human interaction, and can only manage situations for which they have been programmed.”
Sharp HealthCare Vice President and Chief Data and Software Development Officer Jon McManus described AI virtual assistants and chatbots as “a significant frontier in patient engagement and communication” that can improve patient access to care by assisting Sharp team members with answering patient questions and providing support around the clock.
For the hospital, McManus said, virtual assistants offer cost reductions by automating tasks such as information lookup, answering basic questions about insurance or eligibility, and providing basic medical wayfinding, which can free up medical staff for more complex tasks and direct patient care services.
“Sharp also sees an opportunity in increased patient engagement from virtual assistants,” he added. “By providing patients with a convenient and easy way to access information and support, there is growing research that shows patients prefer the personalization and expediency of virtual assistants powered by ethical AI.”
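To make the idea of task automation concrete, here is a toy sketch of the kind of routing logic that sits behind a patient-facing assistant handling lookups, eligibility questions and wayfinding. The intents, keywords and canned answers are invented for illustration; production assistants at these health systems use far more sophisticated language models and escalation rules.

```python
# Toy sketch of intent routing for a patient-facing virtual assistant.
# Intents, keywords and answers are invented for illustration only.
import re

INTENT_KEYWORDS = {
    "insurance_eligibility": {"insurance", "coverage", "eligibility", "copay"},
    "wayfinding": {"parking", "directions", "entrance", "building"},
    "hours": {"hours", "open", "closed", "holiday"},
}

CANNED_ANSWERS = {
    "insurance_eligibility": "I can help check plan eligibility. Which insurance plan do you have?",
    "wayfinding": "Which clinic or department are you trying to reach?",
    "hours": "Most clinics are open 8 a.m. to 5 p.m. weekdays. Which location do you need?",
}

def route(message: str) -> str:
    """Match the message to a known intent; otherwise hand off to a human."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            return CANNED_ANSWERS[intent]
    return "Let me connect you with a team member who can help."

print(route("Where is visitor parking for the main building?"))
print(route("Does my insurance cover a sleep study?"))
```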
TrueCare is exploring the use of a new conversational AI tool to augment its call center services, building on the healthcare provider’s experience using a virtual chatbot assistant on its website during the COVID-19 pandemic.
“The chatbot was instrumental in helping our organization manage a large surge of COVID-related calls in early 2022,” said TrueCare Chief Information Officer Tracy Elmer. “Based on that success, we are actively seeking a funding source to implement it to answer simple inquiries where the answers reside within our data system.”
Elmer said TrueCare’s priority when it comes to adopting AI is to focus on the innovative ways it can enhance patient experience.
“Our approach revolves around aligning people, processes, and technology for the greater good,” she said. “Our goal is to harness the power of AI to enhance the patient experience while ensuring the personal relationship we have with our patients and the community we serve remains intact.”
AI-Powered Diagnostics
The capabilities of large language model chatbots that converse with patients with human-like responses are more apt to grab headlines, but an even more promising role for AI in healthcare is in its ability to quickly sort through data and support doctors in diagnosing patients.
“We’ve integrated AI-driven algorithms into some of our diagnostic imaging tools,” said Sharp’s McManus. “This aids our radiologists by highlighting potential areas of concern, which can lead to earlier and more accurate diagnoses.”
Sharp utilizes RapidAI, an FDA-approved AI model trained on over 10 million imaging studies, for every stroke code patient to help measure cerebral blood flow volume from CT scans. Sharp applies this AI technique to more than 60,000 imaging study analyses each year for over 30,000 patients.
“This has drastically expedited door-to-decision time and time to treatment, and it reduces unnecessary procedures,” McManus said.
Palomar Health also uses an AI tool for helping patients with acute stroke care.
“Since using an AI tool, we have been able to reduce our time to thrombolytic (clot buster) treatment by 30% for acute stroke patients,” Ginther said, adding that Palomar is currently looking to expand the AI technology for the detection of pulmonary embolisms.
Scripps Health data scientists are using machine learning techniques to build predictive patient models that play a role in helping doctors and patients prepare for future needs. Leveraging patient data – including electronic health records, images, claims, and demographic information – Scripps applies sophisticated algorithms to identify patterns and extract valuable insights.
“These predictive models analyze historical data and uncover hidden correlations, allowing us to forecast patient outcomes, predict disease progression, evaluate and improve patient care,” Thielman said. “In our emergency departments we use a sepsis predictive model, which generates an alert if patients present with certain clinical indicators that may lead to sepsis. Based on the alert, clinicians can initiate a standard protocol for care.”
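As a rough illustration of what an alert of this kind looks like in code, the toy rule below counts SIRS-style clinical indicators and raises a flag when two or more are present. The thresholds follow the widely published SIRS criteria, but the example is deliberately simple; it is not Scripps’ model, which is built from historical patient data and validated clinically.

```python
# Toy sepsis-alert trigger based on SIRS-style indicators. Not Scripps' model;
# real predictive models are trained on historical EHR data and validated.
from dataclasses import dataclass

@dataclass
class Vitals:
    temp_c: float          # body temperature, Celsius
    heart_rate: int        # beats per minute
    resp_rate: int         # breaths per minute
    wbc_k_per_ul: float    # white blood cell count, thousands per microliter

def indicator_count(v: Vitals) -> int:
    """Count how many SIRS-style criteria the patient meets."""
    count = 0
    count += v.temp_c > 38.0 or v.temp_c < 36.0
    count += v.heart_rate > 90
    count += v.resp_rate > 20
    count += v.wbc_k_per_ul > 12.0 or v.wbc_k_per_ul < 4.0
    return count

def should_alert(v: Vitals) -> bool:
    """Two or more indicators trigger an alert for clinician review."""
    return indicator_count(v) >= 2

patient = Vitals(temp_c=38.6, heart_rate=104, resp_rate=24, wbc_k_per_ul=13.2)
print("Sepsis alert:", should_alert(patient))  # True -> clinicians can start the care protocol
```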
Administered by AI
Chatbot communication and machine learning diagnostic tools are the most patient-facing technologies, but even more widely adopted in healthcare are the AI tools that help speed up the many time-consuming administrative tasks at hospitals.
Thielman pointed to a “prominent example” of how AI predictive modeling was used at Scripps to help hospital operations during the pandemic.
“Scripps used a recognized data model and adapted it to Scripps data to accurately project demand for hospital beds and make important decisions about how we plan and allocate our resources, including our workforce and supplies,” he said, adding that the model’s accuracy was “extremely high in this area, ranging from the low to mid 90% range during all three of the major COVID surges.”
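The model Scripps adapted is not public, but the sketch below shows, on synthetic census data, the general shape of a demand forecast and how accuracy in the “low to mid 90% range” is commonly computed (100 percent minus the mean absolute percentage error). The numbers and the simple linear trend are purely illustrative assumptions.

```python
# Sketch of bed-demand forecasting on synthetic census data; the trend model
# and numbers are illustrative, not the model Scripps adapted.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(60)
census = 300 + 2.5 * days + rng.normal(0, 10, size=days.size)  # synthetic rising demand

train, test = census[:50], census[50:]

# Fit a simple linear trend to the training window and project it 10 days out.
slope, intercept = np.polyfit(np.arange(50), train, deg=1)
forecast = slope * np.arange(50, 60) + intercept

# Accuracy reported as 100% minus the mean absolute percentage error (MAPE).
mape = np.mean(np.abs((test - forecast) / test)) * 100
print(f"Forecast accuracy over the holdout window: {100 - mape:.1f}%")
```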
Ginther said Palomar Health has also found the value AI capabilities bring to its “foundational operations.”
“AI has already brought efficiencies in revenue cycle processes, appointment scheduling and managing patient queries, to name a few examples,” he said. “AI can also be used to organize your email inbox to keep the most important communications at the front – something I think most would find of value.”
AI’s value in streamlining administrative tasks prompted Sharp to develop and deploy its own internal AI assistant called SharpAI to support operational staff, McManus said.
Some of SharpAI’s capabilities include automatically creating meeting minutes in Sharp-formatted templates from Microsoft Teams meeting recordings; providing text summarization to expedite complex document review; and providing sentiment and thematic categorization analytics for various collections of survey commentary.
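SharpAI’s internals are not public, but as a toy illustration of the sentiment-tagging piece, the sketch below assigns survey comments to positive, negative or neutral buckets using a small hand-built lexicon. The word lists and comments are invented; a production system would rely on a trained language model rather than keyword matching.

```python
# Toy sentiment tagging for patient survey comments. Word lists and comments
# are invented for illustration; this is not how SharpAI is built.
import re

POSITIVE = {"kind", "helpful", "quick", "clean", "caring", "excellent"}
NEGATIVE = {"wait", "rude", "confusing", "delay", "dirty", "frustrating"}

def tag_sentiment(comment: str) -> str:
    """Label a comment by counting positive and negative lexicon hits."""
    words = set(re.findall(r"[a-z]+", comment.lower()))
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

comments = [
    "The nurses were kind and discharge was quick.",
    "Long wait and the billing process was confusing.",
]
for comment in comments:
    print(tag_sentiment(comment), "-", comment)
```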
Through a partnership with nonprofit health care innovation center OCHIN, TrueCare is currently in the process of becoming an early beta adopter of a cutting-edge AI-powered clinical documentation tool designed to reduce administrative burdens.
“Our primary goal is to enhance workflow and alleviate provider burnout,” Elmer said. “As a proud partner of OCHIN, we are privileged to be among the first to preview and test this groundbreaking tool with a select group of our providers. By the end of the year, we anticipate seeing AI become an integral part of their daily routines, acting as their trusted co-pilot.”
The business side of TrueCare’s operations is also using the AI tool ChatGPT-4 to assist with writing tasks.
“According to our marketing and communications department, AI has become an integral part of their daily workflow,” Elmer said. “They leverage AI to generate content ideas, fine-tune copywriting, and create more compelling materials that resonate with the communities we serve. It has also proved invaluable in persona building, SEO, keyword research, and analysis.”
Elmer noted that while content-generating tools like ChatGPT can significantly boost productivity, they are still reliant on editors to validate the information they generate, especially when it comes to matters of regulations and governance.
“This underscores the need for establishing essential guardrails,” she said. “We are prioritizing the development of policies to guide and educate our staff on the best practices for AI use. Our approach is not about replacing human expertise but enhancing it with AI support.”
AI Concerns
The call for guardrails in AI extends beyond the hospital setting, but concerns about the technology in healthcare are especially acute given the need for patient privacy and equality.
“It’s important to acknowledge that, like any emerging technology, AI does come with its limitations,” Elmer said. “At present, the tools we are piloting are only compatible with the English language. This poses a unique challenge for us, as over 60% of our patient population is Hispanic. Thus, we recognize the need to carefully analyze the tool’s best use cases after the pilot phase to ensure its optimal integration.”
Ginther also pointed out that diversity-related bias may limit an AI tool’s ability to fulfill its intended function for the population it is meant to serve.
“Mitigating this will require diligence and continual corrective feedback by data-surveillance and reports from personnel using these AI tools,” he said.
Thielman agreed that AI needs to be evaluated and monitored for unintended outcomes and consequences because “data bias can inevitably exist with AI.”
“It is incumbent in the adoption and application of AI to ensure its efficacy and safety, similar to a new medicine or treatment,” he said, adding that key considerations are understanding the technology and the databases it uses, as well as “a commitment to continuous learning and adaptation.”
To address ethical concerns such as the potential for bias in AI, Sharp has put in place an interdisciplinary AI Ethics and Oversight committee with reporting accountabilities to Sharp’s executive team and board of managers.
“Sharp believes that AI has the potential to revolutionize healthcare delivery, but only if it is used in a responsible and ethical way,” McManus said. “We are committed to working with others to ensure that AI is used to benefit all patients, regardless of their race, ethnicity, gender, or age.”
In addition to issues of potential bias, privacy and data security are also top of mind for healthcare providers as they adopt new AI tools.
Potential risks include data breaches by hackers looking to obtain sensitive patient medical records; data being misused in unintended ways, such as being sold to third-party advertisers or in the development of biased algorithms; and an overall lack of transparency of how patient data is used and controlled.
“We have policies in place that set out how patient data can be collected, used, and shared. AI is not exempt from these requirements, and they are regularly reviewed and updated to ensure alignment with best practices,” McManus said, adding that Sharp regularly monitors its AI and trains its staff on data privacy and security. “We believe that AI has the potential to revolutionize healthcare delivery, but only if it is used in a responsible and ethical way.”
To ensure the safety, security and ethical use of Scripps’ AI virtual assistant tools, Thielman said, all data processed by them is encrypted, segmented from OpenAI and not used to improve the large language model.
“Prompts are developed by Scripps, which instruct the AI model on criteria in the patient record to use in responding to an incoming message from the patient,” he said.
Scripps also adopted security protocols for its Dragon Ambient eXperience listening program, which captures and converts patient-provider exam room interactions into a note.
“Patients consent to the exam being recorded and all recordings are purged within a short period of time after the exam visit, after the note has been reviewed and signed by the physician,” Thielman said, adding that responsible and safe use of AI “is paramount and data privacy and security are part of the core principles in adopting AI.”