

Dyna AI: Bringing in a New Era of Fast, Trustworthy Clinical Decision Support

May 7th, 2024

Providing rapid access to accurate, trustworthy clinical decision support is vital in healthcare, but what is the best way to provide that support? A recent HealthStream webinar tackled this issue. The webinar was moderated by Daniel Pawlus, HealthStream’s Senior Manager of Digital Events, and featured the following presenters:

 

  • Pete Darcy, VP Product, EBSCO
  • Al Stevens, PhD, Information Strategist, EBSCO
  • Paige Owen, PhD, MSN, RN, CEN, Senior Product Manager, HealthStream

 

EBSCO Dynamic Health and EBSCO Clinical Decisions 

Darcy began by sharing an overview of EBSCO Clinical Decisions. This HealthStream partner leverages technology to provide evidence-based resources that help clinicians have a positive impact on patient outcomes and, ultimately, on the global healthcare community. Dynamic Health is an evidence-based point-of-care tool that helps clinicians obtain fast, accurate answers to questions. Users will find decision support content that includes information on diseases and conditions, signs and symptoms, drug monographs, and skills checklists.

Dr. Owen shared that the partnership between EBSCO and HealthStream created opportunities to integrate this content into the HealthStream platform, making it possible for nurses to quickly and easily access clinical information at the point of care. The same information is also available through the HealthStream Learning Center (HLC), making it easy for providers to revisit topics in greater detail at a time of their choosing. The integration means 24/7 access to content covering 2,500 evidence-based skills, EHR and policy links, as well as CEs for skills completion.

 

Large Language Models Present Significant Risks

Large language models (LLMs) are artificial intelligence (AI) systems capable of understanding and generating human language by processing large amounts of text data. They can make inferences from context, generate coherent and relevant responses, summarize text, and answer questions. Models such as ChatGPT and Google AI could be used by providers at the bedside, but they are far from the best option.

When asked about the biggest misconceptions surrounding LLMs and AI, Dr. Stevens said the most common one is that they are actually intelligent. “While they do a good job of summarizing information, they do not function like a human brain. They can produce information based on prior information, but they do not have motivation or any of the other characteristics of a human brain,” said Dr. Stevens. AI is a useful tool, but we need to ensure that we are taking steps to use it responsibly. Bedside reliance on LLMs poses a serious risk of misinformation or misinterpretation.

 

EBSCO’s Principles for Responsible Use of AI

Dr. Owen shared the parameters that EBSCO has established to ensure that clinicians rely only on the best, most accurate, and most up-to-date information.

 

  • Quality: Dyna AI’s “walled garden” (a curated collection of content) assures users of access to trusted, evidence-based content that is developed by subject matter experts and subjected to a rigorous editorial process.
  • Security and Patient Privacy: Systems are designed and monitored in accordance with AI safety principles and HIPAA standards.
  • Transparency: The technology is clearly labeled to support stakeholders’ decision-making processes, and clinical information is presented with its evidence sources.
  • Governance: Development, validation, and ongoing monitoring of the technology are overseen by healthcare professionals.
  • Equity: EBSCO is committed to reducing health disparities through the introduction of measures that identify and mitigate bias.

 

Dyna AI - Reliable Access to Curated Content

Darcy explained that LLMs are typically trained on the open Internet and incorporate enormous amounts of information. Because the models are not actually intelligent, they are unable to discern the difference between good and bad information, which results in factual errors. Dyna AI relies on Retrieval Augmented Generation (RAG) to produce answers to queries while limiting those answers to specific content, in this case, Dyna AI's highly curated content. This improves search efficiency and ensures that users can only retrieve content from within the "walled garden."
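
To make the mechanism concrete, here is a minimal Python sketch of RAG restricted to a fixed, curated corpus. The corpus entries, the keyword-overlap scoring, and the generate_answer placeholder are illustrative assumptions only, not EBSCO's implementation; a production system would use far richer retrieval and a real LLM call.

```python
# Minimal RAG-over-a-curated-corpus sketch. Everything here is illustrative;
# it is not EBSCO's implementation.

# The "walled garden": only these vetted entries can ever be retrieved.
CURATED_CORPUS = [
    {"title": "Sepsis: Signs and Symptoms",
     "text": "Early signs include fever, tachycardia, and altered mental status."},
    {"title": "Heparin Drug Monograph",
     "text": "Monitor aPTT during therapy; adjust dosing per institutional protocol."},
    {"title": "IV Insertion Skill Checklist",
     "text": "Verify the order, perform hand hygiene, and select the insertion site."},
]

def score(query: str, doc: dict) -> int:
    """Toy relevance score: count how many query terms appear in the document."""
    haystack = (doc["title"] + " " + doc["text"]).lower()
    return sum(term in haystack for term in query.lower().split())

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Rank only the curated entries; nothing outside the corpus is reachable."""
    ranked = sorted(CURATED_CORPUS, key=lambda d: score(query, d), reverse=True)
    return [d for d in ranked[:k] if score(query, d) > 0]

def generate_answer(query: str, passages: list[dict]) -> str:
    """Placeholder for the LLM step: the prompt would be grounded in the
    retrieved passages and instruct the model to answer only from them."""
    if not passages:
        return "No matching content found in the curated library."
    context = "\n".join(f"[{p['title']}] {p['text']}" for p in passages)
    # A real system would send `context` plus `query` to an LLM here.
    return f"Sources: {', '.join(p['title'] for p in passages)}\n{context}"

print(generate_answer("early signs of sepsis", retrieve("early signs of sepsis")))
```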

 

Creating the "Walled Garden"

In order for an LLM to be safe and reliable enough for bedside use, Dyna AI needed to create a "walled garden." It ensures that users' queries result in access to highly curated, expert-generated content - and only that content. EBSCO's cross-functional teams, which include clinicians and developers, meet daily to ensure high-quality responses. This "noise reduction" cuts down on the irrelevant or inaccurate information typically produced by an open Internet search. Providers know that they can trust the information their queries return.

 

Dyna AI - Clinical and Patient Assist

The process begins when a clinician enters a question into the search bar. The system then searches exclusively within the carefully controlled library of clinical content. The clinical prompt engineering team, which consists of physicians, advanced practice providers, pharmacists, nurses, and search engineers, has harnessed AI retrieval technology to deliver the right information to the user as quickly as possible. Information on diseases and conditions, signs and symptoms, tests, labs, interventions, and skills, along with information about cultural sensitivity, drugs, and patient handouts, is readily available to providers at the bedside.
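
As a rough illustration of that flow, the short sketch below takes a question, matches it only against a small controlled library, and groups the hits by content type. The library entries, the category names, and the answer_query helper are hypothetical, chosen only to mirror the steps described above.

```python
from collections import defaultdict

# Illustrative only: the content types and entries are assumptions, not Dyna AI's API.
CONTENT_LIBRARY = [
    {"type": "disease",         "title": "Heart Failure Overview"},
    {"type": "drug",            "title": "Furosemide Monograph"},
    {"type": "skill",           "title": "Daily Weight Measurement"},
    {"type": "patient_handout", "title": "Living with Heart Failure"},
]

def answer_query(question: str) -> dict[str, list[str]]:
    """Search only the controlled library and group matching titles by content type."""
    terms = question.lower().split()
    grouped: dict[str, list[str]] = defaultdict(list)
    for entry in CONTENT_LIBRARY:
        if any(term in entry["title"].lower() for term in terms):
            grouped[entry["type"]].append(entry["title"])
    return dict(grouped)

print(answer_query("heart failure"))
# -> {'disease': ['Heart Failure Overview'], 'patient_handout': ['Living with Heart Failure']}
```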
