Week 10 – Patient engagement and adherence applications

Patient engagement and adherence have long been seen as the ‘last mile’ problem of healthcare – the final barrier separating ineffective care from good health outcomes. The more proactively patients participate in their own well-being and care, the better the results in terms of utilisation, cost and member experience. These factors are increasingly being addressed by big data and AI.

Providers and hospitals often use their clinical expertise to develop a plan of care that they know will improve the health of a patient with a chronic or acute condition. However, that plan often counts for little if the patient fails to make the necessary behavioural adjustments, eg losing weight, scheduling a follow-up visit, filling prescriptions or complying with a treatment plan. Noncompliance – when a patient does not follow a course of treatment or take the prescribed drugs as recommended – is a major problem.

In a survey of more than 300 clinical leaders and healthcare executives, more than 70% of respondents reported that fewer than half of their patients were highly engaged, and 42% said that fewer than a quarter were.

If deeper involvement by patients results in better health outcomes, can AI-based capabilities be effective in personalising and contextualising care? There is growing emphasis on using machine learning and business rules engines to drive nuanced interventions along the care continuum. Messaging alerts and relevant, targeted content that prompt action at the moments that matter are a promising area of research.
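
To make this concrete, here is a minimal sketch of the kind of business-rules logic such systems apply: two hand-written rules that decide when an adherence message should fire. The patient fields, thresholds and message wording are all hypothetical; a production rules engine would evaluate far richer data and channels.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Patient:
    name: str
    last_refill: date          # date the prescription was last filled
    next_followup: date        # scheduled follow-up visit
    days_supply: int = 30      # days of medication dispensed per fill

def adherence_nudges(patient: Patient, today: date) -> list[str]:
    """Apply simple business rules and return any messages that should fire."""
    messages = []
    # Rule 1: medication supply should have run out -> refill reminder
    if today > patient.last_refill + timedelta(days=patient.days_supply):
        messages.append(f"{patient.name}: your prescription is due for a refill.")
    # Rule 2: follow-up visit within the next 7 days -> appointment reminder
    if timedelta(0) <= patient.next_followup - today <= timedelta(days=7):
        messages.append(f"{patient.name}: reminder – follow-up visit on {patient.next_followup}.")
    return messages

print(adherence_nudges(
    Patient("A. Example", last_refill=date(2024, 1, 2), next_followup=date(2024, 2, 8)),
    today=date(2024, 2, 5),
))
```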

Another growing focus in healthcare is on effectively designing the ‘choice architecture’ to nudge patient behaviour in a more anticipatory way based on real-world evidence. Drawing on information from provider EHR systems, biosensors, watches, smartphones, conversational interfaces and other instrumentation, software can tailor recommendations by comparing a patient’s data with the treatment pathways that have proved effective for similar cohorts. The recommendations can be provided to providers, patients, nurses, call-centre agents or care delivery coordinators.
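
One simple way to approximate this cohort-matching idea is nearest-neighbour search: find the historical patients most similar to a new case and surface the pathway that worked best for them. The sketch below uses scikit-learn’s NearestNeighbors on made-up features, pathways and outcome scores; feature scaling, confounding and clinical validation are all ignored here.

```python
# Minimal cohort-matching sketch: find the patients most similar to a new case
# and return the treatment pathway with the best observed outcome among them.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical historical data: [age, BMI, HbA1c], the pathway each patient
# followed, and a 0-1 outcome score (all values illustrative).
features = np.array([[54, 31.0, 7.9], [61, 29.5, 8.4], [47, 33.2, 7.1],
                     [58, 30.1, 8.0], [50, 27.8, 6.9]])
pathways = ["lifestyle", "metformin", "lifestyle", "metformin", "lifestyle"]
outcomes = np.array([0.6, 0.8, 0.7, 0.9, 0.5])

model = NearestNeighbors(n_neighbors=3).fit(features)

def recommend(new_patient):
    _, idx = model.kneighbors([new_patient])   # indices of the 3 closest patients
    scores = {}
    for i in idx[0]:
        scores.setdefault(pathways[i], []).append(outcomes[i])
    # Pick the pathway with the best average outcome within the matched cohort
    return max(scores, key=lambda p: np.mean(scores[p]))

print(recommend([56, 30.5, 8.1]))   # pathway that worked best for similar patients
```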

Week 9 – Diagnosis and treatment applications

Diagnosis and treatment of disease has been a focus of AI since at least the 1970s, when MYCIN was developed at Stanford for diagnosing blood-borne bacterial infections. This and other early rule-based systems showed promise for accurately diagnosing and treating disease, but were not adopted for clinical practice. They were not substantially better than human diagnosticians, and they were poorly integrated with clinician workflows and medical record systems.

More recently, IBM’s Watson has received considerable attention in the media for its focus on precision medicine, particularly cancer diagnosis and treatment. Watson employs a combination of machine learning and NLP capabilities. However, early enthusiasm for this application of the technology has faded as customers realised the difficulty of teaching Watson how to address particular types of cancer and of integrating Watson into care processes and systems. Watson is not a single product but a set of ‘cognitive services’ provided through application programming interfaces (APIs), including speech and language, vision, and machine learning-based data-analysis programs. Most observers feel that the Watson APIs are technically capable, but taking on cancer treatment was an overly ambitious objective. Watson and other proprietary programs have also suffered from competition with free ‘open source’ programs provided by some vendors, such as Google’s TensorFlow.

Implementation issues with AI bedevil many healthcare organisations. Although rule-based systems incorporated within EHR systems are widely used, including at the NHS, they lack the precision of more algorithmic systems based on machine learning. These rule-based clinical decision support systems are difficult to maintain as medical knowledge changes and are often not able to handle the explosion of data and knowledge based on genomic, proteomic, metabolic and other ‘omic-based’ approaches to care.
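
For context, the sketch below shows how minimal such rule-based decision support logic can be: each rule is a hand-coded predicate over a patient record plus an alert string. The thresholds and drug pairs are placeholders rather than clinical guidance, but the structure illustrates why these systems are hard to maintain – every change in medical knowledge means editing rules by hand.

```python
# Illustrative rule-based clinical decision support: each rule is a predicate
# over a patient record plus the alert text shown to the clinician.
# Thresholds and drug names are placeholders, not clinical guidance.
RULES = [
    (lambda p: p["egfr"] < 30 and "metformin" in p["medications"],
     "Metformin prescribed with eGFR < 30 – review dose."),
    (lambda p: p["potassium"] > 5.5,
     "Potassium above 5.5 mmol/L – consider repeat test and ECG."),
    (lambda p: "warfarin" in p["medications"] and "aspirin" in p["medications"],
     "Warfarin and aspirin co-prescribed – bleeding-risk check."),
]

def run_cds(patient_record: dict) -> list[str]:
    """Return the alerts triggered by a single patient record."""
    return [alert for rule, alert in RULES if rule(patient_record)]

print(run_cds({
    "egfr": 24,
    "potassium": 4.8,
    "medications": {"metformin", "warfarin"},
}))
```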

This situation is beginning to change, but the change is so far mostly confined to research labs and tech firms rather than clinical practice. Scarcely a week goes by without a research lab claiming that it has developed an approach to using AI or big data to diagnose and treat a disease with equal or greater accuracy than human clinicians. Many of these findings are based on radiological image analysis, though some involve other types of images such as retinal scans, or genomic-based precision medicine. Since these findings rest on statistical machine learning models, they are ushering in an era of evidence- and probability-based medicine, which is generally regarded as positive but brings with it many challenges in medical ethics and the patient–clinician relationship.
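
Many of the image-analysis results follow the same basic recipe: fine-tune a pretrained convolutional network on labelled scans. The sketch below shows that pattern with PyTorch and a ResNet-18 on a hypothetical binary "finding / no finding" task; the data and labels are random placeholders, and real studies involve far more careful data curation, splitting and clinical validation.

```python
# Minimal transfer-learning sketch for scan classification (placeholder data).
import torch
import torch.nn as nn
from torchvision import models

# Downloads ImageNet weights on first run (requires torchvision >= 0.13).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: finding / no finding

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (N, 3, 224, 224) tensor of scans; labels: (N,) tensor of 0/1."""
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
    return loss.item()

# Smoke test with random tensors standing in for a batch of scans.
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,))))
```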

Tech firms and startups are also working assiduously on the same issues. Google, for example, is collaborating with health delivery networks to build prediction models from big data to warn clinicians of high-risk conditions, such as sepsis and heart failure. Google, Enlitic and a variety of other startups are developing AI-derived image interpretation algorithms. Jvion offers a ‘clinical success machine’ that identifies the patients most at risk as well as those most likely to respond to treatment protocols. Each of these could provide decision support to clinicians seeking to find the best diagnosis and treatment for patients.
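
The risk-warning models mentioned here – and the population-health models discussed below – typically share one pattern: train a classifier on routinely collected clinical variables and flag patients whose predicted risk crosses a threshold. Here is a small sketch of that pattern on synthetic data; the features, label construction and threshold are purely illustrative.

```python
# Risk-model sketch: classifier over routine vitals/labs, then flag high-risk
# patients for review. Synthetic data only – nothing here is clinically tuned.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: heart rate, respiratory rate, temperature, white-cell count, lactate
X = rng.normal([90, 18, 37.2, 9.0, 1.5], [15, 4, 0.8, 3.0, 1.0], size=(2000, 5))
# Synthetic label loosely tied to tachycardia, tachypnoea and raised lactate
risk = 0.03 * (X[:, 0] - 90) + 0.1 * (X[:, 1] - 18) + 0.8 * (X[:, 4] - 1.5)
y = (risk + rng.normal(0, 1, 2000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print("patients flagged for review:", int((probs > 0.5).sum()), "of", len(probs))
```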

There are also several firms that focus specifically on diagnosis and treatment recommendations for certain cancers based on their genetic profiles. Because many cancers have a genetic basis, it has become increasingly difficult for human clinicians to keep track of all the genetic variants of cancer and how they respond to new drugs and protocols. Firms like Foundation Medicine and Flatiron Health, both now owned by Roche, specialise in this approach.
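
At its core, genomically guided treatment matching means intersecting the variants detected in a tumour profile with a curated knowledge base of variant-to-therapy evidence. The toy lookup below only illustrates the shape of that matching step; the entries are a few well-known public examples, not a clinical knowledge base.

```python
# Toy variant-to-therapy matching: intersect detected variants with a curated
# evidence table. Illustrative entries only – not a clinical resource.
VARIANT_THERAPIES = {
    ("EGFR", "L858R"): "EGFR tyrosine-kinase inhibitor",
    ("BRAF", "V600E"): "BRAF/MEK inhibitor combination",
    ("ERBB2", "amplification"): "HER2-targeted therapy",
}

def match_therapies(detected_variants):
    """Return the subset of detected variants with a known therapy association."""
    return {v: VARIANT_THERAPIES[v] for v in detected_variants if v in VARIANT_THERAPIES}

tumour_profile = [("EGFR", "L858R"), ("TP53", "R175H")]
print(match_therapies(tumour_profile))
# -> {('EGFR', 'L858R'): 'EGFR tyrosine-kinase inhibitor'}
```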

Both providers and payers for care are also using ‘population health’ machine learning models to predict populations at risk of particular diseases or accidents or to predict hospital readmission. These models can be effective at prediction, although they sometimes lack all the relevant data that might add predictive capability, such as patient socio-economic status.

But whether rule-based or algorithmic in nature, AI-based diagnosis and treatment recommendations are sometimes challenging to embed in clinical workflows and EHR systems. Such integration issues have probably been a greater barrier to broad implementation of AI than any inability to provide accurate and effective recommendations, and many AI-based capabilities for diagnosis and treatment from tech firms are standalone in nature or address only a single aspect of care. Some EHR vendors have begun to embed limited AI functions (beyond rule-based clinical decision support) into their offerings, but these are at an early stage. Providers will either have to undertake substantial integration projects themselves or wait until EHR vendors add more AI capabilities.

Week 8 – Robotic process automation

This technology performs structured digital tasks for administrative purposes – ie those involving information systems – as if it were a human user following a script or rules. Compared to other forms of AI, it is inexpensive, easy to program and transparent in its actions. Robotic process automation (RPA) doesn’t really involve robots – only computer programs on servers. It relies on a combination of workflow, business rules and ‘presentation layer’ integration with information systems to act like a semi-intelligent user of those systems. In healthcare, RPA is used for repetitive tasks like prior authorisation, updating patient records or billing. When combined with other technologies such as image recognition, it can extract data from, for example, faxed images in order to feed it into transactional systems.
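
Stripped to its essentials, the RPA pattern is a scripted software ‘user’: read a worklist, apply business rules, and push updates into another system. The sketch below shows that flow in plain Python with a hypothetical payer-portal endpoint and field names; commercial RPA platforms implement the same steps through recorded UI or API interactions rather than hand-written code.

```python
# Stripped-down illustration of the RPA pattern: a scripted software "user"
# that reads a worklist, applies simple rules and pushes updates elsewhere.
# The endpoint, fields and rules are hypothetical.
import csv
import requests

PORTAL_URL = "https://example-payer-portal.test/api/prior-auth"   # placeholder

def submit_prior_auths(worklist_path: str) -> None:
    with open(worklist_path, newline="") as f:
        for row in csv.DictReader(f):
            # Business rule: only routine imaging requests are auto-submitted;
            # everything else is left for a human reviewer.
            if row["request_type"] != "imaging" or row["urgency"] != "routine":
                continue
            payload = {
                "member_id": row["member_id"],
                "procedure_code": row["procedure_code"],
                "diagnosis_code": row["diagnosis_code"],
            }
            response = requests.post(PORTAL_URL, json=payload, timeout=10)
            print(row["member_id"], "->", response.status_code)

# submit_prior_auths("prior_auth_worklist.csv")
```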

We’ve described these technologies as individual ones, but increasingly they are being combined and integrated: robots are getting AI-based ‘brains’, and image recognition is being integrated with RPA. Perhaps in the future these technologies will be so intermingled that composite solutions will become more likely and more feasible.
