This is a summary of a presentation I watched at the National Kidney Foundation Spring Clinical Meeting.
Speakers: Peter Kotanko, MD; Hanjie Zhang, PhD; and Roxanne Wang, MSc, from the Renal Research Institute
As you know, I have an interest in how AI might impact our work in kidney care, so I was excited to see a presentation on this topic here at NKF. And guess what, one of the examples was using ChatGPT to create menus and do nutrient analysis.
Objectives:
- Foundations of AI
- Strengths and limitations of AI
- Potential applications for AI
When and where did AI start?
A critical event in the creation of AI occurred at Dartmouth. In the summer of 1956, a group of researchers met to develop a machine with intelligence. Their plan was to recreate the information processing done by the human brain by creating an artificial neuron that combines inputs to produce an output.
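To make that 1956 idea concrete, here is a minimal sketch of an artificial neuron: weighted inputs are summed, a bias is added, and the total is passed through an activation function to produce an output. All of the values below are made up for illustration.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed by a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # output between 0 and 1

# Illustrative values only: three inputs with arbitrary weights and bias.
output = artificial_neuron(inputs=[0.5, 0.2, 0.9], weights=[0.4, -0.6, 0.8], bias=0.1)
print(f"neuron output: {output:.3f}")
```

Modern deep learning stacks many of these simple units into layers; the "learning" is adjusting the weights and biases from data.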
What is AI?
There is no universal definition. Ultimately, AI typically means that machines can learn something. Machine learning is a subset of AI. Deep learning is a subset of machine learning and uses highly complex artificial networks. Large Language Models (LLMs) are also AI and use HUGE networks. Generative AI is able to generate text, images, and other media.
Machine learning and deep learning are the more classic senses of AI.
Will AI replace the need for humans in health care delivery?
This speaker doesn’t think so. Domain experts (e.g., physicians or dietitians) are an essential component of AI development. Domain experts define the goal of the AI model and direct the work of the AI builders. Without a domain expert, it can be difficult to set up the AI in a way that will generate meaningful results.
What is AI good at? What is AI bad at?
AI is good at identifying patterns and examining huge data sets for hidden associations. For example, classical AI can use data from EMRs or information from HD machines to predict hospitalization rates.
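As a hedged sketch of what that kind of pattern-finding might look like: a logistic regression over invented dialysis features predicting hospitalization. The feature names, data, and label rule below are all fabricated for illustration, not from the presentation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins for EMR / dialysis-machine features (all invented):
# [albumin g/dL, interdialytic weight gain kg, session blood flow mL/min]
X = rng.normal(loc=[3.8, 2.5, 400], scale=[0.4, 1.0, 50], size=(500, 3))

# Fabricated label rule: lower albumin and higher weight gain raise the
# odds of hospitalization in this toy data set.
logits = -2.0 * (X[:, 0] - 3.8) + 0.8 * (X[:, 1] - 2.5)
y = (rng.random(500) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

A real project would of course use actual patient data, careful feature engineering, and validation, but the shape of the task (features in, risk out) is the same.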
AI is bad at understanding mechanisms, and it can’t avoid bias or handle phase shifts. A phase shift is a fundamental change in how something is done, e.g., when a mode of travel changes (a horse-drawn carriage gives way to the car) or a light source changes (candles to light bulbs); a model trained on the old regime can’t anticipate the new one.
Other concerns with AI:
- AI is a black box – this means it is difficult to understand the decision-making process.
- AI is unable to identify causal links, and it lacks creativity.
- AI requires large data sets and is unable to adapt to previously unseen data.
Guidelines are starting to emerge to govern the use of AI in medicine.
What is our role as HCPs?
- Maintain a patient-centered perspective
- Protect privacy
- Communicate AI output to our patients in clear terms
- Serve as the intermediary between patients and AI experts
- Stay critical of AI output
Applications for AI in Kidney Care – Image Identification
Example: Will this AV fistula rupture?
In this example, AI image detection was used to classify the risk of rupture of AV fistula aneurysms. This was developed because it has been reported that in NYC alone, one person per day dies from exsanguination related to rupture of an AV fistula.
What did they do? The AI team collected over 1000 images of fistulas, some showing a fistula without an advanced aneurysm and some with an advanced aneurysm. Then they built a model that outputs the probability of rupture. They tested it by comparing a vascular access expert’s assessment of the fistula against the AI output, and the AI model did pretty well. This model (being developed into an app) isn’t available yet; it is going through review.
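The presentation didn’t share model details, so here’s a hedged sketch of the general approach: a small convolutional network mapping a fistula photo to a rupture-risk probability. The architecture, the input size, and the random tensor standing in for a real photo are all my assumptions for illustration.

```python
import torch
import torch.nn as nn

class FistulaRiskNet(nn.Module):
    """Toy CNN: image in, single rupture-risk probability out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1), nn.Sigmoid(),  # probability between 0 and 1
        )

    def forward(self, x):
        return self.head(self.features(x))

# Random tensor standing in for a 224x224 RGB photo of a fistula.
fake_image = torch.rand(1, 3, 224, 224)
model = FistulaRiskNet()
print(f"predicted rupture probability: {model(fake_image).item():.2f}")
```

In practice a team would fine-tune a much larger pretrained network on the labeled fistula images, but the input/output contract is the same: photo in, risk score out.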
Applications for AI in Kidney Care – Large Language Models
ChatGPT is a Large Language Model. ChatGPT uses a massive training dataset, a transformer architecture, and reinforcement learning from human feedback (RLHF).
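For the "transformer architecture" piece, the core operation is scaled dot-product attention: each token’s output is a weighted average of every token’s value, with weights derived from query-key similarity. Here is a minimal numpy sketch with toy dimensions and random data (my illustration, not from the talk):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V -- the core of a transformer layer."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # token-to-token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted average of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, dimension 8
print(scaled_dot_product_attention(Q, K, V).shape)     # (4, 8)
```

A full LLM stacks many of these attention layers and trains the whole thing to predict the next token over its massive dataset.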
What are LLMs not good at?
LLMs don’t understand the underlying reality that language describes. They generate text that sounds fluent, appears informative, and is grammatically correct, but can be completely incorrect. LLMs have no objective other than statistical consistency with the prompt. ChatGPT also won’t provide its sources, which makes verifying its answers difficult.
It can take over $100 million to train an LLM.
Example: The use of ChatGPT to Assist Nutritional Guidance to Dialysis Patients
Background – it is challenging to give personalized nutritional information in kidney disease because of comorbidities, nutritional status, SES, income, preferences, and cultural background. ChatGPT was tested for its ability to develop a menu plan, translate it into different languages and schedules, and complete a nutrient analysis.
What did they do? The AI team entered a fake patient profile into ChatGPT and had it create a menu and nutrient analysis. Then the renal dietitian evaluated the recommendations. The patient profile included age, height, BMI, labs, sex, ethnicity, health history, food preferences, and budget, and the prompt asked for a 1-day sample menu.
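The exact prompt wasn’t shared, so here is a hedged sketch of how a profile-to-menu request might look using the OpenAI Python client. The model name and every profile value below are placeholders I made up, not the study’s.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Invented patient profile -- placeholder values, not from the study.
profile = (
    "Patient: 62-year-old male on hemodialysis. Height 175 cm, BMI 27. "
    "Labs: K 5.4 mmol/L, PO4 1.9 mmol/L, albumin 38 g/L. "
    "Prefers South Asian vegetarian cuisine, modest grocery budget."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a renal dietitian assistant."},
        {"role": "user", "content": profile + " Create a 1-day sample menu "
         "suitable for dialysis, with a nutrient analysis for each meal."},
    ],
)
print(response.choices[0].message.content)
```

Whatever comes back still needs the renal dietitian’s review, which is exactly the evaluation step the study performed.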
How did it do?
- Recipes – good cultural interpretation, with high-quality cooking instructions.
- Nutritional analysis – underestimated energy, protein, phosphate (PO4), potassium (K), and sodium (Na) by 28-50% (Wang et al., ASN Kidney Week 2023). However, when ChatGPT was made to use pre-defined online recipes, the nutrient analysis was much better.
Their conclusion: LLMs could allow for personalized nutrition guidance, but there is room for improvement. The next step will be to create a better renal menu database and see how much that improves the results.
Where do we think AI may be best adapted to help healthcare?
- Patient education is likely a great use for AI. For example, you could feed an AI a PD training manual and then develop an avatar to answer patient questions as they come up (see the sketch after this list).
- Translation – ChatGPT likely has great applicability in language translation.
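To make the PD-manual idea concrete, here is a minimal retrieval sketch (my own illustration, not the speakers’ design): split the manual into chunks, pick the chunk most similar to the patient’s question, and hand that chunk to an LLM as context for its answer. This toy version uses word overlap in place of real embeddings, and the manual text is invented.

```python
# Toy retrieval over a PD training manual: find the most relevant chunk
# for a patient question, then (in a real system) pass it to an LLM.
# The manual snippets below are invented for illustration.

manual_chunks = [
    "Wash your hands for 20 seconds before every PD exchange.",
    "Cloudy drain fluid can be a sign of peritonitis; call the clinic.",
    "Store dialysis solution bags at room temperature away from sunlight.",
]

def most_relevant_chunk(question: str, chunks: list[str]) -> str:
    """Rank chunks by word overlap with the question (stand-in for embeddings)."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

question = "My drain fluid looks cloudy, what should I do?"
context = most_relevant_chunk(question, manual_chunks)
print(f"Context handed to the LLM/avatar: {context}")
```

Grounding the avatar’s answers in the manual this way also addresses the sourcing concern raised earlier: the patient can be shown exactly which passage the answer came from.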