Rural Health Information Hub

Applying AI to Rural Health, with Jordan Berg

Date: August 6, 2024
Duration: 43 minutes

An interview with Jordan Berg, Principal Investigator for the National Telehealth Technology Assessment Resource Center. We discuss the many ways AI can aid rural healthcare and detail the guardrails in place to protect patient information.

Listen and subscribe on a variety of platforms at PodBean.


Transcript

Andrew Nelson: Welcome to Exploring Rural Health, a podcast from the Rural Health Information Hub. My name is Andrew Nelson. In this podcast, we'll be talking with a variety of experts about providing rural healthcare, problems they've encountered, and ways in which those problems can be solved. Today I'm speaking to Jordan Berg. Jordan's the Principal Investigator of the National Telehealth Technology Assessment Resource Center, also known as T-TAC. Thanks for joining us today, Jordan.

Jordan Berg: Hey, thanks, Andrew. It's really a pleasure to be here and talk with you.

Andrew Nelson: In the last few years, we've all been hearing a lot about AI and how it's changing the ways in which human beings can get things done. When we talk about artificial intelligence in healthcare, people might imagine something like a robot operating on a patient, or something without any human oversight. But what are we really talking about? What is AI, and what are some of the types of AI that are currently being used in healthcare?

Jordan Berg: Yeah, that's a really good question. One of the things that's useful to note right off the bat is that there are actually quite a few definitions of AI, or artificial intelligence, out there, ranging from technical definitions used in [the] computer science realm to more application-based definitions. So it's important for organizations to look more broadly than a single definition and realize that the definitions that pop up are often reflections of the different organizations issuing them. For example, the FDA [Food and Drug Administration] has a definition that's more concerned with how AI is used practically, what jobs it can do, and how to manage those jobs, whereas the Office for Civil Rights and CMS [the Centers for Medicare and Medicaid Services] are more concerned with making sure that AI systems aren't doing things that are discriminatory or biased.

As for a general working definition, I really do like the Wikipedia definition, which basically says it's simulated intelligence: intelligence exhibited by machines, particularly computer systems. So really, the point is that AI is machines simulating the way people think. And why would we want that? Why is that important? Well, we should think a little bit about what humans are good at, and what machines are good at.

Machines are really good at doing repetitive tasks for long periods without needing breaks; [they're] able to just continue to do things. They're good at chewing through massive amounts of data in very little time. What are humans good at? Humans are good at seeing patterns. We're good at working with limited or missing information. We're good at expressing ourselves through writing. We're good at weighing decisions and being empathetic in how we reason, thinking about how other people are going to be affected by the things that we do. AI, then, is really the attempt to use technology to build machines that are more like people in how they think, reason, and work. So what does that mean for healthcare? Well, there are a few different types of artificial intelligence that have applications in healthcare, and we can break those down into a couple of large categories.

The first is computer vision, or machine vision. This technology has been around for decades now, but the idea is: what if we showed a piece of software a picture? Would it be able to tell us what it's looking at, how much it resembles images it's seen before, and what degree of confidence it has that it's that thing? We've seen this before, where AI-powered optical imaging can look at a picture of a cat and know with 84% certainty that it's a picture of a cat. The interesting thing about this technology is that you can train it on anything. From a healthcare perspective, that's really useful in any sort of visually based work: image processing, diagnostic lab work. We can train AI software on images of cancer cells, and it will be able to look at a cell and give you a probability of how certain it is that it's seeing a cancer cell.
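To make that "percent certainty" idea concrete, here is a minimal sketch of the final step of an image classifier: converting the model's raw scores into a confidence figure. The labels and scores are hypothetical stand-ins for what a trained vision model would produce.

```python
import numpy as np

def softmax(logits):
    """Convert a classifier's raw scores into probabilities that sum to 1."""
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

# Hypothetical raw scores from a trained image model for one photo.
labels = ["cat", "dog", "fox"]
logits = np.array([3.0, 0.9, 0.3])

probs = softmax(logits)
best = int(np.argmax(probs))
print(f"{labels[best]} with {probs[best]:.0%} confidence")  # "cat with 84% confidence"
```

A diagnostic tool flagging cancer cells works the same way at this last step: it reports a class and a probability, and it's the clinician who decides what to do with that number.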

Another thing AI is doing within machine vision and computer vision is recognizing text. And this goes beyond just reading text off a screen: you can show it a piece of paper with text on it, and it can process that language, read it, and add context from what it's seeing. The underlying technology is called optical character recognition, or OCR. This opens up a whole variety of opportunities to feed in documentation, whether in paper or digital format, and have a computer read it in any language we train it on, tell you what's in those documents, and put that information to use.
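As a minimal sketch of the OCR step, here is what reading a scanned page looks like with the open-source Tesseract engine via the pytesseract library. It assumes Tesseract and its English language data are installed locally; "scan.png" is a hypothetical scanned document.

```python
# Minimal OCR sketch: extract the text from a scanned page.
from PIL import Image
import pytesseract

page = Image.open("scan.png")                          # hypothetical scanned document
text = pytesseract.image_to_string(page, lang="eng")   # any trained language works
print(text)
```

Adding context on top of the raw extracted text, as described above, is a separate downstream step that a language model would handle.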

Robotics is kind of a middle layer between humans and the machines that we have to program. Robotics requires really fine tuning and really fine programming. It takes a lot of time, and it's hard for us to manage if we want to generate the massive amounts of code it takes for robots to do the things we want them to do. Generally that's labor intensive: you have to spend hours and hours to develop the code, compile the code, and run the code. AI provides a way to rapidly speak that robotic language using human prompts, so we can tell a piece of technology what we want it to do, or show it what we want done graphically, and it can do it by translating those human commands into language that the robotic control systems will understand.

The final type of AI that we really should talk about is generative AI. This is the wave of AI that's really captured the public's attention and is generating a lot of what I would consider to be the hype around AI. There are a few different aspects of what that is and why there's so much hype, but we can break generative AI down into a few different things. The first is the rise of the large language models. This is your ChatGPT, your Gemini, your Microsoft Copilot. These are language models built on massive data sets scraped from all over the internet and from libraries; billions and billions of source texts.

From that, we're able to simulate a kind of human chat experience where we can ask questions and get responses. So these are the large language models, and one of the reasons they're so popular is that we've all had opportunities to interact with them. We're using them maybe in our personal lives, maybe in our professional lives a little bit. But they've really exceeded a lot of folks' expectations, and they've captured people's imagination. Related to this is AI focused on image and video generation. We've seen a lot of these technologies come up where we can give an AI a prompt and it's able to produce a video or an image from whole cloth, just based on our prompt, with varying degrees of success.

And then combining those two systems, the large language models and the image and video generation, is what we would call multimodal AI. These are designed to take multiple types of inputs and generate different types of outputs. So maybe I give it a block of text and a video, and it's able to give me a different type of output than what I put into it.

So there are a lot of different types of AI, and a lot of different ways AI is being used in healthcare. Some of it's really flashy and generating a lot of interest. But a lot of the really practical things are technologies that have been in development for the last several decades and are coming into a new level of maturity that makes them reliable and useful enough to be used in healthcare.

It takes massive, massive amounts of data to train these generative AIs. From a healthcare perspective, a lot of what's generating the need and desire to use AI are some of the same things we consider when we're looking at telehealth: provider shortages. We need vastly more providers than are even being trained at this point. Getting providers, and getting them into really rural locations, is really challenging. So AI offers tools that can help us bring, if not the providers themselves, then increased levels of expertise and better tools to enhance care in these places.

Andrew Nelson: What are some examples of these different types of AI that are currently being used in healthcare?

Jordan Berg: I think it helps to break them up by the end user they're targeted at. In the patient-facing space, we're seeing AI examples focused on symptom checkers: a patient can put in their symptoms, and an AI will take those symptoms and suggest a diagnosis, kind of like WebMD. Right now these are in the educational realm, but these technologies can be used in a chatbot format to help triage patients to the right specialty clinic. So if you were in a larger healthcare system and you wanted to direct someone to a particular specialty, you could use some of these AI chatbots to help direct the conversation and get the patient to the right contact to set up and schedule. A toy version of that routing idea is sketched below.
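Real products use an AI intent classifier under the hood, but a simple keyword table is enough to show the triage-routing idea. The symptoms, clinic names, and phone numbers here are all hypothetical.

```python
# Toy triage router: map a patient's message to a scheduling contact.
ROUTES = {
    "chest pain": "Cardiology scheduling: 555-0100",
    "rash": "Dermatology scheduling: 555-0101",
    "blurry vision": "Ophthalmology scheduling: 555-0102",
}

def triage(message: str) -> str:
    text = message.lower()
    for symptom, contact in ROUTES.items():
        if symptom in text:
            return contact
    return "Front desk: 555-0199"  # default when nothing matches

print(triage("I've had chest pain since yesterday"))
# -> Cardiology scheduling: 555-0100
```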

Speaking of scheduling, that's another thing we've seen in the patient space: using AI to generate automatic appointment reminders and to have some of those initial conversations with patients to set up visits and collect demographic information. And just around the corner, one of the things we're really interested in is AI for patient education that creates customized, tailored education materials based on a patient's particular disease state and demographic information.

On the provider side, there are some tools available in the clinical space right now. A lot of these are focused on helping providers document and capture the clinical encounter. When a provider is having a conversation with a patient, there are AI tools that will listen to what's being said in the room and not just record the audio, but turn that audio into text and create a summary of the visit following the standard note order, which the provider can go into later, verify everything is correct, and use to produce better notes.

This sort of dictation enhancement, or encounter summarization, is really helpful. And when it's done right, it provides a way for a provider to really engage with the patient instead of typing away, trying to capture all the notes they need from that encounter.
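As a minimal sketch of what such a tool hands back, here is a draft note in the standard SOAP order (subjective, objective, assessment, plan). The field contents are invented, and the verification flag reflects the "human in the loop" point made later: nothing is filed until the provider signs off.

```python
from dataclasses import dataclass

@dataclass
class DraftNote:
    """AI-drafted visit summary in the standard SOAP order."""
    subjective: str   # what the patient reported
    objective: str    # exam findings, vitals
    assessment: str   # working diagnosis
    plan: str         # treatment and follow-up
    verified_by_provider: bool = False

note = DraftNote(
    subjective="Three days of cough, no fever reported.",
    objective="Temp 98.9F, lungs clear to auscultation.",
    assessment="Likely viral upper respiratory infection.",
    plan="Supportive care; return if symptoms worsen.",
)

# The provider reviews and corrects the draft before it enters the chart.
note.verified_by_provider = True
```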

Andrew Nelson: Sure, they have kind of a transcript.

Jordan Berg: Exactly. One of the things computers are good at is hearing everything that happens in that encounter, and humans are good at forgetting things we heard two seconds ago. So it helps make sure we're documenting everything that's going on, not just the things that are top of mind. That's one of the exciting things we can have there. A lot of the actual, productive AI applications we're seeing in the healthcare system are occurring on the business side. We've seen a lot of tools, embedded either in an electronic health record system or in a business management system, that are designed for automatically collecting and paying invoices, automatically billing for services at the appropriate level, and collecting payments from vendors and patients. And that extends beyond the day-to-day work of the business into strategic management.

Beyond collecting and doing the coding and the billing, these systems are designed to provide analytics, let you know how you're doing on your return on investment, and help make sure your bottom line is as productive as it can be. The other thing we're seeing is AI tools for analyzing processes and creating improvement paths and revenue cycle optimizations, all with very minimal input from users and stakeholders.

The last applicable area of healthcare applications, and this is probably the most direct usage of AI that we would recognize as healthcare, is the diagnostic space. We're already seeing organizations of all sizes start to use AI tools in their radiology reads: looking at MRIs, X-rays, and CT scans. [They're] using AI for automated measurements and calculations in lab analysis as well, doing some of the counting of cells, like white blood cell counts, or looking for pathology. We've seen big growth of AI in the space of diabetic retinopathy, where we've had tools created that can take a look at a retinal image and determine with a really high degree of success whether or not that person has diabetic retinopathy. And in some cases, those retinopathy screenings are coming before the actual diabetes diagnosis itself; they're able to determine from the eye screening that a patient had undiagnosed diabetes. That's just a great application of the kind of visually based image processing AI is able to do. And finally, in that diagnostic space, we're seeing a lot of AI technology around stethoscopes and ECGs [electrocardiograms], for analyzing and processing the rhythms you get from a stethoscope, either in a phonocardiogram or in one-, three-, or five-lead ECGs.

Andrew Nelson: It's really exciting to hear about some of these potential benefits of this technology. Do you think it's important to manage our expectations as we continue to incorporate AI more and more into our health systems? Or are there certain limitations that we should be mindful of as we move forward?

Jordan Berg: It's a really good point, and I think some of these things are broader than AI itself; they're applicable to any new technology we're bringing in. We're in a different era of technology for healthcare. The things we're using are a lot more software-based, a lot more process-based, and they really need to be evaluated at the system level. So we need good system-level controls for any technology we're bringing in, whether that's a new piece of accounting software, a new video platform, or some sort of AI telehealth enhancement. We need system-level approaches for integrating these technologies at the clinical level, the IT level, the security and data privacy level, and the governance level, so that when we bring these in, we have everything we need to make sure they work with everything else in our systems.

Beyond that, one of the things that's really helpful is to keep scope in mind. We are still really far off from a general-purpose AI for healthcare that can do all the jobs, where we can just give it a nebulous task and it goes off and does it. What we really need to do is create AIs with really targeted jobs, trained on very specific healthcare data related to the work we're trying to do. These are going to be a lot more likely to help, and much less likely to harm patients by hallucinating or producing biased output. And the other big thing is that we're still very far off from the day we can turn any kind of patient interaction over to an artificial intelligence tool or system without expecting a provider to actually be driving it.

Microsoft's AI solution is called Copilot, and I think that's an apt term for what AI is really good at. It's good at being there, gathering information, pointing things out, and helping to see where things could be improved or made better. But we call it "human in the loop." At this point, there are no AI applications where there isn't a human in the loop going back, checking, and verifying the data coming out of these AI systems.

Andrew Nelson: Gotcha. Are there any problems that you've seen are especially prominent in rural healthcare that AI can be especially helpful in solving or addressing?

Jordan Berg: My background's telehealth, right? And I find a really synergistic lineup between the things we look to telehealth to help us solve and the things we look to AI to solve. There's a whole lovely Venn diagram where AI and telehealth overlap. But I think the key thing AI can help with is our provider shortages. We've already talked about using AI for radiology reads, imaging, and lab results. With those massive backlogs, where we have so few providers able to do those jobs, AI can really help those providers get through the backlogs faster, so that we're able to reduce the impacts of those shortages.

The other thing AI can do is help us better triage and get patients to the right location for care the first time. Those are the patient-interactive chatbots and resources that can direct patients to the correct specialty or the correct facility. And finally, by using these tools, we can allow providers to work at the top of their scope, at the very peak of what they're able to do under their license. That includes allied health professionals: using the right care provider at the right level and giving them tools to allow them to treat patients to the maximum of their capacity. That's really what AI allows us to do; it gives us the ability to provide tools and oversight for those providers.

I think the other thing that's going to be really helpful is that we all know rural healthcare has a bottom-line issue. We have challenges keeping up with billing and coding, and challenges billing at the highest, most reimbursable rate. These are all problems AI can help us solve, making sure our revenue streams come in as robustly and as quickly as they possibly can. And then again, process improvement. A lot of small facilities have not had the tools to do systematic breakdowns of their workflow processes and their revenue cycle processes. AI makes those tools really accessible, something organizations can engage with to find and mitigate inefficiencies and errors before they become costly for these small clinics.

And then I think one of the things I'm most excited about for rural health and AI is the RPM space, remote patient monitoring. We've had remote patient monitoring out in locations for years. The challenge has always been that the organizations able to run these RPM programs are medium-sized and above; it's hard for a small hospital organization to run these types of programs, with the nursing staff demands and the monitoring demands they have. AI is really good not only at looking at all the data coming in from these tools and alerting when there should be alerts, but also at trend analysis. Even trends we're not tracking now, that we may not even be aware of, AI is really good at finding in this RPM data and bringing to the attention of the people who need to act on them, with limited human intervention.
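Here is a minimal sketch of the alerting half of that idea: flag a vital-sign reading that drifts well outside a patient's own baseline. The readings and the two-standard-deviation threshold are illustrative assumptions, not a clinical rule.

```python
import numpy as np

# Toy RPM stream: daily resting heart rate (bpm) from a home device.
readings = np.array([72, 74, 71, 73, 72, 75, 74, 80, 84, 88])

# Establish the patient's baseline from the first week.
baseline = readings[:7]
mean, sd = baseline.mean(), baseline.std(ddof=1)

# Alert on later readings that drift far from that baseline.
for day, bpm in enumerate(readings[7:], start=8):
    z = (bpm - mean) / sd
    if abs(z) > 2:  # threshold is a tunable assumption
        print(f"Day {day}: {bpm} bpm (z={z:.1f}) -> alert care team")
```

Trend analysis on data no one thought to track works the same way at larger scale: the system watches every stream and surfaces only the deviations worth a nurse's attention.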

Andrew Nelson: I can see there's a lot of potential to improve the quality and availability of rural healthcare. Earlier you mentioned things as mundane as getting appointment reminders out to patients, and things as specialized as helping to diagnose a condition. You mentioned the possibility of, for example, AI or machine learning hallucinating and providing inaccurate results if it's based on faulty data. Are there guardrails that should be placed on AI usage to prevent or minimize those types of outcomes?

Jordan Berg: Let's talk about the different kinds of errors an AI can have, and then some of the guardrails we can set up to help us manage those. One type of error is what we call a hallucination, and it's just an error: a factual error where the AI simply gets it wrong. These systems are so complex that it's hard to know what they got wrong or where; you can't go back in and find the bug in the software. So that is one issue we have: they just hallucinate. And I like to say that most of the time when they do hallucinate, they're confidently incorrect. They'll tell you the wrong information and won't give you any indication that it's wrong. I'm speaking specifically of large language models and chatbots.

The other problem is the lack of transparency and explainability. With these systems, it's almost impossible to know how they arrive at a decision. You can't backtrack from the input to the output, because the data takes so many different steps between when it goes in and when it comes out, with different weights being adjusted constantly as these models are trained. The other issue you alluded to is data bias. All data has bias. It exists when we collect it, it exists when we enter it, and it exists when we query it. The very fact that human beings collected the data means there's going to be bias in the data.

We can't get rid of the bias in the data; all we can do is be aware of the bias that exists in the data sets we have and attempt to mitigate it. In terms of guardrails, a lot of this comes down to the training of AI data sets. For most healthcare and medical applications, we shouldn't be training on live patient data; we shouldn't be using AI that continuously retrains on the data as it comes in. That's too unstable. For most healthcare applications, what we're going to see is that data sets are created, the AI is trained on those data sets, and then that algorithm is locked into place, so that when we give it an input, we get an expected output, with less variability. So it's about making sure we're using well-trained AI solutions with supervised training, where we've looked at the data sets going in, so we can make sure we're getting good outputs.
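A minimal sketch of that "train, then lock" pattern, using a toy scikit-learn model: the model is fit once on a vetted training set, frozen to disk, and then only ever loaded for inference. Live patient data never retrains it. The features and labels are made up for illustration.

```python
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical vetted, de-identified training set (features, labels).
X_train = np.array([[0.1, 1.2], [0.9, 0.3], [0.2, 1.1], [0.8, 0.2]])
y_train = np.array([0, 1, 0, 1])

# Train once, under supervision, then freeze the algorithm.
model = LogisticRegression().fit(X_train, y_train)
joblib.dump(model, "locked_model.joblib")

# In the clinic we only load and predict; there is no .fit() on live data,
# so the same input always yields the same expected output.
deployed = joblib.load("locked_model.joblib")
print(deployed.predict_proba([[0.15, 1.0]]))
```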

Another term that's helpful is "explainable AI." Now, we can't have full transparency; we don't know what's happening inside the neural networks as these AIs process the data. But one thing we can do is ask the AI to explain what factors contributed to its decision. It's good at surfacing some of that information, telling us, "Okay, this is the diagnosis I'm making for this patient. These are the factors that are key in making this diagnosis, and here's why. Here's why I think this treatment will work." The other thing we need to do is make sure we have quality control in the data sets we're creating. Again, if it's garbage in, it's going to be garbage out. In a healthcare situation, the priority over everything is patient safety. Most rural healthcare organizations aren't going to be trying brand-new innovative models of care, feeding new data sets into an AI algorithm and having it put out information; we'll probably be accessing these systems through existing platforms like our EHRs [electronic health records] and our business systems. And so we'll be able to vet those through those larger organizations.
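For a simple model, one form of explainability can be computed directly: each factor's weight times its value shows how much it pushed the decision. The factor names and numbers below are hypothetical, and real clinical tools use more elaborate attribution methods, but the output looks similar.

```python
# Hypothetical risk-model weights and one patient's (normalized) values.
weights = {"hba1c": 1.8, "age": 0.4, "bmi": 0.7}
patient = {"hba1c": 1.2, "age": 0.5, "bmi": 0.9}

# Contribution of each factor to the overall risk score.
contributions = {f: weights[f] * patient[f] for f in weights}
for factor, score in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{factor}: contributed {score:+.2f} to the risk score")
```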

Andrew Nelson: Could you expand a little bit on what about a data set could be problematic?

Jordan Berg: There are issues with data sets in a couple of different ways. Sometimes it's issues with how the data sets were collected, how they were processed, and then issues with how we're applying those data sets. We'll use mortgage lending as an example. We have large data sets on mortgages, and if you take that data at face value, it would appear that there are massive racial discrepancies in who gets approved for loans. That's because of the policies that were behind who got loans and how loans were processed in this country. How that data was collected, how it was entered, the biases of the person collecting and entering that information: that's one area.

We also can have sampling bias, where a particular population is oversampled or undersampled. Sampling bias can bite us twice: not just when we collect the data, but also when we apply it. I work with the Alaska Native Tribal Health Consortium up here in Anchorage, Alaska, where we have a large Alaska Native population. When we look at healthcare data all combined together, Alaska Native data is vastly undersampled. And if we tried to apply the standards and demographic information of a national sample at our regional and local level, it wouldn't match our patient population. We have a different mix of comorbidities, different complications, and a lot of transportation issues and other factors that don't exist elsewhere. So we have to make sure the training set we're using to train the AI is similar enough to the actual patient population it will be used on when we're delivering care.
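That comparison can be made explicit as a pre-deployment check: does the training sample look like the population the clinic actually serves? The percentages and the ten-point tolerance below are invented for illustration.

```python
# Hypothetical demographic shares: national training sample vs. local patients.
training_sample = {"Alaska Native": 0.01, "Other": 0.99}
local_population = {"Alaska Native": 0.45, "Other": 0.55}

for group in training_sample:
    gap = abs(training_sample[group] - local_population[group])
    if gap > 0.10:  # tolerance is an assumption
        print(f"{group}: {training_sample[group]:.0%} of training data but "
              f"{local_population[group]:.0%} of patients -> re-weight or retrain")
```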

And then there are always transcription errors and data-entry errors, so the data isn't clean once it's in the system. This is particularly true for large language models. They're good at taking in large amounts of unstructured data. This isn't data that sits in a database somewhere; this is data that exists on the web, data that exists in a variety of textbooks written over many years.

So being able to compile all that together, keep the good stuff, and throw out the bad stuff, when those data sets aren't structured and aren't regulated, is one of the challenges we deal with there. How we collect the data, who enters it and with what biases, which populations we're measuring, and which populations we're applying it to: those are some of the key things we need to look at when we're looking at bias in data sets.

Andrew Nelson: You mentioned telehealth earlier, and that's a technology that has a little bit of a head start in terms of widespread usage, I think. How can AI serve geographically isolated patients, or patients in rural areas? And what are some factors that might limit its effectiveness?

Jordan Berg: So, we're really keyed in on connectivity. The three-legged stool we have to build our digital health equity on right now is access to provider services, access to affordable broadband, and access to devices. If any of those legs is missing, we're not able to deliver virtual healthcare, especially for geographically isolated patients.

We talked a little bit about remote patient monitoring and the ability to push those remote patient monitoring programs further out into more and more rural areas. What that comes down to isn't getting the kits themselves out to these locations. The big challenge AI can help us solve is optimizing our connectivity: making sure we have cellular, satellite, or broadband connectivity into the locations we're trying to collect data from.

If we can get these devices out there, AI can do a lot to help us process the information they generate. Another key area is better aftercare and education. A lot of times we have rural patients who go back home, and they didn't travel to their appointment with their whole family, or even with their primary caregiver. How can we make sure we're communicating all this information back to the patient and giving them the best tools? That means creating custom education resources that are available not just to the patient but to their caregivers, in the language, and at the reading level, they can access and get the most out of.

One of the things I think is going to be most impactful for rural healthcare is enabling rural hospital and clinic facilities to provide two key services. One is tele-ICU: using the hospital beds in these small facilities to deliver ICU care, monitoring vital signs, heart rate, oxygen levels, all of that, with another facility able to monitor those patients and alert on critical incidents. We've also seen AI being used for telemonitoring and fall detection. [This creates], in every rural healthcare center across the country, the ability to deliver ICU levels of care from these rural facilities and to perform those interventions.

The other thing I think is really important, and that AI has the potential to help us deliver, is tools for triaging and supporting behavioral health as it shows up in our emergency rooms. We have emergency services and emergency rooms seeing patients who show up in crisis all the time: a suicidal patient, or a patient suffering from some sort of behavioral health episode. We can create AI resources and AI trainings to train up providers so they can address these issues in a standardized and accessible way. These are all ways I think AI can help more and more in rural healthcare, at the very edges of rural health.

Andrew Nelson: Using these new applications of AI technology necessarily involves putting more and more patient information into these systems. Do you feel like this raises any new concerns about risks to patient privacy and confidentiality?

Jordan Berg: Yes and no. Our patient data sets are at risk more from the challenges we face as small healthcare organizations trying to manage secure data with varying levels of support from local or regional IT companies and staff. The challenges added by AI are challenges that already exist for these small rural healthcare facilities: lack of security and IT support at the location. We've seen a rise of contracted IT services that are more interested in mitigating risk than in actually using the data we have to deliver care. We have to manage risk, we have to have a plan for risk, but one size does not fit all for rural health.

And so having organizations that are able to address the needs of these healthcare organizations at an individual level [is] important. One common misconception is the idea that the data we're generating, or the data we're giving to the AI, is going in to train the AI. For most healthcare applications, that's a misconception. You are providing data to the algorithm to make a decision, but in most cases that data is not added to the training set; it does not live on in that large data set to be used for future AI programs. The algorithms that healthcare AI uses are set and already trained before they get to you.

Now, there may be new AI versions made available to your organization, but in most cases your healthcare data should not be contributing to that larger healthcare AI data set. It should just be an input you provide to the algorithm to get an output. That said, will a lot of the processing and a lot of the information be flowing through some sort of cloud infrastructure? Absolutely. There's no way we can deliver these kinds of robust processing solutions at a clinical level without using cloud solutions. There shouldn't be a large amount of patient data stored offsite in some sort of vendor cloud, but you do need to make sure you're following all the standard guidelines you would for any kind of offsite cloud storage, which includes a business associate agreement and following all the HIPAA protocols. So yes, there is a risk to patient privacy and confidentiality, but it's a risk we're already gearing up for and trying to face just by living in a modern electronic health record world.

Andrew Nelson: It sounds like it just kind of comes with the territory. But there are obviously many positive results we're seeing, and will no doubt continue to see, from using these tools. Are there any new rules or regulations that have been developed, or that will probably be developed in the future, to regulate these applications?

Jordan Berg: In the U.S., there are a lot of states taking a stab at creating regulations. California is an early leader in the space, and a few other states have state-level regulations around AI. But even in those states that do, it's pretty high-level and pretty general at this point. From the federal government, what we have is the executive order on AI. That gives some framework around what AI should and shouldn't do, but it is not legislation; it's more the best practices the federal government will follow and how the federal government intends to approach AI. Some of those [are] guidances around safety and security, how the federal government will use AI internally for its processes, and the desire to use AI to promote innovation, reduce impacts on the workforce, and promote equity and civil rights. Regulation is required for this technology to be widely used at its most effective levels. If we're pursuing the day when we have AI tools that are able to help us with decisions, regulation will have to be in place before those can be rolled out to clinical applications. So it has to happen, and it hasn't happened yet. It's early days. It's the Wild West of AI, and regulation has not caught up, which we see happen a lot with a variety of technologies. But the challenge is that AI technology is moving so quickly that by the time regulation catches up, there may be a whole host of other unforeseen challenges that need to be addressed as well. It's going to be a moving target when we start to get those regulations in place.

Andrew Nelson: We've been talking about how beneficial AI technology can be to a workforce of limited size, but inevitably the role of actual human beings in providing a patient's care is being reduced to some extent. How can we ensure that quality of care isn't being compromised?

Jordan Berg: I think one of the things we learned in COVID was that technology can either improve the provider-patient relationship or undermine it, and a lot of that has to do with how the provider approaches it and how the provider approaches the patient. We've all had visits where the provider was focused on the notes they were taking, or focused on the computer, and didn't have the time, or seemed not to have the time, to spend with the patient. If we use these tools, the documentation tools, the translation tools, the decision-support tools, effectively, the relationship building with the patient should be improved, not reduced. The provider will actually have more time to ask questions of the patient instead of having to document the visit as they go. Results will come back quicker from the imaging we're doing, so we can have an experience where we take a retinopathy exam and get the results back immediately, or we do blood work right before a visit and are actually able to go in and discuss the results of that blood work or urinalysis, or what have you, right in the visit.

So it has the potential to improve the individual experience. The other thing we've not talked about, and not to make this completely bullish, is that one of our challenges is simply training enough people to fill the roles we need, and training people to engage with these new tools. That's something AI has a massive amount of potential to do: help create the curriculum and standardized training approaches, whether that's virtualized training in virtual reality or augmented reality, or creating customized training materials for students, so that we're creating as many healthcare providers as we can, at the best and most consistent level of training we can possibly give them. If you're a provider and you're thinking, "AI is going to solve patient interactions for me," I don't think that's the right approach. But I do think that if the tools are used correctly, and if we make a conscious decision to keep the patient at the center of the technology and not let the technology be at the center of the experience, we'll have improved relationships between patients and providers.

Andrew Nelson: Sure. That is an interesting point, that more of the workload is being shifted off the human provider's shoulders. And so patients might well find that the perceived quality of their interactions with the provider is actually improved, because the provider isn't trying to take notes and is able to focus more on that interaction.

Jordan Berg: Yeah. And that's going to work best when the technology is as transparent as we can make it. If AI is going to change the way we deliver healthcare, we have to see past the flash to the more practical uses of everyday technology. That's the ability to process images, the ability to improve our billing and our coding, the ability to provide handouts and services in the language of the people we're interacting with. Those are the kinds of practical things AI can do starting now; the science fiction ideas we have in our heads may be holding us back from seeing the actual good these products can provide. So if we can let go of the science fiction fantasy, engage with the actual work AI can do, and limit it to those jobs initially, I think there are things that can be done right now to improve our processes.

Andrew Nelson: You've been listening to Exploring Rural Health, a podcast from RHIhub. In this episode, we spoke with Jordan Berg, Principal Investigator of the National Telehealth Technology Assessment Resource Center. Look in our show notes for more information about his work, and visit ruralhealthinfo.org for all things pertaining to rural health.