Smarter health: Regulating AI in health care

Linda Rider


Health care is heavily regulated. But can the FDA effectively regulate AI in health care?

“Artificial intelligence can have a significant positive impact on public health,” the FDA’s Dr. Matthew Diamond says. “But it’s important to remember that, like any tools, AI-enabled devices need to be developed and used appropriately.”

That’s Dr. Matthew Diamond, head of digital health at the FDA. Does the agency have the expertise to create the right guardrails around AI?

“We’re starting to learn how to regulate this space. … I don’t know that it’s particularly robust yet,” Dr. Kedar Mate says. “But we need to learn how to regulate the space.”

Today, On Point: Regulating AI in health care. It’s episode three of our special series Smarter health: Artificial intelligence and the future of American health care.

Guests

Elisabeth Rosenthal, editor-in-chief of Kaiser Health News. Author of “An American Sickness.” (@RosenthalHealth)

Finale Doshi-Velez, professor of computer science at Harvard University. Head of the Data to Actionable Knowledge Lab (DtAK) at Harvard Computer Science.

Yiannos Tolias, lawyer at the European Commission who worked on the team that developed the EU’s AI regulation proposals. Senior global fellow at the NYU School of Law, researching liability for damages caused by AI systems. (@Yanos75261842)

Also Featured

Dr. Matthew Diamond, chief medical officer at the FDA’s Digital Health Center of Excellence.

Nathan Gurgel, director of enterprise imaging product marketing at FUJIFILM Healthcare Americas Corporation.

Dr. Kedar Mate, CEO of the Institute for Healthcare Improvement. (@KedarMate)

Part I

MEGHNA CHAKRABARTI: Episode three: The regulators. Over the four months and dozens of interviews that went into this series, one thing became clear, because just about everyone said it to us. Artificial intelligence has enormous potential to improve health care, if a lot of things don’t go enormously wrong.

Doctors, scientists, programmers, advocates: they all talked to us about the need to, quote, mitigate the risks; to create comprehensive standards for evaluating whether AI tools are even doing what they claim to do; to avoid what could easily go wrong. In short, to regulate and put up guardrails on how AI is used in health care.

For now, the task of creating those guardrails falls to the Food and Drug Administration. Dr. Elisabeth Rosenthal is editor in chief at Kaiser Health News. Dr. Rosenthal, welcome back to On Point.

DR. ELISABETH ROSENTHAL: Thanks for having me.

CHAKRABARTI: So let’s get right to it. Do you think, Dr. Rosenthal, that the FDA, as it is now, can effectively regulate artificial intelligence algorithms in health care?

ROSENTHAL: Well, it’s scrambling to keep up with the explosion of algorithms. And the problem I see is that the explosion is huge. It’s mostly driven by startups and venture capital looking for profit, with a lot of promises but very little question about: How is this going to be used? So what companies try to do is just get their stuff approved by the FDA, so they can get it out into the market. And then how it’s used in the market is all over the place. AI has enormous potential, but also enormous potential for misuse, and poor use, and to substitute for good health care.

CHAKRABARTI: Okay. So FDA is well aware of that explosion in the use and potential of AI in health care. We spoke with Dr. Matthew Diamond, who’s the chief medical officer of the Digital Health Center of Excellence at FDA. And we’re going to hear quite a few clips from my interview with him over the course of today’s program. We spoke with him late last month, and he talked about a significant challenge for the FDA in regulating AI.

DR. MATTHEW DIAMOND: It’s important to appreciate that the current regulatory framework that we have right now for medical devices was designed for more of a hardware-based world. So we’re seeing a rapid growth of AI-enabled products, and we have taken an approach to explore what an ideal regulatory paradigm would look like to be in sync with the natural lifecycle of medical device software in general. And as you mentioned, AI specifically.

CHAKRABARTI: Dr. Rosenthal, I mean, just to bring it down to a very basic level, FDA regulates drugs and devices. The regulatory schemes for both are different because drugs are different than devices. It seems as if FDA is going down the track of seeing software as a device, but do you think it has the expertise in place to even do that effectively?

ROSENTHAL: Well, it’s not what it was set up to do. Remember when the FDA started regulating devices, it was for things like tongue depressors, you know, and then it moved on to defibrillators and things like that. But, you know, the software expertise is out there in techland and in tech believers. And so it’s very hard to regulate.

And much of the AI stuff that’s getting approved is approved through something called the 510(k) pathway, which means you just have to show that the device, in this case an AI program or an AI enabled device, is similar to something that’s already on the market. And so you get a kind of copycat approval.

And what is it similar to? In some cases, one that wasn’t AI-enabled. That appears to be the track. And then what they ask for subsequently is real-world evidence that it’s working. The FDA has not been good historically, in drugs or devices, at following up and demanding the real-world evidence from companies. And frankly, once companies have something out there in the market, they don’t really want evidence that maybe it doesn’t work as well as they thought originally. So they’re not very good at making the effort to collect it, because it’s costly.

CHAKRABARTI: You know, from my layperson’s perspective here, one of the biggest challenges that I see is that the world of software development, outside of health care, is a world where, for a lot of good reasons, the software is continuously being developed as it’s in the market. What’s the phrase that came out of Silicon Valley? Perpetual beta. Right? We’re all using software that gets literally updated every day. How many times do I have to do that on my phone? I can’t tell you.

But in health care, it’s very, very different. The risks of that constant development can be considerable, because you’re talking about the care of patients here. Do you have a sense that the FDA has a framework in mind, or any experience with that kind of paradigm, where it’s not just, you know, a tool that they have to give preclearance for, and then the machine gets updated two years later and then they give clearance for that too? It seems like a completely different world.

ROSENTHAL: Yes, it is. And they announced last September a kind of framework for looking at these kinds of things and asked for comment. And when you look at the comments, they’re mostly from companies developing these AI programs who kind of want the oversight minimized. It was a little bit like, trust us, make it easy to update. And you know, I can tell you, for example, my car automatically updates its software. Each time it updates, I can’t find the windshield wipers. You know, that’s not good.

So there’s tremendous potential for good in AI, but also tremendous potential for confusion. And I think another issue is that the goal of many of these new AI products is to, quote-unquote, make health care cheaper. So, for example, one recent product is an AI-enabled echocardiogram. You don’t need a doctor to do it; you could have a nurse or a layperson do it. Well, I’m sorry, there are enough cardiologists in the United States that everyone should be able to get a cardiologist doing their echocardiogram.

We just have a very dysfunctional health care system where that’s not the case. So, you know, AI may deliver good health care, but not quite as good as a physician in some cases. In other cases, it claims to do better. You know, it can detect polyps on a colonoscopy better than a physician. But I guess the question is, are the things that it’s detecting clinically significant or just things? And so these questions are so fraught. So, you know, I’m all in for a hybrid approach that combines a real person and AI. But so many times the claims are this is going to replace a person. And I think that’s not good.

CHAKRABARTI: Yeah, that’s actually going to be one of the centers of our focus in our fourth and final episode in this series. But you know, AI and health care regulation seem to me to be the perfect distillation of a constant challenge that regulators have: technology is always going to outpace the current regulatory framework. That doesn’t seem to me to be a terrible thing.

That’s just what it is. But in health care, you don’t really want the gap to be too big, because in that gap are the lives of patients. And, you know, we’ve spoken to people. Glenn Cohen at Harvard Law School was with us last week, and he said he sees a problem in that the FDA would never even see the vast majority of algorithms that could potentially be used in health care.

Because they would be the kinds of things that hospitals could just implement without FDA approval. And he told us that FDA just isn’t set up to be a software-first kind of regulator. Now, Dr. Matthew Diamond at FDA, when we talked to him, he actually acknowledged that. And here’s what he said.

DR. MATTHEW DIAMOND: What we have found is that we can’t move to a really more modern regulatory framework, one that would truly be fit for purpose for modern day software technologies, without changes in federal law. You know, there is an increasing realization that if this is not addressed, there will be some critical regulatory hurdles in the digital health space in the years to come.

CHAKRABARTI: Dr. Rosenthal, we have about 30 seconds before our first break, but just your quick response to that?

ROSENTHAL: Well, I think there is a big expertise divide. You know, the people who develop these software algorithms tend to be tech people and not in medicine. And the FDA doesn’t have these tech people on board because the money is all in the industry, not in the regulatory space.

CHAKRABARTI: Well, when we come back, we’re going to talk a little bit more about the guidelines or the beginnings of guidelines that the FDA has put out. And how really what’s needed more deeply here is maybe a different kind of mindset, a new regulatory approach when it comes to AI and health care. What would that mindset need to include?

Part II

CHAKRABARTI: Today, we’re talking about regulation. Health care is already a heavily regulated industry. But do we have the right thinking, the right frameworks, the right capacity in place at the level of state and federal government to adequately regulate the kinds of changes that artificial intelligence could bring to health care? Dr. Kedar Mate is CEO of the nonprofit Institute for Healthcare Improvement. And here’s what he had to say.

DR. KEDAR MATE: We need regulatory agencies to help ensure that our technology creators, and our providers and our payers are disclosing the uses of AI and helping patients understand them. I absolutely believe that we need to have this space developed, and yet I don’t think we have the muscle yet built to do that.

CHAKRABARTI: I’m joined today by Dr. Elisabeth Rosenthal. She’s editor in chief at Kaiser Health News. And joining us now is Professor Finale Doshi-Velez. She’s professor of computer science at Harvard University. Professor Doshi-Velez, welcome to you.

FINALE DOSHI-VELEZ: It’s a pleasure to be here.

CHAKRABARTI: I’d like to actually start with an example when talking about the kind of mindset that you think needs to come in, or evolve, in regulation when it comes to AI and health care. And this example comes from Dr. Ziad Obermeyer, who’s out in California. He told us in a previous episode about something interesting: his group had done a study on a family of algorithms that was being used to examine health records for hundreds of millions of people.

And they found out that the algorithm was supposed to evaluate who was going to get sick, but the way it was doing that was actually by predicting who was going to cost the health care system the most. So it was actually answering a different question entirely, and no one really looked at that until his group did this external analysis. So I wonder what that tells you about the kinds of thinking that go into developing algorithms, and whether regulators recognize that thinking?

DOSHI-VELEZ: Yeah, it’s such an important question. And the example you gave is perfect, because many times we just think about the model, but there’s an entire system that goes into the model. There’s the inputs that are used to train the model, as you’re saying, and many times we don’t have a measure of health. What does it mean to be healthy? So we stick in something else, like cost. Clearly, someone who’s using the system a lot, costing the system a lot, you know, they’re sick, and that’s true.

But there’s a lot of other sick people who, for whatever reason, are not getting access to care and are not showing up. So I think the first step there is really transparency. If we knew what our algorithms were really trained to predict, we might say, hey, there might be some problems here. One other thing that I’ll bring up in terms of mindset is how people use these algorithms, because the algorithms don’t act in a void. Once the recommendation comes out, how people use it, whether they over-rely on it, is another really important systems issue, right? The algorithm isn’t treating the patient; the doctor is using the algorithm.
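To make the proxy-label problem concrete, here is a minimal, hypothetical Python sketch (all variable names and numbers are invented for illustration): when illness is drawn identically for two groups but access to care is not, ranking patients by predicted cost systematically under-flags the low-access group.

```python
# Hypothetical sketch of the proxy-label problem: a "risk score" that is
# really predicted *cost* under-flags sick patients who use less care.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
severity = rng.normal(size=n)              # true illness (same for everyone)
access = rng.binomial(1, 0.5, size=n)      # 1 = good access to care
# Observed cost tracks illness, but more weakly for low-access patients,
# who are billed for less care at the same level of sickness.
cost = severity * (0.4 + 0.6 * access) + rng.normal(scale=0.3, size=n)

flagged = np.argsort(cost)[-1000:]         # top 10% "risk" = highest cost
print("good-access share among flagged:", access[flagged].mean())
print("good-access share overall:      ", access.mean())
# The flagged group over-represents high-access patients even though
# severity was drawn identically for both groups.
```

Auditing what the label actually is, as Obermeyer’s group did, is what surfaces this kind of mismatch.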

CHAKRABARTI: Okay. So systems issue here. … A systems mindset that it sounds like you’re calling for that needs to be integrated into regulation. But tell me a little bit more about what that system mindset looks like.

DOSHI-VELEZ: Exactly. So we’ve done some studies in our group and many other people have done similar studies that show that if you give people some information, a recommendation, they’re busy and they’re just going to follow the recommendation. Hey, that drug looks about right. Great, let’s go for it. And they’ll even say the algorithm is fantastic. They’re like, this is so useful, it’s reducing my work.

We’ve done a study where we gave bad recommendations and people didn’t notice because, you know, they were just going through and doing the study. And it’s really important to make sure that when we put a system out there and say, oh, but of course, the doctor will catch any issues, they may not because they may be really busy.

CHAKRABARTI: Okay. So Dr. Rosenthal, respond to that. And both of you, please correct me if I say anything that’s a little bit off base. But it sounds to me that the established methods of developing a drug, let’s say, or even building a medical device, involve a way of thinking that doesn’t 100% overlap with software development. Not 100%. And is that a problem, Dr. Rosenthal?

ROSENTHAL: Well, I think it is because most drugs are designed with the disease in mind, not necessarily to save money. I get pitches for AI stuff in medicine every day. Look at my great startup. And most of what they’re claiming is that it will save money. And I think that’s the wrong metric to use, but that’s the common metric that’s used now because most of these devices and most of these AI programs come out of the business space, not the medical space.

And I think many of them are claiming you don’t really need the doctor to look and see if it’s right or not. And I’ll say I haven’t practiced medicine in many years. But, you know, diagnosis is very holistic. You can check all the boxes for one diagnosis, look at a patient and say, no, that’s not the right one.

CHAKRABARTI: Hmm. Professor Doshi-Velez, did you want to respond to that?

DOSHI-VELEZ: I think that’s a great point. And goes back to the point that you made earlier, that we really need doctors in the loop. These are not replacements.

CHAKRABARTI: The FDA in 2019 put out a discussion paper on artificial intelligence and machine learning. And in a sense, they have offered a kind of early, initial flow for decision making at FDA on how to regulate what they call software as a medical device. And the first part of the flow, I’m looking at it right now, is actually determining whether the culture of quality and organizational excellence of the company developing the AI reaches some kind of standard that FDA wants. In other words, do they have good machine learning practices? And as the computer scientist at the table, Professor Doshi-Velez, I’m wondering what you think about that.

DOSHI-VELEZ: I think that’s critical. I think ultimately there’s a lot of questions that you would want to ask of a company as they go through developing these devices or software as medical devices. I think the good news is that there are procurement checklists that are being made. Canada has an AI directive. World Economic Forum recently put out a set of guidelines, and these basically go through all the questions you should ask a company when you’re thinking about using an AI device. And they’re quite comprehensive.

CHAKRABARTI: And who would ask those questions?

DOSHI-VELEZ: So in this case, it’s: if you’re someone who’s buying AI, say the public sector buying an AI, what would you consider?

CHAKRABARTI: We wanted to understand a little bit more about what the process is right now at FDA. It’s still under development for sure, but at least some artificial intelligence programs or platforms have received FDA approval. So we reached out to a company that’s been through the process and spoke with Nathan Gurgel. He is director of enterprise imaging product marketing at Fujifilm Healthcare Americas Corporation.

NATHAN GURGEL: I look at it as kind of like autopilot on an airliner. It probably could land the plane, but we as humans, and the FAA, feel more comfortable having a pilot. It’s the same way for AI and imaging. The FDA, you know, really has very specific guidelines about being able to show efficacy within the AI and making sure that the radiologists are really the ones that are in charge.

CHAKRABARTI: So you might be old enough, like me, to think of Fujifilm as a photography and imaging company, which in fact it is. And Fuji is actually taking that imaging expertise and applying it pretty aggressively to AI in health care. They’ve developed a platform, called REiLI, that they say enables AI imaging algorithms to be used more effectively by radiologists and cardiologists, and the FDA certified it last year. Gurgel told us that the process of getting that FDA certification actually began at Fujifilm: the company did its own deep review of current FDA guidelines to evaluate its product, and then went through a pre-certification process with FDA.

GURGEL: You can actually meet with them and say, this is what our understanding is of the guidance and how we’re interpreting that. And then you can get feedback from them to say, Yes, you’re interpreting that. Or maybe we want to see something a little bit different within some of your study or your evaluation process. And so that gives you some confidence before you do the actual submission.

CHAKRABARTI: Gurgel said the process was beneficial for Fujifilm and it led to certification. But he also said there’s still a lot for the FDA to learn about the technology it’s tasked with regulating. In particular, the FDA needs to increase its technical understanding of how AI works to process and identify findings in imaging software.

GURGEL: I do feel like in that area that is a learning process for the FDA of understanding what that entails and how that can potentially influence the end users, and in our case would be the radiologist within their analysis of the imaging.

CHAKRABARTI: Now, Gurgel also told us that Fujifilm, of course, is a global company. And so that means they have experience with AI regulations in several different countries, making it easier for them to bring AI products to market.

GURGEL: We have it in use right now within Japan, but when we are bringing it into the U.S., we’re required to go through reader studies. So we have radiologists take a look at that. But really what they are doing is proving the efficacy of that algorithm and making sure it provides and is meeting the needs of the radiology, and the radiology user. And making sure that when we bring it to the U.S. that it also is trained and is useful within the patient population within the U.S.

CHAKRABARTI: Now, another important distinction: Gurgel points out that right now FDA regulates static algorithms, algorithms that don’t automatically update with new information. The agency is working on a new regulatory framework for adaptive ones, and Gurgel said FDA does need to continue to develop guidelines for those.

GURGEL: Is there ever going to be the ability for these medical processing algorithms to update themselves? And where is the oversight for that? So as they go through and make changes and they hopefully improve themselves. Do the radiologists still agree with that? Are there, you know, still the same efficacy that was brought forward when the algorithm was first introduced into the market? So I think that’s the big question mark at this point, is how and when do we get to that automatic machine learning or deep learning?
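One way to picture the static, or locked, regime Gurgel describes is version-locking: the cleared model is frozen, and any silent update is detectable before the software is used clinically. A minimal sketch, assuming model weights stored as a simple dict (this is an illustration, not FDA process):

```python
# Hedged sketch: fingerprint a cleared model's exact weights so that any
# silent self-update is caught before the model is used clinically.
import hashlib
import json

def model_fingerprint(weights: dict) -> str:
    """Deterministic SHA-256 hash of a model's parameters."""
    blob = json.dumps({k: [float(x) for x in v] for k, v in weights.items()},
                      sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Hash recorded at the time of clearance (the weight values are invented).
CLEARED_HASH = model_fingerprint({"layer1": [0.12, -0.7], "bias": [0.01]})

def check_before_inference(current_weights: dict) -> None:
    """Refuse to run if the deployed weights differ from the cleared ones."""
    if model_fingerprint(current_weights) != CLEARED_HASH:
        raise RuntimeError("Model differs from the cleared version; "
                           "regulatory re-review required before use.")
```

Under an adaptive framework, the open question Gurgel raises is what replaces this freeze: who re-validates each self-generated update, and against what evidence.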

CHAKRABARTI: So that’s Nathan Gurgel, director of enterprise imaging product marketing at Fujifilm Healthcare Americas Corporation. Dr. Elisabeth Rosenthal, what do you hear in that process that Gurgel just described to us?

ROSENTHAL: Well, I hear the same problem the FDA has with drugs and devices generally, which is, you know, companies bring drugs and devices to the FDA, and the companies do the studies they present to the FDA. In the case of drugs, you know, the FDA convenes these expert panels. Who are going to be the expert panels for AI programs? That’s going to be a hard lift. And they haven’t said whether they’re going to have those.

And again, there’s this question of the safe and effective standard. Effective compared to what? It’s why we in the United States have a lot of drugs that are effective compared to nothing, but not effective compared to other drugs. So, you know, are we talking about more effective than a really good physician? Or more effective than a not very good physician? Or more effective than nothing? Some of these problems are endemic to the FDA’s charter, and they’re just multiplied by the complexity of AI.

CHAKRABARTI: Oh, fascinating. Professor Doshi-Velez, I see you nodding your head. Go ahead.

DOSHI-VELEZ: I think a lot of the promise of AI, well, in imaging … is to automate boring tasks, like finding uninteresting things. But when it comes to finding, you know, those polyps or those issues in the images, there’s a lot of places that don’t have great access to those experts. And so there’s a lot of potential for good if you can take someone who’s average, give them some pointers and make them excellent. But that comes back to transparency. It’s really important that we know exactly what standard this meets.

CHAKRABARTI: Transparency, indeed. But … quickly, Dr. Rosenthal, I hear both of you when you say, you know, effective compared to what? But who should be setting the guidelines to answer that question? Should that be coming from FDA? Should it be coming from the companies? I mean, it’s an important question. How do we begin to answer it?

ROSENTHAL: Well, the FDA isn’t allowed to make that decision right now. So that’s an endemic problem there. And we don’t have a good mechanism in this country to think about that, and to think about, again, appropriate use. Yes, maybe a device that’s pretty good at screening, but not as good as seeing a specialist at the Mayo Clinic, is really useful in places where you don’t have access to specialists. But that’s where transparency comes in. Do you really want to trust the companies that are making money from these devices and these programs to say how effective they are? We just don’t have a good way to measure that at the moment.

CHAKRABARTI: Okay. So then we’ve only got another minute and a half or so, Dr. Rosenthal, and I’d love to hear from you: What do you think the next steps should be? Because there’s no doubt that AI is going to continue to be developed for health care. … So what would you like to see happen in the next year or five years to help set those guardrails?

ROSENTHAL: Oh, that’s such a huge problem, because I don’t think we have the right expertise or the right agency at the moment to think about it. And particularly in our health care system, which is very disaggregated and balkanized and, you know, AI has tremendous potential for good, but it also has tremendous potential for misuse. So I think we need some really large scale thinking, maybe a different kind of agency. Maybe the FDA’s initial charter is due for rethinking. But at the moment, I just don’t think there’s a good place to do it.

Part III

CHAKRABARTI: It’s episode three of our special series, Smarter health. And today we’re talking about regulation or the new kind of framework, mindset or even agency that the United States might need to effectively regulate how AI could change American health care. I’m joined today by Professor Finale Doshi-Velez. She’s a professor of computer science at Harvard University, and she leads the Data to Actionable Knowledge group at Harvard Computer Science as well.

Now here again is Dr. Kedar Mate of the nonprofit Institute for Healthcare Improvement, and he talked with us about how regulators can use the expertise in the industry to develop guidelines to regulate.

DR. KEDAR MATE: I think some of this, by the way, can be done collaboratively with the industry. This doesn’t need to be a confrontational thing between regulatory agencies, you know, versus industry.

I think actually industry is setting standards today about how to build algorithms, how to build bias-free algorithms, how to build transparency into a process, how to build provider disclosure, etc. And a lot of that can be shared with the regulatory agencies to help power the first set of standards and write the regulatory rules around the industry.

CHAKRABARTI: Professor Doshi-Velez, you know, I wonder if even thinking of this as how do we build regulation is maybe not the best way to think about it, because regulation to me feels very downstream. Should we, when we talked about mindset, should we be thinking more upstream?

And should really one of the purposes of government be to tell AI developers, Well, here are the requirements that we have, like the kinds of data used to train the algorithm. Or here’s what we require regarding transparency, things like that that are further upstream. Would that be a different and perhaps more effective way to look at what’s needed?

DOSHI-VELEZ: 100% agreed that having requirements earlier in the process would be super helpful. And I also would say that it needs to be a continual process. Because these systems are not going to be perfect the first time. They’re going to need to be updated. And we’ve talked about the algorithm gathering data to update itself, but also the data changes under your feet.

You know, people change, processes used in medical centers change, and all of a sudden your algorithms go out of whack. So it does need to be a somewhat collaborative process of continually asking: What are your requirements? How are you going to change? What are you going to disclose so everyone else can notice? Because as was noted before, it may not be in the company’s interest, or even the purchaser’s interest, to be monitoring closely. But if certain things need to be disclosed, then at least it’s out there for the public to be able to see.
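As one concrete illustration of the continual monitoring Doshi-Velez describes, a deployed system could routinely compare incoming patient data against its training cohort and flag distribution shift for human review. A hedged sketch on synthetic data (the feature, cohorts, and alert threshold are all assumptions):

```python
# Hypothetical post-market drift check: compare a live feature's
# distribution against the training cohort and flag shifts for review.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_ages = rng.normal(55, 12, size=5000)   # cohort the model was trained on
live_ages = rng.normal(62, 12, size=500)     # this month's incoming patients

stat, p_value = ks_2samp(train_ages, live_ages)
if p_value < 0.01:                           # alert threshold is a policy choice
    print(f"Input drift detected (KS statistic {stat:.2f}); "
          "model performance should be re-validated on current data.")
```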

CHAKRABARTI: We like to occasionally leave the United States and learn from examples abroad, and I’d really like to do that in this situation. So let’s hop over to Cyprus, because that’s where Yiannos Tolias is joining us from. Yiannos is the legal lead on AI liability in health care for the European Commission, and he worked on the teams that developed the EU’s AI Act and health data regulation proposals. Yiannos, welcome to you.

YIANNOS TOLIAS: Thanks a lot. That’s very nice to be here.

CHAKRABARTI: Can you first tell us why the European Commission very intentionally prioritized regulating AI?

TOLIAS: Just to mention that, of course, the views I am expressing will be personal, not necessarily representing the official position of the European Commission. But I can of course describe the regulatory frameworks that we now have in place. Basically, the story in the European Union started back in 2017, when the European Parliament, and later the European Council, which is the institution that represents all 27 member states of the EU, asked the Commission to come up with a legislative proposal on AI.

And specifically to look at the benefits and risks of AI. More specifically, they referred to issues like opacity, complexity, bias, autonomy, fundamental rights, ethics, liability. So they asked the Commission to consider and study all those and come up with a piece of legislation. And the Commission came up with the so-called AI Act … which was published as a proposal in April of last year, 2021.

It’s now with the European Parliament and the Council for adoption, and of course, maybe amendments too. And there are four main objectives that this regulation aims at. First of all, to ensure safety. Secondly, to ensure legal certainty, so the manufacturers are certain about their obligations. Thirdly, to create a single market for AI in Europe: basically, if you develop AI in France and you follow those requirements, you should be able to move it without any obstacles throughout Europe, to Sweden and Italy. And fourthly, to create a governance around AI and protect fundamental rights.

CHAKRABARTI: Okay. Can I just step in here for a moment? Because I think I’m also hearing that there was something else, perhaps even more basic. You had told us before that creating a kind of framework to regulate AI, like the one in place for pharmaceuticals in Europe, might increase the cost to develop and manufacture AI.

But I think you’ve told our producer that it creates an equal level of competition. Everyone has to fulfill the requirements. And so, therefore, it creates trust with physicians who could deploy or use it.

TOLIAS: Yeah. These are the four objectives I mentioned, so I put them a bit into four groups. First, this piece of legislation aims to create safety. So you are feeling safe as a patient, as a physician, to use it, and even not being liable using it, and even trust it. So it creates a boost of uptake of AI. Secondly, to ensure legal certainty, to boost basically innovation. … Because all the manufacturers would be at the same level playing field, in the sense that they would all be obliged to do the same, in every member state in the EU.

Because these would be, let’s call it, at the federal level. So it will be applicable to all the member states, and the member states of the EU would not be able to come up with additional requirements. So you have a set of requirements at EU level, and every startup, every company in the EU would be following those.

CHAKRABARTI: Okay. So let’s talk momentarily about one of those specific requirements. I understand that there’s a requirement now about the kind of data that algorithms get trained on: companies have to show, through the EU approval process, that they have trained their algorithms on a representative data set, one that accurately represents the patient population across Europe.

TOLIAS: Yes, exactly. There are different obligations in the AI Act, one of which is the data governance, data quality obligations. And there are a series of requirements about annotation, labeling, collection of data and so on, all these issues of data, including an obligation that the training, validation and testing datasets should consider the geographical, behavioral and functional settings within which the high-risk AI system … is intended to be used. …

CHAKRABARTI: Stand by for a second, because I want to turn back to Professor Doshi-Velez. This issue brings together the data used to train algorithms, which we talked a lot about in our ethics episode, and now regulation as well. Let’s bring it back to the U.S. context. I can see the advantage of putting into place a requirement. Let’s say FDA did, and it said all AI developers have to train their algorithms on data that’s representative of the American patient population. Is that possible? Where would that data come from?

DOSHI-VELEZ: I think that ultimately has to be the goal. We don’t want populations left out, and yet currently we have populations that are left out of our datasets. I think there absolutely has to be an obligation to be clearer about who this algorithm might work well for, so that you don’t apply it incorrectly to a population that it might not work well for, or so that you test it carefully as you go. But ultimately, I think we need better data collection efforts to be able to achieve this goal.
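For instance, a regulator or purchaser could ask for a simple representativeness audit before deployment. A hypothetical sketch (the group names, reference shares, and tolerance are invented placeholders):

```python
# Illustrative representativeness check: compare a training set's
# demographic mix against reference population shares before deployment.
from collections import Counter

reference_shares = {"group_a": 0.60, "group_b": 0.19, "group_c": 0.21}
training_groups = ["group_a"] * 850 + ["group_b"] * 100 + ["group_c"] * 50

counts = Counter(training_groups)
total = sum(counts.values())
for group, target in reference_shares.items():
    observed = counts.get(group, 0) / total
    if abs(observed - target) > 0.05:      # tolerance is a policy choice
        print(f"{group}: {observed:.0%} of training data vs {target:.0%} "
              "of the population; flag for review")
```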

CHAKRABARTI: So there’s even a further upstream challenge, you’re saying, here in the United States. Well, there’s another issue that I’d like to learn how Europe is handling, and it’s one that we’ve mentioned a couple of times already: the need for transparency throughout this process, from algorithm development through regulation. And we asked Dr. Matthew Diamond at FDA about this.

And he told us that FDA has sought input from patients, for example, about what kinds of labels, what they want to know about AI tools being used in health care. And he said that transparency is critical for each stakeholder involved with the technology.

DR. MATTHEW DIAMOND: It’s crucial that the appropriate information about a device, and that includes its intended use, how it was developed, its performance and also when available, its logic. It’s crucial that that information is clearly communicated to stakeholders, including users and patients.

It’s important for a number of reasons. First of all, transparency allows patients, providers and caregivers to make informed decisions about the device. Secondly, that type of transparency supports proper use of the device. For example, it’s crucial for users of the device to understand whether a device is intended to assist rather than replace the judgment of the user.

Third, transparency also has an important role in promoting health equity because, for example, if you don’t understand how a device works, it may be harder to identify. Transparency fosters trust and confidence.

CHAKRABARTI: That’s Dr. Matthew Diamond at FDA. Yiannos Tolias, Europe has put in something that I’ll just refer to as a human supervision provision. What does that do, and … why is that important for the trust and transparency aspect of regulating AI?

TOLIAS: I think there is an interesting issue which was raised, of where you find the data to ensure that they are representative of the people in Europe. And this is a very good point. It was actually considered in the EU that that would be a problem. Hence why we have another piece of legislation, what is called the European Health Data Space Regulation, which was published just a couple of weeks ago, the 1st of May, actually, of this year.

Which basically creates the obligation of data holders, like a hospital, to make their data available. … And then researchers and regulators would be able to access those data in a secure environment, anonymized and so on, to train, test and validate algorithms. So basically the idea is that you bring all 27 member states, all, let’s say, hospitals or all data holders, which could also be beyond hospitals, to basically pool their data, and researchers, startups, regulators would be able to use this pool of data. So there is a new regulation on that specific issue, too.

CHAKRABARTI: … I definitely appreciate this glimpse that you’ve given us into how Europe is coming up with a new regulatory schema for AI in health care. So Yiannos Tolias, legal lead on AI liability in health care for the European Commission. Thank you so much for being with us today.

TOLIAS: Thanks a lot. It was a great pleasure to be with you.

CHAKRABARTI: Professor Doshi-Velez, we’ve got about a minute left and I have two questions for you. First of all, the one thing that we haven’t really addressed head on yet is the fact that everyone wants to move to a place where constant machine learning is one of the strengths that AI could bring to health care.

And it seems right now that the FDA is looking at things as fixed, even though they know that constant development is going to be in the future. What do we need to do to get ready for that?

DOSHI-VELEZ: I’m going to take a slightly contrary view here. I don’t think that algorithms in health care need to be learning constantly. I think we have plenty of time to roll out new versions and check new versions carefully. And that is actually super important. And what I worry about, as I said before, is not only the algorithms changing, but the data and the processes changing under our feet. And that’s why we just need, you know, post-market surveillance mechanisms.

CHAKRABARTI: Okay, that’s interesting. So then I’m going to give you ten more seconds to tell me in the next year or five years, what one thing would you like to see in place from regulators?

DOSHI-VELEZ: So as I mentioned earlier, there are some really great checklists out there that are being developed in the last year in terms of transparency. I would love to see those adopted. I think transparency is the way we’re going to get algorithms that are safe, and fair and effective.

This series is supported in part by Vertex, The Science of Possibility.
