Category: Health News

  • Yet another attempt to expand Medicaid in NC


    By Anne Blythe

    For all those waiting with bated breath to find out whether Medicaid will be expanded to nearly 600,000 more North Carolinians, take a pause.

    Republicans in the state House of Representatives are not ready to embrace the policy whole hog. Instead, there will be one more study and more planning, while the lawmakers campaign for elections in November.

    The proposal to create a legislative committee with members from both chambers that will hear a Medicaid Modernization Plan to be developed by the state Department of Health and Human Services comes out of negotiations between state House and Senate leaders over a spending plan for the coming fiscal year. This committee would come on the heels of a different study committee that met six times from February to April this year. 

    The new way forward toward embracing Medicaid expansion, according to SB 408, should:

    • Add Medicaid coverage for adults with annual incomes up to 133 percent of the federal poverty level, or slightly more than $17,000 in earnings for an individual.
    • Increase what hospitals pay for the expanded health care coverage. This means that hospitals will foot the bill for whatever share of the tab is not being picked up by the federal government.
    • Invest $1 billion in programs and treatment for the opioid, mental health and substance use crisis.
    • Include recommendations for increasing access to health care in rural areas.
    • Direct the secretary of commerce to collaborate with others to address the health care workforce shortage, amid predictions of worsening patient-to-caregiver ratios.
    • Require lawmakers to return to Raleigh no later than Dec. 15 for a final vote, if the secretary of the state Department of Health and Human Services comes back from negotiations with federal regulators with a plan that the General Assembly is happy with.

    “In December, should this go into law, there will be a vote,” Tim Moore, the Republican from Kings Mountain who’s speaker of the state House of Representatives, told the House Rules committee on Tuesday. In the past, bills to expand Medicaid have made it through the House committee hearing process only to never reach a vote on that chamber’s floor.

    This time, Moore says he’ll bring the bill to a vote in the House of Representatives.

    “The bill will either pass or the bill will either fail,” he said Tuesday. “Those who wonder whether there is some kind of trick — there wouldn’t be a bill voted on — absolutely not. The bill will be voted on and it will either pass or fail at that time on its merits. Based on what people have told me where they are on it, if the plan meets that, I think it will pass.”

    The call for further study comes after longtime advocates for Medicaid expansion have been on a rollercoaster ride, of sorts. First, they climbed the steep hill of winning over Senate leader Phil Berger (R-Eden), who stood in the way of the state Senate taking up the topic for close to a decade. That gave the advocates momentum until they hit another ascent after Rep. Donny Lambeth (R-Winston-Salem), who led a Medicaid expansion study committee this spring, said many members of the House remained a tough sell.

    “We’re not lukewarm in the House,” Lambeth told reporters in February after the first meeting of the Joint Legislative Committee on Access to Healthcare and Medicaid Expansion. “It is still rather chilly. It is a heavy lift to convince our House caucus that this is the right direction to go. Now is it impossible? No. I wouldn’t be here if I thought it was impossible.”

    Roller coaster ride

    At the start of this month, Medicaid expansion advocates thought they were getting ready for a downhill thrill ride when Berger, the longtime critic, introduced a health care omnibus bill that called for adding hundreds of thousands of low-income North Carolinians to the Medicaid rosters.

    Berger’s proposal, which passed the state Senate with all but two votes, included several provisions that did not thrill hospitals and physicians. The Berger bill would change rules for how some health care facilities are regulated and give advanced practice nurses more autonomy from the physicians who oversee them now.

    Moore has said those provisions were difficult for some House members to endorse, and they were not part of the bill the full House passed Tuesday evening 102 to 4. The Senate had gone home for the night by the time the full vote happened around 7 p.m. It would need approval from the Senate to then move to the desk of Gov. Roy Cooper, a longtime expansion advocate.

    Berger played it coy earlier Tuesday when asked about the prospects in his chamber for the newest Medicaid expansion bill proposal.

    “Let’s see what the House passes and then we’ll figure out what the Senate’s response will be,” Berger said.

    Moore, who was standing beside the Senate leader, quipped: “We’re happy with that response. We’re going to send him a good product.”

    Berger quipped back: “Not as good as ours.”

    Preview of $27.9 billion spending plan

    All of the back-and-forth between the House and the Senate over Medicaid expansion comes in the midst of the annual tussle over the state budget. While the General Assembly is dominated by Republicans in both the Senate and House of Representatives, each chamber has its own priorities and there’s always backroom dealing to see which will prevail in the final budget plan. 

    Tuesday evening Moore and Berger spoke to reporters about their $27.9 billion spending plan for the fiscal year that begins on July 1.

    Key parts of the plan:

    • Gives teachers pay raises of 4.2 percent, a number that includes raises already called for in the previous fiscal year.
    • Sets aside $1 billion for an “inflationary reserve” to help offset problems brought on by high gas prices and other problems from the current state of inflation.
    • Sets funds aside to expand the school resource officer program but did not commit to beefing up school counseling programs.
    • Dips into sales tax revenue to support transportation projects that have been underfunded.
    • Keeps $6 billion in revenue surplus, $2 billion of which will be recurring funds.
    • Sets aside funds for redeveloping legislative and administrative buildings, a Raleigh development plan that could further reshape the downtown.

    Some of the health-related spending in the plan:

    • Provides $14.8 million for mental health resources;
    • Sets aside $32 million for school safety projects;
    • Gives more support to the crisis helpline; and
    • Gives rural counties a better shot at school safety grants.

    More details about the budget will emerge Wednesday, when it is more fully explained at a Joint Appropriations Committee meeting. The lawmakers have said they plan to vote on the proposed budget by the end of this week.

    “Medicaid expansion is not in the budget,” Berger acknowledged to a room full of reporters and TV crews.

    The reason, Moore told his House members before the vote on Tuesday, was: “There are not the votes in this chamber right now to put an outright expansion on the floor.” 

    Nonetheless, Moore was able to persuade all but four members of his chamber to take the only palatable path forward for skeptical Republican members of the House. One unwritten rule Moore has followed in the past has been to only bring legislation to the floor when a majority of his Republican caucus is in favor.

    ‘Not a trick’

    Democrats, who have waited years to see progress from the Republicans on Medicaid expansion, voted for the bill that Moore shepherded through the House with hopes that it would not be the end of the discussion.

    “This is not the bill I had hoped to vote on tonight,” said Gale Adcock, a Democrat from Cary and nurse practitioner. Adcock has been an advocate of giving nurses a broader scope of practice without the oversight of physicians, a proposal that often hits a roadblock among powerful health care lobbying groups.

    “As we look at expanding access to health care we need to look at striking the balance of not just having the demand side addressed, but also the supply side,” she told the House.

    Lambeth summed up his thoughts on the House Medicaid expansion bill by recounting a trip he and his wife took to Asheville recently. They took the backroads, a detour of sorts, that allowed them to see the beauty of parts of the state they might not otherwise have seen.

    As Kody Kinsley, secretary of the state Department of Health and Human Services, develops a plan to bring back to lawmakers, he will learn from the federal government and teams of lawyers whether North Carolina can add a work requirement to the expansion rules, something that’s failed in every other state that’s proposed it. He can also figure out a parachute for the state to opt out if the federal government tries to cut back on its funding share of 90 percent for every new Medicaid expansion beneficiary.

    “I kind of view this as we’re taking a little bit of a detour,” Lambeth said. “This gives us an opportunity to go down a path — we’re taking a little bit of a detour here — to get this right.”

    Republish our articles for free, online or in print, under a Creative Commons license.


    Republish this article

    As of late 2019, we’re changing our policy about reprinting our content.

    You are free to use NC Health News content under the following conditions:

    • You can copy and paste this html tracking code into articles of ours that you use, this little snippet of code allows us to track how many people read our story.




    • Please do not reprint our stories without our bylines, and please include a live link to NC Health News under the byline, like this:

      By Jane Doe

      North Carolina Health News



    • Finally, at the bottom of the story (whether web or print), please include the text:

      North Carolina Health News is an independent, non-partisan, not-for-profit, statewide news organization dedicated to covering all things health care in North Carolina. Visit NCHN at northcarolinahealthnews.org. (on the web, this can be hyperlinked)


  • Should North Carolina operate its Medicaid oral health program as fee-for-service or transition to managed care?


    By Anne Blythe

    As lawmakers ponder whether to expand Medicaid to add some 600,000 more people to the rolls, the North Carolina Oral Health Collaborative is looking at a different aspect of the federal- and state-sponsored insurance program.

    Nearly a year ago, North Carolina transformed its Medicaid program from a fee-for-service-based plan to a system managed by private insurers.

    The oral health portion of the program, however, was not part of the Medicaid transformation. It is still managed by the state.

    Zachary Brian, director of the North Carolina Oral Health Collaborative and vice president of impact, strategy and programs at the Foundation for Health Leadership and Innovation, said recently in a telephone interview that his organization has partnered with the North Carolina Institute of Medicine and The Duke Endowment to launch the Oral Health Transformation Initiative. (Disclosure: The Duke Endowment is a NC Health News sponsor).

    In July, a task force with members from diverse vantages in oral health care delivery will begin a year-long process in which members consider whether oral health care provided through Medicaid should remain a fee-for-service program or be overseen by private insurers.

    “The traditional fee-for-service payment system incentivizes costly, more invasive procedures,” Brian contended while announcing the joint initiative.

    “Nationally, we see a movement in remodeling our health care delivery system in many ways,” Michelle Ries, associate director of the North Carolina Institute of Medicine, added in the same video announcing the initiative. “As North Carolina has moved to managed care for primary health care and behavioral health services, we believe we owe it to the consumer and provider communities to thoroughly look at the current landscape for oral health and make recommendations based on an analysis of what other states are doing and lessons learned from the rollout of Medicaid managed care so far in North Carolina.”

    Whole-body care

    For too long, many public health advocates say, oral health care has been in a silo, of sorts, the mouth separated from the body. This is increasingly out of step with the systemic “whole-body” approach being advocated for more recently.

    A look into someone’s mouth can reveal evidence of heart disease, cancer, autoimmune syndromes, viruses, diabetes and gastrointestinal problems.

    Public health advocates say that integrating oral health care with primary care could not only make many communities and populations healthier but also reduce costs. People who do not have routine access to dental care often end up in emergency rooms with toothaches or infections in the oral cavity. Those visits can be far more costly for the patient, the provider and the insurer.

    Many communities in North Carolina face challenges accessing “optimal oral health care,” according to the Oral Health Collaborative.

    Four counties in North Carolina do not have a regularly practicing dentist, according to 2020 data collected by the Cecil G. Sheps Center for Health Services Research. They’re in the northeastern tip of the state — Camden, Gates, Hyde and Tyrrell counties.

    Will more dentists participate?

    The collaborative says roughly 35 percent of the dentists in North Carolina participate in Medicaid or the Children’s Health Insurance Program, or CHIP as it’s often called.

    Dave Richard, head of Medicaid at the state Department of Health and Human Services, said his office puts that number closer to 40 percent. 

    Nonetheless, that number can pose a challenge for children and adults in need of care, often in the state’s rural reaches, public health care advocates note. Only 18 percent of adult Medicaid recipients in North Carolina use the dental care option, according to the collaborative’s statistics.

    Richard said that in 2021, the state’s fee-for-service Medicaid oral health program paid $24 million in claims for children in the CHIP program. The program paid $300 million for children ages 6 to 20 in the Medicaid program, and $104 million for adults 21 and older.

    Richard took no stance on whether it would be better to shift the oral health program to managed care or keep it as a fee-for-service program.

    Instead, he posed several questions.

    “What value add would you bring if you move to managed care?” Richard asked. He also wondered whether the state would lose or gain more dentists through such a shift.

    That’s what the task force plans to study over the next year, with hopes of delivering a report and a potential series of recommendations for a reimagined oral health care system. The goal is to give policymakers and lawmakers something to review in time to decide whether the state should make the shift before the next contracts are negotiated in 2024.

    “So often we don’t have the opportunity to really slow down and take a year, 18 months and dig in and engage with other states and engage with experts and really bring people to the table,” Stacy Warren, program officer for The Duke Endowment, said when the initiative was announced. 

    “We can’t just fund a lot of programs,” she said, although she said that’s actually happening. “We fund school-based oral health programs. We fund medical-dental integration programs, but what we’ve learned and the North Carolina Oral Health Collaborative has certainly helped teach us this over the years, is that these programs can’t exist successfully in isolation of true systems change.”


  • CDC Raises Monkeypox to a Level 2 Advisory


    “Practice enhanced precautions” might seem like a familiar directive by now, but its latest use comes from the Centers for Disease Control and Prevention (CDC), which is urging travelers to avoid infection with monkeypox.

    The CDC recently issued an “Alert – Level 2” advisory for travelers as health officials have noted an increase in the number of cases of monkeypox, which is a viral zoonosis, or virus transmitted to people from animals. Monkeypox presents with signs and symptoms that are very similar to those of smallpox, although it is clinically less severe. Level 2 is the second highest of three travel precautions, and urges people to “practice enhanced precautions” to avoid disease, although it was not linked to any particular locations.

    “Clusters of monkeypox cases have been reported in several countries internationally, outside of regions in Central and West Africa where cases are typically found,” explained Neha Alang, MD, FACP, an infectious disease specialist with the Hartford HealthCare Medical Group in Norwich. “The occurrence of cases with no direct travel to those regions, or without known links to a traveler from those regions, is unusual.”

    As of Thursday, June 9, there were 45 confirmed cases of monkeypox in the United States. Globally, there are 1,356 cases in 31 countries.

    “The CDC suggests the rise in cases across the world led to the new travel alert,” Dr. Alang said. “The travel advisory is to make travelers aware of the steps they need to take to avoid contracting monkeypox infection.”

    These steps include avoiding:

    • Close contact with sick people, including those with skin or genital lesions.
    • Contact with wild animals, alive or dead, such as rodents (rats, squirrels) and non-human primates (monkeys, apes).
    • Eating or preparing meat from wild game (bushmeat) or using products derived from wild animals from Africa (creams, lotions, powders).
    • Contact with contaminated materials that have been used by sick people – clothing, bedding or supplies used in healthcare settings – or that came into contact with infected animals.

    “The risk to the general public remains low, but people should seek medical care immediately if they develop a new, unexplained skin rash or lesions on any part of the body, whether there is fever and chills or not, and avoid contact with others,” Dr. Alang said.

    Symptoms of monkeypox include:

    • Fever and chills.
    • Exhaustion.
    • Headache.
    • Muscle aches.
    • Swollen lymph nodes.
    • Rash on face and body.

    Regarding masking to prevent monkeypox infection, Dr. Alang said the virus is entirely different from other airborne illnesses like COVID-19.

    “It is not known to linger in the air and is not transmitted during short periods of shared airspace,” she said. “Most people with monkeypox report close contact with an infectious person. While we do not know with certainty what role direct physical contact has versus the role of respiratory secretions, in cases in which people who have monkeypox have traveled on airplanes, no known cases of monkeypox occurred in the people seated around them, even on long international flights.”

    While COVID and measles are transmitted by airborne particles emitted by an infected person and inhaled by another, she said there have been no reports that monkeypox has been transmitted that way. The CDC, however, does recommend that anyone with monkeypox wear a mask around others in close spaces and during face-to-face contact.

    “In a healthcare setting, a patient with suspected or confirmed monkeypox infection should be placed in a single-person room, but special air handling is not required,” Dr. Alang said. “Any procedures that are likely to spread oral secretions, such as intubation and extubation, should be performed in an airborne infection isolation room.”

    Monkeypox lasts between two and four weeks after an incubation period of one to three weeks. It is fatal for about one in 10 infected people.


  • Mental health struggles take toll on people suffering long COVID


    Amy Weishan, 48, of Canby, Oregon, talks in the living room of her home.

    Amy Weishan, 48, of Canby, Oregon, discusses her mental health struggles while living with long COVID-19. (OHSU/Christine Torres Hicks)

    Content warning: In support of trauma-informed communications, please be aware that this content includes topics that may be activating for survivors of attempted suicide and those who have been impacted by suicide or attempted suicide. OHSU Suicide Prevention resources are available and the National Suicide Prevention Lifeline can be reached 24/7 by calling 800-273-8255.

     

    For Amy Weishan, long COVID-19 is much more than the brain fog and severe fatigue that make simple tasks seem insurmountable. It is also a constant emotional roller coaster ride that led her to see a mental health professional for the first time.

    “If you saw me right now, you wouldn’t believe my story,” said Weishan, 48, of Canby. “I don’t look like somebody who struggles every day. I don’t have a Band-Aid. My struggle is on the inside, and the daily internal struggle is really difficult. I’m always one situation away from crying and crumbling.”

    Mental health and emotional well-being are often-overlooked aspects of long COVID-19, which causes between 10 and 30 percent of those who get COVID-19 to continue experiencing myriad debilitating symptoms three months or more after their initial infection. An onslaught of physical conditions can take a toll, leading to anxiety, depression, panic attacks and other mood disorders.

    “Those who have a more serious or complicated case of long COVID-19 can experience a profound sense of helplessness,” said Jordan Anderson, D.O., an assistant professor of psychiatry and neurology in the Oregon Health & Science University School of Medicine.

    “Depression and anxiety are how the brain responds to limits brought on by a new health condition. The longer someone experiences a health condition, the more a person’s mental health can decline,” Anderson said. “Some long COVID-19 patients have not been well since 2020, and are struggling emotionally as well as physically.”

    The federal government estimates between 7.7 and 23 million Americans have long COVID. Mental health is among numerous issues listed in President Joe Biden’s April 5 memorandum, which orders the federal government to coordinate the United States’ response to the illness. And yet Anderson does not know another psychiatrist who dedicates most of their time to caring for patients with long COVID, the way he does as part of the OHSU Long COVID-19 Program.

    Psychological troubles

    Weishan and her family fell sick with COVID in July 2020, before vaccines were available and before research indicated vaccination lessens the risk of getting long COVID. She had a hard time breathing, experienced intense joint pain, and was so weak that it felt like she had just run a marathon without training beforehand. While her family recovered, Weishan still had some lingering issues. In October 2020, she tested positive again and experienced a new round of dreadful symptoms: coughing, pounding headaches, and fevers.

    The back-to-back bouts with COVID-19 led Weishan to seek refuge in her bedroom, alone. She craved rest and quiet, and became easily exhausted around other people, including her own family. Continued brain fog meant she had trouble gathering her own thoughts, let alone describing them to others. While she used to be easy-going and gregarious, Weishan became bothered by clutter and preferred solitude over company. She had to take a six-month leave of absence from work.

    Once, she forced herself to leave the house on a simple errand: going to a gas station to fill the family car. When the tank was topped off and it was time to leave, she couldn’t restart the car and instantly became overwhelmed.

    “I was sobbing, and had to call my husband,” Weishan said. “He came to the station and found I had forgotten to put the car in park. He followed me home to make sure I was OK. After that, all I could do was go to bed and sleep.”

    It nearly became too much in November 2021, when she attempted suicide.

    “I recall thinking this is a shitty thing to do, but it is better than what I feel now,” Weishan recalled. “But I didn’t feel anything. So I pushed harder until I broke the surface of my skin.”

    She stopped before causing serious harm, and went to her husband for help.

    Empathetic listening makes a big difference

    Weishan heard about the OHSU Long COVID-19 Program through an online support group. Her first appointment was in April 2021; she was later referred to a mental health specialist.

    “I was not able to get helpful support until I met Dr. Anderson at OHSU,” she explained. “It felt as though my whole body and brain had turned on me, and I did not recognize myself anymore. He helped me make sense of what was happening.”

    As a neuropsychiatrist who specializes in analyzing the ties between mental health conditions and the brain as a physical organ, Anderson explained from a biological standpoint what was going on in her body and brain, and how they were related. Weishan was prescribed medication to help dampen her intense bouts of anger and other moods.

    To date, Anderson has treated approximately 50 of the roughly 800 people who have received care through the OHSU Long COVID-19 Program. Patients who are significantly distressed by depression, anxiety or panic attacks, or who have suicidal thoughts, are referred to him. Most of his long COVID patients are struggling with mental health for the first time in their lives. And for those who have had mental health issues before, long COVID can make them worse.

    “Having long COVID alone is a new kind of trauma that is prolonged, and has not stopped for two-plus years for some people,” Anderson explained, adding that many patients struggle to adjust to their new, lower level of functioning as their body slowly fights off long COVID.

    Like Weishan, some people need to take a leave of absence from work when they’re initially struck with long COVID. However, most of Anderson’s patients have been able to return to at least part-time work after about a year of gradual recovery.

    Anderson focuses on each patient’s symptoms, and acknowledges that some may be caused by a physical condition rather than a psychological one. For example, some long COVID patients also experience Postural Orthostatic Tachycardia Syndrome, or POTS, a blood circulatory disorder that can lead to something similar to a panic attack. In those cases, he and other OHSU long COVID providers recommend simple approaches such as emphasizing hydration and consuming adequate vitamins and electrolytes, in lieu of prescribing panic attack medications.

    When appropriate, Anderson prescribes some common psychiatric medications, including propranolol or benzodiazepines for anxiety. But perhaps the biggest help he provides is being an empathetic listener who truly hears what his patients share.

    “Mental health issues worsen when patients feel invalidated,” he explains. “Their suffering can be reduced when their loved ones and health care providers are more supportive and make a sincere effort to understand what they’re experiencing.”

    To further support long COVID patients with mental health concerns, the OHSU program has organized support groups. Up to 20 patients have met virtually about once a month to share their experiences with each other. Weishan participated in two such groups, and found listening to others’ stories helped her understand that she’s not alone.

    Anderson says health providers of all specialties should be familiar with long COVID and be open to referring patients with more complex cases to a specialized clinic if needed. He also encourages providers to screen patients not only for physical symptoms, but also for their mental health.

    A unique kind of joy

    Many things have changed in the nearly two years since Weishan first fell ill with COVID-19. She still gets headaches, her sense of smell is often off, and she’s separated from her husband. She’s grieving over how long COVID-19 has changed her world.

    But not all is lost. For the past year, Weishan has found confidence while diving into a new job. She mostly works from home, where she can better manage her daily cadence. She feels good about her job, which helps health care institutions obtain insurance coverage for prescription medications, and taps into her analytical and critical thinking skills.

    “Finding my happy looks pretty different these days,” she said. “I don’t know what the future looks like, but I’m purposeful in what I do and pursue more wins every day. I keep trying, and put one foot in front of the other. Some days are easier than others.”

  • Smarter health: Regulating AI in health care

    Smarter health: Regulating AI in health care

     Find the first two episodes of the series here.

    Health care is heavily regulated. But can the FDA effectively regulate AI in health care?

    “Artificial intelligence can have a significant positive impact on public health,” the FDA’s Dr. Matthew Diamond says. “But it’s important to remember that, like any tools, AI enabled devices need to be developed and used appropriately.”

    That’s Dr. Matthew Diamond, head of digital health at the FDA. Does the agency have the expertise to create the right guardrails around AI?

    “We’re starting to learn how to regulate this space. … I don’t know that it’s particularly robust yet,” Dr. Kedar Mate says. “But we need to learn how to regulate the space.”

    Today, On Point: Regulating AI in health care. It’s episode three of our special series Smarter health: Artificial intelligence and the future of American health care.

    Guests

    Elisabeth Rosenthal, editor-in-chief of Kaiser Health News. Author of “An American Sickness.” (@RosenthalHealth)

    Finale Doshi-Velez, professor of computer science at Harvard University. Head of the Data to Actionable Knowledge Lab (DtAK) at Harvard Computer Science.

    Yiannos Tolias, lawyer at the European Commission; worked on the team that developed AI regulation proposals. Senior global fellow at the NYU School of Law, researching liability for damages caused by AI systems. (@Yanos75261842)

    Also Featured

    Dr. Matthew Diamond, chief medical officer at the FDA’s Digital Health Center of Excellence.

    Nathan Gurgel, director of enterprise imaging product marketing at FUJIFILM Healthcare Americas Corporation.

    Dr. Kedar Mate, CEO of the Institute for Healthcare Improvement. (@KedarMate)

    Part I

    MEGHNA CHAKRABARTI: Episode three: The regulators. Over the four months and dozens of interviews that went into this series, one thing became clear, because just about everyone said it to us. Artificial intelligence has enormous potential to improve health care, if a lot of things don’t go enormously wrong.

    Doctors, scientists, programmers, advocates: They all talked to us about the important need to, quote, mitigate the risks; to create comprehensive standards for evaluating whether AI tools are even doing what they claim to do; to avoid what could easily go wrong. In short, to regulate and put up guardrails on how AI is used in health care.

    For now, the task of creating those guardrails falls to the Food and Drug Administration. Dr. Elisabeth Rosenthal is editor in chief at Kaiser Health News. Dr. Rosenthal, welcome back to On Point.

    DR. ELISABETH ROSENTHAL: Thanks for having me.

    CHAKRABARTI: So let’s get right to it. Do you think, Dr. Rosenthal, that the FDA, as it is now, can effectively regulate artificial intelligence algorithms in health care?

    ROSENTHAL: Well, it’s scrambling to keep up with the explosion of algorithms. And the problem I see is that the explosion is great. It’s mostly driven by startups, venture capital, looking for profit. And with a lot of promises, but very little question about, How is this going to be used? So what the FDA does and what companies try to do is just get their stuff approved by the FDA, so they can get it out into the market. And then how it’s used in the market is all over the place. And AI has enormous potential, but enormous potential for misuse, and poor use and to substitute for good health care.

    CHAKRABARTI: Okay. So that explosion in the use and potential of AI in health care, the FDA is really aware of just that simple fact. We spoke with Dr. Matthew Diamond, who’s the chief medical officer of the Digital Health Center of Excellence at FDA. And we’re going to hear quite a few clips from my interview with him over the course of today’s program. We spoke with him late last month, and he talked about a significant challenge for the FDA in regulating AI.

    DR. MATTHEW DIAMOND: It’s important to appreciate that the current regulatory framework that we have right now for medical devices was designed for more of a hardware based world. So we’re seeing a rapid growth of AI enabled products, and we have taken an approach to explore what an ideal regulatory paradigm would look like to be in sync with the natural lifecycle of medical device software in general. And as you mentioned, AI specifically.

    CHAKRABARTI: Dr. Rosenthal, I mean, just to bring it down to a very basic level, FDA regulates drugs and devices. The regulatory schemes for both are different because drugs are different than devices. It seems as if FDA is going down the track of seeing software as a device, but do you think it has the expertise in place to even do that effectively?

    ROSENTHAL: Well, it’s not what it was set up to do. Remember when the FDA started regulating devices, it was for things like tongue depressors, you know, and then it moved on to defibrillators and things like that. But, you know, the software expertise is out there in techland and in tech believers. And so it’s very hard to regulate.

    And much of the AI stuff that’s getting approved is approved through something called the 510(k) pathway, which means you just have to show that the device, in this case an AI program or an AI enabled device, is similar to something that’s already on the market. And so you get a kind of copycat approval.

    And what is it similar to? In some cases, one that wasn’t AI enabled. That appears to be the track. And then what they ask for subsequently is real world evidence that it’s working. The FDA has not been good historically, in drugs or devices, at following up and demanding the real world evidence from companies. And frankly, once companies have something out there in the market, they don’t really want evidence that maybe it doesn’t work as well as they thought originally. So they’re not very good at making the effort to collect it, because it’s costly.

    CHAKRABARTI: You know, from my layperson’s perspective here, one of the biggest challenges that I see is that the world of software development, outside of health care, is a world where for a lot of good reasons — What’s the phrase that came out of Silicon Valley? Perpetual beta. It’s like the software is continuously being developed as it’s in the market. Right? We’re all using software that gets literally updated every day. How many times I have to do that on my phone? I can’t tell you.

    But in health care, it’s very, very different. The risks of that constant development can be considerable, because you’re talking about the care of patients here. Do you have a sense that the FDA has a framework in mind, or any experience with that kind of paradigm, where it’s not just, you know, a tool that they have to give preclearance for, and then the machine gets updated two years later and then they give clearance for that too? It seems like a completely different world.

    ROSENTHAL: Yes, it is. And they announced last September a kind of framework for looking at these kind of things and asked for comment. And when you look at the comments, they’re mostly from companies developing these AI programs who kind of want the oversight minimized. It was a little bit like, trust us, make it easy to update. And you know, I can tell you, for example, on my car, which automatically updates its software. Each time it updates, I can’t find the windshield wipers. You know, that’s not good.

    So there’s tremendous potential for good in AI, but also tremendous potential for confusion. And I think another issue is often the goals of some of these new AI products is to, quote-unquote, make health care cheaper. So, for example, one recent product is an AI enabled echocardiogram. So you don’t need a doctor to do it. You could have a nurse or a lay person to do it. Well, I’m sorry, there are enough cardiologists in the United States that everyone should be able to get a cardiologist doing their echocardiogram.

    We just have a very dysfunctional health care system where that’s not the case. So, you know, AI may deliver good health care, but not quite as good as a physician in some cases. In other cases, it claims to do better. You know, it can detect polyps on a colonoscopy better than a physician. But I guess the question is, are the things that it’s detecting clinically significant or just things? And so these questions are so fraught. So, you know, I’m all in for a hybrid approach that combines a real person and AI. But so many times the claims are this is going to replace a person. And I think that’s not good.

    CHAKRABARTI: Yeah, that’s actually going to be one of the centers of focus for our fourth and final episode in this series. But you know, AI and health care and regulation seem to me to be the perfect distillation of a constant challenge that regulators have: Technology is always going to outpace the current regulatory framework. That doesn’t seem to me to be a terrible thing.

    That’s just what it is. But in health care, you don’t really want the gap to be too big. Because in that gap, what we have are the lives of patients. And, you know, we’ve spoken to people. Glenn Cohen at Harvard Law School was with us last week and he said he sees a problem in that the vast majority of algorithms to potentially use in health care, FDA wouldn’t even ever see them.

    Because they would be the kinds of things that hospitals could just implement without FDA approval. And he told us that FDA just isn’t set up to be a software-first kind of regulator. Now, Dr. Matthew Diamond at FDA, when we talked to him, actually acknowledged that. And here’s what he said.

    DR. MATTHEW DIAMOND: What we have found is that we can’t move to a really more modern regulatory framework, one that would truly be fit for purpose for modern day software technologies, without changes in federal law. You know, there is an increasing realization that if this is not addressed, there will be some critical regulatory hurdles in the digital health space in the years to come.

    CHAKRABARTI: Dr. Rosenthal, we have about 30 seconds before our first break, but just your quick response to that?

    ROSENTHAL: Well, I think there is a big expertise divide. You know, the people who develop these software algorithms tend to be tech people and not in medicine. And the FDA doesn’t have these tech people on board because the money is all in the industry, not in the regulatory space.

    CHAKRABARTI: Well, when we come back, we’re going to talk a little bit more about the guidelines or the beginnings of guidelines that the FDA has put out. And how really what’s needed more deeply here is maybe a different kind of mindset, a new regulatory approach when it comes to AI and health care. What would that mindset need to include?

    Part II

    CHAKRABARTI: Today, we’re talking about regulation. Health care is already a heavily regulated industry. But do we have the right thinking, the right frameworks, the right capacity in place at the level of state and federal government to adequately regulate the kinds of changes that artificial intelligence could bring to health care? Dr. Kedar Mate is CEO of the nonprofit Institute for Healthcare Improvement. And here’s what he had to say.

    DR. KEDAR MATE: We need regulatory agencies to help ensure that our technology creators, and our providers and our payers are disclosing the uses of AI and helping patients understand them. I absolutely believe that we need to have this space developed, and yet I don’t think we have the muscle yet built to do that.

    CHAKRABARTI: I’m joined today by Dr. Elisabeth Rosenthal. She’s editor in chief at Kaiser Health News. And joining us now is Professor Finale Doshi-Velez. She’s professor of computer science at Harvard University. Professor Doshi-Velez, welcome to you.

    FINALE DOSHI-VELEZ: It’s a pleasure to be here.

    CHAKRABARTI: I’d like to actually start with an example when talking about the kind of mindset that you think needs to come in, or evolve, in regulation when it comes to AI and health care. And this example comes from Dr. Ziad Obermeyer, who’s out in California. He told us in a previous episode about something interesting: His group had done a study on a family of algorithms that was being used to examine health records for hundreds of millions of people.

    And they found that the algorithm was supposed to evaluate who was going to get sick, but it was actually predicting who was going to cost the health care system the most. So it was answering a different question entirely, and no one really looked at that until his group did this external analysis. So I wonder what that tells you about the kind of thinking that goes into developing algorithms, and whether regulators recognize that thinking?

    DOSHI-VELEZ: Yeah, it’s such an important question. And the example you gave is perfect. Because many times we just think about the model, but there’s an entire system that goes into the model. There’s the inputs that are used to train the model, as you’re saying, and many times we don’t have a measure of health. What does it mean to be healthy? So we stick in something else, like costly. Clearly, someone who’s using the system a lot, costing the system a lot. You know, they’re sick and that’s true.

    But there’s a lot of other sick people who, for whatever reason, are not getting access to care and are not showing up. So I think the first step there is really transparency. If we knew what our algorithms were really trained to predict, we might say, hey, there might be some problems here. One other thing that I’ll bring up in terms of mindset is how people use these algorithms, because the algorithms don’t act in a void. Once a recommendation comes out, how people use it, whether they over-rely on it, I think is another really important systems issue, right? The algorithm isn’t treating the patient; the doctor is using the algorithm.
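    The proxy-label failure described here can be sketched in a few lines. This is a hypothetical toy in Python, not the algorithm from the study; the patients, fields, and numbers are all invented for illustration.

```python
# Toy illustration (invented data): a "health risk" model that was
# actually trained to predict cost. Two patients are equally sick,
# but the one with poor access to care generated less spending.

def risk_score_by_cost(patient):
    # The deployed model effectively predicts cost, not illness.
    return patient["past_cost"]

patients = [
    {"name": "A", "illness": 7, "access": "good", "past_cost": 9000},
    {"name": "B", "illness": 7, "access": "poor", "past_cost": 3000},
]

# Rank patients for extra care by "risk".
ranked = sorted(patients, key=risk_score_by_cost, reverse=True)
print([p["name"] for p in ranked])  # ['A', 'B']
# Patient B is equally ill but ranked lower, because low access
# produced low historical cost: the model answered the wrong question.
```

    Auditing what the training label actually was, as the external analysis did, is what surfaces this kind of mismatch.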

    CHAKRABARTI: Okay. So systems issue here. … A systems mindset that it sounds like you’re calling for that needs to be integrated into regulation. But tell me a little bit more about what that system mindset looks like.

    DOSHI-VELEZ: Exactly. So we’ve done some studies in our group and many other people have done similar studies that show that if you give people some information, a recommendation, they’re busy and they’re just going to follow the recommendation. Hey, that drug looks about right. Great, let’s go for it. And they’ll even say the algorithm is fantastic. They’re like, this is so useful, it’s reducing my work.

    We’ve done a study where we gave bad recommendations and people didn’t notice because, you know, they were just going through and doing the study. And it’s really important to make sure that when we put a system out there and say, oh, but of course, the doctor will catch any issues, they may not because they may be really busy.

    CHAKRABARTI: Okay. So Dr. Rosenthal, respond to that. And both of you, please correct me if I say anything that’s a little bit off base. But it sounds to me that the established methods of developing a drug, let’s say, or even building a medical device, involve a way of thinking that doesn’t 100% overlap with software development, not 100%. Is that a problem, Dr. Rosenthal?

    ROSENTHAL: Well, I think it is because most drugs are designed with the disease in mind, not necessarily to save money. I get pitches for AI stuff in medicine every day. Look at my great startup. And most of what they’re claiming is that it will save money. And I think that’s the wrong metric to use, but that’s the common metric that’s used now because most of these devices and most of these AI programs come out of the business space, not the medical space.

    And I think many of them are claiming you don’t need the doctor really to look and see if it’s right or not. And I’ll say I haven’t practiced medicine in many years. But, you know, kind of diagnosis is very holistic. And you can check all the boxes for one diagnosis and look at a patient and say, no, that’s not the right one.

    CHAKRABARTI: Hmm. Professor Doshi-Velez, did you want to respond to that?

    DOSHI-VELEZ: I think that’s a great point. And goes back to the point that you made earlier, that we really need doctors in the loop. These are not replacements.

    CHAKRABARTI: The FDA in 2019 put out a paper. It’s the artificial intelligence and machine learning discussion paper that they put out. And in a sense, they have offered kind of an early initial flow for decision making at FDA on how to regulate software as a medical device, which is what they call it. And the first part of the flow is actually determining whether, I’m looking at it right now, determining whether the culture of quality and organizational excellence of the company developing the AI reaches some kind of standard that FDA wants. In other words, do they have good machine learning practices? And as the computer scientist at the table, Professor Doshi-Velez, I’m wondering what you think about that.

    DOSHI-VELEZ: I think that’s critical. I think ultimately there’s a lot of questions that you would want to ask of a company as they go through developing these devices or software as medical devices. I think the good news is that there are procurement checklists that are being made. Canada has an AI directive. World Economic Forum recently put out a set of guidelines, and these basically go through all the questions you should ask a company when you’re thinking about using an AI device. And they’re quite comprehensive.

    CHAKRABARTI: And who would ask those questions?

    DOSHI-VELEZ: So in this case, it’s if you’re someone who’s buying AI, and it’s public sector buying an AI, what would you consider?

    CHAKRABARTI: We wanted to understand a little bit more about what the process is right now at FDA. I mean, it’s still under development for sure, but a couple of at least some artificial intelligence programs or platforms have received FDA approval. And so we reached out to a company that’s been through the process. And so we spoke with Nathan Gurgel. He is director of Enterprise Imaging Product Marketing at Fujifilm Healthcare Americas Corporation.

    NATHAN GURGEL: I look at it as kind of like autopilot on an airline. It probably could land the plane, but we as humans and as the FAA feel more comfortable having a pilot. It’s the same way for AI and imaging. The FDA, you know, really has very specific guidelines about being able to show efficacy within the AI and making sure that the radiologists are really the ones that are in charge.

    CHAKRABARTI: So you might be old enough, like me, to think of Fujifilm as a photography and imaging company, which in fact it is. And Fuji is actually taking that imaging expertise and applying it pretty aggressively to AI and health care. So they’ve developed a platform that they say enables AI imaging algorithms to be used more effectively by radiologists and cardiologists. The FDA certified Fujifilm’s platform, called REiLI, last year. And Gurgel told us that getting that FDA certification actually began at Fujifilm: The company did its own deep review of current FDA guidelines to evaluate its own product, and then it went through a pre-certification process with FDA.

    GURGEL: You can actually meet with them and say, this is what our understanding is of the guidance and how we’re interpreting that. And then you can get feedback from them to say, Yes, you’re interpreting that. Or maybe we want to see something a little bit different within some of your study or your evaluation process. And so that gives you some confidence before you do the actual submission.

    CHAKRABARTI: Gurgel said the process was beneficial for Fujifilm and it led to certification, but he also said there’s still a lot for the FDA to learn. About the technology it’s tasked with regulating. In particular, the FDA needs to increase its technical understanding of how AI works to process and identify findings in imaging software.

    GURGEL: I do feel like in that area that is a learning process for the FDA of understanding what that entails and how that can potentially influence the end users, and in our case would be the radiologist within their analysis of the imaging.

    CHAKRABARTI: Now, Gurgel also told us that Fujifilm, of course, is a global company. And so that means they have experience with AI regulations in several different countries, making it easier for them to bring AI products to market.

    GURGEL: We have it in use right now within Japan, but when we are bringing it into the U.S., we’re required to go through reader studies. So we have radiologists take a look at that. But really what they are doing is proving the efficacy of that algorithm and making sure it provides and is meeting the needs of the radiology, and the radiology user. And making sure that when we bring it to the U.S. that it also is trained and is useful within the patient population within the U.S.

    CHAKRABARTI: Now, another important distinction, Gurgel points out that right now FDA regulates static algorithms. These algorithms don’t automatically update with new information. They’re working on a new regulatory framework for that. And Gurgel said FDA does need to continue to develop guidelines for those.

    GURGEL: Is there ever going to be the ability for these medical processing algorithms to update themselves? And where is the oversight for that? So as they go through and make changes and they hopefully improve themselves. Do the radiologists still agree with that? Are there, you know, still the same efficacy that was brought forward when the algorithm was first introduced into the market? So I think that’s the big question mark at this point, is how and when do we get to that automatic machine learning or deep learning?

    CHAKRABARTI: So that’s Nathan Gurgel, Director of Enterprise Image Product Marketing at Fujifilm Health Care Americas Corporation. Dr. Elisabeth Rosenthal, what do you hear in that process that Gurgel just described to us?

    ROSENTHAL: Well, I hear the same problem the FDA has with drugs and devices generally, which is that, you know, companies bring drugs and devices to the FDA. The companies do the studies they present to the FDA. In the case of drugs, you know, the FDA convenes these expert panels. Who are going to be the expert panels for AI programs? That’s going to be a hard lift. And they haven’t said whether they’re going to have those.

    So and again, there’s this question of the safe and effective standard. Effective compared to what? It’s why we in the United States have a lot of drugs that are effective compared to nothing, but not effective compared to other drugs. So, you know, are we talking about effective, more effective than a really good physician? Or more effective than a not very good physician? Or more effective than nothing? So I think, you know, some of these problems are endemic to the FDA’s charter and they’re just multiplied by the complexity of AI.

    CHAKRABARTI: Oh, fascinating. Professor Doshi-Velez, I see you nodding your head. Go ahead.

    DOSHI-VELEZ: I think a lot of the promise of AI well, in imaging … is to automate boring tasks, like finding uninteresting things. But when it comes to like finding, you know, those polyps or those issues in the images, there’s a lot of places that don’t have great access to those experts. And so there’s a lot of potential for good if you take someone who’s average and can give them some pointers and make them excellent. But that just comes into transparency. It’s really important that we know exactly what standard this meets.

    CHAKRABARTI: Transparency, indeed. But … quickly, Dr. Rosenthal, I hear both of you when you say, you know, effective compared to what? But who should be setting the guidelines to answer that question? Should that be coming from FDA? Should it be coming from the companies? I mean, it’s an important question. How do we begin to answer it?

    ROSENTHAL: Well, the FDA isn’t allowed to make that decision right now. So that’s an endemic problem there. And we don’t have a good mechanism in this country to think about that. And to think about, again, appropriate use. Yes, maybe a device that’s pretty good at screening, but not as good as seeing a specialist at the Mayo Clinic is really useful in places where you don’t have access to specialists. But that’s where transparency comes in. But do you really want to trust the companies that are making money from these devices and these programs to say, Well, we think it’s this effective or not, we just don’t have a good way to measure that at the moment.

    CHAKRABARTI: Okay. So then we’ve only got another minute and a half or so, Dr. Rosenthal. I’d love to hear from you, what do you think the next steps should be like? Because there’s no doubt that AI is going to continue to be developed for health care. … So what would you like to see happen in the next year or five years to help set those guardrails?

    ROSENTHAL: Oh, that’s such a huge problem, because I don’t think we have the right expertise or the right agency at the moment to think about it. And particularly in our health care system, which is very disaggregated and balkanized and, you know, AI has tremendous potential for good, but it also has tremendous potential for misuse. So I think we need some really large scale thinking, maybe a different kind of agency. Maybe the FDA’s initial charter is due for rethinking. But at the moment, I just don’t think there’s a good place to do it.

    Part III

    CHAKRABARTI: It’s episode three of our special series, Smarter health. And today we’re talking about regulation or the new kind of framework, mindset or even agency that the United States might need to effectively regulate how AI could change American health care. I’m joined today by Professor Finale Doshi-Velez. She’s a professor of computer science at Harvard University, and she leads the Data to Actionable Knowledge group at Harvard Computer Science as well.

    Now here again is Dr. Kedar Mate of the nonprofit Institute for Healthcare Improvement, and he talked with us about how regulators can use the expertise in the industry to develop guidelines to regulate.

    DR. KEDAR MATE: I think some of this, by the way, can be done collaboratively with the industry. This doesn’t need to be a confrontational thing between regulatory agencies, you know, versus industry.

    I think actually industry is setting standards today about how to build algorithms, how to build bias-free algorithms, how to build transparency in a process, how to build provider disclosure, etc. And a lot of that can be shared with the regulatory agencies to help power the first set of standards and write the regulatory rules around the industry.

    CHAKRABARTI: Professor Doshi-Velez, you know, I wonder if even thinking of this as how do we build regulation is maybe not the best way to think about it, because regulation to me feels very downstream. Should we, when we talked about mindset, should we be thinking more upstream?

    And should really one of the purposes of government be to tell AI developers, Well, here are the requirements that we have, like the kinds of data used to train the algorithm. Or here’s what we require regarding transparency, things like that that are further upstream. Would that be a different and perhaps more effective way to look at what’s needed?

    DOSHI-VELEZ: 100% agreed that having requirements earlier in the process would be super helpful. And I also would say that it needs to be a continual process, because these systems are not going to be perfect the first time. They’re going to need to be updated. And we’ve talked about the algorithm gathering data to update itself, but also the data changes under your feet.

    You know, people change, processes used in medical centers change, and all of a sudden your algorithms go out of whack. So it does need to be a somewhat collaborative process of continually asking: Where are your requirements, how are you going to change, what are you going to disclose so everyone else can notice? Because as was noted before, it may not be in the company’s interest, or even the purchaser’s interest, to be monitoring closely. But if certain things need to be disclosed, then at least it’s out there for the public to be able to see.
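    The "data changes under your feet" problem can be made concrete with simple post-deployment drift monitoring. The sketch below is a minimal illustration in Python, not any regulator's or vendor's actual method; the feature, threshold, and numbers are all invented.

```python
# Minimal drift check (invented example): flag when a live feature's
# mean drifts more than two training standard deviations away.
from statistics import mean, stdev

def drift_alert(train_values, live_values, z_threshold=2.0):
    """Return True when the live mean is far from the training mean,
    measured in training standard deviations."""
    mu, sigma = mean(train_values), stdev(train_values)
    return abs(mean(live_values) - mu) / sigma > z_threshold

train_ages = [40, 45, 50, 55, 60, 65, 70]  # population at training time
live_ages = [20, 22, 25, 24, 23, 21, 26]   # population after deployment

print(drift_alert(train_ages, live_ages))  # True: the population shifted
```

    A disclosed check like this is one way "everyone else can notice" when an algorithm's inputs no longer look like its training data.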

    CHAKRABARTI: We like to occasionally leave the United States and learn from examples abroad, and I’d really like to do that in this situation. So let’s hop over to Cyprus, because that’s where Yiannos Tolias is joining us from. Yiannos is the legal lead on AI liability in health care for the European Commission, and he worked on the teams that developed the Regulation and Health Data Regulation for the EU. Yiannos, welcome to you.

    YIANNOS TOLIAS: Thanks a lot. That’s very nice to be here.

    CHAKRABARTI: Can you first tell us why the European Commission very intentionally prioritized regulating AI?

    TOLIAS: Just to mention that, of course, the views I am expressing will be personal, not necessarily representing the official position of the European Commission. But I could of course describe the regulatory frameworks that we have now in place. Basically the story of the European Union started back in 2017 where the European Parliament and later the European Council, which is the institution that represents all the 27 member states of the EU, have asked the Commission to come up with a legislative proposal on AI.

    And specifically to look at the benefits and risks of AI. More specifically, they referred to issues like opacity, complexity, bias, autonomy, fundamental rights, ethics, liability. So they asked the Commission to consider and study all those and come up with a piece of legislation. And the Commission came up with the so-called AI Act … which was published as a proposal in April of last year, 2021.

    And it has now gone to the European Parliament and the Council for adoption, and of course maybe amendments too. And there are four main objectives that this regulation aims at. First of all, to ensure safety. Secondly, to ensure legal certainty, so the manufacturers are certain about their obligations. Thirdly, to create a single market for AI in Europe. So basically, if you develop AI in France and you follow those requirements, you should be able to move it throughout Europe, to Sweden and Italy, without any obstacles. And fourthly, to create a governance around AI and protect fundamental rights.

    CHAKRABARTI: Okay. Can I just step in here for a moment? Because I think I’m also hearing that there was something else, perhaps even more basic, because you had told us before as well that in a sense, creating a kind of framework to regulate AI like is in place for pharmaceuticals in Europe. You know, it might increase the cost to develop and manufacture AI.

    But I think you’ve told our producer that it creates an equal level of competition. Everyone has to fulfill the requirements. And so therefore, it creates trusts with physicians who could deploy or use it.

    TOLIAS: Yeah. These are the four objectives I mentioned, so I put them a bit into four groups. First, this piece of legislation aims to create safety. So you are feeling safe as a patient, as a physician, to use it, and even not being liable using it, and even trust it. So to create like a boost of uptake of AI. Secondly, to ensure legal certainty, to boost basically innovation. … Because everyone, all the manufacturers, would be at the same level playing field, in the sense that they would all be obliged to do the same, and no other member state in the EU.

    Because this would be, let’s call it, at the federal level. So it will be applicable to all the member states, and the member states of the EU would not be able to come up with additional requirements. So you have a set of requirements at EU level, and every startup, every company in the EU would be following those.

    CHAKRABARTI: Okay. So let’s talk momentarily about one of those specific requirements. I understand that there’s a requirement now about the kind of data that algorithms get trained on: that companies have to show, through the EU approval process, that they have trained their algorithms on a representative data set that accurately represents the patient population across Europe.

    TOLIAS: Yes, exactly. There are different obligations in the AI Act, one of which is the data governance, data quality obligations. And there are a series of requirements about annotation, labeling, collection and use of data, all these issues, including an obligation that the training, validation and testing datasets should consider the geographical, behavioral and functional settings within which the high-risk AI system … is intended to be used. …

    CHAKRABARTI: Stand by for a second, because I want to turn back to Professor Doshi-Velez. This issue brings together two threads: we talked a lot about the data used to train algorithms in our ethics episode, and now regulation as well. Let’s bring it back to the U.S. context. I can see the advantage of putting into place a requirement, let’s say FDA did, that said all AI developers have to train their algorithms on data that’s representative of the American patient population. Is that possible? Where would that data come from?

    DOSHI-VELEZ: I think that ultimately has to be the goal. We don’t want populations left out, and yet currently we have populations that are left out of our datasets. I think there absolutely has to be an obligation to be clearer about who this algorithm might work well for. So that you don’t apply it incorrectly to a population that it might not work well for, or to test it carefully as you go. But ultimately, I think we need better data collection efforts to be able to achieve this goal.
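The kind of representativeness obligation being discussed here could, in practice, start with a simple audit that compares a dataset's demographic makeup against reference population benchmarks. A minimal sketch follows; the group names, counts, benchmark shares and 5 percent tolerance are all hypothetical illustrations, not figures from the AI Act or any FDA guidance.

```python
# Sketch of a dataset-representativeness audit. All group names and
# numbers below are hypothetical, for illustration only.

def representativeness_gaps(dataset_counts, population_shares, tolerance=0.05):
    """Return groups whose share of the dataset deviates from the
    reference population share by more than `tolerance`.

    dataset_counts: {group: number of records in the training set}
    population_shares: {group: fraction of the reference population}
    """
    total = sum(dataset_counts.values())
    gaps = {}
    for group, pop_share in population_shares.items():
        data_share = dataset_counts.get(group, 0) / total
        if abs(data_share - pop_share) > tolerance:
            gaps[group] = (data_share, pop_share)
    return gaps

# Hypothetical imaging dataset vs. census-style age benchmarks:
# older patients are badly underrepresented here, so they get flagged.
counts = {"age_18_44": 700, "age_45_64": 250, "age_65_plus": 50}
benchmarks = {"age_18_44": 0.45, "age_45_64": 0.33, "age_65_plus": 0.22}
print(representativeness_gaps(counts, benchmarks))
```

A check like this only surfaces the gap; as Doshi-Velez notes, closing it requires better data collection, or at minimum clear labeling of which populations the algorithm was and was not validated on.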

    CHAKRABARTI: So there’s even a further upstream challenge, you’re saying, here in the United States. Well, there’s another issue, and I’d like to learn how Europe is handling it. It’s one that we’ve mentioned a couple of times already: the need for transparency throughout this process, from the algorithm development process through the regulatory process. And we asked Dr. Matthew Diamond at FDA about this.

    And he told us that FDA has sought input from patients, for example, about what kinds of labels, what they want to know about AI tools being used in health care. And he said that transparency is critical for each stakeholder involved with the technology.

    DR. MATTHEW DIAMOND: It’s crucial that the appropriate information about a device, and that includes its intended use, how it was developed, its performance and also, when available, its logic, is clearly communicated to stakeholders, including users and patients.

    It’s important for a number of reasons. First of all, transparency allows patients, providers and caregivers to make informed decisions about the device. Secondly, that type of transparency supports proper use of the device. For example, it’s crucial for users of the device to understand whether a device is intended to assist rather than replace the judgment of the user.

    Third, transparency also has an important role in promoting health equity because, for example, if you don’t understand how a device works, it may be harder to identify problems. Transparency fosters trust and confidence.

    CHAKRABARTI: That’s Dr. Matthew Diamond at FDA. Yiannos Tolias, Europe has put in something that I’ll just refer to as a human supervision provision. What does that do and … why is that important for the trust and transparency aspect of regulating AI?

    TOLIAS: I think there is an interesting issue which was raised, of where you find the data to ensure that it is representative of the people in Europe. And this is a very good point. It was actually considered in the EU that that would be a problem. Hence why we have another piece of legislation, what is called the European Health Data Space Regulation, which was published just a couple of weeks ago, on the 1st of May of this year, actually.

    Which basically provides the obligation of data holders, like a hospital, to make their data available. … And then researchers and regulators would be able to access those data in a secure environment, anonymized and so on, to train, test and validate algorithms. So basically the idea is that you bring all the 27 member states, all, let’s say, hospitals or all data holders, which could also be beyond hospitals, to be basically coordinating their data, and researchers, startups and regulators to be able to use all this pool of data. So there is a new regulation on that specific issue, too.

    CHAKRABARTI: … I definitely appreciate this glimpse that you’ve given us into how Europe is coming up with a new regulatory schema for AI in health care. So Yiannos Tolias, legal lead on AI liability in health care for the European Commission. Thank you so much for being with us today.

    TOLIAS: Thanks a lot. It was a great pleasure to be with you.

    CHAKRABARTI: Professor Doshi-Velez, we’ve got about a minute left and I have two questions for you. First of all, the one thing that we haven’t really addressed head on yet is the fact that everyone wants to move to a place where constant machine learning is one of the strengths that AI could bring to health care.

    And it seems right now that the FDA is looking at things as fixed, even though they know that constant development is going to be in the future. What do we need to do to get ready for that?

    DOSHI-VELEZ: I’m going to take a slightly contrary view here. I don’t think that algorithms in health care need to be learning constantly. I think we have plenty of time to roll out new versions and check new versions carefully. And that is actually super important. And what I worry about, as I said before, is that not only do we have to worry about the algorithms changing, but also the data and the processes changing under our feet. And that’s why we just need, you know, post-market surveillance mechanisms.

    CHAKRABARTI: Okay, that’s interesting. So then I’m going to give you ten more seconds to tell me in the next year or five years, what one thing would you like to see in place from regulators?

    DOSHI-VELEZ: So as I mentioned earlier, there are some really great checklists out there that have been developed in the last year in terms of transparency. I would love to see those adopted. I think transparency is the way we’re going to get algorithms that are safe, fair and effective.

    This series is supported in part by Vertex, The Science of Possibility.

  • GOP takes indirect aim at Fetterman’s health in Pennsylvania Senate race

    GOP takes indirect aim at Fetterman’s health in Pennsylvania Senate race

    The well-wishing is over. Now Pennsylvania Lt. Gov. John Fetterman’s stroke is officially a campaign issue in the swing state’s U.S. Senate race.

    But rather than directly criticize Fetterman over his health, Republicans are taking a different approach: bashing the Democrat for not being more transparent about the stroke that hospitalized him four days before he handily won the May 17 primary.

    The Fetterman campaign waited two days to disclose his hospitalization, issued a statement that puzzled cardiologists and later acknowledged that he had a previously undisclosed heart condition that led doctors to implant a pacemaker with a defibrillator last month. He was released from the hospital several days after the election.

    On Thursday, the National Republican Senatorial Committee, or NRSC, released a web ad that featured news coverage of pundits and reporters discussing the Fetterman campaign’s evolving explanations of his health and hospitalization, asking, “Does John Fetterman Have a Problem Telling the Truth?”

    The ad, from the campaign arm of Senate Republicans, was a marked departure from recent remarks by Fetterman’s opponent, celebrity TV doctor Mehmet Oz, who wished him well when he was first hospitalized.

    It is also the first time Fetterman’s health has been raised, although indirectly, by Republicans, who plan a paid TV media buy to tarnish the brand of a Democrat who built a reputation as a larger-than-life straight talker, according to an NRSC official who was not authorized to discuss campaign strategy publicly. The official said the NRSC also plans to target Fetterman over how he has discussed a 2013 incident when he pulled a gun on a Black man he suspected of criminal activity.

    Pennsylvania Democrats, meanwhile, have expressed concerns about how the Fetterman campaign has handled both the stroke and discussion of the gun incident. But a spokesman for the lieutenant governor said the GOP criticisms won’t work with voters.

    “Pennsylvania voters know and trust John Fetterman. Who they don’t trust is Mehmet Oz, who is a fraudster and a scam artist who isn’t even from and does not know Pennsylvania,” Fetterman spokesman Joe Calvello said, obliquely referring to a Democratic Senatorial Campaign Committee web ad attacking Oz, whose campaign wouldn’t comment.

    Fetterman’s wife, Gisele Fetterman, insisted in an NBC News interview that aired Wednesday that her family and the campaign had been open about his condition as they were just trying to “navigate these very private and difficult things very publicly.”

    “We have done a wonderful job on transparency,” she said.

    On Election Day, Gisele Fetterman suggested her husband’s condition wasn’t so bad, calling the stroke “a little hiccup.” She said he’d be “back on his feet in no time.” But Fetterman remained in the hospital for nine days, and his campaign says he’s still resting and might not be back on the trail until July.

    It was not until last Friday that Fetterman, in a statement issued by his doctor, disclosed that he had been diagnosed in 2017 with “atrial fibrillation, an irregular heart rhythm, along with a decreased heart pump.”

    Fetterman, who is 6-foot-8 and weighed 418 pounds at the time of his diagnosis, when he was the mayor of Braddock, promptly went on a diet, and a year later he touted his new healthy lifestyle, telling the Pittsburgh Tribune-Review he had lost 148 pounds. He didn’t mention his heart condition in that interview, nor was he taking his heart medication or seeing his doctor at the time.

    “He probably thought to himself: ‘I lost 150 pounds. I’m running around. I’m healthy now. I don’t need to tell anyone or see my doctor or take my medications.’ Well, that was dumb. Now he’s got a pacemaker, and people are asking questions,” said Neil Oxman, a Pennsylvania Democratic strategist.

    Oxman said the Republican attack on Fetterman’s transparency was the only way to broach his health without seeming cruel. But he said it would have little salience with voters because of the timing and because Fetterman is expected to be back on the campaign trail.

    “If he’s up and running three months from now, no one will care,” Oxman said.

    Republican consultant Charlie Gerow, who ran unsuccessfully for governor in last month’s GOP primary, agreed with Oxman that the attack is “not a game changer,” saying Fetterman will be weighed down more by the toll of inflation and other headwinds facing Democrats in the midterm elections.

    But Gerow said that in a closely divided swing state, everything matters.

    “When candidates don’t talk straight, it doesn’t play well,” Gerow said.