Tag: health

  • Health Tips: The super trio that helps fight cancer when you’re over 70

    Dr. Mehmet Oz and Dr. Mike Roizen

    Marvel Comics loves a trio of superheroes: There are Captain America, Thor and Iron Man as the Avengers Prime and the original Defenders — Namor, Hulk and Doctor Strange.

    Your health enjoys a trio of superheroes, as well. The combination of vitamin D3, omega-3s and strength-building exercise is powerful enough to slash your risk of developing invasive cancer after age 70 by 60%.

    A new multinational, randomized, controlled trial published in Frontiers in Aging looked at the effect of taking 2,000 IU a day of vitamin D3, taking 1 gram of omega-3s and doing a simple at-home strength-building exercise program at least three times weekly.

    Among the more than 2,000 participants who were tracked from 2012 to 2017, only four people who followed all three of the recommended interventions developed cancer, while 12 who followed none of them were diagnosed with cancer. The benefits of doing any one of the treatments or combining two of them were measurable, but not nearly as powerful as the trio together.
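    For readers curious how a headline percentage can be derived from raw case counts like these, here is a minimal, purely illustrative Python sketch. It assumes equal-sized comparison groups (a hypothetical figure; the article says only that there were more than 2,000 participants in total), and the trial's published estimate came from an adjusted statistical model, so this simple ratio only approximates it.

    ```python
    # Illustrative only: relative risk reduction from raw case counts,
    # assuming (hypothetically) equal-sized comparison groups.
    cases_all_three = 4    # cancers among those following all three interventions (from the article)
    cases_none = 12        # cancers among those following none of them (from the article)
    group_size = 1000      # assumed for illustration; not stated in the article

    risk_treated = cases_all_three / group_size
    risk_untreated = cases_none / group_size
    relative_risk_reduction = 1 - (risk_treated / risk_untreated)

    print(f"Relative risk reduction: {relative_risk_reduction:.0%}")  # ~67% under these assumptions
    # The trial's covariate-adjusted estimate, cited above as roughly 60%, is in the same ballpark.
    ```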

    Your trio:

    1. The study used sit-to-stand exercises, single-leg balance routines, elastic resistance bands and going up steps. Exercise helps fight cancer by improving immune strength.

    2. Ask your doc for a blood test to check your vitamin D levels. Then take the prescribed amount of D3 to build your immune and bone strength (2,000 IU in the study).

    3. You can get omega-3 fatty acids from salmon, anchovies, herring and sea trout, and dietary supplements made from algae or fish oil (1,000 milligrams in the study). These heart-loving fats may actually promote cancer cell death.

    More evidence that the “What to Eat When” approach works

    Two of the oldest known Sumerian written works, the “Kesh Temple Hymn” and the “Instructions of Shuruppak,” date to around 2,500 B.C. I have not been writing about how to roll back your RealAge through smart nutrition for that long, but at times it feels like it! Still, I’m always glad to see backup for my life’s work from new, high-quality research by scientists interested in long and healthy living. The most recent is a review from the USC Leonard Davis School of Gerontology, published in Cell. It looked at hundreds of studies on nutrition, diseases and longevity in laboratory animals and humans. They included high-fat and low-carbohydrate ketogenic diets, vegetarian and vegan diets, the Mediterranean diet and calorie-restricted diets.

    The researchers found that the best diet for an extended healthspan and lifespan includes moderate to high carbohydrate intake from non-refined sources, low but sufficient protein largely from plant-based sources (lots of legumes), and enough plant-based fat to provide about 30% of energy needs. Ideally, your day’s meals happen in an 11-12 hour window, and every three to four months, you go through a five-day cycle of a fasting or quasi-fasting diet. That helps lower insulin resistance, blood pressure and other risk factors for chronic diseases. Sound familiar?
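    As a rough illustration of what “about 30% of energy needs” from fat means in grams, here is a minimal Python sketch. The 2,000-calorie daily figure is an assumed example, not a number from the review; the 9 calories-per-gram value for fat is the standard conversion factor.

    ```python
    # Illustrative only: converting "about 30% of energy from fat" into grams per day,
    # using an assumed 2,000 kcal/day energy requirement.
    daily_calories = 2000          # assumed example energy need
    fat_share = 0.30               # ~30% of energy from (mostly plant-based) fat, per the review
    CALORIES_PER_GRAM_FAT = 9      # standard energy density of fat

    fat_calories = daily_calories * fat_share
    fat_grams = fat_calories / CALORIES_PER_GRAM_FAT
    print(f"{fat_calories:.0f} kcal from fat is about {fat_grams:.0f} g of fat per day")  # 600 kcal ≈ 67 g
    ```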

    One caution: Once you are over 65, to avoid frailty, you must increase your protein intake and make sure to eat plenty of complex, unrefined carbs. For help: Check out “What to Eat When” and the “What to Eat When Cookbook” to start your live-better-younger-and-longer campaign.

  • Smarter health: Regulating AI in health care

    Health care is heavily regulated. But can the FDA effectively regulate AI in health care?

    “Artificial intelligence can have a significant positive impact on public health,” the FDA’s Dr. Matthew Diamond says. “But it’s important to remember that, like any tools, AI enabled devices need to be developed and used appropriately.”

    That’s Dr. Matthew Diamond, head of digital health at the FDA. Does the agency have the expertise to create the right guardrails around AI?

    “We’re starting to learn how to regulate this space. … I don’t know that it’s particularly robust yet,” Dr. Kedar Mate says. “But we need to learn how to regulate the space.”

    Today, On Point: Regulating AI in health care. It’s episode three of our special series Smarter health: Artificial intelligence and the future of American health care.

    Guests

    Elisabeth Rosenthal, editor-in-chief of Kaiser Health News. Author of “An American Sickness.” (@RosenthalHealth)

    Finale Doshi-Velez, professor of computer science at Harvard University. Head of the Data to Actionable Knowledge Lab (DtAK) at Harvard Computer Science.

    Yiannos Tolias, lawyer at the European Commission who worked on the team that developed AI regulation proposals. Senior global fellow at the NYU School of Law, researching liability for damages caused by AI systems. (@Yanos75261842)

    Also Featured

    Dr. Matthew Diamond, chief medical officer at the FDA’s Digital Health Center of Excellence.

    Nathan Gurgel, director of enterprise imaging product marketing at FUJIFILM Healthcare Americas Corporation.

    Dr. Kedar Mate, CEO of the Institute for Healthcare Improvement. (@KedarMate)

    Part I

    MEGHNA CHAKRABARTI: Episode three: The regulators. Over the four months and dozens of interviews that went into this series, one thing became clear, because just about everyone said it to us. Artificial intelligence has enormous potential to improve health care, if a lot of things don’t go enormously wrong.

    Doctors, scientists, programmers, advocates, they all talk to us about the important need to, quote, mitigate the risks, to create comprehensive standards for evaluating if AI tools are even doing what they claim to do, to avoid what could easily go wrong. In short, to regulate and put up guardrails on how AI is used in health care.

    For now, the task of creating those guardrails falls to the Food and Drug Administration. Dr. Elisabeth Rosenthal is editor in chief at Kaiser Health News. Dr. Rosenthal, welcome back to On Point.

    DR. ELISABETH ROSENTHAL: Thanks for having me.

    CHAKRABARTI: So let’s get right to it. Do you think, Dr. Rosenthal, that the FDA, as it is now, can effectively regulate artificial intelligence algorithms in health care?

    ROSENTHAL: Well, it’s scrambling to keep up with the explosion of algorithms. And the problem I see is that the explosion is great. It’s mostly driven by startups, venture capital, looking for profit. And with a lot of promises, but very little question about, How is this going to be used? So what the FDA does and what companies try to do is just get their stuff approved by the FDA, so they can get it out into the market. And then how it’s used in the market is all over the place. And AI has enormous potential, but enormous potential for misuse, and poor use and to substitute for good health care.

    CHAKRABARTI: Okay. So that explosion in the use and potential of health care, FDA is really aware of just that simple fact. We spoke with Dr. Matthew Diamond, who’s the chief medical officer of the Digital Health Center of Excellence at FDA. And we’re going to hear quite a few clips from my interview with him over the course of today’s program. We spoke with him late last month, and he talked about a significant challenge for the FDA in regulating AI.

    DR. MATTHEW DIAMOND: It’s important to appreciate that the current regulatory framework that we have right now for medical devices was designed for more of a hardware based world. So we’re seeing a rapid growth of AI enabled products, and we have taken an approach to explore what an ideal regulatory paradigm would look like to be in sync with the natural lifecycle of medical device software in general. And as you mentioned, AI specifically.

    CHAKRABARTI: Dr. Rosenthal, I mean, just to bring it down to a very basic level, FDA regulates drugs and devices. The regulatory schemes for both are different because drugs are different than devices. It seems as if FDA is going down the track of seeing software as a device, but do you think it has the expertise in place to even do that effectively?

    ROSENTHAL: Well, it’s not what it was set up to do. Remember when the FDA started regulating devices, it was for things like tongue depressors, you know, and then it moved on to defibrillators and things like that. But, you know, the software expertise is out there in techland and in tech believers. And so it’s very hard to regulate.

    And much of the AI stuff that’s getting approved is approved through something called the 510(k) pathway, which means you just have to show that the device, in this case an AI program or an AI enabled device, is similar to something that’s already on the market. And so you get a kind of copycat approval.

    And what is similar, one that wasn’t AI enabled. In some cases, that appears to be the track. And then what they ask for subsequently is real world evidence that it’s working. The FDA has not been good historically in drugs or devices at following up and demanding the real world evidence from companies. And frankly, companies, once they have something out there in the market, they don’t really want evidence that maybe it doesn’t work as well as they thought originally. So they’re not very good at making the effort to collect it, because it’s costly.

    CHAKRABARTI: You know, from my layperson’s perspective here, one of the biggest challenges that I see is that the world of software development, outside of health care, is a world where for a lot of good reasons — What’s the phrase that came out of Silicon Valley? Perpetual beta. It’s like the software is continuously being developed as it’s in the market. Right? We’re all using software that gets literally updated every day. How many times I have to do that on my phone? I can’t tell you.

    But in health care, it’s very, very different. The risks of that constant development, there can be considerable. Because you’re talking about the care of patients here. Do you have a sense that the FDA has a framework in mind or any experience with that kind of paradigm where it’s not just, you know, a tool that they have to give preclearance for, and then the machine gets updated two years later and then they give clearance for that too? It seems like a completely different world.

    ROSENTHAL: Yes, it is. And they announced last September a kind of framework for looking at these kind of things and asked for comment. And when you look at the comments, they’re mostly from companies developing these AI programs who kind of want the oversight minimized. It was a little bit like, trust us, make it easy to update. And you know, I can tell you, for example, on my car, which automatically updates its software. Each time it updates, I can’t find the windshield wipers. You know, that’s not good.

    So there’s tremendous potential for good in AI, but also tremendous potential for confusion. And I think another issue is often the goals of some of these new AI products is to, quote-unquote, make health care cheaper. So, for example, one recent product is an AI enabled echocardiogram. So you don’t need a doctor to do it. You could have a nurse or a lay person to do it. Well, I’m sorry, there are enough cardiologists in the United States that everyone should be able to get a cardiologist doing their echocardiogram.

    We just have a very dysfunctional health care system where that’s not the case. So, you know, AI may deliver good health care, but not quite as good as a physician in some cases. In other cases, it claims to do better. You know, it can detect polyps on a colonoscopy better than a physician. But I guess the question is, are the things that it’s detecting clinically significant or just things? And so these questions are so fraught. So, you know, I’m all in for a hybrid approach that combines a real person and AI. But so many times the claims are this is going to replace a person. And I think that’s not good.

    CHAKRABARTI: Yeah, that’s actually going to be one of the centers of our focus for us in our fourth and final episode in this series. But you know, the thing about AI and health care and regulation that seem, it seems to me, to be the perfect distillation of a constant challenge that regulators have. Technology is always going to outpace what the current regulatory framework is, that that doesn’t seem to me to be a terrible thing.

    That’s just what it is. But in health care, you don’t really want the gap to be too big. Because in that gap, what we have are the lives of patients. And, you know, we’ve spoken to people. Glenn Cohen at Harvard Law School was with us last week and he said he sees a problem in that the vast majority of algorithms to potentially use in health care, FDA wouldn’t even ever see them.

    Because they would be the kinds of things that hospitals could just implement without FDA approval. And he talked with us about that FDA just isn’t set up to be a software first kind of regulator. Now, Dr. Matthew Diamond at FDA, when we talked to him, he actually acknowledged that. And here’s what he said.

    DR. MATTHEW DIAMOND: What we have found is that we can’t move to a really more modern regulatory framework, one that would truly be fit for purpose for modern day software technologies, without changes in federal law. You know, there is an increasing realization that if this is not addressed, there will be some critical regulatory hurdles in the digital health space in the years to come.

    CHAKRABARTI: Dr. Rosenthal, we have about 30 seconds before our first break, but just your quick response to that?

    ROSENTHAL: Well, I think there is a big expertise divide. You know, the people who develop these software algorithms tend to be tech people and not in medicine. And the FDA doesn’t have these tech people on board because the money is all in the industry, not in the regulatory space.

    CHAKRABARTI: Well, when we come back, we’re going to talk a little bit more about the guidelines or the beginnings of guidelines that the FDA has put out. And how really what’s needed more deeply here is maybe a different kind of mindset, a new regulatory approach when it comes to AI and health care. What would that mindset need to include?

    Part II

    CHAKRABARTI: Today, we’re talking about regulation. Health care is already a heavily regulated industry. But do we have the right thinking, the right frameworks, the right capacity in place at the level of state and federal government to adequately regulate the kinds of changes that artificial intelligence could bring to health care? Dr. Kedar Mate is CEO of the nonprofit Institute for Healthcare Improvement. And here’s what he had to say.

    DR. KEDAR MATE: We need regulatory agencies to help ensure that our technology creators, and our providers and our payers are disclosing the uses of AI and helping patients understand them. I absolutely believe that we need to have this space developed, and yet I don’t think we have the muscle yet built to do that.

    CHAKRABARTI: I’m joined today by Dr. Elisabeth Rosenthal. She’s editor in chief at Kaiser Health News. And joining us now is Professor Finale Doshi-Velez. She’s professor of computer science at Harvard University. Professor Doshi-Velez, welcome to you.

    FINALE DOSHI-VELEZ: It’s a pleasure to be here.

    CHAKRABARTI: I’d like to actually start with an example when talking about the kind of mindset that you think needs to come in or evolved into regulation when it comes to AI and health care. And this example comes from Dr. Ziad Obermeyer, who’s out in California, because he told us in a previous episode about something interesting that had happened, they had done this study on a family of algorithms that was being used to examine health records for hundreds of millions of people.

    And they found out that the algorithm was supposed to evaluate who was going to get sick, but how it was doing that was actually evaluating or predicting who’s going to cost the health care system the most. So it was actually answering a different question entirely, and no one really looked at that until his group did this analysis, external analysis. So I wonder what that tells you about the kinds of thinking that goes into developing algorithms and whether regulators recognize that thinking?

    DOSHI-VELEZ: Yeah, it’s such an important question. And the example you gave is perfect. Because many times we just think about the model, but there’s an entire system that goes into the model. There’s the inputs that are used to train the model, as you’re saying, and many times we don’t have a measure of health. What does it mean to be healthy? So we stick in something else, like costly. Clearly, someone who’s using the system a lot, costing the system a lot. You know, they’re sick and that’s true.

    But there’s a lot of other sick people who, for whatever reason, they’re not also getting access to care and are not showing up. So I think the first step there is really transparency. If we knew what our algorithms were really trained to predict, we might say, hey, there might be some problems here. One other thing that I’ll bring up in terms of mindset is also how people use these algorithms, because the algorithms don’t act in a void and once the recommendation comes out how people use them, do they over rely on them, I think is another really important systems issue, right? The algorithm isn’t treating the patient, the doctor is using the algorithm.

    CHAKRABARTI: Okay. So systems issue here. … A systems mindset that it sounds like you’re calling for that needs to be integrated into regulation. But tell me a little bit more about what that system mindset looks like.

    DOSHI-VELEZ: Exactly. So we’ve done some studies in our group and many other people have done similar studies that show that if you give people some information, a recommendation, they’re busy and they’re just going to follow the recommendation. Hey, that drug looks about right. Great, let’s go for it. And they’ll even say the algorithm is fantastic. They’re like, this is so useful, it’s reducing my work.

    We’ve done a study where we gave bad recommendations and people didn’t notice because, you know, they were just going through and doing the study. And it’s really important to make sure that when we put a system out there and say, oh, but of course, the doctor will catch any issues, they may not because they may be really busy.

    CHAKRABARTI: Okay. So Dr. Rosenthal, respond to that, because it sounds to me and both of you, please correct me if I say anything that’s a little bit off base. But it sounds to me that sort of the established methods of developing a drug, let’s say, or even building a medical device, involve a way of thinking that doesn’t 100 percent overlap with software development, not 100 percent. And is that a problem, Dr. Rosenthal?

    ROSENTHAL: Well, I think it is because most drugs are designed with the disease in mind, not necessarily to save money. I get pitches for AI stuff in medicine every day. Look at my great startup. And most of what they’re claiming is that it will save money. And I think that’s the wrong metric to use, but that’s the common metric that’s used now because most of these devices and most of these AI programs come out of the business space, not the medical space.

    And I think many of them are claiming you don’t need the doctor really to look and see if it’s right or not. And I’ll say I haven’t practiced medicine in many years. But, you know, kind of diagnosis is very holistic. And you can check all the boxes for one diagnosis and look at a patient and say, no, that’s not the right one.

    CHAKRABARTI: Hmm. Professor Doshi-Velez, did you want to respond to that?

    DOSHI-VELEZ: I think that’s a great point. And goes back to the point that you made earlier, that we really need doctors in the loop. These are not replacements.

    CHAKRABARTI: The FDA in 2019 put out a paper. It’s the artificial intelligence and machine learning discussion paper that they put out. And in a sense, they have offered kind of an early initial flow for decision making at FDA on how to regulate software as a medical device, which is what they call it. And the first part of the flow is actually determining whether, I’m looking at it right now, determining whether the culture of quality and organizational excellence of the company developing the AI reaches some kind of standard that FDA wants. In other words, do they have good machine learning practices? And as the computer scientist at the table, Professor Doshi-Velez, I’m wondering what you think about that.

    DOSHI-VELEZ: I think that’s critical. I think ultimately there’s a lot of questions that you would want to ask of a company as they go through developing these devices or software as medical devices. I think the good news is that there are procurement checklists that are being made. Canada has an AI directive. World Economic Forum recently put out a set of guidelines, and these basically go through all the questions you should ask a company when you’re thinking about using an AI device. And they’re quite comprehensive.

    CHAKRABARTI: And who would ask those questions?

    DOSHI-VELEZ: So in this case, it’s if you’re someone who’s buying AI, and it’s public sector buying an AI, what would you consider?

    CHAKRABARTI: We wanted to understand a little bit more about what the process is right now at FDA. I mean, it’s still under development for sure, but a couple of at least some artificial intelligence programs or platforms have received FDA approval. And so we reached out to a company that’s been through the process. And so we spoke with Nathan Gurgel. He is director of Enterprise Imaging Product Marketing at Fujifilm Healthcare Americas Corporation.

    NATHAN GURGEL: I look at it as kind of like autopilot on an airline. It probably could land the plane, but we as humans and as the FAA feel more comfortable having a pilot. It’s the same way for AI and imaging. The FDA, you know, really has very specific guidelines about being able to show efficacy within the AI and making sure that the radiologists are really the ones that are in charge.

    CHAKRABARTI: So you might be old enough like me to think of Fujifilm as a photography and imaging company, which in fact it is. And Fuji is actually taking that imaging expertise and applying it pretty aggressively to AI and health care. So they’ve developed a platform that they say enables AI imaging algorithms to be used more effectively by radiologists and cardiologists. And the FDA certified Fujifilm’s platform last year; it’s called REiLI. And Gurgel told us that getting that FDA certification, actually the process began at Fujifilm. The company did its own deep review of current FDA guidelines to evaluate their own product, and then they went through a pre-certification process with FDA.

    GURGEL: You can actually meet with them and say, this is what our understanding is of the guidance and how we’re interpreting that. And then you can get feedback from them to say, Yes, you’re interpreting that. Or maybe we want to see something a little bit different within some of your study or your evaluation process. And so that gives you some confidence before you do the actual submission.

    CHAKRABARTI: Gurgel said the process was beneficial for Fujifilm and it led to certification, but he also said there’s still a lot for the FDA to learn about the technology it’s tasked with regulating. In particular, the FDA needs to increase its technical understanding of how AI works to process and identify findings in imaging software.

    GURGEL: I do feel like in that area that is a learning process for the FDA of understanding what that entails and how that can potentially influence the end users, and in our case would be the radiologist within their analysis of the imaging.

    CHAKRABARTI: Now, Gurgel also told us that Fujifilm, of course, is a global company. And so that means they have experience with AI regulations in several different countries, making it easier for them to bring AI products to market.

    GURGEL: We have it in use right now within Japan, but when we are bringing it into the U.S., we’re required to go through reader studies. So we have radiologists take a look at that. But really what they are doing is proving the efficacy of that algorithm and making sure it provides and is meeting the needs of the radiology, and the radiology user. And making sure that when we bring it to the U.S. that it also is trained and is useful within the patient population within the U.S.

    CHAKRABARTI: Now, another important distinction, Gurgel points out that right now FDA regulates static algorithms. These algorithms don’t automatically update with new information. They’re working on a new regulatory framework for that. And Gurgel said FDA does need to continue to develop guidelines for those.

    GURGEL: Is there ever going to be the ability for these medical processing algorithms to update themselves? And where is the oversight for that? So as they go through and make changes and they hopefully improve themselves. Do the radiologists still agree with that? Are there, you know, still the same efficacy that was brought forward when the algorithm was first introduced into the market? So I think that’s the big question mark at this point, is how and when do we get to that automatic machine learning or deep learning?

    CHAKRABARTI: So that’s Nathan Gurgel, director of enterprise imaging product marketing at Fujifilm Healthcare Americas Corporation. Dr. Elisabeth Rosenthal, what do you hear in that process that Gurgel just described to us?

    ROSENTHAL: Well, I hear the same problem the FDA has with drugs and devices generally, which are, you know, companies bring drugs and devices to the FDA. The companies do the studies they present to the FDA. In the case of drugs, you know, the FDA convenes these expert panels. Who are going to be the expert panels for AI programs? That’s going to be a hard lift. And they haven’t said whether they’re going to have those.

    So and again, there’s this question of the safe and effective standard. Effective compared to what? It’s why we in the United States have a lot of drugs that are effective compared to nothing, but not effective compared to other drugs. So, you know, are we talking about effective, more effective than a really good physician? Or more effective than a not very good physician? Or more effective than nothing? So I think, you know, some of these problems are endemic to the FDA’s charter and they’re just multiplied by the complexity of AI.

    CHAKRABARTI: Oh, fascinating. Professor Doshi-Velez, I see you nodding your head. Go ahead.

    DOSHI-VELEZ: I think a lot of the promise of AI well, in imaging … is to automate boring tasks, like finding uninteresting things. But when it comes to like finding, you know, those polyps or those issues in the images, there’s a lot of places that don’t have great access to those experts. And so there’s a lot of potential for good if you take someone who’s average and can give them some pointers and make them excellent. But that just comes into transparency. It’s really important that we know exactly what standard this meets.

    CHAKRABARTI: Transparency, indeed. But … quickly, Dr. Rosenthal, I hear both of you when you say, you know, how effective compared to what? But who should be setting the guidelines to answer that question? Should that be coming from FDA? Should it be coming from the companies? I mean, it’s an important question. How do we begin to answer it?

    ROSENTHAL: Well, the FDA isn’t allowed to make that decision right now. So that’s an endemic problem there. And we don’t have a good mechanism in this country to think about that. And to think about, again, appropriate use. Yes, maybe a device that’s pretty good at screening, but not as good as seeing a specialist at the Mayo Clinic is really useful in places where you don’t have access to specialists. But that’s where transparency comes in. But do you really want to trust the companies that are making money from these devices and these programs to say, Well, we think it’s this effective or not, we just don’t have a good way to measure that at the moment.

    CHAKRABARTI: Okay. So then we’ve only got another minute and a half or so, Dr. Rosenthal. I’d love to hear from you, what do you think the next steps should be like? Because there’s no doubt that AI is going to continue to be developed for health care. … So what would you like to see happen in the next year or five years to help set those guardrails?

    ROSENTHAL: Oh, that’s such a huge problem, because I don’t think we have the right expertise or the right agency at the moment to think about it. And particularly in our health care system, which is very disaggregated and balkanized and, you know, AI has tremendous potential for good, but it also has tremendous potential for misuse. So I think we need some really large scale thinking, maybe a different kind of agency. Maybe the FDA’s initial charter is due for rethinking. But at the moment, I just don’t think there’s a good place to do it.

    Part III

    CHAKRABARTI: It’s episode three of our special series, Smarter health. And today we’re talking about regulation or the new kind of framework, mindset or even agency that the United States might need to effectively regulate how AI could change American health care. I’m joined today by Professor Finale Doshi-Velez. She’s a professor of computer science at Harvard University, and she leads the Data to Actionable Knowledge group at Harvard Computer Science as well.

    Now here again is Dr. Kedar Mate of the nonprofit Institute for Healthcare Improvement, and he talked with us about how regulators can use the expertise in the industry to develop guidelines to regulate.

    DR. KEDAR MATE: I think some of this, by the way, can be done collaboratively with the industry. This doesn’t need to be a confrontational thing between regulatory agencies, you know, versus industry.

    I think actually industry is setting standards today about how to build algorithms, how to build bias-free algorithms, how to build transparency in a process, how to build provider disclosure, etc. And a lot of that can be shared with the regulatory agencies to help power the first set of standards and write the regulatory rules around the industry.

    CHAKRABARTI: Professor Doshi-Velez, you know, I wonder if even thinking of this as how do we build regulation is maybe not the best way to think about it, because regulation to me feels very downstream. Should we, when we talked about mindset, should we be thinking more upstream?

    And should really one of the purposes of government be to tell AI developers, Well, here are the requirements that we have, like the kinds of data used to train the algorithm. Or here’s what we require regarding transparency, things like that that are further upstream. Would that be a different and perhaps more effective way to look at what’s needed?

    DOSHI-VELEZ: 100 percent agreed that having requirements earlier in the process would be super helpful. And I also would say that it needs to be a continual process. Because these systems are not going to be perfect the first time. They’re going to need to be updated. And we’ve talked about the algorithm gathering data to update itself, but also the data changes under your feet.

    You know, people change, processes used in medical centers change, and all of a sudden your algorithms go out of whack. So it does need to be a somewhat collaborative process of continually, Where are your requirements, how are you going to change, what are you going to disclose so everyone else can notice? Because as was noted before, it may not be in the companies’ interest or even the purchaser’s interest to be monitoring closely, but if certain things need to be disclosed, then at least it’s out there for the public to be able to see.

    CHAKRABARTI: We like to sort of occasionally leave the United States and learn from examples abroad. And I’d really like to do that in this situation. So let’s hop over to Cyprus, because that’s where Yiannos Tolias is joining us from. And Yiannos is the legal lead on AI liability in health care for the European Commission. And he worked on the teams that developed the AI Regulation and the Health Data Regulation for the EU. Yiannos, welcome to you.

    YIANNOS TOLIAS: Thanks a lot. That’s very nice to be here.

    CHAKRABARTI: Can you first tell us why the European Commission very intentionally prioritized regulating AI?

    TOLIAS: Just to mention that, of course, the views I am expressing will be personal, not necessarily representing the official position of the European Commission. But I could of course describe the regulatory frameworks that we have now in place. Basically the story of the European Union started back in 2017 where the European Parliament and later the European Council, which is the institution that represents all the 27 member states of the EU, have asked the Commission to come up with a legislative proposal on AI.

    And specifically to look at the benefits and risks of AI. More specifically, they referred to issues like opacity, complexity, bias, autonomy, fundamental rights, ethics, liability. So they asked the Commission to consider and study all those and come up with a piece of legislation. And the Commission came up with the so-called AI Act … which was published as a proposal last year, in April of 2021.

    And now to the European Parliament and the Council for adoption. Of course, maybe amendments too. And there is four main objectives that these regulation aims at. First of all is to ensure safety. Secondly, to ensure legal certainty. So also, the manufacturers are certain about their obligations. Thirdly, to create a single market for AI in Europe. So basically, if you develop AI in France and you follow those requirements without any obstacles, you should be able to move it throughout Europe to Sweden and Italy. And thirdly, to create a governance around AI and protect fundamental rights.

    CHAKRABARTI: Okay. Can I just step in here for a moment? Because I think I’m also hearing that there was something else, perhaps even more basic, because you had told us before as well that in a sense, creating a kind of framework to regulate AI like is in place for pharmaceuticals in Europe. You know, it might increase the cost to develop and manufacture AI.

    But I think you’ve told our producer that it creates an equal level of competition. Everyone has to fulfill the requirements. And so therefore, it creates trusts with physicians who could deploy or use it.

    TOLIAS: Yeah. These are the four objectives I mentioned, so I put them a bit into four groups. First, this piece of legislation aims to create safety. So you are feeling safe as a patient, as a physician, to use it and even not being liable using it and even trust it. So to create like a boost of uptake of AI. Secondly, to ensure legal certainty, to boost basically innovation. … Because everyone, all the manufacturers, would be at the same level playing field, in the sense that they would all be obliged to do the same and no other member state in the EU.

    Because these would be, let’s call it, at the federal level. So it will be applicable to all the member states or the member states of the EU would not be able to come up with additional requirements. So you have a set of requirements at EU level and every startup, every company in the EU would be following those.

    CHAKRABARTI: Okay. So let’s talk momentarily about one of those specific requirements. I understand that there’s a requirement now about the kind of data that algorithms get trained on, that companies have to show through the EU approval process, that they have trained their algorithms on a representative data set, that accurately represents the patient population across Europe.

    TOLIAS: Yes, exactly. There are different obligations in the AI Act. One of which is the data governance, data quality obligations. And there are a series of requirements about annotation, labeling, collection of data reinforcement, or how you use all these issues of data, including an obligation that the training, validation and testing datasets should consider the geographical, behavioral and functional settings within which the high risk AI system … is intended to be used. …

    CHAKRABARTI: Stand by for a second, because I want to turn back to Professor Doshi-Velez. This issue brings together, we talked a lot about the data used to train algorithms in our ethics episode and now regulation as well. Let’s bring it back to the U.S. context. I can see the advantage of putting into place a requirement. Let’s say FDA did, that said all AI developers have to train their algorithms on data that’s representative of the American patient population. Is that possible? Where would that data come from?

    DOSHI-VELEZ: I think that ultimately has to be the goal. We don’t want populations left out, and yet currently we have populations that are left out of our datasets. I think there absolutely has to be an obligation to be clearer about who this algorithm might work well for. So that you don’t apply it incorrectly to a population that it might not work well for, or to test it carefully as you go. But ultimately, I think we need better data collection efforts to be able to achieve this goal.

    CHAKRABARTI: So there’s even a further upstream challenge you’re saying, okay, here in the United States. Well, there’s another issue that I’d like to learn how Europe is handling it. And it’s one that we’ve mentioned a couple of times already. And that’s the need for transparency throughout this process, from the algorithm development process, through the regulatory process. And we asked Dr. Matthew Diamond at FDA about this.

    And he told us that FDA has sought input from patients, for example, about what kinds of labels, what they want to know about AI tools being used in health care. And he said that transparency is critical for each stakeholder involved with the technology.

    DR. MATTHEW DIAMOND: It’s crucial that the appropriate information about a device, and that includes its intended use, how it was developed, its performance and also when available, its logic. It’s crucial that that information is clearly communicated to stakeholders, including users and patients.

    It’s important for a number of reasons. First of all, transparency allows patients, providers and caregivers to make informed decisions about the device. Secondly, that type of transparency supports proper use of device. For example, it’s crucial for users of the device to understand whether a device is intended to assist rather than replace the judgment of the user.

    Third, transparency also has an important role in promoting health equity because, for example, if you don’t understand how a device works, it may be harder to identify. Transparency fosters trust and confidence.

    CHAKRABARTI: That’s Dr. Matthew Diamond at FDA. Yiannos Tolias, Europe has put in something that I’ll just refer to as a human supervision provision. What does that do and … why is that important for the trust and transparency aspect of regulating AI?

    TOLIAS: I think there is an interesting issue which was raised, of where do you find the data to ensure that it is representative of the people in Europe. And this is a very good point. That’s why it was actually thought, it was considered in the EU that that would be a problem. Hence why we have another piece of legislation, what is called the European Health Data Space Regulation, which was published just a couple of weeks ago, the 1st of May actually, of this year.

    Which basically provides the obligation of data holders, like a hospital, to be making their data available. … And then researchers, regulators would be able to access those data in a secure environment, anonymized and so on, to be training, testing, validating algorithms. So basically the idea is that you bring all the 27 member states, all, let’s say, hospitals or all data holders, which could be also beyond hospitals, to be basically coordinating their data and researchers, startups, regulators, to be able to use all these pool of data. So there is a new regulation on that specific issue, too.

    CHAKRABARTI: … I definitely appreciate this glimpse that you’ve given us into how Europe is handling coming up with a new regulatory schema for AI in health care. So Yiannos Tolias, legal lead on AI liability in health care for the European Commission. Thank you so much for being with us today.

    TOLIAS: Thanks a lot. It was great pleasure to be with you.

    CHAKRABARTI: Professor Doshi-Velez, we’ve got about a minute left and I have two questions for you. First of all, the one thing that we haven’t really addressed head on yet is the fact that everyone wants to move to a place where the constant machine learning aspect is one of the strengths that could be brought to health care.

    And it seems right now that the FDA is looking at things as fixed, even though they know that constant development is going to be in the future. What do we need to do to get ready for that?

    DOSHI-VELEZ: I’m going to take a slightly contrary view here. I don’t think that algorithms in health care need to be learning constantly. I think we have plenty of time to roll out new versions and check new versions carefully. And that is actually super important. And what I worry about, as I said before, is not only, you know, we have to worry about the algorithms changing. But the data and the processes changing under our feet. And that’s why we just need, you know, post-market surveillance mechanisms.

    CHAKRABARTI: Okay, that’s interesting. So then I’m going to give you ten more seconds to tell me in the next year or five years, what one thing would you like to see in place from regulators?

    DOSHI-VELEZ: So as I mentioned earlier, there are some really great checklists out there that are being developed in the last year in terms of transparency. I would love to see those adopted. I think transparency is the way we’re going to get algorithms that are safe, and fair and effective.

    This series is supported in part by Vertex, The Science of Possibility.

  • Health tips for adolescents: 5 problems due to obesity, ways to lose weight | Health

    Obesity cases are spiking in children, which means too much body fat and a higher body mass index (BMI) is a silent killer leading to higher mortality rates. Certain factors such as eating a diet high in calories, a sedentary lifestyle, genes, a sluggish metabolism, lack of sleep, stress, suffering from endocrine diseases and intake of junk, processed and canned foods can lead to obesity in children.
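    Since BMI is mentioned above, here is a minimal sketch of how it is calculated: weight in kilograms divided by the square of height in metres. The numbers in the example are hypothetical, and for children and adolescents clinicians interpret BMI against age- and sex-specific growth-chart percentiles rather than the fixed adult cut-offs noted in the comments.

    ```python
    def bmi(weight_kg: float, height_m: float) -> float:
        """Body mass index: weight (kg) divided by height (m) squared."""
        return weight_kg / (height_m ** 2)

    # Hypothetical example: a 70 kg teenager who is 1.65 m tall.
    print(round(bmi(70, 1.65), 1))  # 25.7
    # Adult reference ranges: 18.5-24.9 "normal", 25-29.9 "overweight", 30+ "obese";
    # paediatric assessment uses age/sex percentile charts instead of these fixed bands.
    ```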

    Did you know obesity cases are not only rising in adults but even adolescents? Being obese can make youngsters fall prey to a variety of health problems or serious complications. In an interview with HT Lifestyle, Dr Padma Srivastava, Consultant Obstetrician and Gynaecologist at Motherhood Hospitals in Pune’s Lullanagar, revealed the five problems that occur owing to obesity:

    1. Hypertension and cholesterol – Children who are obese will have high blood pressure or even high cholesterol. These can lead to heart disease in the longer run.

    2. Diabetes – Obesity can raise the risk of type 2 diabetes. It can lead to resistance to insulin, the hormone that controls blood sugar. When obesity causes insulin resistance, blood sugar levels will be higher than the recommended range.

    3. Joint problems – Did you know? Being obese can induce joint pain. It will affect the knees and can cause knee pain while walking or doing any other exercise. One will also be at risk of osteoarthritis.

    4. Sleep apnoea – One who is obese will also have sleep apnoea, which is a deadly sleep disorder whereby one gasps to breathe. It tends to interrupt sleep through the night and causes sleepiness during the day. It can lead to heavy snoring. The risk for other respiratory problems such as asthma is higher in an obese child.

    5. Depression – People with obesity will be depressed, anxious, stressed, and frustrated. They will have poor self-esteem, will be irritable, anxious, and feel lonely. They might avoid social interactions and be confined to home owing to being overweight. Such children may have body dysmorphia and they will also be fat-shamed or bullied.

    Tips for weight loss in adolescents:

    Dr Padma Srivastava advised, “As parents, you need to take various steps to help your children lead a healthy life. Try to make sure that the child follows a well-balanced diet inclusive of all the vital vitamins and minerals. Include fresh fruits, vegetables, whole grains, legumes, pulses and lentils in their diet. Children should avoid junk, canned, processed and oily foods.”

    She added, “They need to exercise daily and do activities such as jogging, cycling, swimming, gymming, aerobics, Zumba, walking or running. Try to de-stress by doing yoga or meditation and maintain an optimum weight. Parents can take the help of an expert who will guide them regarding what to eat and avoid.”

  • GOP takes indirect aim at Fetterman’s health in Pennsylvania Senate race

    The well-wishing is over. Now Pennsylvania Lt. Gov. John Fetterman’s stroke is officially a campaign issue in the swing state’s U.S. Senate race.

    But rather than directly criticize Fetterman over his health, Republicans are taking a different approach: bashing the Democrat for not being more transparent about the stroke that hospitalized him four days before he handily won the May 17 primary.

    The Fetterman campaign waited two days to disclose his hospitalization, issued a statement that puzzled cardiologists and later acknowledged that he had a previously undisclosed heart condition that led doctors to implant a pacemaker with a defibrillator last month. He was released from the hospital several days after the election.

    On Thursday, the National Republican Senatorial Committee, or NRSC, unveiled a web ad that featured news coverage of pundits and reporters discussing the Fetterman campaign’s evolving explanations of his health and hospitalization, asking, “Does John Fetterman Have a Problem Telling the Truth?”

    The ad, from the campaign arm of Senate Republicans, was a marked departure from recent remarks by Fetterman’s opponent, celebrity TV doctor Mehmet Oz, who wished him well when he was first hospitalized.

    It is also the first time Fetterman’s health has been raised — albeit indirectly — by Republicans, who plan a paid TV media buy to tarnish the brand of a Democrat who built a reputation as a larger-than-life straight talker, according to an NRSC consultant who was not authorized to discuss campaign strategy publicly. The consultant said the NRSC also plans to target Fetterman over how he has discussed a 2013 incident when he pulled a gun on a Black man he suspected of criminal activity.

    Pennsylvania Democrats, meanwhile, have expressed concerns about how the Fetterman campaign has handled both the stroke and discussion of the gun incident. But a spokesman for the lieutenant governor said the GOP criticisms will not work with voters.

    “Pennsylvania voters know and trust John Fetterman. Who they don’t trust is Mehmet Oz, who is a fraudster and a scam artist who isn’t even from and does not know Pennsylvania,” Fetterman spokesman Joe Calvello said, obliquely referring to a Democratic Senatorial Campaign Committee web ad attack on Oz, whose campaign wouldn’t comment.

    Fetterman’s wife, Gisele Fetterman, insisted in an NBC News interview that aired Wednesday that her family and the campaign had been open about his condition as they were just trying to “navigate these very personal and hard things very publicly.”

    “We have done a wonderful job on transparency,” she said.

    On Election Day, Gisele Fetterman suggested her husband’s condition wasn’t so bad, calling the stroke “a little hiccup.” She said he’d be “back on his feet in no time.” But Fetterman remained in the hospital for nine days, and his campaign says he’s still resting and might not be back on the trail until July.

    It was not until last Friday that Fetterman, in a statement issued by his doctor, disclosed that he had been diagnosed in 2017 with “atrial fibrillation, an irregular heart rhythm, along with a decreased heart pump.”

    Fetterman, who is 6-foot-8 and weighed 418 pounds at the time of his diagnosis, when he was the mayor of Braddock, immediately went on a diet after the diagnosis, and a year later he touted his new healthy lifestyle, telling the Pittsburgh Tribune-Review he had lost 148 pounds. He didn’t mention his heart trouble in that interview, nor was he taking his heart medications or seeing his doctor at the time.

    “He probably thought to himself: ‘I lost 150 pounds. I’m running around. I’m healthy now. I don’t need to tell anyone or see my doctor or take my medications.’ Well, that was dumb. Now he’s got a pacemaker, and people are asking questions,” said Neil Oxman, a Pennsylvania Democratic strategist.

    Oxman said the Republican attack on Fetterman’s transparency was the only way to broach his health without seeming cruel. But he said it would have limited salience with voters because of the timing and because Fetterman is expected to be back on the campaign trail.

    “If he’s up and running three months from now, no one will care,” Oxman said.

    Republican consultant Charlie Gerow, who ran unsuccessfully for governor in last month’s GOP primary, agreed with Oxman that the attack is “not a game changer,” saying Fetterman will be bogged down more by the toll of inflation and other headwinds facing Democrats in the midterm elections.

    But Gerow said that in a closely divided swing state, everything matters.

    “When candidates don’t talk straight, it doesn’t play well,” Gerow said.

  • Jeff Bridges Gives Health Update After Battling Cancer and COVID


    Jeff Bridges Posts Health Update After Cancer Diagnosis


  • Aging experts offer tips for longevity and health – News

    UAB experts offer strategies seniors can take to maintain control of their health in several areas of wellness.

    Written by: Mary Ashley Canevaro
    Media contact: Anna Jones

    Lifestyle factors such as exercise and diet can be as important as genetics when it comes to living a long life and aging gracefully, and aging well can sometimes be as simple as following a few easy steps. Experts from the University of Alabama at Birmingham Division of Gerontology, Geriatrics and Palliative Care, and the UAB Division of Preventive Medicine offer some basic steps older adults can take to maintain control of their health.

    Exercise

    A significant way older adults can age well is by regularly engaging in exercise and physical fitness; but when it comes to recommendations for specific exercises, advice may vary. Thomas Buford, Ph.D., a professor in the UAB Division of Gerontology, Geriatrics and Palliative Care and director of the Center for Exercise Medicine, says any movement is beneficial.

    “While some health guidelines cite 150 minutes per week of moderate to vigorous physical activity (or 10,000 steps per day), substantial research shows that lower levels — both in duration and/or in intensity — can still have significant health benefits for older adults,” Buford said. Buford says the best way to approach exercise is to find an enjoyable activity. Buford also recommends incorporating cardio, strength training, balance and stretching.

    “The most effective exercise regimen is one that you enjoy and can stick with,” Buford said. “Try to get in as much exercise as you can by participating in activities you like doing.”

    Nutrition

    Nutrition can play a big role in how the body ages; but luckily, eating a healthy diet does not have to be hard. Andrew Duxbury, M.D., professor in the UAB Division of Gerontology, Geriatrics and Palliative Care, encourages older adults to not overthink it.

    “In general, older people just need a well-balanced diet like anyone else,” Duxbury said.

    Kaitlyn Waugaman, a registered dietitian and program manager in the UAB Division of Preventive Medicine, explains that a well-balanced eating plan includes fruits, vegetables, whole grains, low-fat or fat-free dairy, and protein. She also says it is important to choose foods low in saturated fat, trans fat, salt and added sugar.

    Duxbury says older adults can often use more protein in their diet than younger people, and they typically do not need vitamins and other supplements unless they have specific health conditions. However, some older people, particularly women, may have increased needs for added calcium and vitamin D beyond what a regular diet supplies, to combat the tendency toward thinning bones and to maintain a healthy weight.

    Metabolism

    Waugaman explains that metabolism typically slows down as people age because of changes in body composition that lower energy needs.

    So, what can older adults eat or do to keep things moving? The simple answer: Eat well.

    “Eating well can improve the quality of life for older adults,” Waugaman said. “As we age, we should avoid diets or drastic weight loss. We may think diets are the best way to be healthy, but this is not true. Diets, especially ones that eliminate food groups, can lead to nutrition deficits and cause more harm than good.”

    Waugaman recommends setting goals for eating all food groups and keeping a stable weight. If there is a need for weight loss or specific nutrition goals, talk to a registered dietitian or nutritionist.

    Chronic disease

    A concern among older adults may include being more susceptible to chronic disease.

    Duxbury says seniors may be more vulnerable to disease because their immune systems are less robust, and they often experience a general decline in physiology and organ capacity.

    When asked how older adults can prevent disease, Duxbury says that, although there is no way to completely avoid potential illnesses during aging, there are a few solid routine habits to incorporate for optimal health.

    “It is very important to avoid falls and take the right safety measures that are needed to reduce this risk,” Duxbury said. “It is also important to understand the medications you are taking and have a trusted professional who will help with them and is not afraid to stop medications that may no longer be needed.”

    Finally, Duxbury recommends accepting the aging process and the changes that come with it.

    Mental health

    Another common interest among seniors and people approaching retirement age is maintaining healthy cognitive function and memory.

    Duxbury explains that research has tried to find the perfect brain exercise to stop cognitive decline with aging, but no specific solution has been found yet. However, there are some ways to keep the brain and memory healthy when getting older.

    “First, we know that the brain in older adults is a ‘use it or lose it’ organ,” Duxbury said. “Individuals who retire from active life to passive pursuits around the home, often with little stimulation aside from the television, are much more likely to develop cognitive decline than individuals who keep an interest in problem-solving and learning new things.”

    Duxbury says the best things an aging adult can do to maintain brain function are to keep their mind active with stimulating tasks such as reading about new things, attending lectures, solving puzzles, interacting with new people, and looking forward to the new day with intellectual curiosity.

    Emotional health

    As people get older, they tend to isolate themselves and feel alone; but relationships and connections play a significant part in helping older adults maintain their emotional health as they age.

    “Human beings are social animals,” Duxbury said. “We are designed by nature to live in a mutually supportive group of people. Separating ourselves from other people tends to raise our stress levels and is generally unhealthy.”

    “Older adults need both the company of other older adults — those who see and understand the world in a similar way through shared experience — but also younger people who can keep them active and engaged in the new and keep them learning,” Duxbury said. “In turn, older adults assume their natural roles of mentors, storytellers and keepers of cultural knowledge for the young.”

    For seniors who are looking for connection or younger adults who would like to volunteer, the Birmingham Crisis Center offers a primary service for senior citizens, retirees and widowed people to talk with volunteer counselors over the phone on a regular basis. Signing up for the program is simple. Visit Crisis Center Birmingham or call the Crisis Center’s Senior Talk Line at (205) 328-8255 and ask to be signed up.