2023 IASLC CT Screening Workshop
Video: Application of AI in Lung Screening
Video Transcription
Thank you, Jeff, and thank you to the organizers for inviting me. What a wonderful day we've had, with lots of lively discussion and questions, so it's really a pleasure to be here. All right, success. So I like this figure drawn by my friend, Dr. Sergei Bonn; it's part of a review article from ASCO this year that he and Dr. PC Yang and I put together. We were talking about the gaps in the current grasp and reach of lung cancer screening. These bars show, from U.S. data on patients diagnosed with lung cancer, how many of them would have been eligible for screening based on their characteristics, mostly smoking and age. And then, of course, the dismal participation that we have in the U.S. in our current lung cancer screening program. So this gap is really our failure to grasp the lung cancer patients who are screen eligible. But what I'm going to focus on in this talk is this gap here: our lack of understanding of an individual's risk of developing lung cancer, at least when we go only by age and smoking criteria. We've talked about this problem in the morning, so I don't want to belabor it. But there are many, many risk models that have been developed, and we heard this morning about increasingly complex ones. On the slide, I've listed the PLCO criteria. Many of these, certainly the first-generation risk models, focus heavily on smoking, with many variables in the model pertaining to smoking. But as we heard, the epidemiology of lung cancer is changing. So is it necessary for our models to focus so much on smoking? One thing that came across from the presentations today is that there are so many factors, and we don't know them all. Can we ever build a perfect model that requires input of known information? Are we going to know the zip code where people lived, and the PM 2.5 in that zip code?
And whether there was radon in someone's home, and how much time they spent here and there, and what their genetics are, and what their family history is. So it is hard to have a model that requires you to know all the factors. With that background, I've been very privileged to collaborate with this woman here in the photo, Dr. Regina Barzilay, a computer scientist at MIT, and also with Dr. Florian Fintelmann from Thoracic Imaging at MGH. We've developed a model that uses radiology input data to try to understand future risk of lung cancer. What I think is different, and maybe sometimes hard for clinicians to wrap their heads around, is what we're doing with these scans. So I'll tell you about Sybil, the model we developed. We used scans from the NLST to develop this model. But a key thing to understand is that in medicine, we usually use images to understand what is going on with the patient in front of us right now. Someone comes in with pain, we get an X-ray, we see a broken bone. Someone comes in with cough and fever, we get imaging, we see pneumonia. In oncology, we're often looking at tumors: their size, whether they're responding to treatment or not. So we usually use these visual images to understand what is happening inside the patient's body at this moment. What this model does instead is use a radiology image taken today to try to predict something that might happen in the future. So it's a really different use of radiology than the clinical use of radiology. What we did was take all these thousands of scans that we obtained from the NIH, from the NLST trial, and try to teach this machine learning model: we told the computer, this patient was diagnosed with lung cancer on this specific date, and this patient was never diagnosed with lung cancer within six years.
And we basically tried to teach the computer to recognize visual patterns associated with a future diagnosis of lung cancer. A key advantage of this approach is that it doesn't require image annotation. In other words, you don't need a radiologist to circle any particular region of interest, like a nodule. This is not an algorithm that assesses the risk of a nodule, asking whether this nodule is cancer or not. You don't need to input any clinical data like age, sex, race, or smoking information, anything about the patient. It's all derived from the total volumetric image of the CT scan. And the way the model was designed, we get a six-year annualized risk: what is the patient's risk of lung cancer one year after this scan, two, three, all the way up to six? What we published in JCO earlier this year were the results of the development of the model and then three independent validation cohorts. Each of these panels, A, B, and C, is a different patient cohort. Panel A shows a validation set within NLST; these patients were completely separate from the set used to derive the model. Panel B shows an independent validation cohort from Mass General, from our own screening program. Both panels A and B are similar in that they're U.S. patients: predominantly white, over age 50, heavy smokers. These are the USPSTF-type criteria patients. Panel C is very different. We also had an independent validation cohort from Taiwan, from a specific hospital there. It's not part of the TALENT study that you've seen presented today. But for the most part, these patients are majority Asian and also majority non-smoking. What you see in the different colored lines are ROC curves showing the relationship between sensitivity and one minus specificity of the Sybil algorithm: how accurate is it when predicting risk of cancer?
The different colored curves represent Sybil's prediction in that cohort out to the different years. Year one is blue. You can see that in all the cohorts, year one is the most accurate, and it gets a bit murkier by year six. But in general, all of these curves are quite good. The most common question I've been asked is: what is Sybil seeing? What kind of pattern corresponds to a future risk of cancer? And that is really hard to answer. Someone else mentioned today that with machine learning you don't really know what the variables are. I think that's part of the power of machine learning, because it can pull out patterns that human brains wouldn't necessarily recognize. But to give you a visual, clinical example, here is an anecdotal case. This is an actual patient from the NLST trial, a 69-year-old male with a heavy smoking history. The baseline scan shown here was read by the radiologist as negative: minor abnormalities, not suspicious for lung cancer. But when Sybil took a look at this scan, it was placed in the 75th risk percentile. Now, we can ask Sybil to show us what part of the scan is driving this risk assessment. This is what we call an attention map. This is not a PET scan, okay? This is a Sybil attention map. When we asked Sybil why it gave this patient's scan a 75th risk percentile, the attention was focused on this region in the right lung. What happened with this particular patient is that a worrisome nodule appeared within a one-year interval in this location. This patient was participating in NLST, so the follow-up was excellent. But as we know, and I think Dr. Silvestri's recent publication still bears this out, in real-world clinical practice there's a 75 percent chance that this patient would not have come back.
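The per-horizon accuracy described above, one ROC AUC per prediction year, can be sketched as follows. This is a simplified illustration, not the published Sybil evaluation code: the data layout is hypothetical, and, for brevity, patients never diagnosed are treated as negatives at every horizon rather than being properly censored.

```python
from itertools import product

def auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of ROC AUC: P(positive score > negative score),
    with ties counting 0.5."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(scores_pos, scores_neg))
    return wins / (len(scores_pos) * len(scores_neg))

def auc_by_horizon(patients, horizons=range(1, 7)):
    """patients: list of (risk_by_year, dx_year) tuples, where risk_by_year
    maps a horizon in years to the model's predicted cumulative risk, and
    dx_year is the year of lung cancer diagnosis (None if never diagnosed
    within follow-up). Returns {horizon: AUC} for each computable horizon."""
    out = {}
    for h in horizons:
        # Positives: diagnosed within h years of the scan.
        pos = [risks[h] for risks, dx in patients if dx is not None and dx <= h]
        # Negatives: not diagnosed within h years (censoring ignored here).
        neg = [risks[h] for risks, dx in patients if dx is None or dx > h]
        if pos and neg:
            out[h] = auc(pos, neg)
    return out
```

In this framing, "year one is the most accurate" simply means the AUC for horizon 1 is highest; the same score vector is re-thresholded against a different positive/negative split at each horizon.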
And so this cancer would not have been caught, although in the clinical trial it was. The patient had surgery for a 2.2-centimeter cancer. For NLST purposes, this was actually a win: a screen-detected, stage one cancer. But in the real world, this patient may very well not have come back, and Sybil was able to tell us the year prior that this was a risk. And when Sybil took a look at the later scan that had the nodule, again, the attention was really focused here. This is not a PET scan; no FDG was administered. This is just Sybil saying, what part of the volumetric CT scan am I worried about? So in general, what do we do with this type of tool? That is something my collaborators and I are really thinking hard about, because there are so many different problems to tackle with lung cancer screening, as we've been talking about all day, and so many different variables as we move around the world through different regions. But one thing that has come through in many people's talks is our need to better understand personalized risk. There are many different risk calculators being developed, but the predominance of the publications are around the PLCO model. So we compared Sybil with PLCO on the NLST scans, our validation subset, the part that wasn't used for model building but for independent validation. Because NLST was a clinical trial, we had all the variables that go into the PLCO model, so we were able to make that comparison. And across all six years, using imaging to assess someone's future risk of lung cancer was more predictive than the 11 questions that focus heavily on smoking. And although I don't know this, I think that conceptually, one thing Sybil could do is take into account all these factors we've been talking about.
Things like exposures to different toxins and pollutants, and things like genetics. We may not know how to ask our patients the questions that capture all of these factors, but how their lungs look, and how the other tissues in the chest look, may carry signs that can tell a computer like Sybil that this person has a high risk for lung cancer and this other person doesn't. So could we think about a personalized and potentially population-based screening strategy, where everyone gets a baseline scan to tell us their future risk, and the interval of screening is derived from that? Someone with a high baseline risk might be scanned more often. Someone with an intermediate baseline risk might be scanned infrequently. And someone with a very low baseline risk might be scanned very infrequently and just need an update in 10 years to see what their new risk is. This is one potential vision for the future. And honestly, if we did something like that, it would fall in line with how we think about screening for other cancers. We do baseline cervical cancer screening and include an HPV test, and based on the HPV results, we may modulate up or down the frequency of Pap smears. Same with colon cancer screening: at least in the U.S., everyone is recommended to get a colonoscopy at age 45, and your subsequent colonoscopy schedule is based on what is seen. Having no polyps versus many polyps can influence how often you need a repeat colonoscopy. Doing something like that for lung cancer screening might take away not only the stigma associated with the smoking questions, but also the notion in our minds that smoking is the only risk factor for lung cancer. So the concrete next steps we're taking: we're learning about Sybil in some ongoing experimental screening programs for high-risk patients who cannot currently access screening.
We have a firefighter health study going on at Mass General. Dr. Shum, who's here, is leading a female Asian non-smoker screening study in the U.S., to which we are going to apply Sybil. And Dr. Yang is leading a Black women's screening study in Boston and Chicago; we're also going to apply Sybil to those scans. In terms of people who can access screening currently, I'm working with Dr. Sarah Gabon and other colleagues at Mass General to figure out the best way to integrate Sybil into a prospective clinical trial, to optimize the benefit of getting patients through lung cancer screening who currently qualify under the USPSTF criteria. I'll skip this part in the interest of time and close with my conclusions: we need novel ideas to improve lung cancer screening, and we need to include all of the population. I'm along the lines of the patient who was mentioned: if you have lungs, you can develop lung cancer, and we need to be thinking about all our potential lung cancer patients when we think about prevention and detection. Sybil is one tool that can provide a personalized future risk of lung cancer; I hope that others are developed, and we need better practical implementation science trials to understand how this could really affect our practice in the real world. Thank you very much.
Female Speaker: Thank you, Jeff, for the introduction. AI is hot. AI is happening. AI is the thing. AI is new. No, it's not new. It came along already in the 1950s with the introduction of the computer, and AI is defined as intelligence demonstrated by machines rather than humans or animals. As computer power improved, so did the possibilities of AI, and you see machine learning emerge as part of AI, defined as computers with the skill to learn without explicit programming.
With the introduction of GPUs, deep learning came up, and computers were able not only to learn but also to improve on their own using neural networks. AI gives us a lot of possibilities for optimizing the lung cancer screening workflow, and I want to go through a couple of them. First, CT scan quality optimization; then pulmonary nodule detection, measurement, and linkage; pulmonary nodule classification, as a malignancy detector but also as a malignancy excluder; and optimization of the workflow for negative and positive nodules. And I know there are many more applications. So to start off, CT scan quality optimization. We screen more and more people, and for this we use tons of CT scans, and all of those CT scans come with a lot of dose. Of course, the lower the dose, the better, and we know that AI can help us improve image quality when applied to, for instance, ultra-low-dose CT scans. This is a study of 100 thoracic ultra-low-dose CT scans performed in children, in which three types of post-processing techniques were compared: first weighted filtered back projection, then the ADMIRE iterative reconstruction software, and then a particular AI software called PixelShine. The endpoints of the study were noise, measured in Hounsfield units; subjective image quality, rated by the eight readers in the study; and the readers' reading time. What they found was that with the AI software, noise was significantly lower, subjective image quality was the highest, and, even among less experienced radiologists, reading time was significantly reduced.
There are a number of software packages available on the market now to assist radiologists and reduce their workload: to automatically detect the nodule, measure it, segment it, give, for instance, nodule volume, show the segmentation from different angles, but also to automatically link a CT scan with a previous CT scan and measure the volume doubling time. AI can do all of this for you. I show you here four different companies, but there are many more on the market. AI can also help classify the nodule. First, I want to discuss how AI can help detect malignant nodules. This is a study performed by a group at Radboud University in the Netherlands. They tested a deep learning algorithm, which they developed using NLST data, on an external data set from the Danish lung cancer screening trial: 177 nodules in total, of which 59 were malignant. They compared the performance of their AI system with 11 clinicians and the PanCan model, and showed that their deep learning algorithm outperformed all but one clinician and also outperformed the PanCan model; it's shown here in the blue line. What was interesting was that the performance of some of the residents was higher than that of the radiologists, and even the pulmonologists, and I'm a pulmonologist myself, did not do too badly in classifying these nodules. This was not a full CT scan read; it was just classifying the nodules as benign or malignant. A problem with using AI to identify lung cancer is that the test characteristics, the sensitivity and specificity, usually are not high enough to correctly classify all lung cancers. You don't want to miss any, and you don't want too many false positive results. Also, promising AUCs are often reported, but the specific risk cutoff values are often lacking in the publications. That's why we thought of using AI in a different way.
So why not use AI to identify nodules that have essentially 100 percent certainty of being benign, so that these nodules can be automatically detected and ruled out from further reading by the radiologist in that particular screening round? We did this in a multi-center international trial including three centers and 2,106 nodules, of which around 10 percent represented lung cancer. All of these nodules were between 5 and 15 millimeters in size, so indeterminate nodules. And we used the lung cancer prediction convolutional neural network from Optellum. The network was trained on NLST data and validated on this external data set, and we found that we could rule out lung cancer in 20 percent of all participants with a negative predictive value of 99.5 percent, leading to a workload reduction for the radiologist of around 20 percent. This idea was then tested to see if we could optimize the nodule workflow for negative nodules. We did this using data from the Moscow Lung Cancer Screening Trial, and all the scans were read by Coreline Soft's AVIEW AI system, which automatically detected, measured, and classified the nodules. We asked the AI system to classify the nodules as negative, less than 100 cubic millimeters, or positive, more than 100 cubic millimeters. AI was used as a stand-alone reader and was then compared with five radiologists who read the scans and were asked the same question. AI had eight negative misclassifications, defined as a nodule the AI called less than 100 cubic millimeters that turned out to be more than 100. But when you compare the performance of AI with the other readers, AI did better than four out of five radiologists. This can lead to an enormous workload reduction, since the AI can automatically rule out all the scans with only a tiny nodule from further radiologist evaluation.
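The rule-out logic described here, discarding low-scoring nodules as benign and then checking the negative predictive value and the radiologist workload saved, can be sketched like this. It's a toy illustration with made-up scores and a hypothetical threshold, not the actual Optellum or AVIEW software:

```python
def rule_out_stats(scores, labels, threshold):
    """Given per-case AI malignancy scores and ground-truth labels
    (True = cancer), rule out every case scoring below `threshold`.
    Returns the negative predictive value of the ruled-out set and the
    fraction of cases the radiologist no longer needs to read."""
    ruled_out = [(s, y) for s, y in zip(scores, labels) if s < threshold]
    if not ruled_out:
        return {"npv": None, "workload_reduction": 0.0}
    true_negatives = sum(1 for _, y in ruled_out if not y)
    return {
        "npv": true_negatives / len(ruled_out),
        "workload_reduction": len(ruled_out) / len(scores),
    }
```

The threshold is the whole game: it is chosen on a training set so that the NPV of the ruled-out group stays near 100 percent (99.5 percent in the trial above), and the workload reduction is then whatever fraction of scans falls below it.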
We did the same to validate this on data from the UK Lung Cancer Screening Trial, and one of our fellows, Harriet Lancaster, will present a mini-abstract on it on Tuesday. Positive nodules, which we talked about in the previous session, are a real issue in lung cancer screening. They represent about 4 to 5 percent of all nodules detected in each screening round, and the different screening guidelines do not give any guidance on how to work up these nodules. The advice is to send the participants to the pulmonologist, who can then decide on whatever management he wants. So the question is, can AI help? We know that only 1 to 2 percent of all participants are diagnosed with lung cancer each round, so around 50 percent of all referred participants have a benign nodule. Can we use AI to reduce false positives? These are just some hypothetical ideas, but we know that most participants referred to a pulmonologist will, as a first step, get a PET-CT. Can we combine the PET-CT results with the AI CT results and build a model to rule out malignancy in more nodules, so that we don't need invasive workup in them but can instead return them to the screening program? Can we use AI to give more guidance on the diagnostic workup of first choice? These are questions to be answered in the upcoming years. This is a website showing all the AI tools currently available on the market. It's called AI for Radiology, and you can select your sub-specialty. Here I selected chest, and the modality CT. You can select the CE or FDA classification, and when you then search for chest CT, you find 39 results, so 39 approved AI software systems, and 19 of these 39 primarily focus on either the detection or the classification of pulmonary nodules. So to wrap up, there are many applications of AI to optimize workflow in lung cancer screening.
I showed you that you can use AI to optimize CT scan quality, to improve lung nodule detection and thereby reduce the radiologist's workload, to act as a malignancy detector, to act as a malignancy excluder, and to improve the workup of negative and positive nodules, and I'm sure there are many more applications. Thank you all for your attention.
I want to thank the organizers for inviting me to give this talk. I also want to thank Dr. Ping Lee for her advice about the favorite pastimes of people in Singapore: food and shopping. I think this is something all of us in the screening community are going to agree on. It's rare that we all agree on anything, but I think that's going to be one thing, and so far she's right. I'm talking about the clinical impact of identifying additional, and in parentheses, incidental findings. That's a great title I was given, and I appreciate it, because I have often said that "incidental finding" is not a very good term. You've heard several people today mention that these additional findings may wind up being more important than the lung cancer and may wind up saving more lives. "Incidental" is generally considered a minor term, and that's why I think that, as time goes on, we're going to see that these additional findings are perhaps even more important than the lung cancer itself. This was an editorial written in 2010 called "What We Can and Cannot See Coming," and it was based on three articles published in that issue of Radiology. One was on looking for osteoporosis on coronary calcium scoring scans, where they were able to show that you could measure osteoporosis. The second was the idea that on CT screening we should start quantifying the extent of coronary calcium. And the third was looking at the aorta on routine scans.
The point was that these were considered additional findings, and they were quantitative findings, and the editorial made the following statements. From our first days of residency, we're trained to report all of the findings we can see on each radiologic study. They were saying that we were taught that if you see a thyroid nodule, you've got to report it, or a breast mass on a lung cancer scan, you've got to report it. Those are the additional findings. But then they go on to say that here we're in a different world. These are not things that we see; these are things that need to be measured. The idea that we have to start measuring things is important: we have to measure the extent of osteoporosis, we have to measure the extent of calcium scoring, all of this quantitative work. They were saying that this is different for radiology; we haven't done this before. And then they go on to say that this paradigm shift offers a rich avenue for further research and development, and that rather than shying away from this new responsibility, the radiology leadership should embrace the possibility of adding a new dimension to our profession. This was in 2010: that we should really be focusing on all these quantitative things we can do, very similar to what we do with blood tests, where you don't just order a lymphocyte count. You get a blood test, and you get lymphocytes, monocytes, basophils, with reports of normal ranges. You can't simply get one thing anymore; all of these things are reported. Similarly, we're starting to see this in lung cancer screening, where the longer we go, the more things we find that we can report and quantify, and eventually we're going to do the same thing: we're going to put normal values on each of the things that we find. This kind of quantitative assessment.
I think this is the future of where this is going, and we're moving rapidly. Keep in mind, we're talking about things that we can see, and now things that we can directly measure. This whole idea of additional findings is broadly broken into two categories: those considered not clinically relevant, and those that are clinically relevant. If you go through the literature, there are lots of articles on how frequently these occur, with reports ranging from 80 to 94 percent. Really, it's 100 percent. Essentially every single scan we see will have an additional finding that could be reported, just as, with a lower and lower cutoff and higher and higher resolution scans, every single patient will have a lung nodule. Maybe 0.5 millimeters, or one millimeter, but every patient will have an additional finding. For clinically relevant findings, you see numbers roughly around 20 percent. To me, this also is not a great number, because it changes over time and evolves. What's clinically relevant is different in different places, and it changes based on what treatments become available, so it's also a bit of a rough estimate. Now, not everybody agrees that all these additional findings are of value. I'm just going to show you a few articles. This one came out after the VA study, and it raised important questions, really questioning the whole value of lung cancer screening: should we even be doing it when incidental findings turn up 40 times as frequently as lung nodules? It was used as a way of discrediting lung cancer screening. And I found this article from the NELSON study, back in 2007.
I suspect the opinions have changed, but it was really an interesting article. They looked at their incidental findings back then and found incidental findings in 81 percent, with only 1 percent considered clinically relevant and 0 percent ultimately found significant. Their conclusion at the time was: based on our results, we advise against systematically searching for and reporting incidental findings in lung cancer screening studies using low-dose multidetector CT. Now, I suspect you're seeing this thought process evolve, and you're seeing new studies that specifically look for incidental findings and their value. But these are the kinds of things we have to be thinking about. We have to make it important, we have to make it relevant, and we have to understand what that means. This is a statement from my eminent colleague, Dr. James Mulshine, who points out that medicine is rarely gifted with such a high-yield opportunity to move beyond the individual disease silo to better manage these comorbid conditions. It's a very powerful statement, especially when so much illness is tobacco-related: emphysema, bronchiectasis, cardiac disease, osteoporosis, even breast cancer. We can look for all of these tobacco-related diseases at one time, on one scan. It's really an opportunity for us, I think. So what does a positive clinical impact mean? Well, there has to be a finding; you have to make a finding. If the finding is made, you have to act upon it, and the action needs to be clinically useful. That's how you get a positive clinical impact from these additional findings. I'll show you a few articles and a few thought processes.
This was a perspective piece titled "Emphysema Detection in the Course of Lung Screening: Optimizing a Rare Opportunity to Impact Population Health." The editorial focused on the idea that we really should be screening these patients for emphysema. It's interesting that COPD is one of those illnesses for which screening is specifically not recommended. But here in lung cancer screening, we really have the opportunity to do it. Should we? The piece points out that emphysema is in fact frequently identified on low-dose CT and that patients frequently don't know they have it, even though a quarter of them have severe emphysema and are unaware of it. This was found in the NLST and I-ELCAP studies: a large proportion of patients have severe emphysema and don't even know it. Now, is it valuable to tell them? I can tell you we had a lot of problems getting this editorial accepted, because we kept hearing, well, there's nothing you can do for emphysema anyway; there's no drug to treat it, therefore you shouldn't screen for it. The argument we made was that of course there's something you can do about it. People need to be aware that they have it. It would motivate them to stop smoking, and it may motivate them to do the usual things like exercise and vaccinations, such as Pneumovax, to prevent flu or pneumonias. So there are things to be done. The point is that this is one of those illnesses where we think screening can be done, but there's controversy about what can be done once you find the disease. Now, this is changing: there's a whole new suite of drugs probably coming out in the relatively near future that will have a positive impact on emphysema. That's why I say these things are rapidly changing. This is another article that came out.
This is from the Stanford group. In a small randomized trial, they showed that when you go back and look at scans and report to patients whether they have incidentally found coronary calcium, the group specifically given this information wound up doing something about it: a larger percentage of those who were given their coronary calcium results were put on statins, compared with the group where calcium was found but not specifically reported, which was sort of the standard of care. So again, this shows you can do something of positive value. Now, the U.S. Preventive Services Task Force has specific recommendations. Interestingly, as I pointed out, COPD screening is not recommended, but on low-dose CT I think most of us report emphysema, and Lung-RADS, for example, recommends that you report its extent. So we're reporting it even though the Task Force says you shouldn't be screening for it. The same, in a sense, with coronary calcium: coronary calcium scoring is only recommended for a limited group at intermediate risk, but on all the low-dose CT scans we report the coronary calcium. So we're already kind of doing this. Osteoporosis screening is recommended, but I can tell you very few people report the extent of osteoporosis, although there's now FDA-approved software for low-dose CT that's considered equivalent to DEXA. So I think you're going to start seeing osteoporosis routinely reported, and this is one where screening is already recommended. Same with breast density: breast density can be measured on low-dose CT as well as on mammograms, and reporting it is actually legally required. So I think we're going to start seeing a lot of these things implemented.
In terms of how we work up all these incidental findings, there are lots of articles on this. There's one from the ACR, I-ELCAP has one, and I just show a few here. One is from the Italian group, another is from Brazil, and there are many from Europe and Asia. And you can see this is the ACR's, and they break it down into each category of cardiovascular, breast, emphysema, et cetera, above the diaphragm, below the diaphragm. I think these are all very good recommendations, and there's lots that we can learn about what to do. Now, getting to something that I think is really interesting, and you heard it in the talk on Sybil that was just given. Really interesting stuff. This is now not just stuff that you can't see but can sort of measure the extent of. Here you don't even know what you're measuring. It's finding things. We're not even seeing it, we're not able to exactly measure it, but we're getting a result. And it's probably a very meaningful result. So now we're getting into this area where not only can't we see it, we don't even know what we're measuring. This is another article that came out, AI body composition, where they're starting to predict mortality. You're starting to get into a whole other range of things here. And this is going to continue. We're going to find more and more things that we can start predicting with AI. And, you know, in a sense it's a little troublesome, because should we be providing all this information, and how do we provide it? We've got to really start thinking about what we're telling people. Participants may not want to be told, hey, by the way, you've got an increased risk of mortality based on some AI finding about your muscle composition. Interesting. You know, do we tell people that? And clinicians, do they really want to be told all these additional findings where they may have to start working them up?
We don't have a real mechanism for how we're going to do this. And I think it's really important. I think we're going to have to start addressing this, because this is going to keep coming: we will make future predictions about mortality events or even incurable illnesses. How are we going to deal with this? Do we just start reporting this? You know, it's a new domain for all of us. So in conclusion, additional findings will be made with increasing frequency. The clinical impact implies diagnostic and therapeutic options, and this is a rapidly changing subject, subject to many additional factors. As we expand our prediction capabilities, we need to consider how this information is presented. Thank you. All right. Thank you so much to the organizers, and thank you very much for that kind introduction. It's great to be with so many people who are interested in bringing lung cancer screening to people in need, to create lung cancer survivors and ultimately reduce mortality as a public health tool. And that's the lens through which I'm going to talk about quality measures, and I think the lens through which I see a lot of what has been presented today and in this session. I'm going to talk a little bit about definitions of quality so that we're talking about the same thing, because different cultures and different organizations define quality differently. I'm also going to talk about the lens through which you see quality measures: an individual, the public, a health system, a health department, or a national health organization. I'm going to talk about some categories of quality measures in lung cancer screening, use some examples from our work at the American College of Radiology Lung Cancer Screening Registry and through the Roundtable, and ultimately close with a project that's underway through the IASLC Early Detection and Screening Committee to develop a quality measure consensus.
There's this discrepancy in the use of the words quality and safety, and many people see them as the same, and others see them as different. Quality, according to our U.S. Federal Agency for Healthcare Research and Quality, AHRQ, is doing the right thing at the right time for the right person and having the best possible result. An example of that as applied to lung cancer screening might be the percentage of individuals screened who met the eligibility criteria, the right screening test for the right patient at the right time, or the percent of individuals who come back for that annual screening. Safety may be seen through a different lens. According to the World Health Organization, safety is the prevention of errors and adverse events to patients associated with healthcare. And if you look at screening through that lens, you might think about things like unnecessarily high radiation exposure for the patient or the patient's size, or the consistency or variability of readers in looking at lung cancer screening exams, or the complication rates from procedures that are done in patients who have an abnormal screen to determine if they have cancer, and even in the treatment of those cancers. But I think fundamentally people are starting to recognize that these are really two ends of the same spectrum, quality and safety. And if we look at the World Health Organization's definition of quality healthcare, it can be defined in many ways, that heterogeneity I talked about, but healthcare services should be effective, providing evidence-based healthcare to those who need them, safe, avoiding harm to people for whom the care is intended, and people-centered, providing care that responds to individual preferences, needs, and values. But that to realize the benefits of effective, safe, and people-centered care, healthcare services must be timely. You have a positive screen, what's your wait time to get through your diagnostics? 
And if you have cancer, what's your wait time to get to the treatment pathway? For if you find that cancer, and then diagnosis and treatment are all delayed, you're not bringing the benefit to the patient in a timely way, and unnecessary delays affect both their symptom development and ultimately their outcome. And healthcare services must be equitable, providing care that doesn't vary depending on your gender, ethnicity, geographic location, or economic status, as a measure of quality in healthcare delivery. It should be integrated, providing care across the care continuum. If you can screen for that cancer in a location but you can't do the follow-up diagnostics and treatment, you need to find a way to make sure those patients can get to places that can, in an integrated, seamless manner, instead of leaving that patient hanging with a positive result and no place to go. And services must be efficient, maximizing the benefit of the resources you have. If you put up a screening facility or program, or develop a mobile screening program, are you utilizing that facility effectively to bring the most to the patients for the resource you have, and avoiding waste? So in looking at lung cancer screening, think about it through the lens of different perspectives as well. If you're thinking at a population level, a national health ministry, or a national health service, or a public health program, you're thinking about the population, and what sort of population measures would you want? In screening, are you finding cancers? Are you finding early-stage cancers that will bring benefit to patients if treated earlier? Do you have treatments for those early-stage cancers? Are you seeing a stage shift from what you saw before? But ultimately, are you reducing lung cancer mortality as the ultimate measure of how we deliver lung cancer care, including screening?
You can also think of quality at the level of a screening program or facility, things that you do in your facility towards lung cancer screening, outcomes, and adherence. But you can also think about quality measures at the patient level. Do they feel that they understand why they're coming for lung cancer screening or their lung health check? Do they understand and feel educated in why they're getting follow-up tests, or why they're going on to an invasive procedure, or why they're getting procedure A versus procedure B? We've seen some very wonderful technologies described today in this session. But how does a patient know if that's right for them? Do they feel satisfied that the questions they've asked have been answered? How do they feel about their journey through lung cancer screening, potentially diagnostics, and cancer care? And you can also think about quality from a healthcare facility operations level. Should a healthcare facility or health system invest in lung cancer screening? Is it right for them and their patient population? Are they getting the biggest bang for their buck out of what they're investing to screen patients in their population? And are operational metrics being met? So how you look at quality depends on your perspective: a national level, a facility level, a patient level, or a research and clinical organization level. But it's also important to consider these things in the time in which they're being developed. In the early stage of a lung cancer screening program, say you're out one or two years' worth of patients in your screening program, you know how many patients you screened. You can measure whether they were eligible based on whatever guideline you're using in your program. You can get to the stage of lung cancer. But it's not long enough to see if you're having a cancer stage shift in the entire population that you cover.
And it's certainly not enough to be able to measure mortality reduction meaningfully across a country. So the metrics that you may use earlier in a program may differ from when you have a more mature program with large implementation. It's also important to consider what's needed in order to measure quality. When you're setting up a lung cancer screening program, sometimes it's the last thing that gets thought of. People think about finding patients who are eligible, smoking cessation, CT scanners, radiologists, referrals for positive screens. But sometimes what doesn't get asked for up front are the resources, which is people and tools, to measure quality, and to do so in a way in which it can be easily presented to a practice or a screening program to understand how it's performing. Whether it's back to the radiologists and their variability of reading across readers, or whether it's adherence to screening. If you can't see the data, it doesn't matter if it's sitting somewhere in a complicated EHR. And that's people and tools. From my perspective, I'm going to share just some information about how we developed the ACR's Lung Cancer Screening Registry as an example of quality implementation in one country, which may or may not reflect practices in other countries, and may or may not reflect the way healthcare is organized in other countries. Because we are very heterogeneous, although I think we all agree that we should deliver lung cancer screening with quality. In our country, when Medicare decided they would finally cover lung cancer screening, they said you had to participate in a quality registry in order to get paid. And so that spawned the development of the ACR's Lung Cancer Screening Registry. And they also said you have to use a standardized lung nodule detection and reporting system, and for that we developed Lung-RADS. So in the way our coverage works in the U.S., things like that drive things that we can do and things that get developed.
So we included in that registry the things that Medicare required, which seemed pretty basic: information about the reading radiologist, the patient identifier, the ordering practitioner identifier, information about your CT scanner, making sure people were asymptomatic for lung cancer, that we were using the Lung-RADS classification scheme and the results of that, the smoking history and whether smoking cessation interventions had been used, radiation dose, and then whether it was a baseline or a subsequent annual screen. This was the minimum that Medicare required, but that just doesn't seem like enough to assess a lung cancer screening program. So then we added additional information. And in collecting information about quality, it's always a trade-off for how much you want. It's like the pathologist wants more tissue, while in quality we want more data. And yet it can be a burden to collect that information, so there's a trade-off. So we wanted to know information about facilities so that we could look at where screening was being done across the country. We wanted information on height and weight so we could make sure radiation dose was optimized for patient size. We looked at things that included shared decision-making, other lung cancer risk factors if they didn't meet our USPSTF criteria, incidental findings, as David discussed, and outcomes for screening. Because without the outcomes, what happens after an abnormal screen? Are they getting the right follow-up diagnostic tests? Are they getting them in a timely manner? Are they diagnosed with a benign or malignant nodule, and if malignant, what's the tissue diagnosis and what's the cancer stage? And so the outcome cycle is captured. And we put all this together in an interactive user dashboard so the facilities can go in and look at their own data.
They can look at their whole facility or at each individual practicing radiologist, and they can slice and dice their data by facility type to find practices that are like them, community versus academic, or by location, or by census tract or region. And they can look at key performance indicators like appropriateness of screening, radiation exposure by patient size, the Lung-RADS distributions in their practice or by radiologist, diagnostic testing and tissue sampling rates, lung cancer diagnosis and stage distribution, and a series of positive predictive values. But it's also not enough to just develop measures and throw them out there and say, look at them and do something with them. People fundamentally need to understand why the measures are important and how to use their own data to see if they have gaps in performance. So we formed a quality improvement and education subcommittee to specifically develop quality improvement templates to help educate radiology practices and facilities on three important key metrics: adherence to annual screening, achieving appropriate radiation exposure, and increasing non-smoking rates as a measure of smoking cessation interventions across screening with time. And these templates use a typical PDSA cycle, but they explain in very clear terms how to look at your data, who to include on your team if you have a gap, what questions to ask, and here's a bunch of strategies you can try. Because not everybody understands quality improvement, not everybody lives and breathes it every day, and this is bringing quality improvement to people who practice every day. So each of these templates has information on why, how to view your data, interventions you can trial, and how to use your own dashboard to see how you're performing.
And so now the ACR's Lung Cancer Screening Registry has about 4.5 million screening events in it as of September of this year, from over 3,500 facilities, and it has provided us important information on the national rollout of screening in a very decentralized healthcare system that's implemented at a local facility level. And it's shown us, through work by folks like Gerard Silvestri, who's back there and has done tons of work with our data, who's being screened, by age, gender, race, ethnicity, education, and insurance status, and helped identify gaps that inform where we need to go and where we need to put effort. It's shown the early-stage distribution of cancers diagnosed. It affirmed the use of Lung-RADS in interpretation, but it also showed us things like adherence to annual screening in the first million screens was only 22 percent. So clearly something we need to do a lot of work to improve. But by knowing where the gaps are, could be in geography, could be in patient population, could be by insurance status, it helps us inform where to go next. But, you know, what are the most important quality measures? Peter Mazzone, who heads a very centralized lung cancer screening program at the Cleveland Clinic, led a group at the National Lung Cancer Roundtable through a consensus exercise, brainstorming around potential indicators of lung cancer screening quality through our implementation task group. Then they narrowed the 30 potential indicators down iteratively, through three rounds of independent consensus ratings, to a series of quality measures. And again, quality depends on when and where you are in your journey with lung cancer screening. It depends on what you'd like to measure from an ideal standpoint, but also what's feasible: what do you have data to be able to look at?
And the measures that made it to the top of the list were screening appropriateness: are you screening the right people based on whatever guideline it is you are using in your practice? In the U.S., it's the USPSTF, generally speaking. Is smoking cessation being delivered, as the single best way to reduce lung cancer mortality? Smoking cessation, is it integrated, is it effective, or is it just a check box with a brochure? The third, fourth, and fifth top recommendations for lung cancer screening quality measures were people getting the right follow-up after their lung cancer screening. If they had a negative screen, were they coming back in a year? If they had positive screens of different risk categories, Lung-RADS 3 and 4, were they coming back for the interim follow-ups that were recommended? So those were 3 through 5. And number 6 was, if they had highly suspicious findings, Lung-RADS 4B and 4X, were they getting timely evaluation of those findings, and timely diagnosis of lung cancer if they should have a diagnosis? Now there are many other measures that were looked at and didn't move forward, not necessarily because they weren't good things to measure, and good things to impact the quality of lung cancer screening practice, but because the measures may not be feasible given the status of electronic health records, for example, or because the data is not routinely and consistently entered into an EHR. And those things included the percent of non-surgical or surgical biopsies in people who ultimately had benign lesions. The percent of screen-detected lung cancers that were Stage 1. Are we screening symptomatic people? That's really a no-no, that's not lung cancer screening, but we don't have the information in the EHRs to necessarily extract that answer. Evaluating the performance of shared decision-making: in our EHR we know it's been done, but we don't know if it's been done well. Are we performing low-dose CT within the recommended radiation dose?
Again, that's not sitting in most EHRs; it usually sits in radiology databases. The percent of positive screens, the percent of other non-nodule actionable findings, and the percentage of cancers other than lung cancer: good things to think about knowing, but not necessarily extractable from most EHRs. And so it's really exciting that the IASLC's Early Detection and Screening Committee has a consensus methodology project underway, looking at 44 different quality measures across 11 domains for their importance and feasibility, essential versus desirable but not necessarily feasible, to help us move forward with a consensus with international experts in this ranking exercise, looking at these 11 domains for lung cancer screening: entry into screening, eligibility, smoking cessation, imaging, adherence, diagnostics and outcomes, harms, safety, treatment, health equity, wait times for care, and patient satisfaction. And I'm very hopeful that through this exercise we can increase the visibility of essential quality metrics for lung cancer screening, things that we can use across the world with agreed-to definitions, and compare practices locally or nationally, so that we can continue to move forward the safe and effective practice of lung cancer screening and ultimately do well by the patients we serve and do good in the public health. Thank you very much. Okay. And there's some time. I'm going to keep my closing remarks for this session brief. But what I wanted us to think about when we're thinking about AI is that there is a little bit of detachment between tools and clinical practitioners. And there's this big gap to fill between what AI could potentially do and how we accept it as human beings and in the way we practice. So there's technology and there's humans, there's science and humans, and it's the interaction of the two that allows us to make decisions to move forward in a positive manner.
The possibilities of AI are huge, as we've seen: helping us identify in CT data sets risk factors, risk prediction for somebody developing cancer; when we have abnormalities, ruling them in or out as cancer; leveling the playing field and the quality of the acquisitions that we perform. I mean, the number of things that we can do with AI tools across the lung cancer screening continuum, into lung cancer diagnostics and therapeutics, is tremendous. It provides this tremendous opportunity that we couldn't have even thought about 10 or 15 years ago. But the other thing I want to bring together is that we know how long it takes to go from science to implementation. An implementation cycle is not measured in minutes or hours or years; it's decades. We know that as physicians, when we're practicing medicine, we are the most knowledgeable in the years following our training, and with training plus practice, our knowledge base goes up and then it starts to fall over time. And so AI offers us the potential ability to level the playing field and the heterogeneity we see in us imperfect human beings, to help us level the playing field across the diversity of practitioners, the quality of practitioners, and the way we, as humans, look at data to try to make decisions for our patients. But what AI doesn't do is really inform us on the human part of how practitioners interact with patients. And so we think about AI and we're so afraid that it's going to take away people's jobs, which we radiologists tend to hear all the time, or that we don't need as many radiologists.
If we think about it from the standpoint of the way it can enhance what we're doing, it can level the playing field across differences in quality, whether it's years in training, years out from training, the cycles of implementation of new information into practice, or just the variability across human beings' performance. And we should take heart in the fact that medicine and delivering health care is a people skill. It's people working with people. And that human-to-human interaction and how we influence patients is perhaps more valuable than any technology we can bring to bear, because it's that trust between patient and practitioner that makes patients feel comfortable in how they're moving forward with their health care. So while people may be afraid of AI, there's so much it can do to enhance us and help us perform in a better way systematically. But it will never fundamentally replace the human-to-human interaction that builds trust and confidence in health care delivery, and that drives many of the reasons why most of us went into health care, which is to do greater good for our patients in practice. And in lung cancer screening, I can think of nothing better as a public health tool to reduce mortality from the highest cause of cancer death in the world in our local populations and the patients that we serve, and to use lung cancer screening beyond lung cancer for the other actionable findings that we find, which also can do public health good. Thank you very much. Thanks, Steve. I've been given the role of trying to bring all this together in 10 minutes, without actually going through every talk again. What I'm going to try to do, and somebody used the expression earlier, is not piss everybody off, but I will be leaving certain things out. That's an expression that will probably live with you through this conference. This actually really has come from a document from last year.
And Stephen Lam, myself, and a group of individuals have moved this to the point that it's actually now on the JCO website as a preprint. And it really is looking at where we want to go over the next five years. And the idea was to actually take these subjects over the next five years and try to tackle them. Now, some of the topics that we have dealt with today fall in line. Others don't. But we've got another four years. OK, this was the road map. I can't read it, so you can read it for me. The issue, can I see this on the screen? It'd be much more helpful. So basically, the road map has got these nine areas that we want to cover. And I'm not going to go through them. You can all read it, probably much better than I can. And the thing is that I appreciate that when we plan these workshops, we can't always say we will do X, Y, Z in year one, year two. Do you know, at the very beginning, first talk: easy, cheap, convenient. And I thought to myself, that is something I've always wanted to hear about lung cancer screening. Now, I don't think it was said in jest by my colleague. That's great, I can then see my own slides. OK, so easy, cheap, convenient. Basically, this is something we want lung cancer screening to be. We don't want it to be complex and fall asunder at the first go. So I actually really liked that whole concept and the proposal of what they were going to do in China. The pros and cons. I've spent some time writing these slides up, because I think this is something that is going to be debated. And I'll give you my opinion at the end. But there is no right answer. And that is actually probably one of the most important points to take away from this today. What are our values? I like that question. It came from the floor. Do we actually identify more cancers, or do we actually identify targeted approaches and fewer harms? And I don't believe that we can actually say one or the other. I think it's actually going to come down to cost.
How carefully can we approach this question? And I think there will be different approaches in the USA, Canada, and Asia. How should we identify never smokers? And to be perfectly honest, we had lots of discussion about secondhand smoke and occupational history, and I've been through this in my lifetime, with individuals putting work into occupational history. It is intense. And secondhand smoking is difficult. So I've put in: how accurate is that, and how reliable is that to put into a risk model? The question was asked, are the cancers different in ever smokers and never smokers? Yes, they are genetically, but are they actually different clinically? And the answer to that is probably not. Once you have a cancer, you have a cancer. The question of overdiagnosis is a problem, and I think that is something we have to be careful of. There's evidence going backwards and forwards on this, and the pro and con people took different papers and then proved their points. Do we need a randomized controlled trial? There's lots of controversy about this, but actually, do we want to be having the same discussion in five or 10 years' time? And what is the best way of taking this forward? It probably is a randomized controlled trial, because if we actually knew what the harms were, and I'm assuming there will be harms, are those harms manageable, and can we deal with them in society? Or do we say, actually, screening for this particular group of people, unless they're very high risk, which we could show by, let's say, a biomarker of the future, is not something that should be included? I'm leaving it as an open question. I'm not trying to answer it. Now, Ray actually made some great comments: emotional level, population level, and resources finite. And I think that actually sums it up. We can be very emotional about this. I was at a meeting about six months ago. We had two individuals on the stage. They both had lung cancer. They were non-smokers. They were just 40, 45.
And they were pleading with us to put more research and effort into lung cancer in never smokers. So that is the emotional side. I could see how the audience responded. But when you start to look at it from the outside, how practical is it to take it forward? How do we define never-smoker risk? The current tools are blunt. I'm not going to give you a conclusion because I think the question is still hanging. Lung cancer risk assessment, environmental factors: how practical are they to implement? Volatile compound data. Now, the paper that came from Charlie Swanton was wonderful. I mean, we all actually probably read it several times and then went back and reread it again to try to understand it. But the bottom line, and I think Ray had the same problem, is that it demonstrated that if you put enough work into it, you can show that occupational exposure or, let's say, even atmospheric pollution is a problem. Wildfires, I had never even considered this, and there was a great presentation from Canada. I'm not sure how we use this information. That's something we probably have to build in the future. But it's quite clear, not exercising if there's a gray cloud over you is probably the first bit of advice. Exposure to PM 2.5: national datasets are available. But are we talking about only specific countries that collect that data, or specific regions of a country? Yes, we would like to include it. But I have my doubts whether this would be practical if we're considering this as an international approach. Tumor biology, I don't want to race through this, but on the other hand, great talk: methylation, proteomics. I'm involved in the INTEGRAL study, so I shouldn't be the one to say it's great. But there are some wonderful publications coming out of it, and Rayjean Hung has really led a lot of this, and there's a dual-purpose platform.
I've also undertaken some of this work independently, and proteomics would appear to be an excellent marker for the future. One individual asked, can this actually be used over time? In other words, how soon does a biomarker pop up? And we looked at specimens from one year, three years, five years before diagnosis, and yes, there is a proteomic signature. Next-generation risk models. Absolute risk models: UK, USA, Asian models. I believe we have to start to consider these as precise models for each population. I've put down here large data sets that use imputation. I'm not going to go back into that discussion, which I started, but I do feel there are major issues. Great science, but how do you implement models that are built through imputation? Somebody else can answer that. Polygenic risk models. This is actually on the crest of the wave. I think there's an enormous potential behind this, and using the biobank data, something that I actually hadn't appreciated, non-smokers did not reach the risk threshold within that UK cohort. Is that telling us something to answer the previous question? Future incorporation of ETS, PM2.5, and genetic risk: I actually support this concept, and I started life off as a geneticist, so I would naturally like to see genetics incorporated. I also believe in the concept of, let's say, the diabetes dipstick technique. We want to have a test that is probably five or ten dollars, not the same price as a CT scan. So will these genetic tests be cost effective? We will have to develop completely new platforms if we're going to do this. I sort of think of the terminology nanotechnology. Modifying lung cancer risk. Excellent presentation. I bring you back to Peto's 1950s work. He actually demonstrated that lung cancer risk went by age and, well, it was actually studied in GPs, which scared the wits out of the GPs in the UK at that time. Air pollution, radon, asbestos.
I actually think asbestos can be used in risk models: if you're cutting up asbestos, or putting asbestos lagging into a boat, you will remember it. But knowing about air pollution and radon in your own area, or having somebody else find that out for you, could be an issue.

Preventive action: yes, some great UK successes, and I've just put that slide in there.

Risk assessment. Is race a risk factor? I'm amazed that we're even allowed to ask that question. Different SNPs were found in European and Asian populations, and that actually tells us something. And given that 30% of Australians were born in other countries, we need to start to consider the ethnic divergence of risk and develop suitable models for each of these groups.

PLCO: great model. LLP: pretty good, I would say. There are others out there, but only the PLCO and the LLP are actually used in anger, if I can phrase it that way, at this moment in time, and they've been validated multiple times. What's of interest, and I am the first person to say that no risk model is perfect, is that in the UK they're using both the PLCO and the LLP risk models, which delights me. I would not like to have the responsibility on my shoulders in five or ten years' time when they turn around and say that one model was really pretty awful.

How early can a tumor be detected? That is an extremely good question. Are we actually looking at risk, or are we looking at the development of a biological process? That is something where, from a biological-marker point of view, we really need to put much more work and effort in.

Jim Mulshine hit the button: cost is the main issue for implementing biomarkers. There is no point having a biomarker that costs $150 per patient. It may be suitable for a small subset of individuals with particular nodules on follow-up, but not in the screening setting. Do biomarkers reduce overdiagnosis? I agree with the answer the individual gave themselves.
They said it was a hard question, and I personally believe it's difficult to add air pollution to risk models.

Right, the risk assessment discussion. There was great praise for the two models that are used in the UK, Canada, and Australia. The level of precision of the exposure data we can utilize was questioned. Risk models vary by the population under investigation, and we need to develop population-specific risk models; I've already said that.

Challenges of implementation. We are very pleased in the UK that we have 40% participation, maybe in places up to 50%, in the targeted lung health checks. But you could put that another way and say that up to 60% don't participate. Isn't that our main question and problem today, how we get people to participate? And it's mainly the current smokers who don't want to participate.

Diagnostic strategies. I'm just listing these, but what I really liked was a comment from the floor: robotic diagnostic bronchoscopy is a game changer. That's quite clear to the people involved at the forefront of this field. I appreciate the equipment is expensive, but this does appear to be a major stride forward.

The Watch the Spot trial. I thought that was fascinating, as was the detail given on it. Lung cancer was diagnosed less frequently in patients with incidental nodules than with screen-detected nodules, and I think that was probably the main message I took from it.

Diagnostic strategy: the 4-IN-THE-LUNG-RUN trial, funded under Horizon 2020; my colleague Professor Matthijs Oudkerk is running it. The first point, and many people don't think about this, is an end-to-end management tool. I would highly recommend such a system if you are starting to build a new screening program. I know there are others out there; I'm not trying to sell this one to you. What I'm trying to say is that we need end-to-end management systems.
And in the UK, unfortunately, we haven't got one universal system, so I'm pointing the finger at ourselves so we can all learn from that; I'm aiming that comment at the UK.

Major finding: coronary artery calcification. What do we do with that information? In Liverpool we're actually going to set this up, not as a trial but as a service evaluation. And Matthijs has already provided some fascinating data: a calcium score over 400 in 30.9% of patients, and these patients need treatment. Maybe it's statins; maybe they need something more. So the whole concept: we talk about lung cancer screening, but should we be thinking of the big three, lung cancer, coronary heart disease, and COPD? When you consider how many more people develop coronary heart disease than lung cancer, I would have thought that, from a public health point of view, we should be doing it.

Now to the concluding talk. I actually felt her comments were really to the point: there's no wrong approach. Lung-RADS, and the NELSON approach in 4-IN-THE-LUNG-RUN: the different systems were built up, let's say, on opposite sides of the pond, and from my point of view each country decides which way they're going to go. I'm sure the US is comfortable with Lung-RADS; in Europe we're comfortable using volumetrics, which is very much the NELSON approach. But I really liked her term "hunting for treasure" for the new navigational approaches, which probably echoes her own background. We do need to focus on small lesions, but use the correct terminology, and that actually came up in the very last presentation. Bringing art to science; I like that.

Application of AI in lung cancer screening: predicting lung cancer from a single chest CT. The first image you can see is something that I think came from the Mississippi group, am I correct? The grasp and reach of lung cancer screening, and other screening uses.
I like that concept of other screening uses, because we need to ensure not only that lung cancer screening is in the community, but also that, from a public health point of view, we use it as cleverly as possible.

Application of AI in lung cancer screening. Now, I believe AI is something that we will all come to utilize in the coming years, but we need to start from the right material, and that's called the ground truth. If you don't have the right data on which to build your AI tool, you're destined to fail.

I also like the whole concept of a malignancy excluder. In other words, you're basically looking to remove probably up to 90% of the scans, because there's nothing there, and then your consultant radiologist focuses on the remaining 10%. That is something that would really help us in the UK, because we have about 15% of consultant positions unfilled. Negative nodule workflow, utilizing the Danish, Moscow, and now also the UKLS data: a wonderful piece of work, and I'd like to have spent more time on it.

The clinical impact of identifying additional incidental findings. David went into the detail of this, but I was struck by the term "neglectable benefit"; I've never heard that one before, and it's an interesting viewpoint. But should we provide this information to patients, and if so, how? I think the how is probably the most important part: how we share that information without worrying them unnecessarily.

Developing quality indicators and quality improvement. This is really the next step; it's putting the icing on the cake. We've got lung cancer screening, but we need to think of quality control all the way through. We need to know where the gaps are, and a very good example was given of the ACR Lung Cancer Screening Registry.
The LLCRT consensus paper is naturally pointing us in the right direction, and the IASLC group is currently bringing forward a publication in that area.

I was going to go back to the very first slide, just to ask quickly: what have we achieved in this workshop? If I look at what was put down as the aims for the next five years, which actually came from last year's workshop: we have started to look at educational needs, but not in great detail here. We have looked at publicly accessible data, but not in great detail here. Recommendations for incidentally detected lung nodules: we have focused on that, but to my mind we are still at the point of making up our minds. I know several groups in several countries are putting time and effort into it, and I know the IASLC group is putting effort into this, but we do need to decide how much extra work we direct towards the primary care physicians, because that is where, let's say, the problem is going to lie. We can set up the screening, but this will end up with the primary care physicians.

Evidence to support clinical cost-effectiveness: I know we've done that in the UK, and each country has to do it to prove the case to their own government officials if they want a full program implemented. Personalized screening intervals: we haven't touched on that. And integration of artificial intelligence and biomarkers: we are still, I think, in the very early days. So we've still got another four years, and obviously quite a lot to talk about. Thank you very much.
Video Summary
In this video, the speaker discusses the importance of quality in lung cancer screening. They define quality as doing the right thing at the right time for the right person and achieving the best possible outcome. They mention that quality can be assessed from different perspectives, including individuals, the public, health systems, and national health organizations.

The speaker goes on to talk about the different categories of quality measures in lung cancer screening, such as eligibility criteria and annual screening rates. They stress the significance of safety in screening, which involves preventing errors and adverse events for patients.

The video also mentions the development of a project to establish consensus on quality measures in lung cancer screening. This highlights the ongoing efforts to improve the quality and safety of screening procedures. The speaker concludes by emphasizing the need to ensure the best possible outcomes for patients through enhanced quality and safety measures.

Overall, the video underscores the importance of quality in lung cancer screening and the ongoing initiatives to improve its effectiveness and safety.
Keywords
lung cancer screening
quality measures
importance of quality
assessing quality
eligibility criteria
annual screening rates
safety in screening
preventing errors
adverse events
project development
improving quality
enhancing outcomes