Title: Ethics and Fairness in Medical Imaging; Second International … Editors: Esther Puyol-Antón, Ghada Zamzmi, Roy Eagleson. Conference proceedings 2025.
…e performance across various demographic groups. However, their performance varies strongly across nodule characteristics (size and type), in line with their prevalence in the training set. To ensure continued equitable performance, algorithms should account not only for demographic representativeness but also for the representativeness of nodule attributes in their training data.
…intellectual property, and data ownership. Furthermore, we discuss regulations governing the use of synthetic medical data. To promote equitable application of these powerful tools, we also propose clear guidelines for promoting fairness, mitigating bias, and ensuring diversity within generative AI models.
On Biases in a UK Biobank-Based Retinal Image Classification Model
…responds differently to the mitigation methods. We also find that these methods are largely unable to enhance fairness, highlighting the need for better bias mitigation methods tailored to the specific type of bias.
Assessing the Impact of Sociotechnical Harms in AI-Based Medical Image Analysis
…performing impact assessments of sociotechnical harms can assist in operationalizing the medical-ethics principle of non-maleficence, thereby guiding the ethical development and implementation of AI technologies in healthcare.
Conference proceedings 2025. This volume collects papers from workshops held in conjunction with MICCAI 2024, Marrakesh, Morocco, in October 2024. The 17 full papers presented in this book were carefully reviewed and selected from 21 submissions. FAIMI aimed to raise awareness about potential fairness issues in machine learning within the context of biomedical image analysis. The instance of EPIMI concentrates on topics surrounding open science, taking a critical lens on the subject.
…overdiagnosis bias in CNN-based AD diagnosis for these groups. We present a post-hoc bias mitigation technique that significantly improves fairness by reducing overdiagnosis, and enhances reliability by improving calibration, without compromising overall model accuracy. Code is available at: ..
ISBN 978-3-031-72786-3, 978-3-031-72787-0. Series ISSN 0302-9743; Series E-ISSN 1611-3349.
…model trained using Empirical Risk Minimization on a dataset containing a shortcut. In several cases, we close the gap to our clean baseline to the point that there is no statistically significant difference in performance. We also address the practical challenge of obtaining a clean oracle model, enhancing the method's real-world applicability.
Lecture Notes in Computer Science
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG
…ally changes the appearance of Computed Tomography (CT) images, which, if unknown, can significantly deteriorate the diagnostic performance of neural networks. Artificial Intelligence (AI) can help to detect IV contrast, reducing the need for labour-intensive and error-prone manual labeling. However, …
…image diagnostics and promoting equitable healthcare. However, many databases do not provide protected attributes or contain unbalanced representations of demographic groups, complicating the evaluation of model performance across different demographics and the application of bias mitigation techniques…
…disparities are present in the UK Biobank fundus retinal images, by training and evaluating a disease classification model on these images. We assess possible disparities across various population groups and find substantial differences despite the strong overall performance of the model. In particular, we…
Slicing Through Bias: Explaining Performance Gaps in Medical Image Analysis Using Slice Discovery Methods
…e challenges to their clinical utility, safety, and fairness. This can affect known patient groups, such as those based on sex, age, or disease subtype, as well as previously unknown and unlabeled groups. Furthermore, the root cause of such observed performance disparities is often challenging to…
Dataset Distribution Impacts Model Fairness: Single vs. Multi-task Learning
…of skin lesion classification using ResNet-based CNNs, focusing on patient sex variations in training data and three different learning strategies. We present a linear programming method for generating datasets with varying patient sex and class labels, taking into account the correlations between th…
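The linear-programming idea in the abstract above can be made concrete with a small sketch. This is an assumed reconstruction, not the authors' code: the function name `plan_dataset`, the 2×2 (sex × class) cell layout, and the target fractions are all hypothetical.

```python
# Hypothetical sketch of dataset generation via linear programming:
# choose how many samples to draw from each (sex, class) cell so that the
# assembled training set hits target sex and malignancy fractions while
# maximizing total dataset size. Not the authors' actual method.
import numpy as np
from scipy.optimize import linprog

def plan_dataset(available, female_frac, malignant_frac):
    """available: 2x2 counts, rows = (female, male), cols = (benign, malignant).
    Returns the planned number of samples per cell."""
    # Decision variables x = [f_benign, f_malignant, m_benign, m_malignant].
    c = -np.ones(4)  # maximize total size -> minimize negative sum
    # Equalities: female total = female_frac * total,
    #             malignant total = malignant_frac * total.
    A_eq = np.array([
        [1 - female_frac, 1 - female_frac, -female_frac, -female_frac],
        [-malignant_frac, 1 - malignant_frac,
         -malignant_frac, 1 - malignant_frac],
    ])
    b_eq = np.zeros(2)
    bounds = [(0, a) for a in available.ravel()]  # cannot exceed availability
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x.reshape(2, 2)
```

For example, with 500/100 female benign/malignant and 400/200 male samples available, targeting a 50% female and 25% malignant split yields a 1200-sample plan that uses every cell up to its cap.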
AI Fairness in Medical Imaging: Controlling for Disease Severity
…or, affects the presentation of disease in medical images, and hence the performance of AI algorithms. Existing fairness criteria such as equalized odds do not capture this effect, as illustrated by an example. Additionally, a new metric is proposed based on the information-theoretic notion of ad…
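Equalized odds, the criterion this abstract argues is insufficient, can be computed in a few lines. A minimal sketch under assumed conventions (binary labels and predictions, one group array); the function name and data layout are illustrative, not from the paper.

```python
# Illustrative computation of equalized-odds gaps: the criterion is satisfied
# when true-positive and false-positive rates match across groups.
import numpy as np

def equalized_odds_gaps(y_true, y_pred, group):
    """Return (TPR gap, FPR gap) across groups; (0, 0) means equalized odds
    holds. Assumes every group has at least one positive and one negative."""
    gaps = []
    for label in (1, 0):  # label 1 -> TPR over positives, 0 -> FPR over negatives
        rates = [np.mean(y_pred[(group == g) & (y_true == label)] == 1)
                 for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return tuple(gaps)
```

Because these rates are aggregated per group, a severity confounder that shifts disease presentation within a group leaves the gaps unchanged, which is the blind spot the paper targets.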
Mitigating Overdiagnosis Bias in CNN-Based Alzheimer's Disease Diagnosis for the Elderly
…and early-stage dementia. While AI algorithms have matched specialist performance in diagnosing AD, they tend to produce unreliable results for the oldest populations, generating false positives that increase radiologist workloads and healthcare costs. In this study, we focus on mitigating overdiagnosis…
Positive-Sum Fairness: Leveraging Demographic Attributes to Achieve Fair AI Outcomes Without Sacrifi…
…the importance of equal performance, we argue that decreases in fairness can be either harmful or non-harmful, depending on the type of change and on how sensitive attributes are used. To this end, we introduce the notion of positive-sum fairness, which states that an increase in performance that resu…
Exploring Fairness in State-of-the-Art Pulmonary Nodule Detection Algorithms
…s advanced. Resource constraints have resulted in increasing reliance on computer-aided detection (CADe) systems to assist with scan evaluation. The datasets used to train these algorithms are often unbalanced in their representation of protected groups, e.g. sex and ethnicity. This project investiga…
Quantifying the Impact of Population Shift Across Age and Sex for Abdominal Organ Segmentation
…clinical practice. One of the main barriers is the challenge of domain generalisation, which requires segmentation models to maintain high performance across a wide distribution of image data. This challenge is amplified by the many factors that contribute to the diverse appearance of medical images,…
Do Sites Benefit Equally from Distributed Learning in Medical Image Analysis?
…trained on multi-site datasets, models may excel with data from certain institutions but struggle with others, even when exposed to their training data. This emphasizes the importance of investigating whether all sites benefit from AI models, especially within distributed learning setups. Distributed learning…
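A common distributed-learning baseline of the kind studied in this setting is federated averaging, sketched below under assumptions not taken from the paper (flat parameter vectors, dataset-size weighting, function name `fed_avg`):

```python
# Minimal federated-averaging sketch: each site trains locally, then a
# global model is formed as a dataset-size-weighted average of parameters.
# Whether every site actually benefits from this global model is the
# question the paper investigates.
import numpy as np

def fed_avg(site_params, site_sizes):
    """site_params: list of 1-D parameter vectors, one per site.
    site_sizes: number of training samples at each site."""
    sizes = np.asarray(site_sizes, dtype=float)
    weights = sizes / sizes.sum()          # larger sites count more
    return (np.stack(site_params) * weights[:, None]).sum(axis=0)
```

Comparing each site's test performance under its local model versus this global model is one way to make "do all sites benefit equally?" operational.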
Cycle-GANs Generated Difference Maps to Interpret Race Prediction from Medical Images
…in medical images, but what such features may be remains an unanswered question. In this work, we aim to identify image regions relevant to race prediction. We argue that previous methods toward this goal (namely, occlusion maps) are not sufficient, as they are unable to locate such regions, and we…
Assessing the Impact of Sociotechnical Harms in AI-Based Medical Image Analysis
…this domain, and in other everyday applications, has brought an increased awareness of the potential impacts and negative consequences that may occur throughout the sociotechnical systems in which these technologies are implemented. In this paper, we review and apply a previously published taxonomy…
Practical and Ethical Considerations for Generative AI in Medical Imaging
…treatment planning, interventions, and drug development. It benefits the clinical flow with real-time decision-support systems. While generative AI can potentially improve healthcare, it also introduces new ethical issues that require careful analysis and mitigation strategies. This work emphasizes…