- Location: Daejeon, Republic of Korea
- Submission Deadline: 27th of June 2025 (AoE), extended to 1st of July 2025 (AoE)
- Notification of Paper Decisions: 16th of July 2025 (AoE)
- Camera-ready Paper Due: 30th of July 2025 (23:59 Pacific Time)
[March] The DeCaF Workshop has been approved for MICCAI 2025
Deep learning, one of AI's fastest-growing fields, has enabled enormous advances in both scientific and real-world applications. There is broad consensus that models can be further improved by training on ever-growing amounts of data. However, learning from such huge datasets, or training very large models, in a timely manner requires distributing the learning across several devices. A particularity of the medical imaging setting is that data sharing across institutions is often impractical due to strict privacy regulations as well as data ownership concerns, making the collection of large-scale, diverse, centralized datasets practically impossible.
The key questions therefore become: how can we train models in a federated way across several devices? Can such models be as strong as those trained on large centralized datasets, without sharing data or breaching restrictions on privacy and ownership? How can we ensure data privacy and model fairness? How can we handle multi-site heterogeneous data? Federated learning (FL) allows different institutions to contribute to building more powerful models through collaborative training without sharing any training data: the trained model, rather than the actual data, is what gets exchanged across institutions. We hope that with FL and other forms of distributed and collaborative learning, the objective of training better and more robust models with higher clinical utility, while protecting the privacy of the data, can be achieved.
Through the fourth MICCAI Workshop on Distributed, Collaborative and Federated Learning (DeCaF), we aim to provide a forum to compare, evaluate, and discuss methodological advances and ideas around federated, distributed, and collaborative learning schemes applicable in the medical domain. We invite full-paper (8-page) submissions using the MICCAI 2025 template through CMT (https://cmt3.research.microsoft.com/DeCaF2025/). Topics include but are not limited to:
Dr. Han Yu, Nanyang Technological University, Singapore
Dr. Han Yu is a tenured Associate Professor in the College of Computing and Data Science (CCDS), Nanyang Technological University (NTU), Singapore. His work focuses on trustworthy federated learning. He has published over 300 research papers and book chapters in leading international conferences and journals. He co-authored Federated Learning, the first monograph on the topic of federated learning. His research has been recognized with multiple scientific awards. In 2021, he co-founded the Trustworthy Federated Ubiquitous Learning (TrustFUL) Research Lab. He is a Distinguished Member of the CCF and a Senior Member of AAAI and IEEE. For his continued contributions to trustworthy AI and his real-world impact on society, he has been identified as one of the World's Top 2% Scientists in AI and selected as one of the JCI Ten Outstanding Young Persons (TOYP) of Singapore.
Title: Challenges and Opportunities for Federated Learning in the Age of Foundation Models
Abstract: The rise of large foundation models underscores the importance and relevance of federated learning as a key research direction. As LLMs become mainstream in machine learning development, the research focus is shifting from model architecture design to addressing challenges related to privacy preservation and distributed learning, in order to efficiently leverage privately owned, valuable but sensitive data. Advances in federated learning as an infrastructure for collaborative foundation model training and finetuning have the potential to unlock the value of large models by enabling efficient and scalable training while safeguarding sensitive data. The long-term healthy development of this field requires continuously attracting high-quality data owners to collaboratively build models and share the benefits. In this talk, I will share some of the efforts in this emerging area from the Trustworthy Federated Ubiquitous Learning (TrustFUL) Lab at Nanyang Technological University, Singapore. These include methods for quantifying participant contributions in federated settings, ensuring fairness among diverse participants, establishing multi-agent automated data auctions to support federated training, and collaboratively training foundation models in federated settings. I will also discuss a deployed case study related to public sector service provision.
Dr. Alexandros Karargyris, MLCommons, USA
Dr. Alexandros Karargyris is the co-chair of the Medical working group at MLCommons, a group aiming to develop best practices for benchmarking medical AI on real-world data to improve the clinical translation of AI. Previously, he worked as a researcher at IBM and NIH. His research interests lie in the space of medical imaging, machine learning and mobile health. He has contributed to commercial healthcare products and imaging solutions deployed in under-resourced areas. His work has been published in peer-reviewed journals and conferences.
Title: MedPerf, an Open Framework for Benchmarking Medical AI in the Real World
Abstract: Medical Artificial Intelligence (AI) has the potential to advance healthcare and improve lives across the world, but careful considerations need to be taken into account during its product development life cycle, with AI validation being an important phase for clinical translation. Evaluating the performance of AI models on diverse real-world data can be considered one of the highest levels of validation, assisting their regulatory compliance pathway. However, implementing an infrastructure that supports such real-world data evaluation comes with technical, privacy, governance and sustainability concerns. MLCommons, a global, open, non-profit organization focused on developing benchmarks, benchmarking best practices and benchmarking tools, has supported the development of MedPerf. MedPerf is an open-source platform that enables healthcare organizations to assess the performance of medical AI models in an efficient and human-supervised process without accessing patient data. To accomplish this, the platform’s design relies on federated evaluation, in which medical AI models are remotely deployed and evaluated within the premises of healthcare data providers. This approach aims to address technical complexities, alleviate data privacy concerns and build trust among healthcare stakeholders, leading to more effective real-world evaluation. In this talk we will present the design of the platform, its benchmarking principles, and our experience supporting clinical studies using MedPerf.
University Hospital Bonn | Helmholtz Munich, Germany
University of British Columbia, Canada
NVIDIA, Germany
Indiana University, USA
Chinese University of Hong Kong, China
NVIDIA, USA
Anabik Pal, IISER Berhampur
Anna Banaszak, Technical University of Munich
Chamani Shiranthika Jayakody Kankanamalage, Simon Fraser University
Di Fan, USC
Herve Delingette, Inria
Jonny Hancox, NVIDIA
Kevinminh Ta, Yale University
Lucia Innocenti, INRIA, King's College London
Moritz Fuchs, TU Darmstadt
Nikhil J Dhinagar, Imaging Genetics Center, University of Southern California
Onat Dalmaz, Stanford University
Shunxing Bao, Vanderbilt University
Tolga Cukur, Bilkent University
Xiangyi Yan, University of California, Irvine
Zhao Wang, The Chinese University of Hong Kong
University Hospital Bonn, Germany
Chinese University of Hong Kong, China
NVIDIA offers GPU resources to the authors of all accepted papers.
Format: Papers must be submitted electronically following the Lecture Notes in Computer Science (LNCS) style, with a limit of 8 + 2 pages (the same as the MICCAI 2025 format). Submissions exceeding the page limit will be rejected without review. LaTeX style files, which also contain Word instructions, can be obtained from Springer. The file format for submissions is Adobe Portable Document Format (PDF); other formats will not be accepted.
Double Blind Review: DeCaF reviewing is double blind. Please review the anonymity guidelines of the MICCAI main conference and confirm that the author field does not break anonymity.
Paper Submission: DeCaF uses the CMT system for online submission.
Supplemental Material: Supplemental material is optional and follows the same deadline as the main paper. Its contents should be referred to appropriately in the paper, but reviewers are not obliged to read them.
Submission Originality: Submissions must be original; no paper of substantially similar content may be under peer review or have been accepted for publication elsewhere (conference or journal, not including archived work).
Proceedings: The proceedings of DeCaF 2025 will be published as part of the joint MICCAI Satellite Events (Workshops) proceedings with Springer (LNCS).
Please have a look at our previous workshops: DeCaF 2024, DeCaF 2023, DeCaF 2022, DCL 2021, and DCL 2020.