Explainability for regression CNN in fetal head circumference estimation from ultrasound images
Zhang, Jing; Petitjean, Caroline; Yger, Florian; Ainouz, Samia (2020), Explainability for regression CNN in fetal head circumference estimation from ultrasound images, in Cardoso, Jaime; Van Nguyen, Hien; Heller, Nicholas, Interpretable and Annotation-Efficient Learning for Medical Image Computing: Third International Workshop, iMIMIC 2020, Second International Workshop, MIL3ID 2020, and 5th International Workshop, LABELS 2020, Held in Conjunction with MICCAI 2020, Proceedings, Springer, p. 73-82. 10.1007/978-3-030-61166-8_8
Type: Conference paper
Date: 2020
Conference title: Workshop on Interpretability of Machine Intelligence in Medical Image Computing at MICCAI 2020
Conference date: 2020-10
Conference city: Lima
Conference country: Peru
Book title: Interpretable and Annotation-Efficient Learning for Medical Image Computing: Third International Workshop, iMIMIC 2020, Second International Workshop, MIL3ID 2020, and 5th International Workshop, LABELS 2020, Held in Conjunction with MICCAI 2020, Proceedings
Book author: Cardoso, Jaime; Van Nguyen, Hien; Heller, Nicholas
Publisher: Springer
ISBN: 978-3-030-61166-8
Number of pages: 292
Pages: 73-82
Publication identifier: 10.1007/978-3-030-61166-8_8
Author(s): Zhang, Jing; Petitjean, Caroline; Yger, Florian (Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision [LAMSADE]); Ainouz, Samia
Abstract (EN)
Fetal head circumference (HC) is measured throughout pregnancy to monitor fetal growth using ultrasound (US) images. Recently, methods that directly predict biometrics from images, instead of resorting to segmentation, have emerged. In previous work, we proposed such a method based on a regression convolutional neural network (CNN). While deep learning methods are the gold standard in most image processing tasks, they are often considered black boxes and fail to provide interpretable decisions. In this paper, we investigate various saliency map methods to assess their ability to explain the predicted value of the regression CNN. Since saliency map methods have mostly been developed for classification CNNs, we provide an interpretation of regression saliency maps, as well as an adaptation of a perturbation-based quantitative evaluation of explanation methods. Results obtained on a public dataset of ultrasound images show that some saliency maps indeed exhibit the head contour as the most relevant feature for assessing the head circumference, and that map quality depends on the backbone architecture and on whether the prediction error is low or high.

Subjects / Keywords
Saliency maps; Explanation evaluation; Regression CNN; Biometric prediction; Medical imaging

Related items
Showing items related by title and author.
- Jia, Linlin; Gaüzère, Benoit; Yger, Florian; Honeine, Paul (2021) Conference paper
- Corsi, Marie-Constance; Chevallier, Sylvain; Barthélemy, Quentin; Hoxha, Isabelle; Yger, Florian Conference paper
- Pauty, Joris; Usuba, Ryo; Cheng, Irene Gayi; Hespel, Louise; Takahashi, Haruko; Kato, Keisuke; Kobayashi, Masayoshi; Nakajima, Hiroyuki; Lee, Eujin; Yger, Florian; Soncin, Fabrice; Matsunaga, Yukiko (2018) Article accepted for publication or published
- Chevallier, Sylvain; Corsi, Marie-Constance; Yger, Florian; de Vico Fallani, Fabrizio (2022) Article accepted for publication or published
- Galarce, Felipe; Gerbeau, Jean-Frédéric; Lombardi, Damiano; Mula, Olga (2019) Working paper
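The abstract mentions a perturbation-based quantitative evaluation of explanations adapted to regression: the most salient pixels are perturbed first, and a faithful saliency map should make the regression output drift quickly from its original value. The sketch below is only an illustration of that general idea, not the authors' exact protocol; the toy `model`, the zero-baseline perturbation, and the chunked masking schedule are all assumptions made for the example.

```python
import numpy as np

def perturbation_curve(model, image, saliency, steps=10, baseline=0.0):
    """Perturbation-based faithfulness check for a regression explainer:
    progressively mask the most-salient pixels and record how far the
    scalar prediction drifts from its original value at each step."""
    order = np.argsort(saliency.ravel())[::-1]  # most salient pixels first
    ref = model(image)                          # original prediction
    perturbed = image.copy().ravel()
    chunk = max(1, order.size // steps)
    drifts = []
    for i in range(steps):
        idx = order[i * chunk:(i + 1) * chunk]
        perturbed[idx] = baseline               # cumulative masking
        pred = model(perturbed.reshape(image.shape))
        drifts.append(abs(pred - ref))
    return np.array(drifts)

# Toy stand-in for a regression CNN: "predicts" the total intensity
# inside a fixed square region (hypothetical, for illustration only).
rng = np.random.default_rng(0)
img = rng.random((16, 16))
region = np.zeros((16, 16))
region[4:12, 4:12] = 1.0
model = lambda x: float((x * region).sum())

# A perfect explanation highlights exactly the pixels the model uses,
# so masking them in saliency order drives the prediction toward zero.
curve = perturbation_curve(model, img, saliency=region)
```

Under this setup the drift curve is non-decreasing and saturates once every pixel the model actually uses has been masked; a less faithful saliency map would produce a flatter, slower-rising curve.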