
dc.contributor.author: Zhang, Jing
dc.contributor.author: Petitjean, Caroline
HAL ID: 5518
hal.structure.identifier: Laboratoire d'analyse et modélisation de systèmes pour l'aide à la décision [LAMSADE]
dc.contributor.author: Yger, Florian
HAL ID: 17768
ORCID: 0000-0002-7182-8062
dc.contributor.author: Ainouz, Samia
HAL ID: 174726
dc.date.accessioned: 2021-01-15T09:37:36Z
dc.date.available: 2021-01-15T09:37:36Z
dc.date.issued: 2020
dc.identifier.uri: https://basepub.dauphine.fr/handle/123456789/21517
dc.language.iso: en
dc.subject: Saliency maps
dc.subject: Explanation evaluation
dc.subject: Regression CNN
dc.subject: Biometric prediction
dc.subject: Medical imaging
dc.subject.ddc: 006.3
dc.title: Explainability for regression CNN in fetal head circumference estimation from ultrasound images
dc.type: Communication / Conference
dc.description.abstract: The measurement of fetal head circumference (HC) is performed throughout pregnancy to monitor fetal growth using ultrasound (US) images. Recently, methods that predict biometrics directly from images, rather than resorting to segmentation, have emerged. In our previous work, we proposed such a method based on a regression convolutional neural network (CNN). While deep learning methods are the gold standard in most image processing tasks, they are often considered black boxes that fail to provide interpretable decisions. In this paper, we investigate various saliency map methods to leverage their ability to explain the value predicted by the regression CNN. Since saliency map methods have mostly been developed for classification CNNs, we provide an interpretation of regression saliency maps, as well as an adaptation of a perturbation-based quantitative evaluation of explanation methods. Results obtained on a public dataset of ultrasound images show that some saliency maps indeed exhibit the head contour as the most relevant feature for assessing head circumference, and that map quality depends on the backbone architecture and on whether the prediction error is low or high.
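As a concrete starting point for the two ingredients the abstract describes, the sketch below shows a gradient-based saliency map taken with respect to a scalar regression output, and a simple perturbation-based faithfulness check. This is a minimal PyTorch sketch under assumed names (the trained `model`, a grayscale US input of shape (1, C, H, W), a zero masking baseline), not the authors' implementation; the specific saliency and evaluation variants studied in the paper may differ.

    # Minimal sketch: vanilla-gradient saliency for a regression CNN, plus a
    # deletion-style perturbation check. Model and input conventions are
    # placeholders, not the paper's exact setup.
    import torch
    import torch.nn as nn

    def regression_saliency(model: nn.Module, image: torch.Tensor) -> torch.Tensor:
        """Gradient of the scalar regression output w.r.t. the input pixels.

        For a classifier one differentiates the class score; for regression
        the predicted value itself (here, head circumference) plays that role.
        """
        model.eval()
        x = image.clone().requires_grad_(True)       # (1, C, H, W)
        prediction = model(x)                        # scalar biometric estimate
        prediction.sum().backward()                  # d(prediction) / d(pixels)
        return x.grad.detach().abs().max(dim=1)[0]   # collapse channels -> (1, H, W)

    def perturbation_score(model, image, saliency, fraction=0.1, baseline=0.0):
        """Mask the top `fraction` most salient pixels and measure how much
        the prediction changes: a faithful map should change it the most."""
        with torch.no_grad():
            original = model(image).item()
            flat = saliency.flatten()
            top = flat.topk(int(fraction * flat.numel())).indices
            mask = torch.ones_like(flat)
            mask[top] = 0.0                          # delete the salient pixels
            mask = mask.view(1, 1, *image.shape[-2:])
            perturbed = model(image * mask + baseline * (1 - mask)).item()
        return abs(perturbed - original)

With a trained HC regressor, comparing `perturbation_score` across saliency methods (with a random map as a control) mirrors the kind of quantitative explanation comparison the abstract mentions.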
dc.identifier.citationpages: 73-82
dc.relation.ispartoftitle: Interpretable and Annotation-Efficient Learning for Medical Image Computing: Third International Workshop, iMIMIC 2020, Second International Workshop, MIL3ID 2020, and 5th International Workshop, LABELS 2020, Held in Conjunction with MICCAI 2020, Proceedings
dc.relation.ispartofeditor: Cardoso, Jaime
dc.relation.ispartofeditor: Van Nguyen, Hien
dc.relation.ispartofeditor: Heller, Nicholas
dc.relation.ispartofpublname: Springer
dc.relation.ispartofpages: 292
dc.relation.ispartofurl: 10.1007/978-3-030-61166-8
dc.subject.ddclabel: Artificial intelligence
dc.relation.ispartofisbn: 978-3-030-61166-8
dc.relation.conftitle: Workshop on Interpretability of Machine Intelligence in Medical Image Computing at MICCAI 2020
dc.relation.confdate: 2020-10
dc.relation.confcity: Lima
dc.relation.confcountry: Peru
dc.relation.forthcoming: no
dc.identifier.doi: 10.1007/978-3-030-61166-8_8
dc.description.ssrncandidate: no
dc.description.halcandidate: no
dc.description.readership: research
dc.description.audience: International
dc.relation.Isversionofjnlpeerreviewed: no
dc.date.updated: 2021-01-15T09:33:28Z
hal.author.function: aut
hal.author.function: aut
hal.author.function: aut
hal.author.function: aut


Files in this item


There are no files associated with this item.

