Cafés de la Neuroinformatique
Organised on a monthly basis, the cafés constitute an informal get-together for the ICM Neuroinformatics community.
They address the many themes and issues around Data Science and Neuroinformatics, and are a time for sharing and informing. You will find below the themes presented since 2018.
Feedback on the use of the Jean Zay supercomputer
13 February 2020
Launched by GENCI, the Jean Zay converged supercomputer has been available since October 2019 and is dedicated to the AI community (more than 1,000 NVIDIA V100 GPUs). We will present how to access and use it, the positive aspects, and the difficulties encountered during our experience.
Presentation by Mauricio Diaz-Melo (AramisLab)
The Data Management Plan (DMP) &
Clinical research data collection & management with REDCap
16 January 2020
A DMP is compulsory for all grant applications as of 01/01/2020. Presentation by Ségolène Aymé (DAR/Data Management expert) & Hélène Bensoussan (DAMS)
REDCap (Research Electronic Data Capture) is a secure, web-based and user-friendly application that supports structured data collection and management for research studies. No IT background is necessary to start using REDCap and handle all your projects’ clinical data (annotations) in a centralized and controlled database. Presentation by Mathias Antunes & Nabila Elarouci (iCONICS)
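Data entered in REDCap can later be retrieved programmatically through its API. A minimal sketch of building such a request body with the standard library, assuming a hypothetical API token (the `content`/`format`/`type` parameter names follow REDCap's standard "Export Records" API; the resulting payload would be POSTed to your institution's `/api/` endpoint):

```python
# Minimal sketch of a REDCap "Export Records" API request body.
# The token and field names below are hypothetical examples.
from urllib.parse import urlencode

def build_export_payload(token: str, fields=None) -> str:
    """Build the form-encoded body for a REDCap 'Export Records' call."""
    params = {
        "token": token,       # project-specific API token
        "content": "record",  # export study records
        "format": "json",     # response format
        "type": "flat",       # one row per record
    }
    if fields:
        # REDCap expects indexed field parameters: fields[0], fields[1], ...
        for i, f in enumerate(fields):
            params[f"fields[{i}]"] = f
    return urlencode(params)

payload = build_export_payload("MY_TOKEN", fields=["record_id", "age"])
# 'payload' would then be POSTed to https://<your-redcap-server>/api/
```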
The Electronic Lab Notebook at ICM
12 December 2019, Stéphane Chaillou (DSI) & Françoise Piguet (Team Cartier)
- Presentation of the Labguru solution proposed by Inserm
- How to implement it in your team
- Feedback from Françoise Piguet (Team Cartier), the first team to use it at ICM
Atlas YEB WEB
28 November 2019, Jordan Cornillault (CENIR)
Online automated atlas-based segmentation of the basal ganglia, with integrated 3D visualisation, deployed on Amazon Web Services (AWS). The system provides the basal ganglia as meshes or binary masks (Regions of Interest) and offers online 3D visualisation.
This Web application is based on dockerized CENIR-STIM software running in the cloud, with a Django (Python) back-end for orchestration and a modern front-end UI (MDBootstrap, jQuery, Vue.js).
Already available at ICM, it will be deployed on the Web in December.
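The architecture described above (a Django orchestration layer in front of a dockerized segmentation engine) might be sketched as a docker-compose file; service names and image tags here are purely illustrative, not the actual ICM configuration:

```yaml
# Hypothetical sketch of the deployment described above:
# a Django back-end orchestrating a dockerized segmentation service.
version: "3"
services:
  web:
    build: ./django_backend       # Django orchestration layer + front-end UI
    ports:
      - "8000:8000"
    depends_on:
      - segmentation
  segmentation:
    image: cenir-stim/atlas-yeb   # dockerized segmentation engine (illustrative tag)
```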
Computational Pathology: a path ahead
19 September 2019, Daniel Racoceanu (Hassan Lab)
Anatomic pathology is an essential reference examination in medicine. The evolution of slide scanning now allows more ergonomic work on virtual slides, leading to so-called Digital Pathology.
Beyond the challenges generated by this technology (adaptation of the diagnosis protocol, high-resolution / high-content images, fluidity of online examination / streaming, significant storage resources, control of data security), this represents a real clinical and medical / scientific research opportunity.
The spectacular development of high-content image analysis techniques, as well as deep learning, generates an increasing interest in Computational Pathology, particularly through its integrative aspects (Integrative Digital Pathology), a strategic link between clinic, pathology and molecular biology.
In this talk, we introduce some recent developments in the field of tumor environment analysis, tumor heterogeneity and simulation through the prism of Digital Pathology. Even if presented in the cancer context, these technologies could be of great use for a range of pathologies.
Introduction to OMERO
11 July 2019, Nabila Elarouci (Centre for Neuroinformatics)
OME Remote Objects (OMERO) is a client-server software platform for visualizing, managing, and annotating scientific image data. Using the power of Bio-Formats, OMERO supports over 140 image file formats, including all major microscope formats.
The Centre for Neuroinformatics and the ICM microscopy imaging facilities have been testing OMERO; it will help teams manage all images in a secure central repository, from the microscope to publication.
Ontological model to integrate heterogeneous omic data
13 June 2019, Vincent Henry (AramisLab)
The combination of ontology and systems engineering provides a consistent and formal network of classes, with an appropriate granularity, to integrate heterogeneous multi-omic data related to molecular physiopathology; illustrated with a use case on Alzheimer's disease.
From data Quality Control to quantitative Magnetic Resonance: the big data perspective
16 May 2019, Romain Valabregue (CENIR)
Quality Control (QC) is often presented as the first necessary step of big data analysis. Here we are interested in the opportunity to use big data analysis to derive quantitative QC metrics that allow the estimation of the bias and error of computed Image-Derived Phenotypes (IDPs).
For anatomical T1 images, IDPs are volumetric measures derived from the segmentation process. The bias and the precision of those measures depend on:
- The acquisition parameters (hardware/sequences) that will change the SNR and the contrast.
- The subject’s physiology which in case of pathologies will alter the contrast.
- The data quality (physical and subject artifacts such as motion) that will change the precision.
MR is a very sensitive technique but, in counterpart, less specific. This is due to the complex interactions between the subject’s physiology and the acquisition parameters, for which explicit modelling is not possible. The big data approach gives the opportunity to estimate, directly from the data, the factors that influence the bias and the precision of the derived IDPs.
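As a toy illustration of this idea (not the CENIR pipeline), one can pool simulated segmentation volumes and regress the IDP error on a hypothetical QC metric such as a motion score; the fitted slope then estimates the motion-induced bias directly from the data:

```python
# Toy illustration (hypothetical data, not the CENIR method): estimate,
# from pooled data, how a QC metric (a simulated motion score) relates
# to the error of a volumetric IDP, via pure-stdlib linear regression.
import random

random.seed(0)

# Simulate 500 scans: the IDP error grows with the motion score
motion = [random.uniform(0.0, 1.0) for _ in range(500)]
error = [20.0 * m + random.gauss(0.0, 5.0) for m in motion]  # true slope = 20

# Ordinary least squares for error ≈ a + b * motion
n = len(motion)
mx = sum(motion) / n
my = sum(error) / n
b = sum((x - mx) * (y - my) for x, y in zip(motion, error)) \
    / sum((x - mx) ** 2 for x in motion)
a = my - b * mx

print(f"estimated bias per unit motion: {b:.1f}")  # close to the simulated 20
```

The same regression idea extends to several QC metrics at once, which is where the large pooled cohorts become essential.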
Epimicro: human intracranial EEG data storage
25 April 2019, Katia Lehongre (STIM)
Since 2010, intracranial electroencephalography (iEEG) recorded for clinical purposes in epileptic patients undergoing presurgical exploration has simultaneously been recorded with an amplifier dedicated to research. The recording is continuous and lasts for 2 to 3 weeks. For each patient, all iEEG data are stored at the ICM, along with clinical information and neuroimages. Data are then accessed by research teams that have ongoing research protocols with those patients. A graphical database to share clinical information is in progress.
A general statistical framework for data fusion
21 March 2019, Arthur Tenenhaus (CentraleSupelec/Laboratoire des Signaux et Systèmes (L2S), Paris-Saclay & iCONICS, ICM)
Challenges related to making use of this wealth of data (e.g. omics data, imaging-genetics data, etc.) include extracting relevant elements within massive numbers of variables, reducing dimensionality, summarizing information in a comprehensible way and displaying it for interpretation purposes. Dedicated modelling algorithms able to cope with the inherent properties of these structured datasets are therefore mandatory for harnessing their complexity and providing relevant information.
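As a deliberately simplified sketch of the data-fusion idea (a toy stand-in, not the RGCCA framework presented in the talk): given two centred data blocks X and Y measured on the same samples, one can seek unit weight vectors w1, w2 maximizing cov(X·w1, Y·w2), i.e. the leading singular pair of the cross-covariance matrix, computed here by power iteration in plain Python:

```python
# Toy data-fusion sketch (hypothetical simplification, not RGCCA):
# find unit weights w1, w2 maximizing cov(X @ w1, Y @ w2), i.e. the
# leading singular pair of C = X^T Y, via power iteration.
import math

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# Two small centred data blocks on the same 4 samples (made-up numbers)
X = [[1.0, 0.5], [-1.0, -0.4], [0.8, 0.6], [-0.8, -0.7]]
Y = [[2.0, 0.1], [-2.1, 0.0], [1.9, -0.1], [-1.8, 0.0]]

C = matmul(transpose(X), Y)           # cross-covariance matrix (up to 1/n)
w2 = normalize([1.0, 1.0])
for _ in range(50):                   # power iteration on the singular pair
    w1 = normalize(matvec(C, w2))
    w2 = normalize(matvec(transpose(C), w1))

# X @ w1 and Y @ w2 are shared low-dimensional summaries of the two blocks.
```

Real multi-block methods generalise this to more than two blocks, with constraints and schemes tailored to the structure of the data.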