King's College London
United Kingdom
History
The Network was established in 2026 in response to the growing need for a coordinated, international approach to AI in mental health. Developed through collaboration across leading European centres, it represents one of the first structured efforts globally to integrate discovery, prediction, intervention, evaluation, and governance within a single framework. The Network aligns with ECNP’s strategic priorities to advance translational neuroscience and positions Europe at the forefront of responsible, clinically meaningful AI in psychiatry.
Artificial intelligence is at the forefront of scientific progress. This Network will offer an expert-driven platform at the European level to drive progress in AI in mental health across several interconnected domains. We are excited to work towards our goals with colleagues across Europe and to lead AI developments in the field of mental health.
Goals
Data and AI Infrastructure: Establish interoperable, privacy-preserving data ecosystems (e.g. federated learning, trusted data repositories) alongside scalable infrastructure for AI model training, evaluation, and deployment across European centres.
Stratification, Predictive Models, and Digital Twins for Precision Psychiatry: Develop and validate robust stratification and predictive models and patient-specific digital twins to enable individualised risk prediction, stratification, and modelling of disease trajectories and treatment response.
Knowledge Systems and AI-Enabled Therapeutics: Design and translate AI-based knowledge systems (including LLM-based and agentic tools) and digital therapeutics into clinically usable solutions supporting decision-making and adaptive, personalised interventions.
Evaluation, Benchmarking, and Validation: Establish rigorous, standardised frameworks for benchmarking and validating stratification and predictive models, digital twins, and therapeutic systems to ensure reproducibility, generalisability, and clinical utility.
Implementation, Governance, Monitoring, and Responsible AI: Develop regulatory-aligned frameworks for safe deployment, continuous monitoring, and lifecycle management of AI systems, ensuring fairness, transparency, and mitigation of risks in real-world clinical settings.
Core members
| Member name | Institution | City | Country |
| --- | --- | --- | --- |
| Linda Antonucci | University of Bari | Bitetto | Italy |
| Francesca Edelmann | Medicines and Healthcare products Regulatory Agency | London | United Kingdom |
| Oliver Grimm | University of Frankfurt | Frankfurt | Germany |
| Raquel Iniesta | King's College London | London | United Kingdom |
| Peter Keri | GAMIAN-Europe | Ocsa | Hungary |
| Andre Marquand | Donders Institute for Brain, Cognition and Behaviour | Nijmegen | The Netherlands |
| Pavol Mikolas | Jena University Hospital | Jena | Germany |
| Mariana Pinto da Costa | King's College London | London | United Kingdom |
| Ioana Podina | University of Bucharest | Bucharest | Romania |
| Saige Rutherford | University of Michigan | Ann Arbor | USA |
| Guido van Wingen | Amsterdam UMC | Amsterdam | The Netherlands |
| Gerrik Verhees | University of Dresden | Dresden | Germany |
| Sandra Vieira | CHUV | Lausanne | Switzerland |
| Thomas Wolfers | Jena University Hospital | Jena | Germany |
Shell members
| Member name | Institution | City | Country |
| --- | --- | --- | --- |
| Joseph Kambeitz | University of Cologne | Cologne | Germany |
| Jannis Pasoglou | Aidvance | Berlin | Germany |
| Emanuel Schwarz | Central Institute of Mental Health | Mannheim | Germany |