We are happy to announce our participation in the Horizon 2020 EU project CERBERO. The Cross-layer modEl-based fRamework for multi-oBjective dEsign of Reconfigurable systems in unceRtain hybRid envirOnments (CERBERO) project aims to develop a design environment for cyber-physical systems (CPS) based on two pillars: a cross-layer model-based approach to describe, optimize, and analyze the system and all its different views concurrently, and an advanced adaptivity support based on a multi-layer autonomous engine. To overcome the limits of current tools, CERBERO provides: libraries of generic Key Performance Indicators for reconfigurable CPSs in hybrid/uncertain environments; novel formal and simulation-based methods; and a continuous design environment guaranteeing early-stage analysis and optimization of functional and non-functional requirements, including energy, reliability, and security.
The CERBERO project, coordinated by IBM, is use-case driven. It addresses the following use cases:
- Self-healing system for planetary exploration
- Smart traveling for electric vehicles
- Ocean monitoring
AmbieSense is leading the research and development on the Ocean Monitoring use case. The goal is smart video-sensing unmanned vehicles with immersive environmental monitoring capabilities, capable of individual and fleet self-operation and navigation. All of this has to be realized with smart multi-lens camera systems based on COTS components. However, current vision/sensing technologies for ocean monitoring rely on the default software of existing off-the-shelf cameras and other sensors, which were not originally intended for marine applications. Furthermore, the vision/sensing challenges that occur on the sea surface, and particularly subsea, are not addressed. CERBERO will define algorithms for data analysis and information fusion that enable smart adaptation strategies to cope with rapidly changing environmental conditions, so that the vehicles can reach or maintain position at sea. Specific challenges to be addressed during the project include: studying and developing strategies and tools that minimize the designer effort required for system-level analysis across the different design cycles; new adaptive image processing methods for enhancing the captured imagery, along with object/motion detection; and improved system-level (self-)adaptive run-time management of vision/sensing capabilities.
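To give a flavor of the kind of adaptive image enhancement involved (this is an illustrative sketch, not one of CERBERO's actual algorithms), the snippet below shows percentile-based contrast stretching in pure Python. Underwater frames are often low-contrast and murky, so stretching the intensity range per frame is a common first enhancement step; the function name, percentile choices, and frame representation (a grayscale frame as nested lists of 0–255 intensities) are all assumptions for the example.

```python
def stretch_contrast(frame, low_pct=0.05, high_pct=0.95):
    """Adaptively re-map pixel intensities so the frame uses the full
    0-255 range. The stretch bounds are taken from percentiles rather
    than min/max to resist outlier pixels. Purely illustrative; a real
    pipeline would use NumPy/OpenCV and more robust methods (e.g.
    adaptive histogram equalization)."""
    pixels = sorted(p for row in frame for p in row)
    lo = pixels[int(low_pct * (len(pixels) - 1))]
    hi = pixels[int(high_pct * (len(pixels) - 1))]
    if hi == lo:  # flat frame: nothing to stretch
        return [row[:] for row in frame]
    scale = 255.0 / (hi - lo)
    return [[min(255, max(0, round((p - lo) * scale))) for p in row]
            for row in frame]
```

For example, a murky frame whose intensities all cluster between 95 and 110 is spread across the full 0–255 range, making subsequent object/motion detection steps less sensitive to the original exposure.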