Verification & Validation for Safety in Robots
The objective of verification and validation (V&V) is to gain confidence in the correctness of a system with respect to its specification (verification) and with respect to its behaviour once placed into its target environment (validation). This is particularly challenging for autonomous systems and for systems that interact with humans.
Research Lead: Dr. Kerstin Eder
Research Objectives
At BRL we develop new V&V techniques for autonomous assistive robots, drawing on our extensive experience of verifying complex microelectronic designs, where no single technique is adequate to cover a whole design in practice.
We are interested in all aspects of V&V needs related to robotics and autonomous systems in general. A special interest currently under development is the V&V for adaptive and human interactive systems where we explore shifting V&V from design time to runtime.
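To make the idea of shifting V&V to runtime concrete, the sketch below shows a minimal runtime safety monitor. The class name, the speed-limit property, and the trace values are all illustrative assumptions, not part of any BRL tool: a monitor observes the running system and records violations of a safety constraint as they happen, rather than attempting to prove the property at design time.

```python
# Minimal runtime-monitoring sketch. The monitor, the safety
# property ("speed never exceeds max_speed") and the trace are
# hypothetical examples for illustration only.
class SpeedMonitor:
    def __init__(self, max_speed):
        self.max_speed = max_speed
        self.violations = []  # (step, speed) pairs that broke the property

    def check(self, step, speed):
        # Evaluate the safety property on the current observation
        # and log any violation for later diagnosis.
        if speed > self.max_speed:
            self.violations.append((step, speed))
        return speed <= self.max_speed

monitor = SpeedMonitor(max_speed=1.5)
trace = [0.4, 0.9, 1.6, 1.2]  # observed speeds (m/s)
results = [monitor.check(i, s) for i, s in enumerate(trace)]
print(results)             # [True, True, False, True]
print(monitor.violations)  # [(2, 1.6)]
```

In this style, the specification lives alongside the deployed system, so an adaptive controller can change its behaviour while the monitor continues to check the fixed safety envelope.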
Current Projects
RIVERAS
RIVERAS aims to develop techniques and methodologies for designing autonomous intelligent systems that are verifiably trustworthy. Research focuses on:
- developing flexible specifications that capture uncertainty and leave scope for self-adaptation of the system within a set of safety constraints;
- developing V&V methods for state-of-the-art adaptive control algorithms.
Details of grant information can be viewed on the EPSRC website.
The RIVERAS team is led by Dr Kerstin Eder (Computer Science), an expert in Design Verification, the core focus of the research programme. The team includes Dr Arthur Richards (Aerospace Engineering), an expert in optimisation and control for vehicle planning and decision making, and two experts in modelling vagueness and uncertainty in intelligent systems and formal specifications, Prof Jonathan Lawry and Prof Trevor Martin (Engineering Mathematics / Intelligent Systems Laboratory).
Active work
Verification of high-level properties of control systems (stability, robustness) implemented in Simulink, using formal methods such as theorem proving.
Demo of our proposed methodology and tool for translation from Simulink to the Why3 language, applied to the verification of the stability of a simple discrete system through eigenvalue analysis.
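The eigenvalue criterion behind the demo can be sketched numerically. This is not the Why3-based tool itself but a plain NumPy illustration of the underlying mathematics, with an example matrix chosen for illustration: a discrete-time linear system x[k+1] = A·x[k] is asymptotically stable if and only if every eigenvalue of A lies strictly inside the unit circle.

```python
import numpy as np

# Hypothetical discrete-time system x[k+1] = A @ x[k].
# The matrix A below is an illustrative example, not from the demo.
A = np.array([[0.5, 0.1],
              [0.0, 0.8]])

# Stability criterion: all eigenvalues of A must have
# magnitude strictly less than 1 (spectral radius < 1).
eigenvalues = np.linalg.eigvals(A)
spectral_radius = max(abs(eigenvalues))
is_stable = spectral_radius < 1.0

print(spectral_radius)  # 0.8
print(is_stable)        # True
```

A theorem-proving approach, as in the Simulink-to-Why3 flow, establishes the same property symbolically rather than by floating-point computation, giving a machine-checked guarantee instead of a numerical check.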
STAARs
A new Institute for Advanced Studies funded inter-disciplinary research initiative and associated workshop series
In recognition of the new ethical, societal and legal challenges associated with autonomous assistive robots, the University of Bristol’s Institute for Advanced Studies is funding a series of initially three dedicated workshops on “Safe and Trustworthy Autonomous Assistive Robots” (STAARs).
These workshops provide a collaborative forum for researchers from engineering disciplines, including but not limited to Computer Science, Robotics, Aerospace, Mechanical and Systems Engineering. They encourage individuals to engage with experts from other research disciplines, such as Experimental Psychology, the Social Sciences and Law, and with the robotics industry and current standardisation committees, to seek their input before important engineering decisions are made.
Through the STAARs workshops, research challenges and ongoing work can be debated with experts from diverse backgrounds who share a common interest in the safety of autonomous assistive robots.
Cross-Theme Projects

Past Projects