Event description
Developing autonomous systems, including software agents and robots, demands high levels of reliability, safety, and explainability. This talk explores how formal methods, multi-agent systems (MAS), and runtime verification can contribute to building trustworthy autonomous systems. Key topics include the rational, reactive, proactive, and social nature of intelligent agents, and the challenge of ensuring safety through formal verification techniques such as model checking and runtime enforcement. The talk also presents methodologies for engineering reliable agent-based systems and for monitoring their interactions, particularly in safety-critical applications such as space exploration, nuclear environments, and manufacturing.
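To give a flavour of the runtime verification topic mentioned above, the sketch below shows a minimal runtime monitor in Python. It is purely illustrative (the property, class, and event names are hypothetical and not taken from any framework covered in the talk): a finite-state monitor observes a stream of agent events and flags a violation of the safety property "the agent never acts before it has sensed at least once".

```python
from enum import Enum


class Verdict(Enum):
    OK = "ok"
    VIOLATION = "violation"


class SenseBeforeActMonitor:
    """Illustrative monitor for the safety property:
    no 'act' event may occur before a 'sense' event."""

    def __init__(self):
        self.sensed = False
        self.verdict = Verdict.OK

    def step(self, event: str) -> Verdict:
        # Safety violations are irremediable: once raised, the verdict stays.
        if self.verdict is Verdict.VIOLATION:
            return self.verdict
        if event == "sense":
            self.sensed = True
        elif event == "act" and not self.sensed:
            self.verdict = Verdict.VIOLATION
        return self.verdict


monitor = SenseBeforeActMonitor()
for event in ["sense", "act", "act"]:
    monitor.step(event)
print(monitor.verdict.value)  # this trace satisfies the property
```

In a runtime-enforcement setting, such a monitor would sit between the agent and its environment and could block or correct an offending action instead of merely reporting the violation.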