Security-Minded Verification of Cooperative Awareness Messages

Marie Farrell, Matthew Bradbury, Rafael C. Cardoso, Michael Fisher, Louise A. Dennis, Al Tariq Sheik, Yuan Hu, Carsten Maple

Research output: Contribution to journal › Article › peer-review


Autonomous robotic systems are both safety- and security-critical, since a breach in system security may impact safety. In such critical systems, formal verification is used to model the system and verify that it obeys specific functional and safety properties. Independently, threat modelling is used to analyse and manage the cyber security threats that such systems may encounter. Both verification and threat analysis serve the purpose of ensuring that the system will be reliable, albeit from differing perspectives. In prior work, we argued that these analyses should be used to inform one another and, in this paper, we extend our previously defined methodology for security-minded verification by incorporating runtime verification. To illustrate our approach, we analyse an algorithm for sending Cooperative Awareness Messages between autonomous vehicles. Our analysis centres on identifying STRIDE security threats. We show how these can be formalised, and subsequently verified, using a combination of formal tools for the static aspects, namely Promela/SPIN and Dafny, and by generating runtime monitors for dynamic verification. Our approach allows us to focus our verification effort on those security properties that are particularly important, and to consider safety and security in tandem, both statically and at runtime.
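To give a flavour of the dynamic side of the approach, the sketch below shows what a generated runtime monitor for Cooperative Awareness Messages might look like. This is a minimal, hypothetical illustration: the field names, thresholds, and checks are our own assumptions for exposition, not the monitors, message format, or properties from the paper (or from the ETSI CAM standard). It checks simple plausibility properties whose violation could indicate STRIDE-style tampering or spoofing.

```python
# Hypothetical runtime monitor sketch for Cooperative Awareness Messages.
# Field names and the MAX_SPEED threshold are illustrative assumptions,
# not taken from the paper or the ETSI CAM specification.
from dataclasses import dataclass


@dataclass
class CAM:
    station_id: int
    timestamp: float  # seconds
    x: float          # position, metres
    y: float
    speed: float      # metres/second


MAX_SPEED = 70.0      # illustrative plausibility bound (m/s)


class CamMonitor:
    """Flags CAMs violating simple plausibility properties, a dynamic
    check against tampering/spoofing-style (STRIDE) threats."""

    def __init__(self) -> None:
        self.last: dict[int, CAM] = {}

    def check(self, cam: CAM) -> list[str]:
        violations: list[str] = []
        # Reported speed must lie in a physically plausible range.
        if not (0.0 <= cam.speed <= MAX_SPEED):
            violations.append("speed out of plausible range")
        prev = self.last.get(cam.station_id)
        if prev is not None:
            dt = cam.timestamp - prev.timestamp
            if dt <= 0:
                # A stale or repeated timestamp may indicate a replay.
                violations.append("non-monotonic timestamp (possible replay)")
            else:
                # Position jump must be consistent with the speed bound.
                dist = ((cam.x - prev.x) ** 2 + (cam.y - prev.y) ** 2) ** 0.5
                if dist / dt > MAX_SPEED:
                    violations.append("implied velocity exceeds bound")
        self.last[cam.station_id] = cam
        return violations


monitor = CamMonitor()
print(monitor.check(CAM(1, 0.0, 0.0, 0.0, 10.0)))    # []
print(monitor.check(CAM(1, 1.0, 500.0, 0.0, 10.0)))  # implied-velocity violation
```

In the paper's methodology such monitors complement the static Promela/SPIN and Dafny verification; a check like the one above would run alongside the vehicle software and raise an alarm at runtime rather than prove a property over all executions.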
Original language: English
Journal: IEEE Transactions on Dependable and Secure Computing
Publication status: Accepted/In press - 18 Dec 2023


  • Verification
  • Security
  • Safety
  • Threat Modelling
  • Connected Autonomous Vehicles
  • Cooperative Awareness Messages
