Uncertainty is one of the key challenges in Stackelberg security games (SSGs), a well-known class of games that has been used to address real-world security problems such as airport security and wildlife protection. In these domains, the defender (a security agency) is typically uncertain about underlying characteristics of attackers, such as their preferences or behavior. Previous work in SSGs has proposed various learning methods to reduce this uncertainty based on historical attack data. However, because these learning algorithms depend on the attack data, a clever attacker can manipulate its attacks to influence the learning outcome. Such attacker deception can lead to ineffective defense strategies that favor the attacker.
This work studies strategic deception by an attacker with private type information in a repeated Bayesian Stackelberg security game (RBSSG) setting. We investigate a basic deception strategy, named imitative attacker deception, in which the attacker pretends to have a different type and consistently plays according to that deceptive type throughout the entire time horizon. We make four main contributions. First, we present a detailed equilibrium computation and analysis of standard RBSSGs with a non-deceptive attacker; to our knowledge, our work is the first to present an exact algorithm for computing an RBSSG equilibrium. Second, we introduce a new counter-deception algorithm, built on our equilibrium computation for standard RBSSGs, to address the attacker's deception. Third, we introduce two new heuristics to overcome the computational challenge of the exact algorithm: one limits the number of time steps to look ahead, and the other limits the number of observed attack histories under consideration. Fourth, we conduct extensive experiments showing the significant loss to the defender and benefit to the attacker that result from the attacker's deception. We also show that our counter-deception algorithm can substantially diminish the impact of the attacker's deception on both players' utilities.
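The intuition behind imitative attacker deception can be illustrated with a minimal toy sketch (not the paper's model or algorithm). In this hypothetical two-target game with made-up payoffs, the defender learns the attacker's apparent type from observed attacks and commits to the coverage that minimizes that type's best-response utility; an attacker who consistently mimics a different type thereby induces coverage optimized against the wrong payoffs:

```python
import numpy as np

# Hypothetical 2-target toy game (illustrative numbers only). An attacker of
# type t earns R[t][i] from a successful (uncovered) attack on target i;
# covered attacks yield 0. The defender has one resource split across targets.
R = {"A": np.array([10.0, 4.0]), "B": np.array([4.0, 10.0])}

def attacker_utils(x, rewards):
    """Expected attacker utility at each target under coverage vector x."""
    return (1.0 - x) * rewards

def best_response(x, rewards):
    """Target attacked by an attacker with the given reward vector."""
    return int(np.argmax(attacker_utils(x, rewards)))

def defender_strategy_vs(believed_type, grid=1001):
    """Brute-force the defender's commitment against the believed type:
    minimize that type's best-response utility over a coverage grid."""
    best_x, best_val = None, np.inf
    for x0 in np.linspace(0.0, 1.0, grid):
        x = np.array([x0, 1.0 - x0])
        val = attacker_utils(x, R[believed_type]).max()
        if val < best_val:
            best_x, best_val = x, val
    return best_x

# Honest play: the defender learns the true type A and optimizes against it.
x_honest = defender_strategy_vs("A")
u_honest = attacker_utils(x_honest, R["A"])[best_response(x_honest, R["A"])]

# Imitative deception: the type-A attacker consistently plays as type B, so
# the defender optimizes against B; the attacker's realized utility is still
# evaluated with its true type-A payoffs.
x_fooled = defender_strategy_vs("B")
u_deceptive = attacker_utils(x_fooled, R["A"])[best_response(x_fooled, R["B"])]

print(f"honest: {u_honest:.2f}, deceptive: {u_deceptive:.2f}")
```

In this toy instance the mimicking attacker draws coverage toward the target it cares least about and ends up strictly better off than under honest play, which is the kind of defender loss the counter-deception algorithm is designed to limit.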