About the Event
A network entity's investment in security in an interdependent system improves the security standing of other, interconnected entities as well, by reducing the probability of indirect attacks. The security actions of users are therefore often viewed as a public good: all users benefit from the positive externality of others' expenditures. Consequently, users' equilibrium expenditures often fall far short of the social-welfare-maximizing levels of investment, because users do not account for the effects of their actions on others' welfare, and some choose to free-ride on existing externalities.
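The under-investment argument above can be illustrated with a toy two-user security game (illustrative model and parameters of my own, not from the talk): each user decides whether to invest in security, and one user's investment also lowers the other's breach probability. Enumerating the strategy profiles shows that the Nash equilibria involve one user free-riding, while social welfare is maximized only when both invest.

```python
from itertools import product

# Hypothetical two-user interdependent security game (illustrative
# parameters, not from the talk). Each user chooses to invest (1) or
# not (0). Investing costs c; a breach costs L and occurs with
# probability (1 - a*own) * (1 - b*other), so one user's investment
# also shields the other -- the positive externality described above.
c, L, a, b = 3.0, 10.0, 0.5, 0.5

def payoff(own, other):
    breach_prob = (1 - a * own) * (1 - b * other)
    return -c * own - L * breach_prob

def is_nash(x1, x2):
    # Neither user can gain by unilaterally flipping their decision.
    return (payoff(x1, x2) >= payoff(1 - x1, x2) and
            payoff(x2, x1) >= payoff(1 - x2, x1))

profiles = list(product([0, 1], repeat=2))
nash = [p for p in profiles if is_nash(*p)]
optimum = max(profiles, key=lambda p: payoff(p[0], p[1]) + payoff(p[1], p[0]))

print("Nash equilibria:", nash)   # (0,1) and (1,0): one user free-rides
print("Social optimum:", optimum) # (1,1): both invest
```

With these numbers, the equilibrium outcomes give a total welfare of -13, versus -11 at the social optimum, so decentralized decisions leave the system strictly worse off.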
In this talk I will present two parallel efforts aimed at better understanding how to design incentive mechanisms that induce collectively better security decisions. In the first, more theoretical direction, I will present a mechanism that incentivizes users to make socially optimal security investments using cyber-insurance contracts, designed through a message exchange process among users and backed by a single profit-neutral insurer. Unlike many existing incentive mechanisms for security, the insurer in our framework does not need to monitor or audit users. However, it is shown that, due to the non-excludable nature of security, there may exist scenarios in which it is impossible to guarantee that users voluntarily purchase insurance, irrespective of how the insurer designs the contracts. I will discuss the implications of this impossibility and possible ways to circumvent it.
Along the second, more empirical direction, I will advocate the notion of "network reputation" as a means of monitoring the security standing of networks and encouraging better security behavior. Commonly used host reputation blacklists (RBLs) focus on individual hosts, i.e., IP addresses. The idea of network reputation moves away from this microscopic, host-level view of the Internet and instead focuses on a larger entity, e.g., a prefix or an administrative domain. This higher-level view allows us to identify more stable and predictive security behaviors, which in turn enables more consistent incentive policies.
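The host-to-network aggregation idea can be sketched as follows, using made-up blacklist data and a hypothetical scoring rule (the fraction of a /24 prefix's addresses seen on a blacklist); a real system would use actual RBL feeds and BGP-announced prefixes rather than fixed /24s.

```python
import ipaddress
from collections import defaultdict

# Made-up sample of blacklisted IP addresses (documentation ranges),
# standing in for entries from a host reputation blacklist (RBL).
blacklisted_ips = ["198.51.100.7", "198.51.100.9", "198.51.100.200",
                   "203.0.113.42"]

# Aggregate host-level hits up to their enclosing /24 prefix.
hits = defaultdict(int)
for ip in blacklisted_ips:
    prefix = ipaddress.ip_network(ip + "/24", strict=False)
    hits[prefix] += 1

# Hypothetical reputation score per prefix: the share of its
# addresses currently on the blacklist.
scores = {str(p): n / p.num_addresses for p, n in hits.items()}
print(scores)  # e.g. {'198.51.100.0/24': 0.01171875, '203.0.113.0/24': 0.00390625}
```

The point of the aggregation is that individual IP addresses churn on and off blacklists quickly, while a prefix-level score of this kind changes slowly, giving the more stable and predictive signal described above.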
This is joint work with my PhD students Parinaz Naghizadeh, Yang Liu, and Armin Sarabi, as well as our CSE collaborators Jing Zhang and Dr. Michael Bailey.