Imposing security constraints on agent-based decision support

Ekenberg, L., Danielson, M., & Boman, M. (1997). Imposing security constraints on agent-based decision support. Decision Support Systems, 20(1), 3-15. https://doi.org/10.1016/S0167-9236(96)00072-3



The principle of maximising expected utility has strongly influenced agent-based decision support. Although this principle is often useful when evaluating a decision situation, it is not always the most rational decision rule, and other candidates are worth considering. A decision-making agent may, for example, want to exclude strategies that are, in some sense, too risky with respect to specific thresholds. A theory is presented for situations where a decision-making agent, human or machine, must choose among a finite set of strategies, having access to a finite set of autonomous agents reporting their opinions on those strategies. The approach treats the decision problem with respect to both the contents and the credibilities of the reports, and the main emphasis is on how to perform analyses in decision situations where the available information is vague or numerically imprecise.
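The idea of excluding overly risky strategies before maximising expected utility can be sketched as follows. This is an illustrative toy example under assumed names and point-valued probabilities, not the paper's formalism (which handles imprecise, interval-valued estimates and agent credibilities): a security constraint discards any strategy whose total probability of outcomes below a utility threshold `u_min` exceeds a risk cap `p_max`, and expected utility is maximised only over the admissible remainder.

```python
# Hypothetical sketch of a security constraint on expected-utility choice.
# Each strategy maps to a list of (probability, utility) outcome pairs.

def admissible(strategies, u_min, p_max):
    """Keep strategies whose total probability of outcomes with
    utility below u_min does not exceed the risk cap p_max."""
    keep = {}
    for name, outcomes in strategies.items():
        risk = sum(p for p, u in outcomes if u < u_min)
        if risk <= p_max:
            keep[name] = outcomes
    return keep

def best_expected_utility(strategies):
    """Return the strategy with the highest expected utility."""
    return max(strategies, key=lambda s: sum(p * u for p, u in strategies[s]))

strategies = {
    "safe":  [(1.0, 5.0)],                  # expected utility 5.0
    "risky": [(0.6, 20.0), (0.4, -10.0)],   # expected utility 8.0, but 40% chance of loss
}

# Unconstrained maximisation picks "risky"; the security constraint
# (no more than 20% probability of utility below 0) excludes it.
ok = admissible(strategies, u_min=0.0, p_max=0.2)
print(best_expected_utility(ok))  # prints "safe"
```

The point of the sketch is that the security constraint and the utility maximisation are separate steps: filtering first changes which strategy is optimal, even though "risky" dominates on expected utility alone.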

Item Type: Article
Uncontrolled Keywords: Decision analysis; Multi-agent systems; Security constraints; Uncertain reasoning; Utility theory
Research Programs: Risk, Modeling, Policy (RMP)
Depositing User: Romeo Molina
Date Deposited: 22 Apr 2016 07:56
Last Modified: 27 Aug 2021 17:26
