
CM30174 + CM50206 Intelligent Agents: Reputation (Marina De Vos, Julian Padget)



  1. Reputation
     CM30174 + CM50206 Intelligent Agents
     Marina De Vos, Julian Padget
     East building: x5053, x6971
     November 8, 2011

  2. Authors/Credits for this lecture
     Multiagent Systems, Chapter 9: “Trust and Reputation in Multiagent Systems” [Sabater-Mir and Vercouter, 2012]

  3. Overview
     1 Context
     2 Trust (Trust models; Trust for agents; Sources of trust)
     3 Reputation (Reputation models; Implementation and use; Drawbacks of reputation)
     4

  4. Overview
     1 Context
     2 Trust
     3 Reputation
     4

  5. Introduction
     Autonomy, reactiveness and proactivity change the “rules of the game” for software systems.
     A component may not do what you tell it: it is “self-interested” because it has its own goals and agenda.
     As with human societies, mechanisms are needed to provide some degree of control and to inhibit anarchy.
     Solutions?
     computational security: technical artifacts
     normative systems: constraints on behaviour
     trust and reputation: behavioural influence
     What is the computational representation of trust and reputation?

  6. Plan
     Context
     Trust, then reputation
     Roles of trustor and trustee: T_{a→b} and R_{a→b}
     Trust = direct experience; reputation = collective experience
     Clear separation of concepts
     How to build computational model(s)
     How to integrate with agent decision-making
     Analyse shortcomings

  7. Overview
     1 Context
     2 Trust
     3 Reputation
     4

  8. What’s the problem?
     How can trust or reputation be relevant to software? Likewise negotiation or argumentation.
     But consider the agent characteristics:
     Reactive: on-going interaction with the environment, responding to changes that occur in it (in time for the response to be useful)
     Pro-active: generates and attempts to achieve goals
     Social: able to interact with other agents (and possibly humans) via some kind of agent-communication language, and perhaps cooperate with others
     These characteristics make conventional approaches to system security obsolete.

  9. Social ability
     The real world is a multi-agent environment: we cannot go around attempting to achieve goals without taking others into account. Some goals can only be achieved with the cooperation of others.
     This suggests the need for:
     information/models of other agents’ state
     trust metrics
     reputation models (e.g. FOAF)
     Social ability in agents is the ability to interact with other agents (and possibly humans) via some kind of agent-communication language, and perhaps cooperate with others.

  10. Norms
     Meaning: “a standard or pattern of social behaviour that is accepted in or expected of a group” (OED), i.e. a way to specify (in)correct behaviour in a given context.
     The agent characteristics mean an agent can observe or ignore group expectations.
     Why should it observe? Incentivise!
     Why should it ignore? Punish!
     But how are norms enforced? Or rather, who is going to enforce them?
     Individuals with enforcement powers? “police” agents
     Groups with enforcement powers? “peer” pressure
     Trust + reputation ⇒ “soft” security

  11. Soft security
     Tradeoff: protection vs. restriction.
     We cannot prevent all undesirable events, but we can adapt the system to reduce or prevent their future incidence. Trust and reputation do this for humans.
     Two aspects:
     local: integrate into agent decision-making
     global: social control mechanism
     Agents observe each other; fear of ostracism is a deterrent. This requires a representation of social evaluation.

  12. Overview
     1 Context
     2 Trust (Trust models; Trust for agents; Sources of trust)
     3 Reputation
     4

  13. Computational representation
     Different formalisms suit different kinds of agent reasoning.
     Tradeoff: simplicity vs. expressiveness.
     Simplicity: ease of calculation, loss of information, limitation of reasoning.
     Expressiveness: the converse.

  14. Boolean
     TRUE ⇒ trustee is trustworthy; FALSE ⇒ not.
     Rarely used: trust (and reputation) is best graded.

  15. Numerical
     Real or integer value: “trust in agent X is 0.4”, “reputation of agent Y is -1”.
     The most common representation:
     [−1, 1] in ReGreT [Sabater, 2003]
     [0, 1] in [Sen and Sajja, 2002]
     Indicates degree of trust (distrust) or good (bad) reputation.
     Permits absolute and relative comparison.
     Semantics: does/should 0 ⇒ neutral, etc.? Interpretation is typically subjective, which creates ambiguity if agents communicate values.
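     To make the numerical representation concrete, the following is a minimal Python sketch of a trust store on the ReGreT-style [−1, 1] scale. The exponential-smoothing update rule and all names here are illustrative assumptions, not ReGreT's actual formulas.

        # Sketch only: numerical trust on [-1, 1], updated by exponential
        # smoothing (an assumed rule, not the one ReGreT defines).
        class NumericalTrust:
            def __init__(self, rate=0.3):
                self.values = {}   # agent id -> trust value in [-1, 1]
                self.rate = rate   # weight given to the newest experience

            def record(self, agent, outcome):
                """outcome in [-1, 1]: -1 = total betrayal, +1 = fully satisfactory."""
                old = self.values.get(agent, 0.0)   # 0.0 taken as the neutral prior
                self.values[agent] = (1 - self.rate) * old + self.rate * outcome

            def most_trusted(self, candidates):
                """Relative comparison: numerical values support ranking."""
                return max(candidates, key=lambda a: self.values.get(a, 0.0))

        trust = NumericalTrust()
        trust.record("X", 0.4)
        trust.record("Y", -1.0)
        print(trust.most_trusted(["X", "Y"]))   # -> X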

  16. Qualitative labels
     Human assessments are typically imprecise: “X’s reputation is very good”, “I only trust Y so far...”.
     Is numerical any better? Compare 0.6 and 0.7.
     A finite set of labels may (paradoxically) help precision:
     { bad, neutral, good } or { very bad, bad, neutral, good, very good }
     We lose fine-grained comparison, but gain recognizable semantics that are human accessible.
     See [Abdul-Rahman and Hailes, 2000].
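     The move from numbers to labels can be pictured as discretisation. Here is a sketch using the five-label scale above; the bin thresholds are arbitrary assumptions, and agents would have to agree on them in advance for the labels to carry shared semantics.

        # Sketch: discretise a numerical value in [-1, 1] into the five
        # qualitative labels. The bin edges are arbitrary assumptions.
        LABELS = ["very bad", "bad", "neutral", "good", "very good"]
        EDGES = [-0.5, -0.15, 0.15, 0.5]   # upper edges of the first four bins

        def to_label(value):
            for label, edge in zip(LABELS, EDGES):
                if value <= edge:
                    return label
            return LABELS[-1]

        # The awkward 0.6-vs-0.7 comparison from the slide collapses
        # into a single recognizable label:
        print(to_label(0.6), to_label(0.7))   # -> very good very good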

  17. Probability distribution and fuzzy sets
     Some applications need to express expected collective behaviour:
     “What is the probability that an agent will behave badly?” ... a skewed distribution
     “This agent is unpredictable” ... a uniform distribution
     “This agent is bipolar” ... a double-peaked distribution
     Similar effects can be achieved using fuzzy sets.
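     A minimal sketch of the distributional view, assuming a discrete outcome set and simple frequency counting; the add-one prior and the entropy test for “unpredictable” agents are added assumptions rather than part of any named model.

        # Sketch: trust as a discrete probability distribution over outcomes,
        # estimated from observation counts with an add-one (Laplace) prior.
        import math
        from collections import Counter

        OUTCOMES = ["very bad", "bad", "ok", "good", "very good"]

        class DistributionTrust:
            def __init__(self):
                self.counts = Counter({o: 1 for o in OUTCOMES})  # add-one prior

            def observe(self, outcome):
                self.counts[outcome] += 1

            def distribution(self):
                total = sum(self.counts.values())
                return {o: self.counts[o] / total for o in OUTCOMES}

            def unpredictability(self):
                """Normalised Shannon entropy: near 1 means near-uniform,
                i.e. the 'unpredictable agent' case on the slide."""
                ps = self.distribution().values()
                return -sum(p * math.log(p) for p in ps) / math.log(len(OUTCOMES))

        d = DistributionTrust()
        for o in ["good", "good", "very good", "bad"]:
            d.observe(o)
        print(d.distribution())        # skewed towards the good outcomes
        print(d.unpredictability())    # below 1: some predictability already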

  18. Trust and reputation as beliefs
     Integrate with the agent architecture so that the agent can reason: add to the BDI model as beliefs.
     Socio-cognitive theory [Castelfranchi and Falcone, 2010]: “an agent i trusts another agent j to do an action α with respect to a goal φ”.
     Connects with intentional systems + speech acts.
     Key observation: trust is relative to an action, and to a goal.
     The ForTrust model formalizes this as “occurrent trust”, i.e. trust that holds here and now:
     OccTrust(i, j, α, φ) for trustor i + trustee j + action α + goal φ
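     To see what relativity to an action and a goal buys, here is a hypothetical rendering of the predicate as a queryable relation; it sketches the idea only and is not the ForTrust implementation.

        # Sketch: occurrent trust OccTrust(i, j, alpha, phi) as a relation
        # the agent can query. Illustrative names, not the ForTrust API.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class OccTrust:
            trustor: str   # i
            trustee: str   # j
            action: str    # alpha
            goal: str      # phi

        beliefs = {OccTrust("i", "j", "deliver_goods", "obtain_goods")}

        def trusts(i, j, action, goal):
            return OccTrust(i, j, action, goal) in beliefs

        print(trusts("i", "j", "deliver_goods", "obtain_goods"))  # True
        # Same trustee, different action: the trust does not carry over.
        print(trusts("i", "j", "store_goods", "obtain_goods"))    # False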

  19. BDI + RepAge
     RepAge uses probability distributions (discussed earlier).
     It uses L_BC, a belief language and logic, and adds a belief predicate S to represent the community belief.
     If the reputation of agent j playing the role seller is
     Rep(j, r, [0.6, 0.1, 0.1, 0.1, 0.1]) with r = seller,
     where the probability distribution is over the finite set
     { VBadProduct, BadProduct, OkProduct, GoodProduct, VGoodProduct },
     then the belief set is:
     S(buy(j), VBadProduct, 0.6, seller)
     S(buy(j), BadProduct, 0.1, seller)
     S(buy(j), OkProduct, 0.1, seller)
     S(buy(j), GoodProduct, 0.1, seller)
     S(buy(j), VGoodProduct, 0.1, seller)
     and these beliefs are available for standard BDI reasoning.
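     The expansion from Rep to the S predicates is mechanical: one predicate per element of the outcome set, carrying that element's probability. A sketch with plain tuples standing in for L_BC belief formulae (assumed names, not the RepAge API):

        # Sketch: expand Rep(j, role, distribution) into the predicates
        # S(buy(j), label, p, role). Tuples stand in for L_BC formulae.
        LABELS = ["VBadProduct", "BadProduct", "OkProduct",
                  "GoodProduct", "VGoodProduct"]

        def expand_rep(agent, role, distribution):
            assert len(distribution) == len(LABELS)
            return [("S", f"buy({agent})", label, p, role)
                    for label, p in zip(LABELS, distribution)]

        for belief in expand_rep("j", "seller", [0.6, 0.1, 0.1, 0.1, 0.1]):
            print(belief)   # ('S', 'buy(j)', 'VBadProduct', 0.6, 'seller'), ...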

  20. Reliability
     The issue so far has been representation, but to what extent can the value be relied upon?
     Some models add a reliability measure ∈ ℝ, capturing:
     opinion count
     opinion variance (higher variance ⇒ less reliable)
     opinion recency
     opinion source credibility
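     One plausible way to fold the four factors into a single score; the multiplicative combination, the exponential recency decay and every constant below are invented for illustration and not taken from any particular model.

        # Sketch: a reliability score in [0, 1] from opinion count, variance,
        # recency and source credibility. All constants are invented.
        import math
        import statistics
        import time

        def reliability(opinions, now=None, half_life=30 * 86400):
            """opinions: list of (value, unix_timestamp, credibility in [0, 1])."""
            if len(opinions) < 2:
                return 0.0                                  # too few to judge
            now = now if now is not None else time.time()
            values = [v for v, _, _ in opinions]
            count_f = 1 - 1 / len(opinions)                 # more opinions -> higher
            var_f = 1 / (1 + statistics.variance(values))   # more variance -> lower
            rec_f = statistics.mean(
                math.exp(-(now - t) * math.log(2) / half_life)
                for _, t, _ in opinions)                    # older -> lower
            cred_f = statistics.mean(c for _, _, c in opinions)
            return count_f * var_f * rec_f * cred_f

        now = time.time()
        ops = [(0.8, now - 86400, 0.9), (0.7, now - 5 * 86400, 0.8), (0.75, now, 1.0)]
        print(reliability(ops, now))   # recent, consistent, credible -> fairly high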
