The Rights of Agents

Christen Krogh⋆

NRCCL and Department of Philosophy, University of Oslo
P.O. Box 6702, 0130 Oslo, NORWAY
http://www.uio.no/~krogh/
[email protected]

Abstract. Many agents are conceived to achieve certain goals on behalf of their owners by interacting with other agents. In order to make these agents behave so as not to violate other agents' rights (or, more uncontroversially: the rights of the owners of other agents), we have to build into agents an attitude about such rights. To do this, we need a language to represent and reason about rights. The contribution of this paper is to offer formal and informal arguments in favour of employing the notion of rights when characterizing agent behaviour. A language for analysing rights-relations between two agents is proposed, and a sample analysis is performed on an example which is both relevant and realistic.

1 Teaser

In [26] Curtis E.A. Karnow proposes establishing a novel legal entity which he terms 'electronic persona' (epers, for short). Epers are programs [26, page 9], and – if we interpret Karnow correctly – quite similar to what are usually conceived as artificial agents in the multiagent systems community². Legal entities may be thought of as objects – real or imaginary – which enjoy and are subject to legal rights and duties. Examples of legal entities are human persons, and corporations. Karnow's proposal is based on an analogy between corporations and agents. Legally speaking, a corporation is not equated with any physical person. A corporation may be assigned rights and duties which neither its employees, managers, owners, nor other humans associated with it necessarily share. An example of this is that the European Court of Human Rights has allowed private corporations to invoke Article 10 of the European Convention for the Protection of Human Rights and Fundamental Freedoms (cf. [20, page 485]). Article 10 safeguards the right to 'freedom of expression' [11]. It is possible to violate the right to freedom of expression of a corporation without violating this right for any of its human associates. The parallel between corporations and agents is highlighted by considering that just as a human may instantiate thousands of agents, she may initiate thousands of corporations. Karnow's rationale for proposing that agents (epers) be given status as legal entities seems to be that he seeks to formulate a legal instrument which may serve in the process of safeguarding certain human rights (i.e. rights for humans) such as privacy.

⋆ On sabbatical leave from SINTEF Information Technology. This work has been carried out within the terms of reference of ESPRIT Basic Research Project no. 6471, MEDLAR II, and ESPRIT Basic Research Working Group no. 8319, ModelAge.
² We are aware that this is not one single conception, but our imprecise attitude here bears no importance on the points we are about to make. Later, we will delimit, somewhat, the class of agents we are considering.

Karnow argues that a feasible way to achieve this is to grant agents (epers) similar rights of privacy. More concretely, agents (epers) should have the right to decline to produce information aside from key identification material. The idea is that such a right for agents (epers) might ensure that no-one can legally determine the goals of the owners of the agents by, for instance, opening them up for analysis. Interestingly, we note that this proposed right to decline to produce all kinds of information is in direct conflict with the principle of veracity argued for by Genesereth and Ketchpel in e.g. [13]. This conflict is most clearly seen when considering Genesereth's and Ketchpel's view that the principle of veracity is guaranteed by the fact that (for their software agents) "An agent can always state its inputs, outputs, and definitions with confidence, and it can nest its conjectures inside statements of its beliefs." ([13, page 50]). As such, Karnow's proposal harmonises better with e.g. Rosenschein's [41]: "We want them [the agents] to be secretive at times, not revealing all their information" [41, page 793].

Curtis Karnow is a lawyer, and we consider his proposal particularly interesting in light of this fact. In a sense, his proposal constitutes a jurisprudential call to arms to invade a domain which has so far been ruled by scientists and engineers. The realisation of agents and multiagent systems has traditionally been conceived as not much different from that of other computerized entities. The degrees of autonomy which are strived for within certain classes of agents may, however, necessitate other considerations. Nonetheless, we will not analyse the legitimacy of Karnow's proposal, nor defend it, however interesting it may be. We will rather be concerned with the development of a conceptual vocabulary initially conceived for legal analysis, which we argue may fruitfully be applied for analysing multiagent systems. Rather than escalate computer systems to a novel legal status and then apply legal analysis to them, we attempt to import analytical jurisprudential tools to computer science, and then apply them. Thus it should be noted that in this paper, the notion of rights is used in a rather abstract and loose fashion. The notion of rights we will employ is not directly connected to any specific legal or moral framework, even though we shall draw on analogues from e.g. English law.

Below, we shall first delimit the class of agents (and thus multiagent systems) we are considering. We shall then consider some methodological issues concerned with how we conceive the characterization of agents and multiagent systems. Thereafter, we shall give an example intended to highlight some problems that may arise in large, distributed, heterogeneous, multiagent systems such as the internet. We will offer a preliminary analysis of the problems, and will use the incompleteness of this analysis as an argument for reconstructing a formal theory of rights which we intend to demonstrate may be instrumental in a superior analysis.

2 What's an agent, anyway?

Agents may be many things. Attempts to find one central common denominator of operative or theoretical conceptions of agents in recent publications on the topic (e.g. [52], [50]) will probably fail. Though unpretentious and technically useful, we find the definition of (software) agents used in e.g. [13] too strong: "An entity is a software agent if and only if it communicates correctly in an agent communication language such as ACL." In particular, we object to the use of the term 'correctly' in the above definition, as it seems to imply that either a program is an agent and (communication-wise) does what it ought to do, or it isn't an agent. Considering that the only task some agents are conceived to carry out is to communicate, we find that this excludes a class of entities to which we want to ascribe agenthood. We will, however, not replace the above definition with our own; we will rather offer a delineation of a class of agents with which we will be concerned. The agents with which we will be concerned are (i) self-centered in that they do not assume a global plan or controller, (ii) self-motivated in that they are not completely determined by their input, (iii) interacting in that they are not completely determined by their internal constituents, (iv) heterogeneous in that we make no assumption about them having the same constituents, and (v) persistent in that they have prolonged existence – they do not come into existence and immediately disappear. For all practical purposes the reader may think of agents as being programs, even though we consider it conceivable that hybrid systems may also benefit from the analysis we shall undertake. We would like to stress that many entities falling outside this delineation may be considered qualified for agenthood. We will, however, not be concerned with these.

2.1 Anthropomorphising

We are supporters of the tradition that describes agents in human terms such as action, beliefs, goals, and intentions, assuming that these terms correspond to well-defined internal and/or external states of the agents. We agree with, for instance, the view stated in [43]. We do not, however, use human terms because we consider it plausible (or even remotely possible) that artificial agents actually do act, have beliefs, or form goals in any human sense. We do it because we consider it natural and fruitful to use some of these terms in such a manner. We argue that it is natural because we tend to use the terms in question when analysing analogous human behaviour or constituents: he believed such and such, he intended to do such and such, etc. Employing the terms on non-persons, we need not invent a novel vocabulary. We can use existing terms, relying on a conventional and informal understanding of them. There is a danger, we should add, in ascribing too much when borrowing terms from the human sphere. For this reason, we insist that each such term should correspond to a well-defined internal and/or external state of the agents. A well-defined internal state corresponding to an agent believing that $A$ could for instance be that $A$ is a member of a datastructure called 'beliefs', or it could be that $Bel(A)$ is/can be/will be proved by some inference engine belonging to the agent (a sketch of the first reading is given below). It is fruitful to ascribe such human terms when we gain fresh insights into the constituents, behaviour, or otherwise, of agents by using them. The fruitfulness of a term is best demonstrated. That one particular term can naturally and fruitfully be applied within multiagent systems may easily become the subject of a whole paper. Indeed, part of the contribution of this paper is to argue convincingly that the term 'rights', as in e.g. 'having a right', may be so employed. This presupposes, however, the use of two terms into which 'rights' are commonly analysed: obligation and permission.
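To make the first reading concrete, here is a minimal sketch of what such a 'well-defined internal state' for belief could look like in a program. The class, its names, and the string representation of sentences are our own illustrative assumptions, not a construction from the paper:

```python
class Agent:
    """A toy agent whose 'beliefs' are simply members of a datastructure.

    Ascribing 'the agent believes A' is then shorthand for a well-defined
    internal state: A is a member of self.beliefs.
    """

    def __init__(self, name):
        self.name = name
        self.beliefs = set()   # the datastructure called 'beliefs'

    def add_belief(self, sentence):
        self.beliefs.add(sentence)

    def believes(self, sentence):
        # Bel(A) holds exactly when A is a member of the datastructure.
        return sentence in self.beliefs


hugin = Agent("Hugin-13")
hugin.add_belief("access(yggdrasil) is permitted")
assert hugin.believes("access(yggdrasil) is permitted")
```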
Since we are concerned with norms (obligations, permissions, rights, ...) we shall be using deontic logic – the logic of obligation and permission – as our primary analytical tool. The interested reader is referred to [16] and [17] for classical readings on deontic logic. Recent overviews of various applications of deontic logic in computer science may be found in [37] and [22]. Deontic logic is conventionally seen as a kind of modal logic. Conferring with this view, and applying modal logics for analysing agents, places us both within an ongoing trend in the multiagent systems society (cf. [51]), as well as within a long tradition of such analysis (e.g. [49], [18], [39], [3], just to mention a few).

3 Setting the stage

It is a dark and stormy night. Suddenly, the internet agent Hugin-13 attempts to access the document http://www.yggdrasil.no/info.html. The attempt at access is intercepted, however, by the guardian agent Mime-17, and Hugin is asked to identify itself. It does so, and requests permission to access /info.html and any further documents that are referred to there. Having checked that Hugin is not registered on the blacklists of Yggdrasil, Mime answers that it will charge 100 ebucks for 10 minutes access. Hugin agrees, the money is deposited, and Mime grants permission for accessing the files. Subsequently, Hugin proceeds to access Yggdrasil. Five minutes later, Mime contacts Hugin, and issues a message to the effect that because of heavy load, Hugin's access will be broken off shortly. Hugin protests, referring to the payment he put forward, and that he has only used five of his ten minutes, but to no avail.³

³ A note about the names used: in Norse mythology, the troll, Mime, guarded a magical source at the foot of the world-tree, Yggdrasil. This source was known to give supreme wisdom to anyone who drank from it. The king of the gods, Odin, gave one of his eyes to drink from this source. Hugin was one of Odin's two ravens that roamed the world collecting information for him [5].
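The scenario can be read as a small negotiation protocol. The following sketch renders it as a sequence of typed messages; the message kinds and fields are our own illustrative assumptions about one way such an exchange could be structured, not a protocol proposed in the paper:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    receiver: str
    kind: str      # e.g. 'identify', 'request', 'quote', 'pay', 'grant', 'revoke'
    content: dict

# The exchange from the example, step by step.
transcript = [
    Message("Mime-17", "Hugin-13", "identify", {}),
    Message("Hugin-13", "Mime-17", "request",
            {"resource": "/info.html", "scope": "linked documents"}),
    Message("Mime-17", "Hugin-13", "quote",
            {"price": 100, "currency": "ebucks", "minutes": 10}),
    Message("Hugin-13", "Mime-17", "pay",
            {"amount": 100, "currency": "ebucks"}),
    Message("Mime-17", "Hugin-13", "grant",
            {"resource": "/info.html", "minutes": 10}),
    # Five minutes in, the normative trouble begins:
    Message("Mime-17", "Hugin-13", "revoke",
            {"reason": "heavy load", "minutes_used": 5}),
]

for m in transcript:
    print(f"{m.sender} -> {m.receiver}: {m.kind} {m.content}")
```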

There are two aspects that should be addressed regarding this example before analysing it: the first concerns its realism, and the second its relevance. In [46] we read that any critical response to a philosophical position can be classified either as an "Oh yeah?" or a "So what?". Arguing for the realism of the example, we hope to counter the "Oh yeah?"s. Arguing for the relevance, we hope to counter the "So what?"s.

First the realism. Internet agents (robots, spiders, worms, ...) already roam the world wide web searching for information (cf. [40], e.g. [35]). Even if these robots are only embryo agents, they still fall into our delineated class of agents. Protocols have been established in order to enable information providers to communicate requests to visiting agents and thus (hopefully) constrain their behaviour (e.g. [27]). Several agents have been constructed in a manner so as to adhere to such constraints (e.g. [47]). These constraints, or regulations, are not enforced effectively: breach of regulations may occur (cf. [9]). It is not inconceivable that the protocols may be enhanced to put similar kinds of regulations on the providers. The establishment of currencies to be used on the information highway is not far off. Pilot testing of anonymised, public-key based means of payment, which both in principle and in practice may be employed by agents, is already taking place (e.g. by the techniques presented in [7]). That information services are not free in the example should surprise no-one (e.g. [10] – see [48] for a presentation). Even though effective means of interagent communication are hard to implement, there have been many proposals for such means (e.g. [13]), and we believe that some restricted kind of such will be applicable for heterogeneous agents in the near future.

Now the relevance. In the example, Hugin is permitted towards Mime that it access the file /info.html (and any further documents referred to there) at Yggdrasil. Furthermore, Hugin pays for this permission. Mime effectuates something which blocks Hugin's access. The question may be posed whether Mime violated a right of Hugin (or of the human/corporation Hugin represents). The example illustrates that situations may occur, the normative status of which is not immediately clear. Since the normative status of the situation is not clear, it is not clear what should subsequently be done by any of the agents: should Hugin complain to someone about Mime's behaviour, should Hugin forget about the incident, should Mime have acted otherwise? The example is relevant, we argue, because it highlights the need for cues for action and therefore detection of nuances in normative patterns.

4 First analysis

Let us analyse the example. The question 'did Mime violate a right of Hugin when it cut off Hugin's access?' cannot be answered by itself. Considering an analogous human situation, we would have to take a norm system into account – preferably a legal system in which claims for compensation may be put forward. Is the idea of a norm system applicable to multiagent systems? It is our conviction that it is. Social laws may constitute a norm system. Such 'laws', adequately modified to be applicable to artificial subjects, are currently discussed in relation to multiagent systems (cf. [42], [34], [41] and [6]). A norm system may in such a context be considered to be put into force by, for instance, 'the setting of standards' [41, page 793]: designers of agents come together and agree upon basic principles of good conduct. By 'put into force' we mean considered as holding, not considered always to be followed. It is important to see that this does not ensure compliance with the laws, nor presuppose a central arbiter. Another manner in which a norm system may be put into force than by 'the setting of standards' is through normal, human, legislation (as suggested in [26]).

Norm systems are conventionally analysed in terms of obligations and permissions. We have argued in favour of the legitimacy of employing the notions of obligation and permission in multiagent systems elsewhere [28]. There, we argue that the gain in applying these terms is seen most clearly when considering that by using the terms we are able to reason about situations we would otherwise be at a loss to represent. It should be noted, however, that we do not claim that ours is the only way of achieving this. We do, however, have objections towards the most immediately competing way of representing the situation: using natural necessity and possibility instead of obligation and permission. To say that it is possible for Hugin to access Yggdrasil seems to be too weak. It does not say anything about what we can or should expect from this being possible (it is even conceivable that it be accidentally possible). To say that it is necessary for Hugin to access Yggdrasil seems to be too strong. Assuming the Feys/von Wright principle T ($\Box A \to A$) for this necessity operator, we would have an embarrassing contradiction (since Mime does in fact see to it that Hugin does not access Yggdrasil).

We will choose to work with a hypothetical legal system whose domain is thought to be the behaviour of agents such as the ones described in our example. Let this legal system bear resemblance to English law. In such a (hypothetical) legal system, there would have to be an offer, an agreement, and a payment (quid pro quo) in order for a legally binding agreement to be established. Let us consider Mime's action in light of this. There certainly seems to be an (initial) agreement that Hugin access Yggdrasil. There also seems to be an offer (Mime offers Hugin access in return for money). There even is a payment by Hugin for the privilege of accessing Yggdrasil. In this situation, under the hypothetical legal system, there would be a legally binding agreement (or contract) between Hugin and Mime. What more could be said? Under (English) contract law we would at least be able to say the following:

(1) Hugin is permitted that it access Yggdrasil for the period of 10 minutes
(2) Mime is obliged not to see to it that Hugin is blocked from this access

Since Mime violated (2), we argue that there would be a legal right that is violated (under this hypothetical legal system). This corresponds well with our intuitions about how to interpret the example. The most interesting point, however, is not whether Mime violated a right (and thus committed a wrongdoing) or not, but that situations such as the one described above may occur. It is our position that once we acknowledge this, we should also acknowledge the need for characterizing the class of such situations in order to be able to design systems in a manner that will support this or the opposed behaviour. As designers of agents like Hugin and Mime, we would like to have a means of considering the situations which we should take as prerequisites (or antecedents, if you like) for various actions. In other words, we need a theory for characterizing various (classes of) acts of our agents as obligatory or permitted or not. It is our conviction that having such a theory would contribute to the so-called agent theories.

Could such a theory be built by a simple formalization? Our answer is no, but for rhetorical reasons, we will go through an attempt to build a theory this way. We will employ a multi-modal extension to classical propositional logic. Let $A$ be a sentence expressing a state of affairs. Let $O$ be the (modal) operator of standard deontic logic (i.e. a modal logic of type KD⁴). We read sentences of the form $OA$ as 'it is obligatory that $A$'. As is usual, we will define the dual of this operator: $PA =_{def} \neg O \neg A$. We read $PA$ as 'it is permitted that $A$'. Let $E_i$ be an operator of a minimal action logic in the tradition of [24] and [39], of type ET. We read $E_i A$ as 'the agent $i$ sees to it that $A$'. We specify one such operator for each agent to be considered (i.e. $E_i$, $E_j$, $E_k$, ...). Axiomatizations of KD and ET logics may be found, for instance, in [8]; a standard presentation is sketched below. Combining them is a straightforward matter. Letting $A$ stand for the sentence 'Hugin accesses the files of Yggdrasil', we write (1) in symbolic form: $P(E_h A)$; to be read as 'it is permitted that $h$ (i.e. Hugin) sees to it that Hugin accesses the files of Yggdrasil'. As we saw above (2), we found grounds to argue that it was obligatory that Mime did not see to it that it was not the case that Hugin accessed the files of Yggdrasil. We write this as $O\neg(E_m \neg A)$.

⁴ Note that we are using Lemmon codes [30] similar to the one found in [8] for naming the logics.
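For readers who want the two component logics spelled out, the following schemas are a textbook-style summary in the Lemmon-code nomenclature the paper refers to (KD is the smallest normal modal logic containing D; ET is the smallest classical modal logic containing T). This is our summary of standard material along the lines of Chellas [8], not a reproduction of the paper's own axiomatics:

```latex
% KD: a normal modal logic for the deontic operator O
\begin{align*}
  \text{(K)}  &\quad O(A \to B) \to (OA \to OB) \\
  \text{(D)}  &\quad OA \to \neg O \neg A
               \qquad\text{(equivalently } OA \to PA\text{)} \\
  \text{(N)}  &\quad \text{from } \vdash A \text{ infer } \vdash OA \\[4pt]
% ET: a classical (non-normal) modal logic for each action operator E_i
  \text{(T)}  &\quad E_i A \to A
               \qquad\text{(what is seen to, is the case)} \\
  \text{(RE)} &\quad \text{from } \vdash A \leftrightarrow B
               \text{ infer } \vdash E_i A \leftrightarrow E_i B
\end{align*}
```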

Are the two formalisations we obtained logically compatible? Recalling that we assumed the logic KD for $O$, and ET for $E_h$ and $E_m$, they are. Are there other such expressions that are compatible with the ones we have already established to hold between Hugin and Mime? Yes, there are. We see, for instance, that $O\neg(E_h \neg A)$ and $P(E_m A)$ are logically compatible with the two expressions above. How many other such expressions are there that are compatible with the ones we have chosen, and does it matter? First, we would say that it does matter. As we shall see later, the (compatible) expression $P(E_m A)$ causes some problems for the preliminary analysis. Furthermore, as (multi)agent (systems) designers, we would like to have access to all such possibilities, and compare them with existing (pseudo-)legal conventions (e.g. social laws), and then incorporate the ability of our agents to act in particular ways on noticing certain patterns. In other words: we would like a theory that gives us an exhaustive characterization of the various normative relations. A simple formalization like the one above does not supply us with this. Should such a theory be supported by legal or philosophical tradition? We have no nice knockdown arguments in favour of this, only a conviction that if our intuitions can be supported by findings and positions in philosophy or law, so much the better.

5 The theory of normative positions

The theory we will put forward is called the theory of normative positions. The virtues of this theory and its pros and cons as a theory of rights have been amply discussed elsewhere (cf. [32], and the more recent [14]). We will only give a brief sketch of it here.

At the beginning of the century, a legal theorist, Wesley Newcomb Hohfeld ([19]), outlined a theory of rights by offering a set of concepts which he termed fundamental legal concepts. These concepts were intended to serve as the smallest common denominators in jurisprudential reasoning. Hohfeld described these concepts in terms of norms and action. Unfortunately, Hohfeld did not have access to the tools available to modern logicians, and even though arguably analytically talented (cf. [2]) he does not offer us a formal tool applicable for studying rights. Some 40 years later, a Swedish logician named Stig Kanger⁵ gave an explication of the Hohfeldian fundamental legal conceptions in terms of deontic logic and action logic [23]. The idea was that the obligation operator of a standard deontic logic together with an action operator would enable the expression of all of Hohfeld's fundamental legal concepts. One such relation, the claim-right of an individual $i$ towards another individual $j$ that a certain state of affairs obtains, is explicated by Kanger as the obligation that $j$ see to it that this state of affairs obtains. In [25], Kanger expanded his theory to encompass so-called atomic positions. Here, he attempts to achieve a complete formulation of what normative relations (or positions) may hold between two individuals. This work was continued in the doctoral thesis of Lars Lindahl [31]. Apart from discussing the relations between various individuals, Lindahl is also concerned with the positions relating to only one individual. A one-agent normative position, relative to Lindahl's analysis, regulates the act of one person. Two-agent positions regulate the (possibly joint) acts of two persons, and so on. Below, we will be concerned with the two-agent individualistic (non-joint actions) positions as a means of formalising the rights relations between two individuals. In [21], Andrew J.I. Jones and Marek Sergot offer an application of the theory of normative positions for modeling the rights of human users of particular computer systems. Ours is the first attempt we know of to apply the theory for analysing purely inter-agent relationships.

⁵ A philosopher who, even if largely unknown outside Scandinavia, is quite renowned. Some authors (e.g. [12]) credit Stig Kanger with publishing the first proposal for the ubiquitous model theory of modal logic usually attributed to Saul Kripke.

5.1 Expressing normative positions

The formal apparatus involved in obtaining the normative positions is the same as we have employed above: a multimodal extension to propositional logic by means of the modal operators $E_m$, $E_h$, and $O$. A normative position may be considered as a statement about how the acts of one (or more) person(s) are obligatory or permitted with respect to one state of affairs; i.e. the obligations and permissions of the person(s) involved regarding this particular state of affairs. The full set of normative positions is considered to give all possible normative positions for the persons involved. Formally, each normative position is considered to be maximal, and all the elements in the full set of possible normative positions are considered to be mutually exclusive.

We will use Makinson's (cf. [33]) terminology for specifying normative positions. Makinson calls the constituents of normative positions maxiconjunctions. Let the choice set $(\pm A)$ denote the set of sentences resulting from applying confirmation ($+$) to the sentence $A$, and negation ($-$) to the sentence $A$, namely $\{A, \neg A\}$. Let the set of maxiconjunctions of a choice set denote the set of sentences we get by considering all possible conjunctions of the members of the initial set, where each element is maximal and all elements are mutually exclusive, and where redundant and inconsistent members are removed. With an element being maximal, we mean that adding a conjunct from any of the other elements will either make the first element inconsistent, or be redundant. We denote the maxiconjunctions of $(\pm A)$ by writing $[\![\pm A]\!]$. The set of maxiconjunctions resulting from the choice set $(\pm A)$ is (surprise, surprise): $\{A, \neg A\}$. We see that the criteria of maximality and exclusiveness are satisfied.

By prefixing a modal operator to such a set of maxiconjunctions, we denote the set resulting from prefixing each element of the maxiconjunction set with the operator. E.g. $\Box[\![\pm A]\!]$ denotes the set $\{\Box A, \Box \neg A\}$. Employing the choice-set/maxiconjunction notation on this again, e.g. $[\![\pm \Box[\![\pm A]\!]]\!]$, we obtain a set of sentences where each is qualified by $\Box$ or its negation, and each sentence is maximal, and where the sentences are mutually exclusive. The exact nature of the set will depend on which logic governs the operator $\Box$.

The set of one-agent normative positions specified by Lindahl [31] may be characterized in this manner by the expression $[\![\pm O[\![\pm E_i[\![\pm A]\!]]\!]]\!]$. Assuming (as we have before) the logic ET for the operator $E_i$, and KD for the operator $O$, there are exactly 7 normative positions. Note that we use the expression $\nabla_i A$ as shorthand for $\neg E_i A \wedge \neg E_i \neg A$. We read $\nabla_i A$ as '$i$ is passive with respect to $A$'. The 7 one-agent positions are:

(3) $O(E_i A)$
(4) $O(E_i \neg A)$
(5) $O(\nabla_i A)$
(6) $O\neg(E_i A) \wedge P(E_i \neg A) \wedge P(\nabla_i A)$
(7) $P(E_i A) \wedge O\neg(E_i \neg A) \wedge P(\nabla_i A)$
(8) $P(E_i A) \wedge P(E_i \neg A) \wedge O\neg(\nabla_i A)$
(9) $P(E_i A) \wedge P(E_i \neg A) \wedge P(\nabla_i A)$
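The count of exactly 7 can be checked mechanically. Under ET the three act alternatives $E_i A$, $E_i \neg A$, $\nabla_i A$ are mutually exclusive and jointly exhaustive, so a maxiconjunction amounts to choosing which alternatives are permitted, and seriality (the D axiom) forces at least one to be permitted. The following sketch enumerates the positions under exactly these assumptions; the encoding is ours, not the paper's:

```python
from itertools import product

ALTS = ["Ei A", "Ei ¬A", "∇i A"]   # mutually exclusive, jointly exhaustive

def render(permitted):
    """Write a position as its maxiconjunction over the three alternatives."""
    if len(permitted) == 1:
        # Only one alternative permitted: equivalent to O(...) of it.
        return f"O({permitted[0]})"
    return " ∧ ".join(
        f"P({a})" if a in permitted else f"O¬({a})" for a in ALTS
    )

positions = []
for signs in product([True, False], repeat=3):
    permitted = [a for a, s in zip(ALTS, signs) if s]
    if permitted:                    # axiom D: something must be permitted
        positions.append(render(permitted))

for p in positions:
    print(p)
print(len(positions), "one-agent normative positions")
assert len(positions) == 7
```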

We note that each element in this set is maximal, in that adding a conjunct from any of the other elements will either be redundant, or make the element inconsistent. We further note that the elements are mutually exclusive.

We are not only interested in single agents, but also in relations between agents. Indeed, it is the normative relations between two agents we are investigating. For two agents, where their acts are considered as individualistic (i.e. we do not consider joint actions), we find that we may obtain the set described by Lindahl [31] by taking the cartesian product of the two sets specifying their respective one-agent normative positions. Letting $\otimes$ express an operation of cartesian product, we find that we may express the set of individualistic two-agent positions by the following expression:

(10) $[\![\pm O[\![\pm E_i[\![\pm A]\!]]\!]]\!] \otimes [\![\pm O[\![\pm E_j[\![\pm A]\!]]\!]]\!]$

We assume that inconsistent elements are removed from the set specified by (10). Upholding the assumptions about the logics employed, we find that there are exactly 35 individualistic two-agent normative positions. As mentioned above, it is one of the aims of the theory of normative positions to give a principled way of formulating a complete characterization of the various obligations and permissions of the agents involved. The full set of individualistic two-agent normative positions may be found in the appendix. Now is the time to turn back to the example, and find out whether we are occupied with simple combinatorics having no bearing on the problem at hand, or whether we are able to say something interesting about the normative relationships of agents in multiagent systems.
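Why the 49 products collapse to 35 can likewise be checked mechanically. With the T axiom for both action operators, the joint states $(E_h A, E_m \neg A)$ and $(E_h \neg A, E_m A)$ are contradictory, and a product position survives exactly when every alternative it permits for one agent is jointly realizable with some alternative it permits for the other. The sketch below encodes this consistency filter, and its last lines anticipate the search performed in the second analysis (section 6); the encoding and the filter formulation are ours, chosen to reproduce the counts stated in the text:

```python
from itertools import product

# A one-agent position is the set of permitted act alternatives,
# written +: Ei A, -: Ei ¬A, 0: ∇i A (passivity).
one_agent = [frozenset(s) for s in
             [{'+'}, {'-'}, {'0'}, {'+', '-'}, {'+', '0'},
              {'-', '0'}, {'+', '-', '0'}]]

def jointly_realizable(x, y):
    # Eh A forces A while Em ¬A forces ¬A (axiom T), and vice versa, so
    # the cross combinations (+,-) and (-,+) are contradictory.
    return (x, y) not in [('+', '-'), ('-', '+')]

def consistent(sh, sm):
    # Every permitted alternative needs a deontically acceptable world,
    # i.e. a realizable joint state with something the other agent's
    # position permits.
    return (all(any(jointly_realizable(x, y) for y in sm) for x in sh) and
            all(any(jointly_realizable(x, y) for x in sh) for y in sm))

pairs = [(sh, sm) for sh, sm in product(one_agent, one_agent)
         if consistent(sh, sm)]
print(len(pairs), "of", 7 * 7, "product positions are consistent")
assert len(pairs) == 35

def render(s, agent):
    alts = [f"E{agent} A", f"E{agent} ¬A", f"∇{agent} A"]
    mark = dict(zip("+-0", alts))
    if len(s) == 1:
        (x,) = s
        return f"O({mark[x]})"
    return " ∧ ".join(f"P({mark[x]})" if x in s else f"O¬({mark[x]})"
                      for x in "+-0")

# Section 6 looks for positions literally containing the conjuncts
# P(Eh A) and O¬(Em ¬A); exactly three qualify (the text's (37), (42), (47)).
hits = [(sh, sm) for sh, sm in pairs
        if "P(Eh A)" in render(sh, "h") and "O¬(Em ¬A)" in render(sm, "m")]
assert len(hits) == 3
```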

6 Second analysis

Let us focus our newly acquired logical microscope with large magnification on the example from section 3. We agreed that there was reason to suppose that Hugin was permitted to see to it that $A$ (i.e. $P(E_h A)$), and that Mime was obliged not to see to it that not-$A$ (i.e. $O\neg(E_m \neg A)$). It seems like we should be looking for a two-agent normative position which contains the conjunct $P(E_h A) \wedge O\neg(E_m \neg A)$. Considering the list of 35 positions (cf. the appendix) we see that there are 3 positions which suggest themselves given this constraint (namely (37), (42), and (47)):

(11) $\{P(E_h A) \wedge O\neg(E_h \neg A) \wedge P(\nabla_h A)\} \wedge \{P(E_m A) \wedge O\neg(E_m \neg A) \wedge P(\nabla_m A)\}$
(12) $\{P(E_h A) \wedge P(E_h \neg A) \wedge O\neg(\nabla_h A)\} \wedge \{P(E_m A) \wedge O\neg(E_m \neg A) \wedge P(\nabla_m A)\}$
(13) $\{P(E_h A) \wedge P(E_h \neg A) \wedge P(\nabla_h A)\} \wedge \{P(E_m A) \wedge O\neg(E_m \neg A) \wedge P(\nabla_m A)\}$

Which is the correct one? That depends upon the legal system we metaphorically use to explain agent behaviour. Bearing an analogue to English law in mind, we find that from Hugin's perspective (13) seems to adequately represent the situation. The reason for this is that Hugin seems to be free (or permitted) to see to it that it accesses Yggdrasil, to see to it that it does not access Yggdrasil, and to be passive with respect to accessing Yggdrasil.

What about Mime? As can be seen, the expression characterizing Mime's actions with respect to $A$ stays the same throughout the alternatives above. The second conjunct of it ($O\neg(E_m \neg A)$) states that it is obligatory that Mime does not see to it that Hugin does not access Yggdrasil. This is simply our formalization of (2). The third conjunct ($P(\nabla_m A)$) states that Mime is permitted to be passive with respect to $A$; i.e. it is permitted that Mime does not see to it that Hugin does not access Yggdrasil, and it is permitted that Mime does not see to it that Hugin accesses Yggdrasil. The first part of this seems reasonable: Mime is permitted not to interfere with Hugin's access. The second part is reasonable on the assumption that Hugin is permitted not to access Yggdrasil, even when he has paid for his permission to do it.

The first conjunct of the (part-)position characterizing Mime's actions, however, causes us problems (i.e. $P(E_m A)$). It states that Mime is permitted to see to it that Hugin accesses Yggdrasil. Seemingly innocent, the austerity of this dawns on us when considering the case where Hugin changes its mind about accessing Yggdrasil. Consider the case where you go into a fast food restaurant, order a meal, and pay for the meal in advance. You sit down and commence eating until – suddenly – you remember that you have an appointment. Naturally, you get up without finishing the meal, and start to leave the restaurant. The waiter, however, stops you. He points out that since you have paid for the meal, he is permitted to see to it that you eat it, which he informs you that he will. He then starts forcing you to eat. Such a situation is absurd, and quite similar to the situation between Hugin and Mime. Thus it seems that we should not permit Mime to see to it that Hugin accesses Yggdrasil. We are forced to arrive at the conclusion that our preliminary analysis of the situation undertaken in section 4 is insufficient.

Let us consider the set of positions in the appendix again. We find that the only other plausible alternative (the reader is invited to assure herself of this) is (45), stating about Mime's actions that it is obligatory that Mime is passive as regards Hugin accessing Yggdrasil (i.e. $O(\nabla_m A)$). Note that since $O(\nabla_m A)$ implies $O\neg(E_m \neg A)$, our initial analysis was not wrong, it was just not complete. In summary, we hold that the following position (45) correctly characterizes the normative relationships between the agents Hugin and Mime with respect to $A$ (in terms of Kanger's and Lindahl's conceptions, gives the correct rights-relation):

(14) $\{P(E_h A) \wedge P(E_h \neg A) \wedge P(\nabla_h A)\} \wedge \{O(\nabla_m A)\}$

7 Summing up

We argue the superiority of this last analysis (14) over the analysis performed in section 4 on the basis of the discussion just undertaken. We have been able to point out, by means of the theory, what should be the normative relation which holds between the two agents. Further arguments can be found in the fact that the theory of normative positions enables us to categorize patterns of agent behaviour as being in conflict with, or being coherent with, one particular representation of what normative relationships are said to hold between the two agents. Designing agents or multiagent systems, we are in constant need of cues for action: what should be done when. Some such cues for action should have as prerequisite the detection of finely nuanced patterns of violation or non-violation of norms. It is for this purpose the theory of normative positions may fruitfully be applied in characterizing and subsequently specifying (parts of) multiagent systems and agent behaviour. We find that the amount of normative nuances expressible balances well with the formal simplicity of both the theory itself and its constituents. We hope that this trade-off may invite others to employ the theory for analysis or specification.

A possible criticism of our approach could be that the number of normative nuances is rather large, and that the difference between some of them may seem rather subtle. We will counter this critique by claiming that most of these nuances may turn out to be important. The analysis above, where we argue that Mime should not be permitted to see to it that Hugin access the file, exemplifies such an important subtle nuance. We round up this section, rather poetically, by quoting Dorothy Parker [38, page 103]:

  When I was young and bold and strong
  Oh, right was right, and wrong was wrong
  My plume on high, my flag unfurled
  I rode away to right the world
  Come out you dogs and fight, said I
  And wept there was but once to die
  But I am old, and good and bad
  Are woven in a crazy plaid

As multiagent systems are maturing, or growing older, we believe that complex situations may occur as a result of prolonged interplay of agents. We hold that norms governing such situations often will be 'woven in a crazy plaid'. The theory of normative positions we have described above is a tool to facilitate straightening out such 'crazy plaids'.

8 Related work

We note that the related notions of norm, obligation, and (social) law have recently attracted some interest from the multiagent systems community. In [43], obligations are analysed as a means of committing an agent to an action. In [44], a dyadic version of deontic logic (the logic of obligation and permission) is discussed as a means of specifying the behaviour of agents. In [45], the same dyadic deontic logic is employed for a similar specification of behaviour in simulations of animal societies. In [1], a time-relativised deontic logic is placed in a rather rich formal setting whose sum is an attempt at forming the basis of a theory of changing attitudes.

Social laws are discussed in [42], [34], [41], [6], and [4]. Even though the mark of norms is commonly considered to be violability, social laws as considered in [42] are unbreakable. We find, however, that the authors foresee the need for considering the (in our opinion) more interesting situations where such hard constraints (i.e. necessities) cannot be enforced (cf. [42, page 281]). This is studied in more detail in [34]. In [4], situations where it is 'necessary' to violate a social law in order to achieve legitimate goals are discussed. It is perhaps interesting to note that one may view compliance with social laws in [42] as global heuristics in distributed problem-solving environments, while the violation of social laws seems to play the same function in [4]. A different approach is taken in [41], where social laws are discussed from the point of view of 'social' engineering of multiagent systems.

A more general viewpoint regarding the relationship between norms and agents is taken by John McCarthy in [36], where he states that if the program is to be protected from performing unethical actions, its designers will have to build in an attitude about that. A similar consideration forms the core of our argument in favour of the use of the term 'rights': in order to avoid that one agent violates the 'rights' of another agent (or rather of the owner of that agent), we need to 'build in an attitude about that'.

9 Further work

Above, we have applied a rather simple formal apparatus in reformulating the theory of normative positions. We admit to having consciously suppressed aspects of the theory and its constituents which we have spent some time criticising in [15] and [29]. In these publications, we have proposed an enhanced set of deontic logics designed to better handle norms pertaining to one particular individual, and norms directed from one agent towards another. We are convinced that the more sophisticated logics described there may improve the analysis we have performed in this paper.

There is a distinction to be made concerning the application of the theory of normative positions within agents, and about agents. By applying the theory within agents, we mean that the agents may use it to reason about what they should do concerning going into a 'contract' with another agent and thus subsequently establishing a 'rights relation'. A particularly pertinent use of the theory in this context is that an agent may request that another agent confirm whether one particular normative position is the one that should regulate their subsequent (interactive) behaviour. For use about agents, the theory may be employed either as an analytical tool for designers to agree about rules of conduct for their agents, or as a means of determining whether a situation upon which two owners of two agents disagree actually complies with implicit assumptions about how the agents should have acted. Above, we have not been clear about the distinction between the use of the theory within or about agents. Making this distinction more explicit, we foresee that it will be easier to incorporate our ideas into other logical frameworks.

Another point which calls for further research concerns the representation of norms in a multiagent system. Recall that we considered it feasible to anthropomorphise agents only if the human terms employed correspond to well-defined internal and/or external states of the agents. It seems clear that norms regulating the behaviour of several agents (for instance during their interaction) cannot merely correspond to some internal state. Merely representing an obligation within each agent would mean that the deletion of such an obligation (from e.g. an agent's obligation-base) would cause the obligation to disappear. Thus, norms regulating the actions of an agent must (at least in part) correspond to some external state. How this external state should be constituted (or viewed as constituted) in a practical realisation of a multiagent system is an open question. In this paper, we have offered several possibilities, such as a document agreed upon by designers of agents, or normal human legislation (in traditional legal sources such as law books). The ramifications of choosing any one such solution would be an interesting research topic.
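As a purely illustrative rendering of the point that norms must correspond, at least in part, to an external state, consider the following sketch, in which an agent consults a shared norm store that it cannot delete from unilaterally. This is one conceivable realisation among the possibilities mentioned above, with names of our own choosing, not a proposal from the paper:

```python
class NormStore:
    """External state holding norms in force; lives outside any single agent."""
    def __init__(self):
        self._norms = set()

    def enact(self, bearer, norm):
        self._norms.add((bearer, norm))

    def holds(self, bearer, norm):
        return (bearer, norm) in self._norms

    def norms_for(self, bearer):
        return {n for b, n in self._norms if b == bearer}

class Agent:
    def __init__(self, name, store):
        self.name = name
        self.store = store            # shared, external normative state
        self.obligation_base = set()  # internal cache only

    def sync(self):
        self.obligation_base = self.store.norms_for(self.name)

store = NormStore()
store.enact("Mime-17", "O¬(Em ¬A)")    # from the contract in section 3

mime = Agent("Mime-17", store)
mime.sync()
mime.obligation_base.clear()            # deleting the internal copy...
assert store.holds("Mime-17", "O¬(Em ¬A)")   # ...does not delete the norm
```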

Acknowledgments

I would like to thank Andrew J.I. Jones, Henning Herrestad, and Lee Bygrave for making themselves available in numerous discussions about these and related matters.

A Two-agent individualistic normative positions

(15) $O(E_h A) \wedge O(E_m A)$
(16) $O(E_h A) \wedge O(\nabla_m A)$
(17) $O(E_h A) \wedge P(E_m A) \wedge O\neg(E_m \neg A) \wedge P(\nabla_m A)$
(18) $O(E_h \neg A) \wedge O(E_m \neg A)$
(19) $O(E_h \neg A) \wedge O(\nabla_m A)$
(20) $O(E_h \neg A) \wedge O\neg(E_m A) \wedge P(E_m \neg A) \wedge P(\nabla_m A)$
(21) $O(\nabla_h A) \wedge O(E_m A)$
(22) $O(\nabla_h A) \wedge O(E_m \neg A)$
(23) $O(\nabla_h A) \wedge O(\nabla_m A)$
(24) $O(\nabla_h A) \wedge O\neg(E_m A) \wedge P(E_m \neg A) \wedge P(\nabla_m A)$
(25) $O(\nabla_h A) \wedge P(E_m A) \wedge O\neg(E_m \neg A) \wedge P(\nabla_m A)$
(26) $O(\nabla_h A) \wedge P(E_m A) \wedge P(E_m \neg A) \wedge O\neg(\nabla_m A)$
(27) $O(\nabla_h A) \wedge P(E_m A) \wedge P(E_m \neg A) \wedge P(\nabla_m A)$
(28) $O\neg(E_h A) \wedge P(E_h \neg A) \wedge P(\nabla_h A) \wedge O(E_m \neg A)$
(29) $O\neg(E_h A) \wedge P(E_h \neg A) \wedge P(\nabla_h A) \wedge O(\nabla_m A)$
(30) $O\neg(E_h A) \wedge P(E_h \neg A) \wedge P(\nabla_h A) \wedge O\neg(E_m A) \wedge P(E_m \neg A) \wedge P(\nabla_m A)$
(31) $O\neg(E_h A) \wedge P(E_h \neg A) \wedge P(\nabla_h A) \wedge P(E_m A) \wedge O\neg(E_m \neg A) \wedge P(\nabla_m A)$
(32) $O\neg(E_h A) \wedge P(E_h \neg A) \wedge P(\nabla_h A) \wedge P(E_m A) \wedge P(E_m \neg A) \wedge O\neg(\nabla_m A)$
(33) $O\neg(E_h A) \wedge P(E_h \neg A) \wedge P(\nabla_h A) \wedge P(E_m A) \wedge P(E_m \neg A) \wedge P(\nabla_m A)$
(34) $P(E_h A) \wedge O\neg(E_h \neg A) \wedge P(\nabla_h A) \wedge O(E_m A)$
(35) $P(E_h A) \wedge O\neg(E_h \neg A) \wedge P(\nabla_h A) \wedge O(\nabla_m A)$
(36) $P(E_h A) \wedge O\neg(E_h \neg A) \wedge P(\nabla_h A) \wedge O\neg(E_m A) \wedge P(E_m \neg A) \wedge P(\nabla_m A)$
(37) $P(E_h A) \wedge O\neg(E_h \neg A) \wedge P(\nabla_h A) \wedge P(E_m A) \wedge O\neg(E_m \neg A) \wedge P(\nabla_m A)$
(38) $P(E_h A) \wedge O\neg(E_h \neg A) \wedge P(\nabla_h A) \wedge P(E_m A) \wedge P(E_m \neg A) \wedge O\neg(\nabla_m A)$
(39) $P(E_h A) \wedge O\neg(E_h \neg A) \wedge P(\nabla_h A) \wedge P(E_m A) \wedge P(E_m \neg A) \wedge P(\nabla_m A)$
(40) $P(E_h A) \wedge P(E_h \neg A) \wedge O\neg(\nabla_h A) \wedge O(\nabla_m A)$
(41) $P(E_h A) \wedge P(E_h \neg A) \wedge O\neg(\nabla_h A) \wedge O\neg(E_m A) \wedge P(E_m \neg A) \wedge P(\nabla_m A)$
(42) $P(E_h A) \wedge P(E_h \neg A) \wedge O\neg(\nabla_h A) \wedge P(E_m A) \wedge O\neg(E_m \neg A) \wedge P(\nabla_m A)$
(43) $P(E_h A) \wedge P(E_h \neg A) \wedge O\neg(\nabla_h A) \wedge P(E_m A) \wedge P(E_m \neg A) \wedge O\neg(\nabla_m A)$
(44) $P(E_h A) \wedge P(E_h \neg A) \wedge O\neg(\nabla_h A) \wedge P(E_m A) \wedge P(E_m \neg A) \wedge P(\nabla_m A)$
(45) $P(E_h A) \wedge P(E_h \neg A) \wedge P(\nabla_h A) \wedge O(\nabla_m A)$
(46) $P(E_h A) \wedge P(E_h \neg A) \wedge P(\nabla_h A) \wedge O\neg(E_m A) \wedge P(E_m \neg A) \wedge P(\nabla_m A)$
(47) $P(E_h A) \wedge P(E_h \neg A) \wedge P(\nabla_h A) \wedge P(E_m A) \wedge O\neg(E_m \neg A) \wedge P(\nabla_m A)$
(48) $P(E_h A) \wedge P(E_h \neg A) \wedge P(\nabla_h A) \wedge P(E_m A) \wedge P(E_m \neg A) \wedge O\neg(\nabla_m A)$
(49) $P(E_h A) \wedge P(E_h \neg A) \wedge P(\nabla_h A) \wedge P(E_m A) \wedge P(E_m \neg A) \wedge P(\nabla_m A)$

References

1. John Bell. Changing attitudes. In Michael J. Wooldridge and Nicholas R. Jennings, editors, Intelligent Agents. Springer-Verlag, Berlin, 1995.
2. Nuel Belnap. Backwards and forwards in the modal logic of agency. Philosophy and Phenomenological Research, LI, 1991.
3. Nuel Belnap and Michael Perloff. Seeing to it that: a canonical form for agentives. Theoria, 54:175–199, 1989.
4. Will Briggs and Diane Cook. Flexible social laws. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, pages 688–693, Montreal, Québec, Canada, August 20–25 1995.
5. Tor Åge Bringsværd. Odin. Gyldendal, Oslo, 1991.
6. Hans-Dieter Burkhard. On fair controls in multiagent systems. In A. G. Cohn, editor, Proceedings of ECAI'94, pages 254–258, Amsterdam, the Netherlands, 1994. John Wiley and sons.
7. David Chaum. Achieving electronic privacy. Scientific American, August 1992. Electronically available at http://www.digicash.com/publish/sciam.html.
8. Brian F. Chellas. Modal Logic – an Introduction. Cambridge University Press, Cambridge, 1980.
9. D. Eichmann. Ethical web agents. In Second International World-Wide Web Conference: Mosaic and the Web, pages 3–13, Chicago, IL, October 1994. Electronically available at http://rbse.jsc.nasa.gov/eichmann/www-f94/ethics/ethics.html.
10. Encyclopedia Britannica. http://www.eb.com/, 1994. Electronic version of Encyclopedia Britannica available over WWW.
11. European Convention for the Protection of Human Rights and Fundamental Freedoms. Rome, 1950. (Entered into force 3 September 1953).
12. Dagfinn Føllesdal. Stig Kanger's work in logic. In D. Prawitz and D. Westerståhl, editors, Proceedings from Stig Kanger Memorial Symposium. North Holland Publishing Company, Netherlands, 1995.
13. M.R. Genesereth and S.P. Ketchpel. Software agents. Communications of the ACM, 37(7):18–21, 1994. Electronically available at http://logic.stanford.edu/sharing/papers/agents.ps.
14. Henning Herrestad. Formal Theories of Rights. PhD thesis, NRCCL, University of Oslo, 1995.
15. Henning Herrestad and Christen Krogh. Obligations directed from bearers to counterparties. In Proceedings from ICAIL'95, Washington, May 1995. ACM Press. Electronically available at http://www.uio.no/~christek/papers/ICAIL95.ps.
16. Risto Hilpinen, editor. Deontic Logic: Introductory and Systematic Readings. D. Reidel Publishing Company, Dordrecht – Holland, 1971.
17. Risto Hilpinen, editor. New Studies in Deontic Logic. D. Reidel Publishing Company, Dordrecht – Holland, 1981.
18. Jaakko Hintikka. Knowledge and Belief. D. Reidel Publishing Company, Dordrecht – Holland, 1963.
19. Wesley Newcomb Hohfeld. Fundamental legal conceptions as applied in judicial reasoning. Yale Law Journal, 1913.
20. European Human Rights Report, volume 12. 1990. Autronica AG v Switzerland.
21. Andrew J.I. Jones and M. Sergot. On the characterization of law and computer systems: The normative systems perspective. In J.-J. Meyer and R.J. Wieringa, editors, Deontic Logic in Computer Science – Normative System Specification, Chichester, 1993. John Wiley.
22. Andrew J.I. Jones and M. Sergot, editors. Deon 94 – Proceedings from the Second International Conference on Deontic Logic in Computer Science, Oslo, 1994. Complex – TANO.
23. Stig Kanger. New foundations for ethical theory. In Risto Hilpinen, editor, Deontic Logic, pages 36–58. D. Reidel Publishing Company, Dordrecht – Holland, 1971. (Published as a privately distributed pamphlet in 1957).
24. Stig Kanger. Law and logic. Theoria, 38:105–132, 1972.
25. Stig Kanger and Helle Kanger. Rights and parliamentarism. Theoria, 6(2):85–115, 1966.
26. Curtis E.A. Karnow. The encrypted self: Fleshing out the rights of electronic personalities. Journal of Computer and Information Law, XIII(1):1–17, 1994.
27. Martijn Koster. Guidelines for robot writers, 1994. Electronically available at http://web.nexor.co.uk/users/mak/doc/robots/guidelines.html.
28. Christen Krogh. Obligations in multiagent systems. In Proceedings of the Fifth Scandinavian Conference on AI, Trondheim, May 29–31 1995. IOS Press. Electronically available at http://www.uio.no/~christek/papers/SCAI95.ps.
29. Christen Krogh and Henning Herrestad. Getting personal. In Mark A. Brown and José Carmo, editors, Proceedings from DEON'96, Sesimbra, Portugal, 1996. Springer Verlag. Forthcoming.
30. E.J. Lemmon. An Introduction to Modal Logic. American Philosophical Quarterly, monograph series vol. 11, Basil Blackwell, Oxford, 1977. Written in 1966, in collaboration with Dana Scott. Edited by Krister Segerberg.
31. Lars Lindahl. Position and Change. D. Reidel Publishing Company, Dordrecht – Holland, 1977.
32. Lars Lindahl. Stig Kanger's theory of rights. In 9th International Congress on Logic, Methodology, and the Philosophy of Science, pages 1–21, Stockholm, August 7–14 1991.
33. David Makinson. On the formal representation of rights relations. Journal of Philosophical Logic, 15:403–425, 1986.
34. Kei Matsubayashi and Mario Tokoro. A collaboration mechanism on positive interactions in multi-agent environments. In Proceedings from IJCAI'93, pages 346–351, San Mateo, California, 1993. Morgan Kaufmann Publ.
35. O. McBryan. GENVL and WWWW: Tools for taming the web. In O. Nierstrasz, editor, Proceedings of the First International World Wide Web Conference (WWW94), CERN, Geneva, 1994. Electronically published as http://www.cs.colorado.edu/home/mcbryan/mypapers/www94.ps.
36. John McCarthy. What has AI in common with philosophy? Manuscript for talk given at IJCAI'96. The manuscript is electronically published at http://www-formal.stanford.edu/jmc/, 1995.
37. J.-J. Ch. Meyer and R.J. Wieringa, editors. Deontic Logic in Computer Science: Normative System Specification. John Wiley, London, 1993.
38. Dorothy Parker. The Best of Dorothy Parker. Methuen, London, 1952.
39. Ingmar Pörn. Action Theory and Social Science. D. Reidel Publishing Company, Dordrecht – Holland, 1977.
40. J. Pottmeyer. Renegade intelligent agents. In SIGNIDR V – Proceedings of special interest group on Networked Information Discovery and Retrieval, McLean, VA, August 4, 1994, August 1994. Quotes from presentation slides available at http://www.wais.com/SIGNIDR/Proceedings/SA3/SA3-2.htm.
41. Jeffrey S. Rosenschein. Consenting agents: Negotiation mechanisms for multi-agent systems. In Proceedings from IJCAI'93, pages 792–798, San Mateo, California, 1993. Morgan Kaufmann Publ.
42. Y. Shoham and M. Tennenholtz. On social laws for artificial agent societies. In Proceedings of AAAI-92, 1992.
43. Yoav Shoham. Agent-oriented programming. Artificial Intelligence, 60:51–92, 1993.
44. Geoff Staniford. Multi-agent system design: Using human societal metaphors and normative logic. In A. G. Cohn, editor, Proceedings of ECAI'94, pages 298–293, Amsterdam, the Netherlands, 1994. John Wiley and sons.
45. Geoff Staniford and Ray Paton. Simulating animal societies with adaptive communicating agents. In Michael J. Wooldridge and Nicholas R. Jennings, editors, Intelligent Agents. Springer-Verlag, Berlin, 1995.
46. Nicholas Sturgeon. What difference does it make whether moral realism is true? In Norman Gillespie, editor, Spindel Conference 1986: Moral Realism, pages 115–141. The Southern Journal of Philosophy, Volume XXIV Supplement, Department of Philosophy, Memphis State University, 1986.
47. Christophe Tronche. The WWWMM robot, 1994. Electronically available at http://www-ihm.lri.fr/~tronche/W3M2/.
48. Edward J. Valauskas. Britannica online: Redefining encyclopedia for the next century. Database Magazine, 18(1), 1995. Excerpts from article available at http://www-lj.eb.com/dbmag.html.
49. Georg Henrik von Wright. Deontic logic. Mind, 60:1–15, 1951.
50. M. Wooldridge, J. P. Müller, and M. Tambe, editors. Intelligent Agents Volume II – Proceedings of the 1995 Workshop on Agent Theories, Architectures, and Languages (ATAL-95), Lecture Notes in Artificial Intelligence. Springer-Verlag, 1995.
51. Michael J. Wooldridge and Nicholas R. Jennings. Agent theories, architectures, and languages: A survey. In Michael J. Wooldridge and Nicholas R. Jennings, editors, Intelligent Agents. Springer-Verlag, Berlin, 1995.
52. Michael J. Wooldridge and Nicholas R. Jennings, editors. Intelligent Agents. Springer-Verlag, Berlin, 1995.

This article was processed using the LaTeX macro package with LLNCS style.
