A Multi-Agent Based System for Parallel Image Processing


M. Lückenhaus and W. Eckstein
Forschungsgruppe Bildverstehen (FG BV), Chair IX, Institute of Computer Science, Munich University of Technology, Orleansstr. 34, D-81667 München, Germany

ABSTRACT

Parallelization of image analysis tasks is a key to processing huge image data in real time. Suitable subtasks for parallel processing have to be extracted and mapped to components of a distributed system. Basically, this task should be done by the processing system and not by the user, as automatic parallelization allows flexible resource management and reduces the time needed to develop image analysis programs. This paper describes a multi-agent based system for planning and performing image analysis tasks within a distributed system. It illustrates a method for modeling image analysis tasks from the viewpoint of parallel processing and explains the special design requirements for parallelizing agents. Furthermore, we describe concepts for agent cooperation and for using the agents' ability to learn to allow long-term improvement of their planning and scheduling strategies. The presented image analysis system allows architecture-independent parallel processing of image analysis tasks with optimized resource management.

Keywords: parallel image processing, multi-agent system, architecture independence, automatic parallelization, task modeling

1. INTRODUCTION

Image analysis requires large amounts of memory and high CPU performance, especially when processing under time constraints. To cope with this problem, tasks may be parallelized. There have been several efforts in the past to find parallel algorithms for particular operators (e.g. the Fourier transform1 or threshold operators2) or to examine the parallelization of special tasks such as circle detection3 or contour tracking.4 But parallelization should not only affect the low level and should not be restricted to special tasks. This is particularly important when considering program development. Developing image analysis programs is a complex task, as the user has to experiment with different operators and different parameters to solve a given problem. This interactive technique is quite time-intensive, especially when developing parallel programs, as this requires additional effort for finding a suitable distribution of program modules and data. So there is a need for high-level concepts of architecture-independent, parallel image processing.

In this paper we present a multi-agent based system for automatic parallelization of image analysis tasks. Basically, it can be structured into three layers: at the top level, an application interface allows calling particular operators or passing image analysis programs for parallelization. A multi-agent system forms the second layer. The agents perform two tasks: planning the parallel processing and performing it. To do so, they make use of modules of the third layer, the object management, containing an image operator and an image object data base. The main focus of this paper lies on the design and concepts of the multi-agent system. The whole system provides an environment for developing and processing image analysis tasks within a distributed system. Several methods of parallel processing (task/data parallelism and pipelining) are implemented by this system so that they can all be exploited alike via one interface by an application.
The decision which method to use is left to the agent. The user need not bother about it, but may propose a certain concept. The agents reside on the various nodes of the distributed system. Every node must be represented by at least one agent so that tasks and data may be efficiently distributed within the whole system.

Other author information:
M.L.: Email: [email protected]; Telephone: +49 89 48095 114; Fax: +49 89 48095 203; Web page: http://wwwradig.informatik.tu-muenchen.de/people/lueckenhaus/lueckenh/lueckenhaus e.html
W.E.: Email: [email protected]; Telephone: +49 89 48095 117; Fax: +49 89 48095 203; Web page: http://wwwradig.informatik.tu-muenchen.de/people/eckstein.html

2. AGENT-BASED IMAGE PROCESSING

"Agent" has become one of the most popular catchwords in computer science in recent years. Unfortunately, depending on the specific context, the term is used with quite different meanings: from intelligent5 and social6 agents in the field of distributed artificial intelligence, market-oriented agents7 for computational economies, and agents for resource management in distributed systems8 to mobile agents for network management9 or internet applications10 (the latter also known as "spiders" or "robots"). Therefore, it is necessary to specify our understanding of agents before discussing the motivation for using them.

2.1. Definition of the term agent

An agent is a software module with the following properties:11

- an agent cycle (e.g. observing the environment, reasoning, planning further steps, executing, observing, ...) defines its basic functioning,
- an individual knowledge base contains the agent's knowledge about other agents and its environment in an explicitly represented form,
- comprehensive communication capabilities (e.g. different data exchange protocols, broadcasting, voting) help the agent to solve problems in a cooperative manner,
- the agent can interpret and change its role (e.g. coordinator of a negotiation, executing module) and is able to model its role as well as the roles of other agents.

These properties presume the agent's ability to reason about facts and draw conclusions from them. Thus the agent acts not only in a reactive way, but plans its actions and may learn from errors. Note that for our aim the agent need not be mobile, as it is more efficient to delegate a task from one agent to another than to migrate an agent from one node to another. A multi-agent system, then, is a community of agents that cooperate to solve common problems (here: the parallelization of an image analysis task).

2.2. Motivation for agent technology

Using agent technology for parallel image analysis has some evident advantages, especially in our case:

- The concurrent, cooperative working technique of a multi-agent system fits immediately to the distributed programming model used for parallel image processing. A multi-agent system consists of several individual modules running on a distributed system and cooperating to solve common problems. During parallel processing of an image analysis program, several individual image operators work in a distributed system and communicate via data streams to perform a common task. So it seems obvious to coordinate and perform operators by agents.
- Agents are suitable for efficient, distributed planning.12 Thus we will use them for planning the parallel processing of image analysis tasks.
- Agents can be used for the management of (distributed) resources.13 Therefore, we will use agents for efficiently distributing data and gathering results.
- Agents are suitable for the parallel processing of image analysis tasks (e.g. object tracking14). So the agents not only plan, but also perform the parallel processing.
- Agents are flexible and capable of learning. The flexibility can help to implement dynamic load balancing strategies for an optimal distribution of data and tasks. In addition, the ability to learn allows a long-term improvement of the agents' planning and scheduling strategies.

As we see, there are several reasons for using a multi-agent based approach. But using agents has drawbacks, too. The most important one is the additional overhead of the agents, e.g. caused by their communication, the management of their knowledge bases, or computation time for learning. We will see later how to reduce this overhead.

3. AGENT-BASED PARALLELIZATION

The first step of processing tasks within a distributed system is to choose the method of parallelism.

3.1. Methods of parallel processing

Generally, there are three basic concepts of parallelism: task parallelism, data parallelism, and pipelining (cf. figure 1), as well as combinations of these. Any high-level approach to parallel image processing should support all of these concepts alike. This allows choosing the most appropriate method for parallelizing a given task. For example, data parallelism should be used to parallelize filter operations working in the spatial domain, whereas pipelining subtasks is often an appropriate method in image sequence analysis. Moreover, different concepts of distributing tasks or data must be supported. This is particularly true of data parallelism, where the efficiency of parallel algorithms often strongly depends on the chosen data distribution.15
Figure 1. Methods of parallel image processing.
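To make the data-parallelism concept concrete, the following minimal sketch (not taken from the paper's implementation) splits an image into horizontal stripes, applies a point operator (thresholding) to each stripe on a worker pool, and merges the partial results in order. Point operators have no inter-pixel dependencies, so stripe-wise distribution is safe.

```python
# Minimal sketch of data parallelism: process image stripes concurrently.
from concurrent.futures import ThreadPoolExecutor

def threshold_stripe(stripe, t):
    # A point operator: each pixel is handled independently.
    return [[255 if p >= t else 0 for p in row] for row in stripe]

def parallel_threshold(image, t, workers=4):
    n = len(image)
    size = (n + workers - 1) // workers          # stripe height
    stripes = [image[i:i + size] for i in range(0, n, size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda s: threshold_stripe(s, t), stripes)
    merged = []
    for part in parts:                           # map() preserves stripe order
        merged.extend(part)
    return merged
```

For a filter with a local neighbourhood (e.g. an average filter), the stripes would additionally need overlapping border rows, which is exactly why the choice of data distribution matters.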

3.2. Modeling image analysis tasks

To plan the parallel processing of a task, the task must have been specified before. This specification consists of the image analysis program (e.g. written in a common programming language like C, or specified per drag-and-drop via a graphical user interface) and information about the dependencies of subtasks. The latter forms the basis for planning the parallel processing. The program code must be specified by the user, whereas the dependencies are derived automatically from the program specification by the system. This process leads to an internal representation (model) of the image analysis task used for its parallelization. Generally, image analysis tasks consist of a set of operators connected by data streams. Thus, the task model consists of two main elements: operators and an image analysis task graph.

Operator: An operator forms the smallest unit of an image analysis program and implements one processing step. The resource requirements of different operators depend on their functionality, which may range from simple tasks (e.g. "read image from file") to complex processes (e.g. "estimate state by Kalman filtering"). Operators are active objects within the image processing system. Their behaviour is modeled by a stream processing function defining the transformation of input stream elements into elements of the output stream (cf. figure 2). Elements of the stream belong to one of the following object classes:

- Object parameters: including image objects (area of definition plus one or more matrices of gray values), region objects, and polygon objects; they contain the data for the image processing.
- Control parameters: including integers, floating-point numbers, and strings; they control the processing and contain result values not belonging to the object parameter class, e.g. "number of pixels within a region" or "maximum of all gray values".

[Figure 2 depicts an operator (here: "invert") as a stream processing function with input and output streams of object and control parameters. Static attributes: name, source code (defines functionality), functionality class (e.g. io-function, image filter), average computation time, average resource requirements, probable successor. Dynamic attributes: parallelization, relations to other operators, applying agent, mapping (cpu identifier).]
Figure 2. Stream processing function modeling an operator.

Moreover, an operator is characterized by its attributes, which can be subdivided into static and dynamic ones. Static attributes are common to all operators of the same type (operators are of the same type if their names and source codes are equivalent), whereas dynamic ones depend on the individual call, the "instance", of an operator (cf. figure 2). The multi-agent system uses these attributes for mapping operators to hardware nodes (cpus). A set of operators forms the basis of an image analysis task.

Image analysis task graph: An image analysis task graph models an image analysis program with respect to parallel processing. It consists of a set of operators and their dependencies on each other. Dependencies between operators are modeled by relations over resources. Figure 3 shows the class hierarchy of resources as well as the typical relations between operators. The resources comprise image analysis objects (object parameters: images, region data, polygon data; control parameters: integers, floating-point numbers, strings, file contents) and system objects (hardware modules: processing nodes (cpus), i/o nodes, local and shared memory blocks; software modules: source code; synchronization points). Typical relations between operators are simple data transfer, data splitting, data merging, and pipelining. This results in a data-flow graph with operators as nodes connected by relations.
Figure 3. Resource objects and relations between operators.

Image analysis subtask: Image analysis tasks can be split into subtasks. These are represented by image analysis task subgraphs describing the subset of operators and their dependencies on each other. Under an abstract "black box" viewpoint, a subtask can be seen as a complex operator. Therefore, it can also be modeled by a stream processing function with the additional attributes "task subgraph", representing the internal structure of the subtask, and "order". The order of a subtask indicates its level of abstraction. A subtask of order 0 contains only operators; a subtask of order 1 contains at least one subtask of maximum order 0. The general rule is: a subtask of order i+1 contains at least one subtask of maximum order i. Allowing subtask abstraction reduces the complexity of task representations, as it allows details to be hidden.
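The task model above can be sketched in a few lines; the class and attribute names here are ours, not those of the actual implementation. The `order` property implements the rule that a subtask of order i+1 contains at least one subtask of maximum order i.

```python
# Illustrative sketch of the operator/subtask model of Section 3.2.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Operator:
    # static attributes (shared by all operators of the same type)
    name: str
    functionality_class: str      # e.g. "io-function", "image-filter"
    avg_time: float = 0.0
    # dynamic attributes (per instance, i.e. per call)
    applying_agent: Optional[str] = None
    mapping: Optional[int] = None  # cpu identifier

@dataclass
class Subtask:
    elements: list = field(default_factory=list)  # Operators and Subtasks

    @property
    def order(self):
        sub_orders = [e.order for e in self.elements if isinstance(e, Subtask)]
        return 0 if not sub_orders else max(sub_orders) + 1
```

A subtask containing only operators thus reports order 0, and nesting it inside another subtask raises the order by one.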

3.3. Distributed planning

The parallelization of tasks is done by simultaneously building up and working on the corresponding image analysis task graph during a distributed planning process.

Initialization: An agent receives the task specification and performs a first coarse segmentation by splitting it into at least two subtasks (sub1..subn) according to the following strategy: if the user has already specified at least one subtask, these user-defined subtasks are assigned to sub1..subn-1, whereas the rest of the program specifies subn. Otherwise the agent has to build up the subtasks itself: first, the agent extracts all m operators of the task that may immediately be started (an operator is ready to start when all necessary input resources are available). These define the beginnings of m independent subtasks. The agent chooses one and delegates the remaining m-1 to other agents for further subtask generation ("subtask mapping"). After that, beginning with the chosen start operator osi, the subtask is built up as follows: the agent examines all output relations of osi:

- If there is only one successor ooj connected by simple data transfer and ooj can be processed at the local node, the subtask is expanded by ooj. If ooj shows any hardware dependencies that determine another processing node, the current subtask is closed and ooj becomes the start operator of a new subtask subj that is delegated to an agent on the specific node.
- If a successor ooj is joined by data splitting, the current subtask is closed and ooj defines the start operator of a new subtask subj that is connected to the current one by the relation data splitting. The agent continues working on the new subtask and delegates the generation of further subtasks, beginning with the remaining successor operators of osi, to other agents.
- If a successor ooj is connected by data merging, the current subtask is closed and ooj becomes the start operator of a new subtask subj. If there is already an agent working on ooj (i.e., the generation of subj has already been started via another data path), subtask generation is completed. Otherwise the agent continues with the generation of subj.
- If there is no successor of osi, the current subtask is closed and the subtask generation stops.

Whenever an operator is added to the current subtask, the agent starts the operator refinement. This allows the processing of operators to begin already during the task graph generation.

Refinement of subtasks: Whenever a user-defined subtask is adopted, it has to be refined. This is done in the same way as generating subtasks during the initialization step described above. If there is no further possibility of subtask refinement, because the subtask is of order 0 and all contained operators show a sequential relationship, the agent starts the operator refinement.

Refinement of operators: The agent uses its knowledge about the attributes of an operator to parallelize the operator's processing. The "functionality class" determines the applicable methods of parallel processing; e.g. i/o-functions must be processed locally at the i/o-node without any parallelization, while most image filters in the spatial domain can be processed by data parallelism. If an operator instance (actual call of an operator) is suitable for further refinement, the agent parallelizes it by deriving new operator instances with modified input parameters. At this point, the agent decides which data distribution to use by considering the suggestion of the operator attribute "parallelization" and information about the accessible hardware. For example, an average filter called with an image of size 512×512 is transformed into 4 new calls of the average filter with 4 image stripes of size 512×128 (on a system with 4 processing nodes). Derived operator calls are mapped to other agents in order to process them. Of course, it is also possible that the agent decides to process one or more of them locally and not to delegate them ("operator mapping").
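The initialization step can be sketched as follows; the data structures and names are a simplification of ours, not the paper's implementation. `ready_operators` finds the operators whose inputs are all available, and `build_subtask` grows a subtask from a start operator along simple-data-transfer edges, closing it when a data-splitting or data-merging relation (where delegation to other agents would occur) or the end of the graph is reached.

```python
# Simplified sketch of subtask generation during initialization.

def ready_operators(ops, available):
    """ops: {name: set of required input resources}; available: set of resources."""
    return [name for name, needs in ops.items() if needs <= available]

def build_subtask(start, successors, relation):
    """successors: {op: [next ops]}; relation: {(op, next_op): kind}."""
    subtask, op = [start], start
    while True:
        nxt = successors.get(op, [])
        if len(nxt) == 1 and relation[(op, nxt[0])] == "simple":
            op = nxt[0]                 # expand along simple data transfer
            subtask.append(op)
        else:
            # data splitting/merging or no successor: close the subtask;
            # remaining successors would be delegated to other agents.
            return subtask
```

Hardware-dependency checks and the delegation itself are omitted here; they would close the subtask in the same way and hand the new start operator to another agent.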

3.4. Scheduling and parallel processing

As we have seen in the previous section, planning the parallel processing and performing the processing are done simultaneously. Therefore, scheduling strategies must already be considered during the planning phase. Here, we distinguish two cases: operator mapping and subtask mapping.

Operator mapping becomes necessary whenever the refinement of an operator is completed and the operator has to be processed. This is again done with the help of the operator attributes. First, the agent checks the operator's "functionality class" with regard to hardware dependencies (e.g. with operators for input/output). Second, the agent verifies whether another agent applies for the processing of the operator (attribute "applying agent"). Every agent controlling the processing of an operator checks the attribute "probable successor". This attribute determines probabilities for potential succeeding operators and defines resources that are shared by predecessor and successor (a typical example: an operator gen_psf_defocus generates a filter mask and returns it with the result image image_psf; another operator wiener_filter uses image_psf as input to perform a Wiener filtering). The agent registers itself as an applicant for the potential succeeding operators. If another agent works on the parallelization of one of these operators, it recognizes the application and delegates the processing of the operator directly to the applicant. The advantage of this method: resources shared by different operators need not move within the distributed system if a succeeding operator is processed on the same node as its predecessor. After verifying the attribute "applying agent", the agent compares the attributes "average processing time" and "average resource requirements" to the corresponding hardware information (load and performance of processing nodes, available amount of local memory) in order to decide the mapping of the operator to processing nodes. If this results in the local node, the agent keeps performing the operator; otherwise the processing is delegated to another agent on an appropriate external node.

Subtask mapping occurs whenever an agent delegates the generation, and therefore also the processing, of a new subtask to another agent. The agent has to choose a partner that resides on a cpu that is suitable for the processing. To decide this, the agent needs knowledge about the subtask, such as its average computation time, dependencies on specific hardware, or relations to other subtasks. At this stage, only the first (start) operator of the subtask is known. Complete information about the subtask could only be obtained by finishing the subtask generation, which is too time-consuming in many cases. On the other hand, the information of the start operator alone is often not enough to decide an efficient mapping. The solution is a compromise using a "look-ahead" of n operators. The agent examines at least n successor operators, without processing them, to get more information about the subtask. After this, the agent decides the mapping by the following criteria:

- hardware dependencies determining the processing cpu,
- agents applying for the processing (applying agents are preferred),
- relations to other subtasks that restrict the choice (e.g. if the subtask subj is connected to another subtask subi by a data merging relation over a resource that is time-consuming to transfer, such as image data, subj is mapped to the same cpu as subi or a neighbouring one),
- the presumable computation time of the subtask and the current load of the nodes.

operator1

data

operator2

data

operator3 data

data

operator5

data

operator6

operator4

actual task mapping agent 1 on cpu 1

agent 2 on cpu 2

agent 3 on cpu 3

agent 4 on cpu 4

agent 5 on cpu 5

agent 6 on cpu 6

processes

refines

processes

data parallelized

cooperates with

operator1

operator3

operator5

operator6,

agent 4, processes agent 4, processes

processes it

operator6

1

2

3

4

5

6

6

cooperates with operator6

6

Figure 4. Example snapshot of time-shared planning and processing.

6

When processing an image operator, the agent makes use of an operator data base that is available at every node. Figure 4 summarizes the concept of time-shared planning and processing.
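The operator-mapping decision described above can be condensed into a small sketch; the criteria follow the text, but the function and field names are ours. A hardware dependency fixes the node, an applying agent is preferred, and otherwise the least-loaded node whose free memory covers the operator's average requirement wins.

```python
# Hypothetical sketch of the operator-mapping decision of Section 3.4.

def map_operator(op, nodes):
    """op: dict with 'hw_node', 'applying_agent', 'avg_memory';
    nodes: {node_id: {'load': float, 'free_mem': int, 'agent': str}}."""
    if op.get("hw_node") is not None:          # e.g. i/o must stay at its node
        return op["hw_node"]
    for nid, info in nodes.items():            # an applicant is preferred
        if info["agent"] == op.get("applying_agent"):
            return nid
    feasible = [nid for nid, i in nodes.items()
                if i["free_mem"] >= op["avg_memory"]]
    return min(feasible, key=lambda nid: nodes[nid]["load"])
```

In the real system the load and memory figures would come from the resource managers of the object management layer rather than from a static dictionary.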

3.5. Improvement by learning

Agents have the capability of reasoning about actions and results. This can be used to improve the agents' scheduling and planning strategies in two respects: improving a given strategy, or choosing between different strategies.

The time-shared planning and processing strategy described above can be improved in the following way: whenever the planning component of the agent has to make a decision, it makes use of heuristics based on control values and weights. Control values represent elements of information about the agent's environment. Typical elements are: the performance of single processing nodes (cpus), the reliability of the interconnection network, the data rate of an interconnection between two cpus, and the evaluation of another agent's willingness to cooperate. Weights represent the importance of the different control values for a decision. Both control values and weights may be changed dynamically to reflect the agent's experiences; e.g. if an agent observes that an interconnection network is less reliable than assumed, the associated control value is adapted. Weights can be trained by repeatedly parallelizing the same task while slightly varying the weights. In any case, the agents gather information during the parallelization and rate the efficiency upon every completion of a task in order to adapt the content of their individual control values and weights. Furthermore, every agent provides information that is of common interest, such as the performance of processing nodes, to others via the blackboard.

Another method of improvement by learning supposes agents with variable planning methods, e.g. a static method where the complete image task graph is generated before processing it, and a dynamic one as described above. The user starts the same task several times, with the agent system alternating between scheduling strategies with every parallelization. The agents evaluate the different running times and rate the different strategies. By repeating this experiment with different representative tasks, the agents can give valuable hints on which strategy to use for a specific class of tasks (exemplary classes may be: tasks mainly using operators with data parallelism, tasks suitable for pipelining, tasks with or without i/o-operators, etc.).
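The second learning scheme reduces to recording run times per strategy and recommending the one with the lowest average. The following minimal sketch (our simplification, not the paper's implementation) captures just that bookkeeping:

```python
# Minimal sketch: rate scheduling strategies by observed running times.
from collections import defaultdict

class StrategyRater:
    def __init__(self):
        self.times = defaultdict(list)   # strategy -> observed run times

    def record(self, strategy, seconds):
        self.times[strategy].append(seconds)

    def best(self):
        # recommend the strategy with the lowest average running time
        return min(self.times, key=lambda s: sum(self.times[s]) / len(self.times[s]))
```

A per-task-class instance of such a rater would yield the "valuable hints" mentioned above.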

4. DESIGN OF THE MULTI-AGENT SYSTEM

Whenever developing a technique to parallelize tasks, one has to keep the additional overhead in mind. The time needed for performing the parallelization must not exceed the time gained by the faster parallel processing. Thus the multi-agent system must show certain characteristics to fulfill these time constraints.

4.1. Agent design: lean agents

To cope with the problem of overhead we have introduced the concept of "lean agents", which rests on two ideas: reduced capabilities (combined with delegation of functionality) and encapsulated object information.

Basically, all members of the multi-agent system have all the properties postulated before (agent cycle, knowledge base, flexible roles, extended communication abilities). But in order to minimize computation time, the agents are only equipped with the minimum of capabilities that they need to perform their actual tasks. If needed, extended functionality may be delegated to an agent. A minimal functionality reduces the number of potential actions in a situation and therefore allows the implementation of a lean and fast planning component.

Another concept that helps to reduce the agents' complexity uses encapsulated object information. The agent collects all information about its environment and tasks (especially about image operators and subtasks) in a private knowledge base. The maintenance of the knowledge base, particularly guaranteeing the consistency of dynamic information, causes a problematic overhead. To reduce this, the agent only keeps information exclusively concerning itself, such as its tasks or its evaluation of other agents' willingness to cooperate. Any information about an object that may be of interest to more than one agent is represented externally and maintained by object managers. Thus, we can say that the information is encapsulated by the associated object. When working with an object, the agent can get information about it via special services of the associated object manager. Attributes of operators can be retrieved via an external operator data base, information about image objects via an image object data base. Sharing object information amongst agents reduces the extent of the single agent's knowledge base and eases preserving data consistency, as any piece of information is stored only once. The drawback of this concept: the processing environment of the agents must support the concept of object-encapsulated information. The agents become more dependent on their environment and less autonomous.
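The encapsulated-object-information idea can be illustrated with a short sketch; the interface names are ours, not those of the HORUS data bases. Instead of caching operator attributes in every agent's knowledge base, agents query a single manager, so each attribute is stored exactly once and updates are visible to all agents immediately.

```python
# Illustrative sketch of object-encapsulated information.

class OperatorManager:
    """External operator data base: sole owner of shared operator attributes."""
    def __init__(self):
        self._attrs = {}                  # operator name -> attribute dict

    def register(self, name, **attrs):
        self._attrs[name] = attrs

    def get(self, name, attr):
        return self._attrs[name][attr]

class LeanAgent:
    """Keeps no private copy of shared data; queries the manager instead."""
    def __init__(self, manager):
        self.manager = manager

    def avg_time(self, op_name):
        return self.manager.get(op_name, "avg_time")
```

Because both agents below read through the same manager, an updated attribute is consistent for everyone without any synchronization of private knowledge bases.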

4.2. Cooperation concepts

In contrast to communities where agents work together representing different users and therefore may have different plans and interests, all members of our multi-agent system have one common goal (parallelizing an image analysis task efficiently) and cooperate to achieve it. Therefore, the only conflicts between agents that may occur concern shared resources, such as memory or cpu time. Cooperation mainly appears when delegating tasks to other agents or while gathering information about tasks. Since the multi-agent system is designed quite homogeneously, all agents have the same basic functionality, i.e., an agent may in principle solve a given problem alone without the help of other agents. Therefore, the number of agents that cooperate to solve a problem may vary with the problem size. This marks a fundamental difference to heterogeneous approaches (as they exist e.g. for planning in manufacturing systems16) where cooperation is principally necessary to solve the common problem (e.g. when a task can only be performed by an agent that represents a specifically equipped robot). Within the multi-agent system for parallel image processing, cooperation is only used where it leads to more efficient solutions. The cooperation uses a mechanism similar to the contract net protocol. Common contract net protocols follow the scheme "announce task", "collect bids for task", and "grant task". This is now extended by the option of "application", where agents may apply for the processing of a task as described above. Thus, before announcing a task, an agent looks on a blackboard to see whether there is already an appropriate applicant for it (cf. figure 5).

Figure 5. Communication paths and protocol.
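The extended contract-net scheme can be sketched as follows; the structure and names are ours, and the bidding is reduced to a single numeric bid per agent. The blackboard's application list is consulted first; only if no applicant exists is the task announced and the best (lowest) bid granted.

```python
# Sketch of the contract net protocol extended by "application".

def assign_task(task, blackboard, bidders):
    """blackboard: {task name: applicant agent}; bidders: {agent: bid function}."""
    applicant = blackboard.get(task)
    if applicant is not None:
        return applicant                      # direct grant, no announcement
    bids = {agent: bid(task) for agent, bid in bidders.items()}
    return min(bids, key=bids.get)            # grant to the best (lowest) bid
```

The applicant shortcut saves the announce/bid/grant round trip exactly in the case where shared resources already reside with the applying agent.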

5. IMPLEMENTATION ASPECTS

The concepts described so far are currently being implemented by integrating a parallelizing multi-agent system into the image analysis system HORUS. In the following section we take this exemplary implementation to illustrate how to design a multi-agent based image analysis system. The current prototype of the multi-agent system follows the shared-memory programming model, which allows fast and easy access to data. The implemented inter-agent communication library makes use of this advantage by transferring compact messages containing only the addresses of data instead of transferring the whole message content.

5.1. The image analysis system

The multi-agent based system can be structured into the following layers:

- The application layer contains any application that is based on image analysis tasks or operators. This may be a program in a high-level programming language making use of an image operator library. It may also be a development tool such as, in our case, Hdevelop, a tool with a graphical user interface for the interactive development of image analysis programs.17 Finally, any tool used to specify image analysis tasks runs within this layer.

- The agent interface layer is necessary to allow programs in different programming languages and tools of the application layer consistent access to components of the system. This layer contains modules for transforming data representations used in the application layer into internal ones used by the agents, and vice versa. Moreover, it contains all centralized modules, namely the blackboard mechanism, a name service for the inter-agent communication system, and an agent manager that gathers information about the state of the agents.
- The multi-agent system receives task descriptions and operator calls from the interface layer in order to perform the parallelization. For this, it makes use of modules of the next layer.
- The object management layer contains all objects necessary for parallel image processing, together with their managers. This includes an image object data base for managing all image objects (images, regions, polygon data) and an operator data base containing a library with more than 600 operators and information about these operators. Moreover, managers for system resources (e.g. memory) reside within this layer, providing the agents with information about the actual resource allocation and system load.

All layers are designed modularly to allow easy extension of the system.

5.2. Scalability of the system

Scalability is an important property of distributed systems and one of the big advantages of multi-agent based systems, due to the following facts:

- The size of a multi-agent system may easily be changed by adding or deleting single agents. This can be done dynamically. If more agents than currently available are necessary to solve a problem, new ones may be created. I.e., if an agent needs the help of others, but all accessible agents are working on other tasks and cannot help, it may initialize new agents, pass on its knowledge, and delegate tasks to them. On the other hand, if an agent is idle for a long time, it may decide to terminate. In that case the agent signals the termination via a broadcast message, provides its knowledge to all others on the blackboard, and stops working.
- Agents adapt flexibly to system extensions due to their capability of learning. If new members join the agent community or if the system hardware is extended, the agents may dynamically learn the changes without the need for reconfiguring and restarting the whole system. New members only need to check in to the communication system. All other agents update their list of external agents by comparing it from time to time with the central list of the agent name service. If the hardware configuration is extended, only a central configuration file must be changed. All agents update their knowledge about the hardware configuration by checking this file regularly.

6. CONCLUSION

In this paper we described concepts for a general-purpose image analysis system based on a multi-agent system for the automatic parallelization of image analysis tasks. We showed how image analysis tasks can be modeled with respect to their parallelization. The model uses two basic modules (operator and image analysis task graph) for a data-flow representation of image analysis tasks. Furthermore, we illustrated a multi-agent based concept for time-shared planning and performing of tasks. The specific design of the multi-agent system reflects the requirements of efficient parallel image processing. It is based on lean agents to reduce agent overhead and works with fast, simple cooperation schemes. The agents use knowledge about image analysis tasks and operators (e.g. probable successors or predecessors of operators, average computation times of operators) to create an efficient schedule and to decide the mapping of operators and subtasks with respect to the current system load. The implementation of a corresponding multi-agent system is currently in progress. The agents extend the already implemented image analysis system HORUS, which provides a library of more than 600 image operators, resource managers for image and memory objects, interfaces to common programming languages, and a tool with a graphical user interface for developing image analysis programs. HORUS currently works sequentially. By integrating the multi-agent system it becomes possible to automatically parallelize image analysis tasks and speed up the processing. At the moment, only an agent-based refinement of operators using data parallelism is supported. Future work will extend the multi-agent system to allow the parallelization of complete tasks by using different task and data distributions and further methods of parallel processing, such as pipelining.
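The data-parallel refinement of operators mentioned above can be illustrated with a simple sketch: an image (here a nested list of gray values) is split into horizontal stripes, a point operator (thresholding) is applied to the stripes in parallel, and the partial results are merged. The function names are ours, not those of HORUS.

```python
# Illustrative sketch of data-parallel operator refinement: split the
# image into row stripes, threshold the stripes in parallel, and merge.
from concurrent.futures import ThreadPoolExecutor


def threshold_stripe(stripe, t):
    """Apply a binary threshold to one stripe of rows."""
    return [[255 if px >= t else 0 for px in row] for row in stripe]


def threshold_parallel(image, t, workers=4):
    rows = len(image)
    size = max(1, (rows + workers - 1) // workers)
    stripes = [image[i:i + size] for i in range(0, rows, size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(lambda s: threshold_stripe(s, t), stripes))
    # Merging is a plain concatenation because thresholding is a point
    # operator; neighborhood operators would need overlapping stripes.
    return [row for part in parts for row in part]


img = [[10, 200], [130, 90], [255, 0], [128, 127]]
print(threshold_parallel(img, 128))  # [[0, 255], [255, 0], [255, 0], [255, 0]]
```

The result is independent of the number of stripes, which is exactly what makes this kind of refinement transparent to the application.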

REFERENCES

1. N. Jungclaus and M. Nolle, "Efficient implementation of FFT-like algorithms on MIMD systems," in Proceedings of EUSIPCO-94, 7. Europ. Sign. Proc. Conf., M. Holt et al., eds., vol. III, pp. 1625-1628, EURASIP, Lausanne, September 1994.
2. L. Cinque, N. Levialdi, and A. Rosenfeld, "Fast pyramidal algorithms for image thresholding," Pattern Recognition 28(6), pp. 901-906, 1994.
3. S. Kumar, N. Ranganathan, and D. Goldgof, "Parallel algorithms for circle detection in images," Pattern Recognition 27(7), pp. 1019-1028, 1994.
4. A. Ferreira and S. Ubeda, "Ultra-fast contour tracking, with applications to thinning," Pattern Recognition 27(7), pp. 867-878, 1994.
5. P. Maes, "Modeling adaptive autonomous agents," Artificial Life Journal 1(1-2), 1994.
6. J. S. Jürgen Müller, "Structured social agents," in Verteilte künstliche Intelligenz und kooperatives Arbeiten, 4. Internationaler GI-Kongreß Wissensbasierte Systeme, W. Brauer and D. Hernandez, eds., pp. 42-52, Springer-Verlag, Berlin, October 1991.
7. T. Mullen and M. P. Wellman, "Some issues in the design of market-oriented agents," in Intelligent Agents: Theories, Architectures, and Languages, vol. II, Springer-Verlag, Berlin, 1996.
8. A. Chavez, A. Moukas, and P. Maes, "Challenger: A multiagent system for distributed resource allocation," in Proceedings of the International Conference on Autonomous Agents '97, February 1997.
9. M.-A. Mountzia, "Intelligent agents in integrated network and systems management," in Proc. of the EUNICE'96 Summer School, September 1996.
10. D. Eichmann, "The RBSE spider - balancing effective search against web load," in First International Conference on the World Wide Web, May 25-27, 1994.
11. M. Luckenhaus, "Konzepte zur agentenbasierten Parallelisierung in der Bildanalyse," in Graduiertenkolleg Kooperation und Ressourcenmanagement in verteilten Systemen - Arbeits- und Ergebnisbericht zum ersten Fortsetzungsantrag im Frühjahr 1997, No. TUM-I9707 in technical report, pp. 67-75, Munich University of Technology, Munich, Germany, March 1997.
12. E. Ephrati and J. S. Rosenschein, "Multi-agent planning as the process of merging distributed sub-plans," in The National Conference on Artificial Intelligence, pp. 115-129, August 1994.
13. S. Groh and M. Pizka, "A different approach to resource management for distributed systems," in Proceedings of PDPTA'97, International Conference on Parallel and Distributed Processing Techniques and Applications, June 1997.
14. C. L. Tan, C. M. Pang, and W. N. Martin, "Transputer implementation of a multiple agent model for object tracking," Pattern Recognition Letters 16(11), pp. 1197-1203, 1995.
15. M. Nolle, G. Schreiber, and H. Schulz-Mirbach, "Efficient parallel algorithms for the extraction of image features with adjustable invariance properties," No. 5/94 in internal report, Technische Universität Hamburg-Harburg, Technische Informatik I, Hamburg, Germany, May 1994.
16. P. Levi and S. Hahndel, "Modeling distributed manufacturing systems," in Tutorial 'Task oriented Agent Robot Systems' at the 4. International Conference on Intelligent Autonomous Systems, pp. 25-32, 1995.
17. W. Eckstein and C. Steger, "Interactive data inspection and program development for computer vision," in Visual Data Exploration and Analysis III, G. G. Grinstein and R. F. Erbacher, eds., Proc. SPIE 2656, 1996.
