Comparing two approaches to modelling decentralised manufacturing control systems with UML capsules


R.W. Brennan (1), S. Olsen (1), M. Fletcher (2), and D.H. Norrie (1)
(1) Department of Mechanical and Manufacturing Engineering, University of Calgary
(2) Agent Oriented Software Ltd, Mill Lane, Cambridge, CB2 1RX, United Kingdom
[email protected], [email protected]

Abstract
In this paper, we describe two orthogonal methodologies for building decentralised manufacturing control systems. The fundamental building block of these control systems is the UML capsule stereotype, which can be used to design object-oriented software systems that are open, agile and able to manage real-time tasks, and so can provide varying degrees of 'intelligence' within a 21st century manufacturing environment. We also evaluate the merits of these two viewpoints using a discrete-event simulation experimental test bed.

1. Introduction
Recent advances in distributed computing have made it possible to move away from the traditional centralised, scan-based architecture of current industrial controllers (e.g., programmable logic controllers (PLCs)) towards realising the goal of holonic and multi-agent control in the real-time control domain. Although industrial computer hardware and communication technology has been advancing at a blinding pace in recent years, the software systems that could exploit the advantages of this technology in this domain have been lagging behind. In this paper, we evaluate the use of an emerging distributed process control model in combination with a well-established software engineering model to address this problem. We begin with a brief overview of these two models in section 2, then propose two orthogonal methodologies for building decentralised manufacturing control systems in section 3. Next, we describe our simulation-based approach for evaluating these two methodologies in section 4, and conclude with a discussion of our future research direction in section 5.

2. Real-time distributed control systems
In this section we describe two promising models for real-time distributed control. The first, described in section 2.1, emerged from the International Electrotechnical Commission's (IEC) open PLC language specification, IEC 61131-3, to address complex distributed systems using function blocks. The second, described in section 2.2, is an extension of the well-established software engineering model, the Unified Modelling Language (UML) [1], to manage object-oriented software development.

2.1 IEC 61499
The new IEC 61499 architecture for industrial process control and measurement systems [5] is receiving concerted attention from the academic and vendor communities. This architecture focuses on function blocks and how they operate within an open environment containing distributed hardware controllers upon which the function blocks execute their control/measurement algorithms. Moreover, the architecture provides a solid foundation for constructing the systems needed by manufacturing businesses for 21st century production, where batch sizes are decreasing, response times are getting shorter and the need to minimise wasted resources is ever more pressing.
With IEC 61499, the function block can be thought of as an "enhanced" object. Like recent object-oriented and agent-based models for manufacturing system control, the IEC 61499 function block shares many of the characteristics of the traditional objects and agents used to develop these applications. For example, a traditional object focuses on data abstraction, encapsulation, modularity and inheritance, while agents concentrate on artificial intelligence, modelling each other and inter-agent cooperation. The function block is enhanced through its recognition of two very specific kinds of messages: data messages (which one would expect of a traditional object) and event messages (which are used to schedule the execution of a function block's algorithms). The resulting focus on process abstraction and synchronisation makes this approach particularly suitable for the control of an "intelligent" real-time manufacturing environment that is concurrent, asynchronous and distributed. Further details on IEC 61499 and its relationship to object- and agent-oriented approaches can be found in [3].
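As a rough illustration of this data/event distinction (our own Python sketch; the class and method names are hypothetical and not part of IEC 61499), a function block can be pictured as an object whose data messages merely update its inputs, while its event messages trigger the execution of an algorithm:

# Hypothetical sketch of an IEC 61499-style function block:
# data messages update inputs; event messages schedule algorithm execution.

class FunctionBlock:
    def __init__(self):
        self.data_inputs = {}      # latest values delivered by data messages
        self.event_handlers = {}   # event name -> algorithm to execute

    def on_data(self, name, value):
        # A data message only updates state; nothing executes yet.
        self.data_inputs[name] = value

    def on_event(self, name):
        # An event message triggers (here: immediately runs) an algorithm.
        algorithm = self.event_handlers.get(name)
        if algorithm:
            algorithm(self.data_inputs)

fb = FunctionBlock()
fb.event_handlers["INIT"] = lambda data: print("initialised with", data)
fb.on_data("PV", 3)      # data arrives, but nothing runs
fb.on_event("INIT")      # only the event causes execution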

2.2 Real-time UML
Unfortunately, as yet, no coherent analysis and modelling philosophy exists to underpin the building of the decentralised control systems required for the new breed of manufacturing addressed by IEC 61499. One proposal to redress this imbalance is to model the control system at a conceptual level using well-established object-oriented technologies (such as UML) and then map these concepts onto function blocks to facilitate execution in the real-world environment [4].
The Real-Time Unified Modelling Language (RT-UML) was developed to deal with software systems that are characteristically time-critical, complex, event-driven and distributed. When using RT-UML to develop this type of manufacturing control system, a key concern is the architecture (or structural and behavioural framework) of the software operating within the system. When confronted with the problem of designing such a system with object-oriented or agent-based philosophies, object-oriented analysis and design (OOAD) in combination with UML has proven to provide an efficient methodology for supporting system development.
RT-UML can be thought of as an extension of UML; in other words, it provides a library of applied UML concepts that can be used for modelling the next generation of decentralised manufacturing control systems [7]. To support the design of such systems, RT-UML adds three new concepts to UML. The first, the RT-UML capsule, represents a concurrent, active software entity that can display location transparency across a number of hardware controllers in a manufacturing environment. Each capsule has associated member functions (behaviours) and attributes, with varying degrees of scope, to facilitate interaction. Capsules interact both with each other and with the controlled manufacturing processes (probably in their local vicinity) through one or more signal-based boundary objects called ports. A private state transition machine is used to handle faults and manage the execution of a capsule's functionality. Note that a capsule is an existing UML stereotype with suitable extensions to support real-time behaviour. The second, the RT-UML port, represents an object that implements a specific interface into and out of the capsule. A port mediates a capsule's interaction with the outside world, and there can be several ports per capsule depending on the distinct interaction roles the capsule has with external entities. Finally, RT-UML connectors are abstract communication channels that connect two ports and provide flexible mechanisms to "glue" capsules together into a dynamic structure.

Via these concepts, the environment and internal configuration of the capsule are decoupled from how the capsule is used. This leads to a higher degree of re-use for the capsules during software development cycles. The connections between ports illustrate how one capsule can affect others via direct communication. Recursive sub-capsules are possible, so a hierarchy of capsules can be used to model individual function block algorithms, execution control states or data values. Capsules thus solve many of the problems faced during the development of our particular genre of real-time software [7] by combining: (i) UML as a general-purpose software and business system analysis and design approach, (ii) a language for visually representing software elements and how they interact in real time, and (iii) role modelling to represent communication and design patterns between the software entities in a distributed process control system; e.g. role modelling helps specify collaboration along related connectors, and requirements for timing/sequencing among capsules.
To illustrate how RT-UML can be applied to function blocks, we provide an example of a standard "count-up" basic function block (E_CTU), as illustrated in Figure 1. This figure shows the interface and execution control chart for the IEC 61499 standard's E_CTU function block. This function block has the same basic functionality as the "count up" function block used in common PLC ladder logic: the CU event causes the COUNT algorithm to increment CV by one (CV := CV + 1) and set Q to TRUE if PV is reached (Q := (CV ≥ PV)).

Figure 1: An IEC 61499 E_CTU Function Block. (a) Function Block Interface; (b) Execution Control Chart.

An equivalent specification of the E_CTU function block, written in RT-UML notation, is shown in Figure 2. Figure 2(a) depicts the capsule's class name and its public interface in terms of its attributes, ports and behaviours. In Figure 2(b), state action properties are specified by an event condition, a slash ("/"), and an action list. As well, entry and exit actions for nested states are also specified.


For example, the transition from state “START” to state “CU” involves first reading the current value of PV (“START” exit action), then executing COUNT (transition action). Before the state returns to “START”, CUO is set (“CU” exit action). We now explore two design methodologies to map function blocks to capsules.
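To make this execution control behaviour concrete, the following minimal Python sketch (our own illustration, not taken from the paper or the standard) mimics the E_CTU semantics described above; the 65535 limit corresponds to the guard conditions in the execution control chart of Figure 1(b), and RESET is assumed here to clear CV and Q:

# Minimal sketch of the E_CTU behaviour described in the text.
# CU event: run COUNT (CV := CV + 1; Q := CV >= PV) and emit CUO.
# R event:  run RESET (assumed to clear CV and Q) and emit RO.

class ECTUCapsule:
    LIMIT = 65535  # upper bound on CV, from the ECC guard conditions

    def __init__(self, pv):
        self.pv = pv      # data input PV
        self.cv = 0       # data output CV
        self.q = False    # data output Q

    def on_cu(self):
        """Handle the CU input event."""
        if self.cv < self.LIMIT:
            self.cv += 1                  # COUNT algorithm
            self.q = self.cv >= self.pv
            return "CUO"                  # output event
        return None

    def on_r(self):
        """Handle the R (reset) input event."""
        self.cv = 0                       # RESET algorithm
        self.q = False
        return "RO"                       # output event

counter = ECTUCapsule(pv=3)
emitted = [counter.on_cu() for _ in range(4)]   # emits CUO each time; Q becomes True once CV >= 3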

Figure 2: RT-UML Equivalent of E_CTU. (a) Sub-capsule Interface; (b) Capsule State Machine.

3. Design methodologies
The IEC 61499 and RT-UML modelling concepts share many similarities, and there is a clear correspondence between IEC 61499 concepts and RT-UML capsules. IEC 61499 data and event interfaces are similar to RT-UML ports, as are IEC 61499 execution control charts to RT-UML state transition machines. This resemblance leads us to the conclusion that function blocks (at the execution level) and capsules (at the abstract level) are analogous. In this section we describe a pair of methodologies for designing decentralised manufacturing control systems based on mapping RT-UML to IEC 61499.
Our first methodology, illustrated in Figure 3, considers the function block to be equivalent to a capsule. For example, Figure 3(a) shows a class diagram of how a function block is modelled using capsules. A function block is composed of a function block (FB) capsule and an FB Body sub-capsule. Using RT-UML notation, and in the context of our E_CTU worked example, a function block is modelled as follows. End ports represent event connections (i.e. ports that connect to a capsule's state machine). In our scenario, there are two input events (CU and R) and two output events (CUO and RO), each assigned to an end port. Relay ports denote data connections (i.e. ports that connect to a sub-capsule). For E_CTU, there is one data input (PV) and two data outputs (Q and CV). A sub-capsule denotes the function block body (i.e. the combination of the algorithms and hidden data). Here, there are two algorithms (RESET and COUNT) and no hidden data. The state machine models the ECC in Figure 1(b).

Figure 3: Methodology #1 (Coarse-grained). (a) Class Diagram; (b) Collaboration Diagram.

For our second methodology, we model each function block as a UML component that encapsulates several independent capsules (each representing a constituent element of the function block) and provides a suitable interface to other components and the hardware environment. A component acts as an encapsulation of its subordinate objects, so that objects inside a component cannot have their state queried or changed by an object from outside the component. The component also provides all the appropriate interfaces to other related components. Component diagrams illustrate the organisation of, and dependencies among, the software components associated with the decentralised manufacturing control system. Function blocks are a suitable technology for constructing components; hence intra-component activities are represented using the IEC 61499 syntax. However, for increased semantic expressiveness, we propose that capsules be used (at a conceptual level) as a complementary modelling philosophy. This means that a mapping is needed between function blocks and the component/capsule model. We postulate that, for our second design methodology, a function block can be adequately modelled (at an initial level of decomposition) as three capsules: (i) a Head Capsule to represent the execution control chart of the function block (i.e. its states and transitions), (ii) a Body Capsule to denote any private/protected/public member methods within the function block, and (iii) a Data Capsule to represent the function block's private knowledge (encoded as internal data variables). The equivalent organisation of capsules in a function block (i.e. component) is illustrated in Figure 4.
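As a structural sketch of the two decompositions (hypothetical Python class names, our own illustration rather than the paper's notation), methodology #1 wraps each function block in a single FB capsule with an FB Body sub-capsule, whereas methodology #2 packages separate Head, Body and Data capsules inside one component:

# Hypothetical sketch of the two decompositions discussed above.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# --- Methodology #1 (coarse-grained): one capsule per function block ---
@dataclass
class FBBodySubcapsule:
    algorithms: Dict[str, Callable] = field(default_factory=dict)  # e.g. COUNT, RESET
    hidden_data: Dict[str, object] = field(default_factory=dict)

@dataclass
class FBCapsule:
    end_ports: List[str]      # event connections (e.g. CU, R, CUO, RO)
    relay_ports: List[str]    # data connections (e.g. PV, Q, CV)
    body: FBBodySubcapsule    # algorithms and hidden data; the ECC is the capsule's state machine

# --- Methodology #2 (fine-grained): three capsules inside a component ---
@dataclass
class HeadCapsule:
    states: List[str]         # execution control chart states and transitions

@dataclass
class BodyCapsule:
    methods: Dict[str, Callable] = field(default_factory=dict)

@dataclass
class DataCapsule:
    variables: Dict[str, object] = field(default_factory=dict)

@dataclass
class FBComponent:
    head: HeadCapsule
    body: BodyCapsule
    data: DataCapsule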


Figure 4: Methodology #2 (Fine-grained)

4. Evaluating the methodologies
The general objective of the research reported in this paper is to evaluate the two software modelling methodologies described in the previous section and, in particular, to gain insights into how these approaches perform in an environment consisting of distributed heterogeneous resources. For example, when distributing capsules across resources, each resource may have different capabilities (e.g., analogue-to-digital converters (ADC), digital-to-analogue converters (DAC), Ethernet, Controller Area Network (CAN), etc.); as well, each capsule will have specific processing requirements (e.g., the need to access ADC, DAC, Ethernet, etc.). As a result, the choice of how the capsules are deployed depends on matching requirements with capabilities, in addition to judging whether the resource has the capacity to handle the processing requirements. In this section, we describe the approach followed in this paper to evaluate this type of system.

4.1 The simulation model
The experimental system used for this paper is developed using the Arena discrete-event simulation package [6]. Arena provides an integrated development environment that supports full model specification, execution and statistical analysis. At the start of the simulation a fixed number of capsules enter the simulation and are sent to the resources. Since capsules have different requirements and resources have different capabilities, we introduce decision logic to determine where the capsules are sent for processing. This logic is shown in Figure 5(a). As noted previously, the decision as to where a capsule is sent is based on its I/O requirements and on an admission requirement (these are described in more detail below). When a capsule is sent to a resource, it is intended to run there indefinitely. In other words, when a task finishes, it sets a new start time and a new deadline. For periodic tasks, we add a fixed period, T, to the current start time to determine the new start time. In the simulation, the capsule is delayed by this "schedule delay" until it is ready to run again. For aperiodic tasks, the period T is not fixed, but is sampled from a negative exponential distribution. This logic is shown in Figure 5(b).

Figure 5: Entrance & Resource Logic. (a) Entrance Logic; (b) Resource Logic.

Finally, the resource's CPU schedules tasks using "Round Robin" scheduling. CPUs have a fixed time quantum, q, and the ready queue is treated as a circular queue: i.e., the CPU scheduler goes around a FIFO ready queue, allocating the CPU to each task for a time interval of up to q. It was noted previously that part of the resource-allocation decision is based on an admission requirement. The capsules "decide" which processors to go to based on a maximum processing time calculation. Simulated capsules are assigned a fixed processing time, TP, and a fixed I/O time, TI/O. Since each resource also has a given clock speed and I/O access time, the actual processing time and I/O time for the capsules vary from CPU to CPU. The CPU "burst time", B, is determined as follows:

BP = αP * TP        (1)
BI/O = αI/O * TI/O  (2)
B = BP + BI/O       (3)

where αP and αI/O are constants proportional to the resource's processing and I/O access speed respectively. The burst time B represents the actual delay that the capsule experiences at the resource it is running on. To determine whether or not a capsule can be executed at a given resource, we must determine the maximum processing time for the task (given the number of tasks already being processed at the resource) and then determine if the task's deadline can be met. At worst, all of the other tasks at a potential resource are as long as or longer than the task, so processing will take n*B (where n is the number of tasks at the resource plus one, i.e., plus the task that we want to introduce). If the task is to be admitted, the following policy must be met:

TD ≥ TSTART + n*B   (4)

where TSTART is the scheduled start time for the task and TD is the task's deadline.
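A compact Python sketch of the entrance and admission logic implied by equations (1)-(4) might look as follows (our own reconstruction; the dictionary keys and function names are hypothetical): a capsule is only sent to a resource whose capabilities cover its I/O requirements and whose worst-case completion time still meets the deadline.

# Sketch of the entrance/admission logic described by equations (1)-(4).
# alpha_p and alpha_io scale the capsule's nominal times to resource-specific burst times.

def burst_time(t_p, t_io, alpha_p, alpha_io):
    b_p = alpha_p * t_p           # (1) processing burst
    b_io = alpha_io * t_io        # (2) I/O burst
    return b_p + b_io             # (3) total burst time B

def admissible(t_start, t_deadline, n_tasks_at_resource, b):
    # Worst case: the new task waits behind all existing tasks, each at least as long,
    # so completion takes n*B with n counting the new task as well (4).
    n = n_tasks_at_resource + 1
    return t_deadline >= t_start + n * b

def choose_resource(capsule, resources):
    """Send a capsule to the first resource that matches its I/O needs and admits it."""
    for r in resources:
        if not capsule["io_req"].issubset(r["io_capability"]):
            continue                                    # capability mismatch
        b = burst_time(capsule["t_p"], capsule["t_io"], r["alpha_p"], r["alpha_io"])
        if admissible(capsule["t_start"], capsule["deadline"], len(r["tasks"]), b):
            return r
    return None                                         # unschedulable (rejected)

# Example with made-up numbers in the spirit of Figures 6 and 7:
resource = {"io_capability": {"Comms"}, "alpha_p": 0.5, "alpha_io": 0.8, "tasks": []}
capsule = {"io_req": {"Comms"}, "t_p": 5, "t_io": 1, "t_start": 0, "deadline": 20}
print(choose_resource(capsule, [resource]) is not None)   # True

The decision logic of Figure 5(a) applies this same test when capsules enter the simulation.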

In order to evaluate the two methodologies described in section 3, we have chosen to simulate a simple PID feedback control application (details of this application are provided in [2]). This application can be developed using three basic function block types (E_CTU, E_SWITCH, E_DELAY), one composite function block type (PID) and four service interface function block types (IO_WRITER, IO_READER, PUBLISHER, SUBSCRIBER). The corresponding I/O and processing requirements for the two RT-UML representations of these function block types are given in Figure 6. In this figure, “Comms” indicates that the capsule requires access to a communication channel (e.g., Ethernet, CAN).

Figure 6: Capsule Requirements. For each function block type (E_CTU*, E_SWITCH*, E_DELAY*, PID, IO_WRITER, IO_READER, PUBLISHER, SUBSCRIBER), the figure tabulates the capsules used under methodology #1 (a single FB capsule) and methodology #2 (Head, Body and Data capsules), together with their I/O requirements (DAC, ADC, Comms), processing times Tp (msec), I/O times Ti/o (msec), and schedule type.
* Can be combined to create a composite E_TRAIN function block; E_DELAY can also be used to create an E_CYCLE function block.
** A = aperiodic, P = periodic.

4.2 Experiments
In order to evaluate the two approaches on a variety of resources, five resource types are specified as shown in Figure 7. In this figure, the αI/O parameters for R1 and R2 correspond to I/O such as lights and switches (mounted on the resource).

Figure 7: Resource Capabilities. Resources R1-R5 have αP values of 1, 0.5, 0.5, 0.5 and 0.25 and αI/O values of 0.9, 0.9, 0.8, 0.8 and 0.8 respectively; their I/O capabilities range from local I/O only (R1, R2) to Comms, DAC/ADC and DAC/ADC/Comms.

For the experiments, we are interested in evaluating the system's response to increasing load on the resources (i.e., by increasing the number of capsules) in an environment where random failure of resources may occur. As a result, four different loadings are evaluated for each methodology. Each methodology is first run with 10 of each function block type (i.e., 80 methodology #1 capsules and 160 methodology #2 capsules). Next, this variable is increased to 50, 100, then 250 of each capsule type. For each simulation run, resource mean-time-between-failures (MTBF) and mean-time-to-repair (MTTR) are set at 10 minutes and 1 minute respectively. When a resource failure occurs, a reconfiguration process takes place in which any capsules at the failed resource are reallocated to an operating resource. In terms of the simulation, these capsules are re-introduced on a first-come-first-serve basis via the decision logic shown in Figure 5(a). To evaluate the performance of each methodology, CPU utilisation, number of unschedulable tasks, and average tardiness measures are collected for each resource. The results of the simulation runs are provided in the next section.
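For illustration only, the failure/repair and reallocation behaviour described here could be driven by logic along the following lines (a Python sketch, not the Arena model actually used; exponential failure and repair times are an assumption on our part, since the paper only states the MTBF and MTTR values):

# Illustrative failure/repair loop (not the Arena model used in the paper).
# MTBF = 10 min, MTTR = 1 min; both are assumed exponentially distributed here.
import random

MTBF_MS = 10 * 60 * 1000
MTTR_MS = 1 * 60 * 1000

def time_to_next_failure():
    return random.expovariate(1.0 / MTBF_MS)   # time until a resource fails

def repair_time():
    return random.expovariate(1.0 / MTTR_MS)   # time until the resource is repaired

def reconfigure(failed_resource, resources, admit):
    """Re-introduce capsules from a failed resource on a first-come-first-serve basis."""
    displaced = list(failed_resource["tasks"])  # FCFS order
    failed_resource["tasks"].clear()
    rejected = []
    for capsule in displaced:
        target = admit(capsule, resources)      # decision logic of Figure 5(a)
        if target is None:
            rejected.append(capsule)            # counted as unschedulable
        else:
            target["tasks"].append(capsule)
    return rejected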

4.3 Results
For the results reported in this section, 5 simulation replications of 1 hour each (i.e., 3,600,000 msec) were used in order to provide sufficiently tight 95% confidence intervals on the output measures. In Figure 8 we evaluate the average utilisation across all 5 resources. As can be seen in this figure, the finer-grained capsules of the second methodology take better advantage of the resources as the number of capsules is increased: i.e., the second methodology's CPU utilisation increases at a higher rate than the first methodology's. This result would be expected, since the finer-grained capsules of methodology #2 have an easier time "fitting" onto a resource when they must be rescheduled.


A consequence of this, however, is that the coarse-grained capsules (methodology #1) have a higher rate of rejection: i.e., we found that at 250 capsules per function block type, methodology #1 resulted in 600 rejections compared with 500 rejections for methodology #2.

Figure 8: Average CPU Utilisation (average utilisation versus number of capsules per FB type for methodologies #1 and #2).

We found, however, that when the number of capsules is increased significantly beyond 100 capsules per function block type, an interesting interaction occurs between these two metrics. One would expect the average utilisation to increase and then level out at a maximum CPU utilisation value. However, because resources fail and the alternative resources (i.e., resources that could take over tasks when a resource fails) are already heavily loaded, capsules at failed resources get rejected (i.e., become unschedulable). As a result, rather than observing a levelling-out of resource utilisation, we observe a decrease in resource utilisation when the number of capsules is increased excessively (e.g., 500 capsules per type). An alternative reconfiguration strategy could be used to avoid this problem. For example, reconfiguration agents could assess the situation using a more intelligent reconfiguration policy than our simple first-come-first-serve policy. One possible policy would be to reallocate all of the high-priority tasks at the failed resource, thus forcing lower-priority tasks at running resources to wait longer for their processing (and perhaps miss some of their deadlines until the failed resource is repaired).
Finally, we found that the difference between the average tardiness results for the two methodologies is not large, though methodology #2 does show an improvement over methodology #1 as the number of capsules is increased. When considering this result, it is important to look at average tardiness in the context of average CPU utilisation and the average number of unschedulable jobs. For example, at 250 capsules per FB type, methodology #2 shows an improvement in its average tardiness results despite the fact that more jobs are running in the system (i.e., methodology #1 results in more unschedulable jobs) and average utilisation is higher.
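Returning to the alternative reconfiguration policy suggested above, a priority-aware variant of the earlier reallocation sketch might look as follows (hypothetical; the paper does not implement this policy, and a fuller version would also let reallocated high-priority tasks displace lower-priority work at the target resources):

# Hypothetical priority-aware reconfiguration policy (contrast with the FCFS sketch above).

def reconfigure_by_priority(failed_resource, resources, admit):
    # Reallocate the highest-priority displaced tasks first, instead of first-come-first-serve.
    displaced = sorted(failed_resource["tasks"],
                       key=lambda c: c.get("priority", 0), reverse=True)
    failed_resource["tasks"].clear()
    rejected = []
    for capsule in displaced:
        target = admit(capsule, resources)
        if target is None:
            rejected.append(capsule)   # lower-priority work may have to wait for the repair
        else:
            target["tasks"].append(capsule)
    return rejected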

5. Discussion
This paper has presented two orthogonal methodologies for mapping UML capsules to IEC 61499 function blocks. These entities are the fundamental building blocks of the next generation of decentralised manufacturing control systems that will allow manufacturing businesses to be more agile, i.e. to produce goods in smaller batch sizes, deliver them to market more quickly and make more efficient use of available resources. It was our intention to evaluate these two methodologies using a realistic simulation in order to gain insights into the reconfiguration process that could be used for our current and future work on real industrial computer platforms. Based on the results reported in the previous section, it appears that successful reconfiguration of real-time distributed control systems will depend very much on the capabilities of reconfiguration agents. For example, if we are to ensure that the system's timeliness constraints are met (e.g., that task tardiness is minimised), the impact of task reallocation on overall system performance must be considered. As well, reconfiguration agents will play an important role in ensuring system robustness (i.e., the system's ability to continue to run in the presence of failures): e.g., they will have to determine whether, under specific system loadings, task reallocation is possible.

References
[1] G. Booch, Object-Oriented Analysis and Design with Applications, Second Edition. Addison-Wesley, 1994.
[2] R.W. Brennan, M. Fletcher, and D.H. Norrie, "Reconfiguring real-time holonic manufacturing systems," Twelfth International Workshop on Database and Expert Systems Applications, pp. 611-615, 2001.
[3] R.W. Brennan and D.H. Norrie, "Agents, holons and function blocks: distributed intelligent control in manufacturing," Journal of Applied Systems Studies, Special Issue on Industrial Applications of Multi-Agent and Holonic Systems, 2(1), pp. 1-19, 2001.
[4] M. Fletcher, R.W. Brennan, and D.H. Norrie, "Design and evaluation of real-time distributed manufacturing control systems using UML Capsules," 7th International Conference on Object-Oriented Information Systems, pp. 382-386, 2001.
[5] IEC TC65/WG6, Voting Draft - Publicly Available Specification - Function Blocks for Industrial Process-Measurement and Control Systems, Part 1: Architecture, International Electrotechnical Commission, 2000.
[6] W.D. Kelton, R.P. Sadowski, and D.A. Sadowski, Simulation with Arena. New York: McGraw-Hill, 1998.
[7] A. Lyons, "UML for real-time overview," Technical Report, ObjecTime Ltd, 1998.

