A timeliness paradigm for mesosynchronous real-time systems


A Timeliness Paradigm for Mesosynchronous Real-Time Systems
E. Douglas Jensen
[email protected]
http://www.real-time.org

Updated 7/8/03

In mesodynamics, the prefix meso refers to the middle ground between classical physics and quantum physics
■ By "mesosynchronous" real-time systems we mean those that are in the middle ground between
  • totally synchronous – in the sense of having only static, periodic, time-driven activities
  • totally asynchronous – in the sense of having only dynamic, aperiodic, event-driven activities
■ The real world has many important real-time systems at various points in this mesosynchronous middle ground
■ The derivation of "mesosynchronous" from "mesodynamics" also reflects that
  • synchronous real-time computing, like classical physics, is well understood
  • asynchronous real-time computing, like quantum mechanics, requires a paradigm shift

Jensen - 2

Conventional real-time computing concepts and technologies have very limited applicability for scheduling most mesosynchronous systems
■ They give preferential treatment to synchronous activities
  • presume all synchronous activities to be "hard" real-time in the sense that they have deadlines that must never be missed
  • some consider sporadic activities, but none consider asynchronous ones
  • usually presume that all sporadic activities are "soft" real-time in the sense that completing them can be deferred or denied in favor of synchronous activities
■ These presumptions are realistic in certain cases, but
  • not in general
  • not for any of the real-time systems that I work on

Jensen - 3

Traditional real-time concepts and techniques have a binary view: "hard" vs. "soft" = "not hard"
■ That is
  • technically (and tautologically) correct
  • but not expressive enough for most purposes and systems
■ It is no more useful than dichotomizing colors as
  • black
  • not black
■ Traditional real-time concepts and techniques (cyclic execs, rate groups, RMA, etc.) focus on the special case of "hard"

Jensen - 4

There is a more general view of real-time computing, in which genuine "hard" is a rare special case
■ Real-time computing is about
  • satisfying time constraints
    – acceptably well
    – with acceptable predictability
  • according to application- and situation-specific acceptability criteria
  • given the current circumstances
■ Hard real-time is a special case about always meeting all hard deadlines (not all deadlines are hard)
■ Real-time computing is not about being "real fast"
  • time scales (of time constraints, execution durations, etc.) are application-specific
  • and may range from microseconds to megaseconds

Jensen - 5

The hard real-time computing model is often used as a Procrustean bed
■ For many (perhaps most) time-critical applications – where timeliness is part of the application's logic, not just a matter of performance – classical hard real-time is an
  • over-simplified
  • fragile
  • expensive
  model
■ For those applications, the hard real-time computing model is a snare and delusion that impedes rather than helps the programmers who must cope with the inherent dynamics and exigencies of (most of) the real world

Jensen - 6

In most of the real world, dynamic uncertainties require flexible and adaptive scheduling
■ Most time constraints are not intrinsically hard deadlines
  • general ("soft") deadlines
  • general time constraints that are not deadlines at all but specify a thread's utility = f(completion time)
■ Not all time constraints can always be perfectly satisfied, so there has to be a compromise – e.g.,
  • meet the intrinsically hard deadlines, plus …
  • examples for the other deadlines include
    – minimize the number of missed deadlines
    – minimize the maximum tardiness
    – minimize the mean tardiness
  • an example for the general time constraints
    – maximize the accrued utility to the system

Jensen - 7

An action time constraint is a lexically scoped attribute of a thread
[Figure: a thread's code path. Outside the time constraint scope the thread is "non-real-time". begin_time_constraint (t) is a sequencing event that opens the tc scope, within which the thread is "real-time"; end_time_constraint is a sequencing event that closes the scope, after which the thread is "non-real-time" again.]

Jensen - 8
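The lexically scoped construct in the figure above can be sketched as a context manager. This is a minimal illustrative sketch in Python, not the Alpha or RTSJ interface; the Scheduler class, its begin_time_constraint/end_time_constraint methods, and the thread_id/tuf parameters are hypothetical names.

import time
from contextlib import contextmanager

class Scheduler:
    """Hypothetical sequencer interface; stands in for an OS/middleware scheduler."""
    def begin_time_constraint(self, thread_id, tuf):
        print(f"sequencing event: thread {thread_id} becomes real-time, TUF={tuf!r}")
    def end_time_constraint(self, thread_id):
        print(f"sequencing event: thread {thread_id} becomes non-real-time")

@contextmanager
def time_constraint(scheduler, thread_id, tuf):
    # Entering the scope is a sequencing event: the thread is "real-time" inside it.
    scheduler.begin_time_constraint(thread_id, tuf)
    try:
        yield
    finally:
        # Leaving the scope (normally or via an exception/abort) is another sequencing event.
        scheduler.end_time_constraint(thread_id)

if __name__ == "__main__":
    sched = Scheduler()
    # thread is "non-real-time" here
    with time_constraint(sched, thread_id=1, tuf="deadline in 10 ms"):
        time.sleep(0.001)   # time-constrained work
    # thread is "non-real-time" again here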

The real-time computing community historically focuses primarily on hard deadlines
■ The traditional interpretation of an action having a hard deadline at time td is that the action's completion either meets or misses its deadline
[Figure: binary "meet" vs. "miss" outcome as a function of completion time t, with the deadline at t = td]

Jensen - 9

Hard vs. soft deadlines are not defined with respect to the consequences of meeting them
■ The semantics of a deadline – i.e., the specific way in which system timeliness depends on whether any particular deadline is met, such as whether a miss constitutes a failure – is not the distinction between a hard and soft deadline, contrary to popular misconception in the real-time computing community
■ These semantics are properly specified as part of the sequencing optimality criteria (i.e., how deadlines are used for managing resources)

Jensen - 10

Timeliness with respect to general deadlines is not binary
■ When the term "deadline" is used without the qualifier "hard," it refers to the general case of a deadline
  • a soft deadline, of which a hard deadline is a special case
  • the action is either more or less timely, depending on what its completion time is with respect to its deadline
[Figure: timeliness decreases from "more timely" to "less timely" as the completion time t passes the deadline td]

Jensen - 11

"More or less" timely with respect to a deadline is measured in terms of lateness
■ Lateness = completion time – deadline
■ Assuming arrival time = 0, the best case is completion time = 0 and lateness = –td
[Figure: lateness as a function of completion time t, rising through zero at t = td, with value –td at t = 0]

Jensen - 12

Tardiness is positive lateness
■ Tardiness = max[0, lateness]
■ The best case is tardiness = 0
[Figure: tardiness as a function of completion time t, zero until t = td and increasing thereafter]

Jensen - 13

Earliness is negative lateness
■ Earliness = max[0, –lateness]
■ Assuming arrival time = 0, the best case is completion time = 0 and earliness = +td
[Figure: earliness as a function of completion time t, equal to +td at t = 0 and decreasing to zero at t = td]

Jensen - 14

Compact depiction of lateness, tardiness, earliness
■ Lateness = completion time – deadline
  • tardiness = max[0, lateness]
  • earliness = max[0, –lateness]
[Figure: earliness, lateness, and tardiness plotted together against completion time t, with the deadline at td]

Jensen - 15
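A minimal sketch of these three timeliness metrics exactly as defined on the preceding slides; the Python function and variable names are mine, not from the source.

def lateness(completion_time, deadline):
    # Lateness = completion time - deadline (negative when the action finishes early)
    return completion_time - deadline

def tardiness(completion_time, deadline):
    # Tardiness = max[0, lateness]; the best case is 0
    return max(0.0, lateness(completion_time, deadline))

def earliness(completion_time, deadline):
    # Earliness = max[0, -lateness]; with arrival time 0, the best case is +td
    return max(0.0, -lateness(completion_time, deadline))

if __name__ == "__main__":
    td = 10.0
    for t in (0.0, 8.0, 10.0, 13.0):
        print(t, lateness(t, td), tardiness(t, td), earliness(t, td))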

Most time constraints in the real world are not naturally deadlines
■ A classical deadline is limited in expressiveness by its singular inflection point and linear timeliness metric (lateness)
■ To overcome that, we have been developing a more general and expressive model for time constraints
■ This model encompasses much of the mesosynchronous space in a uniform way
■ The model expresses time constraints with what were initially (in 1977) called "time/value functions"
  • then "benefit functions"
  • and currently "time/utility functions"
■ The model uses scheduling optimality criteria based on utility accrual

Jensen - 16

Time/utility functions allow arbitrary relationships between completion time and utility to the system
■ In this model, an action time constraint expresses the utility (either reward or penalty) for completing the action as a function of when the action is completed
[Figure: an arbitrary utility curve as a function of action completion time, starting at "now"]

Jensen - 17
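One way to represent such a time constraint in code is simply as a function of completion time together with a terminal time. A minimal sketch, assuming a Python representation; the TimeConstraint name and the example utility values are illustrative, not from the source.

from dataclasses import dataclass
from typing import Callable

@dataclass
class TimeConstraint:
    """A time/utility function: utility to the system as a function of the
    action's completion time (measured from 'now'), plus a terminal time
    after which the function is no longer defined."""
    utility: Callable[[float], float]
    terminal_time: float

# Example: utility decays linearly from 100 to 20 over 5 time units,
# then is 0 until the terminal time at 8 (all values illustrative).
example = TimeConstraint(
    utility=lambda t: 100 - 16 * t if t <= 5 else 0.0,
    terminal_time=8.0,
)
print(example.utility(2.0), example.utility(6.0))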

"Now" on the x (time) axis is the current time ‰ Temporarily, we will assume for simplicity that a time-

constrained action's release time (i.e., when it becomes eligible for execution) is "now"

‰ We generalize to arbitrary release times when we discuss

sequencing

Jensen - 18

Utility may be cumulative as a function of the progress of (e.g., execution time received by) the action
[Figure: cumulative utility as a function of cumulative execution time]
■ But here we confine ourselves to the case where the utility is established only when the action completes
■ More generally, utility may be a multi-variable function of other factors in addition to time (power, duty cycle, etc.)

Jensen - 19

The time at which a function ceases to be defined is its terminal time
■ When the terminal time of a ready action's time/utility function is reached
  • the corresponding thread
  • and any threads that depend on it (e.g., by precedence constraints)
  are removed from the ready queue
■ If the terminal time of an executing action is reached, that action receives an exception – which usually will, but need not, result in it being aborted

Jensen - 20

For simplicity, here we will limit the discussion to functions that are either non-convex or constant
■ A non-convex function has no local minima that are not global – i.e., if a function decreases, it must not increase
■ Constant functions are neither convex nor non-convex

Jensen - 21

Constant time/utility functions represent non-time-constrained actions
■ Their utilities are one way of expressing their relative importance
■ This has the advantage of allowing both time-constrained and non-time-constrained actions to be scheduled coherently by the same algorithm
[Figure: a constant utility value as a function of action completion time, starting at "now"]

Jensen - 22

A hard deadline is a binary unit-valued downward step at the deadline time
■ The function has the singular positive utility value "1" if the deadline is met
■ Otherwise the function has the singular utility value "0" if the deadline is missed
[Figure: utility = 1 ("meet") for completion times up to td, and utility = 0 ("miss") thereafter]

Jensen - 23
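Both the hard deadline above and the more general step deadlines discussed on the next slides can be sketched as downward step functions. A minimal Python sketch; step_tuf and the example utility values are hypothetical.

def step_tuf(deadline, utility_before, utility_after=0.0):
    """A downward-step time/utility function: utility_before if the action
    completes by the deadline, utility_after otherwise."""
    return lambda t: utility_before if t <= deadline else utility_after

# A hard deadline is the special case of a binary unit-valued downward step.
hard = step_tuf(deadline=10.0, utility_before=1.0, utility_after=0.0)

# A general (soft) step deadline may carry any utilities on each side.
soft = step_tuf(deadline=10.0, utility_before=700.0, utility_after=50.0)

print(hard(9.0), hard(11.0))   # 1.0 0.0
print(soft(9.0), soft(11.0))   # 700.0 50.0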

"Deadline" without the qualification "hard" means the general case of deadline = soft deadline
■ A deadline is expressed functionally in either of two ways:
  • a downward step function (a hard deadline is the special case of a binary unit-valued downward step function)
  • linear functions based on lateness

Jensen - 24

Step function deadlines
■ A deadline (unlike a hard deadline) function may have any utility values on each side of the deadline time
■ These utility values are one way of expressing the relative importance of actions
[Figure: two step-function deadlines at td1 and td2, with differing utility values U1, U2 before and U3, U4 after their deadline times]

Jensen - 25

Lateness function deadline
■ When a deadline is expressed functionally in terms of lateness (as is usual in scheduling theory), it has a slope of +1 and is inconsistent with the convention of the time/utility function model that larger utility values correspond to better timeliness
[Figure: a line of slope +1 crossing zero at t = td, with value proportional to –td at t = now]

Jensen - 26

Timeliness can be based on inverting lateness
■ –Lateness = deadline – completion time
■ Assuming arrival time = 0, the best case is completion time = 0 and –lateness = +td
[Figure: –lateness as a function of completion time t, equal to +td at t = 0, zero at t = td, and decreasing toward –∞ thereafter]

Jensen - 27

Inverted lateness deadline function
■ Then a deadline can be expressed functionally in terms of the inverted lateness with a slope of –1, consistent with the convention of the time/utility function model that larger utility values correspond to better timeliness
[Figure: a line of slope –1, proportional to td at t = now, crossing zero at t = td and decreasing toward –∞ thereafter]

Jensen - 28

All time constraint functions are special cases of a general time constraint function
■ A general time constraint is any functional relationship between
  • the time when the thread execution point reaches the end of the time constraint scope
  • and the utility to the system of when it does
[Figure: an arbitrary utility curve as a function of completion time, starting at "now"]

Jensen - 29

Utility functions can reduce system complexity (perhaps contrary to initial impressions)
■ Any system and its environment has inherent complexities and anomalous situations
■ The model for designing and using the system should facilitate understanding and managing those complexities and anomalies, not obscure them and add more of its own
■ Time/utility functions may appear more complicated than simpler-looking conventional real-time models (cyclic executives, priorities)
■ But the conventional real-time models are often so poorly matched to the realities of a system and its environment that using them can be a primary source of complexities and anomalies – and thus of costs and undependability

Jensen - 30

Generally a real-time computer system has more than one thread that may execute time-constrained actions
■ A subset of these threads (along with other threads not currently time-constrained) may be ready to execute with real or virtual concurrency, and hence compete for access to sequentially shared, or conflicting, resources
[Figure: an arbitrary utility curve as a function of completion time, starting at "now"]

Jensen - 31

Contention may occur for a variety of hardware and software resources
■ The best-known example of an exclusive or sequentially shared resource is a processor, but there are numerous other physical and logical ones – e.g.,
  • networks
  • synchronizers (e.g., locks)
  • application resources
■ For brevity and familiarity, here we will focus on thread access to a processor
■ But coherence of resolving contention for all shared resources is critical to minimizing the occurrence and duration of action timeliness anomalies – "priority inversion" being a simple special case

Jensen - 32

Resolving contention by actions for shared resources is called sequencing
■ There are two forms of sequencing
  • scheduling
  • dispatching

Jensen - 33

Scheduling establishes a sequence – a schedule – for all threads ready at that time
■ First, determine the appropriate figure of merit with which to discriminate among possible schedules – e.g.,
  • meet all deadlines
  • minimize the mean tardiness
■ Then determine a discipline that creates an acceptable, if not optimal, schedule according to the figure of merit
■ Rate monotonic scheduling is a well-known example of a static scheduling discipline in the real-time computing field
■ Earliest deadline first (EDF) is a well-known example of a dynamic scheduling discipline (sketched below)

Jensen - 34
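A minimal sketch of EDF dispatching order in Python; the thread names and deadline values are illustrative only.

import heapq

def edf_order(ready_threads):
    """Earliest deadline first: release ready threads in order of increasing
    absolute deadline. Each thread is represented as (deadline, name)."""
    heap = list(ready_threads)
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)

ready = [(30.0, "logger"), (10.0, "track_update"), (20.0, "display")]
print([name for _, name in edf_order(ready)])
# ['track_update', 'display', 'logger']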

Scheduling may be performed statically or dynamically
■ Scheduling may be performed
  • statically (prior to execution time)
    – manually by programmers or users
    – automatically by software
  • dynamically (at execution time)
    – manually by users or application software
    – automatically by system software (OS, middleware, etc.)

Jensen - 35

Dispatching (without scheduling) establishes a thread resource access sequence one thread at a time as needed
■ If scheduling is used, dispatching is vestigial (occurs in schedule order)
■ Scheduling is not always needed or computationally feasible – dispatching alone may be sufficient
■ Dispatching, with or without scheduling, is dynamic (takes place at execution time)
■ Whether scheduling or dispatching alone is used, either may use the same
  • figure of merit
  • algorithm

Jensen - 36

Dynamic sequencing takes place when sequencing events occur
■ Examples of sequencing events include when threads
  • arrive
  • unblock
  • block
  • terminate
  • receive or change time constraints
  • contend for exclusively shared resources

Jensen - 37

Some kinds of actions are usually not managed by the thread sequencing facility
■ Some kinds of actions are not implemented as threads and thus are not sequenced the same way as threads
  • e.g., interrupt service routines and certain OS services execute either when invoked or automatically as needed
  • these do not have time constraints – instead, their timeliness may be characterized in terms of (tight) upper bounds on completion time
■ Real-time practitioners (users, vendors) most commonly characterize OS and system timeliness in terms of non-sequenced actions (e.g., interrupt response times)
■ Real-time principles are primarily in terms of sequenced actions (e.g., meeting deadlines)
■ This dichotomy results in practitioner misunderstandings

Jensen - 38

The sequencing figure of merit is called the optimality criterion or objective function
■ Sequencing optimality ideally should take into account either a number of factors in the criterion, or multiple criteria – e.g.,
  • any action time constraints
  • relative importance of actions or threads
  • precedence constraints among actions or threads
  • resource dependencies among actions or threads
■ In theory and practice, the number of factors or criteria exponentially increases the computational complexity of sequencing – so application-specific heuristics are often employed
■ Here for simplicity we will focus on time constraints

Jensen - 39

An obvious figure of merit for schedules based on time/utility functions is in terms of accrued utility
■ Utility function time constraints are used for scheduling but not for dispatching
■ In some cases, simply maximizing the accrued (e.g., summed) utility may be inappropriate
■ A schedule might complete (say) two actions yielding utilities that sum to more than the utility that could have otherwise been obtained from another action –
  • completing more actions may be preferable to completing fewer actions
  • completing actions having higher maximum utilities may be preferable to maximizing the overall sum
  • actions with higher maximum utility may be more "important" regardless of the overall sum

Jensen - 40
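A tiny illustration of why summed utility alone may be an inadequate figure of merit; the two candidate schedules and their utility values are invented for illustration.

# Two candidate schedules for the same overloaded interval (values are illustrative):
schedule_a = {"missile_update": 900}                 # one action completed
schedule_b = {"track_assoc": 500, "display": 450}    # two actions completed

for name, sched in (("A", schedule_a), ("B", schedule_b)):
    print(name, "accrued utility:", sum(sched.values()),
          "actions completed:", len(sched))
# B accrues more summed utility (950 vs 900), but A completes the single
# highest-utility action; which is preferable is an application-specific criterion.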

A deadline is the most commonly used time constraint for both hard and soft real-time sequencing
■ Well-known examples of sequencing optimality criteria for deadline time constraints include
  • meet all hard deadlines (the hard real-time criterion)
  • soft real-time criteria, such as
    – minimize the number of missed deadlines
    – minimize the maximum tardiness
    – minimize the mean tardiness

Jensen - 41

Sequencing criteria and sequencing algorithms generally are not one-to-one
■ There may be more than one algorithm – either scheduling algorithm or dispatching rule – for a given criterion
  • the hard real-time criterion "meet all hard deadlines" is satisfied by
    – earliest deadline first (EDF)
    – least laxity first
    – among others
■ A specific algorithm may be suitable for different criteria
  • EDF satisfies the hard real-time criterion
  • and also the soft real-time criterion "minimize the maximum lateness"
  • among others

Jensen - 42

No commercial OS products directly support time-constraint (e.g., deadline) based sequencing
■ Almost no commercial operating system (or other run-time software) products explicitly support any real-time – i.e., application time constraint based – scheduling disciplines or dispatching rules, such as EDF
■ Users whose applications have deadlines are expected to
  • map the deadlines onto operating system priorities
  • and possibly then manipulate those priorities
■ Such mappings are complicated in all but the simplest, smallest, and least dynamic systems – for example
  • priority assignments are not modular – they require global knowledge of all other priority assignments
  • the granularity of deadlines is much finer than that of priorities
  • semantics are associated with priorities inconsistently

Jensen - 43

The sequencing optimality criterion for time-constrained actions has two dimensions
■ Satisfying the eligible actions' time constraints (and other factors or criteria) acceptably well is measured in two dimensions
  • optimality of thread sequencing
    – i.e., how well all the time constraints of eligible actions are satisfied according to the optimality criterion
  • predictability of optimality of thread sequencing
    – i.e., how much can be known a priori about the optimality of thread sequencing
■ An application or system is real-time to the degree that these two dimensions are
  • part of the application or system's logic
  • and thus correctness properties (vs. performance properties)
  (note that this applies as much to soft as to hard real-time)

Jensen - 44

Real-time practitioners and researchers use the term "predictability" but never define it
■ Practitioners use the term for timeliness of non-sequenced actions such as latencies of interrupt response and system services
  • they look for least upper bounds and sometimes distributions
■ Researchers use the term for timeliness of sequenced actions such as meeting deadlines
  • they usually presume the hard real-time case that
    – the sequencing criterion timeliness optimality factor is simply to meet all deadlines
    – the predictability of that factor is limited only to knowing whether or not all the deadlines will always be met

Jensen - 45

Informally, something is predictable to the degree that it can be known in advance
■ Making a prediction involves using information with various kinds and degrees of uncertainties, to either deductively or inductively draw conclusions having various kinds and degrees of uncertainties
■ In computing systems having time-constrained actions, the information needed for sequence timeliness predictions is about
  • the actions' properties, resource requirements, and dependencies
  • various aspects of the system's structure and behavior
  • the characteristics of the system's execution environment
■ The conclusions are used for resource management to achieve the desired timeliness

Jensen - 46

Prediction is based on some particular model of the reality of interest
■ The information used is generally incomplete and inaccurate, and hence modeled in some way, and the predicting is performed according to the rules of the model
■ The accuracy of the prediction depends on how well the modeled information and prediction process reflect the reality of interest
■ The most common formal models for making predictions under uncertainty use classical probability theory to represent and manipulate uncertainties both in the observed or presumed information and in the predictions
■ For example, consider analytical models for predicting performance (e.g., throughput) of non-time-constrained systems

Jensen - 47

The non-trivial nature of probability-based predictability is easily glimpsed by a simple example
■ The deterministic (or constant) distribution (p = 1 at X = k) is obviously the most predictable one
■ Thus, it might seem intuitive that the least predictable distribution would be the uniform distribution (p = P for all X)
■ That intuition has been formally recognized since the 18th century as the Principle of Indifference
■ But it leads to insoluble paradoxes, which reveal it to be a heuristic that may be useful for suggesting hypotheses, rather than a logical principle
■ However, the classical interpretation of probability does require that the Principle of Indifference be a logical principle
■ This contradiction led to the subjective and frequentist interpretations of probability (and subsequently others)

Jensen - 48

An alternative intuitive perspective on the least predictable distribution in the context of classical probability theory
■ Predictability of a probability density function is inversely proportional to its variability – as measured, for example, by its coefficient of variation Cν = standard deviation / mean
■ From that perspective, the deterministic distribution, whose Cν = 0, is still the most predictable
■ But the standard form of the uniform distribution has a relatively low Cν ≈ 0.58 and thus relatively high predictability
■ Many distributions exist that have higher Cν's and are thus less predictable – for example, an extreme mixture of exponentials has an arbitrarily large Cν and is thus arbitrarily unpredictable

Jensen - 49
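A small Python check of these coefficient-of-variation claims; the mixture-of-exponentials parameters are illustrative choices, not from the source.

import random
import statistics

random.seed(1)

def cv(samples):
    # Coefficient of variation: standard deviation / mean
    return statistics.pstdev(samples) / statistics.mean(samples)

deterministic = [5.0] * 100_000                                # Cv = 0
uniform = [random.uniform(0.0, 1.0) for _ in range(100_000)]   # Cv ≈ 0.58

def mixture_of_exponentials():
    # Mixture of two exponentials with very different rates (illustrative
    # parameters); such mixtures can be made arbitrarily variable.
    return random.expovariate(100.0) if random.random() < 0.99 else random.expovariate(0.01)

mixture = [mixture_of_exponentials() for _ in range(100_000)]

print(round(cv(deterministic), 2), round(cv(uniform), 2), round(cv(mixture), 2))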

A misconception in the field of real-time computing is the confusion of "predictability" with "deterministic"
■ Predictability is a continuum
  • maximum predictability – called deterministic – is one end-point
  • the rest of the continuum is degrees of predictability (or non-determinism)
  • minimum predictability (or maximum non-determinism) is the other end-point
■ A measure for predictability (or non-determinism), and characterization of the minimum predictability (maximally non-deterministic) end-point, depend on the particular predictability model – an example measure in probability models is the coefficient of variation

Jensen - 50

A sequencing criterion's timeliness optimality factor is orthogonal to its timeliness optimality predictability factor
■ These generally must be traded off against one another according to application-specific requirements
■ For example, it may be necessary to choose between
  • one sequence that results in better optimality (e.g., fewer missed deadlines), but with worse predictability (e.g., higher coefficient of variation of the number of missed deadlines)
  • another sequence that results in worse optimality (e.g., more missed deadlines), but with better predictability (e.g., lower coefficient of variation of the number of missed deadlines)

Jensen - 51

A variety of utility accrual algorithms have been and are being devised
■ The first public ones were
  • Locke, C. D., "Best-Effort Decision Making for Real-Time Scheduling," CMU-CS-86-134 (Ph.D. Thesis), Department of Computer Science, Carnegie Mellon University, 1986
  • Clark, R. K., "Scheduling Dependent Real-Time Activities," CMU-CS-90-155 (Ph.D. Thesis), Department of Computer Science, Carnegie Mellon University, 1990
■ Subsequently there have been others – e.g.,
  • Li and Ravindran's GBS
    (disclosure: I am Li's external advisor and funding source)
  but we used Locke's and Clark's in the two worked examples presented here

Jensen - 52

Three TUF utility accrual algorithms
■ Locke's algorithm
  • almost arbitrary non-convex functions (once a function decreases, it must not increase)
  • stochastic parameters (execution times, etc.)
  • no dependencies
■ Clark's algorithm
  • step functions
  • deterministic parameters
  • dependencies
■ Li's algorithm
  • almost arbitrary convex and non-convex functions
  • stochastic parameters
  • dependencies

Jensen - 53

Simplified procedural description of Clark's DASA utility accrual algorithm
■ Create an empty schedule
■ Determine dependencies among actions
■ Calculate the potential utility density for each action
■ If deadlock is detected, resolve it
■ Sort actions by potential utility density
■ Examine each action in order of decreasing potential utility density
  • tentatively add the action, and its dependencies, to the schedule in EDF order (considering dependency deadlines)
  • test the feasibility of the schedule
  • if feasible, keep the tentative changes, else discard them
  • apply optimizations to reduce the schedule if possible

Jensen - 54
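A greatly simplified sketch of the utility-density idea behind DASA, omitting dependencies, deadlock resolution, and the optimizations; this is not Clark's implementation, and the action names and parameter values are invented.

def dasa_like_schedule(actions, now=0.0):
    """Simplified utility-density scheduler in the spirit of DASA: examine
    actions in decreasing potential utility density (utility per unit of
    remaining execution time), tentatively add each in EDF order, and keep
    it only if the resulting schedule remains feasible.
    `actions` is a list of dicts with keys: name, utility, exec_time, deadline."""
    by_density = sorted(actions, key=lambda a: a["utility"] / a["exec_time"], reverse=True)
    schedule = []
    for a in by_density:
        tentative = sorted(schedule + [a], key=lambda x: x["deadline"])   # EDF order
        t, feasible = now, True
        for x in tentative:
            t += x["exec_time"]
            if t > x["deadline"]:
                feasible = False
                break
        if feasible:
            schedule = tentative   # keep the tentative changes
    return schedule

actions = [
    {"name": "assoc_hi", "utility": 6000, "exec_time": 4, "deadline": 10},
    {"name": "assoc_lo", "utility": 20,   "exec_time": 5, "deadline": 10},
    {"name": "display",  "utility": 800,  "exec_time": 3, "deadline": 6},
]
print([a["name"] for a in dasa_like_schedule(actions)])   # ['display', 'assoc_hi']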

Clark’s DASA algorithm provides greater utility than Locke’s, EDF, and static priority algorithms

Jensen - 55

Clark’s DASA algorithm meets more deadlines than Locke’s, EDF, and the static priority algorithms

Jensen - 56

The time spent executing the DASA algorithm is cost-effective in many situations of interest to us
■ Under low processor loads, there are usually enough excess processor cycles to support the algorithm without missing application time constraints
■ Under high processor loads, there is a significant benefit to be gained from expending some processor cycles to use a more complex scheduling algorithm like DASA
■ A straightforward (unoptimized) procedural version of DASA is
  • O(N² log N) in time (worst case)
  • O(N²) in space (worst case)
  where N is the number of threads to be scheduled
■ Appropriate implementation engineering trade-offs are required for application-specific scalability

Jensen - 57

Time/utility function paradigm worked example 1: AWACS air surveillance mode tracker
■ AWACS is an airborne radar system with varied missions, including air surveillance
■ Surveillance missions generate aircraft tracks for BM/C2
■ Too many sensor reports can overload the system, which causes sectors of the sky to "go blank"
■ (Currently, operators have knowledge-intensive manual work-arounds for certain overload situations)

Jensen - 58

Our solution: use time/utility functions to dynamically apply resources to the right tracks at the right times
■ Design a multithreaded tracker that utilizes a separate thread for each computation to be performed
■ Dynamically apply computational resources to threads to adapt to
  • changes in the physical environment
    – tracks and sensor reports
    – updated operator preferences
    – modified mission goals
  • changes in processing capability (overloads, failures)
■ During overload, data processing order is based on relative thread utility
■ For brevity, here we focus on the tracker's most demanding computations – association of reports to tracks

Jensen - 59

The AWACS sensor properties imply a general utility function for the association computation
■ There are two sensors (radar and IFF) sweeping 180º out of phase with a 10 s period, which suggests the utility function has
  • a "critical time" at the 10 s period length
  • at least two distinct non-zero utilities before the critical time
  • a third distinct, lower, utility after the critical time
[Figure: candidate utility function with values U1 and U2 before the critical time (the sweep period length) and U3 after it, plotted against action completion time]

Jensen - 60

Determine the thread time/utility function shape prior to the critical time
■ Prior to the critical time
  • processing a sensor report for one of these tracks in under five seconds (half the sweep period) would provide better data for the corresponding report from the out-of-phase sensor, so the utility decreases with time
  • the utility function had to decrease linearly due to an implementation artifact in this experimental system – the OS (OSF/RI's MK7.3A) time/utility function scheduling algorithm allowed only one critical time
  • the slope was arrived at empirically

Jensen - 61

Determine the thread time/utility function shape after the critical time
■ After the critical time
  • utility is zero, because newer sensor data has probably arrived
  • if the processing load in one sensor sweep period is so heavy that it couldn't be completed, the load will probably be about the same in the next period, so there will be no capacity to also process data from the previous sweep
  • a tracker that could process older as well as current data would
    – be significantly more complex
    – probably delay the track update

Jensen - 62

That established the time/utility function shape for the tracker's association threads
■ A critical time at the sweep period length
■ Linearly decreasing utility until the critical time
■ Zero utility after the critical time
■ Next, the utility values U1, U2, and U3 had to be determined
[Figure: the association TUF shape with values U1 and U2 before the critical time (sweep period length) and U3 after it, plotted against action completion time]

Jensen - 63
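The established shape can be parameterized as below. This is an illustrative Python sketch; the u_initial and u_at_critical values are placeholders rather than the experiment's actual U1/U2, which came from the QoS table on the next slides and empirical tuning.

def association_tuf(u_initial, u_at_critical, critical_time):
    """Association-thread TUF shape described above: utility decreases
    linearly from u_initial to u_at_critical over the sensor sweep period
    (the critical time), and is zero afterward."""
    slope = (u_at_critical - u_initial) / critical_time
    def utility(t):
        return u_initial + slope * t if t <= critical_time else 0.0
    return utility

tuf = association_tuf(u_initial=6000.0, u_at_critical=3000.0, critical_time=10.0)
print(tuf(0.0), tuf(5.0), tuf(12.0))   # 6000.0 4500.0 0.0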

Three QoS metrics that could quantify track processing were chosen that fit the project's scope
■ Quality – 0 to 7
  • traditional measure of the amount of recent sensor data incorporated in a track record
  • incremented or decremented after each radar scan
■ Accuracy – high or low
  • a measure of the uncertainty of the estimate of a track's position and velocity
  • derived from traditional Kalman filter processing
■ Importance – high or low
  • traditional operator-identified, based on geography, threat, and other characteristics

Jensen - 64

Tracker domain experts' preferences in terms of track QoS metrics imply the thread utility values
■ Don't drop tracks, because they are expensive to re-create
■ User-identified "important" tracks receive preference
■ User-identified "important" geographic regions receive preference
■ Maneuvering tracks need to be updated more frequently than non-maneuvering tracks
■ Potentially high threat tracks receive preference
■ High speed tracks receive preference
■ Tracks with poor state estimates receive preference

Previously, tracker domain experts didn't have incentives to use their knowledge to understand and express behavioral options in the face of dynamic uncertainties (i.e., gracefully handling overloads) that plague current trackers

Jensen - 65

The initial utility U1 of an association for a track report is derived from track QoS metrics by gedanken experiments
What are the relative utilities of these 12 cases of tracks?
[Table: initial association utility U1 for the 12 combinations of Track Quality (Low 1-2 / "track state poor", Medium 3-4 / "track state marginal", High 5-7 / "track state OK"), Track Accuracy (Low, High), and Importance (High, Low). The utility values across the 12 cases, in decreasing order, are 6000, 5500, 1000, 910, 800, 700, 65, 53, 40, 30, 20, and 10.]
E.g., completing an association for a high importance, low accuracy, low quality track yields 600 times more utility than for a low importance, high quality, high accuracy track

Jensen - 66

The association (and other) threads are scheduled based on their utility functions
■ For the association threads, the tracking application selects the established utility function from the OS scheduler's library of shapes
■ The tracking application does a look-up in the utility U1 table for each association thread before calling the OS scheduler
■ A utility-based processor-scheduling policy in the OS schedules threads according to a heuristic that attempts to maximize total accrued (summed) utility

Jensen - 67

Results with fixed priority scheduling policy

Jensen - 68

Results with time/utility-based scheduling policy

Jensen - 69

The utility-based scheduling policy performed best
[Charts: dropped tracks and track quality under the FIFO, fixed priority, and utility-based scheduling policies, shown separately for more important and less important tracks; the x-axis (>11 down to 1) represents an increasingly constrained system]

Jensen - 70

The AWACS tracker showed that the utility function model can provide significant cost-effective improvements
■ This AWACS experiment produced an adaptive, distributed surveillance tracker that was directly driven by application-level QoS metrics
■ Time/utility functions encouraged domain experts to use their knowledge to understand and express behavioral options in the face of dynamic uncertainties (i.e., gracefully handling overloads) that plague current trackers
■ The utility model used only three of many possible tracker metrics to achieve significant tracker performance
■ The tracker was demonstrated to USAF AWACS operators and tracker designers at Boeing, and received strong supportive feedback, including about its QoS metrics and its behavior under overload

Jensen - 71

We also implemented the AWACS tracker using the Real-Time Specification for Java (RTSJ)
■ Exercise RTSJ capabilities dealing with areas such as threads, scheduling, and memory management
■ Compare results to Java and MK/C++ environments
■ Feed back issues to JSR-1 Expert Group and TimeSys
■ Current RTSJ implementations adhere to "First, do no harm"
■ RTSJ performance results were similar to those of C++
[Chart: track quality vs. association capacity (>11 down to 1) for RTSJ Java and Java, for more important and less important tracks]

Jensen - 72

Java Remote Method Invocation (RMI) is slow
■ Our AWACS tracker is a distributed real-time system
  • here we have discussed only the association computations that take place on one node
■ Our RTSJ implementation of the tracker used Sun's Personal Java RMI
■ Performance: 100 ms for RMI versus 1.35 ms for ILU
■ Possible reasons
  • most of RMI is written in Java and is interpreted
  • serialization (marshaling) uses runtime inspection
  • each RMI call results in the creation of 38 objects
  • a new thread is created for each incoming call

Jensen - 73

Time/utility function paradigm worked example 2: battle management for coastal air defense
■ Hostile cruise missiles, cruise missile carriers, and manned bombers approach the U.S. coast at varying courses, altitudes, and velocities
■ The system tracks and identifies threats using ground-based radars and AWACS aircraft
■ The system can intercept threats using medium range guided missiles, and manned aircraft with heat seeking missiles

Jensen - 74

The system’s primary functions are tracking, mission planning, and weapons control

Jensen - 75

The coastal air defense battle management system threads

Jensen - 76

An example of distributable threads in a battle management/C2 system

[Diagram: distributable threads spanning the system's components – Sensors, Plot Correlator, Track DB, Track Handler, Track ID, ID Data, Threat Assessor, Threat Data, Stores Manager, and Weapon Controllers. From: "An Example Real-Time Command, Control and Battle Management Application for Alpha" (1988)]

Jensen - 77

Timeliness requirements for the plot correlation and track database threads
■ The plot correlation and track database maintenance threads have critical times corresponding to the radar frame arrival rate
■ In both cases, it is best if the processing is completed before the next frame of data arrives
■ It is acceptable for the processing to slip as much as one additional time frame under extreme overload situations
■ The plot correlation activity has a much greater utility to the system under overload conditions

Jensen - 78

The timeliness requirements for the missile and interceptor control threads vary over the course of an engagement
■ After launch, the guidance control threads must issue timely aperiodic course updates to ensure a successful intercept
■ The required timeliness of these updates, and the importance of completing the course corrections at the desired time, change as the distance decreases between the weapon and the target, and between the target and the coastline
[Figure: the launch, midcourse, and intercept TUFs plotted against completion time]
■ This effect is very difficult to achieve by manipulating priorities

Jensen - 79

The TUF's for the missile and interceptor control updates change dynamically during the mission
■ 1. Variable critical (best) times – course corrections are needed more often as the distance between target and interceptor decreases
■ 2. Variable "hardness" – it becomes more important to use the most recent position information as the distance between target and interceptor decreases
■ 3. Dynamic maximum – the value of successfully completing an intercept corresponds to the perceived threat of the target being intercepted
[Figure: a TUF annotated with (1) the critical time tcritical, (2) the post-critical-time drop, and (3) the maximum utility, plotted against completion time]
They also have variable importance depending on the threat potential of the target, independent of their timeliness

Jensen - 80
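A sketch of how such a TUF might be re-parameterized as the engagement evolves, following the three numbered effects above; all of the constants and the distance/threat parameterization are invented for illustration and are not from the original system.

def intercept_update_tuf(distance_km, threat_level):
    """Sketch of a guidance-update TUF whose parameters change during the
    mission: as the target gets closer, the critical time shrinks and stale
    updates lose value faster; the maximum utility tracks the perceived
    threat. All constants are illustrative."""
    critical_time = max(0.1, distance_km / 100.0)                          # 1. variable critical time
    post_critical_fraction = max(0.0, 1.0 - 50.0 / max(distance_km, 1.0))  # 2. variable "hardness"
    u_max = 100.0 * threat_level                                           # 3. dynamic maximum
    def utility(t):
        return u_max if t <= critical_time else u_max * post_critical_fraction
    return utility

launch = intercept_update_tuf(distance_km=400.0, threat_level=3.0)
terminal = intercept_update_tuf(distance_km=20.0, threat_level=3.0)
print(launch(2.0), launch(6.0))     # 300.0 262.5
print(terminal(0.1), terminal(0.5)) # 300.0 0.0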

This battle management system was implemented both conventionally and with TUF's
■ There was a learning phase for the battle management domain experts to begin thinking in terms of this timeliness paradigm instead of priorities
■ Then they found TUF's much easier to use than priorities for expressing the dynamic timeliness requirements
■ The software development costs for the TUF version were significantly lower than for the priority based version
■ The experimental computing system interfaced to actual battle management facilities
■ It was demonstrated to USAF C2/battle management users and operators, and received strongly supportive feedback

Jensen - 81

The time/utility function paradigm – past, present, and future

Jensen - 82

Time/utility functions were devised for scheduling a classified radar system by Jensen in 1977
■ U.S. Army Safeguard Command's AN/FPQ-16 Perimeter Acquisition Radar Characterization System
■ Located at what is now the USAF Space Command's Cavalier Air Force Station, in Cavalier ("Concrete"), North Dakota
■ Radar performance with TUF scheduling was deemed to violate the SALT II treaty
■ The results are still classified

Jensen - 83

Research on time/utility functions and utility accrual algorithms resumed at CMU in 1983
■ Part of Jensen's Archons project
■ Locke and Clark did the first Ph.D. theses
■ Locke's algorithm was implemented in the Alpha distributed real-time OS kernel
■ CMU and General Dynamics developed the experimental coastal air defense battle management system, using Alpha

Jensen - 84

Subsequent work on this paradigm has been done by a number of institutions since then
■ CCUR's second generation of Alpha included Clark's DASA algorithm
■ SRI did a B3 multilevel secure version
  • TUF's used to trade off covert timing channel bandwidth (B3 required