Scheduling parallelizable tasks to minimize average response time

John Turek (IBM T.J. Watson Research Center, Yorktown Heights, NY), Walter Ludwig (University of Wisconsin-Madison, Madison, WI), Joel L. Wolf (IBM T.J. Watson Research Center, Yorktown Heights, NY), Lisa Fleischer (Cornell University, Ithaca, NY), Prasoon Tiwari (IBM T.J. Watson Research Center, Yorktown Heights, NY), Jason Glasgow (Technion Israel Institute of Technology, Haifa, Israel), Uwe Schwiegelshohn (University of Dortmund, Dortmund, Germany), and Philip S. Yu (IBM T.J. Watson Research Center, Yorktown Heights, NY)

SPAA '94, Cape May, NJ, USA. © 1994 ACM 0-89791-671-9/94/0006 $3.50

Abstract

A parallelizable (or malleable) task is one which can be run on an arbitrary number of processors, with a task execution time that depends on the number of processors allotted to it. Consider a system of M independent parallelizable tasks which are to be scheduled without preemption on a parallel computer consisting of P identical processors. For each task, the execution time is a known function of the number of processors allotted to it. The goal is to find (1) for each task i, an allotment of processors β_i, and (2) overall, a non-preemptive schedule assigning the tasks to the processors which minimizes the average response time of the tasks. Equivalently, we can minimize the flow time, which is the sum of the completion times of each of the tasks.

In this paper we tackle the problem of finding a schedule with minimum average response time in the special case where each task in the system has sublinear speedup. This natural restriction on the task execution time means simply that the efficiency of a task decreases or remains constant as the number of processors allotted to it increases. The scheduling problem with sublinear speedups has been shown to be NP-complete in the strong sense. We therefore focus on finding a polynomial time algorithm whose solution comes within a fixed multiplicative constant of optimal. In particular, we give an algorithm which finds a schedule having a response time that is within 2 times that of the optimal schedule and which runs in O(M(M^2 + P)) time.

1 Introduction

Consider a multiprocessor consisting of P identical processors and a system of M independent tasks to be scheduled without preemption. Each task i ∈ {1, ..., M} is parallelizable (or malleable) in the sense that an arbitrary number of processors β_i ∈ {1, ..., P} can be allotted to it, and the task execution time t_i(β_i) required to complete task i is a known function of the number of processors allotted to it. We sometimes refer to β_i as the degree of parallelism of task i, using the two terms interchangeably, and we denote by β = (β_1, ..., β_M) the allotment of processors for the entire task system.

A schedule consists, informally, of an allotment and a starting time τ_i for each task i. One can think of task i as taking place within a rectangle whose width along the time axis is the execution time t_i(β_i), stretching from the starting time τ_i to the completion time τ_i + t_i(β_i), and whose height along the processor axis is the allotment β_i. All β_i processors allotted to task i are required to execute the task in unison, starting at time τ_i, although the processors allotted to a task are not explicitly required to be contiguous. A schedule is legal if at any time the total number of processors allotted to active tasks does not exceed P.

In this paper we are concerned with finding a schedule that minimizes the average response time of the tasks. Response time is an important and well-studied measure of computer system performance. Note that we are attempting to minimize the average of the completion times of all the tasks, each task counting equally, and not just the completion time of the last task (as is the case in the well-studied makespan problem). Modulo a factor of M, which can obviously be removed without affecting the solution of the problem, the average response time of a schedule is given by the formula

    F_T(β) = Σ_{i=1}^{M} (τ_i + t_i(β_i)).    (1)

This quantity is typically called the flow time [Cof76]. For simplicity, we shall speak in terms of the flow time in the remainder of the paper; a solution to the minimum flow time problem obviously gives a solution to the minimum average response time problem. We refer to this problem as MRTS (for malleable response time scheduling). In case the allotments of the tasks in the system are fixed, given as input parameters of the problem rather than being part of its solution, each task i has a fixed degree of parallelism, say β_i, and a fixed execution time, say t_i. We refer to this special case of MRTS as NMRTS (for non-malleable response time scheduling).
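The definitions above translate directly into a small data model. The following sketch is illustrative only (it is not code from the paper, and names such as is_legal and flow_time are hypothetical): it represents an allotment and a schedule, checks that a schedule is legal, and evaluates the flow time of Equation (1).

```python
# Illustrative sketch of the model of Section 1 (not code from the paper).
# A task is described by its allotment beta_i, execution time t_i(beta_i),
# and starting time tau_i.

from typing import List

def is_legal(P: int, allot: List[int], start: List[float], exec_time: List[float]) -> bool:
    """Check that at every moment the active tasks use at most P processors.
    It suffices to test the instants at which some task starts."""
    M = len(allot)
    for i in range(M):
        t = start[i]
        busy = sum(allot[j] for j in range(M)
                   if start[j] <= t < start[j] + exec_time[j])
        if busy > P:
            return False
    return True

def flow_time(start: List[float], exec_time: List[float]) -> float:
    """Equation (1): F_T = sum_i (tau_i + t_i(beta_i))."""
    return sum(s + t for s, t in zip(start, exec_time))

# Example: P = 4 processors, three tasks with fixed allotments.
P = 4
allot = [2, 2, 4]                 # beta_i
exec_time = [3.0, 3.0, 2.0]       # t_i(beta_i)
start = [0.0, 0.0, 3.0]           # tau_i: tasks 1 and 2 run side by side, then task 3
assert is_legal(P, allot, start, exec_time)
print(flow_time(start, exec_time))   # (0+3) + (0+3) + (3+2) = 11
```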

For the sake of simplicity we shall assume that the execution time t_i(β_i) of task i is a non-increasing function of the number of processors β_i allotted to it. However, this is not actually a restriction: any task execution time function gives rise to a surrogate non-increasing function, defined by having the task actually use the number of processors, less than or equal to β_i, that minimizes its execution time; the remaining processors simply do not take part in the work.

In this paper we focus our attention on systems of tasks with sublinear speedup. Define the work of task i allotted β_i processors to be β_i t_i(β_i). Formally, the sublinear speedup assumption means that the work of each task is a non-decreasing function of β_i, for all 1 ≤ i ≤ M. In other words, the efficiency of a task decreases or simply remains constant as the number of processors allotted to it increases. We refer to this restricted version of MRTS as MRTS-SS. (We remark in passing that if the work of each task were instead a decreasing function of β_i, that is, if the tasks had superlinear speedup, the flow time would be trivially minimized by running the tasks one after another, each on all P processors, in order of increasing execution time.)
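As a concrete illustration of these definitions (again a sketch with hypothetical names, not code from the paper), the helpers below build the surrogate non-increasing execution-time function and test the sublinear speedup condition by checking that the work β t(β) never decreases as β grows.

```python
# Illustrative helpers for the definitions above (hypothetical names).

from typing import Dict

def surrogate_exec_time(t: Dict[int, float], P: int) -> Dict[int, float]:
    """Replace t(beta) by the minimum over allotments <= beta, yielding a
    non-increasing execution-time function (extra processors sit idle)."""
    best = float("inf")
    t_hat = {}
    for beta in range(1, P + 1):
        best = min(best, t.get(beta, float("inf")))
        t_hat[beta] = best
    return t_hat

def has_sublinear_speedup(t: Dict[int, float], P: int) -> bool:
    """Sublinear speedup: the work beta * t(beta) is non-decreasing in beta."""
    works = [beta * t[beta] for beta in range(1, P + 1)]
    return all(w1 <= w2 for w1, w2 in zip(works, works[1:]))

# Example: a task that parallelizes well up to 3 processors, then saturates.
P = 4
t = {1: 12.0, 2: 6.5, 3: 5.0, 4: 5.0}
t = surrogate_exec_time(t, P)
print(has_sublinear_speedup(t, P))   # True: the works are 12, 13, 15, 20
```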

MRTS-SS has been shown to be NP-complete in the strong sense [SG93], and since it is a special case of MRTS, the general malleable problem is NP-complete in the strong sense as well. We therefore focus on finding a polynomial time algorithm whose solution comes within a fixed multiplicative constant of the optimal one. In the current paper we present an algorithm for MRTS-SS which finds a schedule having a flow time within 2 times that of the optimal schedule and which runs in O(M(M^2 + P)) time. The algorithm makes use of a new lower bound on the flow time of the optimal schedule, expressed as a function of the allotment.

The rest of the paper is organized as follows. In Section 2 we briefly review previous work. Section 3 develops lower bounds on the flow time of the optimal schedule, and Section 4 presents the LIST scheduling algorithm, which produces schedules with flow time close to those bounds whenever no task is allotted more than ⌈P/2⌉ processors. Section 5 shows how to convert an arbitrary allotment into one obeying this restriction, and Section 6 shows how to compute an initial allotment to be used as input to the other components. In Section 7 we put the components together to obtain the overall algorithm and its worst case bound of 2.

2 Previous Work

The makespan versions of these problems, which we denote by MMS and NMMS for the malleable and non-malleable cases respectively, minimize the completion time of the latest of the tasks (in other words, the duration of the entire schedule) rather than the average of the completion times. Both problems have been studied extensively. They are NP-hard, and efforts in the literature have thus focused on finding polynomial time algorithms whose solution is within a fixed multiplicative constant of the optimal makespan; see, for example, [BB90, BCR80, CGJT80, CMM67, GG75, KM90, LLKS92, Sle80, TWPY92, TWY92, WC91, WC92].

By analogy, NMRTS corresponds to the special case of MRTS where each task i has a fixed degree of parallelism, say β_i, and a fixed execution time, say t_i. It is well-known that the problem of minimizing flow time for non-parallel tasks can be solved in polynomial time; the parallelizable versions, however, are NP-complete in the strong sense, as noted above. The problem of minimizing the flow time for parallelizable task systems was studied by Shachnai and Glasgow [SG93].

We can rewrite Equation 1 defining the flow time as

    F_T(β) = Σ_{i=1}^{M} τ_i + Σ_{i=1}^{M} t_i(β_i).

By definition, H_T(β) = Σ_{i=1}^{M} t_i(β_i). Therefore, to prove the bound 2A_T(β) + H_T(β) − 2W_T(β) in the lemma it suffices to show that

    Σ_{i=1}^{M} τ_i ≤ 2A_T(β) − 2W_T(β).

Using Equation 3 (which computes the squashed area bound by horizontal integration) and the definition of the normalized work bound, this is equivalent to showing

    Σ_{i=1}^{M} τ_i ≤ (2/P) Σ_{i=1}^{M} Σ_{j=1}^{i−1} β_j t_j(β_j).

We show this by proving the corresponding inequality for each term. In other words, we show for the i-th summand that

    τ_i ≤ (2/P) Σ_{j=1}^{i−1} β_j t_j(β_j).    (8)

To see this, consider Figure 4, which is a histogram of the number of processors used by the first i − 1 tasks. The useful work done by these tasks before the starting time τ_i of task i is shaded, while the unshaded area below the histogram before time τ_i is the wasted work associated with the idle processors.

[Figure 4: Histogram of useful work.]

By construction of the LIST algorithm, the number of idle processors at any point before time τ_i must always be less than β_i ≤ ⌈P/2⌉. Thus the amount W of wasted work before time τ_i cannot exceed the amount U of useful work done by the first i − 1 tasks before time τ_i. But U, in turn, cannot exceed the total amount of work done by the first i − 1 tasks, which is at most Σ_{j=1}^{i−1} β_j t_j(β_j). Since P τ_i = U + W ≤ 2U, inequality (8) follows, and the lemma is proved.

Applying the lower bound specified in Lemma 3.2 to Lemma 4.1 we get the following corollary.

Corollary 4.1  Let T be a non-malleable task system consisting of M tasks with a fixed allotment β such that β_i ≤ ⌈P/2⌉ for all i ∈ {1, ..., M}. Then applying the LIST algorithm to T yields a schedule for that task system with flow time F_T ≤ 2F_T*, where F_T* is the flow time of the optimal schedule for T.

The running time of the LIST algorithm can be shown to be dominated by the time required to initially sort the tasks, and thus is O(M log M).
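The LIST algorithm itself is defined earlier in the paper; the sketch below is an illustrative reconstruction rather than the paper's code. It captures the behaviour the analysis above relies on: the tasks are first sorted, here in non-decreasing order of work β_i t_i(β_i) as the squashed-area argument suggests, and each task is then started at the earliest time at which β_i processors are simultaneously free with respect to the tasks already placed.

```python
# Sketch of a LIST-style scheduler for a non-malleable task system
# (illustrative reconstruction; the paper defines LIST in Section 4).

from typing import List

def list_schedule(P: int, allot: List[int], exec_time: List[float]) -> List[float]:
    """Place tasks in non-decreasing order of work beta_i * t_i, starting each
    task at the earliest time at which beta_i processors are free. Returns the
    starting time tau_i of each task."""
    order = sorted(range(len(allot)), key=lambda i: allot[i] * exec_time[i])
    placed = []                       # (start, finish, processors) of placed tasks
    start = [0.0] * len(allot)

    def busy_at(q: float) -> int:
        return sum(p for s, f, p in placed if s <= q < f)

    for i in order:
        need, dur = allot[i], exec_time[i]
        # The earliest feasible start is time 0 or the finish time of a placed task.
        for t in sorted({0.0} | {f for _, f, _ in placed}):
            # The task fits iff capacity suffices at every event inside its window.
            window = [t] + [s for s, _, _ in placed if t < s < t + dur]
            if all(busy_at(q) + need <= P for q in window):
                start[i] = t
                placed.append((t, t + dur, need))
                break
    return start

# Example: P = 4, tasks already given a "skinny" allotment (<= ceil(P/2) = 2).
P = 4
allot = [2, 2, 1, 2]
exec_time = [1.0, 2.0, 5.0, 4.0]
tau = list_schedule(P, allot, exec_time)
print(tau, sum(t + e for t, e in zip(tau, exec_time)))
```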

5 Restricting an Allotment of Processors

In the previous section we saw that, given a non-malleable task system T and the condition that no more than ⌈P/2⌉ processors are allotted to any task i, the LIST algorithm generates a schedule with flow time F_T ≤ 2A_T(β) + H_T(β) − 2W_T(β), which by Lemma 3.2 yields the worst case bound of 2. Of course, an arbitrary allotment need not obey this restriction, even when the tasks have sublinear speedup. In this section we show how, given an arbitrary initial allotment β̂ for a malleable task system whose tasks have sublinear speedup, to generate a new allotment β obeying the requirement that β_i ≤ ⌈P/2⌉ for all i, such that applying the LIST algorithm to T with the allotment β still yields a schedule with a provable bound on the flow time. Call β the skinny allotment corresponding to β̂. Specifically, the skinny allotment is obtained by setting β_i = β̂_i whenever β̂_i ≤ ⌈P/2⌉, and β_i = ⌈P/2⌉ otherwise. The creation of a skinny allotment can be done in O(M) time.

Lemma 5.1  Let T be a malleable task system such that all the tasks have sublinear speedup. Let β̂ be any allotment of processors to tasks and let β be the corresponding skinny allotment. Then

    2A_T(β) + H_T(β) − 2W_T(β) ≤ 2A_T(β̂) + H_T(β̂) − W_T(β̂).

Furthermore, this bound is tight.

Note that the right-hand side differs from the bound of Lemma 4.1 only in the coefficient of the work term. The proof reduces to verifying, for each task i, a per-task inequality (Equation 10) comparing the terms involving t_i(β_i) and t_i(β̂_i). If β̂_i ≤ ⌈P/2⌉ then β_i = β̂_i and Equation 10 holds immediately; otherwise β_i = ⌈P/2⌉, and the inequality follows (after a minor reordering) from the sublinear speedup constraint and the fact that β̂_i ≤ P. To see that the bound is tight, consider a task system consisting of a single task with linear speedup, for which the original allotment gives the task all P processors while the skinny allotment gives it only ⌈P/2⌉.

Corollary 5.1  Let T be a malleable task system such that all the tasks have sublinear speedup. Let β̂ be any allotment of processors to tasks and let β be the corresponding skinny allotment. Then applying the LIST algorithm to T with the allotment β yields a schedule with flow time F_T ≤ 2F_T*(β̂), where F_T*(β̂) is the flow time of the optimal schedule for T given the allotment β̂. Furthermore, this bound is tight.

The proof is similar to that of Corollary 4.1, applying the lower bound specified in Lemma 3.2. In other words, the skinny allotment yields, via LIST, a schedule whose flow time is within a factor of 2 of the best schedule achievable with the original allotment.
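A skinny allotment is a purely local transformation of the allotment vector, so a sketch is short. The code below is illustrative (not the paper's), and the cap at ⌈P/2⌉ follows the definition reconstructed above.

```python
# Illustrative construction of a skinny allotment (hypothetical helper name).

import math
from typing import List

def skinny_allotment(P: int, allot_hat: List[int]) -> List[int]:
    """Cap every task's allotment at ceil(P/2), leaving smaller allotments
    unchanged. Runs in O(M) time."""
    cap = math.ceil(P / 2)
    return [min(b, cap) for b in allot_hat]

# Example: P = 7, so the cap is 4.
print(skinny_allotment(7, [1, 3, 5, 7]))   # [1, 3, 4, 4]
```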

6 Finding an Initial Allotment of Processors to Tasks

It follows from Lemmas 5.1 and 3.2 that if we are able to find an initial allotment β̂ such that 2A_T(β̂) + H_T(β̂) − W_T(β̂) ≤ 2F_T*, where F_T* is the flow time of the optimal schedule, then applying the LIST algorithm to the corresponding skinny allotment yields a schedule with flow time at most 2F_T*. Since the allotment used by the optimal schedule itself satisfies this inequality, it suffices to find an allotment that minimizes the objective function 2A_T(β) + H_T(β) − W_T(β).

Observe that if the tasks are indexed in non-decreasing order of work, the objective function decomposes into a sum of per-position costs: the task occupying position s of the ordering, allotted β processors, contributes (2(M − s) + 1) β t(β)/P + t(β) to the objective. For each task i and position s, define c_i^s to be the minimum of this cost over all allotments β ∈ {1, ..., P}, and let b_i^s be an allotment achieving it. Finding an initial allotment that minimizes the objective therefore reduces to finding a minimum weight full matching between tasks and positions in the complete bipartite graph whose edge (i, s) has weight c_i^s, that is, to an assignment problem. Once the costs are available, this assignment problem could be solved directly by the Hungarian method [Kuh55] in O(M^3) time.

The costs c_i^s, however, are not arbitrary. They satisfy an exchange property (Inequality 12): if two tasks are matched to consecutive positions out of order with respect to their work, then replacing the edges (i, s) and (j, s + 1) with the edges (i, s + 1) and (j, s) does not increase the weight of the matching. The algorithm FIND-MIN exploits this property to construct a minimum weight full matching directly, considering the tasks in order of increasing work; inequalities (13) through (15) show that any matching violating this order can be rearranged without increasing its weight, so any improvement over the matching FIND-MIN returns would contradict (12). The total time required to compute the costs and the matching, and hence to find the initial allotment, is O(M(M^2 + P)).
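To make the reduction concrete, the sketch below (illustrative, with hypothetical helper names; it is not the paper's FIND-MIN) builds the cost table c_i^s following the decomposition described above and solves the resulting assignment problem with SciPy's generic assignment solver. The paper's FIND-MIN instead exploits the exchange property of the costs to avoid a general-purpose solver.

```python
# Illustrative reduction of the initial-allotment problem to an assignment
# problem (hypothetical names; FIND-MIN in the paper is faster).

import numpy as np
from scipy.optimize import linear_sum_assignment
from typing import Dict, List, Tuple

def best_cost(t: Dict[int, float], P: int, s: int, M: int) -> Tuple[float, int]:
    """c_i^s and b_i^s: cheapest way to place this task in position s, where
    the per-position cost is (2(M - s) + 1) * beta * t(beta) / P + t(beta)."""
    coeff = 2 * (M - s) + 1
    costs = [(coeff * beta * t[beta] / P + t[beta], beta) for beta in range(1, P + 1)]
    return min(costs)

def initial_allotment(tasks: List[Dict[int, float]], P: int) -> List[int]:
    """Assign each task to a position in the work ordering and pick the
    allotment minimizing the objective 2*A_T + H_T - W_T."""
    M = len(tasks)
    c = np.zeros((M, M))
    b = np.zeros((M, M), dtype=int)
    for i, t in enumerate(tasks):
        for s in range(1, M + 1):
            c[i, s - 1], b[i, s - 1] = best_cost(t, P, s, M)
    rows, cols = linear_sum_assignment(c)   # minimum weight full matching
    allot = [0] * M
    for i, s in zip(rows, cols):
        allot[i] = int(b[i, s])
    return allot

# Example: two tasks on P = 4 processors.
tasks = [{1: 8.0, 2: 4.5, 3: 3.5, 4: 3.0}, {1: 2.0, 2: 1.2, 3: 1.0, 4: 1.0}]
print(initial_allotment(tasks, 4))
```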

7 Putting It All Together

We have now developed a provable suboptimality bound of 2 for MRTS-SS. Because the components of the algorithm were presented over the course of the paper, we recapitulate each of the steps of the entire algorithm here. Given a task system T in which all tasks have sublinear speedup:

1. Find an allotment β̂ of processors to tasks that minimizes the objective function 2A_T(β) + H_T(β) − W_T(β). This can be done in O(M(M^2 + P)) time using the algorithm FIND-MIN presented in Section 6.

2. Define a new allotment β by converting β̂ into the corresponding skinny allotment, as described in Section 5. This requires O(M) work.

3. Find a schedule by applying the LIST algorithm of Section 4 to the task system with the allotment β. This can be done in O(M log M) time.

The running time of the entire algorithm is clearly dominated by the first step. Combining the results of the previous sections gives our main result.

Theorem 7.1  Let T be a malleable task system such that all the tasks have sublinear speedup. The algorithm described in this paper yields a schedule for T with flow time F_T ≤ 2F_T*, where F_T* is the flow time of the optimal schedule. Furthermore, the running time of the algorithm is O(M(M^2 + P)).
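The three steps combine directly into a single routine. The sketch below simply wires together the illustrative helpers introduced earlier in this document (initial_allotment, skinny_allotment, list_schedule, flow_time, all hypothetical names from the previous sketches, not the paper's code).

```python
# End-to-end sketch of the algorithm of Theorem 7.1, built from the helpers
# sketched earlier in this document (illustrative only).

from typing import Dict, List, Tuple

def schedule_malleable(tasks: List[Dict[int, float]], P: int) -> Tuple[List[int], List[float], float]:
    """Step 1: pick an initial allotment minimizing 2*A_T + H_T - W_T.
    Step 2: convert it into a skinny allotment (cap at ceil(P/2)).
    Step 3: run LIST on the resulting non-malleable task system."""
    allot_hat = initial_allotment(tasks, P)            # Section 6
    allot = skinny_allotment(P, allot_hat)             # Section 5
    exec_time = [t[b] for t, b in zip(tasks, allot)]   # t_i(beta_i)
    start = list_schedule(P, allot, exec_time)         # Section 4
    return allot, start, flow_time(start, exec_time)

# Example: when every task has sublinear speedup, Theorem 7.1 guarantees the
# returned flow time is within a factor of 2 of optimal.
tasks = [{1: 8.0, 2: 4.5, 3: 3.5, 4: 3.0},
         {1: 2.0, 2: 1.2, 3: 1.0, 4: 1.0},
         {1: 6.0, 2: 3.2, 3: 2.4, 4: 2.0}]
allot, start, F = schedule_malleable(tasks, 4)
print(allot, start, F)
```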

8 Conclusion

We conclude with some areas for further work, which include the following items:

● By restricting the maximum degree of parallelism of a task to a value other than ⌈P/2⌉, it may be possible to somewhat improve the bounds presented in this paper. This needs further attention.

● It would be interesting to consider on-line versions of the scheduling problem. It can be shown that the worst case response time suboptimality of any given algorithm in which preemption is not allowed cannot be bounded by a constant. Therefore, any work along these lines must involve modifying the paradigm employed in this paper to allow some form of preemption.

● Our need for the restriction to sublinear speedups comes from the worst case performance formulas for the LIST algorithm, which are based on restricting the maximum degree of parallelism of the tasks. Better bounds for list scheduling with a given maximum degree of parallelism, such as those provided by [TSWY94a, TSWY94b], might make it possible to handle tasks with arbitrary speedups without incurring a penalty in the worst case bound.

● We have considered systems in which there is no physical relationship among the processors allotted to a task. It would be interesting to explore the same problem in systems, such as hypercubes and meshes, in which the physical proximity of the processors allotted to a task may be important.

References

[BB90] K. Belkhale and P. Banerjee. Approximate Algorithms for the Partitionable Independent Task Scheduling Problem. In Proceedings of the 1990 International Conference on Parallel Processing, volume I, pages 72–75, August 1990.

[BCR80] B. Baker, E. Coffman, and R. Rivest. Orthogonal Packings in Two Dimensions. SIAM Journal on Computing, 9(4):846–855, November 1980.

[BLK83] J. Blazewicz, J. Lenstra, and A. Rinnooy Kan. Scheduling Subject to Resource Constraints: Classification and Complexity. Discrete Applied Mathematics, 5:11–24, 1983.

[CGJT80] E. Coffman, M. Garey, D. Johnson, and R. Tarjan. Performance Bounds for Level-Oriented Two-Dimensional Packing Algorithms. SIAM Journal on Computing, 9(4):808–826, November 1980.

[CL88] G. Chen and T. Lai. Scheduling Independent Jobs on Hypercubes. In Proceedings of the Conference on Theoretical Aspects of Computer Science, pages 273–280, 1988.

[CMM67] R. Conway, W. Maxwell, and L. Miller. Theory of Scheduling. Addison-Wesley, 1967.

[Cof76] E. Coffman, editor. Computer and Job-Shop Scheduling Theory. Wiley, 1976.

[GG75] M. Garey and R. Graham. Bounds for Multiprocessor Scheduling with Resource Constraints. SIAM Journal on Computing, 4(2):187–200, June 1975.

[IK88] T. Ibaraki and N. Katoh. Resource Allocation Problems: Algorithmic Approaches. MIT Press, 1988.

[KM90] R. Krishnamurti and E. Ma. An Approximation Algorithm for Scheduling Tasks on Varying Partition Sizes in Partitionable Multiprocessor Systems. Technical Report RC 15900, IBM Research Division, 1990.

[Kuh55] H. Kuhn. The Hungarian Method for the Assignment Problem. Naval Research Logistics Quarterly, 2:83–97, 1955.

[LLKS92] E. Lawler, J. Lenstra, A. Rinnooy Kan, and D. Shmoys. Sequencing and Scheduling: Algorithms and Complexity. In Handbooks in Operations Research and Management Science, Volume IV: Logistics of Production and Inventory. North-Holland, 1992.

[LT94] W. Ludwig and P. Tiwari. Scheduling Malleable and Nonmalleable Parallel Tasks. In Proceedings of the Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, Arlington, VA, January 1994.

[PS82] C. Papadimitriou and K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Prentice Hall, 1982.

[SG93] H. Shachnai and J. Glasgow. Minimizing the Flow Time for Parallelizable Task Systems. Technical Report, Technion IIT, Haifa, 1993.

[Sle80] D. Sleator. A 2.5 Times Optimal Algorithm for Packing in Two Dimensions. Information Processing Letters, 10(1):37–40, February 1980.

[TSWY94a] J. Turek, U. Schwiegelshohn, J. Wolf, and P. Yu. Scheduling Parallel Tasks to Minimize Average Response Time. In Proceedings of the Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, Arlington, VA, January 1994.

[TSWY94b] J. Turek, U. Schwiegelshohn, J. Wolf, and P. Yu. Technical Report RC 19422, IBM Research Division, July 1994.

[TWPY92] J. Turek, J. Wolf, K. Pattipati, and P. Yu. Scheduling Parallelizable Tasks: Putting It All on the Shelf. In Proceedings of the ACM Sigmetrics Conference, Newport, RI, pages 225–236, June 1992.

[TWY92] J. Turek, J. Wolf, and P. Yu. Approximate Algorithms for Scheduling Parallelizable Tasks. In Proceedings of the 4th Annual Symposium on Parallel Algorithms and Architectures, San Diego, CA, pages 323–332, June 1992.

[WC91] Q. Wang and K. Cheng. List Scheduling of Parallel Tasks. Information Processing Letters, 37:291–297, March 1991.

[WC92] Q. Wang and K. Cheng. A Heuristic of Scheduling Parallel Tasks and its Analysis. SIAM Journal on Computing, 21(2):281–294, April 1992.
