A parallel priority queueing system with finite buffers


J. Parallel Distrib. Comput. 66 (2006) 379–392
www.elsevier.com/locate/jpdc

A parallel priority queueing system with finite buffers

Aliakbar Montazer Haghighi∗, Dimitar P. Mishev
Department of Mathematics, Prairie View A&M University, P.O. Box 4189, Prairie View, TX 77446, USA

Received 17 December 2004; received in revised form 30 June 2005; accepted 12 October 2005
Available online 10 January 2006

Abstract

In this paper we pose and analyze a novel model of a priority queueing system that represents a conveyor model. What is novel in this model is that the priority is designed to minimize the idleness of the system. Thus, we discuss a general parallel finite-buffer multi-server priority queueing system with balking and reneging and analyze a two-station case of it. An algorithmic solution for the general two-station case and closed-form solutions for special cases of this model are given. We use a block-matrix method to find the distribution of the queue length, the probability of system idleness, the probability of loss to the system, the probabilities of the stations being busy, and the first two moments. Numerical examples are given for comparison with Disney's model [R.L. Disney, Some multichannel queueing problems with ordered entry, J. Ind. Engng. 13 (1962) 46–48; R.L. Disney, D. König, Queueing networks: a survey of their random processes, SIAM Rev. 27 (3) (1985) 335–400] and for discussion of the properties of the model.
© 2005 Elsevier Inc. All rights reserved.

Keywords: Parallel; Queue; Priority; Buffer; Balking; Reneging; Length; Loss; Idle; Busy

1. Introduction

Conveyor theory studies methods of storing, retrieving, and transporting materials. The materials might be raw materials, in-process goods, finished goods, solid waste, information, mail, or even people. A conveyor system consists of mechanical equipment, i.e., material-transporting equipment and stations for operational processes (loading and unloading). Models and methods used to describe and analyze the operation of conveyor systems may be classified by different criteria. Some common criteria are conveyor arrangement, conveyor use (delivery and storage), loading and unloading stations (single and multiple), and material flow in and out of stations (continuous time, discrete time, deterministic, and stochastic). Analytical models are designed to shed light on the fundamental relationships among important model parameters. For a conveyor model these parameters are, for example, conveyor speed, length of forward and return paths, spacing of carriers, number and spacing of work stations, amount of in-process storage, service discipline, and material flow rate.

∗ Corresponding author. Fax: 936 857 2019.

E-mail addresses: [email protected] (A.M. Haghighi), [email protected] (D.P. Mishev).

0743-7315/$ - see front matter © 2005 Elsevier Inc. All rights reserved.
doi:10.1016/j.jpdc.2005.10.003

The mechanical design of conveyors has been thoroughly studied and well analyzed by engineers. However, the operational process of a conveyor did not receive much attention until the late 1970s. It was then that a great deal of attention was directed to the problem of prescriptive queueing models. It seems that the characterization of a conveyor as a queueing model is a good way to discuss the problem analytically. Among authors using this characterization we see those of [5–8,10,11,16,22,23], for example. Disney discusses a parallel finite-buffer multi-channel queue with ordered entry (priority) that arises in the area of conveyor design in a technical note [5]. However, he solves the problem only in the special case when there are two service channels. He then extends his study in another technical note [7] and discusses the equations governing the case, but he does not offer a solution. He mentions in [7] that his research is not complete and thus only discusses numerical solutions for the case of a finite intermediate. Gupta [13] tries to give a solution for the system of equations posed by Disney [5,7]; however, an explicit form of the solution is not given; rather, he describes how the solution can be obtained in principle. Others, as in [8,15], attack the problem analytically but solve special cases numerically as well. Numerical analysis of an ordered-entry three-channel queueing system with finite and infinite external sources with a


finite buffer at each server using computer programming has been considered in [8,9]. Balking and reneging are two important features of a queue, and many authors have addressed them. Multiple-server queues with feedback, balking, and reneging have been studied over the past four decades. See, for instance, [17–19]. Balking and reneging in operating the orders in a production system could be interpreted as cancellation of an order before or after it is placed. Minimizing the number of losses in a particular model is considered in [12]. However, the authors show how the operations should be ordered; they do not calculate the number of losses. Priority queues have been studied extensively, starting in the early days of queueing theory. See [4,14,25], for example. A fresh look at priority queues has appeared in the literature recently. See, for instance, [2,3,21,25]. From what is available in the literature, it is almost clear that standard available methods to solve multi-server queues will not result in a closed-form solution for the problem described above. Queueing theory and simulation are used in the military to quantify the operational performance of the Cormorant, for example, "to obtain the best-, average- and worst-case analysis of the expected steady state time a distress incident spends in the queue", Nguyen and Ng [20]. The notion of a priority, balking, and reneging queue is applicable in almost all areas of parallel computing systems wherein buffers are finite and/or the queue length is too long. It is also applicable in the military when shipments of troops, goods, and ammunition take place under enemy attack and minimal loss is desirable. Balking and reneging in the military could be thought of as the case in which an aircraft needs to land on a ship for emergency needs while all spaces of the ship are occupied, or cargo must leave a location that is under enemy bombardment.
"The supercarrier, too valuable to risk losing, and able to hurl its ordnance beyond the horizon, is made to stand away from battle, sometimes hundreds of miles away." "A Nimitz-class carrier operates with a crew of more than 3200 trained and specialized young men and women enlisted and officers", Robbins [24]. In operating the orders in a production system, balking and reneging could be interpreted as not receiving an order because the manufacturer is backlogged, or cancellation of an order before or after it is placed. Consideration of a zero balking rate may be desirable, since a vessel that does not enter the warfare area fails its mission. Thus, consideration of the model without balking will help decision making during wartime. Buffer size is also important and needs to be considered. In practice, the buffer sizes could be very large. This is why there has been increasing interest in developing parallel algorithms for Markov chain computations. Benzi and Tůma [1], for instance, consider the use of a parallel preconditioned iterative method for large, sparse linear systems in the context of Markov chain computations. As can be seen later from the graphs, despite the intuition that a larger buffer size may prevent balking and thus reduce the loss due to balking, the buffer size does not have a significant effect on the loss due to balking and reneging. Perhaps an increase in the service rate and shortening of the reneging times are more effective in a warfare situation. Knowledge of quantities such as estimates of the mean service time, mean

busy period, and losses due to balking and reneging (or escape) will help management planning for an uncertain process of communication by ground, air, water, or electronic means. In this paper, we pose and analyze a novel model of a priority queueing system that represents a conveyor model. What is novel in this model is that the priority is designed to minimize the idleness of the system. The main purpose of this research is to expand Disney's model [5,7]. There are three main differences between the model we pose and the one Disney poses in [5,7]. First, Disney's model is a two-station model with a single server in each station, while ours is a two-station model with multiple servers in each station. Second, the types of priorities are different. Disney's is a standard priority, i.e., arrivals must go to the first station unless it is full, while in our case, we do not allow a server in a station to be idle if there are tasks waiting in the buffer of the other station. Third, Disney's model is a regular two-server priority queue, while ours includes two very important additional features, balking and reneging. From what is available in the literature, it is almost clear that standard available methods to solve multi-server queues will not result in a closed-form solution; see [26], for instance. However, an objective of this research is to try again to see if this barrier can be broken. We will do that by offering an algorithmic solution for the general case and closed-form solutions for special cases. Thus, in Section 2 we will discuss the general case of S multi-channel stations, but in Sections 3 and 4 we will concentrate our work on S = 2, although the algorithms we offer are quite general. Numerical examples are given for comparison of our solutions with Disney's and to show workability of the algorithm.

2. Description of the general model

The system to be considered in general is as in Fig. 1.
There are S service stations, each consisting of one or more identical servers, say c1, c2, c3, ..., cS servers in stations 1, 2, 3, ..., S, respectively, set in parallel. Thus, the total number of servers in the system will be C = c1 + c2 + ··· + cS. Each station has a waiting room, called a buffer. The buffer sizes are Mi − ci, i = 1, 2, 3, ..., S, where Mi − ci ≥ 0. Thus, the capacity Mi, i = 1, 2, 3, ..., S, of each station is the size of its buffer plus the number of its servers. The total capacity of the system is M. Units arrive at the system from an infinite source according to a Poisson process with mean rate λ. The service times at each server in each station are i.i.d. random variables having negative exponential distributions with mean 1/μi, i = 1, 2, 3, ..., S, where the μi's are positive real numbers. At each station, units will be served on a FIFO basis by one of the servers in that station. Potential arrivals give a measure of the virtual load that the system should be prepared for. Since units may balk or renege, the system may lose some of its potential arrivals. The balking and reneging units, coupled with those who attend and finally obtain service, constitute the potential arrivals. If an arriving unit finds all servers busy at the time of its arrival, it will join the system with a constant probability p, 0 ≤ p ≤ 1, i.e., it will balk


Note that if c1 = c2 = 1, μ1 = μ2, α1 = α2 = 0, and p = 1, then the model described is the same as Disney's [5,7], except for the fact that in our system we allow filling the station that is idle from the other station, if that station has a task in its buffer.

Fig. 1. A parallel multi-station multi-server system.

with probability q = 1 − p. While a unit is waiting for service, it may decide to leave the system, i.e., it may renege. The length of time a unit may wait before it reneges is assumed to be a random variable having a negative exponential distribution with mean 1/αi, i = 1, 2, 3, ..., S, where the αi's are positive real numbers. Reneging cannot occur for a unit in service. Thus, the average number of reneged units from the system while servers are busy will be α1 + α2 + ··· + αS. If a unit leaves the system due to balking, reneging, or completion of its service and rejoins the system, it will be considered as a new arrival, independent of the previous balking, reneging, or being served. Service stations are ordered such that station 1 has the first priority of receiving the arrivals. That is, an arriving unit must join the queue before station 1 unless there are already M1 units in that station, in which case it will go before station 2 unless there are already M2 units in station 2, and so on. If all stations are full, then the arriving unit will be considered lost. Thus, loss to the system will be due to balking, reneging, or all stations being full. Due to the priority restriction, it is possible that at some point a server at a station becomes idle (and naturally the buffer of that station must be empty) while there is a unit in another buffer waiting to be served. Thus, we have two options: (1) let the system be as described and not bother with this idleness, or (2) avoid this situation and let the next unit be served at the immediate lower-indexed station with an idle server. If there is no idle server at any of the lower-indexed stations, then the immediate upper indices will be checked. Hence, if at any time there are C or more units in the system, none of the servers will be idle. Note that this case is different from jockeying.
For example, for a system with two stations and a single server in each, states (2,0) and (0,2) are the same as (1,1), and (3,0) and (0,3) are the same as (2,1) and (1,2), respectively. In general, for a two-station case we will have (c1 + i, c2 − j) ≡ (c1 + i − 1, c2 − j + 1), where 1 ≤ i ≤ M1 − c1, 1 ≤ j ≤ c2, and (c1 − i, c2 + j) ≡ (c1 − i + 1, c2 + j − 1), where 1 ≤ i ≤ c1, 1 ≤ j ≤ M2 − c2. For a multi-server system with two stations, state (c1 + 1, c2 − 1) is the same as (c1, c2), for example.
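As a sketch of this state-merging rule for the two-station case, the following hypothetical helper (our own illustration, not from the paper) maps a raw state (m1, m2) to its equivalent state in which no server is idle while the other station's buffer holds a waiting unit:

```python
def merge_state(m1, m2, c1, c2):
    """Map a two-station state (m1, m2) to its equivalent state in which
    no server sits idle while the other station's buffer holds a unit."""
    # Station 2 has an idle server while station 1 has waiting units:
    while m1 > c1 and m2 < c2:
        m1, m2 = m1 - 1, m2 + 1
    # Station 1 has an idle server while station 2 has waiting units:
    while m2 > c2 and m1 < c1:
        m1, m2 = m1 + 1, m2 - 1
    return m1, m2

# Single-server examples from the text (c1 = c2 = 1):
print(merge_state(2, 0, 1, 1))  # (1, 1)
print(merge_state(0, 3, 1, 1))  # (1, 2)
```

With c1 = c2 = 1 this reproduces the identifications in the text: (2,0) and (0,2) collapse to (1,1), while (3,0) and (0,3) collapse to (2,1) and (1,2).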

3. Analysis

Let Pm1,m2 denote the stationary probability of having m1 units in the first station and m2 in the second. Note that, based on our assumptions, we will have Pm1,m2 = 0 when 0 ≤ m1 ≤ c1 − 1 and c2 + 1 ≤ m2 ≤ M2, and Pm1,m2 = 0 when c1 + 1 ≤ m1 ≤ M1 and 0 ≤ m2 ≤ c2 − 1.

3.1. Two parallel single-server stations with equal service rates

In order to compare numerical values of our model with Disney's [5,7], we first consider two special steady-state cases, M1 = M2 = 1, and M1 = 1 and M2 = 3, similar to Disney's [5]. These are the cases where c1 = c2 = 1, μ1 = μ2 = μ, α1 = α2 = 0, and p = 1.

In the first case, i.e., M1 = M2 = 1, as mentioned earlier, both models are equivalent and, thus, the same solutions as in [5] hold for both. For convenience we repeat the solution here. Let ρ = λ/μ, the so-called traffic intensity, and S1 = ρ² + 2ρ + 2. Then

P0,0 = 2/S1,  P0,1 = ρ²/[S1(1 + ρ)],  P1,0 = ρ(2 + ρ)/[S1(1 + ρ)],  and  P1,1 = ρ²/S1.

In the second case, i.e., M1 = 1 and M2 = 3, Disney's solution is as follows:

P0,0 = 4(ρ + 1)(ρ + 2)/S3,  P0,1 = ρ²(3ρ + 4)/S3,
P1,0 = ρ(ρ² + 8ρ + 8)/S3,  P1,1 = ρ²(ρ + 1)(ρ + 4)/S3,
P0,2 = 2ρ³(ρ + 1)/S3,  P1,2 = ρ³(ρ + 1)(ρ + 2)/S3,
P0,3 = ρ⁴(ρ + 1)/S3,  and  P1,3 = ρ⁴(ρ + 1)²/S3,

where S3 = ρ⁶ + 4ρ⁵ + 8ρ⁴ + 13ρ³ + 20ρ² + 20ρ + 8. For our case, the system of difference equations will be

ρP0,0 = P1,0 + P0,1,
(1 + ρ)P0,1 = P1,1,
(1 + ρ)P1,0 = ρP0,0 + P1,1,
(2 + ρ)P1,1 = ρP0,1 + ρP1,0 + 2P1,2,      (1)
(2 + ρ)P1,2 = ρP1,1 + 2P1,3,
2P1,3 = ρP1,2,
P0,0 + P0,1 + P1,0 + P1,1 + P1,2 + P1,3 = 1.

The solution for system (1) is as follows:

P0,0 = 8(1 + ρ)/D3,  P0,1 = 4ρ²/D3,  P1,0 = 4ρ(2 + ρ)/D3,
P1,1 = 4ρ²(1 + ρ)/D3,  P1,2 = 2ρ³(1 + ρ)/D3,  and  P1,3 = ρ⁴(1 + ρ)/D3,

where D3 = ρ⁵ + 3ρ⁴ + 6ρ³ + 12ρ² + 16ρ + 8.

3.2. Two parallel multi-server stations

The system of balance equations for the two stationary parallel systems, one with multiple servers in each station and one with a single server in each station, is dependent; however, it will reduce to M1·M2 + 3 equations with M1·M2 + 3 unknowns. In fact, this dependence is part of a problem that needs to be addressed, because the matrix of coefficients is a so-called generator, i.e., it is a singular matrix. To reduce the number of units in the system we either have to have a service completed or an occurrence of a reneging. The system of balance equations in this case consists of 17 equations: 5 are from states (0,0), (1 ≤ m ≤ c1 − 1, 0), (c1, 0), (0, 1 ≤ n ≤ c2 − 1), (0, c2), and 12 are from states that are from pairs (m, n), where 1 ≤ m ≤ c1 − 1, c1, and c1 + 1 ≤ m ≤ M1 − 1, M1, and 1 ≤ n ≤ c2 − 1, c2, and c2 + 1 ≤ n ≤ M2 − 1, M2, excluding pairs with one component corresponding to an idle server in one station and a nonempty buffer in the other station. We split the system into two parts, P(1) and P(2). Thus, the system of balance equations for the model is the totality of P(1) and P(2) and the normalizing equation, as follows:

KP(1) = 0,      (2)
QP(2) = 0,      (3)
Σ_{m1=0}^{M1} Σ_{m2=0}^{M2} Pm1,m2 = 1,      (4)

where K, P(1), Q, and P(2) are defined below. K is a (c1+1) × (c1+1) tri-diagonal block matrix:

K =
[ E1   F1                              ]
[ D    E2   F2                         ]
[      D    E3   F3                    ]
[           ...  ...   ...             ]
[                D     Ec1    Fc1      ]
[                      D      Ec1+1    ]

where:
D is a (c2+1) × (c2+1) diagonal matrix with diagonal elements d(i,i) = λ, i = 1, 2, ..., c2+1;
Ek, k = 1, 2, ..., c1, is a (c2+1) × (c2+1) bi-diagonal matrix with elements ek(i, i+1) = iμ2, i = 1, 2, ..., c2, and ek(i,i) = −[λ + (k−1)μ1 + (i−1)μ2], i = 1, 2, ..., c2+1;
Ec1+1 is a (c2+1) × (c2+1) bi-diagonal matrix with an additional full last row, with elements ec1+1(i, i+1) = iμ2, i = 1, 2, ..., c2, ec1+1(i,i) = −[λ + c1μ1 + (i−1)μ2], i = 1, 2, ..., c2+1, and ec1+1(i,j) = λ, i = c2+1, j = 1, 2, ..., c2−1;
and Fk, k = 1, 2, ..., c1, is a (c2+1) × (c2+1) diagonal matrix with diagonal elements fk(i,i) = kμ1, k = 1, 2, ..., c1.

P(1) = [P0,0, P0,1, ..., P0,c2, P1,0, P1,1, ..., P1,c2, ..., Pc1,0, Pc1,1, ..., Pc1,c2]^T.

The matrix Q is an (M1−c1+1) × (M1−c1+1) tri-diagonal block matrix:

Q =
[ B1   C1                                        ]
[ A    B2   C2                                   ]
[      A    B3   C3                              ]
[           ...  ...    ...                      ]
[                A      BM1−c1     CM1−c1        ]
[                       A          BM1−c1+1      ]

where:
A is an (M2−c2+1) × (M2−c2+1) diagonal matrix with diagonal elements a(i,i) = pλ, i = 1, 2, ..., M2−c2+1;
Bm, m = 1, 2, ..., M1−c1, are (M2−c2+1) × (M2−c2+1) bi-diagonal matrices with respective elements
b1(i, i+1) = c1μ1 + iα2 + c2μ2, i = 1, 2, ..., M2−c2,
b1(i,i) = −pλ, i = 1, and b1(i,i) = −(pλ + c1μ1 + (i−1)α2 + c2μ2), i = 2, 3, ..., M2−c2,
and
bm(i, i+1) = iα2 + c2μ2, i = 1, 2, ..., M2−c2, m = 2, ..., M1−c1,
bm(i,i) = −[pλ + (m−1)α1 + c1μ1 + (i−1)α2 + c2μ2], i = 1, 2, ..., M2−c2, m = 2, ..., M1−c1;
BM1−c1+1 is an (M2−c2+1) × (M2−c2+1) tri-diagonal matrix with elements
bM1−c1+1(i+1, i) = pλ, i = 1, 2, ..., M2−c2,
bM1−c1+1(i,i) = −[pλ + (M1−c1)α1 + c1μ1 + (i−1)α2 + c2μ2], i = 1, 2, ..., M2−c2,
bM1−c1+1(i,i) = −[(M1−c1)α1 + c1μ1 + (i−1)α2 + c2μ2], i = M2−c2+1,
and bM1−c1+1(i, i+1) = iα2 + c2μ2, i = 1, 2, ..., M2−c2;
and Cm, m = 1, 2, ..., M1−c1, are (M2−c2+1) × (M2−c2+1) diagonal matrices with diagonal elements

cm(i,i) = mα1 + c1μ1 + c2μ2, i = 1, and cm(i,i) = mα1 + c1μ1, i = 2, 3, ..., M2−c2+1.

P(2) = [Pc1,c2, Pc1,c2+1, ..., Pc1,M2, Pc1+1,c2, Pc1+1,c2+1, ..., Pc1+1,M2, ..., PM1,c2, PM1,c2+1, ..., PM1,M2]^T.

Note that different combinations of the cases M1 = 1, 2 and M2 ≥ 1, and M1 ≥ 1 and M2 = 1, 2, can be obtained from (2)–(4) with some modifications. We will consider one of these special cases below.
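As an illustrative sketch (our own code, under our reading of the definitions above, with hypothetical parameter values), the block matrix K can be assembled mechanically from its blocks D, Ek, Ec1+1, and Fk:

```python
import numpy as np

def build_K(c1, c2, lam, mu1, mu2):
    """Assemble the (c1+1)x(c1+1) block tri-diagonal matrix K of Section 3.2,
    with (c2+1)x(c2+1) blocks: D on the sub-diagonal, E_k on the diagonal,
    F_k on the super-diagonal. Indices below are 0-based."""
    n = c2 + 1
    D = lam * np.eye(n)                                   # d(i,i) = lam

    def E(k):                                             # bi-diagonal E_k
        M = np.zeros((n, n))
        # e_k(i,i) = -[lam + (k-1)mu1 + (i-1)mu2]; k = c1+1 uses c1*mu1
        kk = c1 if k == c1 + 1 else k - 1
        for i in range(n):
            M[i, i] = -(lam + kk * mu1 + i * mu2)
        for i in range(c2):
            M[i, i + 1] = (i + 1) * mu2                   # e_k(i, i+1) = i*mu2
        if k == c1 + 1:                                   # extra full last row
            M[n - 1, :c2 - 1] = lam
        return M

    def F(k):
        return k * mu1 * np.eye(n)                        # f_k(i,i) = k*mu1

    K = np.zeros(((c1 + 1) * n, (c1 + 1) * n))
    for k in range(1, c1 + 2):                            # block rows 1..c1+1
        r = (k - 1) * n
        K[r:r + n, r:r + n] = E(k)
        if k >= 2:
            K[r:r + n, r - n:r] = D
        if k <= c1:
            K[r:r + n, r + n:r + 2 * n] = F(k)
    return K

K = build_K(c1=2, c2=2, lam=1.0, mu1=1.5, mu2=2.0)
print(K.shape)  # (9, 9)
```

The helper is only a structural sketch (the parameter values are assumptions); it is meant to show where each block lands in K, not to reproduce the paper's numerical results.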


To be able to solve system (2)–(4) without ambiguity, we temporarily change all P's to X's for system (2) and to Y's for system (3); at the end we will go back to our normal notation with P's. This is because, since we are not solving the complete system at once, the P's are not probabilities until we normalize them.

Solution of P(1): From system (2) we solve for Xm1,m2, 0 ≤ m1 ≤ c1 − 1, 0 ≤ m2 ≤ c2 − 1, in terms of Xc1,c2. To solve for Xm1,m2, we let X = [X0, X1, ..., Xc1]^T, where T stands for the transpose of a matrix, X0 = [X0,0, X0,1, ..., X0,c2], X1 = [X1,0, X1,1, ..., X1,c2], ..., and Xc1 = [Xc1,0, Xc1,1, ..., Xc1,c2]. We also let X̃ = [X0, X1, ..., X̃c1]^T, where X̃c1 = [Xc1,0, Xc1,1, ..., Xc1,c2−1]. Thus, X̃ will be the solution of P(1). Letting Xc1,c2 = 1, we solve for X̃ in terms of Xc1,c2 (in case one does not want to let Xc1,c2 = 1, the solution will be multiplied by the value of Xc1,c2). We use the following algorithm, consisting of four steps, to solve for X̃.

Algorithm to solve for X̃:
AX.1. Delete the last row of K and call the remaining matrix K1.
AX.2. Choose the last column of K1, multiply it by (−1), and call this matrix K3.
AX.3. Delete the last column of K1 and call the new matrix K2.
AX.4. X̃ = K2^{−1}·K3.

Solution of P(2): The system of equations using Q will be

B1Y1 + C1Y2 = 0,
AYj−1 + BjYj + CjYj+1 = 0,  j = 2, 3, ..., M1 − c1,      (5)
AYM1−c1 + BM1−c1+1 YM1−c1+1 = 0,

where Y1 = Xc1 = [Xc1,c2, Xc1,c2+1, Xc1,c2+2, ..., Xc1,M2], Y2 = Xc1+1 = [Xc1+1,c2, Xc1+1,c2+1, Xc1+1,c2+2, ..., Xc1+1,M2], ..., YM1−c1+1 = XM1 = [XM1,c2, XM1,c2+1, XM1,c2+2, ..., XM1,M2], and [Y1, Y2, ..., YM1−c1+1]^T = X = [Xc1, Xc1+1, ..., XM1]^T. To solve system (3) we apply the following 6-step algorithm:

AY.1. Solve system (5) recursively as follows. From the first equation of (5) we have Y1 = −B1^{−1}C1Y2 = −S1^{−1}C1Y2 = G1Y2, where S1 = B1 and G1 = −S1^{−1}C1. From the second equation of (5) we have (B2 + AG1)Y2 + C2Y3 = 0, or S2Y2 + C2Y3 = 0, where S2 = B2 + AG1. Thus, Y2 = −S2^{−1}C2Y3 = G2Y3, where G2 = −S2^{−1}C2. Continuing this way, we will have (Bj + AGj−1)Yj + CjYj+1 = 0, or SjYj + CjYj+1 = 0, where Sj = Bj + AGj−1. Thus, Yj = −Sj^{−1}CjYj+1 = GjYj+1, where Gj = −Sj^{−1}Cj.
AY.2. Substitute YM1−c1 in the third equation of (5), i.e.,

(BM1−c1+1 + AGM1−c1)YM1−c1+1 = 0, or DYM1−c1+1 = 0,      (6)

where D = BM1−c1+1 + AGM1−c1. D is an (M2−c2+1) × (M2−c2+1) singular matrix with rank M2 − c2, as mentioned above. Now solve (6). This equation has infinitely many solutions. Therefore, to solve it, let YM1−c1+1,1 = 1, for example, and find the rest of the entries of YM1−c1+1.
AY.3. From (5) find Yj, j = M1−c1, M1−c1−1, ..., 2, 1.
AY.4. Multiply the solution X̃ from Part 1 by Xc1,c2 and call it X̃1.
AY.5. Find S1 = Σ_{m1=0}^{M1} Σ_{m2=0}^{M2} Xm1,m2. Note that Xm1,m2 = 0 for 0 ≤ m1 ≤ c1 − 1 and c2 + 1 ≤ m2 ≤ M2, and Xm1,m2 = 0 for c1 + 1 ≤ m1 ≤ M1 and 0 ≤ m2 ≤ c2 − 1.
AY.6. Divide each of the Xm1,m2's from steps 3 and 4 by S1 and let Pm1,m2 = Xm1,m2/S1. This will make the sum of all new Pm1,m2's equal to 1. Thus, the Pm1,m2's form the distribution of the length of the queue.
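Steps AY.1–AY.3 amount to a block-tridiagonal forward elimination followed by back-substitution. As an illustrative sketch only (a toy example of our own, not the paper's model), the same recursion applied to the transposed generator of a six-state birth–death chain, partitioned into 2 × 2 blocks, recovers its stationary vector; here the sub-diagonal blocks are read off per block row, whereas in the paper A is the same for every row:

```python
import numpy as np

# Transposed generator of a 6-state birth-death chain (assumed example rates);
# its stationary vector pi satisfies QT @ pi = 0, and QT is block tridiagonal.
lam = [1.0, 2.0, 1.5, 1.0, 0.5]
mu = [2.0, 1.0, 2.5, 2.0, 1.0]
n = 6
QT = np.zeros((n, n))
for i in range(n - 1):
    QT[i + 1, i] += lam[i]     # up-transition i -> i+1
    QT[i, i + 1] += mu[i]      # down-transition i+1 -> i
QT -= np.diag(QT.sum(axis=0))  # diagonal makes each column sum to zero

# Partition into 2x2 blocks: B (diagonal), C (super-), A (sub-diagonal).
k, m = 2, 3
B = [QT[j*k:(j+1)*k, j*k:(j+1)*k] for j in range(m)]
C = [QT[j*k:(j+1)*k, (j+1)*k:(j+2)*k] for j in range(m - 1)]
A = [QT[(j+1)*k:(j+2)*k, j*k:(j+1)*k] for j in range(m - 1)]

# Forward elimination (AY.1): S_j = B_j + A G_{j-1}, G_j = -S_j^{-1} C_j.
G = [None] * (m - 1)
G[0] = -np.linalg.solve(B[0], C[0])
for j in range(1, m - 1):
    S = B[j] + A[j - 1] @ G[j - 1]
    G[j] = -np.linalg.solve(S, C[j])

# Last block equation D Y_m = 0 with singular D (AY.2):
D = B[m - 1] + A[m - 2] @ G[m - 2]
Ylast = np.empty(k)
Ylast[0] = 1.0                 # fix the first entry, as in AY.2
Ylast[1:] = np.linalg.solve(D[1:, 1:], -D[1:, 0])

# Back-substitution (AY.3) and normalization (AY.5-AY.6):
Y = [None] * m
Y[m - 1] = Ylast
for j in range(m - 2, -1, -1):
    Y[j] = G[j] @ Y[j + 1]
pi = np.concatenate(Y)
pi /= pi.sum()

print(np.allclose(QT @ pi, 0, atol=1e-10))  # True: pi is stationary
```

For a birth–death chain the answer can be cross-checked against detailed balance (pi[i+1]/pi[i] = lam[i]/mu[i]), which is what makes this toy case a convenient test of the recursion.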

3.3. Special case: M1 = 1, M2 ≥ 3, c1 = 1, c2 = 1, i.e., no buffer at station 1

The system of balance equations in this case will be

λP0,0 = μ1P1,0 + μ2P0,1,
(λ + μ2)P0,1 = μ1P1,1,
(λ + μ1)P1,0 = λP0,0 + μ2P1,1,
(pλ + μ1 + μ2)P1,1 = λP0,1 + λP1,0 + (μ1 + α2 + μ2)P1,2,      (7)
[pλ + μ1 + (m2 − 1)α2 + μ2]P1,m2 = pλP1,m2−1 + (μ1 + m2α2 + μ2)P1,m2+1,  2 ≤ m2 ≤ M2 − 1,
[μ1 + (M2 − 1)α2 + μ2]P1,M2 = pλP1,M2−1,
P0,0 + P0,1 + Σ_{m2=0}^{M2} P1,m2 = 1.

Note that for M2 = 1 and M2 = 2, system (7) holds except that it needs to be modified. For M2 = 1, the term pλ on the left, the last term of the fourth equation, and the entire fifth equation should be deleted. When M2 = 2, the entire fifth equation should be deleted. The solution in these cases can be obtained trivially. The solution for the special case M1 = 1 and M2 = 3 is given in Section 3.1.

We use the 2-part algorithm to solve the special case M1 = 1, M2 ≥ 3, c1 = 1, c2 = 1. System (7) reduces to

−pλP1,1 + (μ1 + α2 + μ2)P1,2 = 0,
−[pλ + μ1 + (m2 − 1)α2 + μ2]P1,m2 + pλP1,m2−1 + (μ1 + m2α2 + μ2)P1,m2+1 = 0,  2 ≤ m2 ≤ M2 − 1,      (8)
−[μ1 + (M2 − 1)α2 + μ2]P1,M2 + pλP1,M2−1 = 0.

AY.2 does not apply in this case and should be ignored. The matrix Q in this case is a matrix of numbers; it is not a block matrix. Thus, we apply the algorithm for solving X̃, AX.1–AX.4.


The elements of Q are as follows:

q(i, i+1) = ci, i = 1, 2, ..., M2 − 1,
q(1, 1) = b1, q(i, i) = bi, i = 2, 3, ..., M2 − 1, q(M2, M2) = bM2,
q(i+1, i) = a,

where ci = μ1 + iα2 + μ2, i = 1, 2, ..., M2 − 1; b1 = −pλ; bi = −[pλ + μ1 + (i − 1)α2 + μ2], i = 2, 3, ..., M2 − 1; bM2 = −[μ1 + (M2 − 1)α2 + μ2]; and a = pλ. The system of equations using Q will be

b1X1 + c1X2 = 0,
aXj−1 + bjXj + cjXj+1 = 0,  j = 2, 3, ..., M2 − 1,
aXM2−1 + bM2XM2 = 0.

To obtain an explicit solution, from (8), we let

U0 = X1,1,  U1 = X1,2,  ...,  UM2−1 = X1,M2.

Using (7), system (8) can be rewritten as

−pλU0 + (μ1 + α2 + μ2)U1 = 0,
pλUm2−2 − [pλ + μ1 + (m2 − 1)α2 + μ2]Um2−1 + (μ1 + m2α2 + μ2)Um2 = 0,  2 ≤ m2 ≤ M2 − 1,      (9)
pλUM2−2 − [μ1 + (M2 − 1)α2 + μ2]UM2−1 = 0.

Let n = m2 − 2, 0 ≤ n ≤ M2 − 3. Then the second equation of (9) can be written as

[μ1 + (n + 2)α2 + μ2]Un+2 − [pλ + μ1 + (n + 1)α2 + μ2]Un+1 + pλUn = 0,  0 ≤ n ≤ M2 − 3.      (10)

We define the generating function of Un as

G(x) = Σ_{n=0}^{∞} Un x^n,  |x| < 1.      (11)

It is clear that G(0) = U0. Thus, without loss of generality we may assume that x ≠ 0. Using (11), Eq. (10) can be rewritten as

Σ_{n=0}^{∞} [μ1 + (n + 2)α2 + μ2]Un+2 x^n − Σ_{n=0}^{∞} [pλ + μ1 + (n + 1)α2 + μ2]Un+1 x^n + Σ_{n=0}^{∞} pλUn x^n = 0.      (12)

We consider two possibilities: α2 = 0 and α2 ≠ 0.

3.3.1. α2 = 0, i.e., there is no reneging from station 2
We need to consider this case for theoretical purposes. From (11) and (12) we have

G(x) = (μ1 + μ2)U0 / [(μ1 + μ2) − pλx] = U0 / [1 − (pλ/(μ1 + μ2))x].      (13)

Let U0 = X1,1 = 1 and u = pλ/(μ1 + μ2). Note that u > 0. Then, since

1/(1 − ux) = Σ_{n=0}^{∞} u^n x^n  if |ux| < 1, or |x| < 1/u = (μ1 + μ2)/(pλ),

from (11) and (13) we will have

Un = u^n = (pλ/(μ1 + μ2))^n,  n = 0, 1, 2, ..., M2 − 1,      (14)

which is a solution of system (9). Hence

X1,n = Un−1 = u^{n−1} = (pλ/(μ1 + μ2))^{n−1},  n = 1, 2, ..., M2.

Now let

S2 = X0,0 + X0,1 + X1,0 + Σ_{n=1}^{M2} X1,n,      (15)

where X0,0, X0,1, and X1,0 are defined by

X0,0 = [μ1μ2(2λ + μ1 + μ2)/(λ²(λ + μ2))] X1,1,  X0,1 = [μ1/(λ + μ2)] X1,1,  and  X1,0 = [μ2(λ + μ1 + μ2)/(λ(λ + μ2))] X1,1,

with X1,1 = 1 and the X1,n defined by (14). Therefore,

P0,0 = X0,0/S2,  P0,1 = X0,1/S2,  P1,0 = X1,0/S2,  and  P1,n = X1,n/S2 = (1/S2)(pλ/(μ1 + μ2))^{n−1},  n = 1, 2, ..., M2,

where S2 is given by (15).

3.3.2. α2 ≠ 0
From (11) and (12) we will have

G′(x) + [(μ1 + μ2 − pλx)/(α2x)] G(x) = [(μ1 + μ2)/(α2x)] U0.      (16)

From the first equation of system (9) it follows that U1 = pλ/(μ1 + α2 + μ2). Thus, from (16) and letting U0 = X1,1 = 1 we will have

G′(x) + p(x)G(x) = q(x),      (17)

where

p(x) = [(μ1 + μ2)/α2]·(1/x) − pλ/α2  and  q(x) = [(μ1 + μ2)/α2]·(1/x).      (18)

The solution of the linear first-order differential equation (17) is

G(x) = e^{−∫p(x)dx} [ ∫ q(x) e^{∫p(x)dx} dx + C ],      (19)

where C is the constant of integration. From (18) and (19) we will have

G(x) = x^{−(μ1+μ2)/α2} e^{(pλ/α2)x} [ ((μ1 + μ2)/α2) ∫ x^{(μ1+μ2−α2)/α2} e^{−(pλ/α2)x} dx + C ].      (20)

The indefinite integral on the right-hand side of (20) can be evaluated by elementary functions in a closed form if (μ1 + μ2 − α2)/α2 is a non-negative integer. To evaluate it, we consider two cases: one with μ1 + μ2 = α2 and another with μ1 + μ2 = (m + 1)α2, m = 1, 2, ....

3.3.2.1. μ1 + μ2 = α2. From (20) we will have

G(x) = (1/x) e^{(pλ/α2)x} [ −(α2/(pλ)) e^{−(pλ/α2)x} + C ]
= −(α2/(pλ))(1/x) + C(1/x) Σ_{n=0}^{∞} (1/n!)(pλ/α2)^n x^n.      (21)

If C = α2/(pλ), then from (21) we will have

G(x) = (α2/(pλ))(1/x) Σ_{n=1}^{∞} (1/n!)(pλ/α2)^n x^n = Σ_{n=0}^{∞} [1/(n + 1)!](pλ/α2)^n x^n.      (22)

Therefore, from (11) and (22) we will have

Un = [1/(n + 1)!](pλ/α2)^n,  n = 0, 1, ..., M2 − 1,  and  X1,n = Un−1 = (1/n!)(pλ/α2)^{n−1},  n = 1, 2, ..., M2.      (23)

Then

P0,0 = X0,0/S2,  P0,1 = X0,1/S2,  P1,0 = X1,0/S2,  and  P1,n = X1,n/S2 = (1/S2)(1/n!)(pλ/α2)^{n−1},  n = 1, 2, ..., M2,

where S2 and the X1,n are given by (15) and (23), respectively.

3.3.2.2. μ1 + μ2 = (m + 1)α2, m = 1, 2, 3, .... From (20) we will have

G(x) = x^{−(m+1)} e^{(pλ/α2)x} [ (m + 1) ∫ x^m e^{−(pλ/α2)x} dx + C ],  m = 1, 2, 3, ....      (24)

By induction on m and using integration by parts, it is easy to show that

∫ x^m e^{−(pλ/α2)x} dx = −(α2/(pλ)) x^m e^{−(pλ/α2)x} + Σ_{k=1}^{m} (−1)^k [m(m − 1)···(m − k + 1)/(−pλ/α2)^{k+1}] x^{m−k} e^{−(pλ/α2)x},  m = 1, 2, 3, ....      (25)

From (24) and (25) and some algebra we will have

G(x) = (1/x^{m+1}) [ Σ_{j=0}^{m} (−1)^j (−(pλ/α2)x)^j / j! ] × [ C + (−1)^m (m + 1)!/(−pλ/α2)^{m+1} ] + C Σ_{i=0}^{∞} (−1)^{m+1+i} [(−pλ/α2)^{m+1+i}/(m + 1 + i)!] x^i.      (26)

Letting C = (−1)^{m+1}(m + 1)!/(−pλ/α2)^{m+1}, from (26) we will have

G(x) = Σ_{j=0}^{∞} [(m + 1)!/(m + 1 + j)!] (pλ/α2)^j x^j.      (27)

Thus, from (11) and (27) we will have

Un = [(m + 1)!/(m + 1 + n)!] (pλ/α2)^n,  n = 0, 1, 2, ..., M2 − 1.      (28)

It can easily be seen that Un, n = 0, 1, 2, ..., M2 − 1, found in (28) is a solution of (10). Therefore

X1,n = Un−1 = [(m + 1)!/(m + n)!] (pλ/α2)^{n−1},  n = 1, 2, ..., M2.      (29)

Thus, we have

P0,0 = X0,0/S2,  P0,1 = X0,1/S2,  P1,0 = X1,0/S2,  and  P1,n = X1,n/S2 = (1/S2)[(m + 1)!/(m + n)!](pλ/α2)^{n−1},  n = 1, 2, ..., M2,

where S2 and the X1,n are given by (15) and (29), respectively.
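As an independent check, the closed forms (14) and (28) can be plugged back into recurrence (10). The short script below (our own verification, with arbitrarily chosen parameter values) confirms both cases numerically:

```python
from math import factorial

p, lam = 0.6, 2.0  # balking probability and arrival rate (assumed values)

def residual(U, mu1, mu2, a2, n):
    """Left-hand side of recurrence (10) at index n for a sequence U."""
    return ((mu1 + (n + 2) * a2 + mu2) * U[n + 2]
            - (p * lam + mu1 + (n + 1) * a2 + mu2) * U[n + 1]
            + p * lam * U[n])

# Case 3.3.1 (alpha2 = 0): U_n = (p*lam/(mu1+mu2))**n, Eq. (14).
mu1, mu2 = 1.0, 0.5
U14 = [(p * lam / (mu1 + mu2)) ** n for n in range(10)]
print(all(abs(residual(U14, mu1, mu2, 0.0, n)) < 1e-12 for n in range(8)))

# Case 3.3.2.2 (mu1 + mu2 = (m+1)*alpha2): U_n from Eq. (28), with m = 2.
m, a2 = 2, 0.5                 # mu1 + mu2 = 1.5 = (m+1)*a2
U28 = [factorial(m + 1) / factorial(m + 1 + n) * (p * lam / a2) ** n
       for n in range(10)]
print(all(abs(residual(U28, mu1, mu2, a2, n)) < 1e-12 for n in range(8)))
```

Both checks print True: the residual of (10) vanishes (up to floating-point error) for every index, under the respective side conditions on μ1 + μ2 and α2.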


Table 1

3.4. Moments

Having found the joint distribution of the queue lengths, we can find all moments, in particular the first two, of the queue length at each station and in the system. Let the random variables L1 and L2 denote the queue lengths at stations 1 and 2, respectively. Then L = L1 + L2 will denote the total number of units in the system, including those in the servers. Based on the assumptions of the model we will have:

E(L1) = Σ_{m=1}^{c1} m P_{m,0} + Σ_{m=1}^{M1} Σ_{n=1}^{M2} m P_{m,n},
E(L2) = Σ_{n=1}^{c2} n P_{0,n} + Σ_{m=1}^{M1} Σ_{n=1}^{M2} n P_{m,n},
E(L) = Σ_{m=1}^{c1} m P_{m,0} + Σ_{n=1}^{c2} n P_{0,n} + Σ_{m=1}^{M1} Σ_{n=1}^{M2} (m + n) P_{m,n},
E(L1²) = Σ_{m=1}^{c1} m² P_{m,0} + Σ_{m=1}^{M1} Σ_{n=1}^{M2} m² P_{m,n},
E(L2²) = Σ_{n=1}^{c2} n² P_{0,n} + Σ_{m=1}^{M1} Σ_{n=1}^{M2} n² P_{m,n},
E(L²) = Σ_{m=1}^{c1} m² P_{m,0} + Σ_{n=1}^{c2} n² P_{0,n} + Σ_{m=1}^{M1} Σ_{n=1}^{M2} (m + n)² P_{m,n}.

3.5. Loss/attendance of units

Potential arrivals give a measure of the virtual arrival load that the system should be prepared for. The losses to the system may be due to balking, reneging, or all stations being full. The balked units, the reneged units and those units that finally complete service constitute the potential arrivals. The probability of the system being idle is P_{0,0}. The probabilities of the stations being busy are

P(Station 1 busy) = Σ_{m=1}^{c1} Σ_{n=0}^{c2} P_{m,n} + Σ_{m=c1+1}^{M1} Σ_{n=c2+1}^{M2} P_{m,n},
P(Station 2 busy) = Σ_{m=0}^{c1} Σ_{n=1}^{c2} P_{m,n} + Σ_{m=c1+1}^{M1} Σ_{n=c2+1}^{M2} P_{m,n}.

Let ν denote the average number of losses due to balking and reneging during a unit service time within a busy period. As indicated in [18], the probability that all servers in the system are busy is Σ_{m=c1}^{M1} Σ_{n=c2}^{M2} P_{m,n}. Since the probability of balking when all servers are busy is q, the probability of losing a unit due to balking will be q Σ_{m=c1}^{M1} Σ_{n=c2}^{M2} P_{m,n}. Also, since the mean arrival rate is λ, the mean loss due to balking during a busy period will be qλ Σ_{m=c1}^{M1} Σ_{n=c2}^{M2} P_{m,n}. On the other hand, since the mean service time in the system is 1/(c1μ1 + c2μ2) and the mean loss due to reneging from the system is α1 + α2, we will have

ν = [α1 + α2 + qλ Σ_{m=c1}^{M1} Σ_{n=c2}^{M2} P_{m,n}] / (c1μ1 + c2μ2).

Letting A be the mean number of units attended during a unit service time within a busy period, we will have

A = [λ / (c1μ1 + c2μ2)] [Σ_{m=0}^{c1} Σ_{n=0}^{c2} P_{m,n} − P_{c1,c2} + p Σ_{m=c1}^{M1} Σ_{n=c2}^{M2} P_{m,n}].

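The moment sums above are mechanical to evaluate once the joint distribution is available. A minimal NumPy sketch; the 2 × 2 array P below is a hypothetical joint distribution used only for illustration:

```python
import numpy as np

# Hypothetical joint queue-length distribution P[m, n] = P_{m,n}
# (illustrative values only; any valid joint pmf works here).
P = np.array([[0.4, 0.1],
              [0.3, 0.2]])
m = np.arange(P.shape[0])[:, None]   # station-1 counts, as a column
n = np.arange(P.shape[1])[None, :]   # station-2 counts, as a row

EL1 = (m * P).sum()                  # E(L1)
EL2 = (n * P).sum()                  # E(L2)
EL = ((m + n) * P).sum()             # E(L) = E(L1) + E(L2)
EL_sq = (((m + n) ** 2) * P).sum()   # E(L^2)
VarL = EL_sq - EL ** 2               # Var(L)
print(round(float(EL1), 2), round(float(EL2), 2),
      round(float(EL), 2), round(float(VarL), 2))
# 0.5 0.3 0.8 0.56
```

Since the single sums over P_{m,0} and P_{0,n} are just the missing pieces of the full double sums, summing over the whole grid, as above, gives the same values.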
Note that when α1 = α2 = 0, μ1 = μ2 ≡ μ and p = 1, then ν = 0 and A = λ/(2μ) for all values of M2.

4. Numerical examples

We let ρ1 = λ/μ1, ρ2 = λ/μ2 and ρs = λ/(c1μ1 + c2μ2) denote the traffic intensity (work load) in station 1, station 2 and in the

Table 1
Both models (M1 = M2 = 1)

        λ    P00 = Pi   P01   P10   P11 = Pl   P1b   P2b
ρ = 1   5    .4         .1    .3    .2         .5    .3
ρ = 2   10   .2         .13   .27   .4         .67   .53
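For this M1 = M2 = 1 case, the entries of Table 1 can be reproduced directly from the balance equations. A minimal Python/NumPy sketch; the 4-state generator is our reading of the model (states ordered (0,0), (0,1), (1,0), (1,1): an arrival takes station 1 if it is free, otherwise station 2, otherwise it is lost), and the solve uses the row/column-deletion reduction described in Section 4.2:

```python
import numpy as np

def Q_two_station(lam, mu):
    # G[i, j] = transition rate from state i to state j; states are
    # (0,0), (0,1), (1,0), (1,1). An arrival joins station 1 if free,
    # otherwise station 2, otherwise it is lost.
    G = np.array([
        [-lam,         0.0,          lam,        0.0],
        [  mu, -(lam + mu),          0.0,        lam],
        [  mu,         0.0,  -(lam + mu),        lam],
        [ 0.0,          mu,           mu,  -2.0 * mu]])
    return G.T  # transposed: columns sum to zero and Q P = 0

def stationary(Q):
    # Drop the first (redundant) balance equation, solve for the
    # ratios Z = P[1:] / P[0], then normalize.
    Q1 = np.delete(Q, 0, axis=0)    # delete the first row
    Q2 = np.delete(Q1, 0, axis=1)   # delete the first column
    Q3 = -Q1[:, 0]                  # minus the deleted column
    Z = np.linalg.solve(Q2, Q3)
    P0 = 1.0 / (1.0 + Z.sum())
    return np.concatenate(([P0], P0 * Z))

P = stationary(Q_two_station(lam=1.0, mu=1.0))     # rho = 1
print(np.round(P, 2))                              # [0.4 0.1 0.3 0.2]
print(round(P[2] + P[3], 2), round(P[1] + P[3], 2))  # P1b = 0.5, P2b = 0.3
```

Running the same sketch with lam = 2.0 gives [.2, .13, .27, .4] (rounded) with P1b ≈ .67 and P2b ≈ .53, the corrected ρ = 2 values noted in Section 4.1.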

Table 2
Disney [6]

          P00 = Pi   P01    P10    P11    P02    P12    P03    P13 = Pl   P1b    P2b
ρ = 1     .324       .095   .230   .135   .054   .081   .027   .054       .500   .556
ρ = 1.2   .250       .097   .203   .146   .067   .108   .040   .089       .546   .548
ρ = 2     .087       .073   .101   .130   .087   .174   .087   .261       .667   .812

Table 3
Ours

           P00 = Pi   P01    P10    P11    P02   P12    P03   P13 = Pl   P1b    P2b
ρ = 1      .113       .028   .113   .056   0     .028   0     .014       .211   .127
ρ = 1.2    .277       .030   .111   .067   0     .040   0     .024       .241   .161
ρ = 2      .045       .030   .090   .090   0     .090   0     .090       .358   .299
ρ = 5.2    .005       .011   .026   .066   0     .173   0     .449       .714   .699
ρ = 11.8   .000       .002   .004   .022   0     .130   0     .766       .922   .920
ρ = 12.4   .000       .002   .003   .020   0     .126   0     .780       .929   .927
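The busy-station columns of Table 3 are the marginal sums defined in Section 4.1. As a quick arithmetic check, here is the ρ = 5.2 row (P02 = P03 = 0 in that row; the values are as we read them from the table):

```python
# rho = 5.2 row of Table 3 (P02 = P03 = 0 there).
P01, P10, P11, P12, P13 = 0.011, 0.026, 0.066, 0.173, 0.449
P1b = P10 + P11 + P12 + P13          # station 1 busy
P2b = P01 + P11 + P12 + P13          # station 2 busy (P02 + P03 = 0)
print(round(P1b, 3), round(P2b, 3))  # 0.714 0.699
```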

system, respectively. To compare our model with Disney's [5,7] we consider three special cases: M1 = M2 = 1; M1 = 1, M2 = 3; and M1 = 1, M2 = 1, 2, . . . , 25, with c1 = c2 = 1. Other data values are λ = 0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5, 9.5 and 10.0, μ1 = 0.5, 1, 2, . . . , 10, λ1 = λ2 = λ, μ1 = μ2 = μ, α1 = α2 = 0, and p = 1. If necessary we may increase ρ1 as much as we like; we have gone up to 40. For the presence of balking and reneging, we choose α2 = 5 and p = .6. For the case of heterogeneous service rates, we choose the two cases μ1 = 2μ2 and μ2 = 2μ1. In the graphs we use ρ1 only.

4.1. Special cases

In the case of M1 = M2 = 1 with no balking or reneging and homogeneous service rates, denoting by P1b (= P10 + P11), P2b (= P01 + P11), Pi (= P00) and Pl (= P11) the probabilities of station 1 being busy, station 2 being busy, system idleness and loss to the system, respectively, we will have Table 1. Note that for ρ = 2, the values of P1b and P2b are .67 and .53, respectively, and not .65 and .50 as in [6]. For the case M1 = 1, M2 = 3, under the same conditions as for M1 = M2 = 1, with P1b (= P10 + P11 + P12 + P13), P2b (= P01 + P11 + P02 + P12 + P03 + P13), Pi (= P00), and Pl (= P13) as the probabilities of station 1 being busy, station 2 being busy, the system being idle, and loss to the system, respectively, we will have Table 2 for Disney and Table 3 for our model. Again note that for ρ = 2, station 2 is busy 81–82% of the time while station 1 is busy 66–67% of the time, and not 80–85% and 65%, respectively, as indicated in [5]. It should also be noted that the matrices A11, A33 and I of Disney [5] are each missing a row and a column. They should be corrected

as follows (with ρ = λ/μ):

A11 = [ −ρ        1          0          0
         0     −(1 + ρ)      1          0
         0        0       −(1 + ρ)      1
         0        0          0       −(1 + ρ) ],

A33 = [ −(1 + ρ)     1          0          0
            ρ     −(2 + ρ)      1          0
            0        ρ       −(2 + ρ)      1
            0        0          ρ         −2  ]

and

I = [ ρ   0   0   0
      0   ρ   0   0
      0   0   ρ   0
      0   0   0   ρ ].

Fig. 2. Comparison of system loss (M1 = 1, M2 = 3, μ1 = μ2, c1 = c2 = 1; probability of system loss vs. ρ1, α1 = α2 = 0, p = 1; Disney's model vs. ours).

Fig. 3. Comparison of system idleness (M1 = 1, M2 = 3, μ1 = μ2, c1 = c2 = 1; vs. ρ1, α1 = α2 = 0, p = 1; Disney's model vs. ours).
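As a check, the corrected blocks can be assembled into the full 8 × 8 matrix and verified numerically: every column must sum to zero, and the stationary vector reproduces Disney's ρ = 1 row of Table 2. A sketch (the block layout, with the station-1 service coupling as the identity, is our reading of the model):

```python
import numpy as np

rho = 1.0  # lambda / mu
A11 = np.array([[-rho,       1.0,        0.0,        0.0],
                [ 0.0,  -(1+rho),        1.0,        0.0],
                [ 0.0,       0.0,   -(1+rho),        1.0],
                [ 0.0,       0.0,        0.0,   -(1+rho)]])
A33 = np.array([[-(1+rho),   1.0,        0.0,        0.0],
                [     rho, -(2+rho),     1.0,        0.0],
                [     0.0,   rho,   -(2+rho),        1.0],
                [     0.0,   0.0,        rho,       -2.0]])
I4 = np.eye(4)
# States (m, n) with m = 0, 1 and n = 0..3; the upper-right block
# couples station-1 completions (rate 1), the lower-left couples
# arrivals that find station 1 busy (rate rho).
Q = np.block([[A11, I4],
              [rho * I4, A33]])
assert np.allclose(Q.sum(axis=0), 0.0)  # each column of Q sums to zero

# Replace one balance equation by the normalization sum(P) = 1.
A = np.vstack([Q[:-1], np.ones(8)])
P = np.linalg.solve(A, np.r_[np.zeros(7), 1.0])
print(np.round(P, 3))
# ~ [.324 .095 .054 .027 .230 .135 .081 .054], Table 2's rho = 1 row
```

The printed order here is (0,0), (0,1), (0,2), (0,3), (1,0), (1,1), (1,2), (1,3), i.e., P00, P01, P02, P03, P10, P11, P12, P13.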

Comparing the two cases of M1 = M2 = 1 and M1 = 1, M2 = 3, the probability of "lost" units is reduced from 20% in the case M1 = M2 = 1 to 8–9% in the case M1 = 1, M2 = 3 with ρ = 1, and not to 4–5% as indicated in [5]. Furthermore, the system idleness for ρ = 1 and ρ = 2 is reduced from 40% to 32% and from 20% to almost 9%, respectively, between the two cases. The reduction of "idleness" seems as significant as that of the "lost", despite what is claimed in [5], since the loss reduces from 20% to 13.5% for ρ = 1 and from 40% to 13% for ρ = 2. For our model, when M2 = 3, as in Table 3, the two stations are almost equally loaded when ρ = 5.2: P1b = 0.714 and P2b = 0.699. They are even more nearly equally loaded when the traffic intensity increases quite a lot: when ρ = 11.8, P1b = 0.922, P2b = 0.920, or when ρ = 12.4, P1b = 0.929, P2b = 0.927, for instance. For Disney, this occurs already at ρ = 1.2: P1b = 0.546, P2b = 0.548. This means that our system becomes equally loaded only when the work load is about ten times the work load in Disney's case. This shows a much more efficient system than Disney's. It is interesting to see from Figs. 2 and 3 that, in a comparison between the two cases of Disney's [7] and ours when the service

Fig. 4. Comparison of station 1 busy (M1 = 1, M2 = 3, μ1 = μ2, c1 = c2 = 1; vs. ρ1, α1 = α2 = 0, p = 1; Disney's model vs. ours).

rates are the same, in the absence of balking and reneging, for various values of ρ1 from 0.05 to 40, for instance, our system consistently has less loss than Disney's. Under the same conditions, our model is also consistently less idle than Disney's. That is, while our servers are busier than Disney's, we lose fewer units than Disney's does. This is another confirmation that our filling-in property is a good property for a priority queue. It is also interesting that, as expected in a priority queue, in our system station 1 is busier than station 2; see Table 3. In Disney's case, however, after the balance value ρ = 1.2 the second station is busier than the first. Moreover, from Figs. 4 and 5 we see that our station 1 is less busy until our balance value ρ = 11.8, while our second station is always less busy than Disney's.

4.2. General case

In applying AY.1 and AY.2, the Matlab computer software does not work well with a loop for the matrix D. The reason is that the sum of the entries of each column is to be zero, leading to a zero determinant, while Matlab gives a non-zero value for the determinant. The problem was discussed with Matlab engineers, but

Fig. 5. Comparison of station 2 busy (M1 = 1, M2 = 3, μ1 = μ2, c1 = c2 = 1; vs. ρ1, α1 = α2 = 0, p = 1; Disney's model vs. ours).

Fig. 6. Probability of idleness (M1 = 1, M2 = 1, 2, . . . , 25, μ1 = μ2, c1 = c2 = 1; vs. ρ1, α1 = α2 = 0, p = 1).

Fig. 7. Probabilities of station 1 busy (M1 = 1, M2 = 1, 2, . . . , 25, μ1 = μ2, c1 = c2 = 1; vs. ρ1, α1 = α2 = 0, p = 1).

Fig. 8. Probability of loss (M1 = 1, M2 = 1, 2, . . . , 25, μ1 = μ2, c1 = c2 = 1; vs. ρ1, α1 = α2 = 0, p = 1).

no reason or solution was offered. However, for smaller finite matrices, the Pm1,m2 can be found using the matrix Q as follows. Delete the first row of Q; call the new matrix Q1. Delete the first column of Q1; call the new matrix Q2. Now take the first column of Q1, multiply it by −1 and call it Q3. Let Z1 = Q2⁻¹ Q3. Add all the elements of Z1, 1 and X and call it S̃. Now continue with step AY.6. We consider the special case mentioned in Section 3.2 with M1 = 1, M2 = 1, . . . , 25. First we take the case μ1 = μ2, α1 = α2 = 0, p = 1, λ1 = λ2 = λ. Comparing our Fig. 6 with Fig. 2 of [6], we see that when ρ = 1 there is only a 40% chance of the system being idle in our case, while for Disney this number is 80%. This chance does not change much as the storage capacity of the second station increases to 25. It can be seen from Fig. 6 that in all our cases, with different ρ values and different capacities of the second buffer, we have lower percentages of system idleness compared with Disney's [7]. At the same time we see that in our case the probabilities of station 1 being busy, Fig. 7, increase as the second buffer's capacity does, while in Disney's case they are all the same. We also see that the probabilities of station 1 being busy in Disney's case remain under 90% when ρ becomes 10; however, in our

case the station is busy with probability one when ρ is almost 3. From Fig. 8 we see that the percentage of loss in our case is about the same as or less than Disney's for all values of ρ. This is an example confirming that our filling-in feature is a good one. Comparing our Figs. 6 and 9, we see that when balking and reneging, which make the model more realistic, are allowed, the idleness of the system is about the same for ρ values of 3.5 and higher. The system is more idle for ρ values between .25 and 3.5, as expected. Figs. 9 and 10 show that the idleness of the system reduces when we double the service rate at the first station in the presence of balking and reneging. Comparing Figs. 8 and 11 shows that the presence of balking and reneging lowers the percentage of loss as the values of ρ1 increase. Figs. 11 and 12 show that the loss to the system reduces when we double the service rate at the first station in the presence of balking and reneging. Figs. 13–15 show losses to the system. From Figs. 14 and 15 we can see how the presence of balking and reneging reduces the loss to the system. In fact, a comparison of Figs. 13 and 14 shows

Fig. 9. Probability of idleness (M1 = 1, M2 = 1, 2, . . . , 25, μ1 = μ2, c1 = c2 = 1; vs. ρ1, α1 = 0, α2 = 5, p = 0.6).

Fig. 10. Probabilities of idleness (M1 = 1, M2 = 1, 2, . . . , 25, μ1 = 2μ2, c1 = c2 = 1; vs. ρ1, α1 = 0, α2 = 5, p = 0.6).

Fig. 11. Probability of loss (M1 = 1, M2 = 1, 2, . . . , 25, μ1 = μ2, c1 = c2 = 1; vs. ρ1, α1 = 0, α2 = 5, p = 0.6).

Fig. 12. Probability of loss (M1 = 1, M2 = 1, 2, . . . , 25, μ1 = 2μ2, c1 = c2 = 1; vs. ρ1, α1 = 0, α2 = 5, p = 0.6).

Fig. 13. Expected loss to the system ±σ (M1 = 1, M2 = 25, μ1 = μ2, c1 = c2 = 1; vs. ρ1, α1 = α2 = 0, p = 1).

Fig. 14. Expected loss to the system ±σ (M1 = 1, M2 = 25, μ1 = μ2, c1 = c2 = 1; vs. ρ1, α1 = 0, α2 = 5, p = 0.6).

that this reduction is up to two and a half times. It should be noted that for values of ρ1 < 0.3 there is more loss due to balking and reneging for all values of M1. The percentage of the second station being busy seems the same in both models, Fig. 16. However, when balking and reneging are introduced, station 2 is less busy, Fig. 17. Fig. 18 shows the probabilities of the stations being busy when the capacity of the second station is 25. Comparing with Disney's, we see that in our case station 1 is always less busy than the second station, while it varies in Disney's case. This is a consistency of our model, and it is perhaps due to the filling-in feature of the model. Fig. 19 shows the busyness in the presence of balking and reneging. We see that not only is the first station always less busy than the second, but the percentage of the stations being busy is also lower than without balking and reneging. From Figs. 20 and 21 we see that when balking and reneging are permitted and the speed of service at the first station is

Fig. 15. Expected loss to the system: loss due to balking and reneging vs. loss due to stations full (M1 = 1, M2 = 1, 2, . . . , 25, μ1 = μ2, c1 = c2 = 1; vs. ρ1, α1 = 0, α2 = 5, p = 0.6).

Fig. 16. Probabilities of station 2 busy (M1 = 1, M2 = 1, 2, . . . , 25, μ1 = μ2, c1 = c2 = 1; vs. ρ1, α1 = α2 = 0, p = 1).

Fig. 18. Probabilities of the stations being busy (M1 = 1, M2 = 25, μ1 = μ2, c1 = c2 = 1; vs. ρ1, α1 = α2 = 0, p = 1).

Fig. 19. Probabilities of the stations being busy (M1 = 1, M2 = 25, μ1 = μ2, c1 = c2 = 1; vs. ρ1, α1 = 0, α2 = 5, p = 0.6).

doubled, the mean attendance increases as the values of ρ1 do. At the same time, from Figs. 22 and 23, we see that the queue length at station 2 decreases.

5. Concluding remarks

We have posed and analyzed a novel model of a multi-station multi-server priority queueing system that represents a conveyor model. What is novel in this paper is that the priority is designed to minimize the idle period of the servers. We have also included balking and reneging factors in this model, which are very practical. In addition, we were able to offer an algorithm

Fig. 17. Probabilities of station 2 busy (M1 = 1, M2 = 1, 2, . . . , 25, μ1 = μ2, c1 = c2 = 1; vs. ρ1, α1 = 0, α2 = 5, p = 0.6).

Fig. 20. Mean attendance (M1 = 1, M2 = 1, 2, . . . , 25, μ1 = μ2, c1 = c2 = 1; vs. ρs, α1 = 0, α2 = 5, p = 0.6).

that, with the aid of modern technology, gave the distribution of the queue length, in addition to the mean loss and the mean number attended, among other parameters, for a system with any number of servers and heterogeneous service rates. For four special cases we offered explicit solutions. We not only extended Disney's models [5,7] drastically, but we also offered an algorithmic solution for the general case. In fact, the algorithm, with a slight modification, can be used to solve Disney's model. We have reached the results Disney arrived at, made some comments on his observations, and corrected some numerical


imbalance occurs. However, in our model we do not see that, perhaps due to allowing the filling-in feature. See Figs. 8 and 9. We have stated more properties of our model, along with figures, in the text. A future study could attempt to find the busy period distribution as well as the transient case of the system.


Acknowledgements

Fig. 21. Mean attendance (M1 = 1, M2 = 1, 2, . . . , 25, μ1 = 2μ2, c1 = c2 = 1; vs. ρs, α1 = 0, α2 = 5, p = 0.6).

Fig. 22. Mean queue length at station 2 (M1 = 1, M2 = 1, 2, . . . , 25, μ1 = μ2, c1 = c2 = 1; vs. ρ1, α1 = 0, α2 = 5, p = 0.6).

The authors would like to thank all three reviewers for their kind words and very helpful and constructive comments and suggestions. We think that, because of those comments, the quality of the paper has improved.

References

Fig. 23. Mean queue length at station 2 (M1 = 1, M2 = 1, 2, . . . , 25, μ1 = 2μ2, c1 = c2 = 1; vs. ρ1, α1 = 0, α2 = 5, p = 0.6).

values mentioned in [5,7]. We have also offered an extensive numerical comparison between our model and Disney's. It can easily be seen from the graphs that the system is idle a portion of the time, since P00 > 0, as in [5], and that there are times an arrival may be lost to the system not only due to the system being full, as in [5], but also due to balking and reneging. It is interesting, however, that in the absence of balking and reneging, in Disney's model [5,7] a serious

[1] M. Benzi, M. Tůma, A parallel solver for large-scale Markov chains, Appl. Numer. Math. 41 (2002) 135–153.
[2] G.S. Brodal, J.L. Träff, C.D. Zaroliagis, A parallel priority data structure with applications, in: Proceedings of the 11th International Parallel Processing Symposium, 1997, pp. 689–693.
[3] G.S. Brodal, J.L. Träff, C.D. Zaroliagis, A parallel priority queue with constant time operations, J. Parallel and Distributed Comput. 49 (1998) 4–21.
[4] D.R. Cox, W.L. Smith, Queues, Methuen, London, UK, 1961.
[5] R.L. Disney, Some multichannel queueing problems with order entry, J. Ind. Engng. 13 (1962) 46–48.
[6] R.L. Disney, Some results of multichannel queueing problems with ordered entry—an application to conveyor theory, J. Ind. Engng. XIV (2) (1963) 105–108.
[7] R.L. Disney, D. König, Queueing networks: a survey of their random processes, SIAM Rev. 27 (3) (1985) 335–400.
[8] E.A. Elsayed, Multichannel queueing systems with ordered entry and finite source, Comput. Oper. Res. 10 (3) (1983) 213–222.
[9] A.R. Elsayed, C.L. Proctor, Ordered entry and random choice conveyors with multiple Poisson inputs, Internat. J. Prod. Res. 15 (5) (1977) 439–451.
[10] A.R. Elsayed, C.L. Proctor, H.A. Elayat, Analysis of closed-loop conveyor systems with multiple Poisson inputs and outputs, Internat. J. Prod. Res. 14 (1) (1976) 99–109.
[11] M.K. Girish, J.-Q. Hu, Higher order approximations for the single queue with splitting, merge and feedback, European J. Oper. Res. 124 (2000) 447–467.
[12] C. Gregory, C.D. Litton, A conveyor model with exponential service time, Internat. J. Prod. Res. 13 (1) (1975) 1–7.
[13] S.K. Gupta, Analysis of a two-channel queueing problem with ordered entry, J. Ind. Engng. 17 (1966) 54–55.
[14] N.K. Jaiswal, Priority Queues, Academic Press, New York, 1968.
[15] B.W. Lin, E.A. Elsayed, A general solution for multichannel queueing systems with ordered entry, Comput. Oper. Res. 5 (1978) 219–225.
[16] M. Matsui, Job-shop model: a M/(G,G)/1(N) production with ordered selection, J. Prod. Res. 20 (2) (1982) 201–210.
[17] A. Montazer-Haghighi, A many server queueing system with feedback, in: Proceedings of the 8th National Mathematics Conference, Aryamehr University of Technology, Tehran, 1977.
[18] A. Montazer Haghighi, A many-server queueing system with feedback, Bull. Iranian Math. Soc. 9 (1) (1981) 65–74 (serial No. 16).
[19] A. Montazer-Haghighi, J. Medhi, S.G. Mohanty, On a multi-server Markovian queueing system with balking and reneging, Comput. Oper. Res. 13 (4) (1986) 421–425.
[20] B.U. Nguyen, K.Y.K. Ng, Modeling Canadian search and rescue operations, Mor. J. Abstracts 5 (1) (2000) 1.
[21] I. Parberry, Load sharing with parallel priority queues, J. Comput. System Sci. 50 (1) (1995) 64–73.

[22] D.T. Phillips, R.W. Skeith, Ordered entry queueing networks with multiple services and multiple queues, AIIE Trans. 1 (4) (1969) 333–342.
[23] A.A.B. Pritsker, Application of multichannel queueing results to the analysis of conveyor systems, J. Ind. Engng. XVII (1) (1966) 14–21.
[24] M. Robbins, A NIMITZ-class carrier, Popular Science, December 2003.
[25] P. Sanders, Randomized priority queues for fast parallel access, J. Parallel and Distributed Comput. 49 (1) (1998) 86–97.
[26] H.S. Stone, An efficient parallel algorithm for the solution of a tridiagonal linear system of equations, J. Assoc. Comput. Mach. 20 (1) (1973) 28–38.

Aliakbar Montazer Haghighi received his Ph.D. in Probability and Statistics from Case Western Reserve University, Cleveland, Ohio, in 1976 under the supervision of L. Takács. He received his B.A. and M.A. in mathematics from San Francisco State University, SF, California. He is currently Professor and Head of the Department of Mathematics at Prairie View A&M University in Texas. Previously, he was employed by the Institute of Statistics and Informatics (Iran), the National University of Iran, and Benedict College (Columbia, SC) as faculty (assistant professor through professor), department chair, vice president for academic affairs and interim president. His research interests are probability, statistics, stochastic processes, and queueing theory. In addition to his papers published internationally, he has published textbooks in mathematics in Farsi.

Dr. Dimitar P. Mishev (Michev) received his M.S. and Ph.D. in Mathematics from Sofia University, Sofia, Bulgaria, in 1977 and 1989, respectively. He is currently Assistant Professor of Mathematics at Prairie View A&M University in Texas. Previously, he was employed by Southern Illinois University Carbondale, IL, as Lecturer, 1999–2001; Claremont University Center, Claremont, CA, as Web writer, 1998–1999; Technical University, Sofia, Bulgaria, as Assoc. Prof. of Mathematics, 1985–1993; Head of Department Differential Equations, 1993–1998; Institutes for Foreign Students, Sofia, Bulgaria, as Asst. Prof. 1978–1985; and Central Institute of Computer Engineering, Sofia, Bulgaria, as Researcher-mathematician, 1977–1978. He has published two monographs jointly with D. D. Bainov on differential equations, and many research papers in mathematics.
