Optimal Power and Retransmission Control Policies for Random Access Systems




IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 12, NO. 6, DECEMBER 2004

Guillermo del Angel, Member, IEEE, and Terrence L. Fine, Fellow, IEEE

Abstract—We consider in this study dynamic control policies for Slotted Aloha random access systems. New performance bounds are derived when random access is combined with power control for system optimization, and we establish the existence of optimal control approaches for such systems. We analyze throughput and delay when the number of backlogged users is known, where we can explicitly obtain optimal policies and analyze their corresponding performance using Markov Decision Process (MDP) theory with average cost criterion. For the realistic unknown-backlog case, we establish the existence of optimal backlog-minimizing policies for the same range of arrival rates as the ideal known-backlog case by using the theory of MDPs with Borel state space and unbounded costs. We also propose suboptimal control policies with performance close to the optimal without sacrificing stability. These policies perform substantially better than existing "Certainty Equivalence" controllers.

Index Terms—Access protocols, Markov processes, multiaccess communications, stochastic optimal control.

I. INTRODUCTION AND OUTLINE

We consider dynamic control schemes to optimize access to a multiuser communications system. Specifically, we consider a situation in which a large population of bursty transmitters attempts to send information packets to a central receiver. Users are unable to coordinate their transmissions, and thus collisions are unavoidable. This problem has been extensively studied in the literature (see, for example, [2]). We will focus on a controlled Slotted Aloha random access protocol with capture and/or multipacket reception in which packets are successfully received according to their signal-to-interference-and-noise ratio (SINR). We propose a joint retransmission and randomized power control strategy which yields better throughput and delay performance than conventional methods. We also propose a new estimation/control approach which provides better delay performance than conventional methods. Slotted Aloha is a suboptimal protocol, even when dynamically controlled in order to avoid instability. In a classical collision channel, where a transmission is successful if and only if only one user

Manuscript received December 9, 2001; revised September 30, 2003; approved by IEEE/ACM TRANSACTIONS ON NETWORKING Editor I. Stavrakakis. This work was supported in part by the National Science Foundation under Grant NCR-9725251 and the Air Force under Grant F30602-00-2-0558. The work of G. del Angel was also supported in part by CONACYT (Mexico) Doctoral Grant 116047. This work was presented in part at the 38th and 39th Annual Allerton Conference on Communication, Control and Computing, Monticello, IL, October 2000 and October 2001. G. del Angel was with the School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853 USA. He is now with Aware, Inc., Bedford, MA 01730 USA (e-mail: [email protected]). T. L. Fine is with the School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853 USA (e-mail: [email protected]). Digital Object Identifier 10.1109/TNET.2004.838605

transmits at a time, better tree-like collision resolution protocols are known, which provide better delay and throughput characteristics. However, these protocols are more complex to implement and analyze, and incorporating stronger coupling between physical layer and protocol operation becomes a harder problem. Nevertheless, while we do not focus on these protocols here, we will briefly discuss extensions of the control approaches developed here in the context of Slotted Aloha protocols to these collision resolution algorithms. We pose this control problem as a Markov Decision Process (MDP). The system state is the number of blocked (or backlogged) users at the beginning of each time slot; the control actions at the beginning of such time slot are the probability that blocked users employ for attempting packet retransmission, as well as the probability distribution for a signal power level with which each user transmits. The use of MDPs in the context of Slotted Aloha control is not new [13]. Our formulation is different and more general, not only because we include transmitted signal power as a system control variable, but because we can then extend this control formulation to a general case when the number of blocked users is not known and has to be estimated from feedback information. The traditional control approach in the unknown-state case has been to use the “Certainty Equivalence” principle, separating state estimation and control. An early example of this approach was introduced in [22], who proposed an optimal recursive algorithm in order to estimate the number of backlogged users. Suboptimal estimation approaches which yielded stronger stability results were proposed in [3], [8], and [21] in the context of classical Slotted Aloha and in [7] in the context of multipacket reception systems. In those works, simple recursive estimators for the number of blocked users are developed using channel feedback information, which are then used in turn by the controller as if they were the true value. We show that this approach, while provably stable, is suboptimal and can yield worse performance than the approach proposed herein in terms of mean backlog and delay. Our approach relies on keeping a posteriori probability (APP) distributions for the number of blocked users given all of the observed data, as opposed to simple point estimators. It is shown, by the use of the theory of partially observable MDPs, that this approach loses no optimality. The optimal control policy treats this probability distribution on the backlog as a new system “state,” which we will refer to as a “belief vector.” With this new state variable, a new fully observable problem can be formulated which can then in theory be solved by conventional tools. However, getting optimal policies in this case is computationally intractable and is in general infeasible. We show nonetheless that simple greedy policies based on the optimal APP approach perform quite well



in simulations and achieve the same maximum throughput as the ideal known state case with only a small loss in terms of a mean number of blocked users and mean delay. This study is outlined as follows. In Section II, we give a mathematical description of the system model, dynamics, and operation. We study then in Section III an idealized case when the system state (i.e., number of blocked users) is known. This approach will serve to provide intuition to the problem, and the performance obtained in such case will lower bound the figures attainable when the backlog is unknown. We then generalize results for the more realistic case when the system state is not known in Section IV, proposing optimal and suboptimal control solutions. Some numerical examples are studied in Section V. Conclusions and future research work are given in Section VI. Finally, we provide a survey of MDP theory and its application to our work in the Appendix.


The SINR threshold constant $\beta$ is system-dependent and, in general, would depend on physical layer characteristics such as modulation and coding. This SINR reception criterion is well known in the context of Slotted Aloha systems (see, for example, [14] and [19]), and it can also be derived from a simple ideal Gaussian random coding system with single-user decoding [6], in which case the constant depends only on the coding rate employed by the transmitters. Given the system constants $\beta$ (the SINR threshold) and $\Gamma$ (the maximum SNR), there is a maximum number of users which may have their SINR target achieved. Let $D_t$ denote the number of successful transmissions during slot $t$. It was shown in [6] that $D_t$ needs to satisfy the following relationship:

II. MATHEMATICAL MODEL AND SYSTEM DYNAMICS

We consider a slotted random access system, in which all time slots have the same duration and where we assume perfect synchronization between all transmitters and a central receiver/controller. There is an infinite total user population in the system. Each user can be in one of two states, namely idle or backlogged. Idle users generate bursty traffic according to a Poisson process with parameter $\lambda$. The new packet generation is assumed to be of known constant rate and independent among time slots. Let $A_t$ denote the number of new transmitters at slot $t$ and let $N_t$ denote the number of backlogged users at the beginning of slot $t$. We can then write $\Pr\{A_t = k\} = e^{-\lambda}\lambda^k/k!$.

If an idle user's transmission collides with other user packets and the transmission is not decoded successfully at the central receiver, then such user becomes backlogged. Backlogged users at the beginning of slot $t$ attempt to retransmit their packet with probability $p_t$. We assume that retransmission attempts are independent among backlogged users and that $p_t$ is the same among them. Let $R_t$ denote the number of retransmissions at slot $t$. Then, clearly, we can write $\Pr\{R_t = k \mid N_t = n\} = \binom{n}{k} p_t^k (1-p_t)^{n-k}$, with $0 \le k \le n$.

Each user is capable of perfect power control, that is, capable of perfectly scaling the transmitted signal power in order to achieve a certain power level at the receiver, in order to compensate for near/far effects and channel fading. Such power is constant during the complete time slot, and we assume that there is background noise of constant power spectral density. Let $\sigma^2$ be the noise power received at each time slot. Transmitters are subject to a restriction of having their transmitted power equal to at most $P_{\max}$. Let $\Gamma$ denote the maximum possible signal-to-noise ratio (SNR).

We assume that a packet is successfully received if (and only if) it satisfies a minimum SINR. Let $P_i$ denote the received signal power at a certain time slot for transmitter $i$ and let $M_t$ be the number of transmitters at slot $t$. We consider then that a packet from user $i$ is captured if and only if user $i$'s received signal power satisfies

$$\frac{P_i}{\sigma^2 + \sum_{j \ne i} P_j} \ge \beta.$$
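As a concrete illustration of this reception criterion, the following sketch (in Python, with illustrative names; the noise, threshold, and received powers stand for the $\sigma^2$, $\beta$, and $P_i$ above) checks which of a set of simultaneously received packets meet their SINR target:

```python
from typing import List

def captured(powers: List[float], noise: float, beta: float) -> List[bool]:
    """For each transmitter, check whether its packet meets the SINR target:
    P_i / (noise + sum of the other received powers) >= beta."""
    total = sum(powers)
    return [p / (noise + (total - p)) >= beta for p in powers]

# Two examples with threshold beta = 1 and unit noise power:
print(captured([10.0, 1.0], noise=1.0, beta=1.0))   # strong user captured -> [True, False]
print(captured([10.0, 10.0], noise=1.0, beta=1.0))  # equal powers collide -> [False, False]
```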

The SINR reception model is a generalization of the classical collision channel, in which $D_t = 1$ if $M_t = 1$ and $D_t = 0$ otherwise, and it is equivalent to it if the system parameters $\beta$ and $\Gamma$ are such that a lone transmission always succeeds while two or more simultaneous transmissions never do. Under this reception model, the system design target of having all users be received with equal power may no longer be optimal. Intuitively, when some user signals are received much stronger than others, these strong users may have a better chance of achieving their target SINR than if they had been received with equal powers. It may thus be desirable to have a large variation of powers within the transmitting group. If powers are assigned in a fixed way or if no power control is exercised, there may be users who get assigned a higher power level or who may be physically closer to the receiver, which will give them an unduly large chance of success in their transmissions, raising a fairness issue.

A solution to this problem, explored first in [14] and [15] in somewhat different contexts, is the use of randomized power control strategies. With this approach, if a user attempts a packet transmission at time slot $t$, it transmits it with a random power level determined according to a certain probability distribution, independently of other transmitters. Signal powers at the central receiver will then be independent, identically distributed (i.i.d.) random variables with a common distribution function $F$. This setup ensures fairness among users, and we will see that both system throughput and mean delay are greatly improved, at the cost of increased system complexity.

In order to exercise optimal control in the system, the reception mechanism is assumed to be known at the receiver. That is, the receiver is assumed to have exact knowledge of the parameters $\beta$, $\Gamma$, and $\sigma^2$, and it can determine and communicate to all users the power level probability distribution $F$. If $F$ has a finite support and is then characterized by a probability vector $q = (q_1, \ldots, q_K)$, we can calculate the probabilities of having $k$ successful receptions out of $n$ simultaneous transmissions as


with

We consider two feedback situations.
1) $N_t$ is known at the controller at the beginning of slot $t$, and thus the state is fully observed. Control in this situation is studied in Section III.
2) $N_t$ is unknown, so other feedback messages are necessary to stabilize the system. We will show that a sufficient condition for the existence of optimal policies in this case is that the feedback is at least as informative as idle/nonidle feedback. That is, every observation determines, at a minimum, whether or not the slot was idle (no transmissions). We call any such observation space an Informative Space. As two particular examples of this we consider two feedback models: the mentioned idle/nonidle feedback, or the following feedback model: the number of successes $D_t$ is known and, additionally, if $D_t = 0$ (no successes), it is also known whether the slot was idle ($M_t = 0$) or a collision occurred ($M_t > 0$). This feedback model corresponds to the ternary feedback (idle/success/collision) in a classical collision channel, but here it is generalized for channels with capture or multipacket reception.
From (3), if $p_t$ and $q_t$ are given, and if $N_t$ is known and the action depends only on $N_t$, then $N_{t+1}$ is independent of all previous values of $N$. Hence, $\{N_t\}$ forms a Markov process with transition kernel

(1)

where is the set of all power level realizations with transmitters yielding successes as follows:

(2)
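In practice, the capture probabilities discussed around (1) and (2), that is, the probability that $k$ out of $n$ simultaneous transmissions are captured when each transmitter draws its received power level independently from a finite support with probabilities $q$, can also be estimated by simple Monte Carlo. The sketch below is illustrative only (the paper evaluates these probabilities analytically), and the support, weights, and function names are assumptions:

```python
import random

def capture_count(powers, noise, beta):
    # SINR capture rule from Section II.
    total = sum(powers)
    return sum(1 for p in powers if p / (noise + (total - p)) >= beta)

def capture_probs(n, levels, q, noise, beta, trials=100_000, seed=0):
    """Estimate Pr{k captures | n transmitters} for k = 0..n when each of the n
    transmitters independently picks a received power level from `levels` with
    probabilities `q` (the randomized power control described above)."""
    rng = random.Random(seed)
    counts = [0] * (n + 1)
    for _ in range(trials):
        powers = rng.choices(levels, weights=q, k=n)
        counts[capture_count(powers, noise, beta)] += 1
    return [c / trials for c in counts]

# Example: two widely spread power levels, three simultaneous transmitters.
print(capture_probs(n=3, levels=[1.0, 20.0], q=[0.5, 0.5], noise=1.0, beta=0.5))
```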

It was shown in [5] that, under the reception model above, there exist finite power level supports which do not lose optimality, in the sense that for any probability distribution $F$ there exists another probability measure, defined by a probability vector of finite dimension $K$ on a finite support, which performs at least as well, with $K$ being independent of $F$. From the above, the backlog evolves as
$$N_{t+1} = N_t + A_t - D_t. \qquad (3)$$
In order to optimally control such a system, at the beginning of each time slot users choose a control action $a_t$ consisting of the power level probability distribution $q_t$ and the retransmission probability $p_t$. Let $\mathcal{Q}$ denote the set of probability distributions on the finite power level support, and let the Action Set consist of all pairs $(p, q)$ of a retransmission probability and a power level distribution. At the end of each time slot, users get feedback from the central receiver. We assume that, at least, all successful transmissions are acknowledged. There may be, however, instances in which there is still more information available for the decision makers, in which case an Observation $O_t$ is broadcast (noiselessly and instantaneously) to the users. Let $O_t \in \mathcal{O}$, where $\mathcal{O}$ denotes an Observation Space. Given all information available at the decision makers at the beginning of time slot $t$, namely the history $h_t$, we need to determine a control mapping $\mu_t$ so that $a_t = \mu_t(h_t)$ and some performance measure is maximized. Call $\mu = \{\mu_t\}$ an Admissible Policy, and let $\Pi$ denote the space of all such admissible policies. A special class of policies are stationary deterministic policies, which consist of all mappings that select the action as a function of only the most immediate observation. In general, such policies will not be optimal except for a few special cases, as studied below.
We will say that a policy is stable if the backlog has a time-invariant limiting distribution, in which case $\{N_t\}$ will form an ergodic random process. For such a case, $E[D_t] = \lambda$ in steady state, from (3). Furthermore, by use of Little's formula, we will have that the mean backlog $E[N]$ and the mean time $W$ spent by a user in the system (i.e., delay) between its arrival and its departure (i.e., the time its packet is successfully transmitted) are related by $W = E[N]/\lambda$ for any stable policy.
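The backlog recursion (3) and the Little's-formula relation above can be illustrated with a small slot-level simulation. The sketch below uses an assumed fixed retransmission rule $p_t = \min(1, 1/N_t)$ and a single received power level (so that, with $\beta \ge 1$, at most one packet is captured per slot); it is a toy under these assumptions, not the controlled system studied in the paper:

```python
import math
import random

def poisson(rng, lam):
    # Knuth's method; adequate for small lam.
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def simulate(lam, slots=200_000, noise=1.0, beta=1.0, power=10.0, seed=1):
    rng = random.Random(seed)
    backlog, backlog_sum, departures = 0, 0, 0
    for _ in range(slots):
        arrivals = poisson(rng, lam)                             # A_t
        p_r = min(1.0, 1.0 / max(backlog, 1))                    # assumed retransmission rule
        retx = sum(rng.random() < p_r for _ in range(backlog))   # R_t
        m = arrivals + retx                                      # transmitters this slot
        powers = [power] * m
        total = sum(powers)
        succ = sum(1 for p in powers if p / (noise + total - p) >= beta)  # D_t
        backlog = backlog + arrivals - succ                      # recursion (3)
        departures += succ
        backlog_sum += backlog
    mean_backlog = backlog_sum / slots
    print("throughput          :", departures / slots)           # ~ lam if stable
    print("mean backlog        :", mean_backlog)
    print("mean delay (Little) :", mean_backlog / lam)           # W = E[N]/lambda

simulate(lam=0.25)
```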

(4) It is shown in Section III that if is known there is no loss as a of optimality in considering policies that select action function of only while neglecting all past feedback history. Notice that , which only depends on the arrival process distribution. Also,

since, given the number of transmitters, the number of successes $D_t$ is independent of the past. Furthermore, this probability can be readily computed given the channel reception model and the dynamic model for the number of transmitters. Define the quantity $\lambda^*$ through

(5)

It can then be shown by standard use of drift criteria for Markov chains (see [5]–[7]) that, if $N_t$ is known, there exist stable stationary deterministic policies for all $\lambda < \lambda^*$, which take the form $a_t = (p(N_t), q)$, where $q$ is a fixed constant (i.e., the power level probability distribution is time-invariant). Furthermore, for all $\lambda > \lambda^*$ we have that $N_t \to \infty$ with probability one for any policy.


Thus, $\lambda^*$ is the maximum system throughput in the sense that it is the maximum stable arrival rate.
The existence of stable policies for all $\lambda < \lambda^*$ can also be established for the case of unknown $N_t$. If $q$ is time-invariant, the channel reception matrix is also time-invariant, and hence our model reduces to a Slotted Aloha system with multipacket reception, and the results of [7] can be applied to our problem. For such systems, it was shown that there exist stable retransmission control policies for all $\lambda < \lambda^*$ based only on idle/nonidle feedback information. However, the above results consider only one system performance measure, which is stability. Thus, while such policies are useful in order to obtain maximum stable arrival rates, they say nothing about transient system behavior or mean delay. There exists an uncountably infinite number of policies which achieve the same stability limit $\lambda^*$, each of which achieves a certain associated delay. While provably stable, these policies may not perform well in more practical settings since they focus only on the asymptotic long-run system behavior.
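The threshold $\lambda^*$ in (5) is governed by the expected number of captures per slot that a stationary policy can sustain as the backlog grows. The following sketch estimates, for a backlog of $n$ users each retransmitting with probability $p$ and drawing powers from a finite support, the expected number of captures per slot, and grid-searches $p$; it illustrates the quantity involved rather than reproducing the exact definition in (5), and all parameter values are assumptions:

```python
import random

def expected_captures(n, p, levels, q, noise, beta, trials=10_000, seed=2):
    """Estimate E[number of captures] when n backlogged users each transmit with
    probability p and draw their received power from (levels, q). New arrivals
    are ignored here for simplicity."""
    rng = random.Random(seed)
    total_succ = 0
    for _ in range(trials):
        m = sum(rng.random() < p for _ in range(n))        # binomial retransmissions
        powers = rng.choices(levels, weights=q, k=m)
        s = sum(powers)
        total_succ += sum(1 for x in powers if x / (noise + s - x) >= beta)
    return total_succ / trials

def best_rate(n, levels, q, noise, beta):
    grid = [i / 50 for i in range(1, 51)]
    return max((expected_captures(n, p, levels, q, noise, beta), p) for p in grid)

for n in (2, 5, 10, 20):
    rate, p = best_rate(n, levels=[1.0, 20.0], q=[0.5, 0.5], noise=1.0, beta=1.0)
    print(f"backlog n={n:2d}: best p = {p:.2f}, expected captures per slot = {rate:.3f}")
```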

policies also exist for this problem variation, which attain a minimum discounted cost. Taking the values relative to a distinguished state, under suitable assumptions the discounted solutions recover the average-cost solution. Whereas there is no straightforward interpretation for the discount factor in the context we are studying, the interpretation of $h$ as a differential value function for the discounted problem has some conceptual and computational advantages (see [20] for a discussion thereof). Obtaining such an optimal triple $(\mu^*, \rho^*, h)$ can be done through the use of the Policy Iteration Algorithm or the Value Iteration Algorithm [16], [20] and is a fairly straightforward iterative numerical computation. While in theory feasible, the results obtained as optimal costs for this case (i.e., the constants $\rho^*$ for each arrival rate and system configuration) have limited applicability, as we know that $N_t$ will not be known in practice. However, computation of these costs is important since they provide an absolute lower bound for the mean backlog and, in consequence, the mean delay that can be achieved by any practical system. We will explore this issue when presenting numerical results in Section V.
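As a sketch of how such an optimal triple can be computed numerically when $N_t$ is known, the following relative value iteration runs on a truncated backlog state space. For brevity it uses the classical collision channel (a success occurs iff exactly one user transmits); the capture case only changes the success distribution. The truncation levels, grids, and arrival rate are assumptions for illustration:

```python
import math

N_MAX  = 60                                  # truncated backlog state space {0,...,N_MAX}
A_MAX  = 12                                  # truncation of the Poisson arrival distribution
LAM    = 0.3                                 # arrival rate (below the collision-channel limit 1/e)
P_GRID = [i / 40 for i in range(41)]         # candidate retransmission probabilities

def pois(k):
    return math.exp(-LAM) * LAM ** k / math.factorial(k)

def binom(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def transition(n, p):
    """{next backlog: probability} for the collision channel: one departure iff
    exactly one user (new or backlogged) transmits in the slot."""
    dist = {}
    for a in range(A_MAX + 1):
        pa = pois(a)
        for r in range(n + 1):
            w = pa * binom(r, n, p)
            succ = 1 if a + r == 1 else 0
            nxt = min(N_MAX, n + a - succ)
            dist[nxt] = dist.get(nxt, 0.0) + w
    return dist

# Precompute transitions, then run relative value iteration for the average-cost
# problem with one-stage cost c(n) = n (the backlog).
P = {(n, p): transition(n, p) for n in range(N_MAX + 1) for p in P_GRID}
h = [0.0] * (N_MAX + 1)
policy = [0.0] * (N_MAX + 1)
for _ in range(500):
    new = [0.0] * (N_MAX + 1)
    for n in range(N_MAX + 1):
        best = min((n + sum(pr * h[m] for m, pr in P[(n, p)].items()), p) for p in P_GRID)
        new[n], policy[n] = best
    rho = new[0]                              # estimate of the optimal average cost rho*
    h = [v - rho for v in new]                # differential values h(n), with h(0) = 0
print("minimum mean backlog rho* ~", round(rho, 3))
print("p(n) for n = 1..8:", [round(policy[n], 3) for n in range(1, 9)])
```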

III. CONTROL WITH KNOWN SYSTEM STATE

IV. CONTROL WITH AN UNKNOWN SYSTEM STATE

Optimal control when $N_t$ is known is particularly simple both conceptually and in practice. We can formulate the problem as an average-cost MDP in the following form. Let the system state be $N_t$ with state space $\{0, 1, 2, \ldots\}$, the transition kernel given as in (4), and the action space as defined in Section II. The average cost incurred by the system when we start at state $n$ using policy $\mu$ is defined as
$$J(n, \mu) = \limsup_{T \to \infty} \frac{1}{T}\, E_n^{\mu}\!\left[\sum_{t=0}^{T-1} N_t\right],$$
where $E_n^{\mu}$ is the expectation of a random variable conditioned on initial state $n$ and using policy $\mu$. The control goal is to find a policy $\mu^*$ such that $J(n, \mu^*) = \inf_{\mu} J(n, \mu)$ for all $n$.
Proposition 1: Let the system parameters $\beta$ and $\Gamma$ be given. Then, for all arrival rates $\lambda < \lambda^*$, with $\lambda^*$ given by (5), there exists an optimal delay-minimizing policy $\mu^*$, a scalar $\rho^*$, and a measurable function $h$ for which the following optimality equation holds for all $n$:

We now show that there exist optimal stable policies for the case where $N_t$ is unknown, with the same stability limit as the case in which $N_t$ is known. We again rely on existing MDP theory, but with an unknown or hidden state. As we later show, we can transform the hidden state formulation into a fully observable one at the cost of expanding the system state considerably. Also, in many situations there are no known practical algorithms for solving these problems, except for some special cases. We describe this optimal approach in this section, leaving the technicalities and the necessary theoretical background to Appendix A.
As mentioned in Section I, if $N_t$ is not known, most existing dynamic control approaches rely on building an estimator $\hat{N}_t$ based on the feedback message sequence and then substituting it as if it were the true $N_t$ in order to determine the control action $a_t$. However, while this approach can be shown to achieve the same stability limit as when $N_t$ is known [7], it is suboptimal from the point of view of minimizing delay or mean backlog.
There is no loss in optimality in considering only separated policies, which take on the following structure. Define $\pi_t(n) = \Pr\{N_t = n \mid h_t\}$, and define the infinite vector $\pi_t = (\pi_t(0), \pi_t(1), \ldots)$. Vector $\pi_t$ is called a "belief vector" and summarizes our existing knowledge of $N_t$ given all observed data. A separated policy first determines $\pi_t$ and then maps $\pi_t$ to an optimal action. See Section A in the Appendix for a full general technical treatment and for the justification of using separated policies.
When using separated policies, $\{\pi_t\}$ itself forms a Markov process with probability vectors as its state space. In order to convert the partially observable MDP into a fully observable one, we define the one-step cost function as $\bar{c}(\pi_t) = \sum_{n} n\, \pi_t(n)$, the expected backlog given the observed history. The control objective can then be shown to be equivalent to minimizing the long-run average of $\bar{c}(\pi_t)$ over all policies, for any initial state.

(6)
Furthermore, the optimal average cost $\rho^*$ does not depend on the initial state.
Proof: This result is a straightforward application of the MDP theory presented in the Appendix, which is taken from [9]–[11] and [20]. We show in Appendix I-B how this general theory is applied to prove Proposition 1.
Note too that the policy depends only on the observation $N_t$. Now, for any given action, the distribution of $N_{t+1}$ given $N_t$ does not depend on the past, and so it follows that, given $N_t$, the past success history does not add any information to the controller. Hence, $a_t$ can be chosen solely as a function of $N_t$, i.e., the optimal policy has the form $a_t = \mu(N_t)$.
Function $h$ can be interpreted as follows. If the control policy aims at minimizing a discounted cost function for a nonnegative discount factor smaller than one, it can be shown that optimal


Furthermore, by using conditional independence assumptions and Bayes' rule, the belief vector $\pi_{t+1}$ can be recursively computed from $\pi_t$ and any action-observation pair $(a_t, O_t)$ as

(7)
Let $T$ be a mapping so that the $n$th element of $T(\pi, a, O)$ is given by (7). Then, $\pi_{t+1} = T(\pi_t, a_t, O_t)$ by definition. The denominator of (7) gives us the probability of observing $O$ when belief vector $\pi$ and action $a$ are used. For any feedback case, we can calculate (7) by first noting that

(8) For the ternary feedback case, determining is straightforward as follows. 1) If , then

For a separated policy $\delta$, let $\bar{c}(\pi)$ denote the expected one-step cost (the expected backlog) under belief vector $\pi$. Then, policy $\delta$ is acceptable if it satisfies the following conditions:
(C1) There exists an arbitrary $\epsilon > 0$ such that, if $\bar{c}(\pi) \le \epsilon$, then $\delta$ sets the retransmission probability to one.
(C2) If $\pi$ rules out an empty backlog, then $\delta$ sets the retransmission probability strictly below one.

Intuitively speaking, these assumptions mean the following. (C1) ensures that we transmit with probability one if the current cost is low enough. (C2) just states that, if the controller is certain that there exist some backlogged users, it will not assert a retransmission probability of one, which would then lead to a collision with certainty. Denote by $\Pi_A$ the set of all acceptable policies. Note that (C1) is consistent with (C2), since if $\pi$ rules out an empty backlog then $\bar{c}(\pi) \ge 1$, so there will be no conflict in the definition as long as $\epsilon < 1$, and $\epsilon$ can otherwise be arbitrarily chosen. Essentially, the purpose of (C1) is to ensure that the empty-backlog state can eventually be reached with positive probability from any starting state, and (C2) ensures that there exists positive probability of some successful transmission at any state. The technical details and the reasoning behind these assumptions will be deferred to the Appendix. It is then a straightforward application of MDP theory to show the following.
Proposition 2: Let $N_t$ be unknown and let the observation space be an Informative Space according to its definition above. Then, for all arrival rates $\lambda < \lambda^*$, with $\lambda^*$ as given in (5), there exists a stable acceptable policy whose steady-state cost is minimal over all acceptable policies for any initial distribution, and which is thus optimal. Furthermore, there exists a function $h$ solving the following Optimality Equation:

which follows from (8). , we get that 2) If (9)

3) If

, then

The observation probabilities for idle/nonidle feedback or for other feedback models can be obtained in a similar way. We will make an additional set of assumptions on the structure of admissible policies, which are innocuous from a practical is an irreducible perspective, but which ensure that process Markov process, simplifying our exposition.
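For concreteness, the Bayes update (7) under ternary feedback can be computed as in the following sketch, which uses a truncated belief vector and, for brevity, the classical collision channel; all names and truncation levels are assumptions:

```python
import math

N_MAX, A_MAX = 60, 12

def pois(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def binom(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def belief_update(belief, p_retx, lam, obs):
    """One Bayes step in the spirit of (7), for ternary feedback on the classical
    collision channel; obs is 'idle', 'success', or 'collision'. Returns the new
    (truncated, renormalized) belief and the probability of the observation,
    which plays the role of the denominator of (7)."""
    new = [0.0] * (N_MAX + 1)
    for n, bn in enumerate(belief):
        if bn == 0.0:
            continue
        for a in range(A_MAX + 1):
            pa = pois(a, lam)
            for r in range(n + 1):
                w = bn * pa * binom(r, n, p_retx)
                m = a + r                                  # transmitters this slot
                o = "idle" if m == 0 else ("success" if m == 1 else "collision")
                if o == obs:
                    succ = 1 if m == 1 else 0
                    new[min(N_MAX, n + a - succ)] += w
    z = sum(new)                                           # Pr{obs | belief, action}
    return ([x / z for x in new] if z > 0 else new), z

# Example: start certain the backlog is empty, then observe a collision and an idle slot.
belief = [1.0] + [0.0] * N_MAX
for obs in ("collision", "idle"):
    belief, pz = belief_update(belief, p_retx=0.5, lam=0.3, obs=obs)
    print(obs, "-> posterior mean backlog =", round(sum(n * b for n, b in enumerate(belief)), 3))
```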

Proof: As in the proof of Proposition 1, we will again apply MDP theory results to establish the existence of an optimal policy under suitable assumptions. However, in contrast to Proposition 1, verifying these assumptions is more involved. The existence of an initial stable policy for all $\lambda < \lambda^*$ follows, as mentioned in Section II, from the suboptimal control approach proposed in [7]. We defer the verification of these assumptions and the technical details to Appendix B.
Unfortunately, obtaining solutions to the Optimality Equation of Proposition 2 is in general infeasible. Straightforward MDP algorithms such as Policy Iteration or Value Iteration are not applicable, as they involve iterating over the whole state space, which is now uncountable. For the case where both the state and action spaces are finite, there are exact algorithms for solving this problem, surveyed in [12] and [18] for the case of finite horizon problems. However, their application to our problem is not practical, as the complexity of


those algorithms increases exponentially in the state and action space sizes, both of which are infinite in our application. We introduce instead a simple greedy policy, which will be shown in Section V to perform quite adequately for our purposes. A greedy alternative to the optimal policy is to minimize the estimated backlog for the next slot, in the following way.
Initialization: Let $\pi_0$ place all of its mass on the empty-backlog state. This is a reasonable initialization, as it is intuitively appealing to assume that when the system is started there are no backlogged users; however, any other initial state would be equivalent. Also, choose the initial action arbitrarily.
Then, for all $t$, perform the following two steps at the beginning of each time slot:
1) Belief Update: Once the observation is obtained, calculate the belief vector $\pi_t$ optimally with (7), from the previous belief, the previous action, and the observation.
2) Action Selection: Choose $a_t$ as the action minimizing the expected backlog at the next slot given $\pi_t$.
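A minimal sketch of the action-selection step is given below for the classical collision channel, using a truncated belief vector and a grid search over the retransmission probability; the belief-update step can be implemented as in the sketch given earlier in this section. The one-step objective (the expected next-slot backlog) follows the greedy rule above, but the grid, truncation, and names are assumptions:

```python
import math

def expected_next_backlog(belief, p, lam):
    """E[N_{t+1} | belief, retransmission probability p] for the classical collision
    channel: one departure occurs iff exactly one user (new or backlogged) transmits."""
    e = math.exp(-lam)
    total = 0.0
    for n, bn in enumerate(belief):
        if n == 0:
            p_succ = e * lam                               # success iff exactly one arrival
        else:
            p_succ = e * (n * p * (1 - p) ** (n - 1) + lam * (1 - p) ** n)
        total += bn * (n + lam - p_succ)
    return total

def greedy_action(belief, lam, grid=None):
    """Step 2) of the greedy policy: the p minimizing the expected next-slot backlog."""
    grid = grid or [i / 100 for i in range(1, 101)]
    return min(grid, key=lambda p: expected_next_backlog(belief, p, lam))

# The more mass the belief puts on large backlogs, the more conservative p becomes:
print(greedy_action([0.0, 0.2, 0.5, 0.3], lam=0.3))        # small, uncertain backlog
print(greedy_action([0.0] * 20 + [1.0], lam=0.3))          # backlog known to be 20
```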

Only the second step is suboptimal, as an optimal policy would instead solve the Optimality Equation of Proposition 2. We will study the numerical performance of this algorithm through Monte Carlo simulations, comparing it with other simple suboptimal approaches and with the optimal known-state control described in Section V-C.

V. NUMERICAL RESULTS

We now study the system performance achieved in terms of mean backlog and mean delay for several numerical settings. While if $N_t$ is known we can obtain steady-state costs in a numerical fashion, when $N_t$ is not known there is no closed-form solution to the optimality equations. For such a case, the mean backlog and delay results obtained with the suboptimal greedy approach yield an upper bound on the optimal policy performance. A lower bound is provided by the known-$N_t$ method. We study the system behavior through the numerical simulation of two examples, which yield a representative view of the main system features.

A. Slotted Aloha in a Classical Collision Channel

For this channel model, we can compare our control algorithms with previously developed algorithms for this system. Control in this case is particularly easy, as the only control parameter is the backlogged user retransmission probability $p_t$. Fig. 1 summarizes the obtained system performance for different control algorithms. Both mean backlog and mean delay (obtained from the backlog through Little's formula $W = E[N]/\lambda$) were computed using Monte Carlo simulation, averaging over 50 runs, each lasting 20 000 slots. Only mean backlog is shown in the figure, as the mean delay has a very similar behavior. Two feedback models were applied with the greedy control algorithm developed above: the classical ternary feedback case, and the "full feedback" case in which $M_t$, the total number of transmitters at each time slot, is known.


Fig. 1. Mean backlog for several control algorithms for Slotted Aloha in the classical collision channel.
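For reference, the following is a minimal sketch of the kind of recursive certainty-equivalence baseline discussed in the next paragraph: a point estimate of the backlog driven by ternary feedback and used as if it were the true backlog. The update shown is the classical pseudo-Bayesian rule with $p = \min(1, 1/\hat{N}_t)$; the constants and the exact update differ from the tuned family simulated in the paper, so treat this purely as an illustration:

```python
import math

class PseudoBayesController:
    """Certainty-equivalence baseline: keep a scalar backlog estimate n_hat, use
    p = min(1, 1/n_hat), and update n_hat from ternary feedback. The update below
    is the classical pseudo-Bayesian rule; the constants are illustrative."""
    def __init__(self, lam):
        self.lam = lam
        self.n_hat = max(lam, 1.0)              # assumed initial estimate

    def retransmission_prob(self):
        return min(1.0, 1.0 / self.n_hat)

    def update(self, feedback):
        if feedback in ("idle", "success"):
            self.n_hat = max(self.lam, self.n_hat + self.lam - 1.0)
        else:                                   # collision
            self.n_hat = self.n_hat + self.lam + 1.0 / (math.e - 2.0)

ctrl = PseudoBayesController(lam=0.3)
for fb in ("collision", "collision", "idle", "success"):
    ctrl.update(fb)
    print(f"{fb:9s} -> n_hat = {ctrl.n_hat:5.2f}, p = {ctrl.retransmission_prob():.3f}")
```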

The simple greedy recursive estimation/control algorithms developed in [3], [8], and [21] for the ternary feedback case are included for comparison. This family of algorithms keeps an estimator $\hat{N}_t$ of the backlog which is updated recursively according to ternary feedback information, using the indicator functions of the events empty, success, and collision, and the retransmission probability is then computed from $\hat{N}_t$. As mentioned in [4], the particular choice of parameters of this family of algorithms does not affect system performance greatly as long as they are kept within a range that preserves stability. For the particular simulations implemented here, we chose the parameter values 0.72, 0, and 1, as they resulted in the best performance according to the results in [4]. We also include the results obtained when applying the described estimation/control algorithm using binary feedback in the form of idle/nonidle messages.
For low arrival rates, all algorithms perform practically at the limit imposed by the idealized system where $N_t$ is known. Thus, if $\lambda$ is small, there is no need for more sophisticated control approaches, and simple suboptimal estimators such as Clare-Rivest's provide essentially optimal performance. However, there is a larger gap in the performance of the different algorithms when $\lambda$ is close to the maximum $\lambda^*$. Here, there is an improvement in performance when the greedy approach described above is used instead of the Clare-Rivest approach. For both binary and ternary feedback cases, the performance of the greedy algorithm is not far from the optimal case when $N_t$ is known.
We can generalize these results to other physical channel models, as we now describe.

B. Slotted Aloha With Capture

We will now present simulation results obtained for a more general system with capture, with the parameter values given in the caption of Fig. 2. It can then be shown [5] that an optimal power level support can be computed for this setting,


Fig. 2. Mean backlog for several control algorithms for Slotted Aloha with capture, = 5, and = 1.

and it is found that, with this support, at most one packet can be captured at once. In the context of Slotted Aloha systems with multipacket reception, Ghez et al. showed in [7] that there exist stable control algorithms using idle/nonidle feedback. While these results, as seen before, are theoretically important because they provide the existence of an initial stable policy from which the existence of an optimal policy can be deduced, they do not address the issue of backlog or delay performance. We present here such performance results compared to our optimal state estimation/greedy control algorithm.
Fig. 2 summarizes these performance results. Once again, we can see that the performance gap is practically negligible for low arrival rates and thus that the greedy estimation/control algorithm is essentially optimal if $\lambda$ is small. For rates closer to $\lambda^*$, we get a wider performance gap, but the performance is still not far from the ideal lower bound. Note that the performance is notably better, even for the case when only binary feedback is available, compared to the algorithm of [7]. Such performance can still be further improved by providing ternary feedback to the system.

VI. CONCLUSIONS AND FUTURE WORK

We have developed methods for the dynamic control of Slotted Aloha systems which couple power with retransmission control. By applying MDP theory results, we have established the existence of optimal policies for all arrival rates $\lambda < \lambda^*$ when $N_t$ is known, serving as a basis for establishing existence results for the case when $N_t$ is not known.
The proposed suboptimal approach when $N_t$ is not known yields better results in terms of performance, but it has some drawbacks. These same drawbacks point out natural directions for future research.
• We must figure out how to design simpler algorithms more amenable to efficient, real-time implementation.
• We have established the existence of optimal policies for two particular feedback models (binary and ternary). A

more interesting (and challenging) result would be the following. Given a particular policy and a particular feedback model in the unknown-state case, for which arrival rates is this policy stable? We strongly conjecture, for example, that our greedy policy introduced above is stable , but a formal proof thereof is for all arrival rates lacking. • We have assumed that a complete and accurate probabilistic model is available for decision making. However, this assumption may be unduly optimistic. The channel reception dynamics and the traffic model may behave in a much more complex way than our simple models. We have not addressed the robustness of our approach with respect to modeling mismatches. It remains to be seen then how we can implement robust dynamic control algorithms that consider our model uncertainty in an effective and efficient way. • As mentioned in Section I, Slotted Aloha is not the only (nor the best) random access algorithm available. Most work on collision resolution protocols has focused on the classical collision channel, but phenomena such as capture and multipacket reception have generally not been incorporated in these protocols. The reason is one of robustness and of complexity. The latter is relevant, since many of these protocols are sensitive to channel feedback errors which may cause them to deadlock [2]. On the other hand, mathematical analysis of these protocols is not simple, and even for the classical collision channel with Poisson arrivals the problem of finding an optimal collision resolution protocol is still unknown. A collision resolution protocol which could incorporate joint power control into the random access mechanism in a feasible way would be an interesting future research direction. APPENDIX MARKOV DECISION PROCESSES We provide a survey of the theory of controlled Markov Processes, sometimes also called Markov Decision Processes. The material presented in this section is fairly standard in the literature and we refer the reader to [1] and [10] for more complete treatments on the subject. A discrete-time stationary MDP is a stochastic dynamical system specified by the five-tuple ( , , , , ), where is a Borel space, called state space, elements of which • are called System States. • is a Borel space, called the action or control space. • is a measurable map from the state space to the Borel -algebra of . represents the set of . Call admissible actions when system is in state the set of admissible state/action pairs. • is a stochastic kernel on given , called the transition kernel. • is the measurable one-stage cost function. The interpretation of the above is as follows. Given that the system is at state at time , the controller chooses an action . The system then evolves probabilistically to a new according to a probability distribution and state


incurs an expected cost . At time , the process is repeated. , The admissible history spaces are defined as . An admissible policy is a sequence of stochastic kernels on given satisfying the constraint , , . The set of all admissible policies is denoted by . A stationary deterministic policy is a measurable map , in which case . The set of all stationary deterministic policies . It is easily seen that under any policy is denoted by , then forms a Markov process with transition kernel . We require some concepts of recurrence and ergodicity which we draw from [10], [17]. Define the hitting time . A Markov chain , , is called -irreducible if there exists a -finite measure on such that implies that, for any , or, equivalently, that, for any , . If , we will say that set is accessible from state . Similarly, a Markov chain is called -recurrent (or Harris-recurrent) if implies that, for any , . Note that -irreducibility is a weaker concept than classical irreducibility for finite or countable-state Markov processes and is equivalent to it if we take as the counting measure. We say that a policy is stable if the corresponding Markov process is Harris-recurrent and if a unique invariant measure exists. Our main performance criteria is the expected long-run average cost defined, for a policy and a given initial state , as

If

has a probability distribution

, we let

The optimal control problem is then defined as follows. Given an MDP ( , , , , ) and a cost function , determine an optimal policy such that for all . In order to establish the existence of such policy, assume the following. , is a nonempty compact (A1) For each state subset of . for all . (A2) is a continuous function of (A3) for each and each bounded measurable function on . Furthermore, for each , the cost function is continuous. and an initial state (A4) There exists a policy such that and has an invariant probability measure . (A5) The one-stage cost function is strictly unbounded on . That is, there exist nondecreasing sequences of compact sets and such that .


Proposition 3: [10, Theorem 5.7.9] Assume (A1)–(A5), and and a measurable function suppose there exists a constant in such that, for all , we have

(10) Then, the following is true. for all . 1) is a stationary deterministic policy such that 2) If maximizes the r.h.s. of (10), then is optimal for all . and , 3) For an optimal policy . 4) There exists a stable policy such that , and (10) holds . with is such that is -recurrent, then 5) If such policy is also an optimal policy and for any initial distribution , . A. MDPs With an Unknown State is availSo far, we have assumed that the entire history able for decision making at stage . In many practical applications, however, it is not feasible to observe the system state directly. Rather, we have an Observation Space which is observable and from which we need to formulate our decision making. This problem is known as a Partially Observable MDP (POMDP) or a Partially Observable Controlled Markov Process. POMDPs have been extensively studied, especially in the context of Operations Research and Artificial Intelligence [12] applications. However, the algorithmic difficulties are greater than in the known-state case and, thus, while the theory is fairly comprehensive, solution procedures are generally hard to get in practice. is the observation process and that its Assume that distribution is determined by the current state and action, with . We also have kernel , , an initial state distribution and a state transition kernel on given and , such that . The observed history at the beginning of stage now is the sequence of action/observation pairs along with the initial state with the distribution, i.e., observed history space defined as , , . In this context, an admissible policy is then a sequence of maps , since state information is not available and thus cannot be used for control. The key to analyzing a POMDP is converting the hidden-state formulation into a completely observable, general state model. This new formulation needs to have two characteristics. First, the new state needs to summarize at each time all relevant information for decision making. Second, the corresponding optimal policies based on the new state need to be equivalent to the ones in the original problem, in the sense that a policy for the original problem is optimal if and only if (iff) it is optimal for the new problem.
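A generic sketch of this belief-state recursion for a finite POMDP is given below; here the observation is modeled as generated by the state reached after the transition and the action taken, which is one common convention, and the array layouts and names are illustrative:

```python
from typing import List, Tuple

def next_belief(b: List[float], a: int, o: int,
                P: List[List[List[float]]],
                Obs: List[List[List[float]]]) -> Tuple[List[float], float]:
    """Bayes step for a finite POMDP: b'(s') is proportional to
    Obs[s'][a][o] * sum_s P[s][a][s'] * b(s). Returns the new belief and the
    probability of the observation (the normalizing constant)."""
    unnorm = [Obs[s2][a][o] * sum(P[s][a][s2] * b[s] for s in range(len(b)))
              for s2 in range(len(b))]
    z = sum(unnorm)
    if z == 0.0:
        raise ValueError("observation has zero probability under this belief/action")
    return [x / z for x in unnorm], z

# Tiny two-state, one-action, two-observation example.
P   = [[[0.9, 0.1]], [[0.2, 0.8]]]     # P[s][a][s'] : transition kernel
Obs = [[[0.8, 0.2]], [[0.3, 0.7]]]     # Obs[s'][a][o]: observation kernel
b, pz = next_belief([0.5, 0.5], a=0, o=1, P=P, Obs=Obs)
print("posterior:", [round(x, 3) for x in b], " Pr{o=1} =", round(pz, 3))
```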


Letting , it can be shown that can be taken to be our new “state” in the sense that a policy is iff its optimal for the original problem optimal for system formulation. Furthermore, for all our cases of interest, can and the newly observed be recursively computed based on . pair More specifically, given a POMDP model ( , , , , , , and ), we can construct a completely observable model ( , , , , and ) by taking the state space to be the set of for the original state space . probability distributions , . The new We interpret then cost function is defined as . Before defining the new transition kernel for the problem, we will focus only on the special case where both and are finite or countable and the action space is compact. See [1] and [10] for more general treatments. In this case, the space is the set of probability vectors. is sometimes referred to as a “Belief Vector” [12]. By using Bayes’ given currule and the assumption that is independent of rent state , we can recursively calculate as follows:

(11) We can then compute , with being the probability of observing when action is used and when the belief vector is . can be then calculated The transition kernel for process as

control policy for a POMDP has two steps. First, at the begin, vector is computed from , , and ning of time step using (11). Then, the optimal action is obtained by solving the corresponding optimality equation. While the first step is, in general, straightforward given a full probabilistic system model, the second one is in general difficult, as we need to find a solution to the optimality equation involving an uncountable number of arguments. Practical solution procedures generally rely on heuristics or on approximations. Application of Results We now discuss the application of the above to our specific models. In order to apply Proposition 3 to the proof of Proposition 1, assumptions (A1)–(A3) are trivial, recalling that in that case with and , with control and system transition kernel as in (4). Furactions , there exist policies for which the rethermore, for all is ergodic and irreducible with fisulting Markov chain . Thus, (A4)–(A5) hold and thus a nite steady-state cost which achieves minimum cost policy exists. In order to conclude that such policy is optimal, we is -irreonly need to check that the resulting process , ducible for some -finite measure . Defining can be reached from any starting we need to show that state state . From (3), this immediately holds for all . is the average policy cost, some For states above , since has to be eventually reached, as otherwise the cost state would be greater than . This implies that induces a process which is Harris recurrent, and, thus, from Proposition 3, is optimal. In contrast to the known- approach, checking assumptions for proving Proposition 2 is harder. The completely observable equivalent to this POMDP is described by the one-step cost and the tranfunction sition kernel as calculated from (12) with observations depending on the feedback models described in Section IV. Now, the acceptability conditions (C1) and (C2) introduced in Section IV are easily incorporated into our model by defining the admissible retransmission probabilities as follows. Fix an , define the admissible action set as arbitrary

(12) otherwise. Hence, the partially observable control problem can be converted into a fully observed MDP, with the caveat that now the state space is uncountable. A separated stationary policy for the POMDP is a mapping (equivalent to a stationary deterministic policy for the converted fully observable model). denote the space of all separated stationary policies. Let An important result is that, for any admissible policy in a POMDP model, there exists a separated stationary policy (i.e. a stationary determinstic policy in the converted fully observable model) such that for any initial state distribu, , and thus both probtion lems are equivalent. There is then no loss of optimality in considering separated stationary policies for the POMDP problem. That is, the optimal

(A1)–(A3) are still trivial. For a bounded function , the conditional expectation operator is

on

which is continuous in . This follows from the fact that the probability is continuous in both and . (A4) holds since we already mentioned that there exist stable based on [7] for which for policies for any informative observation space . Finally, assumption (A5) also holds, as clearly is also unby the same argument as for proving Proposibounded in tion 1. Thus, we can conclude that there exists a stable policy


with an invariant measure which achieves minimum cost —almost all initial states . To conclude that is opfor timal, we need again to determine -irreducibility so that it can achieve a minimum cost from every initial state. Denote state , and let . For iff there are no backlogged our problem formulation, users conditioned on the feedback history. Then, -irreducibility requires that, for any starting state , the system eventually returns to this empty-backlog state with positive probability when using policy . Let so that forms a countable partition of . Note too that if then with equality iff is degenerate. We will first show that set is accessible from any state in for any , meaning that set is accessible from any starting point . Then, we will show that state is accessible from any state if assumption (C1) holds. First, let so that . From (3), if for some , then . This means that there exists at least one observation . Recalling that such that

and seeing that , we see then that so that for some , , with positive probability if there is any positive probability of transmission successes. Let . We showed then that, if and for some , then . Assume is not accessible from state in . This implies from the definition that and hence that . This happens if either for all or if and for all . The first case is a degenerate case, as there is only one possible observation and so . In that case, as for all , we immediately get that and thus . By iterating this argument, we would need to ensure then that and thus there would would be be no possible transmission successes and policy clearly unstable, thus yielding a contradiction. If the second case happens, for some , but for . This case, however, is precluded by Assumption (C2) and thus cannot happen. So, has to be accessible for all , and thus is an accessible state. We will now show that state is accessible from any starting point in . If there is an idle observation, then is updated as follows. Define a transformation as

so that, if , then . Furthermore, the probability of such observation is given by


with equality iff or . If , then there is nothing to show, as clearly , so since . If , first note that , with equality iff . Hence, is a fixed point of transformation for any , will make any initial and thus repeated applications of converge to . Specifically, it can be shown that is a -contraction mapping, i.e., that for any and for any , with equality iff or for some . In fact, as , there will be successive idle observations with a probability of at least . For sufficiently large, vector will satisfy so that, from (C1), and with positive probability. Hence, is accessible from any starting point in and in consequence from any point in . Note that, if , the particular value of is unimportant, and thus can be set to without loss of generality. Assumption (C2) only requires that there exists a neighborhood of for which . This is then sufficient to show -irreducibility for policy and hence to conclude that policy is optimal within for as claimed. any initial state Note that the assumption that is informative (i.e. it is at least idle/nonidle feedback) comes into play in two situations: to show the existence of an initial stable policy as in [7] and to ensure -irreducibility of process . ACKNOWLEDGMENT The authors wish to thank the paper reviewers for their thoughtful and constructive comments. REFERENCES [1] A. Arapostathis, V. S. Borkar, E. Fernández-Gaucherand, M. Ghosh, and S. I. Marcus, “Discrete-time controlled Markov processes with average cost criterion: A survey,” SIAM J. Control Optimization, vol. 31, no. 2, pp. 282–344, Mar. 1993. [2] D. Bertsekas and R. Gallager, Data Networks. Englewood Cliffs, NJ: Prentice-Hall, 1992. [3] L. P. Clare, “Control procedures for Slotted Aloha systems that achieve stability,” in Proc. ACM SIGCOMM Symp. Communication Architecture and Protocols, Aug. 1986, pp. 302–309. [4] G. A. Cunningham, “Delay versus throughput comparisons for stabilized Slotted Aloha,” IEEE Trans. Commun., vol. 38, pp. 1932–1934, Nov. 1990. [5] G. del Angel and T. L. Fine, “Randomized power control strategies for optimization of multiple access radio systems,” in Proc. 38th Annu. Allerton Conf. Communication, Control and Computing, 2000, pp. 327–336. , “Information capacity and power control for Slotted Aloha [6] random access systems,” IEEE Trans. Inform. Theory, submitted for publication. [7] S. Ghez, S. Verdú, and S. Schwartz, “Optimal decentralized control in the random access multipacket channel,” IEEE Trans. Automat. Contr., vol. 34, pp. 1153–1163, Nov. 1989. [8] B. Hajek and T. van Loon, “Decentralized dynamic control of a multiaccess broadcast channel,” IEEE Trans. Automat. Contr., vol. AC-27, pp. 559–569, June 1982. [9] O. Hernández-Lerma, Adaptive Markov Control Processes. Berlin, Germany: Springer-Verlag, 1989, vol. 79, Applied Mathematical Sciences. [10] O. Hernández-Lerma and J. B. Lasserre, Discrete-Time Markov Control Processes: Basic Optimality Criteria. Berlin, Germany: SpringerVerlag, 1996, Stochastic Modeling and Applied Probability. , Further Topics on Discrete-Time Markov Control Processes. [11] Berlin, Germany: Springer-Verlag, 1999, Stochastic Modeling and Applied Probability.


[12] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra, “Planning and acting in partially observable stochastic domains,” Artif. Intell., vol. 101, no. 1–2, pp. 99–134, 1998. [13] L. Kleinrock and S. Lam, “Packet switching in a multiaccess broadcast channel: dynamic control procedures,” IEEE Trans. Commun., vol. COM-23, pp. 891–904, Sept. 1975. [14] R. O. LaMaire, A. Krishna, and M. Zorzi, “On the randomization of transmitter power levels to increase throughput in multiple access radio systems,” Wireless Networks, vol. 4, no. 3, pp. 263–277, 1998. [15] C. C. Lee, “Random signal levels for channel access in packet broadcast channels,” IEEE J. Select. Areas Commun., vol. SAC-5, pp. 1026–1034, July 1987. [16] S. P. Meyn, “The policy iteration algorithm for average reward Markov decision processes with general state space,” IEEE Trans. Automat. Contr., vol. 42, pp. 1663–1680, Dec. 1997. [17] S. P. Meyn and R. L. Tweedie, Markov Chains and Stochastic Stability. Berlin, Germany: Springer-Verlag, 1993. [18] G. Monahan, “A survey of partially observable Markov decision processes: theory, models and algorithms,” Management Sci., vol. 28, no. 1, pp. 1–16, Jan. 1982. [19] C. Namislo, “Analysis of mobile radio Slotted Aloha networks,” IEEE J. Select. Areas Commun., vol. SAC-2, pp. 583–588, July 1984. [20] M. L. Puterman, Markov Decision Processes: Discrete Stochastic Dynamic Programming. New York: Wiley, 1994. [21] R. L. Rivest, “Network control by Bayesian broadcast,” IEEE Trans. Inform. Theory, vol. IT-33, pp. 323–328, May 1987. [22] A. Segall, “Recursive estimation from discrete-time point processes,” IEEE Trans. Inform. Theory, vol. IT-22, pp. 422–431, July 1976.

Guillermo del Angel (S’95–M’01) received the B.S.E.E. degree from the Instituto Tecnológico y de Estudios Superiores de Monterrey—Campus Estado de México in 1997, the M.S. degree from Boston University, Boston, MA, in 1998, and the Ph.D. degree from Cornell University, Ithaca, NY, in 2001. He is currently working at Aware, Inc., Bedford, MA, as a Signal Processing Engineer. His research interests are in the areas of information theory, statistical signal processing, network protocols, and multiuser communications.

Terrence L. Fine (S’62–M’63–SM’81–F’82) received the B.E.E. degree from the City College of New York, New York, in 1958, and the S.M. and Ph.D. degrees from Harvard University, Cambridge, MA, in 1959 and 1963, respectively. He held a postdoctoral position and was a Lecturer at Harvard from 1963 to 1964 and a Miller Institute Junior Fellow with the University of California, Berkeley, from 1964 to 1966. In 1966, he joined the faculty of Electrical Engineering at Cornell University, where he is now Professor of Electrical and Computer Engineering and of Statistical Science and Director of the Center for Applied Mathematics. He has been a Visiting Professor at Stanford University, Stanford, CA. he is the author of the monographs, Theories of Probability: An Examination of Foundations (New York: Academic, 1973) and Feedforward Neural Network Methodology (Berlin, Germany: Springer-Verlag, 1999). He is completing an undergraduate textbook, Probability and Probabilistic Reasoning for Electrical and Computer Engineers (Upper Saddle River, NJ: Prentice-Hall, 2005). His research interests are primarily in the foundations of probability but have included statistical questions in neural networks and machine learning and estimation and detection in communications. Prof. Fine was a founding member of the governing board of the Neural Information Processing Systems (NIPS) Foundation and of the Society for Imprecise Probability and Its Applications (SIPTA). He was an Associate Editor for book reviews and for detection and estimation of the IEEE TRANSACTIONS ON INFORMATION THEORY and President of the BOG of the Information Theory Group, overseeing its transition to a Society. He was the recipient of the IEEE Third Millenium Medal.
