Opportunistic Spectrum Access for Energy-Constrained Cognitive Radios




Anh Tuan Hoang, Ying-Chang Liang, David Tung Chong Wong, Yonghong Zeng, and Rui Zhang
Institute for Infocomm Research, Singapore

Abstract: This paper considers a scenario in which a secondary user makes opportunistic use of a channel allocated to a primary network. The primary network operates in a time-slotted manner and switches between idle and active states according to a stationary Markovian process. At the beginning of each time slot, the secondary user can choose to stay idle or to carry out spectrum sensing to detect whether the primary network is idle or active. If the primary network is detected as idle, the secondary user can carry out data transmission. Spectrum sensing consumes time and energy and introduces false alarms and mis-detections. Given the delay cost associated with staying idle, the energy costs associated with spectrum sensing and data transmission, and the throughput gain associated with successful transmissions, the objective is to decide, for each time slot, whether the secondary user should stay idle or carry out sensing, and if so, for how long, so as to maximize the expected net reward. We formulate this problem as a partially observable Markov decision process (POMDP) and prove several structural properties of the optimal spectrum sensing/access policies. Based on these properties, heuristic control policies with low complexity and good performance are proposed.

I. INTRODUCTION

The traditional approach of fixed radio spectrum allocation leads to under-utilization. Recent studies by the US Federal Communications Commission (FCC) report vast temporal and spatial variations in the usage of allocated spectrum [1]. This motivates the concept of opportunistic spectrum access (OSA), which allows secondary cognitive radio (CR) systems to opportunistically exploit under-utilized spectrum. One of the core components of an OSA system is the spectrum-sensing module, which examines a spectrum band of interest to determine whether the primary network that owns the band is currently active or idle. Spectrum sensing is therefore a binary hypothesis test, or a series of binary hypothesis tests. Spectrum sensing can affect the OSA system in several ways, which are highlighted as follows.

• Energy consumption: carrying out spectrum sampling and the subsequent signal processing consumes energy. In general, the longer the sensing time, the more energy is consumed.

• Time consumption: to avoid interfering with its own sensing process, the OSA system may need to suspend its data communications while carrying out spectrum sensing. This means spectrum sensing consumes communication time.

• False alarms: a false alarm occurs when the spectrum-sensing module mistakes an idle primary network for an active one. This causes a spectrum access opportunity to be overlooked.

• Mis-detections: a mis-detection occurs when the spectrum-sensing module mistakes an active primary network for an idle one. This can lead to collisions between secondary and primary transmissions.

In this paper, we take the above effects into account when designing an energy-constrained OSA system.

A. General Control Problem

We consider a scenario in which a secondary user makes opportunistic use of a channel allocated to a primary network. The primary network operates in a time-slotted manner and switches between idle and active states according to a stationary Markovian process. Within each time slot, the state of the primary network remains unchanged. At the beginning of each time slot, the instantaneous state of the primary network is not directly observable, and the secondary user needs to decide whether to stay idle or to carry out spectrum sensing. If the secondary user chooses to sense, it needs to decide the duration of the sensing period and to configure the related parameters so as to meet a minimum detection probability. Subsequently, if spectrum sensing indicates that the primary network is idle, the secondary user proceeds to transmit data during the rest of the time slot.

There are important trade-offs in making these control decisions. By staying idle in a particular time slot, the secondary user conserves energy, but at the same time suffers increased delay and reduced throughput. By carrying out spectrum sensing, the secondary user consumes time and energy to acquire knowledge of the state of the primary network, and stands a chance to transmit data if the primary network is idle. Furthermore, varying the duration of the sensing period trades off energy consumption, sensing accuracy, and transmission time. When the required probability of detection is fixed, increasing the sensing time reduces the probability of false alarm and therefore increases the probability of transmission for the secondary user; however, it also reduces the time available for transmission.

For the secondary user, given the delay cost associated with staying idle in a time slot, the energy costs associated with spectrum sensing and data transmission, and the throughput gain associated with a successful transmission, we consider the problem of finding an optimal policy that decides between the idle and sensing modes, together with the spectrum-sensing time, so as to maximize the expected net reward. Here, the reward is defined as a function of the delay and energy costs and the throughput gain.

B. Contributions

The main contributions of this paper are as follows.

• We formulate a control problem that captures important throughput and delay/energy trade-offs for OSA systems. The problem involves the key decisions facing the secondary user: whether to stay idle or to carry out spectrum sensing, and what the optimal duration of each sensing period is.

• We analyze the problem using the framework of partially observable Markov decision processes (POMDPs) and prove important structural properties of the optimal control policies that maximize the expected net reward.

• Based on the theoretical characterization of the optimal policies, we propose heuristic control policies that can be obtained at lower complexity while achieving good performance. One of these policies is based on grid approximation of POMDP solutions.

• Finally, we obtain numerical results to support our theoretical analysis.

C. Related Work

There has been a series of recent works on optimizing spectrum-sensing activities in OSA systems [2]–[6]. These works can roughly be classified into two groups: those that focus on the control within each time slot, when the status of the primary network is more or less static [2]–[4], and those that focus on the time dynamics of the control problem [5], [6]. In [2]–[4], the sensing duration within each time slot can be varied, i.e., the spectrum-sensing module can operate at different receiver operating characteristic (ROC) curves. The objective is then to trade off sensing accuracy against the time available for communications. Assuming the mis-detection probability is fixed, the longer the sensing duration, the lower the false-alarm probability; however, the longer the sensing duration, the less time is available for communications. As the focus is on control within each time slot, the dynamics of the primary network is not taken into account in [2]–[4].

In [5], Zhao et al. consider a spectrum access scenario similar to ours, where primary networks switch between idle and active states in a Markovian manner and a secondary user carries out spectrum sensing prior to opportunistic data transmission. An important result of [5] is the separation principle, which decouples the design of the sensing policy from that of the spectrum sensor and the access policy. Unlike in our model, in [5] the energy cost of sensing is not of concern and the secondary user carries out sensing in every time slot. Furthermore, in [5], determining the sensing duration is not part of the control problem; rather, a fixed ROC curve that defines the relationship between the false-alarm and mis-detection probabilities is assumed to be given. In our model, varying the spectrum-sensing duration results in different ROC curves. The work in [6] does take energy and power consumption into account when scheduling spectrum sensing. However, in [6], spectrum sensing is assumed perfect, i.e., there are no false alarms or mis-detections.

To some extent, this paper bridges the gap between the two classes of problems considered in [2]–[4] and in [5], [6]. In particular, our control problem recognizes the energy costs of sensing and transmission, allows the spectrum-sensing duration to vary within each time slot, and incorporates the dynamics of the primary network over time. All these factors are taken into account for the final objective of maximizing the expected long-term net reward received by the secondary user.

It is also interesting to note that some important results in this paper bear significant similarities to those in [7] and [8], even though the control scenarios are totally different. In [7], [8], the problem of scheduling packet transmissions over time-varying channels with memory is considered, with the objective of balancing throughput and energy consumption. The state of the channel is not directly observed, and the authors also formulate and analyze the problem using a POMDP framework.

D. Paper Organization

The rest of this paper is organized as follows. In Section II, we describe the system model and the control problem. Important properties of the optimal control policies are proved and discussed in Section III. In Section IV, heuristic policies are proposed. Numerical results and discussion are presented in Section V. Finally, we conclude the paper and highlight future directions in Section VI.


II. SYSTEM MODEL

A. Primary Network

We consider a channel allocated to a primary network operating in a time-slotted manner. In each time slot of duration T, the primary network is either active (on) or idle (off). From one time slot to the next, the primary network switches between the active and idle states according to a stationary Markovian process specified by the following state-transition matrix:

$$M = \begin{pmatrix} 1-b & b \\ g & 1-g \end{pmatrix}, \qquad 0 < b, g < 1, \tag{1}$$

where b is the probability that the primary network becomes active in the next time slot given that it is idle in the current slot, and g is the probability that the primary network becomes idle in the next time slot given that it is active in the current slot. The stationary probabilities of the primary network being idle and active are π_i = g/(b + g) and π_a = b/(b + g), respectively.

From the secondary user's point of view, when the primary network is idle, the user has a 'good' channel to exploit; an active primary network results in a 'bad' channel for the secondary user. This leads to the interesting observation that M can be regarded as the state-transition matrix of a virtual Gilbert-Elliott (GE) channel seen by the secondary user. In [9], for a GE channel with state-transition matrix M, the channel memory is defined as µ = 1 − b − g. When µ > 0, the channel is said to have positive memory, i.e., the probability of remaining in a particular state is greater than or equal to the stationary probability of that state. In this paper, we assume that µ = 1 − b − g > 0.

B. Opportunistic Spectrum Access

A secondary user opportunistically accesses the channel when the primary network is idle by first synchronizing with the slot structure of the primary network and then carrying out the following mechanism (illustrated in Fig. 1).

1) Spectrum Sensing: If the secondary user wishes to transmit in a particular slot, it first spends a duration τ at the beginning of the slot carrying out spectrum sensing. This involves sampling the channel and carrying out the binary hypothesis test

$$H_0: \text{the primary network is idle, versus } H_1: \text{the primary network is active.} \tag{2}$$

Let θ denote the outcome of this binary hypothesis test, where θ = 0 means H_0 is detected and θ = 1 otherwise. Associated with the spectrum-sensing activity are the probability of false alarm, i.e., mistaking H_0 for H_1, and the probability of mis-detection, i.e., mistaking H_1 for H_0. In this paper, we assume that the secondary user must carry out spectrum sensing so as to meet a fixed probability of detection P_d. The probability of false alarm is then a function of the sensing time τ and is denoted by P_fa(τ). The sensing duration τ must lie within the interval [τ_min, τ_max], where 0 < τ_min ≤ τ_max < T. It is assumed that, for the given range of τ, 0 < P_fa(τ) < P_d < 1. This is a reasonable assumption, as practical cognitive radio systems [10] normally require P_d > 90% and P_fa < 10%. Furthermore, we assume that P_fa(τ) is continuous, differentiable, and decreasing in τ ∈ [τ_min, τ_max].

2) Data Transmission: If spectrum sensing results in θ = 0, the secondary user proceeds to transmit data in the rest of the time slot. Otherwise, if θ = 1, the secondary user must stay quiet and wait until the next time slot to try again.

3) Acknowledgment: Even when the sensing outcome is θ = 0, this can be due to a mis-detection, and mis-detections result in collisions between primary and secondary transmissions. In this paper, we assume that if a collision happens due to a mis-detection, a negative acknowledgment (NAK) is returned. On the other hand, if the secondary transmission is carried out while the primary network is actually idle, a positive acknowledgment (ACK) is returned.

C. POMDP Formulation

At the beginning of each time slot, the secondary user decides whether or not to carry out spectrum sensing, and if so, for how long. As the instantaneous state of the primary network is not directly observed, our control problem can be classified as a discrete-time POMDP with the following components.

1) Belief State: In a discrete-time POMDP, the decision maker selects an action and receives some reward, together with an observation that reveals information about the actual system state. It is well known [11] that, for each POMDP, all information useful for making decisions can be encapsulated in the posterior distribution vector of the system states. In our control problem, at the beginning of each time slot, based on previous actions and observations, the secondary user can calculate the probability that the primary network is idle in that time slot. We denote this probability by p and call it the 'belief state'. After each time slot, depending on the action taken by the secondary user and the corresponding outcome, the belief state p is updated according to one of the following four cases.

Case 1: The secondary user stays idle and does not carry out spectrum sensing. Then the next belief state, i.e., the probability that the primary network is idle in the next time slot, is:

$$L_1(p) = p(1-b) + (1-p)g = p(1-b-g) + g. \tag{3}$$


Case 2: The secondary user senses the channel for a duration τ, obtains the outcome θ = 1, i.e., the primary network is detected as active, and therefore keeps quiet for the rest of the time slot. Using Bayes' rule, the belief state in the next time slot is:

$$L_2(p,\tau) = \frac{p P_{fa}(\tau)(1-b) + (1-p)P_d\,g}{p P_{fa}(\tau) + (1-p)P_d}. \tag{4}$$

Case 3: The secondary user senses the channel for a duration τ, obtains θ = 0, i.e., the primary network is detected as idle, carries out transmission, and subsequently receives an ACK at the end of the slot. The ACK implies that the primary network was actually idle during the current time slot; therefore, the belief state in the next time slot is L_3 = 1 − b.

Case 4: The same as Case 3, except that a NAK is received at the end of the slot. This implies that a mis-detection happened during spectrum sensing and the primary network was actually active during the current time slot; therefore, the belief state in the next time slot is L_4 = g.

As can be noted, L_3 and L_4 do not depend on p and τ. However, in the rest of this paper, to simplify the notation, we use L_1(p, τ), L_3(p, τ), L_4(p, τ) interchangeably with L_1(p), L_3, L_4 defined above. Also, letting Q_i(p, τ), i = 2, 3, 4, denote the probability that Case i above happens, we have:

$$Q_2(p,\tau) = p P_{fa}(\tau) + (1-p)P_d, \quad Q_3(p,\tau) = p(1-P_{fa}(\tau)), \quad Q_4(p,\tau) = (1-p)(1-P_d). \tag{5}$$
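To make the update rules concrete, here is a minimal Python sketch of the belief updates (3)-(4) and the case probabilities (5). The function names and the default P_d = 0.9 are illustrative choices, not from the paper.

```python
def update_belief(p, case, b, g, pfa=None, pd=0.9):
    """Next-slot belief that the primary network is idle, per Cases 1-4.

    case 1: stayed idle (Eq. (3));  case 2: sensed, theta = 1, kept quiet (Eq. (4));
    case 3: sensed, transmitted, got ACK;  case 4: sensed, transmitted, got NAK.
    """
    if case == 1:
        return p * (1 - b - g) + g                       # L1(p)
    if case == 2:
        return ((p * pfa * (1 - b) + (1 - p) * pd * g)
                / (p * pfa + (1 - p) * pd))              # L2(p, tau)
    if case == 3:
        return 1 - b                                     # L3: slot was idle
    if case == 4:
        return g                                         # L4: slot was active
    raise ValueError("case must be 1, 2, 3, or 4")

def case_probabilities(p, pfa, pd=0.9):
    """Probabilities Q2, Q3, Q4 of Cases 2-4 when sensing is performed, Eq. (5)."""
    q2 = p * pfa + (1 - p) * pd          # sensing outcome theta = 1
    q3 = p * (1 - pfa)                   # transmission followed by an ACK
    q4 = (1 - p) * (1 - pd)              # mis-detection -> collision -> NAK
    return q2, q3, q4                    # these sum to 1
```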

2) Properties of L_i(p, τ): From (3) and the assumption that the state-transition matrix M has positive memory, i.e., 1 − b − g > 0, it follows that L_1(p) is increasing in p. Also, it can be verified that:

$$\frac{\partial L_2(p,\tau)}{\partial p} = \frac{(1-b-g)\,P_d\,P_{fa}(\tau)}{\big[P_d - p(P_d - P_{fa}(\tau))\big]^2}. \tag{6}$$

As 1 − b − g > 0 and P_d > P_fa(τ) > 0, ∂L_2(p, τ)/∂p is positive and increasing in p. Therefore, L_2(p, τ) is convex and increasing in p. At the same time:

$$L_2(p,\tau) = \frac{(1-b)\big(p P_{fa}(\tau) + (1-p)P_d\big) - (1-p)P_d(1-b-g)}{p P_{fa}(\tau) + (1-p)P_d} \le 1-b. \tag{7}$$

Similarly, it can be shown that L_2(p, τ) ≥ g. So

$$L_4 \le L_2(p,\tau) \le L_3. \tag{8}$$
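As a quick numerical spot-check of the ordering in (8), reusing update_belief() from the sketch above (the parameter values b = g = 0.05 and P_fa = 0.1 are arbitrary):

```python
# Spot-check of (8): g = L4 <= L2(p, tau) <= 1 - b = L3.
for p in (0.1, 0.5, 0.9):
    l2 = update_belief(p, case=2, b=0.05, g=0.05, pfa=0.1, pd=0.9)
    assert 0.05 <= l2 <= 0.95, l2
```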

3) Costs, Reward, and Control Objective: When carrying out spectrum sensing, the secondary user spends energy on channel sampling and signal processing. We assume that the energy cost of carrying out spectrum sensing for τ units of time is a continuous, non-negative, and increasing function of τ, denoted by c_s(τ). If the sensing outcome is θ = 0, the secondary user proceeds to transmit during the rest of the time slot. Let r_t and c_t be, respectively, the throughput gain and the energy cost, both measured per unit of transmission time. It is reasonable to assume that r_t > c_t ≥ 0; otherwise, there is no justification for the secondary user to transmit at all. Note that the secondary user can also choose to stay idle, i.e., neither sense nor transmit, during a time slot to conserve energy. However, doing so has negative effects such as lower throughput and longer latency. We assume that the cost to the secondary user of staying idle during a time slot is c_i, c_i ≥ 0. In a particular time slot, if the probability of the primary network being idle is p and the secondary user carries out spectrum sensing for τ units of time, then the expected net gain can be calculated as:

$$\begin{aligned} G(p,\tau) &= p\big(1-P_{fa}(\tau)\big)(T-\tau)r_t - c_s(\tau) - \big[p(1-P_{fa}(\tau)) + (1-p)(1-P_d)\big](T-\tau)c_t \\ &= p(1-P_{fa}(\tau))(T-\tau)(r_t-c_t) - (1-p)(1-P_d)(T-\tau)c_t - c_s(\tau). \end{aligned} \tag{9}$$

As, by assumption, r_t > c_t, T > τ, and P_fa(τ) < P_d < 1, G(p, τ) is continuous and increasing in p. G(p, τ) is also continuous in τ. To simplify the notation, we use τ = 0 to represent the choice to stay idle. The expected reward when the sensing decision is set to τ is then:

$$R(p,\tau) = \begin{cases} -c_i, & \tau = 0, \\ G(p,\tau), & \tau_{\min} \le \tau \le \tau_{\max}. \end{cases} \tag{10}$$
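The gain and reward functions translate directly into code. The following sketch uses the linear sensing-cost model c_s(τ) = sτ and the example values r_t = 3, c_t = 2, s = 1, T = 10 ms that the paper adopts later in Section V (assumptions at this point in the exposition); P_fa(τ) is passed in as a number so that any false-alarm model can be plugged in.

```python
def expected_gain(p, tau, pfa_tau, T=10e-3, rt=3.0, ct=2.0, pd=0.9, s=1.0):
    """G(p, tau) of Eq. (9); pfa_tau is P_fa(tau), cs(tau) = s * tau."""
    return (p * (1 - pfa_tau) * (T - tau) * (rt - ct)
            - (1 - p) * (1 - pd) * (T - tau) * ct
            - s * tau)

def slot_reward(p, tau, pfa_tau, ci=0.0, **kw):
    """R(p, tau) of Eq. (10); tau = 0 encodes the 'stay idle' action."""
    return -ci if tau == 0 else expected_gain(p, tau, pfa_tau, **kw)
```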

We are interested in the following problem.

Definition 1: Let p_n denote the probability that the primary network is idle during time slot n. Select the sensing times τ_n, τ_n ∈ {0} ∪ [τ_min, τ_max], to maximize the following discounted reward function:

$$E\Big\{\sum_{n=0}^{N-1} \alpha^n R(p_n,\tau_n) \,\Big|\, p_0 = p\Big\}, \tag{11}$$

where 0 < α < 1 is a discounting factor and 1 ≤ N ≤ ∞ is the control horizon.

III. STRUCTURE OF OPTIMAL POLICIES

A. Monotonicity and Convexity of Value Functions

When N < ∞, let V^N(p) denote the maximum achievable discounted reward function in Definition 1; V^N(p) satisfies the following Bellman equation ([11]):

$$V^N(p) = \max\Big\{-c_i + \alpha V^{N-1}(L_1(p)),\ \max_{\tau\in[\tau_{\min},\tau_{\max}]}\Big[G(p,\tau) + \alpha\sum_{i=2,3,4} Q_i(p,\tau)V^{N-1}(L_i(p,\tau))\Big]\Big\}, \quad N > 1, \tag{12}$$


where

$$V^1(p) = \max\Big\{-c_i,\ \max_{\tau\in[\tau_{\min},\tau_{\max}]} G(p,\tau)\Big\}. \tag{13}$$

When N = ∞, let V(p) denote the maximum achievable discounted reward function in Definition 1; V(p) satisfies the following Bellman equation ([11]):

$$V(p) = \max\Big\{-c_i + \alpha V(L_1(p)),\ \max_{\tau\in[\tau_{\min},\tau_{\max}]}\Big[G(p,\tau) + \alpha\sum_{i=2,3,4} Q_i(p,\tau)V(L_i(p,\tau))\Big]\Big\}. \tag{14}$$

Note that in (14), G(p, τ) is the immediate gain obtained by sensing for duration τ, while the expected discounted future gain given this sensing duration is α Σ_{i=2,3,4} Q_i(p, τ)V(L_i(p, τ)). As can be seen, both the immediate and future gains depend on τ.

It can be shown [11] that lim_{n→∞} V^n(p) = V(p). It can also be verified that V^N(p) and V(p) are continuous in p. Let us now prove some important structural results for V^N(p) and V(p).

Proposition 1: V^N(p) and V(p) are nondecreasing in p.

Proof: First, let us prove the property for V^N(p), N < ∞. The proof proceeds by induction. As G(p, τ) is increasing in p, it follows from (13) that V^1(p) is nondecreasing in p. Now, assuming that V^n(p) is nondecreasing in p for some n ≥ 1, we have

$$V^{n+1}(p) = \max\Big\{-c_i + \alpha V^n(L_1(p)),\ \max_{\tau\in[\tau_{\min},\tau_{\max}]}\Big[G(p,\tau) + \alpha\sum_{i=2,3,4} Q_i(p,\tau)V^n(L_i(p,\tau))\Big]\Big\}. \tag{15}$$

Let E_1 = −c_i + αV^n(L_1(p)) and E_2 = G(p, τ) + α Σ_{i=2,3,4} Q_i(p, τ)V^n(L_i(p, τ)). As L_1(p) and V^n(p) are both nondecreasing in p, it follows that E_1 is nondecreasing in p. The first term in E_2, i.e., G(p, τ), is nondecreasing in p. For the second term in E_2, letting 0 < q < p, we have:

$$\begin{aligned} &\sum_{i=2,3,4} Q_i(p,\tau)V^n(L_i(p,\tau)) - \sum_{i=2,3,4} Q_i(q,\tau)V^n(L_i(q,\tau)) \\ &\quad> \sum_{i=2,3,4} \big(Q_i(p,\tau)-Q_i(q,\tau)\big)V^n(L_i(p,\tau)), \qquad \text{as } L_2(p,\tau) > L_2(q,\tau), \\ &\quad= V^n(L_3)\big(Q_3(p,\tau)-Q_3(q,\tau)\big) + V^n(L_2(p,\tau))\big(Q_2(p,\tau)-Q_2(q,\tau)\big) \\ &\qquad\quad + V^n(L_4)\big(Q_3(q,\tau)+Q_2(q,\tau)-Q_3(p,\tau)-Q_2(p,\tau)\big), \qquad \text{as } \textstyle\sum_{i=2,3,4} Q_i(p,\tau)=\sum_{i=2,3,4} Q_i(q,\tau)=1, \\ &\quad= \big(V^n(L_3)-V^n(L_2(p,\tau))\big)\big(Q_3(p,\tau)-Q_3(q,\tau)\big) \\ &\qquad\quad + \big(V^n(L_2(p,\tau))-V^n(L_4)\big)\big[(Q_3(p,\tau)+Q_2(p,\tau))-(Q_3(q,\tau)+Q_2(q,\tau))\big] \;\ge\; 0, \end{aligned} \tag{16}$$

where the last inequality holds because Q_3(p, τ) = p(1 − P_fa(τ)) is increasing in p and

$$Q_3(p,\tau)+Q_2(p,\tau) = p(1-P_d) + P_d \tag{17}$$

is also increasing in p. So the second term in E_2 is increasing in p, which implies that E_2 is also increasing in p. As both E_1 and E_2 are nondecreasing in p, it follows that V^{n+1}(p) is nondecreasing in p. As lim_{n→∞} V^n(p) = V(p), it follows that V(p) is also nondecreasing in p. ∎

Proposition 2: V^N(p) and V(p) are convex in p. Please refer to Appendix A for the proof.

Remark 1: Proposition 1 states, intuitively, that the higher the probability p that the primary network is idle at the beginning of the control process, the higher the maximum achievable expected rewards V^N(p) and V(p). Proposition 2 then indicates how fast V^N(p) and V(p) increase in p: as they are convex, they increase at least linearly in p.

B. Properties of Optimal Policies

Let us explore some useful structural properties of the optimal control policies. Letting

$$G^*(p) = \max_{\tau\in[\tau_{\min},\tau_{\max}]} G(p,\tau), \tag{18}$$

as G(p, τ) is increasing in p, so is G*(p). We state the following property of the optimal control policies.

Proposition 3: Let p* be the minimum value of p such that G*(p*) > −c_i. If, in a particular time slot, the probability of the primary network being idle is p ≥ p*, then an optimal policy must carry out spectrum sensing in that time slot.

Proof: Let us prove the case N < ∞. We have V^1(p) = max{−c_i, max_{τ∈[τ_min,τ_max]} G(p, τ)} = max{−c_i, G*(p)}, where G*(p) ≥ G*(p*) > −c_i; therefore, spectrum sensing should be carried out when N = 1. For N > 1 we have

$$V^N(p) = \max\Big\{-c_i + \alpha V^{N-1}(L_1(p)),\ \max_{\tau\in[\tau_{\min},\tau_{\max}]}\Big[G(p,\tau) + \alpha\sum_{i=2,3,4} Q_i(p,\tau)V^{N-1}(L_i(p,\tau))\Big]\Big\}. \tag{19}$$

Notice that Σ_{i=2,3,4} Q_i(p, τ) = 1 and Σ_{i=2,3,4} Q_i(p, τ)L_i(p, τ) = L_1(p). Then, as V^{N−1}(p) is convex in p, it follows that

$$V^{N-1}(L_1(p)) \le \sum_{i=2,3,4} Q_i(p,\tau)V^{N-1}(L_i(p,\tau)), \qquad \forall\, \tau_{\min}\le\tau\le\tau_{\max}. \tag{20}$$

This, together with the fact that G*(p) ≥ G*(p*) > −c_i, implies that the system should carry out spectrum sensing in the current time slot. The proof for N = ∞ is similar. ∎

Remark 2: Proposition 3 gives a sufficient condition on the value of p, namely p ≥ p*, for carrying out spectrum sensing in a particular time slot. However, this may not be a necessary condition. In general, the optimal control policies for our POMDP model may not possess a threshold-based structure (in the value of p). To the best of our knowledge, only a limited number of works prove the threshold-based characteristic of the optimal control policies for specific POMDP models (see [12]–[14]); unfortunately, our problem does not directly fit into these models.

A natural question to ask is, given that sensing is carried out, how the optimal sensing time τ varies with p. Let

$$F^N(p,\tau) = \alpha\sum_{i=2,3,4} Q_i(p,\tau)V^{N-1}(L_i(p,\tau)), \qquad N>1,\ \tau_{\min}\le\tau\le\tau_{\max}, \tag{21}$$

be the expected discounted future reward if the probability that the primary network is idle in a particular time slot is p and sensing is carried out for a duration τ. Similarly, for the case of an infinite control horizon, define

$$F(p,\tau) = \alpha\sum_{i=2,3,4} Q_i(p,\tau)V(L_i(p,\tau)), \qquad \tau_{\min}\le\tau\le\tau_{\max}. \tag{22}$$

The following result highlights the effect of increasing the spectrum-sensing time τ.

Proposition 4: F^N(p, τ) and F(p, τ) are nondecreasing in τ. Please refer to Appendix B for the proof.

Remark 3: Essentially, Proposition 4 makes concrete the intuition that the more sensing carried out in the current time slot, the better the expected reward in future time slots. This is because increasing the sensing time gives the secondary user more accurate knowledge of the state of the primary network, which in turn improves future control.

To further study the effect of varying the sensing time, we make the following assumptions.

• A1: The probability of false alarm, P_fa(τ), is convex and decreasing in τ, τ_min ≤ τ ≤ τ_max.

• A2: The energy cost of sensing, c_s(τ), is convex and increasing in τ.

For justifications of assumptions A1 and A2, please refer to Appendix E.

Lemma 1: Given assumptions A1 and A2, the function G(p, τ) is concave in τ, τ_min ≤ τ ≤ τ_max. Please refer to Appendix C for the proof.

As the function G(p, τ) is concave and continuous, with a strictly decreasing first-order derivative for all τ in the interval [τ_min, τ_max], it has a unique maximum point. Let

$$\tau^*(p) = \arg\max_{\tau_{\min}\le\tau\le\tau_{\max}} G(p,\tau). \tag{23}$$
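Since G(p, ·) is concave on [τ_min, τ_max], τ*(p) can be found reliably by a simple one-dimensional search. A minimal sketch (the helper name and grid resolution are my own choices):

```python
import numpy as np

def tau_star(p, gain_fn, tau_min, tau_max, n=200):
    """tau*(p) of Eq. (23): grid-search maximizer of tau -> G(p, tau).

    gain_fn(p, tau) is any implementation of G, e.g. a wrapper around
    expected_gain() above with a concrete P_fa(tau) model plugged in.
    """
    taus = np.linspace(tau_min, tau_max, n)
    return float(taus[int(np.argmax([gain_fn(p, t) for t in taus]))])
```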


The following proposition relates the optimal sensing time to the value of τ*(p) defined in (23).

Proposition 5: Given assumptions A1 and A2, if at the beginning of a particular time slot the probability of the primary network being idle is p and sensing is carried out, then the optimal sensing time τ^opt is greater than or equal to τ*(p).

Proof: We prove the case N = ∞; the case N < ∞ is similar. The proof is by contradiction. Suppose the optimal sensing time satisfies τ^opt < τ*(p). Due to the concavity of G(p, τ) in τ, we have G(p, τ^opt) < G(p, τ*(p)). Furthermore, from Proposition 4, we have F(p, τ^opt) ≤ F(p, τ*(p)); therefore

$$G(p,\tau^{opt}) + F(p,\tau^{opt}) < G(p,\tau^*(p)) + F(p,\tau^*(p)), \tag{24}$$

which contradicts the fact that τ^opt is the optimal sensing time given p. This completes the proof. ∎

Proposition 5 gives a lower bound for the optimal sensing time τ^opt. To see how this lower bound varies with p, consider the first-order derivative of G(p, τ) with respect to τ:

$$\frac{\partial G}{\partial\tau} = p\Big[(r_t-c_t)\big(P_{fa}(\tau) - (T-\tau)P_{fa}'(\tau) - 1\big) - (1-P_d)c_t\Big] - \big[c_s'(\tau) - (1-P_d)c_t\big] = p\,u(\tau) - v(\tau), \tag{25}$$

where u(τ) = (r_t − c_t)(P_fa(τ) − (T − τ)P'_fa(τ) − 1) − (1 − P_d)c_t and v(τ) = c'_s(τ) − (1 − P_d)c_t.

We have argued in the proof of Lemma 1 that u(τ) is strictly decreasing in τ. Furthermore, as c_s(τ) is convex, v(τ) is nondecreasing in τ. The following lemma characterizes how τ*(p) varies with p.

Lemma 2: Given assumptions A1 and A2, we have the following cases.

• Case 1: If v(τ_min) ≥ 0, then τ*(p) is nondecreasing in p.

• Case 2: If v(τ_max) < 0, then τ*(p) is nonincreasing in p.

• Case 3: If v(τ_min) < 0 ≤ v(τ_max), let τ^v be such that v(τ^v) = 0, τ_min < τ^v ≤ τ_max. Then: a) if u(τ^v) > 0, τ*(p) is nondecreasing in p; b) if u(τ^v) ≤ 0, τ*(p) is nonincreasing in p.

Please refer to Appendix D for the proof.

Remark 4: Proposition 5 states that when the probability of the primary network being idle is p and sensing is carried out, the optimal sensing time is lower-bounded by τ*(p). Lemma 2 further shows that this lower bound τ*(p) is always monotonic in p.


Remark 5: If the energy costs of sensing and transmission are negligible, then the gain of sensing for τ units of time when the probability of the primary network being idle is p simplifies to:

$$\tilde G(p,\tau) = p\big(1-P_{fa}(\tau)\big)(T-\tau)r_t. \tag{26}$$

The value $\tilde\tau$ of τ that maximizes $\tilde G(p,\tau)$ does not depend on p. Therefore, it can be verified that the policy that senses for $\tilde\tau$ units of time in every time slot maximizes the expected reward. Our optimization problem is then equivalent to the problem considered in [2], [3], which focus on maximizing throughput within each time slot.

IV. HEURISTIC POLICIES

Directly solving the POMDP described in Section III can be computationally challenging. In this section, suboptimal control policies that can be obtained at lower complexity are discussed.

A. Grid-based Approximation

Grid-based approximation is a widely used approach for approximating solutions to POMDPs. In this approach, the value function is approximated at a finite number of belief points on a grid; at belief points not belonging to the grid, the value function is evaluated by interpolation. In this paper, we employ the fixed-resolution, regular-grid approach proposed by Lovejoy [15]. Applied to our POMDP, the range of the belief state is first divided into P equally spaced points p^0, p^1, ..., p^{P−1}. The value function at these grid points is then calculated using the following iteration:

$$v(p^j) = \max\Big\{-c_i + \alpha\tilde V(L_1(p^j)),\ \max_{\tau\in[\tau_{\min},\tau_{\max}]}\Big[G(p^j,\tau) + \alpha\sum_{i=2,3,4} Q_i(p^j,\tau)\tilde V(L_i(p^j,\tau))\Big]\Big\}, \tag{27}$$

where, for each value of p, we find j such that p^j ≤ p ≤ p^{j+1} and calculate

$$\tilde V(p) = \frac{p-p^j}{p^{j+1}-p^j}\,v(p^{j+1}) + \frac{p^{j+1}-p}{p^{j+1}-p^j}\,v(p^j). \tag{28}$$

As pointed out in [15], the iteration in (27) is guaranteed to converge, and the value Ṽ(p) in (28) is an upper bound on the optimal value function V(p) (as V(p) is convex). After obtaining the approximation Ṽ(p), we can substitute it into the Bellman equation (14) to obtain the corresponding sensing time.
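Below is a self-contained sketch of this fixed-grid value iteration, combining (27)-(28) with the model equations (3)-(5) and (9) and the energy-detection false-alarm model that Section V adopts (Eq. (32)). The parameter values mirror the paper's numerical section where they are given; the discount factor α = 0.95, the grid and search resolutions, and the function names are my assumptions.

```python
import numpy as np
from scipy.stats import norm

b = g = 0.05                                   # memory mu = 1 - b - g = 0.9
T = 10e-3                                      # slot duration: 10 ms
fs = (8.0 / 7.0) * 6e6                         # sampling rate for the 6 MHz channel
snr = 10 ** (-15 / 10)                         # gamma = -15 dB
pd, rt, ct, s, ci, alpha = 0.9, 3.0, 2.0, 1.0, 0.0, 0.95

def pfa(tau):
    """P_fa(tau) for energy detection, Eq. (32)."""
    return norm.sf(np.sqrt(2 * snr + 1) * norm.isf(pd) + np.sqrt(tau * fs) * snr)

def gain(p, tau):
    """Instantaneous expected gain G(p, tau), Eq. (9), with cs(tau) = s * tau."""
    f = pfa(tau)
    return (p * (1 - f) * (T - tau) * (rt - ct)
            - (1 - p) * (1 - pd) * (T - tau) * ct - s * tau)

taus = np.linspace(0.05 * T, 0.3 * T, 60)      # candidate sensing durations
grid = np.linspace(0.0, 1.0, 10)               # belief grid p^0, ..., p^{P-1}
v = np.zeros_like(grid)                        # value estimates at grid points

for _ in range(2000):                          # value iteration, Eq. (27)
    new_v = np.empty_like(v)
    for j, p in enumerate(grid):
        idle = -ci + alpha * np.interp(p * (1 - b - g) + g, grid, v)   # L1(p)
        best = -np.inf
        for tau in taus:
            f = pfa(tau)
            q2 = p * f + (1 - p) * pd                      # sensed 'active'
            q3, q4 = p * (1 - f), (1 - p) * (1 - pd)       # ACK / NAK cases
            l2 = (p * f * (1 - b) + (1 - p) * pd * g) / q2             # Eq. (4)
            future = (q2 * np.interp(l2, grid, v)
                      + q3 * np.interp(1 - b, grid, v)
                      + q4 * np.interp(g, grid, v))        # interpolation, Eq. (28)
            best = max(best, gain(p, tau) + alpha * future)
        new_v[j] = max(idle, best)
    if np.max(np.abs(new_v - v)) < 1e-9:       # stop once the iteration settles
        break
    v = new_v
```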

B. Myopic Policy ζm(·)

Proposition 3 identifies a sufficient condition on the probability that the primary network is idle for sensing to be carried out. Furthermore, Proposition 5 gives a lower bound on the optimal sensing time, given that sensing is carried out. Based on these two propositions, we consider the following policy:

$$\tau_n = \zeta_m(p_n) = \begin{cases} 0, & \text{if } p_n < p^*, \\ \tau^*(p_n), & \text{if } p_n \ge p^*, \end{cases} \tag{29}$$

where ζm(·) maps the belief state p_n into the sensing time for the secondary user. Note that setting the sensing time τ = ζm(p) myopically maximizes the instantaneous gain when the belief state is p.
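A sketch of ζm(·), reusing gain(), T, and ci from the grid sketch and tau_star() from Section III; the coarse scan used to locate p* (the threshold of Proposition 3) is an illustrative choice, not the paper's method.

```python
import numpy as np

def p_threshold(n=1001):
    """Smallest belief p with G*(p) > -c_i (cf. Proposition 3), by a coarse scan."""
    for p in np.linspace(0.0, 1.0, n):
        if gain(p, tau_star(p, gain, 0.05 * T, 0.3 * T)) > -ci:
            return float(p)
    return 1.0

def myopic_policy(p, p_thr):
    """zeta_m of Eq. (29): 0 encodes 'stay idle', otherwise sense for tau*(p)."""
    return 0.0 if p < p_thr else tau_star(p, gain, 0.05 * T, 0.3 * T)

# Example usage: tau_n = myopic_policy(p_n, p_threshold())
```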

C. Static Policy ζs(·)

Consider a static spectrum access policy that always carries out sensing with a fixed sensing duration. Noting that the stationary probability of the primary network being idle is π_i = g/(b + g), we can calculate

$$\tau^*(\pi_i) = \arg\max_{\tau_{\min}\le\tau\le\tau_{\max}} G(\pi_i,\tau). \tag{30}$$

Then, in every time slot, sensing is carried out for the duration τ*(π_i). We term this policy ζs(·).

D. Genie's Policy

To facilitate comparison, we normalize the performance of the different control policies to that of the following so-called 'Genie's policy'. Suppose that, at the beginning of each time slot, a genie tells the secondary user exactly what the state of the primary network is. Then the best action for the secondary user is to carry out transmission if the primary network is idle, and to stay idle if the primary network is active. The expected discounted reward for the secondary user can be calculated as:

$$V^{genie} = \sum_{n=0}^{\infty}\alpha^n\left[\frac{g}{b+g}\,T(r_t-c_t) - \frac{b}{b+g}\,c_i\right] = \frac{g\,T(r_t-c_t) - b\,c_i}{(1-\alpha)(b+g)}. \tag{31}$$
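The static policy and the Genie benchmark are then one-liners on top of the sketches above (reusing their parameters and helpers):

```python
# Static policy zeta_s, Eq. (30): one fixed sensing duration, computed at the
# stationary belief; and the Genie benchmark value of Eq. (31).
pi_idle = g / (b + g)                                    # stationary idle probability
tau_static = tau_star(pi_idle, gain, 0.05 * T, 0.3 * T)  # used in every slot

v_genie = (g * T * (rt - ct) - b * ci) / ((1 - alpha) * (b + g))
```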

V. NUMERICAL RESULTS AND DISCUSSION

In this section, we present numerical results that illustrate our theoretical analysis. We also compare the performance of the optimal policies with that of the suboptimal policies described in Section IV. We focus on the infinite-horizon control scenario, i.e., N = ∞.

A. Model for Numerical Studies

We assume that the primary network operates on a channel with a bandwidth of B = 6 MHz. The time-slot duration is T = 10 ms. The secondary user employs energy detection for spectrum sensing, with the channel oversampled at rate f_s = (8/7)B. Given the required probability of detection P_d, the SNR γ of the active primary signal at the receiver of the secondary user, and the sensing time τ, the probability of false alarm can be calculated by [2]:

$$P_{fa}(\tau) = Q\big(\sqrt{2\gamma+1}\,Q^{-1}(P_d) + \sqrt{\tau f_s}\,\gamma\big), \tag{32}$$

where Q(·) is the complementary distribution function of a standard Gaussian variable.


We set γ = −15 dB and P_d = 90%. We assume that the sensing cost is linear in the sensing time, i.e., c_s(τ) = sτ, s > 0, which satisfies the assumption that c_s(τ) is convex and increasing in τ. The memory of the primary network switching process, µ = 1 − b − g, is varied from 0.1 to 0.95; while doing so, we always set b = g = (1 − µ)/2.

B. Characteristics of Optimal Policies

To obtain optimal solutions to our POMDP, we use the solver package provided by A. Cassandra [16].

1) Optimal reward function is convex and nondecreasing: In Fig. 2, we plot the optimal reward function V(p) versus the initial belief state p. As proved in Propositions 1 and 2, V(p) is convex and nondecreasing in p. As can be seen, for high values of p, V(p) is close to a linear function. This can be explained by the fact that the optimal control action does not vary much for high values of p (refer to Figs. 5 and 6).

2) Convexity of P_fa(τ) and concavity of G(p, τ): In Figs. 3 and 4, we plot the probability of false alarm P_fa(τ) and the instantaneous gain G(p, τ), respectively, as functions of the sensing time τ with the other parameters fixed. As shown in Appendix E, for energy detection, P_fa(τ) is convex and decreasing in the region where 0 < P_fa(τ) ≤ 0.5. Also, from Lemma 1, when P_fa(τ) is convex and decreasing while c_s(τ) is convex and increasing, the instantaneous gain G(p, τ) is concave in τ. These effects are illustrated in Figs. 3 and 4.

3) Optimal sensing time τ^opt(p): In Figs. 5 to 8, given different system parameters, we plot the sensing time versus the probability of the primary network being idle for the different control policies: optimal, grid-based approximation, myopic, and static. From these plots, it can be observed that there is a threshold value of the belief state p above which it is optimal for the secondary user to carry out spectrum sensing and below which it is optimal to stay idle. In other words, the optimal sensing strategies tend to exhibit a 'threshold-based' characteristic. Note that this conjecture is stronger than Proposition 3, which only shows that there is a minimum value of p above which sensing should be carried out.

From Fig. 5, it can also be observed that for a relatively low memory of the primary network switching process (µ = 0.5), both the grid-based and myopic policies approximate the optimal sensing durations well. However, in Fig. 6, when the memory increases (µ = 0.9), only the grid-based policy is close to optimal, while the myopic policy differs significantly from the optimal policy. This difference can be explained by the fact that the myopic policy focuses only on instantaneous gain and ignores future effects; as a result, it is not able to exploit the memory (correlation) in the primary network states.

We note also that the grid-based policy plotted in Figs. 5 and 6 is obtained using only 10 discretized points. This shows that, for our POMDP control problem, small grid-based approximations are good enough.

Comparing Figs. 5 and 6, it is evident that when the memory of the primary network switching process increases from µ = 0.5 to µ = 0.9, the optimal sensing time also increases. This is because the higher the memory, the slower the change in the status of the primary network and, therefore, the more useful the sensing activity for predicting the future state of the primary network.

In Fig. 7, the cost of staying idle in a time slot is increased to c_i = 1000. As can be seen, this discourages the secondary user from staying idle, i.e., sensing is carried out for smaller values of the probability that the primary network is idle.

In Figs. 5 to 7, it can be observed that τ*(p) is nondecreasing in p. This is because, by setting s = 1 and c_t = 2, we have v(τ) = c'_s(τ) − (1 − P_d)c_t = 0.8 > 0, so Case 1 in Lemma 2 applies. On the other hand, in Fig. 8, it can be observed that τ*(p) is nonincreasing in p. This is because, by setting s = 1 and c_t = 50, we have v(τ) = c'_s(τ) − (1 − P_d)c_t = −4 < 0, so Case 2 in Lemma 2 applies.

C. Performance Comparison

To facilitate comparison, we normalize the performance of the different control policies to that of the 'Genie's policy' described in Section IV-D.

1) Rewards: In Fig. 9, we plot the normalized expected reward versus the memory µ of the primary network switching process for the optimal, grid-based, myopic, and static policies. As can be seen, the performance of the optimal, grid-based, and myopic policies all improves as the memory µ increases. This is because the higher the memory, the more slowly the primary network switches state, which makes sensing activities more useful for future control. On the other hand, the performance of the static policy ζs(·) does not change with the memory µ. This is because when µ is varied, we always keep b = g, which leaves the stationary probability of the primary network being active or idle unchanged (π_i = 0.5). This means that, when the memory changes, ζs(·) still carries out sensing for the same fixed duration in every time slot; therefore, its performance does not change as the memory increases.

In Fig. 9, it can be observed that for the low range of memory (µ < 0.7), the performance of both the grid-based and myopic policies is close to that of the optimal policy. On the other hand, for higher memory, i.e., when the primary network switches state more slowly, only the grid-based policy can approximate the performance of the optimal policy. The explanation for this is the same as that given for the difference in sensing times in Figs. 5 and 6. In particular, for low memory, both the myopic

and grid-based policies can approximate the optimal decisions well, whereas for high memory only the grid-based policy makes decisions close to optimal.

In Fig. 9, it is evident that the performance of a 10-point grid-based approximation is almost the same as that of the optimal policy; the performance of a 5-point grid-based approximation is also very close to optimal. This shows that we can employ the grid-based approximation approach without loss of performance in our POMDP problem.

In Fig. 10, we set c_i = 1000 to examine the effect of introducing the cost of staying idle. As can be seen, the performance trends are similar to those in Fig. 9. The extra effect that can be observed is that the performance of the myopic policy ζm(·) is closer to that of the optimal and grid-based policies in this scenario. This can be explained by referring back to Fig. 7, where the sensing time of the myopic policy is quite close to that of the optimal and grid-based policies.

VI. CONCLUSION

In this paper, we have studied spectrum-sensing policies that take into account the dynamics of the primary network and determine the spectrum-sensing duration in order to optimize the secondary user's performance. As such, this paper bridges the gap between the two groups of existing spectrum-sensing/control literature that focus either on adapting to the dynamics of the primary networks or on optimizing the spectrum-sensing duration, but not on both. We have presented detailed theoretical analysis and numerical studies for the optimization problem.

We are currently extending this work in two directions. One is to study different trade-offs when some constraints in this paper are relaxed; e.g., instead of enforcing a fixed probability of detection, a time-average interference constraint can be introduced to protect the primary network's operation. Another direction is to extend this work to multi-user scenarios and consider cooperative and/or distributed spectrum-sensing policies.

APPENDIX

A. Proof of Proposition 2

Proof: We proceed by induction. From (13), V^1(p) = max{−c_i, max_{τ∈[τ_min,τ_max]} G(p, τ)}, where

G(p, τ) is linear in p. As the maximum of convex functions is convex, V^1(p) is convex in p. Now, assume that V^n(p) is convex in p for some n ≥ 1. From (12), if we can prove that V^n(L_1(p)) and Q_i(p, τ)V^n(L_i(p, τ)), i = 2, 3, 4, are all convex in p, then it follows that V^{n+1}(p) is convex.


As L_1(p) is linear, L_1(p) is convex. By assumption, V^n(p) is convex, and from Proposition 1, V^n(p) is nondecreasing. It can be shown ([17]) that if h(x) is convex and nondecreasing and g(x) is convex, then the composition f(x) = h(g(x)) is also convex. Applying this here, it follows that V^n(L_1(p)) is convex. We have Q_3(p, τ)V^n(L_3(p, τ)) = p(1 − P_fa(τ))V^n(1 − b), which is linear, and hence convex, in p. Similarly, Q_4(p, τ)V^n(L_4(p, τ)) = (1 − p)(1 − P_d)V^n(g) is linear and convex in p. To check whether Q_2(p, τ)V^n(L_2(p, τ)) is convex in p, we look at its second-order derivative with respect to p. It can be verified that

$$\frac{\partial^2\big(Q_2(p,\tau)V^n(L_2(p,\tau))\big)}{\partial p^2} = \frac{V^{n\,\prime\prime}(L_2(p,\tau))\,(1-b-g)^2 P_d^2 P_{fa}^2(\tau)}{\big[P_d - p(P_d-P_{fa}(\tau))\big]^3}. \tag{33}$$

We have shown earlier that L_2(p, τ) is convex and increasing in p; this, coupled with the fact that V^n(p) is convex and nondecreasing, implies that V^n(L_2(p, τ)) is convex and nondecreasing, so V^{n''}(L_2(p, τ)) ≥ 0. It then follows from (33) that the second-order derivative of Q_2(p, τ)V^n(L_2(p, τ)) with respect to p is nonnegative, which implies that Q_2(p, τ)V^n(L_2(p, τ)) is convex. We have proved that V^n(L_1(p)) and Q_i(p, τ)V^n(L_i(p, τ)), i = 2, 3, 4, are all convex in p; therefore, V^{n+1}(p) is convex. As lim_{n→∞} V^n(p) = V(p), it follows that V(p) is also convex in p. This completes the proof. ∎

B. Proof of Proposition 4

Proof: We prove the property for F(p, τ); the proof for F^N(p, τ) is similar. Let τ_min ≤ τ_1 < τ_2 ≤ τ_max. It can be verified that

$$\big(F(p,\tau_2)-F(p,\tau_1)\big)/\alpha = \big(Q_3(p,\tau_2)-Q_3(p,\tau_1)\big)V(L_3) + Q_2(p,\tau_2)V(L_2(p,\tau_2)) - Q_2(p,\tau_1)V(L_2(p,\tau_1)). \tag{34}$$

From (5) and the fact that P_fa(τ) is decreasing in τ, we have:

$$Q_3(p,\tau_2)-Q_3(p,\tau_1) > 0, \quad Q_2(p,\tau_2) > 0, \quad\text{and}\quad Q_3(p,\tau_2)-Q_3(p,\tau_1)+Q_2(p,\tau_2) = Q_2(p,\tau_1). \tag{35}$$

Furthermore,

$$\begin{aligned} \big(Q_3(p,\tau_2)-Q_3(p,\tau_1)\big)L_3 + Q_2(p,\tau_2)L_2(p,\tau_2) &= \big(p P_{fa}(\tau_1)-p P_{fa}(\tau_2)\big)(1-b) + \big(p P_{fa}(\tau_2)+(1-p)P_d\big)\,\frac{p P_{fa}(\tau_2)(1-b)+(1-p)P_d\,g}{p P_{fa}(\tau_2)+(1-p)P_d} \\ &= p P_{fa}(\tau_1)(1-b) + (1-p)P_d\,g = Q_2(p,\tau_1)L_2(p,\tau_1). \end{aligned} \tag{36}$$


From (35) and (36) and the fact that V(p) is convex in p, it follows that F(p, τ_2) − F(p, τ_1) is non-negative. This completes the proof. ∎

C. Proof of Lemma 1

Proof: We have:

$$G(p,\tau) = p(1-P_{fa}(\tau))(T-\tau)(r_t-c_t) - (1-p)(1-P_d)(T-\tau)c_t - c_s(\tau). \tag{37}$$

As c_s(τ) is convex, −c_s(τ) is concave. Also, the second term in (37) is linear in τ. So it remains to prove that the first term in (37) is concave in τ. Let w(p, τ) = p(1 − P_fa(τ))(T − τ)(r_t − c_t). Then

$$\frac{\partial w}{\partial\tau} = p(r_t-c_t)\big(P_{fa}(\tau) - (T-\tau)P_{fa}'(\tau) - 1\big), \tag{38}$$

where P'_fa(τ) is the first-order derivative of P_fa(τ). From assumption A1, it follows that P'_fa(τ) is negative and nondecreasing in τ. Furthermore, T − τ is positive and decreasing in τ. This leads to (T − τ)P'_fa(τ) being negative and nondecreasing in τ, which, together with the fact that P_fa(τ) is decreasing in τ and r_t > c_t, implies that ∂w/∂τ is decreasing in τ. Therefore, w(p, τ) is concave in τ and so is G(p, τ). ∎

D. Proof of Lemma 2

Proof: The proof is straightforward and based mainly on the fact that u(τ) is decreasing in τ while v(τ) is nondecreasing in τ. Due to limited space, we only give the proof for Case 3a. Given that v(τ_min) < 0 ≤ v(τ_max), v(τ^v) = 0, τ_min < τ^v ≤ τ_max, and u(τ^v) > 0, and letting 0 < q < p < 1, we need to prove that τ*(p) ≥ τ*(q). There are the following scenarios to consider.

Scenario 1: τ*(q) = τ_min. It immediately follows that τ*(p) ≥ τ*(q).

Scenario 2: τ*(p) = τ_max. It immediately follows that τ*(p) ≥ τ*(q).

Scenario 3: τ*(q) = τ_max. This implies that q u(τ_max) − v(τ_max) ≥ 0. Furthermore, as v(τ_max) ≥ 0 by assumption, we must have u(τ_max) ≥ 0. This leads to:

$$p\,u(\tau_{\max}) - v(\tau_{\max}) \ge q\,u(\tau_{\max}) - v(\tau_{\max}) \ge 0, \tag{39}$$

which implies that τ*(p) = τ_max.

Scenario 4: τ*(p) = τ_min. This implies that p u(τ_min) − v(τ_min) < 0. Furthermore, from the assumption that u(τ^v) > 0 and the fact that u(τ) is decreasing in τ, we must have u(τ_min) > 0. Therefore

$$q\,u(\tau_{\min}) - v(\tau_{\min}) < p\,u(\tau_{\min}) - v(\tau_{\min}) < 0, \tag{40}$$


which implies that τ*(q) = τ_min.

Scenario 5: τ_min < τ*(p), τ*(q) < τ_max. We then have

$$p\,u(\tau^*(p)) - v(\tau^*(p)) = q\,u(\tau^*(q)) - v(\tau^*(q)) = 0. \tag{41}$$

If τ*(p) ≤ τ^v, then from the assumption that u(τ^v) > 0 = v(τ^v) and the fact that u(·) is decreasing while v(·) is nondecreasing, we have p u(τ*(p)) − v(τ*(p)) > 0, contradicting (41). Therefore, τ*(p) > τ^v. Similarly, τ*(q) > τ^v. Then from (41) we have:

$$u(\tau^*(p)) = \frac{v(\tau^*(p))}{p} < \frac{v(\tau^*(q))}{q} = u(\tau^*(q)), \tag{42}$$

E. Convexity of Pf a (τ ) and cs (τ ) Let us justify the assumption that Pf a (τ ) is convex and decreasing and cs (τ ) is convex and increasing for the case of spectrum sensing based on energy detection. 1) Convexity of Pf a (τ ): Assume that the primary signal is complex-valued PSK and noise is circular symmetric complex Gaussian. Given required probability of detection Pd and the SNR of primary signal at the secondary receiver of γ, the probability of false alarm can be calculated by ( [2]): Pf a (τ ) = Q(

p

2γ + 1Q−1 (Pd ) +

p τ fs γ).

Differentiating with respect to τ gives: √   p p dPf a γ fs −1/2 ′ Pf a (τ ) = =− √ τ exp − ( 2γ + 1Q−1 (Pd ) + γ τ fs )2 /2 . dτ 2 2π

(43)

(44)

For τ > 0, it is clear that P'_fa(τ) < 0, so P_fa(τ) is decreasing in τ. Furthermore, when P_fa(τ) ≤ 0.5, from (43) we have √(2γ+1) Q^{-1}(P_d) + γ√(τ f_s) ≥ 0. This, together with (44), implies that P'_fa(τ) is monotonically increasing in τ when P_fa(τ) ≤ 0.5, i.e., P_fa(τ) is convex in τ in that region.

2) Convexity of c_s(τ): Energy detection comprises three main steps: i) taking K samples of the signal; ii) calculating the average power of the K samples; and iii) comparing the average power to a threshold. The complexity of the first two steps is linear in K, and K is linear in τ, so c_s(τ) can be assumed linear (and hence convex) in τ when spectrum sensing is based on energy detection.
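As a quick numerical check of these two properties (a sketch; the parameter values mirror Section V, and the τ range is chosen so that P_fa(τ) ≤ 0.5):

```python
import numpy as np
from scipy.stats import norm

fs, snr, pd = (8.0 / 7.0) * 6e6, 10 ** (-15 / 10), 0.9   # Section V parameters

def pfa(tau):                                             # Eq. (43)
    return norm.sf(np.sqrt(2 * snr + 1) * norm.isf(pd) + np.sqrt(tau * fs) * snr)

tau = np.linspace(0.3e-3, 3e-3, 300)     # region where P_fa(tau) <= 0.5
p = pfa(tau)
print(np.all(np.diff(p) < 0))            # True: P_fa is decreasing in tau
print(np.all(np.diff(p, 2) > -1e-12))    # True: P_fa is convex (2nd differences >= 0)
```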


REFERENCES

[1] FCC, "Spectrum policy task force report, FCC 02-155," Nov. 2002.
[2] Y.-C. Liang, Y. H. Zeng, E. Peh, and A. T. Hoang, "Sensing-throughput tradeoff for cognitive radio networks," in Proc. IEEE ICC'07, Glasgow, Jun. 2007.
[3] A. Ghasemi and E. Sousa, "Optimization of spectrum sensing for opportunistic spectrum access in cognitive radio networks," in Proc. 4th IEEE Consumer Communications and Networking Conference (CCNC), Las Vegas, USA, Jan. 2007.
[4] Y. Pei, A. T. Hoang, and Y.-C. Liang, "Sensing-throughput tradeoff in cognitive radio networks: How frequently should spectrum sensing be carried out?" in Proc. 18th Annual IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), Athens, Greece, Sep. 2007.
[5] Q. Zhao, L. Tong, A. Swami, and Y. Chen, "Decentralized cognitive MAC for opportunistic spectrum access in ad hoc networks: A POMDP framework," IEEE Journal on Selected Areas in Communications: Special Issue on Adaptive, Spectrum Agile and Cognitive Wireless Networks, vol. 25, no. 3, pp. 589-600, Apr. 2007.
[6] A. T. Hoang and Y.-C. Liang, "Adaptive scheduling of spectrum sensing periods in cognitive radio networks," to appear in Proc. 50th IEEE Global Telecommunications Conference (Globecom), Washington, DC, USA, Nov. 2007.
[7] D. Zhang and K. Wasserman, "Transmission schemes for time-varying wireless channels with partial state observations," in Proc. IEEE INFOCOM'02, New York, Jun. 2002, pp. 467-476.
[8] L. A. Johnston and V. Krishnamurthy, "Opportunistic file transfer over a fading channel: A POMDP search theory formulation with optimal threshold policies," IEEE Transactions on Wireless Communications, vol. 5, no. 2, pp. 394-405, Feb. 2006.
[9] M. Mushkin and I. Bar-David, "Capacity and coding for the Gilbert-Elliott channels," IEEE Transactions on Information Theory, vol. 35, no. 6, pp. 1277-1290, Nov. 1989.
[10] IEEE 802.22 Wireless RAN, "Functional requirements for the 802.22 WRAN standard, IEEE 802.22-05/0007r46," Oct. 2005.
[11] D. P. Bertsekas, Dynamic Programming and Optimal Control, 2nd ed., vols. 1 and 2. Athena Scientific, 2001.
[12] S. C. Albright, "Structural results for partially observable Markov decision processes," Operations Research, vol. 27, no. 5, pp. 1041-1053, Sep.-Oct. 1979.
[13] W. S. Lovejoy, "Some monotonicity results for partially observable Markov decision processes," Operations Research, vol. 35, no. 5, pp. 736-743, Sep.-Oct. 1987.
[14] I. MacPhee and B. Jordan, "Optimal search for a moving target," Probability in the Engineering and Informational Sciences, vol. 9, pp. 159-182, 1995.
[15] W. Lovejoy, "Computationally feasible bounds for partially observed Markov decision processes," Operations Research, vol. 39, no. 1, pp. 162-175, Jan.-Feb. 1991.
[16] A. R. Cassandra, "Tony's POMDP page," http://www.cs.brown.edu/research/ai/pomdp/.
[17] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.

Fig. 1. Operations of a primary network and a secondary user. The primary network switches between active and idle states according to a Markovian process. The secondary user must carry out spectrum sensing before transmitting in each time slot. Sensing can introduce false alarms (e.g., in slot 2) and mis-detections (e.g., in slot 5). Positive acknowledgments (ACK) and negative acknowledgments (NAK) are returned for successful and failed transmissions, respectively.

Fig. 2. Normalized optimal reward function. The optimal reward function V(p) is convex and increasing in p. P_d = 90%, c_i = 0, s = 1, c_t = 2, r_t = 3, and µ = 0.9. The optimal reward function is normalized by the reward of the Genie's policy.

Fig. 3. Probability of false alarm for energy detection. For the range of τ in which 0 ≤ P_fa(τ) ≤ 0.5, P_fa(τ) is convex and decreasing in τ. Parameters: channel bandwidth = 6 MHz, oversampling ratio = 8/7, SNR = −15 dB, P_d = 90%, s = 1, c_t = 2, r_t = 3.

Fig. 4. Instantaneous gain as a function of sensing time. For the range of τ in which P_fa(τ) is convex and decreasing, G(p, τ) is concave in τ. Parameters: channel bandwidth = 6 MHz, oversampling ratio = 8/7, SNR = −15 dB, P_d = 90%, s = 1, c_t = 2, r_t = 3.

Fig. 5. Sensing times for different policies when the memory of the primary network switching process is µ = 0.5. P_d = 90%, c_i = 0, s = 1, c_t = 2, r_t = 3.

Fig. 6. Sensing times for different policies when the memory of the primary network switching process is µ = 0.9. P_d = 90%, c_i = 0, s = 1, c_t = 2, r_t = 3.

Fig. 7. Effect of the delay cost on sensing time. Sensing times for different policies when the memory of the primary network switching process is µ = 0.9. P_d = 90%, c_i = 1000, s = 1, c_t = 2, r_t = 3.

Fig. 8. A scenario in which τ*(p) (the sensing time of the myopic policy ζm(·)) is nonincreasing in p when sensing is carried out. µ = 0.9, P_d = 90%, c_i = 0, s = 1, c_t = 50, r_t = 80.

Fig. 9. Normalized rewards achieved by different control policies (optimal; 10-point and 5-point grid approximations; ζm(·); ζs(·)). The rewards are normalized by that of the Genie's policy. c_i = 0, s = 1, c_t = 2, r_t = 3.

Fig. 10. Effect of increasing the delay cost on the rewards achieved by different control policies (optimal; 20-point and 10-point grid approximations; ζm(·); ζs(·)). The rewards are normalized by that of the Genie's policy. c_i = 1000, s = 1, c_t = 2, r_t = 3.
