Full backward non-homogeneous semi-Markov processes for disability insurance models: A Catalunya real data application




Insurance: Mathematics and Economics 45 (2009) 173–179


Guglielmo D'Amico a,∗, Montserrat Guillen b, Raimondo Manca c

a Dipartimento di Scienze del Farmaco, Università ‘‘G. D'Annunzio'' di Chieti, via dei Vestini 31, 66013 Chieti, Italy
b Departament d'Econometria, Estadística i Economia Espanyola, RFA-IREA, Universitat de Barcelona, E-08034, Spain
c Dipartimento di Matematica per le decisioni economiche, finanziarie e assicurative, Università di Roma ‘‘La Sapienza'', 00161 Roma, Italy

Article history: Received September 2008; received in revised form May 2009; accepted 20 May 2009.

Abstract: In this paper a stochastic model for disability insurance contracts is presented. The model is based on a discrete time non-homogeneous semi-Markov process to which the backward recurrence time process is joined. This permits a more complete study of the disability evolution and a more effective treatment of the duration problem. The model is applied to a sample of contracts drawn at random from a mutual insurance company. © 2009 Elsevier B.V. All rights reserved.

JEL classification: C02, G22. Subject Category and Insurance Branch Category: IM01, IB21. Keywords: Multi-state model; Duration model.

1. Introduction

Homogeneous semi-Markov processes (HSMP) were defined in the fifties, independently by Levy (1954) and Smith (1955). At first, they found applications mainly in the field of engineering, particularly in problems of reliability and maintenance (see, for example, Howard (1971)). Non-homogeneous semi-Markov processes (NHSMP) were defined, in different ways, by Hoem (1972) and Iosifescu Manu (1972). The second approach was further generalized in Janssen and De Dominicis (1984). The development of results on semi-Markov processes quickly found applications in finance and insurance problems; the reader can find examples in Janssen (1966), Hoem (1972), CMIR12 (1991), Carravetta et al. (1981), Balcer and Sahin (1979, 1986), Janssen and Manca (1997) and in the book by Janssen and Manca (2007).

A generalization of the NHSMP transition probabilities is obtained by introducing the backward environment. In this case, the transition probabilities also depend on the time of entrance into a given state, whereas in a ‘‘simple'' (simple because



Corresponding author. Tel.: +39 08713554609; fax: +39 08713554622. E-mail address: [email protected] (G. D’Amico).

0167-6687/$ – see front matter © 2009 Elsevier B.V. All rights reserved. doi:10.1016/j.insmatheco.2009.05.010

there are no recurrence time processes, see Janssen and Manca (2006)) semi-Markov process it is supposed that the system entered the starting state at the starting time. A detailed description of continuous time homogeneous backward semi-Markov processes is reported in Limnios and Oprişan (2001) and in Janssen and Manca (2006). The discrete time non-homogeneous backward case is presented in Stenberg et al. (2007). Backward recurrence time processes can be considered at the beginning (initial backward time) and at the end (final backward time) of the considered time horizon. To the best of the authors' knowledge, this is the first paper in which the general formulae of a discrete time NHSMP with initial and final backward stochastic processes are presented.

Section 2 of the paper is a short introduction to NHSMP. In Section 3 the NHSMP with initial and final backward recurrence time processes is explained. Section 4 presents, in the first part, the algorithm used to solve this special kind of problem and, in the second part, an insurance model useful for the study of disability evolution. Section 5 describes the disability data from a mutual insurance company from Catalunya and gives the results obtained by the model with these data. At the end of the paper some concluding remarks are outlined.


2. The discrete time non-homogeneous semi-Markov processes (DTNHSMP)

We follow the notation given in Janssen and Manca (2006). In a semi-Markov process (SMP) environment, two random variables (r.v.) run together:

Jn : Ω → I, n ∈ N, with state space I = {1, 2, . . . , m}, represents the state at the n-th transition;
Tn : Ω → N, n ∈ N, with state space N, represents the time of the n-th transition.

We suppose that the process (Jn, Tn) is a non-homogeneous Markov renewal process and by Xn = Tn+1 − Tn we denote the inter-arrival time process. The kernel Q = [Qij(s,t)] associated to the Markov renewal process is defined in the following way:

Qij(s,t) = P[Jn+1 = j, Tn+1 ≤ t | Jn = i, Tn = s],  i, j ∈ I, s, t ∈ N, s ≤ t,

and so:

pij(s) = lim_{t→∞} Qij(s,t),

where P(s) = [pij(s)] is the transition matrix of the embedded non-homogeneous Markov chain in the process.

Furthermore, it is necessary to introduce the probability that the process will leave state i from time s up to time t:

Hi(s,t) = P[Tn+1 ≤ t | Jn = i, Tn = s].

Obviously it follows that:

Hi(s,t) = Σ_{j=1}^{m} Qij(s,t).

Now it is possible to define the distribution function of the waiting time in each state i, given that the state successively occupied is known:

Fij(s,t) = P[Tn+1 ≤ t | Jn = i, Jn+1 = j, Tn = s].

The related probabilities can be obtained by means of the following formula:

Fij(s,t) = Qij(s,t)/pij(s)  if pij(s) ≠ 0,
Fij(s,t) = 1                if pij(s) = 0.

The main difference between a discrete time non-homogeneous Markov process and a DTNHSMP is in the distribution functions Fij(s,t). In a Markov environment, this function has to be a geometric d.f.; in the semi-Markov case, instead, the distribution functions Fij(s,t) may be of any type. By means of the Fij(s,t) we can take into account the problem given by the duration inside the states. In the disability context, we know that the transition probabilities depend on the time an individual has remained in a certain state level.

Now let N(t) = sup{n ∈ N | Tn ≤ t}; then the NHSMP Z(t) = J_{N(t)}, t ∈ N, can be defined as the state occupied by the process at each time. The transition probabilities are defined in the following way:

φij(s,t) = P[Z(t) = j | Z(s) = i].

They are obtained by solving the following evolution equations:

φij(s,t) = dij(s,t) + Σ_{β=1}^{m} Σ_{ϑ=s+1}^{t} biβ(s,ϑ) φβj(ϑ,t)   (2.1)

where

dij(s,t) = 1 − Hi(s,t)  if i = j,
dij(s,t) = 0            if i ≠ j,

and

bij(s,t) = P[Jn+1 = j, Tn+1 = t | Jn = i, Tn = s]
         = Qij(s,t) − Qij(s,t−1)  if t > s,
         = 0                      if t = s.

The first part of formula (2.1), dij(s,t), gives the probability that the system does not have transitions up to the time t given that it entered the state i at time s. In a disability insurance model, dij(s,t) represents the probability that the policyholder does not have any new evaluation from the time s up to the time t. This part makes sense if and only if i = j. In the second part of (2.1), biβ(s,ϑ) represents the probability that the system enters state β just at time ϑ given that it entered the state i at time s. After the transition, the system will go to state j following one of the possible trajectories that start from state β at time ϑ and bring the system to state j at time t.

To solve (2.1) it is possible to use a recursive method. As first step, by putting s = t and expressing (2.1) in matrix notation we get

Φ(t,t) = D(t,t) = I,  ∀ t ∈ N.

As second step, by putting s = t − 1 we get

Φ(t−1,t) = D(t−1,t) + Σ_{ϑ=t}^{t} B(t−1,ϑ) Φ(ϑ,t)
         = D(t−1,t) + B(t−1,t) Φ(t,t) = D(t−1,t) + B(t−1,t) I.

As third step, by putting s = t − 2 we get

Φ(t−2,t) = D(t−2,t) + Σ_{ϑ=t−1}^{t} B(t−2,ϑ) Φ(ϑ,t)
         = D(t−2,t) + B(t−2,t−1) Φ(t−1,t) + B(t−2,t) Φ(t,t)
         = D(t−2,t) + B(t−2,t−1) [D(t−1,t) + B(t−1,t)] + B(t−2,t).

Then, by backward substitution we get Φ(s,t) knowing Φ(r,t) for all r such that t ≥ r > s. Finally we follow the same steps as before by replacing t with t − 1 in Eq. (2.1), and then we iterate the procedure.

3. DTNHSMP with initial and final backward time recurrence processes

Now we give the following definition.

Definition 3.1. Let B(t) = t − T_{N(t)} be the backward recurrence time process (see Limnios and Oprişan (2001), Janssen and Manca (2006)). We denote by:

^bφij(l,s;t) = P[Z(t) = j | Z(s) = i, B(s) = s − l],   (3.1)

φij^b(s;l′,t) = P[Z(t) = j, B(t) = t − l′ | Z(s) = i],   (3.2)

where (3.1) and (3.2) represent the semi-Markov transition probabilities with initial and final backward recurrence time, respectively. In (3.1) we know that at time s the system is in the state i; we know also that it entered this state at time l, and s − l represents the initial backward time. Then we are looking for the probability of being in the state j at time t. In (3.2) we know that the system entered the state i at time s. In this case we are interested in the probability of being in the state j at time t with the entrance into this state at time l′; the final backward time is t − l′.
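The backward recurrence time B(t) = t − T_{N(t)} of Definition 3.1 is straightforward to evaluate on a simulated or observed trajectory. The following is a minimal sketch (the function name and the toy trajectory are ours, not from the paper):

```python
from bisect import bisect_right

def state_and_backward(jump_times, jump_states, t):
    """Return Z(t) = J_{N(t)} and B(t) = t - T_{N(t)}, where
    N(t) = sup{n : T_n <= t} indexes the last transition at or before t."""
    n = bisect_right(jump_times, t) - 1   # index of the last jump time <= t
    return jump_states[n], t - jump_times[n]

# Toy trajectory: the process enters state 1 at time 0, state 3 at time 4,
# and state 2 at time 9 (illustrative values only).
T_n = [0, 4, 9]
J_n = [1, 3, 2]
print(state_and_backward(T_n, J_n, 6))   # -> (3, 2): in state 3 since time 4
```

For the final backward time of (3.2), the same function evaluated at the horizon t returns t − l′.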


Fig. 1. A trajectory of NHSMP with initial and final backward times.

Putting the two cases together we obtain:

^bφij^b(l,s;l′,t) = P[Z(t) = j, B(t) = t − l′ | Z(s) = i, B(s) = s − l].   (3.3)

In Fig. 1 a trajectory of an NHSMP with initial and final backward times is reported. In this figure we have that N(s) = n, N(t) = h − 1, the starting backward B(s) = s − Tn = s − l and the final backward B(t) = t − T_{h−1} = t − l′.

To present the evolution equations of probabilities (3.1)–(3.3) we introduce the following notation:

dij(l,s;t) = (1 − Hi(l,t)) / (1 − Hi(l,s))  if i = j,
dij(l,s;t) = 0                              if i ≠ j,

which represents the probability of having no transition from state i between times l and t, given that no transition occurred from state i between times l and s. Moreover, by

bij(l,s;t) = bij(l,t) / (1 − Hi(l,s))

we denote the probability of making the next transition from state i to state j at time t, given that the system made no transition from state i between times l and s.

The relations (3.4), (3.5) and (3.6) represent the evolution equations of (3.1), (3.2) and (3.3), respectively:

^bφij(l,s;t) = dij(l,s;t) + Σ_{β=1}^{m} Σ_{ϑ=s+1}^{t} biβ(l,s;ϑ) φβj(ϑ,t),   (3.4)

φij^b(s;l′,t) = dij(s,t) 1{l′=s} + Σ_{β=1}^{m} Σ_{ϑ=s+1}^{l′} biβ(s,ϑ) φβj^b(ϑ;l′,t),   (3.5)

^bφij^b(l,s;l′,t) = dij(l,s;t) 1{l′=s} + Σ_{β=1}^{m} Σ_{ϑ=s+1}^{l′} biβ(l,s;ϑ) φβj^b(ϑ;l′,t),   (3.6)

where 1{l′=s} = 1 if l′ = s and 0 otherwise.

Expression (3.4) provides the probability that the system is in the state j at time t given that it was in the state i at time s and entered this state at time l. If in (3.4) l = s, then we recover Eq. (2.1).

Expression (3.5) gives the probability that the system will arrive in the state j just at time l′ and will remain in this state, without any other transition, up to time t, given that it entered the state i at time s. The first part of the second member of (3.5), dij(s,t) 1{l′=s}, represents the probability of not having a transition from time s to time t; consequently the final backward time t − l′ must be exactly equal to t − s, and it makes sense only if i = j. The second part of (3.5) means that the system does not move from time s to time ϑ and that, just at this time, it jumps to state β. Afterwards, following one of the possible trajectories, the system arrives in state j just at time l′ and does not move from this state at least up to time t.

Remark 1. It should be noted that, considering all the possible final backward process values, we recover the transition probabilities (2.1), that is:

φij(s,t) = Σ_{l′=s}^{t} φij^b(s;l′,t).

Expression (3.6) gives the probability that the system entered the state j at time l′ and remained inside this state without any other transition up to the time t, given that it entered the state i at time l and did not move up to s. The term dij(l,s;t) 1{l′=s} gives the probability of having no transitions outside state i from l to t, given that no transition occurred from l to s; this probability contributes only if i = j and l′ = s. The second part of (3.6) represents the probability of making the next transition from i (entered at time l) to whatever state β at whatever time ϑ, and then of moving along whatever trajectory provides for the entrance into j at time l′ with no further transition up to time t. This probability is conditioned on the permanence of the system in i from time l up to time s.

Remark 2. (3.6) is the mixture of (3.4) and (3.5). This last evolution equation is the one used to construct the model for the disability insurance. This kind of model was suggested by Haberman and Pitacco (1999), but formulae for the problem were either not available or not presented.

4. The disability model and the related algorithm

4.1. The algorithm

The relations of a discrete time initial and final backward semi-Markov process are fully described above. We present the program by means of a pseudo-language so that the reader can follow the algorithm used to obtain the solution for the given process.


The computer program that was used in the application was written in Mathematica.

Inputs:
T: time horizon that is considered;
m: number of states;
P(t): matrix of the non-homogeneous embedded Markov chain;
F(s,t): matrix of the waiting time distribution functions.

(* kernel construction *)
FOR s = 0, s ≤ T, s++,
  FOR t = s + 1, t ≤ T, t++,
    Q(s,t) = P(s) ∗ F(s,t);
  END FOR;
END FOR;

(* probability to exit from state i *)
FOR s = 0, s ≤ T, s++,
  FOR t = s + 1, t ≤ T, t++,
    FOR i = 1, i ≤ m, i++,
      Hi(s;t) = Σ_{j=1}^{m} Qij(s;t);
    END FOR;
  END FOR;
END FOR;

(* probability to have no transition from state i with initial backward time s − u *)
FOR u = 0, u ≤ T, u++,
  FOR s = u, s ≤ T, s++,
    FOR t = s + 1, t ≤ T, t++,
      FOR i = 1, i ≤ m, i++,
        dii(u,s;t) = (1 − Hi(u,t)) / (1 − Hi(u,s));
      END FOR;
    END FOR;
  END FOR;
END FOR;

(* probability to go from state i to state j just at time t with an initial backward time s − u *)
FOR u = 0, u ≤ T, u++,
  FOR s = u, s ≤ T, s++,
    FOR t = s + 1, t ≤ T, t++,
      FOR i = 1, i ≤ m, i++,
        FOR j = 1, j ≤ m, j++,
          bij(u,s;t) = (Qij(u,t) − Qij(u,t−1)) / (1 − Hi(u,s));
        END FOR;
      END FOR;
    END FOR;
  END FOR;
END FOR;

(* solution of NHSMP initial and final backward evolution equation *)
(* step 1 – final backward time only, if present *)
FOR t = T, t > 0, t−−,
  FOR l = t, l ≥ 0, l−−,
    FOR i = 1, i ≤ m, i++,
      φii^b(l,l;l,t) = dii(l,l;t); (* probability to remain always in i; if i ≠ j, dij = 0 *)
    END FOR;
    FOR s = l − 1, s ≥ 0, s−−, (* the check on the FOR condition is made at the beginning *)
      FOR k = l, k > s, k−−,
        Φ^b(s,s;l,t) += B(s,s;k) · Φ^b(k,k;l,t); (* calculation of (2.1) and (3.5) *)
      END FOR;
    END FOR;
  END FOR;

END FOR;

(* step 2 – initial and final backward time *)
FOR t = T, t ≥ 1, t−−,
  FOR l = t, l ≥ 0, l−−,
    FOR u = l − 1, u ≥ 0, u−−,
      FOR i = 1, i ≤ m, i++,
        ^bφii^b(u,l;l,t) = dii(u,l;t);
      END FOR;
    END FOR;
    FOR s = l − 1, s > 0, s−−,
      FOR u = s − 1, u ≥ 0, u−−,
        FOR k = l, k > s, k−−,
          ^bΦ^b(u,s;l,t) += B(u,s;k) · Φ^b(k,k;l,t); (* calculation of (3.4) and (3.6) *)
        END FOR;
      END FOR;
    END FOR;
  END FOR;
END FOR;

The ∗ denotes the element-by-element (Hadamard) matrix product; the · is the usual row-by-column matrix product. Variable names written in italics are real numbers, those written in boldface are matrices or vectors; i.e. pij(t) is an element of the matrix P(t).

The algorithm shows how to solve the (2.1) and (3.4)–(3.6) evolution equations. After the reading of the inputs, the kernel of the process is constructed by multiplying the matrices P and F. Then the coefficients and the known terms of the system are calculated. The last step is the resolution of the process evolution equations; this is the main part of the algorithm, and it solves (3.6) by four down-to loops. In step 1 the relations (2.1) and (3.5) are solved: the first down-to loop initializes the final backward transition probabilities, charging on the main diagonal the probabilities of having no transition from state i; the second down-to loop calculates the final backward transition probabilities by backward recursion. Step 2 computes the relations (3.4) and (3.6), attaching the initial backwards to the results obtained in step 1: the third down-to loop initializes the initial and final backward transition probabilities, charging on the main diagonal the probabilities of having no transition from state i with initial backward time s − u; the fourth down-to loop calculates the initial and final backward transition probabilities by backward recursion.

4.2. The disability model

In this subsection the semi-Markov disability model is presented.
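As a concrete complement to the pseudocode of Section 4.1, the kernel construction and the backward recursion can be sketched in Python for the plain evolution equation (2.1). This is a hedged translation, not the paper's Mathematica program: names, array layouts and the 0-based state indexing are our assumptions, and the backward relations (3.4)–(3.6) follow the same pattern with the modified d and b of Section 3.

```python
import numpy as np

def solve_evolution(P, F, T):
    """Backward recursion for the evolution equation (2.1).
    P[s]    : m x m embedded-chain matrix at time s
    F[s][t] : m x m matrix of waiting-time d.f. values F_ij(s, t)
    Returns a dict phi with phi[s, t] = [phi_ij(s, t)]."""
    m = P[0].shape[0]
    # kernel: Q_ij(s,t) = p_ij(s) * F_ij(s,t)  (element-by-element product)
    Q = {(s, t): P[s] * F[s][t] for s in range(T + 1) for t in range(s, T + 1)}
    # b_ij(s,t) = Q_ij(s,t) - Q_ij(s,t-1) for t > s
    b = {(s, t): Q[s, t] - Q[s, t - 1]
         for s in range(T) for t in range(s + 1, T + 1)}
    # D(s,t) = diag(1 - H_i(s,t)) with H_i(s,t) = sum_j Q_ij(s,t)
    D = {(s, t): np.diag(1.0 - Q[s, t].sum(axis=1))
         for s in range(T + 1) for t in range(s, T + 1)}
    phi = {}
    for t in range(T + 1):
        phi[t, t] = np.eye(m)                 # first step: Phi(t,t) = I
        for s in range(t - 1, -1, -1):        # backward substitution in s
            phi[s, t] = D[s, t] + sum(b[s, v] @ phi[v, t]
                                      for v in range(s + 1, t + 1))
    return phi
```

As a sanity check, with deterministic one-period waiting times (F(s,t) = 1 for t > s) and a two-state embedded chain that always swaps states, φ(0,2) is the identity matrix: two forced swaps return the process to its starting state.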
In Janssen and Manca (2001) it was proved that, from the numerical discretization of the continuous time non-homogeneous semi-Markov process (CTNHSMP) evolution equation, it is possible to obtain the DTNHSMP evolution equation. Furthermore, it is also shown that, reducing the discretization step to zero, the CTNHSMP evolution equation is recovered from the DTNHSMP one. For these reasons the discrete time environment is used in this application. We would also point out that if a CTNHSMP model is constructed, then to get results it will be necessary to discretize it.

The model considers initial and final backward times. The model has the following four states:

(1) W – active
(2) P – pensioner
(3) Di – disabled
(4) De – dead.

In Fig. 2 the states and transitions are presented; virtual transitions are allowed, meaning that, after a doctor visit, it is possible that the insured person remains in the same state.

Fig. 2. Evolution graph of a simple disability insurance contract.

The state De is the absorbing state; this means that in the last column of the evolution equation solution there will be the distribution function of the mortality probabilities given the starting state (see D'Amico et al. (2005)). It is well known that transition probabilities from the disabled state are functions of the duration in the current state (see Haberman and Pitacco (1999)). In the SMP environment this aspect is considered, but the solution of (2.1) is not sensitive to the duration aspect. The introduction of the backward processes as in (3.6) permits the management of transition probabilities that depend on the initial and final durations.

5. Data description

In Spain a few insurers have sold long-term care (LTC) coverage for many years as an extension to permanent and temporary disability insurance. Individuals who cannot perform daily life activities receive a constant annuity, even if they have already retired.

The data analyzed here come from a sample of contracts drawn at random from a mutual insurance company from Catalunya. 150,000 insurance contracts are analysed and 2800 LTC spells are observed over a long period, from 1975 to 2005. The data contain information on policyholders having underwritten the same type of insurance contract. The product is a disability coverage that provides a monthly compensation when a person is declared disabled. The state of disability is assessed by doctors appointed by the company on the basis of standard medical and physical tests. Disability is equivalent to a severe dependence level, because the individual is not able to perform daily life activities without the assistance of another person. Conditions to become eligible are defined as a permanent and irreversible loss of the capacity to function autonomously due to: irreversible psychotic disorder, hemiplegia, paraplegia, severe Parkinson disorder, aphasia or Wernicke disorder and dementia due to cerebral malfunction. In addition, due to the traditional practice of the company, blindness or the loss of two arms or legs are sufficient conditions to grant compensation. Even this simple product is difficult to analyse, and many simplifying assumptions have usually been adopted to carry out all the necessary actuarial calculations.

6. Results

Despite the large sample size, there were many withdrawals in the insurance company, and we removed that information. So, in order to simplify the model, we chose to work with five-year age intervals. In Fig. 3 the embedded non-homogeneous Markov chains at times 1, 2, 3 and 4 are reported; they respectively correspond to ages 20, 25, 30 and 35. The height of each colour represents the corresponding probability, which is also shown inside the histograms.
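The only two inputs constructed from such data are the embedded Markov chain P(s) and the waiting time distribution functions F(s,t). A hedged sketch of how these could be estimated from spell records follows; the record layout and names are illustrative, not the company's actual data format, and states are indexed from 0 rather than 1:

```python
import numpy as np
from collections import defaultdict

def estimate_inputs(spells, m, T):
    """Estimate P(s) and F(s,t) from spells (s, i, t, j), each meaning:
    the insured entered state i at period s and jumped to state j at period t."""
    n_trans = np.zeros((T + 1, m, m))   # n_ij(s): transitions i -> j entered at s
    jump_counts = defaultdict(int)      # jumps i -> j entered at s with time exactly t
    for s, i, t, j in spells:
        n_trans[s, i, j] += 1
        jump_counts[s, i, j, t] += 1
    # embedded Markov chain: p_ij(s) = n_ij(s) / n_i(s)
    totals = n_trans.sum(axis=2, keepdims=True)
    P = np.divide(n_trans, totals, out=np.zeros_like(n_trans), where=totals > 0)
    # waiting-time d.f.: F_ij(s,t) = #{jumps i -> j entered at s with time <= t} / n_ij(s)
    F = np.zeros((T + 1, T + 1, m, m))  # indexed F[s, t, i, j]
    for (s, i, j, t), c in jump_counts.items():
        F[s, t:, i, j] += c             # a jump at t counts for every horizon >= t
    for s in range(T + 1):
        for i in range(m):
            for j in range(m):
                if n_trans[s, i, j] > 0:
                    F[s, :, i, j] /= n_trans[s, i, j]
    return P, F
```

The resulting P(s) and F(s,t) matrices are exactly the inputs required by the algorithm of Section 4.1.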


Fig. 3. Non-homogeneous embedded Markov chain at ages 20, 25, 30 and 35.

Fig. 4. Waiting time DF of the transitions between states (four panels, from ages 20, 25, 30 and 35; probability plotted against age).

From the disability state it is possible only to go to the death state. In Fig. 4 the waiting time distribution functions are reported. It is interesting to note that up to the age class from 55 to 60 there are no pensioners, and that the waiting time distribution functions of the transitions to the disabled and dead states both have a logistic shape, despite the five-year age classes.

Finally, in Fig. 5 the different behaviour of the initial and final backward recurrence times is reported. In all histograms W is the starting state. The blue colour reports the results in the absence of an initial backward time (IBk = 0), the red colour the case with one year of initial backward recurrence time (IBk = 1). The first observation is that the probability distribution is spread among the final backward recurrence times (for example, in the south-west histogram FBk = 0, 1, 2 and 3) and the arriving states. Indeed, in the north-west part there are eight possible events with arriving time equal to starting time plus one (AT = ST + 1): four in the case of final backward time equal to 1 and four with final backward time equal to 0. In the north-east, with arriving time equal to starting time plus two (AT = ST + 2), there are twelve possible cases, four for each different final backward time, and so on. The first blue and red bars of each histogram represent the probability of staying in the starting state; it decreases as a function of the arriving time. It is also interesting to observe that the shape of the histograms changes as a function of both the initial and final backward times, so the model turns out to be sensitive to both backward times. To aid the understanding of Fig. 5, the first two bars of the first histogram represent respectively the probabilities ^bφ11^b(0,0;0,1) and ^bφ11^b(0,1;1,2).

7. Conclusions

In this paper, the evolution equation of a discrete time non-homogeneous semi-Markov process with initial and final backward recurrence times has been solved. The model was applied to a sample of an insured population. The total number of transitions was not large enough for the model to be applied to one-year age groups, so we decided to construct time classes of five years of age. The data reported only four possible states: healthy or active, pensioner, disabled and dead. Despite all these


Fig. 5. Comparison between initial and final recurrence backward times, where: ST – Starting Time; SS – Starting State; IBk – Initial Backward; VS – VerSus; AT – Arriving Time; AS – Arriving State; FBk – Final Backward.

constraints, the results were very interesting, because it was shown that changing the initial and final backward times may alter the results. We would also mention that the introduction of recurrence times does not increase the amount of data required to apply a semi-Markov model because, as in the simple semi-Markov case, the embedded Markov chain and the waiting time distribution functions are the only two inputs to be constructed from the data.

In the near future we hope to collect data sufficient to construct a non-homogeneous semi-Markov model that takes into account all significant ages of disabled individuals; furthermore, with data reflecting the severity of disability, we could also follow, by means of our model, the evolution of disability conditions. Another subsequent step will be the introduction of semi-Markov reward processes that could measure the costs and revenues of models measuring the fairness of an insurance contract.

Acknowledgement

The authors are grateful to the anonymous referee for valuable advice.

References

Balcer, Y., Sahin, I., 1979. Stochastic models for a pensionable service. Operations Research 27, 888–903.
Balcer, Y., Sahin, I., 1986. Pension accumulation as a semi-Markov reward process, with applications to pension reform. In: Janssen, J. (Ed.), Semi-Markov Models. Plenum, NY.

Carravetta, M., De Dominicis, R., Manca, R., 1981. Semi-Markov stochastic processes in social security problems. Cahiers du C.E.R.O. 23, 333–339.
CMIR12, 1991. Continuous Mortality Investigation Report, number 12. The analysis of permanent health insurance data. The Institute of Actuaries and the Faculty of Actuaries.
D'Amico, G., Janssen, J., Manca, R., 2005. Homogeneous discrete time semi-Markov reliability models for credit risk management. Decisions in Economics and Finance 28, 79–93.
Haberman, S., Pitacco, E., 1999. Actuarial Models for Disability Insurance. Chapman & Hall.
Hoem, J.M., 1972. Inhomogeneous semi-Markov processes, select actuarial tables, and duration-dependence in demography. In: Greville, T.N.E. (Ed.), Population Dynamics. Academic Press, pp. 251–296.
Howard, R., 1971. Dynamic Probabilistic Systems, Vols. I and II. Wiley, New York.
Iosifescu Manu, A., 1972. Non homogeneous semi-Markov processes. Studii si Cercetari Matematice 24, 529–533.
Janssen, J., 1966. Application des processus semi-markoviens à un problème d'invalidité. Bulletin de l'Association Royale des Actuaires Belges 63, 35–52.
Janssen, J., De Dominicis, R., 1984. Finite non-homogeneous semi-Markov processes. Insurance: Mathematics and Economics 3, 157–165.
Janssen, J., Manca, R., 1997. A realistic non-homogeneous stochastic pension funds model on scenario basis. Scandinavian Actuarial Journal, 113–137.
Janssen, J., Manca, R., 2001. Numerical solution of non-homogeneous semi-Markov processes in transient case. Methodology and Computing in Applied Probability 3, 271–293.
Janssen, J., Manca, R., 2006. Applied Semi-Markov Processes. Springer.
Janssen, J., Manca, R., 2007. Semi-Markov Risk Models for Finance, Insurance and Reliability. Springer.
Levy, P., 1954. Processus semi-Markoviens. In: Proc. of the International Congress of Mathematicians, Amsterdam.
Limnios, N., Oprişan, G., 2001. Semi-Markov Processes and Reliability. Birkhäuser, Boston.
Smith, W.L., 1955. Regenerative stochastic processes. Proceedings of the Royal Society of London, Series A 232, 6–31.
Stenberg, F., Manca, R., Silvestrov, D., 2007. An algorithmic approach to discrete time non-homogeneous backward semi-Markov reward processes with an application to disability insurance. Methodology and Computing in Applied Probability 9, 497–519.
