Neural Network Approaches to Grade Adult Depression




J Med Syst (2012) 36:2803–2815 DOI 10.1007/s10916-011-9759-1

ORIGINAL PAPER

Subhagata Chattopadhyay & Preetisha Kaur & Fethi Rabhi & U. Rajendra Acharya

Received: 5 April 2011 / Accepted: 7 July 2011 / Published online: 21 July 2011 © Springer Science+Business Media, LLC 2011

Abstract Depression is a common but worrying psychological disorder that adversely affects one's quality of life. More worryingly, its incidence is increasing. On the other hand, screening and grading of depression remain a manual and time-consuming process that might be biased. In addition, grades of depression are often reported as continuous ranges, e.g., 'mild to moderate' and 'moderate to severe', instead of the more discrete 'mild', 'moderate', and 'severe'. Grading as a continuous range is confusing to doctors and thus affects the management plan at large. Given this practical issue, the present paper attempts to differentiate depression grades more accurately using two neural net learning approaches: 'supervised', i.e., classification with Back Propagation Neural Network (BPNN) and Adaptive Network-based Fuzzy Inference System (ANFIS) classifiers, and 'unsupervised', i.e., clustering with a Self-Organizing Map (SOM), built in MATLAB 7. The reason for using both supervised and unsupervised learning approaches is that supervised learning depends exclusively on domain knowledge, and such supervision may introduce bias and subjectivity into the decision-making. Finally, the performances of BPNN and ANFIS are compared and discussed. It was observed that ANFIS, being a hybrid system, performed much better than the BPNN classifier. On the other hand, the SOM-based clustering technique is also found useful in constructing three distinct clusters. It also assists in visualizing the multidimensional data clusters in 2-D.

Keywords: Depression · Grading · BPNN · ANFIS · Classification · SOM

S. Chattopadhyay (*)
Dept. of Computer Science and Engineering, National Institute of Science and Technology, Berhampur 761008, Orissa, India
e-mail: [email protected]

P. Kaur
Dept. of Biomedical Engineering, Manipal Institute of Technology, Manipal, India
e-mail: [email protected]

F. Rabhi
School of Computer Science and Engineering, The University of New South Wales, Sydney, New South Wales, Australia
e-mail: [email protected]

U. R. Acharya
Department of ECE, Ngee Ann Polytechnic, Singapore
e-mail: [email protected]

Introduction

Approximately a hundred million people globally suffer from various depressive disorders [1, 2]. According to the World Health Organization (WHO), it is the third most common debilitating disease in the world, affecting about 2.6–12% of males and 7–21% of females [2, 3]. Depression has a relapsing course that largely hampers one's quality of life due to the cost of treatment, withdrawal from society, suicide attempts, and so forth [4]. There are many scales used for the manual screening and grading of depression, such as Hamilton's scale [5], Zung's scale [6], and Beck's depression inventory [7].


Diagnosis is made based on the final score, which is expressed within a range, and hence diagnoses are often given as 'mild-to-moderate' and 'moderate-to-severe' [8]. As such diagnoses fail to specify whether the patient is actually suffering from mild, moderate, or severe depression, the corresponding treatment plans may not be adequate. Depression is highly non-linear in nature: the cause-effect relationships are unknown in most cases, the symptoms and signs vary widely among patients, and there are multiple causes. Hence, it is difficult to model mathematically. This is an interesting research challenge in today's informatics research and it is the motivation behind the present work. To address this problem, this paper describes a novel attempt to develop classifiers based on Back Propagation Neural Network (BPNN) and Adaptive Neuro-Fuzzy Inference System (ANFIS) techniques that can classify depression grades into 'mild', 'moderate', and 'severe' more accurately. The rest of the paper is organized as follows:

- Section 2 presents related work in applying classifiers to the health domain;
- Section 3 discusses the materials and methods used in this study;
- Section 4 compares the results of the BPNN and ANFIS classifiers; and finally,
- Section 5 concludes the paper.

Related work

The concept of an Artificial Neural Network (ANN) is to mimic human-like reasoning and adaptation given a set of inputs. A human brain accomplishes reasoning through synaptic connections and somatic processing; an ANN does so by virtue of its neural connections organized in a layered architecture [9, 10]. The weighted links store the necessary information and are updated as required [9, 10]. In a feed-forward network, information propagates from the input layer through the hidden layers and finally to the output layer. Activation functions are used to compute the outputs at each layer, and the final output is determined as the output of the output layer. In a BPNN, the final output is compared with the target output from the training data, and attempts are made to minimize the difference between the calculated and target outputs through iterations. A BPNN acts more like an adaptive control system than a simple inference system and hence is a widely used approach for decision making in various medical domains.


Some interesting areas where BPNN has been used are laboratory medicine [11]; diagnosis of breast and ovarian cancers [12]; chromosome classification in gene studies [13]; detection of ophthalmic artery stenosis to prevent blindness [14]; psychosomatic diagnosis [15]; prediction of gastrointestinal disorders [16]; radiologic diagnosis [17]; analysis of heart sounds in children [18]; decision making in brain surgeries [19]; and brain structure measurement [20]. Specifically, in the mental health domain, applications of ANN include detection of seizure activity [21], psychiatric diagnosis [22], prediction of the length of hospital stay for psychiatric patients [23], grading of adult depression [24, 25], and so forth. However, to handle real-world data, a single approach, e.g., BPNN alone, might not be suitable. Hence, BPNN has been used in conjunction with various other approaches (i.e., as hybrid systems). These techniques include fuzzy logic, genetic algorithms, simulated annealing, steepest descent methods, least-squares techniques, and so forth.

Fuzzy set theory, on which Fuzzy Inference Systems (FIS) are built, was proposed by Prof. Lotfi Zadeh in 1965 [28]. An FIS is able to translate subjective human knowledge into a mathematically interpretable form by manipulating its knowledge base (obtained from the domain experts). The FIS works by calculating the membership grades (μ) of any given input using one of several available membership function shapes (such as triangular, bell-shaped, Gaussian, trapezoidal, etc.). This is the way it handles the qualitative terms encountered in our research. Fuzzy systems are popular for the interpretation of medical findings and syndrome differentiation [29]. Some of their medical applications include analysis of nystagmus [30], development of diabetic control techniques [31], diagnosis of valvular heart diseases [32], breast cancer diagnosis [33], diagnosis of gastrointestinal tumors [34], evaluation of ischemic heart diseases [35], and so on. In the mental health domain, their contributions include assessment of depression associated with obstructive sleep apnea [36], classification of depression (clinical and remission) by the fuzzy C-means algorithm [37], screening and prediction of adult psychoses [38–41], and so forth. However, developing an FIS with good performance is not an easy task, because finding the optimum membership functions and appropriate rules is computationally expensive. For this reason, efficient NN-based learning algorithms are applied to the FIS to help it learn and mature in decision-making. The Adaptive Network-based Fuzzy Inference System (ANFIS) [26, 27] is one such hybrid neuro-fuzzy system, based on the Takagi-Sugeno model [42] and developed by Jang, which uses back-propagation learning to determine the fuzzy parameters.



ANFIS has been used in diverse applications such as respiratory motion prediction [43], automatic diagnosis of fetal heart rate [44], oscillometric blood pressure estimation [45], classification of diabetes [46], diagnosis of joint pains [47], cardiac state diagnosis [48], and so on. In the neuropsychiatry domain, ANFIS has been used for classification of electroencephalogram (EEG) signals [49], brain abnormality segmentation [50], entropy-based detection of epilepsy in EEG [51], and so forth. In this paper, we have applied the ANFIS technique to the classification of depression, as our initial study with BPNN could not yield the desired results [24, 25]. This is the major motivation behind using ANFIS for grading the depression levels. It is also important to mention here that the accuracies of the BPNN and ANFIS classifiers have been tested on a set of testing data by comparing their computed outputs with the target outputs provided by the medical doctors.

Material and methods

The paper aims to grade depression cases in more discrete terms, such as 'mild', 'moderate', and 'severe'. This is a partitioning task, which has been performed using two neural network learning approaches: 'supervised' (i.e., classification) and 'unsupervised' (i.e., clustering). To accomplish the task, the methodology is composed of (i) capturing the predictors of depression and structuring them into a matrix form, (ii) examining data reliability or internal consistency, (iii) developing BPNN and ANFIS algorithms under 'supervised learning' and SOM under 'unsupervised learning', with the algorithms tested on the Iris data to examine the partitioning fidelity of the code prior to using it on the depression data, and finally (iv) accomplishing the partitioning into the three grades 'mild', 'moderate', and 'severe' using these techniques. It is worth noting that the BPNN and ANFIS approaches used in this work are supervised learners, i.e., the learning depends on training by domain experts who supply the class label for each case. Such learning could be subjective in nature, as opinions vary from person to person.

Hence, the classes thus derived with BPNN and ANFIS could be 'by chance'. With this logical assumption, we have therefore also tested the partitioning task with the Self-Organizing Map (SOM) [52], which is an unsupervised learner and is able to map multidimensional data into a lower-dimensional space for visualization. In a nutshell, the paper handles the partitioning task by both supervised and unsupervised approaches.

Capturing the predictors of depression

At first, the biological predictors of depression have been captured from the available literature [24, 25]. Depression may be assumed to be a multifaceted illness; in this paper, however, we have chosen some biological constructs, namely 'Emotional' [53], 'Cognitive' [54], 'Motivational' [55] and 'Vegetative' [55]. Finally, a set of indicators (M=15) for these constructs is captured quantitatively by setting relevant closed-ended questions. Table 1 below shows the constructs and their corresponding indicators, and Table 2 shows the questions asked and their corresponding indicators. It is important to note here that a group of psychiatrists and psychologists has been consulted to design the questionnaire. With the help of the questionnaire, anonymous depression data have been collected from various hospital sources in India after following the appropriate ethical approval processes. The answer to each question has been graded on a three-point scale as '0', '0.5', and '1.0', denoting 'disagree', 'not sure', and 'agree', respectively. In total, N=124 cases have been interviewed and the data are arranged as an N × M matrix. Out of the 124 cases, 2 were diagnosed manually as mild, 14 as moderate, and 108 as severe.

Table 1 Constructs and indicators of depression

Emotional: dejected mood, negative expression, loss of gratification, loss of emotional attachment
Cognitive: negative expectations, self-blame, indecisiveness, distorted body image
Motivational: avoidance, suicidal wishes, paralysis of will
Vegetative: loss of hunger, loss of libido, fatigability and sleeplessness



Table 2 Questionnaire generated corresponding to the indicators (response options: Agree (~0.0), Disagree (~1.0), Neutral (~0.5))

1. I often feel sad and down-hearted. (DEJECTED MOOD)
2. I don't feel good about my career. (NEGATIVE EXPECTATIONS)
3. I feel unwanted. (REDUCTION IN GRATIFICATION)
4. I still enjoy the things I used to do. (REDUCTION IN GRATIFICATION)
5. I get irritated / agitated easily. (REDUCTION IN GRATIFICATION)
6. The pleasure and joy has gone out of my life. (LOSS OF EMOTIONAL ATTACHMENT)
7. I have lost interest in the aspects of life that are important to me. (LOSS OF EMOTIONAL ATTACHMENT)
8. I feel others would be better off if I was dead. (NEGATIVE FEELINGS ABOUT SELF)
9. I feel hopeless about the future. (NEGATIVE EXPECTATIONS)
10. I think of myself as a failure. (NEGATIVE FEELINGS ABOUT SELF)
11. I have trouble sleeping at night. (SLEEP DISTURBANCES)
12. My sleep is disturbed—too little, too much or broken sleep. (SLEEP DISTURBANCES)
13. I feel fatigued. (FATIGABILITY)
14. I feel tired very often. (FATIGABILITY)
15. I still enjoy sex as before. (LOSS OF LIBIDO)
16. I eat as much as I used to. (LOSS OF APPETITE)
17. I notice that I am losing weight. (LOSS OF APPETITE)
18. I do things slowly, slower than I used to. (PARALYSIS OF WILL)
19. I find it easy to do things that I used to. (PARALYSIS OF WILL)
20. I spend time thinking how I might kill myself. (SUICIDAL WISHES)
21. I feel killing myself would solve a lot of problems in my life. (SUICIDAL WISHES)
22. I am restless and can't keep still. (AVOIDANCE)
23. I don't feel good about myself when I look into the mirror. (DISTORTION OF BODY IMAGE)
24. I find it easy to take decisions. (INDECISIVENESS)
25. I feel I am a guilty person and deserve to be punished. (SELF BLAME AND SELF CRITICISM)
26. My mind is as clear as it used to be. (NEGATIVE FEELINGS ABOUT SELF)
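As a purely illustrative aside, the following MATLAB sketch shows one way the three-point responses described above could be encoded into a numeric data matrix, using the mapping given in the text (0 = disagree, 0.5 = not sure, 1.0 = agree); the single-letter response coding and the toy 2 × 3 array are assumptions for illustration, not the authors' actual code.

```matlab
% Hypothetical sketch: encode questionnaire responses into a numeric matrix.
% Rows are respondents, columns are questions; 'A' = agree, 'D' = disagree,
% 'N' = not sure (assumed single-letter coding, for illustration only).
responses = ['A' 'D' 'N'; 'N' 'A' 'A'];   % toy 2 x 3 example instead of 124 x 26
X = zeros(size(responses));               % numeric N x M data matrix
X(responses == 'A') = 1.0;                % 'agree'    -> 1.0
X(responses == 'N') = 0.5;                % 'not sure' -> 0.5
X(responses == 'D') = 0.0;                % 'disagree' -> 0.0
disp(X)
```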

Testing the data reliability

Checking the reliability of any data set is mandatory prior to putting it into the actual experiment. In this study, we have checked the reliability of the data by computing Cronbach's α [56]. It is a coefficient of consistency or reliability rather than a statistical test, and it is available in various statistical tools, such as MINITAB-15, SPSS-15, etc. The expression for computing α is as follows:

\alpha = \frac{N \bar{c}}{\bar{v} + (N - 1)\bar{c}}    (1)

where N is the number of items, \bar{c} is the average inter-item covariance, and \bar{v} is the average inter-item variance. From the equation, it is obvious that the value of α increases with N and with the average inter-item covariance \bar{c}.
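As a quick illustration of Eq. 1, the following MATLAB sketch computes Cronbach's α from a data matrix using the average inter-item covariance and variance; the placeholder data and variable names are assumptions for illustration, not the authors' original code.

```matlab
% Minimal sketch: Cronbach's alpha for a data matrix X (rows = cases, columns = items),
% following Eq. 1: alpha = N*cbar / (vbar + (N-1)*cbar).
X    = rand(124, 15);                            % placeholder data, for illustration
N    = size(X, 2);                               % number of items
C    = cov(X);                                   % item covariance matrix
vbar = mean(diag(C));                            % average inter-item variance
cbar = (sum(C(:)) - sum(diag(C))) / (N*(N-1));   % average inter-item covariance
alpha = (N * cbar) / (vbar + (N - 1) * cbar);
fprintf('Cronbach''s alpha = %.2f\n', alpha);
```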

BPNN and ANFIS classifier design: supervised learning approach

This subsection describes the design of both the BPNN and ANFIS classifiers. The performance of these classifiers is then tested and compared using two data sets—the first is the Iris data [57] (a well-acclaimed data set used here for testing the accuracy of our MATLAB code) and the second is our real-world depression data. A random selection of 62 tuples out of the total of 124 is used for training, and the remaining 62 are used for testing the performance of the classifiers in each case.

BPNN classifier development

A BPNN is essentially a gradient-based iterative search for the optimum set of weights (i.e., the information inside the net) that minimizes the Mean Squared Error (MSE) between the network's class prediction and the known target values of the training cases. Furthermore, the assigned learning rate helps to avoid getting stuck in a local minimum in the given decision space. It is therefore an appropriate choice for such a classification problem, where accuracy is the desired outcome. The degree of accuracy depends on training, which in turn depends on the size and quality of the training data.


Another important advantage of any BPNN is its high tolerance of noisy data. Hence, BPNN has been widely used on medical data. It is also worth mentioning here that a K-times hold-out random sub-sampling has been performed to test the classifiers' best accuracy as well as their robustness [58]; in the usual form of this method, two-thirds of the data set is used for training and one-third for testing, and the procedure is repeated (here, 10 times) with different training and testing combinations to check robustness. In this study, the depression data have been subdivided into two equal segments, half used for training and half for testing the classifier, and this procedure is repeated 10 times to obtain different combinations of training and testing data and to check robustness. Only fifty percent of the data has been used for training to make sure that, when the classifier is tested on the remaining fifty percent of the data set, a substantial number of test cases is available to draw an inference about the classifier's performance. Errors in prediction by the developed BPNN classifier are calculated using the percentage deviation, i.e., the percentage difference between the target and calculated outputs over all calculated outputs. A regression study is then performed to find the best correlation coefficient for ascertaining the goodness of the model fit. Furthermore, the accuracy of the classifier is measured as the number of correctly classified tuples out of the total number of tuples. It is important to note that, as we predict only one class label at a time, sensitivity and specificity are not measured; instead, we measured how many cases the classifier was able to correctly diagnose/label and how many it could not. We compared the computed values of depression with the values given by the medical doctors to measure the accuracy of the developed BPNN classifier.
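The repeated 50/50 hold-out evaluation and the accuracy and percentage-deviation measures described above can be sketched as follows; a trivial nearest-centroid rule stands in for the trained BPNN purely to keep the sketch runnable, and the placeholder data are assumptions rather than the study's real matrix.

```matlab
% Sketch of the K-times hold-out random sub-sampling described above (K = 10, 50/50 split).
% X: N x M feature matrix; T: N x 1 target grades (1 = mild, 2 = moderate, 3 = severe).
rng(1);
X = rand(124, 26);  T = randi(3, 124, 1);    % placeholder data, for illustration only
K = 10;  N = size(X, 1);
acc = zeros(K, 1);  dev = zeros(K, 1);
for k = 1:K
    idx   = randperm(N)';
    trIdx = idx(1:floor(N/2));               % 50% of tuples for training
    teIdx = idx(floor(N/2)+1:end);           % remaining 50% for testing
    mu = zeros(3, size(X, 2));               % "training": class centroids (BPNN stand-in)
    for c = 1:3
        mu(c, :) = mean(X(trIdx(T(trIdx) == c), :), 1);
    end
    D = zeros(numel(teIdx), 3);              % "prediction": nearest centroid
    for c = 1:3
        D(:, c) = sum((X(teIdx, :) - mu(c, :)).^2, 2);
    end
    [~, Y] = min(D, [], 2);
    acc(k) = mean(Y == T(teIdx)) * 100;                  % accuracy (%)
    dev(k) = mean(abs(T(teIdx) - Y) ./ T(teIdx)) * 100;  % percentage deviation
end
fprintf('Mean accuracy over %d runs: %.1f%%\n', K, mean(acc));
```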



ANFIS classifier development

As mentioned earlier, ANFIS constructs a Sugeno-based Fuzzy Inference System (FIS) whose membership functions are tuned using a BPNN algorithm. A subtractive clustering algorithm [59] has been used to generate the FIS, such that the data is first clustered and the minimum number of fuzzy rules required to distinguish amongst the clusters is generated, in order to overcome the curse of dimensionality, a concern with our depression data. It is interesting to note that the firing strengths could not be defined beforehand, because the generation of rules is not controlled directly by the user. The number of generated rules is inversely related to the cluster radius, a user-defined value in [0, 1]. In this study, a cluster radius of 0.5 has been chosen so as not to have too few or too many rules. The Root Mean Square Error (RMSE) during training and testing is measured by calculating the square root of the arithmetic mean of the squared differences between the computed and target outputs. Optimization of the ANFIS classifier is attained in as few as 50 epochs, which makes it a fast and efficient classifier. The accuracy of the ANFIS is measured as the number of correctly classified tuples out of the total number of tuples. As only one class label is predicted at a time in this study, sensitivity and specificity are not measured; instead, we measured how many cases the ANFIS was able to correctly diagnose/label and how many it could not, and based on these values we measured the accuracy of the developed ANFIS.

The basic structure of ANFIS [60] is illustrated in Fig. 1 (general structure of ANFIS). It is a five-layered structure driven by two inputs and producing one output; the symbol Π represents the product, N the normalization, and Σ the summation. Layer 1 is represented as follows:

O_i^1 = \mu_{A_i}(x)    (2)

where x is the input to node i and O_i^1 is the membership grade of A_i, usually taken to be bell-shaped with maximum equal to 1 and minimum equal to 0.

Layer 2 multiplies the incoming signals and can be represented as follows:

w_i = \mu_{A_i}(x) \times \mu_{B_i}(y), \quad i = 1, 2    (3)

Here, each node output represents the firing strength of a rule. In Layer 3, the i-th node calculates the ratio of the i-th rule's firing strength to the sum of all rules' firing strengths:

\bar{w}_i = w_i / (w_1 + w_2), \quad i = 1, 2    (4)

The outputs from this layer are called normalized firing strengths. Layer 4 can be represented as follows:

O_i^4 = \bar{w}_i f_i = \bar{w}_i (p_i x + q_i y + r_i)    (5)

where \bar{w}_i is the output of Layer 3 and p_i, q_i and r_i constitute the parameter set. Layer 5 computes the overall output as the summation of all incoming signals:

O_i^5 = \text{overall output} = \sum_i \bar{w}_i f_i = \frac{\sum_i w_i f_i}{\sum_i w_i}    (6)

For the two-rule case, the output f can therefore be written as follows:

f = \frac{w_1}{w_1 + w_2} f_1 + \frac{w_2}{w_1 + w_2} f_2    (7)
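To make Eqs. 2–7 concrete, the following MATLAB sketch evaluates a two-input, two-rule first-order Sugeno system of the kind ANFIS tunes; the membership-function and consequent parameters are arbitrary illustrative values, not the parameters learned in this study.

```matlab
% Minimal forward pass through the five ANFIS layers (Eqs. 2-7) for two rules.
x = 0.3;  y = 0.7;                                        % two crisp inputs
bell = @(z, a, b, c) 1 ./ (1 + abs((z - c)./a).^(2*b));   % generalized bell membership
% Layer 1: membership grades (Eq. 2); parameters chosen arbitrarily for illustration.
muA = [bell(x, 0.5, 2, 0), bell(x, 0.5, 2, 1)];
muB = [bell(y, 0.5, 2, 0), bell(y, 0.5, 2, 1)];
w    = muA .* muB;                 % Layer 2: firing strengths (Eq. 3)
wbar = w / sum(w);                 % Layer 3: normalized firing strengths (Eq. 4)
p = [1.0, -0.5];  q = [0.8, 0.2];  r = [0.1, 0.6];        % consequent parameters
f  = p*x + q*y + r;                % rule outputs f_i = p_i*x + q_i*y + r_i
O4 = wbar .* f;                    % Layer 4 (Eq. 5)
out = sum(O4);                     % Layer 5: overall output (Eqs. 6-7)
fprintf('Sugeno-type output f = %.4f\n', out);
```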

SOM algorithm development: unsupervised learning approach

SOM is a self-organizing neural net that learns in an unsupervised manner through iterative competitions that select winner neurons [52]. It has been successfully used to visualize multidimensional clustered data of psychosis disorders [41]. Its most promising feature is that the original topology of the data is duly preserved. It can therefore be viewed as a non-linear generalization of Principal Component Analysis [61]. The operations of SOM are 'competition' among the neurons lying on the competition layer to select the 'winner' (see Eq. 8), 'cooperation' to obtain its neighborhood (see Eq. 9), and 'updating' of the synaptic weights of the 'winner' and its neighborhood (see Eq. 10) [52].

n(X_i) = \arg\min_j \lVert X_i - W_{ji} \rVert    (8)

h_{j, n(X_i)}(t) = \exp\!\left( - \frac{d_{j, n(X_i)}^2}{2\sigma^2} \right)    (9)

W_{ji}(t+1) = W_{ji}(t) + \eta(t)\, h_{j, n(X_i)}(t) \left[ X_i - W_{ji}(t) \right]    (10)

In Eq. 8, X_i and W_{ji} are the input vectors and their corresponding synaptic weights, where i = 1, 2, 3, ..., N (sample size) and j denotes a neuron in the competition layer; n(X_i) denotes the winning neuron, i.e., the neuron with the minimum Euclidean distance to X_i. Equation 9 uses a Gaussian distribution to compute the neighborhood, where d_{j,n(X_i)} is the distance of neuron j from the winner and σ is the neighborhood width. In Eq. 10, t refers to the iteration and η is the learning rate.
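A single competition-cooperation-update iteration implementing Eqs. 8–10 can be sketched in MATLAB as follows; the map size, neighborhood width σ, and random data are illustrative assumptions (the study itself used η = 0.05 and up to 500 iterations).

```matlab
% One SOM step (Eqs. 8-10) for a single input vector on a small 2-D map.
rng(1);
M      = 26;                        % input dimension (number of questionnaire items)
gsize  = 10;                        % 10 x 10 competition layer, assumed for illustration
W      = rand(gsize*gsize, M);      % synaptic weight vector of each neuron
[r, c] = ndgrid(1:gsize, 1:gsize);
pos    = [r(:), c(:)];              % neuron coordinates on the 2-D map
Xi     = rand(1, M);                % one input case
eta    = 0.05;                      % learning rate (value used in the study)
sigma  = 2;                         % neighborhood width, illustrative
[~, win] = min(sum((W - Xi).^2, 2));        % competition (Eq. 8): winner neuron
d2 = sum((pos - pos(win, :)).^2, 2);        % squared map distance to the winner
h  = exp(-d2 / (2*sigma^2));                % cooperation (Eq. 9): Gaussian neighborhood
W  = W + eta * h .* (Xi - W);               % update (Eq. 10): move towards the input
```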




Mapping of the multidimensional data into a lower dimension, i.e., 2-D or 3-D, is done by calculating the distances of the 'winners' from the origin of the higher-dimensional space, which is kept at (0,0). The winning neurons are then located in a 2-D space and hence become visible without any alteration of the topological information [41].

Results and discussions

This section discusses the results of the study as follows.

Results of internal consistency test

Internal consistency testing was performed to understand the reliability of the real-world data. The measure of α for the depression data has been calculated as 0.77, which indicates good/acceptable reliability of the data [62]. Cronbach's α can range from 0 to 1, and a minimum score of 0.7 is required to declare the data set acceptable [63]. As Iris is a well-acclaimed dataset, it does not require internal consistency testing.

Results of BPNN-only classifier

On Iris data

The Iris data matrix, which is 150 × 3, has been divided in half, and the two halves used for training and testing the developed BPNN classifier, respectively. The topology of the BPNN is set as follows: one input layer consisting of 4 neurons, one hidden layer consisting of 2 neurons, and one output layer consisting of 2 neurons. It is important to note that the optimized values of the learning rate and the momentum or gradient (chosen so that the search does not get stuck in a local optimum) are 0.3 and 0.6, respectively, based on the most convergent values obtained through iterations. The learning rate was varied from 0.1 to 1, keeping the momentum constant, and once the most optimized value (with the least mean squared error) was found, the learning rate was kept constant and the momentum was optimized in a similar manner. As the target outputs are known, the calculated outputs are matched against them inside MATLAB, and in this case all the class labels of the Iris data could be predicted with 100% accuracy.

On depression data

There are 124 adult depression cases registered in this study. Sixty-two tuples have been randomly selected for training and the remaining 62 tuples have been used for testing the performance of the learned BPNN classifier. In the test data, there are 50 'severe', ten 'moderate', and two 'mild' cases to be classified. For this data, the topology of the BPNN is as follows: one input layer consisting of 26 neurons, one hidden layer empirically set to 10 neurons, and one output layer consisting of 2 neurons, because our objective is to diagnose one depression grade at a time with as much accuracy as possible, thus making the classifier more specific.



In this classification task, we have also checked the optimum learning rate and momentum/gradient through iterations; the optimum value is 0.4 for both the learning rate and the momentum. Finally, the accuracy of classification is measured for each class label (mild, moderate and severe), as shown in Table 3.

Table 3 BPNN-based classification (test inputs = 62)

Depression grade   Correctly classified   Incorrectly classified   Accuracy (%)
Mild               02                     00                       100
Moderate           08                     02                       80
Severe             44                     06                       88
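The two-stage tuning of the learning rate and momentum described above (vary one while holding the other fixed and keep the value with the least mean squared error) can be sketched as follows; the anonymous surrogate function merely stands in for a full BPNN training run and is an assumption for illustration.

```matlab
% Sketch of the coordinate-wise search for learning rate and momentum.
% In practice each evaluation would train the BPNN and return its MSE; here a
% dummy surrogate with a minimum near 0.4/0.4 is used so the sketch is runnable.
mseOf = @(lr, mom) (lr - 0.4).^2 + (mom - 0.4).^2 + 0.01;   % illustrative surrogate only
rates = 0.1:0.1:1.0;
momFixed = 0.5;                                   % momentum held constant in stage 1
mse1 = arrayfun(@(lr) mseOf(lr, momFixed), rates);
[~, i] = min(mse1);  bestLr = rates(i);           % stage 1: best learning rate
mse2 = arrayfun(@(mom) mseOf(bestLr, mom), rates);
[~, j] = min(mse2);  bestMom = rates(j);          % stage 2: best momentum at that rate
fprintf('Selected learning rate = %.1f, momentum = %.1f\n', bestLr, bestMom);
```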


Results of ANFIS

On Iris data

Seventy-five random tuples out of the 150 Iris data tuples (i.e., 50%) are used for training and the remaining 50% are used for testing the proposed ANFIS classifier; that is, 25 tuples from each of the 3 classes were used for training and 25 for testing.


The ANFIS could predict the class labels for Iris setosa with an accuracy of 100% (all 25 test tuples lie in the class-label range 0.5–1.5), Iris versicolor with 96% (24 test tuples lie in the range 1.5–2.5) and Iris virginica with 88% (22 test tuples lie in the range 2.5–3.5). The RMS error associated with training is as low as 0.0116 and that associated with testing is 0.2368. A plot of the target classes vs. the predicted classes for the Iris data set gives an idea of the accuracy of the ANFIS (Fig. 2: target (line) vs. predicted (circle) plots for the Iris data).

On depression data

Sixty-two random tuples out of the 124 depression data tuples are used for testing the performance of the ANFIS classifier. The classifier was able to predict the class labels for 'mild' and 'moderate' cases with 100% accuracy, while 'severe' cases were predicted with an accuracy of 92%. Hence, ANFIS yields better results than BPNN alone. The RMS error associated with training is as low as 0.0047 and that associated with testing is 0.09356. A plot of the target classes vs. the predicted classes for the depression data set gives an idea of the accuracy of the ANFIS (Fig. 3: target (line) vs. predicted (circle) plots for the depression data).
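The class-label ranges quoted above (0.5–1.5 for class 1, 1.5–2.5 for class 2, 2.5–3.5 for class 3) amount to rounding the continuous classifier output to the nearest integer label; a minimal MATLAB sketch of this binning and of the per-class accuracies reported in Tables 3 and 4 is shown below, with made-up outputs purely for illustration.

```matlab
% Bin continuous classifier outputs into class labels and compute per-class accuracy.
yCont  = [0.9 1.2 2.1 1.8 2.6 3.2 2.9 1.1];    % made-up continuous outputs
target = [1   1   2   2   3   3   3   1  ];    % made-up true grades
yClass = min(max(round(yCont), 1), 3);         % 0.5-1.5 -> 1, 1.5-2.5 -> 2, 2.5-3.5 -> 3
for c = 1:3
    inClass = (target == c);
    nCorrect = sum(yClass(inClass) == c);
    fprintf('Class %d: %d of %d correct (%.0f%%)\n', ...
            c, nCorrect, sum(inClass), 100 * nCorrect / sum(inClass));
end
```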

Finally, the accuracy of classification measured for each class label (mild, moderate and severe) can be seen in Table 4.

Table 4 ANFIS-based classification on 62 test cases

Depression grade   Correctly classified   Incorrectly classified   Accuracy (%)
Mild               02                     00                       100
Moderate           10                     00                       100
Severe             46                     04                       92

Results of SOM

On Iris data

On the Iris data, SOM is able to cluster the dataset into three plots, one each for Iris Setosa, Verginica and Versicolor (Fig. 4: (a) Iris Setosa, (b) Iris Verginica, (c) Iris Versicolor).

On depression data

SOM has then been used on the 62 test cases. The best learning rate has been carefully chosen (η = 0.05) based on the 'best' possible updating of the winner neuron and its neighborhood, which was achieved at the 200th iteration out of the 500 iterations initially assigned. It may be noted that the movement of each case into/out of a cluster has been monitored while coding the SOM. To obtain information about the original class labels, the properties of the cases under each cluster are matched with the cases inside the main dataset. A case-wise description of the SOM clusters is shown in Table 5. SOM then plots these clusters in more objective terms (described before); the 2-D plots are shown in Fig. 5 ((a) SOM-based cluster 1, mild; (b) SOM-based cluster 2, moderate; (c) SOM-based cluster 3, severe).



Table 5 SOM-based clusters of 62 test cases

Depression grade   Correctly clustered   Incorrectly clustered   Accuracy (%)
Mild               02                    00                      100
Moderate           09                    01                      90
Severe             49                    01                      98

So, finally, we may state that SOM could also produce three distinct clusters, thereby validating the classification task, which by itself might produce subjective classes.

Conclusions

This paper proposes two approaches, classification (supervised learning) and clustering (unsupervised learning), to partition the grades of depression into discrete terms, such as 'mild', 'moderate', and 'severe', instead of the continuous terms described before. Under the first approach, BPNN- and ANFIS-based classifiers are developed for predicting the depression grades; SOM has been used as the second approach. We have collected depression parameters (four constructs and their respective indicators) from the available literature. Information about the indicators has been captured through a questionnaire, and their individual weights are mapped onto a three-point scale (0, 0.5 and 1.0) to represent the grade levels. Domain experts helped us in generating the questionnaire for data collection and quantification. The data thus collected are secondary in nature, and hence their reliability has been checked by estimating Cronbach's α, which is 0.77 for this dataset; this substantiates that the collected depression data are reliable and consistent. While assessing the performance of the proposed BPNN classifier, it is evident that the algorithm can predict mild depression with 100% accuracy, while for moderate and severe depression it shows 80% and 88% accuracy, respectively. On the other hand, the ANFIS results are much more encouraging, because this classifier could predict 'mild' and 'moderate' depression with 100% accuracy and the 'severe' class with 92% accuracy. It is worth mentioning here that the maximum expected deviation of the testing data from the training data is with respect to the 'moderate' class, as the borderline distinction of classes can be complex. The same has been demonstrated by the BPNN results, where the classifier has its least accuracy of 80% for the moderate class, but ANFIS shows 100% accuracy for the classification of moderate-class tuples.

As mentioned earlier, ANFIS constructs a Sugeno-based Fuzzy Inference System (FIS) whose membership function parameters are tuned using a BPNN algorithm. A Subtractive Clustering Algorithm (SCA) has been used in this study to generate the FIS such that the data is first clustered and the minimum number of fuzzy rules required to distinguish amongst the clusters is generated, to overcome the curse of dimensionality, a concern with our depression data. Another reason is that the accuracy might be further increased due to BPNN's inherent property of incremental learning. It may be noted that such classification could lead to subjective classes, as the supervised training could be biased. To handle this logical issue, SOM has been implemented to cross-check whether it, too, could render clear-cut grades. It may be noted that SOM is able to produce three distinct clusters, which are more objective in nature and are able to validate our technique. Although it is difficult to label a cluster, because the target class is unknown, in this study the properties of each clustered case have been carefully tracked to obtain a clue about its respective class label. Here, SOM is able to predict 'mild' cases with 100% accuracy, while 'moderate' and 'severe' cases are predicted with 90% and 98% accuracy, respectively. In summary, it may be stated that neural network approaches are useful techniques for handling non-linear clinical data having subjectivities in the class labels. The contributions of this paper are the integration of various biological predictors of depression, modeling them in a neural net, and designing classifiers to handle higher-dimensional subjective data.


The limitation is that the sample size is small, as access to real-world data is restricted due to ethical and legal considerations.

Acknowledgement The authors gratefully acknowledge the psychologist and psychiatrist colleagues who helped in collecting the depression data.

References

1. Gotlib, I. H., and Hammen, C. L., Psychological aspects of depression: Toward a cognitive-interpersonal integration, vol. xi. John Wiley & Sons, Oxford, England, pp. 330, 1992. The Wiley series in clinical psychology.
2. http://www.who.int/topics/depression/en/
3. Keller, M. B., Depression: a long-term illness. Br. J. Psychiatry 26:9–15, 1994.
4. Kaplan, H. I., Saddock, B. J., and Greb, J. A., Synopsis of Psychiatry, Behavioural Science and Clinical Psychiatry. B I Waverly Pvt. Ltd, New Delhi, India, pp. 803–823, 1994.
5. Hamilton, M., A rating scale for depression. J. Neurol. Neurosurg. Psychiatry 23:56–62, 1960.
6. Zung, W. W. K., The depression status inventory: An adjunct to the self-rating depression scale. J. Clin. Psychol. 28(4):539–543, 1972.
7. Beck, A. T., and Alford, B. A., Depression: Causes and Treatment, 2nd Edition. University of Pennsylvania Press, 2008.
8. Bagby, R. M., Andrew, G. R., Deborah, R. S., and Marshall, M. B., The Hamilton Depression Rating Scale. Am. J. Psychiatry 161:2163–177, 2004.
9. Chen, H., Fuller, S. S., Friedman, C., and Hersh, W., Knowledge management, data mining, and text mining in medical informatics. In: Chen, H., Fuller, S. S., Friedman, C., and Hersh, W. (Eds.), Medical Informatics: Knowledge Management and Data Mining in Biomedicine. Springer's Integrated Series in Information Systems, NY, USA, pp. 4–30, 2005.
10. Kohonen, T., Self organizing maps. Springer Verlag, Berlin, 1995.
11. Astion, M. L., and Wilding, P., The application of backpropagation neural networks to problems in pathology and laboratory medicine. Arch. Pathol. Lab. Med. 116(10):995–1001, 1992.
12. Wilding, P., Morgan, M. A., Grygotis, A. E., Shoffner, M. A., and Rosato, E. F., Application of backpropagation neural networks to diagnosis of breast and ovarian cancer. US National Library of Medicine 77(2–3):145–53, 1994.
13. Cho, J. M., Chromosome classification using backpropagation neural networks. IEEE 19(1):28–33, 2000.
14. Guler, I., and Ubeyli, E. D., Detection of ophthalmic artery stenosis by least-mean squares backpropagation neural network. Engineering Applications of Artificial Intelligence 18(4):413–422, 2003.
15. Aruna, P., Puviarasan, N., and Palaniappan, B., An investigation of neurofuzzy system in psychosomatic disorders. Exp. Syst. Appl. 28(4):673–679, 2005.
16. Aruna, P., Puviarasan, N., and Palaniappan, B., Neuro-fuzzy model for diagnosis of gastrointestinal disorders. In: Proc. of the 5th International Conference on Neural Networks & Expert Systems in Medicine & Healthcare and the 1st International Conference on Computational Intelligence in Medicine & Healthcare, Sheffield Hallam University, England, 2003.
17. Tu, J. V., Advantages and disadvantages of using artificial neural networks versus logistic regression for predicting medical outcomes. J. Clin. Epidemiol. 49:1225–1231, 1996.

18. Bhatikar, S. R., DeGroff, C., and Mahajan, R. L., A classifier based on the artificial neural networks for cardiologic auscultation in pediatrics. Artif. Intell. Med. 33(3):251–60, 2004.
19. Li, Y., Liu, L., Chiu, W., and Jian, W., Neural network modeling for surgical decisions on traumatic brain injury patients. Int. J. Med. Inform. 57(1):1–9, 2000.
20. Magnotta, V. A., Andreasen, N. C., Heckel, D., Cizadlo, T., Corson, P. W., Ehrhardt, J. C., and Yuh, W. T. C., Measurement of brain structures with artificial neural networks: Two and three dimensional applications. Radiology 211:781–90, 1999.
21. Pradhan, N., Sadasivan, P. K., and Arunodaya, G. R., Detection of seizure activity in EEG by an artificial neural network: A preliminary study. Comput. Biomed. Res. 29(4):303–13, 1999.
22. Zou, Y., Shen, Y., Shu, L., Wang, Y., Feng, F., Xu, K., Ou, Y., Song, Y., Zhong, Y., Wang, M., and Liu, W., Artificial neural network to assist psychiatric diagnosis. Br. J. Psych. 169:64–67, 1996.
23. Davis, G. E., Lowell, W. E., and Davis, G. L., A neural network that predicts psychiatric length of stay. MD Comput. 10(2):87–92, 1993.
24. Chattopadhyay, S., Kaur, P., Rabhi, F., and Acharya, U. R., An automated system to diagnose the severity of adult depression. In: Jana, D., and Pal, P. (Eds.), Proceedings of the 2nd International Conference on Emerging Areas of IT, pp. 121–124, 2011.
25. Chattopadhyay, S., Kaur, P., Rabhi, F., and Acharya, U. R., Automatic grading of adult depression using a back propagation neural net classifier. In: Dua, S., and Acharya, U. R. (Eds.), Advances in Data Mining in Biomedical Signalling, Imaging, and Systems. CRC Press, USA (accepted 2010; in press).
26. Jang, J. S. R., ANFIS: Adaptive-Network-based Fuzzy Inference System. IEEE Trans. Syst. Man Cybern. 23(3):665–685, 1993.
27. Vieira, J., Dias, F. M., and Mota, A., Neuro-fuzzy systems: A survey. 5th WSEAS NNA International Conference, 2004.
28. Zadeh, L. A., Fuzzy sets. Inform. Control 8:338–353, 1965.
29. Phuong, N. H., and Kreinovich, V., Fuzzy logic and its applications in medicine. I. J. Med. Informatics 62(2):165–173, 2001.
30. Arzi, M., and Magnin, M., A fuzzy set theoretical approach to automatic analysis of nystagmic eye movements. IEEE Trans. Biomed. Eng. 36(9):954–963, 1989.
31. Grant, P., A new approach to diabetic control: Fuzzy logic and insulin pump technology. Med. Eng. Phys. 29(7):824–827, 2007.
32. Watanabe, H., Yakowenko, W. J., Kim, Y.-M., Anbe, J., and Tobi, T., Application of a fuzzy discrimination analysis for diagnosis of valvular heart disease. IEEE T. Fuzzy Syst. 2(4):267–276, 1994.
33. Kovalerchuk, B., Triantaphyllou, E., Ruiz, J. F., and Clayton, J., Fuzzy logic in computer-aided breast cancer diagnosis: Analysis of lobulation. Artif. Intell. Med. 11(1):75–85, 1997.
34. Schineider, J., Bitterlich, N., and Schulze, G., Improved sensitivity in the diagnosis of gastro-intestinal tumors by fuzzy logic-based tumor marker profiles including the tumor M2-PK. International Journal of Cancer Research and Treatment 25(3):1507–1515, 2005.
35. Presedo, J., Vila, J., Delgado, M., Barro, S., Palacios, F., and Ruiz, R., A proposal for the fuzzy evaluation of ischaemic episodes. Comput. Cardiol., 709–712, 1995.
36. McBurnie, K., Kwiatkowska, M., Matthews, L., and D'Anguiulli, A., A multi-factor model for the assessment of depression associated with obstructive sleep apnea: A fuzzy logic approach. Annual Meeting of the North American Fuzzy Information Processing Society, pp. 301–306, 2007.
37. Yu, S.-C., and Lin, Y.-H., Applications of fuzzy theory on health care: An example of depression disorder classification based on FCM. WSEAS Transactions on Information Science & Applications 5(1):31–36, 2008.
38. Chattopadhyay, S., Pratihar, D. K., and De Sarkar, S. C., Statistical modelling of psychoses data. Comput. Methods Programs Biomed. 100(3):222–236, 2010.
39. Chattopadhyay, S., Pratihar, D. K., and De Sarkar, S. C., Fuzzy logic-based screening and prediction of adult psychoses: A novel approach. IEEE T. Syst. Man Cy. A 39(2):381–387, 2009.
40. Chattopadhyay, S., Pratihar, D. K., and De Sarkar, S. C., Developing fuzzy classifiers to predict the chance of occurrence of adult psychoses. Knowledge-Based Systems 20:479–497, 2008.
41. Chattopadhyay, S., Pratihar, D. K., and De Sarkar, S. C., Some studies on fuzzy clustering of psychosis data. International Journal of Business Intelligence and Data Mining 2(2):143–159, 2007.
42. Takagi, T., and Sugeno, M., Fuzzy identification of systems and its application to modeling and control. IEEE Transactions on Systems, Man and Cybernetics SMC-15:116–132, 1985.
43. Manish, K., Hakan, N., Aarup, L. R., Nottrup, T. J., and Olsen, D. R., Respiratory motion prediction by using the ANFIS. Phys. Med. Biol. 50(19), 2005.
44. Magenes, G., Signorini, M. G., and Sassi, R., Automatic diagnosis of fetal heart rate: Comparison of different methodological approaches. Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2:1604–1607, 2001.
45. Forouzanfar, M., Dajani, H. R., Groza, V. Z., Bolic, M., and Rajan, S., ANFIS for oscillometric blood pressure estimation. IEEE International Workshop on Medical Measurements and Applications Proceedings, pp. 125–129, 2010.
46. Vosoulipour, A., Teshnehlab, M., and Moghadam, H. A., Classification on diabetes mellitus data-set based on ANN and ANFIS. Proceedings of the 4th Kuala Lumpur International Conference on Biomedical Engineering, pp. 27–30, 2008.
47. Ozkan, A. O., Sadik, K., Salli, A., Sakarya, M. E., and Gunes, S., Medical diagnosis of rheumatoid arthritis disease from right and left hand ulnar artery Doppler signals using ANFIS and MUSIC methods. Adv. Eng. Softw. 41(12):1295–1301, 2010.
48. Kannathal, N., Lim, C. M., Acharya, U. R., and Sadasivan, P. K., Cardiac state diagnosis using adaptive neuro fuzzy technique. Med. Eng. Phys. 28:809–815, 2006.
49. Vatankhah, M., and Yaghubi, M., ANFIS for classification of EEG signals using fractal dimension. 3rd UKSIM European Symposium on Computer Modeling and Simulation, pp. 214–218, 2009.

50. Noor, N. M., Khalid, N. E. A., Hassan, R., Ibrahim, S., and Yassin, I. M., ANFIS for brain abnormality segmentation. Control and System Graduate Research Colloquium, IEEE, pp. 68–70, 2010.
51. Kannathal, N., Lim, C. M., Acharya, U. R., and Sadasivan, P. K., Entropies for detection of epilepsy in EEG. Comput. Methods Programs Biomed. 80:187–194, 2005.
52. Kohonen, T., Self organizing maps. Proc. IEEE 78(9):1464–80, 1990.
53. Ball, H. A., McGuffin, P., and Farmer, A. E., Attributional style and depression. Br. J. Psychiatry 192:275–278, 2008.
54. Austin, M. P., Mitchell, P., and Goodwin, G. M., Cognitive deficits in depression: Possible implications for functional neuropathology. Br. J. Psychiatry 178:200–206, 2001.
55. Forsell, Y., Jorm, A. F., and Winblad, B., Association of age, sex, cognitive dysfunction, and disability with major depressive symptoms in an elderly sample. Am. J. Psychiatry 151:1600–1604, 1994.
56. Cronbach, L. J., Coefficient alpha and the internal structure of tests. Psychometrika 16:297–334, 1951.
57. Fisher, R. A., The use of multiple measurements in taxonomic problems. Annals of Eugenics 7:179–188, 1936.
58. Han, J., and Kamber, M., Data Mining: Concepts and Techniques. Morgan Kaufmann Publishers, San Francisco, California, USA, pp. 327–36, 2006.
59. Chiu, S., Fuzzy model identification based on cluster estimation. J. Int. Fuzzy Syst. 2(3):267–268, 1994.
60. Shing, J., and Jang, R., ANFIS: Adaptive-Network-Based Fuzzy Inference System. IEEE Trans. Syst. Man Cybern. 23(3):665–85, 1993.
61. Jolliffe, I. T., Principal Component Analysis. Springer-Verlag, Heidelberg, Germany, 1986.
62. Nunnaly, J., Psychometric Theory. McGraw-Hill, New York, 1978.
63. Gliem, J. A., and Gliem, R. R., Calculating, interpreting and reporting Cronbach's alpha reliability coefficient for Likert-type scales. In: Proceedings of the Midwest Research-to-Practice Conference in Adult, Continuing and Community Education, pp. 45–48, 2003.


