Automatic breast parenchymal density classification integrated into a CADe system




Int J CARS (2011) 6:309–318 DOI 10.1007/s11548-010-0510-z

ORIGINAL ARTICLE

G. Bueno · N. Vállez · O. Déniz · P. Esteve · M. A. Rienda · M. Arias · C. Pastor

Received: 12 January 2010 / Accepted: 19 June 2010 / Published online: 5 August 2010 © CARS 2010

Abstract

Purpose: Breast parenchymal density is an important risk factor for breast cancer. Mammogram interpretation is known to be more difficult where dense tissue is involved; automated breast density classification may therefore aid breast lesion detection and analysis.

Methods: Several image pattern classification techniques were tested on screen-film mammography (SFM) datasets labeled according to BIRADS categories using known cases. A hierarchical classification procedure based on k-NN, SVM and LBN, combined with principal component analysis of texture features, uses the breast density features. The classification techniques have been incorporated into a CADe system to drive the detection algorithms.

Results: The results obtained on 322 mammograms show that up to 84% of samples were correctly classified. The lesion detection results were obtained from modules integrated within the CADe system developed by the authors.

Conclusions: The ability to detect suspicious lesions on dense and heterogeneous tissue has been tested. The tools enhance the detectability of lesions and are able to distinguish their local attenuation without local tissue density constraints.

Keywords: Breast tissue classification · CADe system · Texture analysis · Lesion detection

G. Bueno (B) · N. Vállez · O. Déniz, Universidad de Castilla-La Mancha, E.T.S. Ingenieros Industriales, Avda. Camilo José Cela, 3, 13071 Ciudad Real, Spain. e-mail: [email protected]
P. Esteve · M. A. Rienda · M. Arias · C. Pastor, Dpt. Radiología, Hospital General de Ciudad Real, Tomelloso s/n, 13005 Ciudad Real, Spain

Introduction

Breast cancer continues to be an important health problem. Early detection can potentially improve breast cancer prognosis and significantly reduce female mortality, and computer-aided diagnosis (CAD) systems can be of tremendous help to radiologists in the detection and classification of breast lesions [1]. The development of reliable CAD systems is therefore an important and challenging task. The automated interpretation of mammogram lesions remains difficult, however. The presence of dense breast tissue is one of the potential problems: dense tissue may render suspicious areas almost invisible and may easily be misinterpreted as calcification [2–4]. Since Wolfe's discovery in 1976 of the relation between mammographic parenchymal patterns and the risk of developing breast cancer [5], there has been heightened interest in investigating breast tissue. A good review of the work on breast tissue classification since that time can be found in [6]. Our research has been prompted by this need to classify breast tissue and drive the development of CAD algorithms for the automated analysis of breast lesions. This paper is not intended to address the classification of benign or malignant lesions associated with tissue types [4,7] but only to drive the CAD algorithms, as mentioned previously. Several classification methods have been presented in the literature to classify different types of abnormalities, i.e., masses, microcalcifications and distortions. The best results are reported with support vector machines (SVM) using both independent and principal component analysis [8–13]. In our study, several classification methods have been compared, and a hierarchical classification procedure combined with principal component analysis (PCA) on texture features is proposed as the best solution. Experimental results are given for mammograms with various densities and abnormalities.




Fig. 1 BIRADS tissue classification and preprocessed images: T.I fatty, T.II fatty-glandular or fibroglandular, T.III heterogeneously dense and T.IV extremely dense

The datasets have been classified according to the four categories specified by the American College of Radiology BIRADS [14], that is: T.I fatty, T.II fatty-glandular or fibroglandular, T.III heterogeneously dense and T.IV extremely dense. The second section describes the methods and material used for this work. These include the feature extraction procedure applied to the classifiers, the classifiers tested, the data training and testing carried out, the integration of the classifiers into the CADe system, and the experimental database used. The third section describes the results obtained with the proposed methods, and in the last section the main conclusions are drawn.

Methods and materials

Several studies dealing with the breast tissue classification problem have been described in the literature. These studies are based on (a) the use of gray-level histograms and (b) texture information extracted from different regions. Our proposal is to apply texture analysis to the whole breast. Thus, all mammograms are pre-processed to identify the breast region and remove the background and possible labels, as sketched below. This is illustrated in Fig. 1 for the different tissue types considered.

Feature extraction

Most studies on texture classification are based on statistical features obtained from the image [15–17]. Here, we analyze 241 features, both first- and second-order texture statistics, drawn from the pre-processed image histogram and from co-occurrence matrices.
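The paper does not detail the breast-region preprocessing mentioned above; the following is a minimal sketch of one way to isolate the breast and suppress background and film labels, assuming scikit-image (the Otsu threshold and the object-size limit are illustrative assumptions, not the authors' implementation).

```python
import numpy as np
from skimage import filters, measure, morphology

def extract_breast_region(mammogram: np.ndarray) -> np.ndarray:
    """Mask out the background and film labels, keeping the breast region."""
    # Separate tissue from background with a global Otsu threshold.
    mask = mammogram > filters.threshold_otsu(mammogram)
    # Drop small bright artefacts such as labels and markers.
    mask = morphology.remove_small_objects(mask, min_size=5000)
    # Keep the largest connected component, assumed to be the breast.
    labels = measure.label(mask)
    if labels.max() == 0:
        return mammogram
    largest = 1 + np.argmax(np.bincount(labels.ravel())[1:])
    return np.where(labels == largest, mammogram, 0)
```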


Table 1 First-order statistics (h: image histogram; N: number of gray levels)

1. Mean: $\mu = \sum_{i=0}^{N-1} i\,h(i)$
2. Mode: $i \mid h(i) = \max(h)$
3. Variance: $\sigma^2 = \sum_{i=0}^{N-1} (i-\mu)^2 h(i)$
4. 1st quartile: $N/4$ for even $N$; $(N+1)/4$ for odd $N$
5. 2nd quartile: $2N/4$ for even $N$; $(2N+1)/4$ for odd $N$
6. 3rd quartile: $3N/4$ for even $N$; $(3N+1)/4$ for odd $N$
7. Interquartile range: #6 − #4
8. Minimum: $\min(h)$
9. Maximum: $\max(h)$
10. Value range: $\max(h(i)) - \min(h(i))$
11. Entropy: $-\sum_{i=0}^{N-1} h(i)\log(h(i))$
12. Asymmetry: $\frac{1}{\sigma^3}\sum_{i=0}^{N-1} (i-\mu)^3 h(i)$
13. Kurtosis: $\frac{1}{\sigma^4}\sum_{i=0}^{N-1} (i-\mu)^4 h(i)$

The co-occurrence features are Haralick's coefficients [18], calculated for a distance parameter d equal to 1, 3 and 5 at 0°, 45°, 90° and 135°. These features are shown in Tables 1 and 2.
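As an illustration of how the co-occurrence part of this feature set can be computed, the sketch below uses scikit-image's GLCM utilities for the same d and angle sweep; only the few Haralick coefficients exposed by `graycoprops` are included, so it is not the full 241-feature set.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img: np.ndarray, levels: int = 64) -> dict:
    """Co-occurrence features for d = 1, 3, 5 at 0, 45, 90 and 135 degrees."""
    # Quantize to a reduced number of gray levels to keep the GLCM small.
    q = (img.astype(float) / max(img.max(), 1) * (levels - 1)).astype(np.uint8)
    distances = [1, 3, 5]
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(q, distances=distances, angles=angles,
                        levels=levels, symmetric=True, normed=True)
    feats = {}
    for prop in ("energy", "contrast", "correlation", "homogeneity", "dissimilarity"):
        vals = graycoprops(glcm, prop)        # shape (n_distances, n_angles)
        for i, d in enumerate(distances):
            for j, a in enumerate((0, 45, 90, 135)):
                feats[f"{prop}_d{d}_a{a}"] = float(vals[i, j])
    return feats
```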

Table 2 Second-order statistics (p: image co-occurrence matrix; N: number of gray levels)

1. Energy: $\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} p(i,j)^2$
2. Contrast: $\sum_{n=0}^{N-1} n^2 \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} p(i,j),\ |i-j| = n$
3. Correlation: $\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} \frac{(ij)\,p(i,j) - \mu_x \mu_y}{\sigma_x \sigma_y}$
4. Variance: $\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} (i-\mu)^2 p(i,j)$
5. Sum average: $\sum_{i=0}^{2(N-1)} i\, p_{x+y}(i)$
6. Sum entropy: $-\sum_{i=0}^{2(N-1)} p_{x+y}(i)\log(p_{x+y}(i))$
7. Sum variance: $\sum_{i=0}^{2(N-1)} (i - \mathrm{SumEntropy})^2 p_{x+y}(i)$
8. Entropy: $-\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} p(i,j)\log(p(i,j))$
9. Difference variance: $\sum_{i=0}^{N-1} i^2 p_{x-y}(i)$
10. Difference entropy: $-\sum_{i=0}^{N-1} p_{x-y}(i)\log(p_{x-y}(i))$
11. Correlation information 1: $\frac{HXY - HXY1}{\max(HX, HY)}$
12. Correlation information 2: $\sqrt{1 - \exp(-2(HXY2 - HXY))}$
13. Homogeneity 1: $\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} \frac{p(i,j)}{1+(i-j)^2}$
14. Homogeneity 2: $\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} \frac{p(i,j)}{1+|i-j|}$
15. Cluster shade: $\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} (i+j-\mu_x-\mu_y)^3 p(i,j)$
16. Cluster prominence: $\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} (i+j-\mu_x-\mu_y)^4 p(i,j)$
17. Autocorrelation: $\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} (ij)\, p(i,j)$
18. Dissimilarity: $\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} |i-j|\, p(i,j)$
19. Maximum probability: $\max(p(i,j)),\ i = 0 \ldots N-1,\ j = 0 \ldots N-1$

where
$\mu_x = \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} i\,p(i,j)$, $\mu_y = \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} j\,p(i,j)$,
$p_x(i) = \sum_{j=0}^{N-1} p(i,j)$, $p_y(j) = \sum_{i=0}^{N-1} p(i,j)$,
$\sigma_x^2 = \sum_{i=0}^{N-1} p_x(i)(i-\mu_x)^2$, $\sigma_y^2 = \sum_{j=0}^{N-1} p_y(j)(j-\mu_y)^2$,
$p_{x+y}(k) = \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} p(i,j)$ with $i+j = k$, $k = 0 \ldots 2(N-1)$,
$p_{x-y}(k) = \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} p(i,j)$ with $|i-j| = k$, $k = 0 \ldots N-1$,
$HXY = -\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} p(i,j)\log(p(i,j))$,
$HXY1 = -\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} p(i,j)\log(p_x(i)\,p_y(j))$,
$HXY2 = -\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} p_x(i)\,p_y(j)\log(p_x(i)\,p_y(j))$.

Moreover, to reduce and select the feature space, a principal component analysis (PCA) was applied. This mathematical procedure transforms a number of possibly correlated variables into a smaller set of uncorrelated variables called principal components. Different tests were performed by varying the number of components retained from the PCA-reduced space between 10 and 240, at intervals of 10.

The average error rates of all classifiers were measured at each interval. These rates range between 42% and 45%, with improvements of up to 12% in some cases. The minimum error corresponds to a reduction of the space to 20 components by means of the PCA; results are shown in Table 4. The eigenvectors and eigenvalues obtained were analyzed in order to find the most representative features according to the PCA. The discriminatory power of these features was also analyzed using a feature ranking of individual performance for each classification method. This evaluation is based on the intra-cluster and inter-cluster distances between the four tissue types, which measure the variability within and between the classes and identify the features that maximize these values. The twenty most significant features for the PCA and for the feature ranking are indicated in Table 3.
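A minimal sketch of this component sweep, assuming scikit-learn, a feature matrix `X` of shape 322 × 241 and BIRADS labels `y`; the 1-NN classifier and the 10-fold scoring are placeholders for the full set of classifiers compared in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def sweep_pca_components(X: np.ndarray, y: np.ndarray) -> dict:
    """Average 10-fold CV error for 10, 20, ..., 240 retained components."""
    errors = {}
    for n in range(10, 241, 10):
        clf = make_pipeline(StandardScaler(),
                            PCA(n_components=n),
                            KNeighborsClassifier(n_neighbors=1))
        accuracy = cross_val_score(clf, X, y, cv=10).mean()
        errors[n] = 1.0 - accuracy
    return errors  # e.g. min(errors, key=errors.get) gives the best component count
```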


Table 3 Selected features. The twenty most significant features according to the PCA and to the feature ranking include: variance, 3rd quartile, entropy, kurtosis, energy, contrast, sum average, difference variance, sum variance, difference entropy, correlation information 1 and 2, homogeneity 1 and 2, cluster shade, autocorrelation and maximum probability. They are computed mostly at distance d = 5 (with d = 1 and 3 for some features, e.g. cluster shade) over the 0°, 45°, 90° and 135° directions.

Classifiers, data training and testing

Different classification methods were tested on the selected features, namely: support vector machines (SVM) with polynomial, Minkowski distance, exponential, radial basis and sigmoid kernels; neural networks (NN) (feed-forward, backpropagation, perceptron and radial basis); k-NN with k equal to 1; linear Bayes normal (LBN); quadratic (QD); loglinear (LOGL); naive Bayes (NAIVEB); and a tree classifier with two layers ({T.I ∪ T.II, T.III ∪ T.IV} and {{T.I, T.II}, {T.III, T.IV}}) [19,20]. The best results for the SVM were obtained with a polynomial kernel and for the NN with backpropagation (BPNN), and these are the ones shown here. To train and test the classifiers, two methods were used: (1) a combination of the hold-out and re-substitution methods and (2) the k-fold cross-validation method with k = 10. The re-substitution method designs the classifier on the complete dataset and also tests it on the complete dataset; it is an optimistically biased method. The hold-out method, on the contrary, is a pessimistically biased method that splits the dataset into halves and then uses one half for training and



the other half for testing [19]. The combination proposed is accomplished in two stages. In the first stage, the data are randomly divided into two groups containing the same number of samples, and one of these groups is randomly selected to train the classifiers. Once the classifiers are trained, the tests are performed on the complete dataset. The k-fold cross-validation method consists of randomly dividing the data into k different groups containing approximately the same number of samples. One of these groups is selected to train the classifier, and tests are performed on the rest of the groups. The process is then repeated with the other k−1 groups of the dataset, and an average classification error is obtained. The performance of these classifiers is shown and discussed in "Results and discussion".
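A minimal sketch of the combined hold-out/re-substitution protocol, assuming scikit-learn; the 1-NN classifier is a placeholder.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def holdout_resubstitution_score(X: np.ndarray, y: np.ndarray, seed: int = 0) -> float:
    """Train on a random half of the data, then test on the complete dataset."""
    X_train, _, y_train, _ = train_test_split(X, y, train_size=0.5, random_state=seed)
    clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
    # Testing on the full set mixes held-out samples (pessimistic) with
    # re-substituted training samples (optimistic), as described above.
    return clf.score(X, y)
```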


Fig. 2 B-spline filtering. a–d 1st column: selected regions from the original images of mammograms with T.I (a) and T.III (d) tissue. b–e 2nd column: contrast highlighted by B-splines applied along the X axis. c–f 3rd column: contrast highlighted by B-splines applied along the Y axis

Integration into a CADe system

The automatic tissue classification method has been integrated into a CADe system developed by the authors. Different detection methods for the different lesions have been integrated into the CAD system. The methods may potentially be applied to all lesions and tissues; however, after running the tests, we concluded that their efficiency was tissue and lesion type dependent. The methods implemented are B-splines, wavelets, adaptive filtering and fuzzy k-means [1,21]. Thus, prior to the detection algorithm, the tissue classification is applied. It is necessary to adjust the input parameters that control the sensitivity of the algorithms depending on the tissue type, especially in areas of high density, in order to reduce false positive detections in these areas. These input parameters are the number of clusters in the fuzzy k-means, the number of iterations in the wavelet method and the angular rate in the adaptive filtering. These parameters should be increased when processing T.IV and proportionally decreased for the other types. In the case of T.III and T.IV, the wavelet and B-spline algorithms are additionally applied in conjunction with the fuzzy k-means and the adaptive filtering, respectively. In terms of lesions, adaptive filtering and B-splines are best suited to microcalcifications, wavelets to distortions and fuzzy k-means to mass lesions. One of the detection algorithms is based on B-spline filtering. The images are processed with the first derivative of a cubic spline model, applied along both the X and Y axes, and the resulting coefficients are re-scaled to the range 0–255 for visualization. The results are shown in Fig. 2 for both axes. The output resembles a relief of the image, due to the intensity changes produced when converting from discrete to continuous coefficients with the B-spline transform. The method has also been compared with wavelet analysis. Several wavelet transforms were tested, and the Daubechies transform with 20 coefficients (DB20) over the high frequencies was found to give the best results, after 3 iterations for T.I and T.II and 7 iterations for T.III and T.IV. The next section illustrates these results.
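A minimal sketch of the B-spline derivative filtering idea, assuming SciPy's smoothing splines; the smoothing factor is an illustrative assumption and the loop over image lines is kept simple rather than efficient.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def bspline_derivative(img: np.ndarray, direction: str = "x", smooth: float = 1e3) -> np.ndarray:
    """First derivative of a cubic smoothing-spline fit along one image axis,
    rescaled to 0-255 for display. A sketch of the idea only."""
    data = img.astype(float)
    if direction == "y":                 # differentiate along columns instead of rows
        data = data.T
    x = np.arange(data.shape[1])
    out = np.empty_like(data)
    for i, line in enumerate(data):      # fit each line with a cubic spline
        spline = UnivariateSpline(x, line, k=3, s=smooth)
        out[i] = spline.derivative()(x)  # evaluate the first derivative
    if direction == "y":
        out = out.T
    out -= out.min()                     # rescale the coefficients to 0-255
    return (255.0 * out / max(out.max(), 1e-12)).astype(np.uint8)
```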

Experimental database

A dataset composed of 322 SFM images obtained from the MIAS public database was considered. Another dataset composed of 1418 full-field digital mammograms (FFDM) provided by local hospitals is also being considered. However, the SFM dataset was used in order to compare our results with those of other authors, and the results shown in this paper are based on it. Both datasets were labeled according to the BIRADS categories by expert clinicians from the General Hospital of Ciudad Real. The image sizes are 3328 × 4084 and 1024 × 1024 pixels for the FFDM and SFM datasets, respectively. The MIAS database contains images from right and left medio-lateral oblique projections (RMLO, LMLO) of 161 different cases.

Results and discussion

Tables 4 and 5 show the results of the classifiers with and without PCA for the SFM dataset using (a) the combined hold-out and re-substitution method and (b) the 10-fold cross-validation method, respectively.

Table 4 Agreement of classifiers using the combined hold-out and re-substitution method




Table 5 Agreement of classifiers using 10-fold cross-validation method

Table 6 Two-layer tree classifier, % of mammograms correctly classified

(a) Hold-out ∪ re-substitution, k-NN with PCA
Types   1st layer (%)   2nd layer (%)
T.I     94              85
T.II    94              90
T.III   86              87
T.IV    86              64

(b) 10-fold cross-validation, LBN+SVM with PCA
Types   1st layer (%)   2nd layer (%)
T.I     91              76
T.II    91              80
T.III   89              78
T.IV    89              57

The results of Table 4b, c are given with 20 features, since the PCA obtained its best results with a selection of 20 features. The best classifiers are shown in red for values ≥80% and in green for values ≥75%. On average, weighted with respect to the number of mammograms of each type, the classification with PCA provides better results. The best classifier using hold-out and re-substitution to test and train the classifiers is the k-NN, with 79% agreement, whereas using 10-fold cross-validation the best one is the LBN, with up to 69% agreement. Analyzing by tissue type, the best classifiers using hold-out and re-substitution are the SVM for T.I, the k-NN for T.II and T.IV and the BPNN for T.III. The best ones using 10-fold cross-validation are the k-NN for T.I and T.II, the QD for T.III and the SVM for T.IV. A two-layer tree classifier was also tested using the best classifiers obtained previously; that is, the first layer is composed of {T.I ∪ T.II, T.III ∪ T.IV} and the second one of {{T.I, T.II}, {T.III, T.IV}}, as sketched below. The results improved upon the previous ones, reaching up to 91% in the first layer and 84% in the second layer with k-NN and PCA using hold-out and re-substitution for testing and training, and up to 90% in the first layer with LBN and 75% in the second layer with SVM using 10-fold cross-validation. The final results are shown in Table 6. Table 7 also shows the confusion matrices for these classifiers, that is, an average of 85% true positive detection (TPD) for T.I, 90% for T.II, 87% for T.III and 64% for T.IV, and 3.8% false positive detection (FPD) for T.I, 7.7% for T.II, 7.4% for T.III and 3.2% for T.IV, with hold-out and re-substitution for testing and training. If 10-fold cross-validation is used, we obtain 76% TPD for T.I, 80% for T.II, 78% for T.III and 57% for T.IV, and 6.7% FPD for T.I, 10% for T.II, 10.8% for T.III and 5.7% for T.IV. The same methods were also used for the classification of benign and malignant lesions, with and without previous tissue type classification, in order to test how the feature selection and the tissue type classification affected the classification into benign and malignant categories. This classification was done only with 10-fold cross-validation for testing and training and with a similar amount of data for benign and malignant cases, i.e., 52 cases.
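A minimal sketch of the two-layer scheme under the hold-out/re-substitution variant (k-NN with 20 PCA components at both layers), assuming scikit-learn; the scaling step and the 1-NN setting are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

LOW, HIGH = ("T.I", "T.II"), ("T.III", "T.IV")

def knn_pca():
    # 20 PCA components follows the selection reported above.
    return make_pipeline(StandardScaler(), PCA(n_components=20),
                         KNeighborsClassifier(n_neighbors=1))

class TwoLayerTreeClassifier:
    """Layer 1: {T.I, T.II} vs {T.III, T.IV}; layer 2: tissue type within the group."""

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        coarse = np.where(np.isin(y, LOW), "low", "high")
        self.layer1 = knn_pca().fit(X, coarse)
        self.low = knn_pca().fit(X[np.isin(y, LOW)], y[np.isin(y, LOW)])
        self.high = knn_pca().fit(X[np.isin(y, HIGH)], y[np.isin(y, HIGH)])
        return self

    def predict(self, X):
        X = np.asarray(X)
        group = self.layer1.predict(X)
        out = np.empty(len(X), dtype=object)
        for name, model in (("low", self.low), ("high", self.high)):
            mask = group == name
            if mask.any():
                out[mask] = model.predict(X[mask])
        return out
```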

Table 7 Confusion matrices for the tree classifier (rows: true type; columns: estimated type)

(a) Hold-out ∪ re-substitution, k-NN with PCA
Types   T.I   T.II   T.III   T.IV   True total
T.I     71    9      3       1      84
T.II    3     92     5       2      102
T.III   3     4      80      5      92
T.IV    3     4      9       28     44

(b) 10-fold cross-validation, LBN+SVM with PCA
Types   T.I   T.II   T.III   T.IV   True total
T.I     64    12     7       1      84
T.II    11    82     6       3      102
T.III   3     5      72      12     92
T.IV    2     5      12      25     44

Table 8 SFM benign or malign classification

Types    SVM (%)   BPNN (%)   k-NN (%)   LBN (%)   QD (%)
Benign   83        73         75         91        72
Malign   64        67         69         73        81

Table 8 shows the results of the classifiers with PCA. The best overall result is obtained by the LBN, with up to 82% agreement. Table 9 shows the results of the lesion classification after tissue type classification. The tissue type classification is done with the tree classifier, and the best classifier for the benign and malign classification stage is also shown in Table 9. The best overall results are obtained with the SVM, and in total an 80% agreement is obtained. The results with and without tissue type classification are similar, but we cannot draw conclusions from this test due to the small number of cases.

Table 9 SFM benign or malign classification with a previous tissue type classification

Types   True benign   True malign   Estimated benign   Estimated malign   TPD benign   TPD malign   Classifier   Total (%)
T.I     14            14            13                 15                 10           11           SVM          75
T.II    20            20            25                 15                 18           13           LOGL         77
T.III   11            11            9                  13                 8            10           SVM          82
T.IV    7             7             9                  5                  7            5            SVM          86

Fig. 3 Selected regions from the original images illustrating a wrong selection of parameters in the CADe algorithms, and the B-spline result. a Adaptive filtering, γ = 5; b wavelet DB20, iter = 3; c B-spline filtering


Fig. 4 Lesion detection for different tissue types with the fuzzy k-means algorithm

There are only three works in the literature presenting breast tissue classification according to BIRADS categories on SFM. Their overall correct classification rates are about 71% [15] and 76% [22] without tissue segmentation, and 82% [6] with it. Our approach correctly classifies up to 84% of samples by means of the two-layer tree classifier using k-NN with PCA and the combined hold-out and re-substitution method for training and testing. Using 10-fold cross-validation and the LBN+SVM with PCA classifiers, 75% of samples are correctly classified. These classification methods have been integrated into a CADe system and applied prior to the detection algorithms. Figures 4, 5 and 6 illustrate the results of the detection algorithms after tissue type classification.


Fig. 5 Lesion detection for different tissue types with wavelet processing

The figures show the original image with the spatial location of the lesion or distortion, the lesion type, the tissue type, the detected regions marked in black, and the parameters used for each algorithm, namely the number of clusters c in the fuzzy k-means, the number of iterations iter in the wavelet method and the angular rate γ in the adaptive filtering. A wrong selection of γ in the adaptive filtering can give a high number of FPD, and a wrong number of iterations in the wavelet algorithm prevents the lesion from being visualized. This is illustrated in Fig. 3a, b, respectively, for a selected area of the mammogram. Figure 3c illustrates the result of the B-spline algorithm used in conjunction with the adaptive filtering when processing T.III and T.IV tissues. Table 10 shows the detection performance with and without breast tissue classification: the percentages of TPD and FPD for the dataset used are given for the different breast tissue types and the implemented CADe algorithms, where only a fully detected lesion counts as truly detected. The fuzzy k-means has been used for masses, the adaptive filtering for calcifications and the wavelets for architectural distortions and asymmetries. Table 10 shows that, in general, the TPD increases and the FPD decreases with tissue type classification, the main differences being in the adaptive filtering and wavelet processing for T.III and T.IV. The fuzzy k-means algorithm does not perform differently with or without tissue type classification for T.I; with tissue type classification it increases the TPD by 2% for T.II, 10% for T.III and 6% for T.IV, and decreases the FPD by 2% for T.II, 1% for T.III and 8% for T.IV. The wavelet processing increases the TPD by 5% for T.I, 1% for T.II, 69% for T.III and 73% for T.IV, and decreases the FPD by 8% for T.I and 21% for T.II. The adaptive filtering does not perform differently for T.I; its TPD decreases by 3% for T.II, 2% for T.III and 5% for T.IV, while its FPD decreases by 9% for T.II, 56% for T.III and 60% for T.IV. It is worth mentioning the comments made by the clinicians: the tools improve the resolution, in terms of the detectability of lesions, and additionally they are able to distinguish degrees of attenuation. The ability of the wavelets to homogenize the background and of the B-spline filtering to provide contrast and relief was judged to be quite useful. Both the wavelets and the B-spline perform well in analyzing the resolution, which means that they properly characterize the border of the region of interest without being constrained by the density level of the tissue. They project the image onto a gray background that throws the spicules, distortions and parenchyma into relief and highlights them. The B-spline transform preserves the original size of the calcifications.



Fig. 6 Lesion detection for different tissue types with adaptive filtering

Table 10 TPD and FPD for the CADe algorithms with and without breast tissue classification

(a) % TPD without breast tissue classification
Algorithm            T.I (%)   T.II (%)   T.III (%)   T.IV (%)
Fuzzy k-means        89        78         10          4
Wavelets             65        64         10          5
Adaptive filtering   85        85         82          75

(b) % FPD without breast tissue classification
Fuzzy k-means        8         9          6           10
Wavelets             11        23         7           4
Adaptive filtering   10        25         73          62

(c) % TPD with breast tissue classification
Fuzzy k-means        89        80         20          10
Wavelets             70        65         79          78
Adaptive filtering   85        82         80          70

(d) % FPD with breast tissue classification
Fuzzy k-means        8         7          5           2
Wavelets             3         2          4           5
Adaptive filtering   10        16         17          15

Conclusions

In this work, a hierarchical procedure based on k-NN, LBN and SVM together with PCA on texture features has been proposed for breast tissue classification. Our approach correctly classifies up to 84% of samples for an SFM dataset, which improves on previous results reported in the literature. The method has been integrated within a CADe system developed by the authors. The tissue type classification prior to detection is used to choose the right algorithm to carry out the lesion segmentation and to properly tune the parameters of the algorithms. The tissue type classification increases the performance mainly for T.III and T.IV lesion detection. The processing tools implemented in the CADe system have been tested and validated by expert clinicians at the Hospital General de Ciudad Real. The filtering presented has been shown to be successful in highlighting breast lesions on different types of tissue. The ability to detect suspicious lesions on dense and heterogeneous tissues, a continuing problem for radiologists due to the low contrast of these tissues, has been tested by several clinicians. Further tests are being carried out to improve the classification results for all tissue types and datasets and to extend the CAD system. These tests include using a FFDM dataset and further statistical analysis of the features considered.

Acknowledgments The authors acknowledge partial financial support from the Spanish Research Ministry and the Junta de Comunidades de Castilla-La Mancha through projects RETIC COMBIOMED and PI-2006/01.1.

References

1. Bueno G (2008) Fuzzy systems and deformable models. In: Intelligent and adaptive systems in medicine. Series in medical physics and biomedical engineering. Taylor and Francis, London, pp 305–329


2. Boyd N, Dite G, Stone J et al (2002) Heritability of mammographic density, a risk factor for breast cancer. New Engl J Med 347(12):886–894
3. Ursin G, Hovanessian-Larsen L, Parisky YR et al (2005) Greatly increased occurrence of breast cancers in areas of mammographically dense tissue. Breast Cancer Res 7(5):605–608
4. Brem R, Hoffmeister J, Rapelyea J et al (2005) Impact of breast density on computer-aided detection for breast cancer. Am J Roentgenol 184:439–444
5. Wolfe JN (1976) Risk for breast cancer development determined by mammographic parenchymal pattern. Cancer 37:2486–2492
6. Oliver A, Freixenet J, Martí R et al (2008) A novel breast tissue density classification methodology. IEEE Trans Inf Technol Biomed 12:55–65
7. Yaffe M, Boyd N (2005) Mammographic breast density and cancer risk: the radiological view. Gynecol Endocrinol 21(Suppl 1):6–11
8. Koutras A, Christoyianni I, Georgoulas G, Dermatas E (2006) Computer aided classification of mammographic tissue using independent component analysis and support vector machines. Lect Notes Comput Sci 4132(1):568–577
9. Gorgel P, Sertbas A, Kilic N, Ucan O, Osman O (2009) Mammographic mass classification using wavelet based support vector machine. J Electr Electron Eng 9(1):867–875
10. Chang R, Wu W, Moon WK, Chou Y, Chen D (2003) Support vector machines for diagnosis of breast tumors on US images. Acad Radiol 10(2):189–197
11. Mavroforakis M, Georgiou H, Dimitropoulos N, Cavouras D, Theodoridis S (2006) Mammographic masses characterization based on localized texture and dataset fractal analysis using linear, neural and support vector machine classifiers. Artif Intell Med 37(2):145–162
12. Fu JC, Lee SK, Wong STC, Yeh JY, Wang AH, Wu HK (2005) Image segmentation feature selection and pattern classification for mammographic microcalcifications. Comput Med Imaging Graph 29:419–429


13. Christoyianni I, Koutras A, Dermatas E, Kokkinakis G (2001) Breast tissue classification in mammograms using ICA mixture models. Lect Notes Comput Sci 2130(1):554–560
14. American College of Radiology (2003) Breast imaging reporting and data system atlas (BIRADS). ACR, Reston, VA
15. Bovis K, Singh S (2002) Classification of mammographic breast density using a combined classifier paradigm. In: 4th international workshop on digital mammography, pp 177–180
16. Bosch A, Munoz X, Oliver A, Marti J (2006) Modeling and classifying breast tissue density in mammograms. In: Proceedings of the IEEE computer society conference on computer vision and pattern recognition, vol 21, pp 1552–1558
17. Oliver A, Lladó X, Martí R, Freixenet J, Zwiggelaar R (2007) Classifying mammograms using texture information. In: Proceedings of medical image understanding and analysis, pp 223–227
18. Haralick R, Sternberg S, Zhuang X (1987) Image analysis using mathematical morphology. IEEE Trans Pattern Anal Mach Intell 9(4):532–550
19. Kuncheva LI (2004) Combining pattern classifiers. Wiley, New York
20. Duda RO, Hart PE, Stork DG (2001) Pattern classification. Wiley, New York
21. Bueno G, Ruiz M, Sánchez S (2006) B-spline filtering for automatic detection of calcification lesions in mammograms. In: Proceedings of the international conference on information optics, WIO'06, pp 60–70
22. Petroudi S, Kadir T, Brady M (2003) Automatic classification of mammographic parenchymal patterns: a statistical approach. In: Proceedings of the IEEE conference on engineering in medicine and biology society, vol 1, pp 798–801
