
Fuzzy Systems Design via Ensembles of ANFIS

Clodoaldo Ap. M. Lima, André L. V. Coelho, Fernando J. Von Zuben
Department of Computer Engineering and Industrial Automation (DCA)
School of Electrical and Computer Engineering (FEEC)
State University of Campinas - Unicamp
{moraes,coelho,vonzuben}@dca.fee.unicamp.br

Abstract – Neurofuzzy networks have become a powerful alternative strategy for developing fuzzy systems, since they are capable of learning and of providing IF-THEN fuzzy rules in linguistic or explicit form. Amongst such models, ANFIS has been recognized as a reference framework, mainly for its flexible and adaptive character. In this paper, we extend ANFIS theory by experimenting with a multi-net approach wherein two or more differently structured ANFIS instances are coupled to play together. Ensembles of ANFIS (E-ANFIS) enhance ANFIS performance and alleviate some of its computational bottlenecks. Moreover, the approach promotes the automatic configuration of different ANFIS units and the a posteriori selective combination of their outputs. Experiments conducted to assess the E-ANFIS generalization capability are also presented.

I. INTRODUCTION

Recently, there has been much interest in the synergy between neural networks (NNs) and fuzzy systems, giving birth to a range of architectural models applied to different classes of problems. Under this premise, neurofuzzy networks (NFNs) [1]-[4] have been devised as an alternative means to design and configure fuzzy rule-based systems, showing qualitative and quantitative improvements over conventional techniques such as trial-and-error. The majority of neurofuzzy models, however, address only parametric identification and learning, leaving to the designer the burden of choosing a priori the sets of fuzzy rules and the shape characteristics of the input/output membership functions (MFs). In order to cope also with such structure-learning duties, the adaptive-network-based fuzzy inference system (ANFIS) [5][6] was conceived, whereby an adaptive rule-based framework implements a fuzzy inferential reasoning process. By using a hybrid learning procedure, ANFIS can build an input-output mapping based both on human knowledge and on stipulated input-output pairs.

In a complementary direction, ensembles of neural networks (ENNs) [7]-[10] involve the generation and linear/non-linear combination of a pool of individual NNs designed to produce complementary aspects of the solution. This is typically done through the variation of some configuration parameters and/or the division of the training data. Such ensembles should properly integrate the knowledge embedded in the component NNs (devised to tackle subtasks), and have frequently produced more accurate and robust models. In this context, Hashem and Schmeiser [11] have shown that optimal linear combinations (OLCs) of NNs may be achieved through the weighted sum of their outputs, wherein proper aggregation weights should be estimated from the input data set so as to minimize the mean squared error (MSE) associated with the model input distribution. The resulting

combination is called MSE-OLC. Although this approach is similar to creating a large NN with trained subnetworks operating in parallel, it presents computational advantages, since [12]: (i) when combining NNs, their connection weights are kept fixed and the aggregation weights are computed by simple (fast) matrix manipulations; and (ii) when training one large NN, there is a large set of parameters to be concurrently estimated, as well as a tendency toward data overfitting.

In this work, we introduce the concept of ensembles of ANFIS (E-ANFIS, for short), employing the selection criteria proposed in the work of Hashem [12] to improve ANFIS' generalization capability. To the best of our knowledge, this is the first attempt at enhancing ANFIS with a multi-net approach. In this investigation, we disregard all parametric issues relating to the input/output MFs and focus on approximation, identification, and prediction problems. E-ANFIS may nevertheless be employed for classification problems with predefined groups as well, inasmuch as classification can be formulated as an approximation problem, with the decision value acquired via class membership probabilities. In order to assess our approach, we compare the simple averaging (equal weights for all component NNs) and weighted averaging (MSE-OLC) combination criteria with each other, and with the conventional best single ANFIS approach. Whereas the former does not require any data for estimation purposes, the weighted averaging and best-ANFIS methods do. In the testing examples, the desired target function is known, so that we could measure the E-ANFIS accuracy in terms of the MSE relative to this function.

This paper is organized as follows. A brief overview of ANFIS is given in Section II, whereas Section III presents the definition of E-ANFIS. In Section IV, we formalize the linear combination problem and describe the selection criteria. Experimental results are presented and discussed in Section V, and Section VI brings final remarks.

II. BACKGROUND ON ANFIS

Neurofuzzy networks (NFNs) implement fuzzy rules and inference within NN architectures. They have all the properties of standard NNs but differ slightly in the following features [1]-[3]: (i) learning becomes the capture of knowledge in the form of IF-THEN fuzzy rules; (ii) adaptation alters existing rules as the NFN is trained, and the modified rule base can be extracted (this may be implemented by using fixed or adaptable membership functions with adaptable rules); and (iii) generalization should be better due to the more complex modeling of the problem.

Fig. 1: (a) TSK fuzzy inference process; (b) Type-3 equivalent ANFIS.

Within this scenario, ANFIS has gained much attention for its flexibility in incorporating knowledge coming both from human expertise and from input/output training samples. Its applicability has concentrated on the synthesis of controllers (automated tuning of parameters) and models (identification and prediction). One feature of ANFIS is that it combines the gradient descent method and the least squares estimate (LSE) to identify its parameters. LSE is used in the forward pass (offline learning) when attempting to minimize the error between the actual state and the desired state of the adaptive network. In the backward pass, the error rates (the derivatives of the error measure with respect to each node output) propagate from the output end towards the input end, and the parameters are updated by the gradient descent method. Such a hybrid training rule ensures that the dimensionality of the gradient descent search is drastically reduced, bringing about substantially lower convergence times. The ANFIS structure (Fig. 1b) is a weightless multi-layer array of five different elements [5]:
• Layer 1: Input data are fuzzified and neuron values are represented by parameterized membership functions;
• Layer 2: The activation of fuzzy rules is calculated via differentiable T-norms (usually, the soft-min or the product);
• Layer 3: A normalization (arithmetic division) operation is performed over the rules' matching values;
• Layer 4: The consequent part is obtained via linear regression or multiplication between the normalized activation level and the output of the respective rule;
• Layer 5: The NFN output is produced by an algebraic sum over all rules' outputs.
It is worth noting that ANFIS does not require the designer to choose a priori the number of hidden nodes, since this is automatically obtained from the number of input vectors. Also, due to the nature of the fuzzy rules under consideration, an ANFIS can have only one output, which limits its applicability to problems with a single output at a time. Moreover, several types of MFs may be employed for the neurons of the input (output) layer. However, these functions usually cannot be set up at run time (only the values of the rules can be). Thus, the prior selection of which type of MF to apply in accordance with the problem at hand turns out to be a critical issue when one makes use of ANFIS. E-ANFIS tries to fill part of this gap. The list below includes the types of MF we have considered in our experiments:
i. (DSIGMF) Composed of the difference between two sigmoid membership functions, each one described as

\mu_{A_i}(x) = \frac{1}{1 + \exp(-a_i (x - c_i))}

ii. (GAUSS2MF) A two-sided version of GAUSSMF.
iii. (GAUSSMF) A Gaussian membership function given by

\mu_{A_i}(x) = \exp\left[ -\left( \frac{x - c_i}{a_i} \right)^2 \right]

iv. (PIMF) A π-shaped membership function, as implemented in the MATLAB environment.
v. (PSIGMF) The product of two sigmoid MFs.
vi. (TRAPMF) The trapezoidal MF.
vii. (TRIMF) The triangular membership function.
viii. (GBELLMF) The "generalized bell curve" MF, given by

\mu_{A_i}(x) = \frac{1}{1 + \left| \frac{x - c_i}{a_i} \right|^{2 b_i}}
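For illustration, a minimal NumPy sketch of three of these parameterized families follows (the function names and signatures are ours, not the MATLAB Fuzzy Logic Toolbox API; the parameters a_i, b_i, c_i follow the formulas above):

import numpy as np

def sigmf(x, a, c):
    # Single sigmoid: 1 / (1 + exp(-a (x - c)))
    return 1.0 / (1.0 + np.exp(-a * (x - c)))

def dsigmf(x, a1, c1, a2, c2):
    # (DSIGMF) Difference between two sigmoid MFs
    return sigmf(x, a1, c1) - sigmf(x, a2, c2)

def gaussmf(x, a, c):
    # (GAUSSMF) Gaussian MF: exp(-((x - c) / a)^2)
    return np.exp(-(((x - c) / a) ** 2))

def gbellmf(x, a, b, c):
    # (GBELLMF) Generalized bell: 1 / (1 + |(x - c) / a|^(2b))
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2.0 * b))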

Jang [5] proposed three ANFIS styles (each representing a different fuzzy reasoning process), where the consequent parts are set to a monotonic function, a fuzzy set, or a linear combination, respectively. It is well known that the real power of this NFN lies in its ability to represent Takagi-Sugeno rules [13] of the form: IF x is A_M and y is B_N THEN f_K = p_K x + q_K y + r_K, where x and y are input variables, A_M and B_N are membership functions, f_K is the output variable, and p_K, q_K, and r_K are consequent parameters (see Fig. 1a).

III. ENSEMBLES OF ANFIS

Whereas a typical ANFIS employs only one network archetype for a single fuzzy inference system, we investigate here the combination of M networks as an ensemble of ANFIS. The motivation underlying this approach is to endow a group of fuzzy mappings (separately trained and distinctly configured) with the capability of playing together to cope with large-scale problems. E-ANFIS may thus be classified as a cooperative fuzzy neural network [9], since it encompasses an architectural aggregate in which the performance of one NFN is contingent on the performance of its peers. In this novel approach, the weighted output is given by

y = f(x) = \sum_{k=1}^{M} \left[ a_k F_k(x, S_k) + b_k \right],    (1)

where π_k = {a_k, b_k} is obtained via Hashem's expression (Section IV), and a_k, b_k, F_k, and S_k are, respectively, the weight, bias, output, and parameter set of the k-th component NFN. In this formulation, π_k is not a function of x, since that would instead produce a hierarchical mixture of fuzzy experts [14]. For k = 1, ..., M, each F_k is obtained by means of the hybrid-training algorithm proposed by Jang [5].
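As a sketch of eq. (1) in code (the names are hypothetical; each trained component F_k, with its parameter set S_k embedded, is assumed to be wrapped as a callable):

def e_anfis_output(x, components, weights, biases):
    # Eq. (1): y = sum_k [a_k F_k(x, S_k) + b_k], with fixed
    # per-component weights a_k and biases b_k.
    return sum(a * F(x) + b
               for F, a, b in zip(components, weights, biases))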

IV. COMBINATION CRITERIA

In this section, we concentrate on the selection criteria proposed by Hashem [12], which are addressed after a brief review of the basic and generalized ensemble methods.

A. Basic Ensemble Method (BEM)

BEM subsumes the combination of a population of regression estimates of a function f(x), defined as f(x) = E[y|x]. By this method, we first assume that we have a training set A = {(x_i, y_i)}_{i=1}^{N} and a cross-validation set C_v = {(x_l, y_l)}_{l=1}^{N_cv}, whose elements are all independently and identically distributed random variables. We then employ A to generate a set of approximation functions F = {f_i(x)} of f(x), which serves as the basis for discovering the best approximation of f(x). A common choice is the estimator that minimizes the MSE relative to f(x), known as f(x)_BEM. According to Perrone and Cooper [15], we may define f(x)_BEM as

f(x)_{BEM} = \frac{1}{M} \sum_{i=1}^{M} f_i(x),

which comes to be a simple average defined over F.

B. Generalized Ensemble Method (GEM)

The idea underlying GEM is to generate an estimate whose error is as low as (or lower than) that of f(x)_BEM while avoiding overfitting. According to [15], we may define f(x)_GEM as

f(x)_{GEM} = \sum_{i=1}^{M} \alpha_i f_i(x),

that is, f(x)_GEM is a weighted average over the elements of F, where the sum of all α's may or may not be constrained to one. The aim is to choose the set of α's that minimizes the MSE relative to f(x).

C. Combination Weights

Hashem [12] has proposed four MSE-OLC variations to tackle the problem of finding the best set of weight factors. They differ in aspects such as the inclusion of a constant term and/or the constraint that the sum of the weights be equal to one. The constant term corresponds to a bias over the component network outputs. In turn, constraining the weights to sum to one may sometimes be suitable for improving the ensemble generalization capability. Among the proposed variations, Hashem [11] has singled out the unconstrained MSE-OLC with a constant term as the variant producing the lowest MSE results. Hence, in this paper, we will only display the results achieved with the unconstrained weighted average with a constant term, even though, in some circumstances, other variants would be equally effective.

D. Selection of the Component Networks

For the BEM estimator, as we increase the size of the component ANFIS population, the assumption that all deviations from the true solution, m_i(x) ≡ f(x) − f_i(x), are mutually independent no longer holds. When this assumption fails, adding more NFNs to the group wastes computational resources, since it will not improve the ensemble performance (moreover, it can be harmful in the sense that we may include NFNs with very bad performance, jeopardizing the whole BEM estimator). Hence, the best choice would be to find the optimal subset of the population F over which to compute the average. However, examining all 2^M − 1 non-empty subsets of F may be unfeasible for large values of M. A more promising alternative is to order the population elements by increasing MSE and then generate a sequence of BEM estimates by progressively combining the ordered elements of F, keeping the best estimate found. In this way, we can ensure that the BEM estimator is at least as good as the best component network. This process may be refined by considering the difference in MSE when we pass from a BEM estimator with a population of K elements to one with K+1 elements. From this comparison, it is advocated that a new component NFN should be included in the group only if the following inequality is satisfied:

(2K + 1)\, \mathrm{MSE}[\hat{f}_N] > 2 \sum_{i \neq \mathrm{new}} E[m_{\mathrm{new}} m_i] + E[m_{\mathrm{new}}^2],    (2)

where MSE[\hat{f}_N] is the MSE of the current BEM estimator (with K NFNs), E[·] denotes the mathematical expectation operator, and m_new is the error that would be produced by the new individual NFN to be inserted into the ensemble. If this criterion is not satisfied, we discard the current NFN and apply the same comparison to the next (not yet tested) NFN in the ordered sequence.

V. RESULTS

In the following, we assess the E-ANFIS approach on four case-study problems. For each analysis, three data sets were generated: one for training; another for the selection of the component NFNs and the computation of the combination weights; and another to test the E-ANFIS performance. Furthermore, in all experiments we performed 50 executions of the component NFN selection process in order to verify the influence of the selection data set on the overall performance. For assessing the E-ANFIS proposal, we adopted the following algorithm (illustrative sketches of steps 3 and 4 are given after the list):
1. Generate and train (using the training data set) an E-ANFIS wherein each MF type (see the list in Section II) is assigned to a different NFN component.
2. Calculate the NFN outputs for the selection data set.
3. Select the best ANFIS instances via Perrone and Cooper's criterion (eq. (2)).
4. Calculate the weight factors (π's) using the MSE-OLC method.
5. Obtain the output of the E-ANFIS (and, consequently, of the ANFIS components) for the training and test data sets.
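As an illustrative reading of step 3 (inequality (2)), with expectations replaced by sample means over the selection set (a sketch under our assumptions, not the authors' code):

import numpy as np

def select_components(errors):
    # errors: list of 1-D arrays m_i = f(x) - f_i(x) of component
    # deviations on the selection set, pre-ordered by increasing MSE.
    selected = [0]                        # always keep the best component
    for new in range(1, len(errors)):
        K = len(selected)
        m_new = errors[new]
        m_ens = np.mean([errors[i] for i in selected], axis=0)
        mse_bem = np.mean(m_ens ** 2)     # MSE of the current BEM estimator
        cross = sum(np.mean(m_new * errors[i]) for i in selected)
        # Inequality (2): include the candidate only if it holds
        if (2 * K + 1) * mse_bem > 2 * cross + np.mean(m_new ** 2):
            selected.append(new)
    return selected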

The quality of the results was estimated by comparing (via MSE) the output values produced by the E-ANFIS structures with the desired ones available in the test data set.
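In turn, step 4, for the unconstrained MSE-OLC with a constant term (Section IV.C), reduces to an ordinary least-squares fit of the desired output on the component outputs over the selection set; a minimal sketch (the array shapes are our assumption):

import numpy as np

def mse_olc_weights(F, y):
    # F: (N, M) matrix of component NFN outputs on the selection set;
    # y: (N,) desired outputs. Returns the combination weights and the
    # constant (bias) term that minimize the MSE.
    A = np.hstack([F, np.ones((F.shape[0], 1))])  # append constant column
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta[:-1], theta[-1]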

A. Modeling a Two-Input Nonlinear Function

Here, we applied E-ANFIS to model the non-linear sinc equation investigated in the work of Jang [5]:

z = \mathrm{sinc}(x, y) = \frac{\sin(x)}{x} \times \frac{\sin(y)}{y}

The sampled input data space was set to the range [-10,10]×[-10,10]; the training data set comprised 121 uniformly distributed pairs, and the selection and test data sets comprised 625 randomly chosen pairs. Each single component ANFIS (with a given MF type) was set to contain nine, 16, or 25 fuzzy rules, with three, four, or five linguistic terms (MFs) assigned to each input variable, respectively. All component NFNs were separately trained for 250 cycles. For each of the fifty selection data sets, we generated an ensemble. Table I shows the mean MSE value, its standard deviation, and the minimum/maximum MSE values over the fifty E-ANFIS. The average and maximum numbers of component NFNs were two and three, respectively.

TABLE I – E-ANFIS PERFORMANCE FOR FIFTY DISTINCT SELECTION DATA SETS

       Number of        Simple Average               Weighted Average
       component NFNs   MSE training   MSE test      MSE training   MSE test
Mean   2                0.0000273      0.0001042     0.0000214      0.0000593
SD     0.197            0.0000335      0.0000305     0.0000056      0.0000023
Min    2                0.0000161      0.0000956     0.0000134      0.0000512
Max    3                0.0001901      0.0002354     0.0000534      0.0000633

Table II presents the following configuration parameters for the best E-ANFIS: (i) the number of rules in each component NFN; (ii) their respective MF types; (iii) the E-ANFIS training and test MSE results; and (iv) the percentage of improvement (PI) of E-ANFIS (with the weighted-average combination criterion) over each single ANFIS instance, given by

PI\% = \frac{E_{NC} - E_{en}}{E_{NC}},    (3)

where E_NC is the MSE of the component NFN and E_en is the MSE of the E-ANFIS. The ensembles were composed mostly of component ANFIS with MFs of types (TRAPMF) and (TRIMF), and rarely with (PIMF) and (GAUSSMF). The configuration of the best E-ANFIS was {(4, (GAUSSMF)), (5, (TRIMF)), (5, (TRAPMF))}, where (X, Y) indicates an NFN with Y-type MFs and X-partitioned fuzzy input variables. From Table II, we may observe that the smallest (largest) training and test PIs were 15.47% (99.80%) and 88.64% (99.97%), respectively.

TABLE II – RESULTS FOR BEST E-ANFIS OVER FIFTY RUNS

Rules  MF type  MSE training  PI %    MSE test   PI %
9      i        0.0018194     99.25   0.0018100  97.17
9      ii       0.0016497     99.17   0.0021531  97.62
9      iii      0.0016289     99.16   0.0040565  98.73
9      iv       0.0016330     99.16   0.0025200  97.96
9      v        0.0013499     98.99   0.0026273  98.05
9      vi       0.0016304     99.16   0.0029873  98.28
9      vii      0.0070579     99.80   0.0090910  99.43
9      viii     0.0010096     98.65   0.2209194  99.97
16     i        0.0006547     97.92   0.0133696  99.61
16     ii       0.0011936     98.86   0.0012106  95.76
16     iii      0.0049338     99.72   0.0130061  99.60
16     iv       0.0033379     99.59   0.0068972  99.25
16     v        0.0005807     97.66   0.0205541  99.75
16     vi       0.0020938     99.35   0.0080501  99.36
16     vii      0.0002000     93.22   0.0017657  97.09
16     viii     0.0000941     85.59   0.0023744  97.84
25     i        0.0000293     53.74   0.0020718  97.52
25     ii       0.0000184     26.39   0.0017529  97.07
25     iii      0.0000160     15.47   0.0013905  96.31
25     iv       0.0000311     56.39   0.0011778  95.65
25     v        0.0000293     53.74   0.0020718  97.52
25     vi       0.0000301     54.94   0.0011015  95.35
25     vii      0.0000186     27.09   0.0004510  88.64
25     viii     0.0000257     47.28   0.0024722  97.92
E-ANFIS (Simple Average)    0.0001901  92.86*  0.0002354  78.24*
E-ANFIS (Weighted Average)  0.0000135  -----   0.0000512  -----

* Percentage relative to the E-ANFIS with weighted average.

Furthering this investigation, we carried out another 50 simulations to verify the influence of the size of the selection data set on E-ANFIS performance. Table III shows the results achieved by the best ensembles, and we may notice that this influence is negligible. For the 225-pair selection data set, E-ANFIS showed the largest MSE improvement, as its number of component NFNs passed from three to four.

TABLE III – RESULTS ACHIEVED BY VARYING THE SIZE OF THE SELECTION DATA SET

Size of the          E-ANFIS features                       MSE in the test data set
selection data set                                          Simple Avg   Weighted Avg
625 pairs of data    3 NFNs, with 5, 4, 5 MFs of types      0.0002354    0.0000512
                     (vii), (iii), and (vi)
225 pairs of data    4 NFNs, with 5, 4, 5, 5 MFs of types   0.0002426    0.0000374
                     (vii), (iii), (ii), and (vi)
121 pairs of data    3 NFNs, with 5, 4, 5 MFs of types      0.0002354    0.0000482
                     (vii), (iii), and (vi)
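For reference, the sinc target of this subsection and a uniform 11×11 training grid (our reading of the "121 uniformly distributed pairs") can be generated as:

import numpy as np

def sinc2(x, y):
    # z = (sin(x)/x) * (sin(y)/y), taking the limit value 1 at zero
    sx = np.where(x == 0, 1.0, np.sin(x) / np.where(x == 0, 1.0, x))
    sy = np.where(y == 0, 1.0, np.sin(y) / np.where(y == 0, 1.0, y))
    return sx * sy

g = np.linspace(-10.0, 10.0, 11)          # 11 points per axis -> 121 pairs
X, Y = np.meshgrid(g, g)
train = np.column_stack([X.ravel(), Y.ravel(), sinc2(X, Y).ravel()])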

B. Modeling a Three-Input Nonlinear Function

The training data set in this example was produced by

\mathrm{output} = \left(1 + x^{0.5} + y^{-1} + z^{-1.5}\right)^2

Each single component ANFIS (with a given membership function type) was set to contain either eight or 27 fuzzy rules, with two or three linguistic terms (membership functions) assigned to each input variable, respectively. The sampled input data space for training/selection was set to the range [1,6]×[1,6]×[1,6], with the training and selection data sets comprising, respectively, 216 uniformly distributed pairs and 216 randomly chosen pairs. The test data set was composed of 125 uniformly distributed samples in the input range [1.5,5.5]×[1.5,5.5]×[1.5,5.5]. All component NFNs were trained for 200 epochs. For each of the fifty selection data sets, we generated an ensemble. Table IV shows the mean MSE value, its standard deviation, and the minimum/maximum MSE values over the fifty E-ANFIS. We may observe that the number of component NFNs did not vary with the selection data set, attesting to the robustness of the selection criterion.

TABLE IV – E-ANFIS PERFORMANCE FOR FIFTY DISTINCT SELECTION DATA SETS

       Number of        Simple Average              Weighted Average
       component NFNs   MSE training   MSE test     MSE training   MSE test
Mean   1                0.0015026      0.0097641    0.0052724      0.0054280
SD     0                0              0            0.0011693      0.0001206
Min    1                0.0015026      0.0097641    0.0025190      0.0053114
Max    1                0.0015026      0.0097641    0.0080504      0.0060030

TABLE V – RESULTS FOR BEST E-ANFIS OVER FIFTY RUNS

Rules  MF type  MSE training  PI % (10^5)  MSE test   PI %
8      i        0.01003199    0.00036899   0.3958196  98.65
8      ii       0.00150269    -0.00321258  0.0097641  45.60
8      iii      0.00331757    -0.00090809  0.1536644  96.54
8      iv       0.01981911    0.00068059   0.3756125  98.58
8      v        0.01003199    0.00036899   0.3958196  98.65
8      vi       0.01378490    0.00054078   0.3170604  98.32
8      vii      0.00240201    -0.00163538  0.3043055  98.25
8      viii     0.00217098    -0.00191584  0.0562541  90.55
27     i        0.00000084    -7.45941823  4.9015580  99.89
27     ii       0.00000223    -2.82955572  3.8157850  99.86
27     iii      0.03630591    0.00082564   0.7143405  99.25
27     iv       0.00009619    -0.06480655  0.9655484  99.44
27     v        0.00000084    -7.45981567  4.9014900  99.89
27     vi       0.00274039    -0.00130996  0.2400819  97.78
27     vii      0.00293493    -0.00115685  0.0187887  71.73
27     viii     0.00000126    -5.00214562  0.6185281  99.14
E-ANFIS (Simple Average)    0.00150269  -0.00321258  0.0097641    45.60
E-ANFIS (Weighted Average)  0.00633023  -----        0.005311468  -----

Table V shows the configuration parameters for the best E-ANFIS over the fifty simulations. Some component ANFIS with 27 rules overfitted the training data sets (the negative sign indicates inferior performance), which incurred bad results on the test data sets. In all simulations, the generated E-ANFIS was formed by only one component NFN. The best ensemble configuration was {(2, (GAUSSMF))}, and its test PI over the best single NFN was 45.6% (due to the constant term and the weight factors).

C. Identification in Control Systems

This experiment is akin to the one carried out by Jang [5], in which ANFIS was applied to identify a non-linear component of a control system. The controlled plant is governed by the following difference equation:

y(k+1) = 0.3\, y(k) + 0.6\, y(k-1) + f(u(k)),

where u(k) and y(k) are, respectively, the plant input and output at instant k, and the unknown function f(·) has the form


f(u) = 0.6 \sin(\pi u) + 0.3 \sin(3\pi u) + 0.1 \sin(5\pi u)

In order to identify the plant, we employ a series-parallel model given by

\hat{y}(k+1) = 0.3\, \hat{y}(k) + 0.6\, \hat{y}(k-1) + F(u(k)),

where F(·) is the function to be provided by E-ANFIS. Each single component NFN (with a given membership function type) was set to contain from four to seven linguistic terms (MFs) assigned to each input variable. The input for the plant/model was uniformly distributed in the range [-1,1], generating 250 training pairs. For the selection data sets, the input was randomly chosen from the range [-1,1], while for the test data sets the input was u(k) = sin(2πk/50) for 0 ≤ k ≤ 500 and u(k) = 0.5 sin(2πk/250) + 0.5 sin(2πk/25) for 0 ≤ k ≤ 1000. All component NFNs were trained for 200 epochs. For each of the fifty selection data sets, we generated an ensemble. Table VI shows the mean MSE value, its standard deviation, and the minimum/maximum MSE values over the fifty E-ANFIS. Due to the low MSE values attained by E-ANFIS, the results are reported on a 10^-7 scale. As observed in the prior experiment, the number of component NFNs was not sensitive to the variation of the selection data sets.

TABLE VI – E-ANFIS PERFORMANCE FOR FIFTY DISTINCT SELECTION DATA SETS

       Number of        Simple Average              Weighted Average
       component NFNs   MSE training   MSE test     MSE training   MSE test
                        (10^-7)        (10^-7)      (10^-7)        (10^-7)
Mean   3                4.307012       5.517211     4.035924       4.892691
SD     0                0              0            0.047095       0.151525
Min    3                4.307012       5.517211     3.976288       4.618581
Max    3                4.307012       5.517211     4.173894       5.369257
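To make the identification setup concrete, a sketch of generating training data from the plant equations follows (the uniform random input and zero initial conditions are our assumptions):

import numpy as np

def f_true(u):
    # Non-linear component to be identified by E-ANFIS
    return (0.6 * np.sin(np.pi * u)
            + 0.3 * np.sin(3 * np.pi * u)
            + 0.1 * np.sin(5 * np.pi * u))

def simulate_plant(u):
    # y(k+1) = 0.3 y(k) + 0.6 y(k-1) + f(u(k)), zero initial conditions
    y = np.zeros(len(u) + 1)
    for k in range(1, len(u)):
        y[k + 1] = 0.3 * y[k] + 0.6 * y[k - 1] + f_true(u[k])
    return y

u_train = np.random.uniform(-1.0, 1.0, 250)          # 250 training inputs
pairs = np.column_stack([u_train, f_true(u_train)])  # (u(k), f(u(k))) pairs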

Due to space limitations, we will not display the comparative analysis between the component NFNs and the best E-ANFIS, whose configuration was {(7, (GAUSS2MF)), (6, (GAUSS2MF)), (7, (GBELLMF))}. Its training (test) PI over the best single NFN was 73.66% (67.50%).

D. Predicting Chaotic Dynamics

The prior experiments confirm that E-ANFIS may be employed to properly model highly non-linear functions. Here, we assess E-ANFIS on the prediction of a chaotic time series generated via the Mackey-Glass differential delay equation, defined as

\dot{x}(t) = \frac{0.2\, x(t - \tau)}{1 + x^{10}(t - \tau)} - 0.1\, x(t)

The aim is to anticipate the value of a future point at k = t + P, having as background the points generated up to k = t. The standard method for this type of prediction is to create a mapping from D points of the time series spaced Δ apart, that is, {x(t-(D-1)Δ), ..., x(t-Δ), x(t)}, to a predicted future value x(t+P). To obtain the time-series value at each integer point, we followed Jang's [5] methodology and applied the fourth-order Runge-Kutta method to find the numerical solution. The time step used in the method is 0.1, the initial condition is x(0) = 1.2, τ = 17, and x(t) is thus derived for 0 ≤ t ≤ 1200. (We assume x(t) = 0 for t < 0.)
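A sketch of this integration, holding the delayed term fixed within each Runge-Kutta step (a common simplification for this benchmark; the parameter values follow the text):

import numpy as np

def mackey_glass(n_steps, dt=0.1, tau=17.0, x0=1.2):
    # dx/dt = 0.2 x(t - tau) / (1 + x(t - tau)^10) - 0.1 x(t),
    # with x(0) = x0 and x(t) = 0 for t < 0.
    lag = int(round(tau / dt))            # delay expressed in steps
    x = np.zeros(n_steps + 1)
    x[0] = x0

    def deriv(xt, xlag):
        return 0.2 * xlag / (1.0 + xlag ** 10) - 0.1 * xt

    for k in range(n_steps):
        xlag = x[k - lag] if k >= lag else 0.0
        k1 = deriv(x[k], xlag)
        k2 = deriv(x[k] + 0.5 * dt * k1, xlag)
        k3 = deriv(x[k] + 0.5 * dt * k2, xlag)
        k4 = deriv(x[k] + dt * k3, xlag)
        x[k + 1] = x[k] + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return x

series = mackey_glass(12000)              # x(t) for 0 <= t <= 1200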