A prognostics enhanced reconfigurable control architecture



18th Mediterranean Conference on Control & Automation Congress Palace Hotel, Marrakech, Morocco June 23-25, 2010

A Prognostics Enhanced Reconfigurable Control Architecture Douglas Brown, Brian Bole, and George Vachtsevanos

Abstract— This paper introduces a control architecture incorporating prognostic information to extend the Remaining Useful Life (RUL) of a system while ensuring stability and performance requirements. A Model Predictive Control (MPC) framework is chosen to utilize the existing production controller by adjusting its reference points, thereby lowering its impact on the system. The MPC arrives at a control law by minimizing a quadratic cost function of control effort and tracking error while enforcing hard and soft constraints. Prognostic information is included in the cost function through the use of soft constraints, whereas absolute boundary conditions (e.g., input limits, tracking error) are incorporated as hard constraints. A relationship is provided between the terminal cost associated with the prognostic constraints and the quadratic costs. Asymptotic stability of the control architecture is demonstrated for the case of nominal system dynamics. The proposed fault-tolerant control design is applicable to a variety of application domains. An electro-mechanical actuator (EMA) is used to illustrate the efficacy of the proposed approach.

I. BACKGROUND

The emergence of complex and autonomous systems, such as modern aircraft, Unmanned Aerial Vehicles (UAVs), and automated industrial processes, among many others, is driving the development and implementation of new control technologies aimed at accommodating incipient failures and maintaining stable system operation for the duration of the emergency. The motivation for this research began in the area of avionics and flight control systems, with the purpose of improving the reliability and safety of aircraft. In the scope of this work, reliability is defined as,

Definition 1.1 (Reliability): The probability that a system will perform within specified constraints for a given period of time.

This paper is organized as follows. Section II reviews the state of the art for fault diagnosis, failure prognosis and fault-tolerant control architectures; Section III presents the proposed reconfiguration strategy; Section IV evaluates the feasibility of the proposed approach using an EMA example; and Section V highlights major accomplishments and future work.

This work was supported by the U.S. Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program. D. Brown is with the Georgia Institute of Technology, Atlanta, GA 30332 USA [email protected]. B. Bole is with the Georgia Institute of Technology, Atlanta, GA 30332 USA [email protected]. G. Vachtsevanos is a Professor Emeritus at the Georgia Institute of Technology, Atlanta, GA 30332 USA and Chief Scientist at Impact Technologies, LLC, Rochester, NY 14623 USA [email protected].

II. STATE OF THE ART

A. Fault Detection and Diagnosis (FDD)

Fault Detection and Diagnosis consists of two elements: fault detection and fault diagnosis. The goal of the fault detection element is to apply validated technologies to detect anomalies arising from adverse events throughout the system [1]. The fault diagnosis element, in turn, integrates and validates technologies to determine the causal factors, nature and severity of an adverse event (fault identification) and to distinguish that event within a family of potential adverse events (fault isolation) [1]. These definitions can be combined to arrive at,

Definition 2.1 (Fault Detection and Diagnosis [2]): The course of action by which a fault (or failure) is detected and later identified.

B. Failure Prognosis & Long-Term Prediction

978-1-4244-8092-0/10/$26.00 ©2010 IEEE


Several approaches to prognosis have been explored, such as model-based, data-driven and hybrid methods. In recent years particle filtering has been widely accepted as a popular approach to prognosis; detailed work on particle-filtering-based prognosis can be found in the literature by M. Orchard et al. [3]. In the scope of this paper,

Definition 2.2 (Prognosis [2]): The ability to predict accurately and precisely the remaining useful life (RUL) of a failing component or subsystem.

C. Fault-Tolerant Control (FTC) Strategies

Modern technological systems rely on sophisticated control systems to meet increased performance and safety requirements. In the scope of this paper, FTC is defined as,

Definition 2.3 (Fault-Tolerant Control [4]): Control systems that possess the ability to accommodate system component failures automatically [while] maintaining overall system stability and acceptable performance.

Traditionally, FTC systems are classified into two categories: passive and active [4]. Passive Fault-Tolerant Control Systems (PFTCS) are designed to make the closed-loop system robust against system uncertainties and anticipated faults [5]; for this reason PFTCS have a limited fault-tolerant capability. Alternatively, Active FTC Systems (AFTCS) react to system component failures by reconfiguring control actions to maintain stability and acceptable system performance. In such control systems, the controller compensates for the effects of faults by selecting a pre-computed control law or by synthesizing a new control scheme on-line. The


remainder of this section will focus on a specific class of AFTCS based on Finite Horizon Optimal Control (FHOC).

a) Finite Horizon Optimal Control: A control design using mathematical optimization methods over a finite time horizon, T, to derive control laws given a set of inequality constraints ψ. An optimal control includes a cost functional, J, that is a function of the state, x, the control input, u, a Lagrangian (or cost operator) L, and the endpoint cost Φ. Consider a generalized non-linear system,

  x(t + 1) = f_m(x(t), u(t), w(t), t)
  y(t) = h_m(x(t), u(t), v(t), t),    (1)

where the vectors w and v correspond to the process noise and observation noise, and f_m and h_m are non-linear mappings, respectively. A cost function, J, subject to the dynamics in (1) can be expressed as,

  J = Φ(x(t_0), t_0, x(t_0 + T), t_0 + T) + ∫_{t_0}^{t_0+T} L(x(t), u(t), t) dt,    (2)

with the imposed inequality constraint,

  ψ(x(t), u(t), t) ≤ 0.    (3)

b) Model Predictive Control (MPC): Control in which the current control action is obtained by solving on-line, at each sampling instant, a finite horizon open-loop optimal control problem, using the current state of the plant as the initial state; the optimization yields an optimal control sequence and the first control in this sequence is applied to the plant [6], [7]. MPC generates a discrete-time controller which takes action at regularly spaced, discrete time instants. The interval separating successive sampling instants is the sampling period, Δt. An illustration of the MPC sampling instants and corresponding control actions is provided in Fig. 1, where the latest measured output, y_k, and previous measurements, y_{k−1}, y_{k−2}, . . . , are known [8]. To calculate the next control input the controller operates in two phases, estimation and optimization [9]:

1) Estimation. The controller updates the true value of the controlled variable, y_k, and any internal variables that influence the future trend (i.e., y_{k+1}, . . . , y_{k+P}).

2) Optimization. Values of set points, measured disturbances, and constraints are specified over a finite horizon of future sampling instants, k + 1, k + 2, . . . , k + P, where P ∈ Z+. The controller computes M moves u_k, u_{k+1}, . . . , u_{k+M−1}, where 1 ≤ M ≤ P is referred to as the control horizon.

Fig. 1: MPC controller state at the k-th sampling instant. (The figure plots, over past sampling instants k−4, . . . , k−1 and future instants k+1, . . . , k+10: (a) the measured and estimated output with its setpoint and bound y_max over the prediction horizon P, and (b) the measured and estimated input with its bound u_max over the control horizon M.)

The MPC is obtained by solving the optimization problem,

  J = ∫ [(r − y)^T Q (r − y) + Δu^T R Δu] dt + ρ_ε ε²,    (4)

where the variables r, y and Δu correspond to the input reference, plant output and control correction, respectively. The weight matrices Q and R are defined a priori as the inverse of the

maximum allowable tracking error and control correction, respectively.

III. RECONFIGURATION STRATEGY

The main elements of the proposed low-level reconfigurable control architecture are shown in Fig. 2. The control architecture is comprised of two controllers: the original production controller and the reconfigurable controller.

A. Requirements

The objective of the reconfigurable controller is to trade off RUL against performance. This can only be achieved if a relationship exists between RUL and the internal states of the system. Therefore, new terminology needs to be defined to describe systems for which PHM-based reconfigurable control can be realized. First, recall the definitions of a controllable and observable system [10]. These notions can be extended to define desirable properties of a control system with respect to RUL. Consider the following propositions for a system that is RUL controllable and RUL observable,

Proposition 3.1 (RUL Controllability): A system is RUL controllable at time t_0 if there exists an input sequence u(t) ∈ U on the interval t ∈ [t_0, t_RUL] such that any initial RUL estimate t_RUL(t_0) can be driven to any desired RUL value, t_RUL(t_f), in a finite time interval 0 < t_f − t_0 < t_RUL(t_0).

Proposition 3.2 (RUL Observability): A system is RUL observable at time t_0 if for any initial state in the state space x(t_0) ∈ X and a given control sequence u(t) ∈ U defined on the interval t ∈ [t_0, t_RUL], the initial RUL estimate, t_RUL(t_0), can be determined.


Fig. 2: Flowchart of the low-level reconfigurable control architecture. (The production controller provides the nominal control during nominal operation; when the requested fault status indicates a detected fault, the state-space model (A, B, C) is restructured, the RUL and performance requirements are evaluated, and, once the requirements are satisfied, the reconfigured controller provides the reconfigurable control.)

Fig. 3: State space illustrating soft and hard constraints with respect to the slack variable ε: the hard boundary constraint x(t) ≤ ‖x_max‖, the soft boundary constraint x(t) ≤ x^RUL_max + εV_max, and the target boundary x(t) ≤ x^RUL_max. For simplicity, the negative boundaries are omitted (i.e., x_min = 0, x^RUL_min = 0).

B. Formulation

The scope of this paper will focus on a linear MIMO system represented by the state transition, control and observation matrices, A ∈ R^{n×n}, B ∈ R^{n×m} and C ∈ R^{p×n},

  ẋ = Ax + Bu
  y = Cx,    (5)

where u ∈ R^m, x ∈ R^n and y ∈ R^p correspond to the input, state and output vectors, respectively. The system is assumed to be controllable and observable. Next, recall the MPC cost function from (4). The terminal cost can be incorporated into the inequality constraint by substituting Δu_p with the following vector concatenation,

  z = [Δu_p^T, ε]^T.    (6)

The corresponding cost function can now be written as,

  J = ∫_{t_{k+1}} [(r − y)^T Q (r − y) + z^T R_z z] dt,    (7)

where R_z = diag(R, ρ_ε) ∈ R^{(p+1)×(p+1)} and the new inequality constraints are represented by,

  M_z z_p ≤ c_z.    (8)

Now, prognosis can be implicitly incorporated into the cost function as a soft constraint. Moreover, inequalities can be written to impose penalties on state values using the slack variable ε as prescribed by (9) and illustrated in Fig. 3,

  x(t) ∈ [x^RUL_min − εV^RUL_min, x^RUL_max + εV^RUL_max].    (9)
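To make the soft-constraint mechanism of (9) concrete, the following sketch minimizes a scalar version of the augmented cost — tracking error, control correction, and the quadratic slack penalty ρ_ε ε² — by brute-force search over the control. The plant gain b, the soft bound, and the weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

def soft_mpc_step(r, u_prev, b, q, rw, rho, u_soft):
    """One soft-constrained MPC step for the scalar plant y = b*u.
    Minimizes q*(r - b*u)^2 + rw*(u - u_prev)^2 + rho*eps^2,
    where eps = max(0, u - u_soft) is the soft-constraint violation."""
    u_grid = np.linspace(-2.0, 2.0, 40001)
    eps = np.maximum(0.0, u_grid - u_soft)
    J = q * (r - b * u_grid) ** 2 + rw * (u_grid - u_prev) ** 2 + rho * eps ** 2
    i = int(np.argmin(J))
    return u_grid[i], eps[i]

# Tracking r = 1 with b = 1: the unconstrained optimum u = 1 violates the
# soft bound u_soft = 0.5, so the slack activates; a larger rho (a "harder"
# soft constraint, cf. V^RUL -> 0) shrinks the violation.
u1, e1 = soft_mpc_step(r=1.0, u_prev=0.0, b=1.0, q=1.0, rw=0.01, rho=1.0, u_soft=0.5)
u2, e2 = soft_mpc_step(r=1.0, u_prev=0.0, b=1.0, q=1.0, rw=0.01, rho=100.0, u_soft=0.5)
```

With ρ_ε = 1 the optimizer accepts a sizable violation to track the reference; with ρ_ε = 100 the control settles just above the soft bound, mirroring how V^RUL trades constraint softness against tracking.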

C. Stability

One of the problems in MPC that has received increased attention over the years is guaranteeing closed-loop stability for the controlled system. The usual approach to ensure stability in MPC is to consider the value function of the MPC cost as a candidate Lyapunov function [11]. Then, if the system dynamics are continuous, classical Lyapunov stability theory [12] can be used to prove that the MPC control law is stable [13]. Lyapunov's direct method makes use of a Lyapunov function V(x). The conditions which V(x) must satisfy in order to be a Lyapunov function are as follows: (i) V(x) is continuous, (ii) V̇(x) is continuous, and (iii) V(x) is a positive definite function. Once a candidate Lyapunov function is found, stability can be addressed using Lyapunov's direct method,

Theorem 3.1 (Lyapunov's Direct Method [10]): If a positive definite function V(x) can be found such that V̇(x) is negative definite in the neighborhood Ω of the origin x_0, then the region in the neighborhood Ω is asymptotically stable.

Consider the candidate Lyapunov function,

  V(x, t) = (1/2)(x^T Q x + z^T R_z z).    (10)

Let the rate of the vector variable z be restricted such that ż = [δu_p^T, δ_ε]^T, where δu_p = [δu_1, δu_2, . . . , δu_p]^T and δ_ε = ‖x_max‖ − x^RUL_max. Next, let the control input be expressed as u = −kx, where k is the feedback gain determined from the MPC. Then, V̇(x, t) can be expressed as,

  V̇(x, t) = x^T Q (A − Bk) x + z^T R_z [δu_p^T, δ_ε]^T.    (11)

Invoking Theorem 3.1 using (10) and expanding the resulting algebraic quantity gives the following necessary condition for stability,

  x^T Q (A − Bk) x + Δu_p^T R δu_p + (ρ_ε ε) δ_ε < 0.    (12)

Notice, if R ≡ 0 and ρ_ε = 0, then the only necessary condition for stability is that (A − Bk) has all of its eigenvalues in the left-half plane. Similarly, the general MPC solution is shown to be stable if (12) holds. For a comprehensive overview of the stability of MPC in discrete time, refer to Mayne et al. [14].
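The remark following (12) — that with R ≡ 0 and ρ_ε = 0 stability reduces to (A − Bk) being Hurwitz — can be checked numerically. The matrices below are an illustrative second-order example, not the paper's plant or its MPC gain.

```python
import numpy as np

# Illustrative (assumed) open-loop dynamics and a feedback gain standing in
# for the MPC-derived gain k in u = -k x.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
k = np.array([[1.0, 1.0]])

# With R = 0 and rho_eps = 0, condition (12) reduces to the closed-loop
# matrix (A - Bk) having all eigenvalues in the open left-half plane.
Acl = A - B @ k
eigs = np.linalg.eigvals(Acl)
hurwitz = bool(np.all(eigs.real < 0.0))
```

Here the closed-loop eigenvalues are −1 and −3, so the Hurwitz test passes; replacing k with a destabilizing gain would flip `hurwitz` to False.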


D. Constraints / Weights


1) Weights: The weights for the optimal control problem are represented as Q, R and ρ_ε. The weights of each term relate to the emphasis placed on the system tracking error, the level of control reconfiguration, and prognosis, respectively.

2) Hard Constraints: The real-valued constraints y_min, y_max, u_min, u_max, Δu_min, and Δu_max set the absolute lower and upper bounds on the variables y, u and Δu_p, respectively.

3) Soft Constraints: The prognosis-based constraints on the internal states, x^RUL_min and x^RUL_max, are introduced as "soft" boundaries through the slack variable ε. Violations of the soft boundaries are introduced as a quadratic terminal cost in (4). The constants V^RUL_min and V^RUL_max are non-negative entries which represent the concern for relaxing the corresponding constraint; the larger V^RUL, the softer the constraint. For example, V^RUL = 0 implies that the constraint is a hard constraint that cannot be violated.

Fig. 4: Flowchart of MPC reconfiguration with soft-constraint adaptation. (The soft constraints x^RUL_min and x^RUL_max are initialized and t_RUL is evaluated; while t_RUL < t_mission, the soft constraints are updated until their bounds are reached; the MPC update is then performed and the performance is evaluated; if the performance is not satisfied, control is redistributed.)

E. RUL Adaptation

The functionality of the MPC routine is given by the flowchart in Fig. 4. The soft constraints x^RUL_min and x^RUL_max are initialized, the RUL of the failing motor is evaluated, and the RUL requirements are checked to assess whether the mission can be accomplished; if not, the soft constraints are updated to relax performance requirements in the MPC. Then, the MPC computes the next control sequence. After the control sequence is applied, the performance is evaluated and compared to the required performance. If the performance requirements are satisfied, the control sequence is reiterated. However, if the requirements are not satisfied, or the soft boundaries can no longer be adapted, a control redistribution algorithm is activated at the middle level of the control hierarchy.
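The adaptation loop of Fig. 4 can be sketched as a small routine. The callback names and the toy health model below are hypothetical; they only mirror the decision structure (relax soft constraints until the RUL requirement is met, run the MPC update, escalate to control redistribution when the soft bounds are exhausted or performance fails).

```python
def rul_adaptation_step(evaluate_rul, t_mission, can_relax,
                        relax_soft_constraints, mpc_update, performance_ok):
    """Sketch of the Fig. 4 loop (all callbacks are hypothetical hooks)."""
    # Relax the soft constraints until the RUL requirement is met.
    while evaluate_rul() < t_mission:
        if not can_relax():
            return "redistribute"          # soft-constraint bounds reached
        relax_soft_constraints()
    mpc_update()                           # compute next control sequence
    return "ok" if performance_ok() else "redistribute"

# Toy health model: each relaxation of the soft current bound buys 2 min of RUL.
state = {"rul": 5.0, "relaxations": 0}
result = rul_adaptation_step(
    evaluate_rul=lambda: state["rul"] + 2.0 * state["relaxations"],
    t_mission=9.0,
    can_relax=lambda: state["relaxations"] < 3,
    relax_soft_constraints=lambda: state.__setitem__(
        "relaxations", state["relaxations"] + 1),
    mpc_update=lambda: None,
    performance_ok=lambda: True)
```

In this toy run two relaxations lift the projected RUL from 5 to 9 minutes, after which the MPC update proceeds; exhausting the three allowed relaxations would instead trigger redistribution at the middle level.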


IV. EVALUATION

A. Thermal Model

In the case of the brushless DC motor, the winding temperature is related to the power loss in the copper windings, assuming the copper losses are the primary source of power loss. A first-order thermo-electrical model can be used to describe the relationship between the power loss in the copper windings and the winding-to-ambient temperature [15], [16], represented as T_wa and defined as,

  T_wa = T_w − T_a,    (13)

where the symbols T_w and T_a correspond to the winding temperature and ambient temperature, respectively. The symbols C_wa and R_wa refer to the collective thermal capacitance and thermal resistance of the windings, accordingly. The equivalent state-space representation can be written as,

  Ṫ_wa = −(1/(R_wa C_wa)) T_wa(t) + (1/C_wa) P_loss(t),    (14)

with the explicit solution,

  T_wa(t) = (1/C_wa) ∫_0^t e^{−(t−τ)/(R_wa C_wa)} P_loss(τ) dτ.    (15)

B. Prognosis Model

The electrical endurance qualities of insulation materials are affected by temperature and time. In 1930, Montsinger [17] introduced the concept of the ten-degree rule. This rule states that the thermal life of insulation is halved for each increase of 10 °K in the exposure temperature. Using this concept, the life of insulation aged at elevated temperatures was expressed as follows [18],

  L = L_0 exp(E_a / (k_B T_w)),    (16)

where L is the life in units of time (hr), L_0 a constant of proportionality, E_a the activation energy (eV), T_w the winding temperature (°K), and k_B = 8.617 × 10^{−5} (eV/°K) the Boltzmann constant. Insulation systems are rated by standard NEMA (National Electrical Manufacturers Association) classifications according to maximum allowable operating temperatures, provided in Table I.

TABLE I: NEMA insulation classes by maximum allowed operating temperature (°C): 105, 130, 155, 180.

The percentage of life remaining, L_plr(t), can be traced by accumulating the ratio of RUL during each operating point with the operating temperature,

  L_plr(t) = 1 − (1/L_0) ∫_0^t exp(−E_a / (k_B T_w(τ))) dτ.    (17)
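Equations (16) and (17) can be exercised numerically. The sketch below uses the paper's k_B and the Table II activation energy E_a = 0.7 eV, with L_0 = 1 assumed purely for scale; it checks that the exponential reproduces Montsinger's ten-degree halving near 340 °K, and that accumulating (17) at a constant temperature for exactly L hours exhausts the life.

```python
import math

KB = 8.617e-5   # Boltzmann constant (eV/K), as in Eq. (16)
EA = 0.70       # activation energy (eV), Table II
L0 = 1.0        # proportionality constant (assumed scale, hours)

def life_hours(t_w):
    """Arrhenius insulation life, Eq. (16): L = L0 * exp(Ea / (kB * Tw))."""
    return L0 * math.exp(EA / (KB * t_w))

def life_remaining(temps_k, dt):
    """Percentage of life remaining, Eq. (17), by discrete accumulation."""
    used = sum(math.exp(-EA / (KB * t)) * dt for t in temps_k) / L0
    return 1.0 - used

# Ten-degree rule: near 340 K a 10 K rise roughly halves the life.
halving = life_hours(340.0) / life_hours(350.0)

# Holding Tw constant for exactly life_hours(Tw) consumes the whole life.
t_w = 340.0
remaining = life_remaining([t_w] * 10, life_hours(t_w) / 10.0)
```

With these constants the 10 °K halving ratio comes out near 2 around 340 °K; at higher temperatures the same 10 °K step yields a smaller ratio, which is why the ten-degree rule is an approximation of the Arrhenius law rather than an exact equivalent.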


Now, the RUL of the windings can be determined for a specific winding temperature by multiplying (16) by L_plr,

  L(t) = L_0 exp(E_a / (k_B T_w(t))) L_plr(t).    (18)

Finally, this expression can be used to project the RUL of the motor windings for a specific commanded input.

C. Actuator Model

A high-fidelity 5th-order state-space model was developed to represent the higher-order dynamics of a closed-loop actuator position controller. Eq. (5) was used to represent the actuator dynamics. The internal state x = [i_m, θ_m, ω_m, θ_l, ω_l]^T ∈ R^5 is defined by the motor current, motor position, motor speed, load position and load speed; the control input u = [θ_ref, T_m, T_load]^T ∈ R^3 is defined by the reference position, external motor torque (i.e., friction) and external load disturbance; and the control output y_m = [θ_l, i_m]^T ∈ R^2 is defined by the load position and motor current. The transition matrix, A ∈ R^{5×5}, is defined as

  A = [ −R_tt/L_tt    −k_p1 k_p2/L_tt    −(k_e + k_p1)/L_tt    0                        0
        0             0                  1                     0                        0
        k_t/J_m       −k_cs/(J_m N_cm²)  −b_m/J_m              k_cs N_cl/(J_m N_cm)     0
        0             0                  0                     0                        1
        0             k_cs N_cl/(J_l N_cm)  0                  −(k_l + k_cs N_cl²)/J_l  −b_l/J_l ].    (19)

The control and observation matrices B ∈ R^{5×3} and C ∈ R^{2×5} are provided in (20) and (21), respectively,

  B = [ k_p1 k_p2/L_tt    0    0        0    0
        0                 0    −1/J_m   0    0
        0                 0    0        0    −1/J_l ]^T,    (20)

  C = [ 0    0    0    1    0
        1    0    0    0    0 ].    (21)



D. Simulation Results

The time evolution of the turn-to-turn winding faults under different operating conditions was simulated in Simulink using (18) with the modeling parameters in Table II. RUL estimates were generated for different motor currents, with the initial fault condition set to L_plr = 5% for each instance. The expected RUL computed for each operating condition is provided in Table III. Notice that as the operating current decreases, the estimated RUL increases; the expected RUL is inversely related to the magnitude of the operating current. Thus, the RUL can be





TABLE II: Modeling parameters used in the simulation.

Symbol | Value | Units | Description
b_l | 2.50 × 10⁻¹ | in·lbf/rad/s | Load damping
b_m | 1.00 × 10⁻⁴ | in·lbf/rad/s | Motor damping
k_B | 8.62 × 10⁻⁵ | eV/°K | Boltzmann's const.
k_cs | 1.00 × 10⁵ | rad/rad | Coupling stiffness
k_e | 1.10 × 10⁻¹ | V/rad/s | Back-emf coef.
k_l | 2.00 × 10⁻³ | in·lbf/rad | Load stiffness
k_p1 | 1 | V/rad/s | Controller gain
k_p2 | 1 | s⁻¹ | ″
k_p3 | 100 | rad/rad | ″
k_t | 1.01 | in·lbf/A | Motor torque coef.
C_wa | 5.00 × 10⁻⁵ | W/°K/s | Thermal cap.
E_a | 7.00 × 10⁻¹ | eV | Activation energy
J_l | 2.00 × 10⁻³ | in·lbf·s² | Load inertia
J_m | 2.10 × 10⁻³ | in·lbf·s² | Motor inertia
L_tt | 3.00 × 10⁻⁴ | H | Turn-to-turn ind.
N_cl | 1 | – | Load coupling
N_cm | 1 | – | Motor coupling
R_0 | 1.60 × 10⁻¹ | Ω | Nominal res.
R_tt | 1.60 × 10⁻¹ | Ω | Turn-to-turn res.
R_wa | 7.50 × 10⁻¹ | °K/W | Thermal res.

TABLE III: Expected RUL for different operating currents.

Current (A) | 20 | 25 | 30 | 35 | 40
Expected RUL [min] | 2200 | 310 | 41.0 | 6.00 | 0.90
RUL Increase (relative to 40 A) | 2444 | 344.4 | 45.56 | 6.667 | 1.000

extended by reducing the operating current. The MPC controller discussed earlier takes advantage of this relationship by reducing the operating-current magnitude based on the RUL requirement. The degree of relaxation depends on the weight matrices chosen during the controller design phase. To demonstrate the feasibility of the approach, the MPC toolbox in MATLAB was used to expedite the design process. The MPC controller contained the variables listed in Table IV. In addition, each constraint has an associated cost as defined in Table V. Results for three different fault scenarios were generated using the MPC with the control parameters in Table II and the corresponding boundaries and weights defined in Tables IV and V. The results are provided in Fig. 5. Notice that as the RUL is reduced (left to right), the MPC places more emphasis on reducing the magnitude of the motor current. As a consequence, the rise time of the actuator position increases while the RUL is extended.
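The inverse relationship between operating current and expected RUL in Table III follows from chaining the thermal model (14) with the Arrhenius life (16): copper loss grows with i², which raises the steady-state winding temperature and shrinks the life exponentially. A steady-state sketch of that chain (the ambient temperature and L_0 scale are assumptions, so the numbers are illustrative, not a reproduction of Table III):

```python
import math

KB, EA, L0 = 8.617e-5, 0.70, 1.0   # Boltzmann const., activation energy, scale
RWA = 0.75                          # thermal resistance (K/W), Table II
R_W = 0.16                          # winding resistance (Ohm), Table II
T_AMB = 300.0                       # ambient temperature (K), assumed

def expected_rul(i_amps):
    p_loss = i_amps ** 2 * R_W             # copper loss in the windings
    t_w = T_AMB + RWA * p_loss             # steady state of Eq. (14) plus ambient
    return L0 * math.exp(EA / (KB * t_w))  # Arrhenius life, Eq. (16)

ruls = [expected_rul(i) for i in (20, 25, 30, 35, 40)]
```

The computed list decreases sharply and non-linearly with current, which is the relationship the MPC exploits when it lowers the current magnitude to meet an RUL requirement.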







TABLE IV: MPC controller variables used in the simulation.

Symbol | Type | Constraint | Min | Max | Description
θ_ref | Input | Hard | −120° | 120° | Reference pos.
θ_l | Output | Hard | −120° | 120° | Actuator pos.
i_m | Output | Soft | −40 A | 40 A | Motor current

Fig. 5: Simulation results for the reconfigurable control with (a) L_plr = 10%, (b) L_plr = 5% and (c) L_plr = 1%.

TABLE V: Cost function weighting factors used in the simulation.

Symbol | Description | Weight | DoS
θ_ref − θ_l | Position Error | [(30/π)/100]² | 0
i_m | Motor Current | [1/40]² | 0
ε | Soft Boundary Violation | 1 | 1

V. CONCLUSION AND FUTURE WORK

Fault-tolerant and reconfigurable control strategies for improved critical-system reliability and survivability under fault/failure conditions have attracted the attention of the controls community in recent years. To apply these technologies it is essential that the system health status be monitored continuously and incipient failures be tracked, so that remedial action can be taken as soon as possible to assure system safety. Control reconfiguration at the component level constitutes the first level of the hierarchical framework for fault tolerance. The reconfigurable control framework was evaluated using an EMA Simulink model, and the results acquired from the simulation demonstrated the feasibility of the approach. Finally, complexity issues must be addressed for specific application domains. Other modules of the integrated fault-tolerant control hierarchy, such as control redistribution and mission adaptation, are not addressed in this paper, but they contribute significantly towards the development of high-confidence systems.

VI. ACKNOWLEDGMENTS

This research was made with Government support under and awarded by DoD, Air Force Office of Scientific Research, National Defense Science and Engineering Graduate (NDSEG) Fellowship, 32 CFR 168a.

REFERENCES

[1] A. N. Srivastava, R. W. Mah, and C. Meyer, "Integrated vehicle health management – automated detection, diagnosis, prognosis to enable mitigation of adverse events during flight," National Aeronautics and Space Administration, Technical Plan Version 2.02, December 2008.

[2] G. Vachtsevanos, F. Lewis, M. Roemer, A. Hess, and B. Wu, Intelligent Fault Diagnosis and Prognosis for Engineering Systems. Hoboken, NJ, USA: John Wiley & Sons, 2006, ISBN 978-0-471-72999-0.
[3] M. Orchard, "A particle filtering-based framework for on-line fault diagnosis and failure prognosis," Ph.D. dissertation, School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332 USA, November 2007.
[4] Y. Zhang and J. Jiang, "Bibliographical review on reconfigurable fault-tolerant control systems," Annual Reviews in Control, vol. 32, no. 2, pp. 229–252, March 2008.
[5] J. S. Eterno, J. L. Weiss, D. O. Looze, and A. S. Willsky, "Design issues for fault-restructurable aircraft control," in IEEE Conference on Decision and Control, 1985, pp. 900–905.
[6] W. H. Kwon, A. N. Bruckstein, and T. Kailath, "Stabilizing state feedback design via the moving horizon method," International Journal of Control, vol. 37, no. 3, pp. 631–643, 1983.
[7] W. Kwon and A. Pearson, "A modified quadratic cost problem and feedback stabilization of a linear system," IEEE Transactions on Automatic Control, vol. 22, no. 5, pp. 838–842, October 1977.
[8] D. Q. Mayne and H. Michalska, "Receding horizon control of nonlinear systems," IEEE Transactions on Automatic Control, vol. 35, no. 7, pp. 814–824, 1990.
[9] C. E. García, D. M. Prett, and M. Morari, "Model predictive control: Theory and practice – a survey," Automatica, vol. 25, no. 3, pp. 335–348, May 1989.
[10] W. L. Brogan, Modern Control Theory, 3rd ed. Englewood Cliffs, NJ: Prentice-Hall, 1991, ISBN 0-13-589763-7.
[11] M. Lazar, "Model predictive control of hybrid systems: Stability and robustness," Ph.D. dissertation, Technische Universiteit Eindhoven, September 2006.
[12] R. E. Kalman and J. E. Bertram, "Control system analysis and design via the second method of Lyapunov, II: Discrete-time systems," ASME Journal of Basic Engineering, vol. 82, pp. 394–400, 1960.
[13] S. S. Keerthi and E. G. Gilbert, "Optimal, infinite horizon feedback laws for a general class of constrained discrete time systems: Stability and moving-horizon approximations," Journal of Optimization Theory and Applications, vol. 57, no. 2, pp. 265–293, 1988.
[14] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert, "Constrained model predictive control: Stability and optimality," Automatica, vol. 36, no. 6, pp. 789–814, 2000.
[15] L. U. Gokdere, A. Bogdanov, S. L. Chiu, K. J. Keller, and J. Vian, "Adaptive control of actuator lifetime," in IEEE Aerospace Conference, March 2006.
[16] H. Nestler and P. K. Sattler, "On-line estimation of temperatures in electrical machines by an observer," Electric Power Components and Systems, vol. 21, no. 1, pp. 39–50, January 1993.
[17] V. M. Montsinger, "Loading transformers by temperature," Transactions of the American Institute of Electrical Engineers, vol. 32, 1913.
[18] A Review of Equipment Aging Theory and Technology, EPRI NP-1558 Std., 1980.



