A guide to dynamical analysis


A Guide to Dynamical Analysis

PAUL E. RAPP

Department of Physiology, The Medical College of Pennsylvania, Philadelphia, PA

Abstract--The number and variety of methods used in dynamical analysis have increased dramatically during the last fifteen years, and the limitations of these methods, especially when applied to noisy biological data, are now becoming apparent. Their misapplication can easily produce fallacious results. The purpose of this introduction is to identify promising new methods and to describe safeguards that can be used to protect against false conclusions.

Introduction

Dynamical analysis offers the prospect of bringing powerful analytical methods to the analysis of experimental data. Its power lies in its generality. It is equally applicable to economics, psychology, physics and biology. Power and generality carry a price. In this case the price is the potential for misapplication. Used naively, these methods routinely produce erroneous results. The magnitude of this potential for computational mischief should not be underestimated. This is particularly the case in applications to biology and psychology (Ruelle, 1990; Rapp, 1993). Errors of application do not result in acceptable quantitative inaccuracies. Rather, large scale qualitative errors can result in fallacious conclusions.

For example, the application of recently developed computational safeguards makes it clear that the case for chaotic behavior in neurophysiological systems is shrinking rather than growing. At present, there is, in my view, no convincing evidence of chaotic behavior in the mammalian central nervous system. This includes, it should be noted, previously published results from my laboratory (Rapp, et al., 1989). This does not mean that chaos is not present. Improved methods may well provide a convincing demonstration, but the evidence now presented in support of this conclusion is increasingly seen to be defective.

The emphasis on chaotic behavior has been both unfortunate and unnecessary. Independently of the specific issue of chaos, dynamical analysis has much to offer investigators who want to measure temporal and spatial changes in complex dynamical behavior. The importance of these analytical technologies will grow, and they will become an essential complement to classical procedures. In this article, I will present an outline of the methodology of dynamical analysis with references to the primary literature. This is a very incomplete review of the literature. Given the pace of current research, it is already out of date.
Though the generalization of these methods to multivariate data is now an important research area, the discussion presented here will be limited to the analysis of single channel data. Along the way, I will point out some commonly encountered problems. In most instances these are mistakes that I have personally tested at considerable cost in time, effort and embarrassment. The article is divided into three sections. A discussion of general procedures for dynamical analysis is given in the first section. The second section describes several different dynamical measures. The interpretation of the results is discussed in the third section.

Address for correspondence: Paul E. Rapp, Dept. of Physiology, The Medical College of PA, 3300 Henry Ave., Philadelphia, PA 19129.

Integrative Physiological and Behavioral Science, July-September, 1994, Vol. 29, No. 3, 311-327.

General Procedures for Dynamical Analysis

Never Skimp on Housekeeping

Thirty percent of research is housekeeping. Data must be scrupulously documented. It is essential to check data before and after transfer between computers. Compatibility of data packaging protocols should be checked. The first few data sets should be taken through the entire analysis path before large-scale data acquisition begins. Gain settings and digitizer frequencies that seem reasonable at the beginning of an experiment are often found to be unacceptable when analysis begins. I know of an entire summer's data that were unusable because of a mis-set amplifier gain. This was discovered only months later by the group's mathematician, who found that meaningful analysis was impossible at the lower gain setting.

Will the proposed analysis be feasible with noisy data? To address this question, the experiments should be simulated numerically. This should include computational experiments in which artificial data are corrupted by progressively increased noise levels. Most digitizers have an analog-out option. This should be used to test the experimental apparatus. Use the computer to send some well-known signal, for example the Rössler attractor, through the sequence of filters, amplifiers and digitizers that will be used in the experiment. Data analysis should then be performed. If the results are not consistent with the known properties of the test signal, you've got a problem. The analog output should also be used to send a random time series to your apparatus. If the analysis of the results gives a false-positive indication of meaningful dynamical structure, you've got an even bigger problem. The filters present in EEG systems are ideally suited for generating precisely this kind of disaster.
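A test signal of the kind suggested above can be generated in a few lines. The following sketch integrates the Rössler system with a fixed-step Runge-Kutta scheme and returns the x component; the function name, initial condition, step size and parameter values are my own illustrative choices, not part of the original text:

```python
import math

def rossler_series(n, dt=0.05, a=0.2, b=0.2, c=5.7):
    """Integrate the Rossler system with fixed-step RK4 and return
    the x component as a known test signal for the analysis path."""
    def f(s):
        x, y, z = s
        return (-y - z, x + a * y, b + z * (x - c))
    s = (1.0, 1.0, 1.0)
    out = []
    for _ in range(n):
        k1 = f(s)
        k2 = f(tuple(s[i] + 0.5 * dt * k1[i] for i in range(3)))
        k3 = f(tuple(s[i] + 0.5 * dt * k2[i] for i in range(3)))
        k4 = f(tuple(s[i] + dt * k3[i] for i in range(3)))
        s = tuple(s[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6.0
                  for i in range(3))
        out.append(s[0])
    return out
```

The resulting series can be written to the digitizer's analog output and then taken through the full analysis chain.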

Test for Digitizer Saturation

Determine the maximum and minimum value of the signal and the number of times that each appears. If, for example, the maximum value 2048 appears several hundred times, it's clear that peaks were cut by a saturated 12-bit digitizer. The range of the digitized signal should also be determined. Is the resolution of the dynamic range sufficient to meet the demands of the analysis? This should be checked against simulated experiments.
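The saturation check described above amounts to counting how often the record's extreme values occur; `check_saturation` and its threshold are hypothetical names chosen for illustration:

```python
def check_saturation(samples, max_hits=10):
    """Flag possible digitizer clipping: many repeats of the exact
    maximum or minimum value of the record suggest cut peaks."""
    lo, hi = min(samples), max(samples)
    n_lo = samples.count(lo)
    n_hi = samples.count(hi)
    return {"min": lo, "max": hi, "min_count": n_lo, "max_count": n_hi,
            "suspect": n_lo >= max_hits or n_hi >= max_hits}
```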

Visually Inspect the Data

The data should be checked for spikes (some digitizers introduce artifactual spikes into the original signal) and silent intervals in which the signal was lost. Both of these are discouragingly common problems.

Determine the Natural Time Scale of the Signal

Is the digitizer frequency high enough? This question can be addressed by calculating the autocorrelation function. A good measure of a signal's time scale would be the time to the first minimum of the autocorrelation function. There are, however, practical difficulties associated with this measurement. When noisy data are examined, the first minimum may be a local minimum that gives an unrealistically short value. Conversely, the global minimum may give an unreasonably long estimate. Because of these problems, many investigators use the autocorrelation time, the time for the autocorrelation function to drop to 1/e of its original value, as a measure of the signal's time scale. The autocorrelation time has been found to be more robust to noise and to the length of the data set, and can give a useful guide to the minimum acceptable digitizer rate. When the experiment is being performed, the maximum sustainable digitizer rate should be used. It should be noted, however, that over-sampling can produce misleading results. Over-sampling can, and should, be corrected during analysis. The autocorrelation function can be determined by direct integration of its defining integral or via the Fourier transform using the Wiener-Khinchin theorem. Calculation of the autocorrelation function should not be done blindly. In particular, the data should be zero padded to minimize the effects of nonperiodicity in the data (Press, et al., 1986). The extrema of the mutual information function (Fraser, 1989a; Fraser and Swinney, 1986) provide an independent measure of the natural time scale of the signal and can be compared against the autocorrelation results.
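A minimal sketch of the autocorrelation-time estimate follows. It uses the direct sum rather than the zero-padded FFT route recommended above, which is acceptable only for short illustrative records; the function name is my own:

```python
import math

def autocorrelation_time(x):
    """Return the lag (in samples) at which the normalized
    autocorrelation first drops to 1/e of its zero-lag value.
    Direct O(N^2) sum; use the zero-padded FFT route of
    Press, et al. (1986) for long records."""
    n = len(x)
    mean = sum(x) / n
    d = [v - mean for v in x]
    c0 = sum(v * v for v in d) / n
    target = c0 / math.e
    for lag in range(1, n):
        c = sum(d[i] * d[i + lag] for i in range(n - lag)) / n
        if c <= target:
            return lag
    return None
```

For a sine wave of period P samples, the estimate falls near P * acos(1/e) / (2 pi), roughly 0.19 P.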

Determine the Spectral Properties of the Signal

Near the end of his life Schoenberg observed that there "was a lot of great music waiting to be written in C-major." In a similar spirit, nonlinear dynamicists should not underestimate the importance of spectral analysis. Spectral measures are central to all areas of physics and biology. Clinical applications include the analysis of EEG signals from schizophrenics (Schellenberg, et al., 1992; Nagase, et al., 1992) and patients presenting senile dementia of Alzheimer's type (Sloan and Fenton, 1993; Kuskowski, et al., 1993). Both power versus frequency and log power versus log frequency spectra should be examined. In the case of EEGs where there is a substantial history of empirical examination of spectra, the fraction of spectral power in the alpha band and the frequency of the band with maximum spectral power should be determined. A "close the eyes and push the FFT button" approach to the calculation of spectra can produce unfortunate results. Data should be appropriately windowed (Press, et al., 1986). Calculations with a Parzen window and a Welch window should be compared. (Unfortunately the term "window" is used with a completely different definition in the context of embedding data.) Power spectrum estimation using maximum entropy spectral analysis (Childers, 1978) is particularly effective in resolving sharp peaks in spectra and should be considered if these features are likely to be present.
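The windowed periodogram comparison suggested above can be sketched as follows. The Parzen (triangular) and Welch (parabolic) windows are written out as defined in Press, et al. (1986), and a direct DFT keeps the example self-contained; an FFT would be used on real data:

```python
import cmath

def welch_window(n):
    # Parabolic window (Press, et al., 1986).
    return [1.0 - ((j - 0.5 * (n - 1)) / (0.5 * (n + 1))) ** 2
            for j in range(n)]

def parzen_window(n):
    # Triangular window, as defined in Press, et al. (1986).
    return [1.0 - abs((j - 0.5 * (n - 1)) / (0.5 * (n + 1)))
            for j in range(n)]

def power_spectrum(x, window):
    """Windowed periodogram of a mean-subtracted record via a
    direct DFT (fine for short illustrative records only)."""
    n = len(x)
    w = window(n)
    mean = sum(x) / n
    xw = [(x[j] - mean) * w[j] for j in range(n)]
    spec = []
    for k in range(n // 2 + 1):
        s = sum(xw[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n))
        spec.append(abs(s) ** 2)
    return spec
```

Comparing the two windowed spectra, as recommended, guards against window-dependent artifacts: a genuine spectral peak should appear in the same bin under both windows.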

Test for Stationarity in the Time Domain

Most dynamical measures include an assumption of stationarity in their definitions, and most investigators, including myself, routinely ignore these conditions to their sorrow. Nonstationarity is a defining characteristic of biological signals. The nonstationarity of the EEG, for example, is well-known (reviewed by: Ferber, 1987; Jensen and Cheng, 1988; Lopes da Silva, 1978). Segmentation techniques (Praetorius, et al., 1977; Bodenstein and Praetorius, 1972; Hasman, et al., 1978; Michael and Houchin, 1978; Barlow, et al., 1981; Barlow, 1985) with their associated uncertainties thus become another element in the list of technical problems that should be addressed. Though there is no completely satisfactory resolution of this problem, it still seems helpful to see if the time series under examination satisfies a criterion of spectral stationarity based on the autocorrelation function (Michael and Houchin, 1979). In most instances the answer is no, and the analysis will proceed anyway. Are spectral measures and classical statistical measures the appropriate criteria for assessing the stationarity of nonlinear systems? Most particularly, are these appropriate criteria if the final goal is to make quantitative measures of embedded data? Alternative criteria will be discussed in a subsequent section.

Construct Phase Portraits of the Data With Different Lags

Let {v1, v2, ..., vNDATA} denote the time series. A two-dimensional phase portrait is the diagram of connected (x, y) pairs formed by plotting (x, y) = (vj, vj+L), where L is called the lag. These diagrams give a quick visual indication of the two-dimensional geometry of the time series. It is often of interest to compare phase portraits obtained under different experimental conditions. Because of the uncertainties of interpretation, the appearance of differences in the portraits is not a definitive indication of significant change in the dynamical system that generated the signal. It is, however, mildly encouraging.
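Generating the point set for such a portrait is a one-line operation; `phase_portrait_pairs` is an illustrative name:

```python
def phase_portrait_pairs(v, lag):
    """Return the (x, y) = (v[j], v[j+lag]) pairs that, plotted and
    connected in order, form a two-dimensional phase portrait."""
    return [(v[j], v[j + lag]) for j in range(len(v) - lag)]
```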

Find an Appropriate Embedding for the Data

An N-dimensional embedding of the time series is the set of points {xj} in R^N formed by the rule:

xj = (vj, vj+L, vj+2L, ..., vj+(N-1)L)

Dynamical analysis can be defined operationally as the analysis of the geometrical properties of the set {xj}. The procedure was introduced by Takens (1980), Mañé (1980) and Packard, et al. (1980). These results have been reviewed by Noakes (1991) and extended by a number of investigators including Sauer, Yorke and Casdagli (1991). The choice of embedding parameters L and N is crucial to the subsequent analysis. Failure to construct a proper embedding can result in false-positive indications of dynamical structure, or in the unnecessary failure to resolve low-dimensional structures present in the data. In the case of the correlation dimension measured by the Grassberger-Procaccia algorithm (Grassberger and Procaccia, 1983), it was shown (Albano, et al., 1988) that the crucial parameter is not N and L separately but the embedding window (N-1)L. There appear, however, to be two distinct types of embedding problem, since Kaplan and Glass (1992, 1993) have presented a measure of deterministic structure that, given a sufficiently large embedding dimension, is sensitive only to the lag. It is therefore necessary to consider both the data and the measure when constructing an embedding. There is, at present, no general consensus concerning the best procedure for constructing an embedding. Some of the many possibilities have been reviewed by Abarbanel, et al. (1993) and by Buzug and Pfister (1992). The candidate procedures are:

a. The autocorrelation function (Albano, et al., 1988).
b. Mutual information (Fraser, 1989a; Martinerie, et al., 1992).
c. Higher order autocorrelation functions (Albano, et al., 1991).
d. False nearest neighbors and its variants (Abarbanel and Kennel, 1993; Kennel, Brown and Abarbanel, 1992; Gao and Zheng, 1993; Liebert and Schuster, 1988; Schuster, 1989).
e. Legendre polynomials (Gibson, et al., 1992).
f. Continuity statistic (Wayland, et al., 1993).
g. Diagonal expansion (Rosenstein, et al., 1993b).
h. Singular value spectra (Fraser, 1989b; Kember and Fowler, 1993).

Resolution of uncertainties about embedding methodology is a primary activity in the dynamical analysis community.
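Whatever procedure selects N and L, the embedding rule itself is mechanical; `delay_embed` is an illustrative name:

```python
def delay_embed(v, dim, lag):
    """Delay embedding: map the scalar series v into the points
    x_j = (v_j, v_{j+L}, ..., v_{j+(N-1)L}) in R^dim."""
    span = (dim - 1) * lag
    return [tuple(v[j + k * lag] for k in range(dim))
            for j in range(len(v) - span)]
```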

Construct Phase Portraits in the Principal Component Space

Once the data are embedded, it is possible to examine the phase portraits of the principal components. The motivation is the same as for the previous class of phase portraits. Let E, the embedding matrix, be the matrix formed by the rows xj previously defined. The singular value decomposition of E is determined using the Golub-Reinsch algorithm (Golub and Reinsch, 1970, 1971):

E = VDU^T

D is the diagonal matrix of singular values, ordered such that Dj >= Dj+1, and U is the associated orthogonal transformation. Let E' = EU. The first two columns of matrix E' contain the first two principal components. The connected (x, y) plot of (x, y) = (E'j1, E'j2), j = 1, ..., NDATA, gives a two-dimensional phase portrait of the principal components of the signal. As used here, the singular value decomposition is a noise reduction procedure. Phase portraits in the principal component coordinates can often give a cleaner idea of the parameter-dependent changes in dynamical behavior.
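A minimal pure-Python sketch of the projection follows. Power iteration on the covariance matrix of the embedded points stands in for the full Golub-Reinsch singular value decomposition of the text; the function names and iteration count are my own assumptions:

```python
def principal_component_portrait(points, iters=200):
    """Project embedded points onto their first two principal
    directions.  Power iteration with one deflation replaces the
    full SVD for this illustration."""
    n, d = len(points), len(points[0])
    mean = [sum(p[k] for p in points) / n for k in range(d)]
    c = [[p[k] - mean[k] for k in range(d)] for p in points]
    cov = [[sum(row[i] * row[j] for row in c) / n for j in range(d)]
           for i in range(d)]

    def top_eigvec(m):
        v = [1.0] * d
        for _ in range(iters):
            w = [sum(m[i][j] * v[j] for j in range(d)) for i in range(d)]
            norm = sum(x * x for x in w) ** 0.5
            if norm < 1e-12:
                break
            v = [x / norm for x in w]
        return v

    u1 = top_eigvec(cov)
    lam1 = sum(u1[i] * sum(cov[i][j] * u1[j] for j in range(d))
               for i in range(d))
    # Deflate the leading direction, then find the second one.
    defl = [[cov[i][j] - lam1 * u1[i] * u1[j] for j in range(d)]
            for i in range(d)]
    u2 = top_eigvec(defl)
    return [(sum(row[k] * u1[k] for k in range(d)),
             sum(row[k] * u2[k] for k in range(d))) for row in c]
```

The returned (x, y) pairs, plotted and connected in order, give the principal-component phase portrait described above.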

Test for Stationarity in the Embedding Space

I previously suggested that tests of the time series' spectral properties might not be the most appropriate criterion of stationarity. Eckmann, Kamphorst and Ruelle (1987) made a very pertinent point: if the end result is a measurement of the embedded data, then why not test the stationarity of the geometry of this set? They introduced a simple and elegant graphical method, called recurrence diagrams, for doing this. Consider an arbitrary point in the embedding space, xk. Let the one hundred nearest neighbors of xk be denoted xk1, xk2, ..., xk100. Points are plotted at (k, k1), ..., (k, k100). The process of discovering and plotting the coordinates of the nearest neighbors is repeated for every point in the embedding space. If large data sets of several thousand points are examined, and if enough nearest neighbors (typically several hundred) are plotted, then the individual points merge together to form patterns. Some of their mathematical properties have been described by Koebbe and Mayer-Kress (1992) and by Zbilut and Weber (1992). As Eckmann, Kamphorst and Ruelle demonstrate, subtle departures from stationarity that would not be detected easily by other methods produce readily discernible distortions in the recurrence diagrams. The procedure has been applied to biological data by Babloyantz (1989) and by Zbilut, et al. (1991, 1992).
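The dots of a recurrence diagram can be computed as sketched below, with a much smaller neighbor count than the hundred of the text so the example stays small; the brute-force search is O(N^2 log N) and would be replaced by a spatial index for long records:

```python
def recurrence_points(points, n_neighbors=5):
    """For each embedded point x_k, find the indices of its nearest
    neighbors; the (k, neighbor) index pairs are the dots of the
    recurrence diagram of Eckmann, Kamphorst and Ruelle (1987)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    dots = []
    for k, xk in enumerate(points):
        order = sorted((i for i in range(len(points)) if i != k),
                       key=lambda i: dist2(xk, points[i]))
        dots.extend((k, i) for i in order[:n_neighbors])
    return dots
```

Plotting the pairs with k on the abscissa and the neighbor index on the ordinate produces the diagram; drifting, non-stationary dynamics show up as depleted regions away from the diagonal.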

Consider Applying a Geometrical Filter

Filtering time series in the time domain is a potentially dangerous activity. There has been confusion in the literature about the effects of filtering on dynamical analysis. Badii, et al. (1988) examined the effect of filtering on dimension estimates of data obtained from a laser and from a Rayleigh-Bénard convection experiment. They found that filtering increased the dimension estimates. Specifically, they concluded that dimension increases with decreasing bandwidth. Mitschke, et al. (1988) also found that filtering could increase dimension estimates, and subsequently reported a difference in the effects of causal and acausal filters (Mitschke, 1990). In contrast, Lo and Principe (1989) obtained the opposite result and found that filtering noisy EEG signals decreases the estimated dimension. Broomhead, Huke and Muldoon (1992) have shown that finite-order, nonrecursive filters leave invariant quantities, like dimension, that can be estimated using embedding techniques. However, they take care to stress the important distinction between that which is theoretically true and practical issues of numerical estimation using finite data sets. More recent results (Rapp, et al., 1993a) indicate that filtered noise can mimic low-dimensional chaotic attractors, and that surrogate data, which will be described in the next section, can be used to protect against spurious attributions of structure caused by inappropriate filters.

Geometrical filters operate not in the time domain but in the embedding space. The ideas date back to the work of Froehling, et al. (1981). Some of the many important papers in the development of geometrical filters are by Eckmann and Ruelle (1985), Farmer and Sidorowich (1987, 1991), Kostelich and Yorke (1988), and Broomhead and King (1986). Recent algorithms have been described by Schreiber and Grassberger (1991) and by Sauer (1992). It is important to note recent results of Mees and Judd (1993) showing that these filters, like time domain filters, can be misapplied and produce misleading results. Caution must be exercised.
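One of the simplest geometrical filters replaces each value by a local average taken over embedding-space neighbors rather than time-domain neighbors, in the spirit of the locally constant noise reduction of Schreiber and Grassberger (1991). This sketch is mine, not their published algorithm, and the parameter defaults are illustrative assumptions:

```python
def simple_geometric_filter(v, dim=3, lag=1, radius=0.5):
    """Locally constant noise reduction in embedding space: each
    value is replaced by the average of the corresponding (middle)
    coordinate over all embedded points within `radius` (max-norm)
    of its own embedded point.  End samples are left untouched."""
    span = (dim - 1) * lag
    pts = [tuple(v[j + k * lag] for k in range(dim))
           for j in range(len(v) - span)]
    mid = (dim // 2) * lag      # offset of the middle coordinate
    out = list(v)
    for j, p in enumerate(pts):
        close = [pts[i][dim // 2] for i in range(len(pts))
                 if max(abs(a - b) for a, b in zip(p, pts[i])) <= radius]
        out[j + mid] = sum(close) / len(close)
    return out
```

Because the neighborhood always contains the point itself, the filter is well defined everywhere; the warning of Mees and Judd (1993) applies here as well, and filtered results should be checked against surrogates.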

All Dynamical Measures Should Be Validated with Surrogate Data

The method of surrogate data (Theiler, et al., 1992) is an important tool in bringing rigor to dynamical analysis. In this method, surrogate data sets are constructed from the original data. The procedure used to construct surrogates depends on the null hypothesis being addressed. The same measure is then applied to the original data and to the surrogates. If the results are different, the null hypothesis can be rejected. Since Theiler's landmark paper, several different classes of surrogates have been invented. The three types of surrogate presented in the 1992 paper remain the most important.

Algorithm Zero surrogates address the null hypothesis: the original time series is indistinguishable from uncorrelated noise. These surrogates are formed by a random shuffle of the original data.

Algorithm One surrogates address the null hypothesis: the original time series is indistinguishable from linearly filtered noise. These surrogates are formed in a three-step process. The Fourier transform of the original time series is calculated, the phases of this transform are randomized, and then the inverse transform is calculated. This inverse is an Algorithm One surrogate of the original data.

Algorithm Two surrogates address the null hypothesis: the original time series is indistinguishable from linearly filtered noise that has been transformed by a static, monotone nonlinearity. Let {vj, j = 1, ..., NDATA} be the original time series. Let {yj, j = 1, ..., NDATA} be a Gaussian distributed set of random numbers having the same rank structure as the original time series. Let {yj', j = 1, ..., NDATA} be an Algorithm One surrogate of {yj}. Let {vj', j = 1, ..., NDATA} be a shuffled variant of {vj} that has the same rank structure as {yj'}. Time series {vj'} is an Algorithm Two surrogate of {vj}.

The distinction between Algorithm One and Algorithm Two surrogates is an important one. It is possible to construct examples using natural data in which Algorithm One surrogates give a false-positive indication of meaningful structure, which is rejected by Algorithm Two surrogates (Rapp, et al., 1994b).

There are several possible statistical procedures for treating the results of surrogate calculations. In the 1992 paper (Theiler, et al., 1992), the following procedure was used. Let Morig be the value of the measure obtained with the original data set. Let <Msurr> be the average value obtained with the surrogates, and let SDsurr denote the standard deviation of this mean. S is defined as |Morig - <Msurr>| / SDsurr and gives a quantitative measure of the separation between the results obtained with the original time series and its surrogates. There are at least three alternative statistical procedures that can be used (Rapp, et al., 1994b): a Monte Carlo estimate of the probability of the null hypothesis, a nonparametric criterion for rejecting the null hypothesis based on work by Barnard (1963) and Hope (1968), and a parametric criterion.

The importance of the method of surrogate data is its generality. It can be used in combination with any of the measures described in subsequent sections of this article. Dynamical analysis can be summarized as a five-step procedure.

a. The data are embedded appropriately.
b. Stationarity is investigated using recurrence diagrams.
c. A geometrical filter can be applied.
d. A dynamical measure is applied to the original data.
e. Results are validated with comparisons to different classes of surrogate data.

The discussion now continues with a presentation of dynamical measures.
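The three surrogate algorithms and the S statistic can be sketched as follows. A direct O(N^2) discrete Fourier transform keeps the example self-contained (an FFT would be used in practice), and conjugate symmetry is enforced so that the phase-randomized inverse transform is real:

```python
import cmath
import math

def algorithm_zero(v, rng):
    """Shuffled surrogate: destroys all temporal structure."""
    s = list(v)
    rng.shuffle(s)
    return s

def _dft(x, sign):
    n = len(x)
    return [sum(x[j] * cmath.exp(sign * 2j * cmath.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def algorithm_one(v, rng):
    """Phase-randomized surrogate: same power spectrum, random
    phases, conjugate symmetry keeps the inverse real."""
    n = len(v)
    spec = _dft(v, -1)
    out = [spec[0]] + [0j] * (n - 1)
    for k in range(1, n // 2 + 1):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        out[k] = abs(spec[k]) * cmath.exp(1j * phi)
        out[n - k] = out[k].conjugate()
    if n % 2 == 0:
        out[n // 2] = abs(spec[n // 2]) + 0j   # Nyquist bin stays real
    return [z.real / n for z in _dft(out, 1)]

def algorithm_two(v, rng):
    """Gaussian-rescaled (amplitude-adjusted) surrogate."""
    n = len(v)
    g = sorted(rng.gauss(0.0, 1.0) for _ in range(n))
    rank_v = sorted(range(n), key=lambda i: v[i])
    y = [0.0] * n
    for r, i in enumerate(rank_v):
        y[i] = g[r]            # Gaussian series with v's rank structure
    y1 = algorithm_one(y, rng)
    rank_y1 = sorted(range(n), key=lambda i: y1[i])
    sv = sorted(v)
    out = [0.0] * n
    for r, i in enumerate(rank_y1):
        out[i] = sv[r]         # shuffle of v with y1's rank structure
    return out

def s_statistic(m_orig, m_surr):
    """S = |Morig - <Msurr>| / SDsurr for a list of surrogate values."""
    mean = sum(m_surr) / len(m_surr)
    sd = (sum((m - mean) ** 2 for m in m_surr) / (len(m_surr) - 1)) ** 0.5
    return abs(m_orig - mean) / sd
```

By construction an Algorithm Zero or Algorithm Two surrogate is an exact permutation of the original values, and an Algorithm One surrogate preserves the power spectrum; both properties are easy to verify and make useful sanity checks on any implementation.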

Dynamical Measures

Global Measures of Dimension

Dimension estimation and other measures are reviewed by Grassberger, Schreiber and Schaffrath (1991). The most popular approach to the estimation of dimension is the Grassberger-Procaccia algorithm (Grassberger and Procaccia, 1983). Unfortunately this procedure can give misleading results if it is not used with care (Eckmann and Ruelle, 1992). A list of safeguards that can protect against some forms of failure has been published in Rapp, et al. (1989). These safeguards are not sufficient. For example, filtered noise can satisfy these conditions (Rapp, et al., 1993a). When applying this algorithm, it is essential to incorporate Theiler's modifications to the original procedure (Theiler, 1986). This correction removes some of the downward biasing of dimension estimates that can result from artifactual correlations in the embedding space. All dimension calculations must be confirmed with surrogate calculations. A very large literature has directed attention to the importance of increasing the rigor of dimension estimates (Greenside, et al., 1982; Smith, 1988; Ellner, 1988; Brock and Sayers, 1988; Somorjai and Ali, 1988; Ramsey and Yuan, 1989; Havstad and Ehlers, 1989; Nerenberg and Essex, 1990; Ruelle, 1990; Gershenfeld, 1992; Berliner, 1992; Smith, 1992; Brock and Potter, 1992).

Of the many possibilities now available, a variant of dimension estimation proposed by Judd (Judd, 1992; Judd and Mees, 1991) seems particularly promising. The method provides an estimate of dimension and an estimate of the uncertainty in its value. The dimension estimate is not based on the correlation integrals used in the Grassberger-Procaccia algorithm, but on the distribution of interpoint distances that is used to calculate these integrals. (It has been pointed out to me that early variants of one of the essential steps in Judd's analysis were given in Termonia and Alexandrowicz (1983) and in Badii and Politi (1985, 1987).)

Local Intrinsic Dimension

Broomhead and King (1986) published results that were interpreted by some to suggest that the singular value spectrum of the embedding matrix could provide a useful approximation of an attractor's dimension. This is not the case (Mees, et al., 1987). Broomhead, Jones and King (1987) subsequently demonstrated that the local topological dimension of an embedded set can be estimated from the singular value spectrum of appropriately constructed submatrices. An independent empirical implementation of this idea, the local intrinsic dimension, was published by Passamante and his colleagues (Passamante, et al., 1989; Hediger, et al., 1990). The results obtained with this method show good agreement with Grassberger-Procaccia estimates. Passamante, et al. (1989) report that their procedure is more robust to noise and has lower data requirements. The local intrinsic dimension can also be used in analogy with the pointwise dimension, discussed below, to provide a continuous measure of dynamical behavior through time.

Pointwise Dimension

Farmer, Ott and Yorke (1983) introduced the pointwise dimension as an alternative dynamical measure that could be used in the analysis of nonstationary signals (see also variants by Holzfuss and Mayer-Kress, 1986; Skinner, et al., 1993). Rather than measuring the global dimension of a system's attractor, the dimension is measured from a specified point in the embedding space. Because this method produces a time-dependent measure of dynamical complexity, it is particularly promising as a tool for monitoring rapidly changing biological processes (Mitra and Skinner, 1992; Molnar and Skinner, 1992). It should be pointed out that it is possible to apply a procedure like Judd's at each point in the embedding space, thus combining the benefits of a time-dependent measure with increased precision.

Lyapunov Exponents

Lyapunov exponents provide a quantitative measure of the rate of separation of trajectories in phase space. Recently the definition has been generalized (Dressler and Farmer, 1992; Wiesel, 1992). Early algorithms for estimating Lyapunov exponents were published by Sano and Sawada (1985), Wolf, et al. (1985) and Eckmann, et al. (1986). Very large stationary data sets are required to estimate Lyapunov exponents using these early algorithms. Eckmann and Ruelle (1992) have estimated that if N data points are required to estimate the correlation dimension, then N^2 points are required to estimate the Lyapunov exponent. While the degree to which these estimated data requirements are valid is in dispute (Essex and Nerenberg, 1992), the qualitative substance of the Eckmann-Ruelle results is certainly consistent with my empirical experience. I have not been able to measure Lyapunov exponents with our biological data. Several laboratories have published algorithms that have reduced data requirements (Abarbanel, et al., 1991; Bryant, Brown and Abarbanel, 1990; Ellner, et al., 1991; Zeng, et al., 1992; Parlitz, 1992; Rosenstein, et al., 1993a), and the question of calculating Lyapunov exponents with noisy, nonstationary data merits reexamination.

Measures of Deterministic Structure

The object of this class of measures is to detect deterministic nonlinear dynamics in a noisy time series. Previous approaches to this problem have been published by Brock and Deckert (1989), Kennel, et al. (1992) and Nychka, et al. (1992). (Tests of nonlinear predictability, which are described in the next section, are closely related.) The methods of Kaplan and Glass (1992, 1993) and Wayland, et al. (1993) are now being applied to a number of problems. Both methods are based on the same central idea: in a deterministic system a change of state is a single-valued function of the state itself. Kaplan and Glass constructed a statistic which has a limiting value of 1 for completely deterministic systems and 0 for random numbers. Their procedure begins by partitioning the embedding space into non-overlapping hypercubes. The appropriateness of the partition can be important to the success of the method. One-dimensional partitions have been examined by Mosteller and Tukey (1977, page 49) and by Bendat and Piersol (1966, page 284). An adaptive partition that maximizes information content has been constructed by Rissanen (1992). A possible improvement to the Kaplan-Glass method might be obtained if the Rissanen partition were generalized to higher dimensions. The method of Wayland, et al. (1993) avoids the problem of partitioning but provides less specific information.

Measures of Predictability

The degree to which a time series can be predicted is itself a measure of its deterministic structure. Predictability may change with changes in physiological state, and therefore may be valuable as a diagnostic instrument and as a means of monitoring treatment. Several nonlinear prediction algorithms have been published (Farmer and Sidorowich, 1987; Casdagli, 1991; Sugihara and May, 1990; Sugihara, Grenfell and May, 1990; Linsay, 1991; Mees, 1991; several others are presented in the book edited by Casdagli and Eubank, 1992). The method published by Sugihara and May is a simplified variant of methods published by Casdagli (1989) and Farmer and Sidorowich (1987). In this method, the future of a point in N-dimensional embedding space is predicted using the simplex of N+1 nearest neighbors. A statistic based on predictive error gives a measure of deterministic structure in the signal.
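A crude forecaster in the spirit of the simplex method follows: the future of each embedded point is predicted as the unweighted mean future of its nearest neighbors (rather than the simplex-weighted scheme of Sugihara and May), with a temporal exclusion so a point's immediate successors cannot serve as its own neighbors. The normalized root-mean-square error plays the role of the predictive-error statistic; all names and defaults are my own:

```python
def nn_prediction_error(v, dim=3, lag=1, horizon=1, n_neighbors=4):
    """Nearest-neighbor forecast error, normalized by the standard
    deviation of the target values.  Values well below 1 indicate
    predictable (deterministic) structure; values near 1 indicate
    that the forecast is no better than guessing the mean."""
    span = (dim - 1) * lag
    pts = [tuple(v[j + k * lag] for k in range(dim))
           for j in range(len(v) - span - horizon)]
    futures = [v[j + span + horizon] for j in range(len(pts))]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    errs = []
    for j, p in enumerate(pts):
        # Exclude temporally adjacent points from the neighbor pool.
        order = sorted((i for i in range(len(pts)) if abs(i - j) > span),
                       key=lambda i: dist2(p, pts[i]))
        pred = sum(futures[i] for i in order[:n_neighbors]) / n_neighbors
        errs.append((pred - futures[j]) ** 2)
    mean = sum(futures) / len(futures)
    var = sum((f - mean) ** 2 for f in futures) / len(futures)
    return (sum(errs) / len(errs) / var) ** 0.5
```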

Entropy

Measures of higher order entropy have been applied successfully to a number of biological problems (Klemm and Sherry, 1982; Sherry and Klemm, 1984; Rapp, et al., 1993b). Their applicability at higher orders has been limited by the need for very large data sets. Recent theoretical work (Schmitt, et al., 1993) has produced an algorithm for estimating higher order entropy that reduces the data requirements significantly.
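The elementary ingredient of these measures is the block (order-n) entropy of a symbol sequence, which a naive count-based sketch can make concrete; the data-requirement problem noted above arises because the number of possible blocks grows exponentially with the order:

```python
import math
from collections import Counter

def block_entropy(symbols, order):
    """Shannon entropy (bits) of overlapping blocks of the given
    length; the growth of this quantity with the order underlies
    higher order entropy measures."""
    blocks = [tuple(symbols[i:i + order])
              for i in range(len(symbols) - order + 1)]
    counts = Counter(blocks)
    n = len(blocks)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```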

Algorithmic Complexity and Epsilon Machines

Complexity measures need not be based on properties of embedded data. In the simplest implementation, the original time series itself is reduced to a symbol sequence. Analysis is performed on the resulting sequence. There are several alternative definitions of complexity (Kolmogorov, 1965; reviewed by Chaitin, 1987). These classical definitions fall into the general category of algorithmic complexity. A message is algorithmically complex if it is incompressible; that is, if the minimal description is about the same size as the original message (Bennett, 1986). A specific example is the context-free grammar complexity (Jiménez-Montaño, 1984) used in the analysis of psychotherapy protocols (Rapp, et al., 1990) and single cortical unit spike trains (Rapp, et al., 1994a). The highest value of algorithmic complexity is obtained with random messages. Huberman and Hogg (1986) have argued that a more meaningful definition of complexity describes a property intermediate between totally ordered systems and random systems. They have constructed a measure of complexity that addresses this issue. In a similar spirit, Crutchfield and Young (1989; Crutchfield, 1992) have constructed epsilon machines, which provide a quantitative measure of complexity that is low for periodic and random systems and high for chaotic systems. The feasibility of application to biological data has not yet been tested.

A problem shared by all measures of symbolic dynamics is the appropriateness of the procedure used to reduce the original dynamical data to a symbol sequence. This problem is related to the embedding partition discussed in the context of the Kaplan-Glass test of determinism. Kolmogorov (1958) proved that the efficacy of the partition of continuous data into a finite symbol set depends on establishing a generating partition. This theorem does not, however, provide a general procedure for constructing generating partitions. An empirical computational approach to this problem can be constructed using surrogate data.
For any given measure on a symbol sequence, we operationally define the better of two partitions as the partition that gives the greater separation between a data set and its surrogates. This approach makes it possible to test against spurious results due to faulty symbolic partitioning (Rapp, et al., 1994b).
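An operational flavor of the incompressibility definition can be obtained with a general-purpose compressor. This is only a crude proxy of my own devising, standing in for the grammar complexity and algorithmic-complexity estimates discussed above, not a measure used in the cited studies:

```python
import zlib

def compression_complexity(symbols):
    """Crude proxy for algorithmic complexity: the zlib-compressed
    size of the symbol sequence relative to its raw size.  Highly
    ordered sequences compress well (ratio near 0); random
    sequences compress poorly (ratio approaching the entropy
    limit of the alphabet)."""
    raw = "".join(symbols).encode()
    return len(zlib.compress(raw, 9)) / len(raw)
```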

Interpreting the Results of Dynamical Analysis

A comparison of the results obtained with the original data against the results obtained with different classes of surrogates is the first step in the statistical interpretation of dynamical analysis. The S statistic that characterizes the separation between the data and its surrogates can be used to identify distinct classes of behavior (Rapp, et al., 1994a). Conventional statistical measures of significance should then be used to determine whether values of dynamical measures obtained in different experimental conditions have the same mean (t-test), the same variance (F-test), and whether the distributions of values are different (chi-square test and the Kolmogorov-Smirnov test). As part of the interpretive process we must address three questions:

a. Did it tell us anything?
b. Did it tell us anything we didn't already know?
c. Did it tell us anything useful?

These three miserable questions can be most readily addressed by comparing the values of dynamical measures obtained in contrasting experimental conditions. As a specific example, I'll focus on dimension calculations with EEG signals recorded in our laboratory from normal adult subjects at rest and during periods of cognitive activity. The design of the experiment and a description of the recording procedures are given in Rapp, et al. (1989).
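The conventional comparisons just mentioned can be sketched in a few lines of hypothetical code (the helper names are mine, not from the paper): a Welch t statistic for the difference of group means, a variance ratio for the F-test, and the two-sample Kolmogorov-Smirnov statistic for a difference of distributions.

```python
import bisect, math, statistics

def welch_t(a, b):
    # t statistic for a difference of means; unequal variances allowed
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        statistics.variance(a) / len(a) + statistics.variance(b) / len(b))

def variance_ratio(a, b):
    # F statistic: ratio of the two sample variances
    return statistics.variance(a) / statistics.variance(b)

def ks_statistic(a, b):
    # two-sample Kolmogorov-Smirnov statistic: the largest gap between
    # the two empirical cumulative distribution functions
    sa, sb = sorted(a), sorted(b)
    gap = 0.0
    for v in sa + sb:
        fa = bisect.bisect_right(sa, v) / len(sa)
        fb = bisect.bisect_right(sb, v) / len(sb)
        gap = max(gap, abs(fa - fb))
    return gap

# hypothetical dimension estimates in two recording conditions
rest   = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]
active = [4.0, 4.3, 3.7, 4.1, 4.2, 3.9]
t = welch_t(rest, active)  # near zero here: no separation of the means
```

Each statistic would then be referred to its usual null distribution to obtain a p value; libraries such as scipy provide these tests directly.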

Did the analysis tell us anything? That is, are the values of dimension significantly different in the two cases? When dimensions from the 1989 study were recomputed with an algorithm that incorporated Theiler's correction for artifactual correlations (Theiler, 1986), it was found that the answer is no. In most cases the removal of artifactual correlations resulted in nonconvergence of the dimension estimation algorithm, and no low-dimensional structure was discernible. In those limited cases where a dimension estimate was possible, the values of dimension obtained in the resting and in the cognitively active state were not significantly different. In a t-test, the probability of the null hypothesis (the probability that the difference in the average value of dimension obtained in the two cases is due to random variation) was p=.385.

Let us suppose that the answer had been yes. In that case, we would have proceeded to the second question: did the calculations tell us anything that we didn't already know? This can be restated in the present context in the following way: could an equal or even greater statistical separation between the two groups of EEGs be obtained with a more readily computed measure? In our enthusiasm for mathematically interesting measures like dimensions and Lyapunov exponents, we often ignore robust, easily computed classical measures. In the case of our EEG data, the average first minimum of the autocorrelation function in the two conditions was significantly different (p=.001). If the object of the exercise is to find an empirical measure that can discriminate between EEGs obtained in different physiological states, and this is usually the cited motivation, we would almost certainly be better off computing something simple like the autocorrelation function. When we are honest enough to ask if dynamical analysis tells us anything we didn't already know, the answer will probably be "No, not really."
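The classical measure in question is easy to compute. The following is a minimal sketch (not the study's code) of the lag of the first local minimum of the autocorrelation function, which is also a common heuristic for choosing the delay in phase portrait reconstruction.

```python
import math

def autocorr(x, lag):
    # sample autocorrelation at a given lag
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x)
    return sum((x[i] - mu) * (x[i + lag] - mu) for i in range(n - lag)) / var

def first_minimum(x, max_lag=100):
    # lag at which the autocorrelation function first turns upward
    prev = autocorr(x, 1)
    for lag in range(2, max_lag + 1):
        cur = autocorr(x, lag)
        if cur > prev:
            return lag - 1
        prev = cur
    return max_lag

# a sinusoid with a 20-sample period: the first minimum of its
# autocorrelation function falls at half the period, lag 10
x = [math.sin(2 * math.pi * i / 20) for i in range(400)]
```

For a noisy signal like the EEG, the autocorrelation function would be smoothed before searching for the minimum; the sketch omits that step.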
Did the examination of the EEGs tell us anything useful? To address this question, we must examine the results of t-tests with care. In the literature, it is often suggested that a low value of p indicates that the measurement will be useful in a classification procedure. For example, consider again the first minimum of the autocorrelation function, where p=.001. Does this mean that this measurement can successfully classify the EEG signals between the rest and active states? No, it does not. The null hypothesis of the t-test asks if the difference of average values of a given measure obtained from two groups is due to random variation. It is possible to reject this hypothesis with a very low p value even though the distributions overlap significantly. When significant overlap occurs, and it does in this case, the measured variable will not be particularly successful in a classification. The question of success as a classifying variable requires tests drawn from discriminant analysis (Lachenbruch, 1975). In our test case the difference between the first minimum of the autocorrelation function in the rest and active states gave p=.001 in a t-test. However, the error rate in a pairwise classification using this variable was .35. (Recall that the maximum error rate in a pairwise discrimination is .5.)

The first successes in the application of dynamical analysis to experimental data were obtained in experiments with fluids (Gollub and Swinney, 1975; Gollub and Benson, 1980; Libchaber and Maurer, 1982) and with simple electronic circuits (Gollub, Romer and Socolar, 1980; Chua, Komuro and Matsumoto, 1986). In contrast with the central nervous system, these are well-defined systems that can be experimentally constrained to a limited number of behaviors. Armed with the experience gained during the past ten years, it now seems that proceeding from simple electronic circuits to the mammalian central nervous system in a single step was a conceit approaching madness.
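The distinction drawn above between a small p value and a usable classifier is easy to reproduce numerically. In this hypothetical sketch (simulated data, not the EEG measurements), two overlapping Gaussian groups give a t statistic far beyond conventional significance, while a midpoint-threshold classifier, a crude stand-in for a fuller discriminant analysis, still misclassifies a large fraction of the cases.

```python
import math, random, statistics

rng = random.Random(42)
# two groups whose means differ by half a standard deviation
rest   = [rng.gauss(0.0, 1.0) for _ in range(200)]
active = [rng.gauss(0.5, 1.0) for _ in range(200)]

# Welch t statistic: large |t|, hence a very small p for the mean difference
t = (statistics.mean(active) - statistics.mean(rest)) / math.sqrt(
    statistics.variance(active) / 200 + statistics.variance(rest) / 200)

# midpoint-threshold classification; the heavy overlap of the two
# distributions keeps the error rate near the chance level of .5
cut = (statistics.mean(rest) + statistics.mean(active)) / 2
errors = sum(v >= cut for v in rest) + sum(v < cut for v in active)
error_rate = errors / 400
```

A low p value therefore licenses the claim that the group means differ, and nothing more; it is no guarantee of a useful classifying variable.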
It may be possible, however, to construct a way forward for the dynamical analysis of biological systems by combining two strategies. First, experience should be gained in the application of these measures to well-defined biological subsystems. An encouraging example of this approach has been presented by Yip, et al. (1994), who have shown low-dimensional behavior in renal blood flow in calculations with the Grassberger-Procaccia algorithm. The results were highly significant when compared against both Algorithm One and Algorithm Two surrogates. Second, less-demanding measures should be considered. Dynamical analysis has tended to focus on two measures, both of which require long, high-quality stationary time series: the correlation dimension and the Lyapunov exponent. Greater success may be obtained with measures that tell us less about the system but also have less-stringent data requirements. For example, coarse-grained measures were successfully applied to electromyographic data that could not be analyzed by more sophisticated measures like dimension (Rapp, et al., 1993b). Similarly, the algorithmic complexity of neural spike trains obtained from single cortical units before and after the topical application of penicillin was significantly different in the two physiological states and, in some instances, displayed significant separation from values obtained from Algorithm One and Algorithm Two surrogates. Calculations of algorithmic complexity identified distinct classes of neural behavior that could not be found by an examination of the interspike interval distribution (Rapp, et al., 1994a).

The first ten years of the dynamical analysis of biological data have been characterized by difficulties and disappointments, but much has been learned. The prospects for the application of improved measures to more realistically defined problems are encouraging.

Acknowledgments

I would like to acknowledge NIH Grants NS19716 and NS32983 and research support from the University of Western Australia. I would also like to thank the colleagues who have participated in this work: A.M. Albano, T.R. Bashore, K. Judd, J. Martinerie, A.I. Mees, T.I. Schmah, J. Theiler and I.D. Zimmerman.

References

Abarbanel, H.D.I., Brown, R. and Kennel, M.B. (1991). Variation of Lyapunov exponents on a strange attractor. Journal of Nonlinear Science, 1, 175-199.
Abarbanel, H.D.I., Brown, R., Sidorowich, J.J. and Tsimring, L.S. (1993). The analysis of observed chaotic data in physical systems. Reviews of Modern Physics, 65, 1331-1392.
Abarbanel, H.D.I. and Kennel, M.B. (1993). Local false nearest neighbors and dynamical dimensions from observed chaotic data. Physical Review, 47E, 3057-3068.
Albano, A.M., Muench, J., Schwartz, C., Mees, A.I. and Rapp, P.E. (1988). Singular-value decomposition and the Grassberger-Procaccia algorithm. Physical Review, 38A, 3017-3026.
Albano, A.M., Passamante, A. and Farrell, M.E. (1991). Using higher-order correlations to define an embedding window. Physica, 54D, 85-97.
Babloyantz, A. (1989). Some remarks on the nonlinear analysis of physiological time series. In: Measures of Complexity and Chaos. N.B. Abraham, A.M. Albano, A. Passamante and P.E. Rapp (Eds.), pp. 51-62. New York: Plenum Press.
Badii, R., Broggi, G., Derighetti, B., Ravani, M., Ciliberti, S., Politi, A. and Rubio, M.A. (1988). Dimension increase in filtered chaotic signals. Physical Review Letters, 60, 979-982.
Badii, R. and Politi, A. (1985). Statistical description of chaotic attractors: The dimension function. Journal of Statistical Physics, 40, 725-750.
Badii, R. and Politi, A. (1987). Renyi dimensions from local expansion rates. Physical Review, 35A, 1288-1293.
Barlow, J.S. (1985). Methods of analysis of nonstationary EEGs with emphasis on segmentation techniques: A comparative review. Journal of Clinical Neurophysiology, 2, 267-304.
Barlow, J.S., Creutzfeldt, O.D., Michael, D., Houchin, J. and Epelbaum, H. (1981). Automatic adaptive segmentation of clinical EEGs. Electroencephalography and Clinical Neurophysiology, 51, 512-525.
Barnard, G.A. (1963). Discussion on the spectral analysis of point processes by M.S. Bartlett. Journal of the Royal Statistical Society, 25B, 294.
Bendat, J.S. and Piersol, A.G. (1966). Measurement and Analysis of Random Data. New York: John Wiley.
Bennett, C.H. (1986). On the nature and origin of complexity in discrete, homogeneous, locally-interacting systems. Foundations of Physics, 16, 585.
Berliner, L.M. (1992). Statistics, probability and chaos. Statistical Science, 7, 69-122.
Bodenstein, G. and Praetorius, H.M. (1977). Feature extraction from the electroencephalogram by adaptive segmentation. Proceedings of the IEEE, 65, 642-652.
Brock, W.A. and Dechert, W.E. (1989). Statistical inference theory for measures of complexity in chaos theory and nonlinear science. In: Measures of Complexity and Chaos. N.B. Abraham, A.M. Albano, A. Passamante and P.E. Rapp (Eds.), pp. 79-98. New York: Plenum Press.
Brock, W.A. and Potter, S.M. (1992). Diagnostic testing for nonlinearity, chaos and general dependence in time series data. In: Nonlinear Modeling and Forecasting. M. Casdagli and S. Eubank (Eds.), pp. 137-162. Reading, MA: Addison-Wesley.
Brock, W.A. and Sayers, C.L. (1988). Is the business cycle characterized by deterministic chaos? Journal of Monetary Economics, 22, 71-90.
Broomhead, D.S., Huke, J.P. and Muldoon, M.R. (1991). Linear filters and nonlinear systems. Journal of the Royal Statistical Society, 54B, 373-382.
Broomhead, D.S., Jones, R. and King, G.P. (1987). Topological dimension and local coordinates from time series data. Journal of Physics, 20A, L563-L569.
Broomhead, D.S. and King, G.P. (1986). Extracting qualitative dynamics from experimental data. Physica, 20D, 217-236.
Bryant, P., Brown, R. and Abarbanel, H.D.I. (1990). Lyapunov exponents from observed time series. Physical Review Letters, 65, 1523-1526.
Buzug, T. and Pfister, G. (1992). Comparison of algorithms calculating optimal embedding parameters for delay time coordinates. Physica, 58D, 127-137.
Casdagli, M. (1989). Nonlinear prediction of chaotic time series. Physica, 35D, 335-356.
Casdagli, M. (1991). Chaos and deterministic versus stochastic nonlinear modelling. Journal of the Royal Statistical Society, 54B, 303-328.
Casdagli, M. and Eubank, S. (Eds.). (1992). Nonlinear Modeling and Forecasting. Reading, MA: Addison-Wesley.
Chaitin, G.J. (1987). Algorithmic Information Theory. Cambridge: Cambridge University Press.
Childers, D.G. (1978). Modern Spectrum Analysis. New York: IEEE Press.
Chua, L.O., Komuro, M. and Matsumoto, T. (1986). The double scroll family. IEEE Transactions on Circuits and Systems, CAS-33, 1073-1117.
Crutchfield, J.P. (1992). Semantics and thermodynamics. In: Nonlinear Modeling and Forecasting. M. Casdagli and S. Eubank (Eds.), pp. 317-360. Reading, MA: Addison-Wesley.
Crutchfield, J.P. and Young, K. (1989). Inferring statistical complexity. Physical Review Letters, 63, 105-108.
Dressler, U. and Farmer, J.D. (1992). Generalized Lyapunov exponents corresponding to higher derivatives. Physica, 59D, 365-377.
Eckmann, J.-P., Kamphorst, S.O. and Ruelle, D. (1987). Recurrence plots of dynamical systems. Europhysics Letters, 4, 973-977.
Eckmann, J.-P., Kamphorst, S.O., Ruelle, D. and Ciliberti, S. (1986). Liapunov exponents from time series. Physical Review, 34A, 4971-4979.
Eckmann, J.-P. and Ruelle, D. (1985). Ergodic theory of chaos and strange attractors. Reviews of Modern Physics, 57, 617-656.
Eckmann, J.-P. and Ruelle, D. (1992). Fundamental limitations for estimating dimensions and Liapunov exponents in dynamical systems. Physica, 56D, 185-187.
Ellner, S. (1988). Estimating attractor dimensions from limited data: A new method with error estimates. Physics Letters, 133A, 128-133.
Ellner, S., Gallant, A.R., McCaffrey, D. and Nychka, D. (1991). Convergence rates and data requirements for Jacobian-based estimates of Lyapunov exponents from data. Physics Letters, 153A, 357-363.
Essex, C. and Nerenberg, M.A.H. (1991). Comments on "Deterministic chaos: The science and the fiction" by David Ruelle. Proceedings of the Royal Society (London), 435A, 287-292.
Farmer, J.D., Ott, E. and Yorke, J.A. (1983). The dimension of chaotic attractors. Physica, 7D, 153-180.
Farmer, J.D. and Sidorowich, J.J. (1987). Predicting chaotic time series. Physical Review Letters, 59, 845-848.
Farmer, J.D. and Sidorowich, J.J. (1988). Exploiting chaos to predict the future and reduce noise. In: Evolution, Learning and Cognition. Y.C. Lee (Ed.), pp. 277-300. Singapore: World Scientific.

Ferber, G. (1987). Treatment of some nonstationarities in the EEG. Neuropsychobiology, 17, 100-104.
Fraser, A.M. (1989a). Information storage and entropy in strange attractors. IEEE Transactions on Information Theory, 35, 245-262.
Fraser, A.M. (1989b). Reconstructing attractors from scalar time series: A comparison of singular system and redundancy criteria. Physica, 34D, 391-404.
Fraser, A.M. and Swinney, H.L. (1986). Independent coordinates for strange attractors from mutual information. Physical Review, 33A, 1134-1140.
Froehling, H., Crutchfield, J.P., Farmer, D., Packard, N.H. and Shaw, R. (1981). On determining the dimension of chaotic flows. Physica, 3D, 605-617.
Gao, J. and Zheng, Z. (1993). Local exponential divergence plot and optimal embedding of a chaotic time series. Physics Letters, 181A, 153-158.
Gershenfeld, N.A. (1992). Dimension measurement on high dimensional systems. Physica, 55D, 135-154.
Gibson, J.F., Farmer, J.D., Casdagli, M. and Eubank, S. (1992). An analytic approach to practical state space construction. Physica, 57D, 1-30.
Gollub, J.P. and Benson, S.V. (1980). Many routes to turbulent convection. Journal of Fluid Mechanics, 100, 449-470.
Gollub, J.P., Romer, E.G. and Socolar, J.E. (1980). Trajectory divergence for coupled relaxation oscillators: Measurements and models. Journal of Statistical Physics, 23, 321-333.
Gollub, J.P. and Swinney, H.L. (1975). Onset of turbulence in a rotating fluid. Physical Review Letters, 35, 927-930.
Golub, G.H. and Reinsch, C. (1970). Singular value decomposition and least squares solutions. Numerische Mathematik, 14, 403-420.
Golub, G.H. and Reinsch, C. (1971). Singular value decomposition and least squares solutions. In: Handbook for Automatic Computation. Vol. II: Linear Algebra. Heidelberg: Springer.
Grassberger, P. and Procaccia, I. (1983). Measuring the strangeness of strange attractors. Physica, 9D, 189-208.
Grassberger, P., Schreiber, T. and Schaffrath, C. (1991). Nonlinear time sequence analysis. International Journal of Bifurcation and Chaos, 1, 521-548.
Greenside, H.S., Wolf, A., Swift, J. and Pignataro, T. (1982). Impracticality of a box counting algorithm for calculating the dimensionality of strange attractors. Physical Review, 25A, 3453-3456.
Hasman, A., Jansen, B.H., Landeweerd, G.H. and van Blokland-Vogelsang, A.W. (1978). Demonstration of segmentation techniques for EEG records. International Journal of Bio-Medical Computing, 9, 311-321.
Havstad, J.W. and Ehlers, C.L. (1989). Attractor dimension of nonstationary dynamical systems from small data sets. Physical Review, 39A, 845-853.
Hediger, T., Passamante, A. and Farrell, M.E. (1990). Characterizing attractors using local intrinsic dimension calculated by singular value decomposition and information-theoretic criteria. Physical Review, 41A, 5325-5332.
Holzfuss, J. and Mayer-Kress, G. (1986). An approach to error-estimation in the application of dimension algorithms. In: Dimensions and Entropies in Chaotic Systems. G. Mayer-Kress (Ed.), pp. 114-122. Berlin: Springer-Verlag.
Hope, A.C.A. (1968). A simplified Monte Carlo significance test procedure. Journal of the Royal Statistical Society, 30B, 582-598.
Huberman, B.A. and Hogg, T. (1986). Complexity and adaptation. Physica, 22D, 376-384.
Jansen, B.H. and Cheng, W.K. (1988). Structural EEG analysis: An explorative study. International Journal of Bio-Medical Computing, 23, 221-237.
Jiménez-Montaño, M.A. (1984). On the syntactic structure of protein sequences and the concept of complexity. Bulletin of Mathematical Biology, 46, 641-659.
Judd, K. (1992). An improved estimator of dimension and some comments on providing confidence intervals. Physica, 56D, 216-228.
Judd, K. and Mees, A.I. (1991). Estimating dimensions with confidence. International Journal of Bifurcation and Chaos, 1, 467-470.
Kaplan, D.T. and Glass, L. (1992). Direct test for determinism in a time series. Physical Review Letters, 68, 427-430.
Kaplan, D.T. and Glass, L. (1993). Coarse-grained embeddings of time series: Random walks, Gaussian random processes, and deterministic chaos. Physica, 64D, 431-454.
Kember, G. and Fowler, A.C. (1993). A correlation function for choosing time delays in phase portrait reconstructions. Physics Letters, 179A, 72-80.

Kennel, M.B., Brown, R. and Abarbanel, H.D.I. (1992). Determining minimum embedding dimension using a geometrical construction. Physical Review, 45A, 3404-3411.
Klemm, W.R. and Sherry, C.J. (1982). Do neurons process information by relative intervals in spike trains? Neuroscience and Biobehavioral Reviews, 6, 429-437.
Koebbe, M. and Mayer-Kress, G. (1992). Use of recurrence plots in the analysis of time series data. In: Nonlinear Modeling and Forecasting. M. Casdagli and S. Eubank (Eds.), pp. 361-378. Reading, MA: Addison-Wesley.
Kolmogorov, A.N. (1958). A metric invariant of transient dynamical systems and automorphisms in Lebesgue spaces. Dokl. Acad. Nauk USSR, 119, 861-864. (English summary: Mathematical Reviews, 21, 386.)
Kolmogorov, A.N. (1965). Three approaches to the definition of the concept of quantity of information. IEEE Transactions on Information Theory, IT-14, 662-669.
Kostelich, E.J. and Yorke, J.A. (1988). Noise reduction in dynamical systems. Physical Review, 38A, 1649-1652.
Kuskowski, M.A., Mortimer, J.A., Morley, G.K., Malone, S.M. and Okaya, A.J. (1993). Rate of cognitive decline in Alzheimer's disease is associated with EEG alpha power. Biological Psychiatry, 33, 659-662.
Lachenbruch, P.A. (1975). Discriminant Analysis. New York: Hafner Press.
Libchaber, A. (1983). Experimental aspects of the period doubling scenario. Lecture Notes in Physics, 179, 157-164.
Libchaber, A. and Maurer, J. (1982). A Rayleigh-Benard experiment: Helium in a small box. In: Nonlinear Phenomena at Phase Transitions and Instabilities. T. Riste (Ed.), pp. 259-286. New York: Plenum.
Liebert, W. and Schuster, H.G. (1988). Proper choice of the time delay for the analysis of chaotic time series. Physics Letters, 142A, 107-111.
Linsay, P.S. (1991). An efficient method of forecasting chaotic time series using linear interpolation. Physics Letters, 153A, 353-356.
Lopes da Silva, F.H. (1978). Analysis of EEG non-stationarities. In: Contemporary Clinical Neurophysiology (EEG Suppl. No. 34). W.A. Cobb and H. van Duijn (Eds.), pp. 165-179. Amsterdam: Elsevier.
Mañé, R. (1980). On the dimension of the compact invariant sets of certain nonlinear maps. In: Dynamical Systems and Turbulence. Lecture Notes in Mathematics, Volume 898. D.A. Rand and L.S. Young (Eds.), pp. 230-242. New York: Springer-Verlag.
Martinerie, J., Albano, A.M., Mees, A.I. and Rapp, P.E. (1992). Mutual information, strange attractors and optimal estimation of dimension. Physical Review, 45A, 7058-7064.
Mees, A.I. (1991). Dynamical systems and tessellations: Detecting determinism in data. International Journal of Bifurcation and Chaos, 1, 777-794.
Mees, A.I. and Judd, K. (1993). Dangers of geometric filtering. Physica, 68D, 427-436.
Mees, A.I., Rapp, P.E. and Jennings, L.S. (1987). Singular value decomposition and embedding dimension. Physical Review, 36A, 340-346.
Michael, D. and Houchin, J. (1979). Automatic EEG analysis: A segmentation procedure based on the autocorrelation function. Electroencephalography and Clinical Neurophysiology, 46, 232-239.
Mitra, M. and Skinner, J.E. (1992). Low-dimensional chaos maps learning in a model neuropil (olfactory bulb). Integrative Physiological and Behavioral Science, 27, 304-322.
Mitschke, F. (1990). Acausal filters for chaotic signals. Physical Review, 41A, 1169-1171.
Mitschke, F., Müller, M. and Lange, W. (1988b). Measuring filtered chaotic signals. Physical Review, 37A, 4518-4521.
Molnar, M. and Skinner, J.E. (1992). Low-dimensional chaos in event-related brain potentials. International Journal of Neuroscience, 66, 263-276.
Mosteller, F. and Tukey, J.W. (1977). Data Analysis and Regression. Reading, MA: Addison-Wesley.
Nagase, Y., Okubo, Y., Matsuura, M. and Kojima, T. (1992). Topographical changes in alpha power in medicated and unmedicated schizophrenics during digits span reverse matching test. Biological Psychiatry, 32, 870-879.
Nerenberg, M.A.H. and Essex, C. (1990). Correlation dimension and systematic geometric effects. Physical Review, 42A, 7065-7074.
Noakes, L. (1991). The Takens embedding theorem. International Journal of Bifurcation and Chaos, 1, 867-872.
Nychka, D., Ellner, S., Gallant, A.R. and McCaffrey, D. (1992). Finding chaos in noisy systems. Journal of the Royal Statistical Society, 52B, 399-426.
Packard, N.H., Crutchfield, J.P., Farmer, J.D. and Shaw, R.S. (1980). Geometry from a time series. Physical Review Letters, 45, 712-716.

Parlitz, U. (1992). Identification of true and spurious Lyapunov exponents from time series. International Journal of Bifurcation and Chaos, 2, 155-166.
Passamante, A., Hediger, T. and Gollub, M. (1989). Fractal dimension and local intrinsic dimension. Physical Review, 39A, 3640-3645.
Press, W.H., Flannery, B.P., Teukolsky, S.A. and Vetterling, W.T. (1986). Numerical Recipes: The Art of Scientific Computing. Cambridge: Cambridge University Press.
Ramsey, J.B. and Yuan, H.-J. (1989). Bias and error bars in dimension calculations and their evaluation in some simple models. Physics Letters, 134A, 287-297.
Rapp, P.E. (1993). Chaos in the neurosciences: Cautionary tales from the frontier. The Biologist (London: Institute of Biology), 40, 89-94.
Rapp, P.E., Bashore, T.R., Martinerie, J.M., Albano, A.M. and Mees, A.I. (1989). Dynamics of brain electrical activity. Brain Topography, 2, 99-118.
Rapp, P.E., Jiménez-Montaño, M.A., Langs, R.J. and Thomson, L. (1990). Quantitative characterization of patient-therapist communication. Mathematical Biosciences, 105, 207-227.
Rapp, P.E., Albano, A.M., Schmah, T.I. and Farwell, L.A. (1993a). Filtered noise can mimic low dimensional chaotic attractors. Physical Review, 47E, 2289-2297.
Rapp, P.E., Goldberg, G., Albano, A.M., Janicki, M.B., Murphy, D., Niemeyer, E. and Jiménez-Montaño, M.A. (1993b). Using coarse-grained measures to characterize electromyographic signals. International Journal of Bifurcation and Chaos, 3, 525-542.
Rapp, P.E., Zimmerman, I.D., Vining, E.P., Cohen, N., Albano, A.M. and Jiménez-Montaño, M.A. (1994a). The algorithmic complexity of neural spike trains increases during focal seizures. Journal of Neuroscience (in press).
Rapp, P.E., Albano, A.M., Zimmerman, I.D. and Jiménez-Montaño, M.A. (1994b). Phase-randomized surrogates can produce spurious identifications of non-random structure. Physics Letters (in press).
Rissanen, J. (1992). Stochastic Complexity in Statistical Inquiry. Singapore: World Scientific.
Rosenstein, M.T., Collins, J.J. and DeLuca, C.J. (1993a). A practical method for calculating largest Lyapunov exponents from small data sets. Physica, 65D, 117-134.
Rosenstein, M.T., Collins, J.J. and DeLuca, C.J. (1993b). Reconstruction expansion as a geometry-based framework for choosing proper delay times. Physica, 73D, 82-98.
Sano, M. and Sawada, Y. (1985). Measurement of the Lyapunov spectrum from chaotic time series. Physical Review Letters, 55, 1082.
Sauer, T. (1992). A noise reduction method for signals from nonlinear systems. Physica, 58D, 193-201.
Sauer, T., Yorke, J.A. and Casdagli, M. (1991). Embedology. Journal of Statistical Physics, 65, 579-616.
Schellenberg, R., Schwarz, A., Knorr, W. and Haufe, C. (1992). EEG-brain mapping: A method to optimize therapy in schizophrenics using absolute power and center frequency values. Schizophrenia Research, 8, 21-29.
Schmitt, A.O., Herzel, H. and Ebeling, W. (1993). A new method to calculate higher-order entropies from finite samples. Europhysics Letters, 23, 303-309.
Schreiber, T. and Grassberger, P. (1991). A simple noise reduction method for real data. Physics Letters, 160A, 411-418.
Sherry, C.J. and Klemm, W.R. (1984). What is the meaningful measure of neuronal spike train activity? Journal of Neuroscience Methods, 10, 205-213.
Skinner, J.E., Pratt, C.M. and Vybiral, T. (1993). A reduction in the correlation dimension of heartbeat intervals precedes imminent ventricular fibrillation in human subjects. American Heart Journal, 125, 731-743.
Sloan, E.P. and Fenton, G.W. (1993). EEG power spectra and cognitive change in geriatric psychiatry: A longitudinal study. Electroencephalography and Clinical Neurophysiology, 86, 361-367.
Smith, L.A. (1988). Intrinsic limits on dimension calculations. Physics Letters, 133A, 283-288.
Smith, R.L. (1992). Estimating dimension in noisy chaotic time series. Journal of the Royal Statistical Society, 54B, 329-351.
Somorjai, R.L. and Ali, M.K. (1988). An efficient algorithm for estimating dimensionalities. Canadian Journal of Chemistry, 66, 979-982.
Sugihara, G., Grenfell, B. and May, R.M. (1990). Distinguishing error from chaos in ecological time series. Philosophical Transactions of the Royal Society, 330B, 235-251.
Sugihara, G. and May, R.M. (1990). Nonlinear forecasting as a way of distinguishing chaos from measurement error in time series. Nature (London), 344, 734-741.
Takens, F. (1980). Detecting strange attractors in turbulence. In: Dynamical Systems and Turbulence. Lecture Notes in Mathematics, Volume 898. D.A. Rand and L.S. Young (Eds.), pp. 365-381. New York: Springer-Verlag.
Termonia, Y. and Alexandrowicz, Z. (1983). Fractal dimension of strange attractors from radius versus size of arbitrary clusters. Physical Review Letters, 51, 1265-1268.
Theiler, J. (1986). Spurious dimensions from correlation algorithms applied to limited time series data. Physical Review, 34A, 2427-2433.
Theiler, J., Eubank, S., Longtin, A., Galdrikian, B. and Farmer, J.D. (1992). Testing for nonlinearity in time series: The method of surrogate data. Physica, 58D, 77-94.
Wayland, R., Bromley, D., Pickett, D. and Passamante, A. (1993). Recognizing determinism in a time series. Physical Review Letters, 70, 580-582.
Wiesel, W.E. (1992). Extended Lyapunov exponents. Physical Review, 46A, 7480-7491.
Wolf, A., Swift, J.B., Swinney, H.L. and Vastano, J.A. (1985). Determining Lyapunov exponents from a time series. Physica, 16D, 285-317.
Yip, K.-P., Marsh, D.J. and Holstein-Rathlou, N.-H. (1994). Low dimensional chaos in renal blood flow control in genetic and experimental hypertension. Physica D (in press).
Zbilut, J.P., Koebbe, M., Loeb, H. and Mayer-Kress, G. (1991). Use of recurrence plots in the analysis of heart beat intervals. Proceedings of Computers in Cardiology. Washington: IEEE.
Zbilut, J.P. and Webber, C.L. (1992). Embeddings and delays derived from quantification of recurrence plots. Physics Letters, 171A, 199-203.
Zeng, X., Eykholt, R. and Pielke, R.A. (1991). Estimating the Lyapunov exponent spectrum from short time series of low precision. Physical Review Letters, 66, 3229-3232.
