CHAOS SEARCH IN FOURIER AMPLITUDE SENSITIVITY TEST

Work in Artificial Intelligence often involves search algorithms. In many complicated problems, however, local search algorithms may fail to converge to the global optimum, and global search procedures are needed. In this paper, we investigate the Fourier Amplitude Sensitivity Test (FAST) as an example of a global sensitivity analysis tool for complex, non-linear dynamical systems. FAST was originally developed based on the Fourier series expansion of a model output and on the assumption that samples of the model inputs are uniformly distributed in a high-dimensional parameter space. In order to compute sensitivity indices, the parameter space needs to be searched utilizing an appropriate (space-filling) search curve. In FAST, search curves are defined through learning functions, whose selection heavily affects the global searching capacity and computational efficiency. This paper explores the characterization of the learning functions involved in FAST and derives the underlying dynamical relationships with chaos search, which can provide new learning algorithms. This contribution establishes the general link that exists between chaos search and FAST, which helps us exploit the ergodicity of chaos search in artificial intelligence applications.


INTRODUCTION
System models in artificial intelligence (AI) involve very complex and nonlinear dynamics, sometimes with the nature of chaos. In chaotic systems, it is well known that the learning trajectory can be simple or complex depending on initial conditions. Further, system models contain many parameters, whose influence on the output can be diverse and different, reflecting the underlying complex mechanisms to learn. In order to learn the contribution of each model parameter to the variance of the output uncertainty, variance-based global sensitivity analysis techniques are used. The Fourier Amplitude Sensitivity Test (FAST) is one such method, which provides an analysis of variance (ANOVA)-like decomposition of output variances (Cukier, Fortuin, Schuler, Petschek, & Schaibly, 1973). Based on these output variances, main effect (i.e., first-order sensitivity) and total effect indices can be efficiently computed. Since the ANOVA-like FAST decomposition is model independent, use of the method does not require that the structure of the system models be known.
FAST was initially developed for functional models whose input parameters were uniformly distributed. Cukier et al. (1973) and Schaibly and Shuler (1973) formulated the mathematical procedure to transform the sine curve based on the Fourier series expansion into the uniform distribution of the model input. The procedure was afterwards upgraded in an effort to obtain the uniform distribution of the input parameters (Cukier, Levine, & Schuler, 1978). Based on the study by Cukier et al. (1978), Collins and Avissar (1994) developed the transformation procedure for models with non-uniform distributions. In particular, Saltelli et al. (1999) proposed a popular methodology known as the extended FAST, which attains a true uniform distribution of the input parameters.
In the practical implementation of FAST, it is important to select an appropriate learning function that governs the search curve from which samples are taken for model evaluations. For models with continuous distributions, generalized transformation procedures may be available. However, the transformation function is based on the pre-specified Probability Density Function (PDF) of the input parameters, and it can cause errors when the PDF is not accurately known or only partially known. Such errors may lead FAST to produce incorrect results in model uncertainty assessments, especially for complex non-linear systems. Hence, characterization of the learning function is needed. Fang, Gertner, Shinkareva, Wang and Anderson (2003) proposed a method to improve the generalized transformation procedure for learning. The PDF of chaotic sequences derived from the Logistic map is a Chebyshev-type (U-shaped) one with very high density near the two ends of the sample interval (0,1) and low density in the middle region, indicating that the marginal distribution is not uniform. This type of sample distribution may limit the global-learning capacity and computational efficiency. Since the PDF of chaotic sequences based on the piecewise-linear Tent map (a set of tent-shaped lines) is uniform in (0,1), the Tent map is commonly used in chaos optimization (Yang et al., 2007).
To the author's knowledge, however, the general link between FAST and chaos search has not been explored so far. In this paper, the periodic sampling approach of FAST is interpreted and analysed within the technical framework of chaos search. In particular, it is shown for the first time that the performance (i.e., global-learning capacity and efficiency) of FAST is essentially equivalent to that of the chaos search procedures derived from the Logistic and Tent maps. This is based on the observation that the parameters sampled by the FAST method proposed by Koda et al. (1979) and the chaotic sequences from the Logistic map both follow the same Chebyshev distribution. Also, the parameters sampled by the extended FAST and the chaotic sequences from the Tent map both follow the uniform distribution.
This study intends to provide more flexibility in the practical implementation of FAST by way of expanding the class of search curves to chaotic maps. One of the major problems associated with the periodic sampling approach of the traditional FAST would be the non-ergodicity of the periodic search curves involved. For ergodic exploration or learning of the parameter space, a new random sampling method is proposed based on the chaotic sequences derived from the Tent map. The present result is encouraging in that FAST can be applied to problems where input parameters are modelled by chaotic variables and efficiently searched based on the Logistic and Tent maps. This paper can help us better understand the FAST methodology and provide a fundamental basis to exploit the unique properties (e.g., ergodicity, pseudo-randomness, irregularity, etc.) of chaos for further advancements of the method.

REVIEW OF FOURIER AMPLITUDE SENSITIVITY TEST (FAST)
The principle of FAST is that a model can be expanded into a Fourier series. In FAST, combined with characteristic frequencies assigned to input parameters, Fourier coefficients are used to estimate the model output variances, including partial variances due to higher-order interaction effects. Then, similar to the ANOVA decomposition, it becomes possible to decompose the total output variance into partial variances accounting for the contributions of individual input parameters. Based on this decomposition, various sensitivity indices can be estimated efficiently (Saltelli, 2008). An advantage of the classical FAST is that the evaluation of sensitivity indices can be carried out independently for each model factor using just one simulation, because all the terms in the Fourier expansion are mutually orthogonal. Thus, the main computation involved is the simultaneous evaluation of the Fourier coefficients.
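As an illustration of how spectral power at the characteristic frequencies yields first-order indices, the following minimal sketch estimates main effects along a periodic search curve. It is only a sketch under stated assumptions, not the implementation of this paper: the test model, the frequency set {5, 11}, the sample size, and the use of the triangle-wave (extended-FAST) learning function are all illustrative choices.

```python
import math

def fast_first_order(model, freqs, n_samples, n_harmonics=4):
    """Sketch of a FAST first-order sensitivity estimate.

    Samples the periodic search curve x_i = 1/2 + arcsin(sin(w_i*s))/pi at
    n_samples equally spaced points of s in (-pi, pi), evaluates the model
    along the curve, and attributes spectral power at the harmonics of each
    frequency w_i to the corresponding input parameter."""
    s_vals = [math.pi * (2 * j + 1 - n_samples) / n_samples
              for j in range(n_samples)]
    ys = [model([0.5 + math.asin(math.sin(w * s)) / math.pi for w in freqs])
          for s in s_vals]

    def power(k):
        # Spectral power 2*(A_k^2 + B_k^2) of the k-th Fourier harmonic
        a = sum(y * math.cos(k * s) for y, s in zip(ys, s_vals)) / n_samples
        b = sum(y * math.sin(k * s) for y, s in zip(ys, s_vals)) / n_samples
        return 2.0 * (a * a + b * b)

    total_var = sum(power(k) for k in range(1, max(freqs) * n_harmonics + 1))
    return [sum(power(p * w) for p in range(1, n_harmonics + 1)) / total_var
            for w in freqs]

# Hypothetical linear test model y = x1 + 2*x2 with uniform inputs on (0,1);
# the analytical first-order indices are S1 = 1/5 and S2 = 4/5.
S = fast_first_order(lambda x: x[0] + 2.0 * x[1], freqs=[5, 11], n_samples=513)
```

For this linear model the estimate should approach the analytical values 0.2 and 0.8; the frequencies 5 and 11 are chosen so that their low-order harmonics do not interfere.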
In FAST, the parameter space needs to be searched by utilizing an appropriate search curve which can move arbitrarily close to any point in the input space, i.e., a space-filling curve. Then, Fourier analysis can be performed along the search curve that traverses the entire parameter space. This is based on the ergodic theorem as detailed by Weyl (1938), which allows the calculation of the high-dimensional integrals involved in the evaluation of the Fourier coefficients through an equivalent one-dimensional integral with respect to a scalar search variable.
A search curve is defined through a set of parametric functions:

x_i = g_i(\sin \omega_i s), \quad i = 1, 2, \ldots, k,    (1)

where x_i denotes the sample of the i-th input parameter, g_i is the learning function to be determined, ω_i is the distinct, incommensurate characteristic frequency assigned to x_i, and s denotes the scalar auxiliary variable whose range is determined by the sample size required for the Fourier analysis. A set of frequencies is said to be incommensurate if none of them can be obtained as a linear combination of the other frequencies with integer coefficients. If this is the case, then the search curve never repeats itself and fills the entire parameter space. The learning function g_i converts the multi-dimensional search in the original domain of input parameters into a one-dimensional search in the common auxiliary variable s. As s varies, all input parameters vary simultaneously over their own ranges of variation at rates set by the characteristic frequencies assigned to them.
In the numerical implementation, however, an incommensurate set of characteristic frequencies cannot be used, since it would require an evaluation of integrals over an infinite interval, which is not computationally feasible. Hence, an appropriate set of integer frequencies is used instead. The consequences of this technical treatment are that the search curve is no longer a space-filling one, but becomes a periodic curve with period 2π, and an approximate integration can be effectively performed. Conditions and criteria have been extensively studied for the proper selection of the characteristic frequencies and the minimum sample size required for the analysis. Efforts have also been made to deal with the numerical errors associated with aliasing and interferences introduced by using integer frequencies. It is, however, beyond the scope of this contribution to describe all the technical details about the practical implementation of FAST, and the readers are referred to, e.g., [1][2][3][4][5][6][7].
We hereafter assume that all input parameters are statistically independent. In order to ensure that the parameter x_i traverses its range in accordance with the PDF assigned to it, the learning function g_i must satisfy the following differential equation [4,12]:

\pi \sqrt{1 - u^2}\, p_i(g_i(u))\, \frac{d g_i(u)}{d u} = 1,    (2)

where u = \sin \omega_i s and p_i is the PDF of x_i. Then, it is possible to transform the input parameters with the PDF p_i into the one-dimensional sample space with regard to the auxiliary variable s, and, for real applications, conventional grid-sampling techniques are used to estimate the Fourier coefficients. Consequently, in order to explore the dynamical characterization of the learning functions or, equivalently, the search curves involved in FAST, Eq. (2) must be solved for g_i(u).
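As a quick sanity check on Eq. (2), the short sketch below verifies numerically that the sine-based learning function g(u) = (1 + u)/2 of Koda et al., paired with the Chebyshev (arcsine) PDF p(x) = 1/(π√(x(1−x))), satisfies the differential equation identically in u. The function names are illustrative only.

```python
import math

# Sanity check on the defining relation for learning functions (Eq. (2)):
# pi * sqrt(1 - u^2) * p(g(u)) * g'(u) should equal 1 for all u in (-1, 1).
# Here g(u) = (1 + u)/2 is paired with the Chebyshev (arcsine) PDF
# p(x) = 1/(pi * sqrt(x*(1-x))). Names are illustrative, not from the paper.

def g(u):
    return 0.5 * (1.0 + u)

def g_prime(u):
    return 0.5

def p_chebyshev(x):
    return 1.0 / (math.pi * math.sqrt(x * (1.0 - x)))

residuals = []
for i in range(1, 100):
    u = -1.0 + 2.0 * i / 100.0  # interior points of (-1, 1)
    lhs = math.pi * math.sqrt(1.0 - u * u) * p_chebyshev(g(u)) * g_prime(u)
    residuals.append(abs(lhs - 1.0))

max_residual = max(residuals)  # should be ~0 up to floating-point error
```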

LEARNING FUNCTIONS IN FOURIER AMPLITUDE SENSITIVITY TEST (FAST)
In the history of FAST development, the following learning function was initially proposed by Cukier, Fortuin, Schuler, Petschek and Schaibly (1973):

x_i = \bar{x}_i \exp(\nu_i \sin \omega_i s),    (3)

where \bar{x}_i denotes the central value (best available estimate), and \nu_i is the bound at the end-point of the range of uncertainty for x_i, which is specified as a part of the input data. Because of the exponential function involved in Eq. (3), the distribution is long-tailed and positively skewed, not uniform at all.
In an effort to obtain a uniform distribution of the input parameters, Koda, McRae and Seinfeld (1979) proposed the following learning function:

x_i = \bar{x}_i (1 + \bar{\nu}_i \sin \omega_i s),    (4)

where the ω_i's are positive integers and the common auxiliary variable is allowed to vary as -\pi < s < \pi. Eq. (4) generates sampled input parameters for all x_i according to an expected PDF. Further, taking into account the symmetry properties associated with the evaluation of the Fourier coefficients, we may restrict the range of the periodic search curve from (-\pi, \pi) to (-\pi/2, \pi/2). Without loss of generality and for notational simplicity, in the rest of the paper, we confine our arguments to the case where the domain of parameter uncertainty is given by the unit hypercube [0,1]^k. Then, typically with \bar{x}_i = 1/2 and \bar{\nu}_i = 1, Eq. (4) is rewritten as:

x_i = \tfrac{1}{2}(1 + \sin \omega_i s).    (5)

By substituting Eq. (5) into Eq. (2), we can obtain the PDF associated with this learning function as:

p_i(x_i) = \frac{1}{\pi \sqrt{x_i (1 - x_i)}}, \quad 0 < x_i < 1,    (6)

which is referred to as the Chebyshev distribution and shown in Fig. 1. In Fig. 1, it is observed that the distribution is symmetric and nearly uniform around the middle region of the interval. Near the two ends of the sample interval (0,1) of the U-shaped function, however, the density is high and accordingly the sampled input parameter is highly represented, while in the middle region it is poorly represented. Saltelli et al. (1999) proposed another learning function:

x_i = \frac{1}{2} + \frac{1}{\pi} \arcsin(\sin \omega_i s),    (7)

which is a set of straight lines oscillating periodically on (0,1)×(0,1) at the corresponding characteristic frequency ω_i. This method is known as the extended FAST, and samples of the model input parameters derived from Eq. (7) attain a uniform distribution. We may note that, assuming a uniform distribution with p_i = 1, Eq. (2) reduces to

\pi \sqrt{1 - u^2}\, \frac{d g_i(u)}{d u} = 1,    (8)

and Eq. (7) is immediately obtained as a solution to Eq. (8). While the classical FAST provides first-order sensitivities (or main effects) at a given computational cost (which is independent of the number of model factors), the extended FAST allows the calculation of the total effect indices at a cost that is proportional to the number of model factors (Saltelli et al., 1999).
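The contrast between Eqs. (5) and (7) can be made concrete by sampling both search curves over one period of s and comparing the mass falling near the ends and in the middle of (0,1). The frequency and sample count below are arbitrary illustrative choices.

```python
import math

# Compare the sampling densities of the classical curve (Eq. (5)) and the
# extended-FAST curve (Eq. (7)) over one period of the auxiliary variable s.
# Frequency and sample size are arbitrary illustrative choices.
N, w = 100_000, 7
s_vals = [-math.pi + 2.0 * math.pi * j / N for j in range(N)]

classical = [0.5 * (1.0 + math.sin(w * s)) for s in s_vals]              # Eq. (5)
extended = [0.5 + math.asin(math.sin(w * s)) / math.pi for s in s_vals]  # Eq. (7)

def fraction(xs, lo, hi):
    return sum(lo <= x < hi for x in xs) / len(xs)

# The Chebyshev (arcsine) density piles up near the ends of (0,1)...
tail_classical = fraction(classical, 0.0, 0.1)   # ~0.205 under the arcsine law
tail_extended = fraction(extended, 0.0, 0.1)     # ~0.100 under the uniform law
# ...and is sparse in the middle, unlike the uniform density.
mid_classical = fraction(classical, 0.4, 0.6)    # ~0.128 under the arcsine law
mid_extended = fraction(extended, 0.4, 0.6)      # ~0.200 under the uniform law
```

The tail fraction 0.205 follows from the arcsine CDF F(x) = (2/π) arcsin√x evaluated at x = 0.1.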
For non-uniform distributions, Lu and Mohanty (2003) later showed that the periodic search function can be derived using the parameter's inverse cumulative distribution function (ICDF, F_i^{-1}), and the generic learning function is represented as:

x_i = F_i^{-1}\!\left(\frac{1}{2} + \frac{1}{\pi} \arcsin(\sin \omega_i s)\right).    (9)

Hence, appropriate search functions for any stochastic variable with a standard distribution can be obtained through Eq. (9).
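A brief sketch of Eq. (9), using the exponential distribution as a stand-in for a stochastic variable with a standard distribution (a hypothetical choice, not an example from the paper): its ICDF is F^{-1}(q) = −ln(1−q)/λ, and driving it with the uniform samples of Eq. (7) should reproduce the exponential mean 1/λ.

```python
import math

# Sketch of Eq. (9): sampling a non-uniform parameter through its ICDF.
# The exponential distribution (rate lam) is a hypothetical example; its
# ICDF is F^{-1}(q) = -ln(1 - q) / lam.
N, w, lam = 100_001, 7, 2.0
s_vals = [-math.pi + 2.0 * math.pi * (j + 0.5) / N for j in range(N)]

samples = []
for s in s_vals:
    q = 0.5 + math.asin(math.sin(w * s)) / math.pi   # uniform on (0, 1), Eq. (7)
    samples.append(-math.log(1.0 - q) / lam)         # exponential via the ICDF

mean = sum(samples) / N   # should approach 1/lam = 0.5
```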

CHAOS SEARCH AND FOURIER AMPLITUDE SENSITIVITY TEST (FAST)
In this section, the periodic sampling approach of FAST is interpreted and analysed within the technical framework of chaos search based on the Logistic and Tent maps.In particular, a new random sampling method is proposed for the implementation of chaos search based on the Tent map.

Chaos search based on the Logistic map and FAST
As an emerging tool for global optimization, chaos search is among the latest developments in the chaos optimization arena (Yang, Li & Cheng, 2007).
In chaos search, the Logistic map is the most commonly used generator of chaotic sequences, since it is the simplest learning function that exhibits sensitive dependence on initial conditions, which is a typical characteristic of chaos. The trajectory of the resulting chaotic sequences constitutes the search curve that traverses the entire space of interest ergodically, similar to the space-filling curve used in FAST.
The Logistic map, as a prototype of a one-dimensional map with chaos, is given by

y_{n+1} = \mu\, y_n (1 - y_n),    (10)

where y_n is the chaotic variable, μ is the control parameter, and n denotes the iteration step. We suppose μ = 4, for which the map is fully chaotic:

y_{n+1} = f(y_n),    (11)

f(y) = 4 y (1 - y).    (12)

For this case, Eq. (11) has the closed-form solution of the type:

y_n = \sin^2(2^n \theta), \quad \theta = \arcsin\sqrt{y_0}.    (13)
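The closed-form solution y_n = sin²(2ⁿθ) can be checked against direct iteration of the map; the sketch below (with an arbitrary seed) compares only a few steps, since the positive Lyapunov exponent amplifies round-off exponentially.

```python
import math

# Check that the closed form y_n = sin^2(2^n * theta), theta = asin(sqrt(y_0)),
# reproduces direct iteration of y_{n+1} = 4*y*(1-y). Only a few steps are
# compared, because round-off grows exponentially. The seed is arbitrary.
y0 = 0.2
theta = math.asin(math.sqrt(y0))

y = y0
max_err = 0.0
for n in range(1, 16):
    y = 4.0 * y * (1.0 - y)                    # direct iteration, Eq. (11)
    closed = math.sin(2.0 ** n * theta) ** 2   # closed form, Eq. (13)
    max_err = max(max_err, abs(y - closed))
```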
Figure 2 shows the bifurcation diagram of the Logistic map, in which the route to chaos proceeds through successive period doubling.
Based on the duality argument, it is possible to find the invariant probability density function of the Logistic map, p_y(y), as follows (May, 1976):

p_y(y) = \frac{1}{\pi \sqrt{y (1 - y)}}, \quad 0 < y < 1,    (14)

which is again the Chebyshev distribution shown in Fig. 1. For a chaotic dynamical system with absolutely continuous distributions, the invariant PDF may generally be available, and it describes the "steady state" of the chaotic map (Hall & Wolff, 1988). It should be noticed that the invariant PDF of the Logistic map, Eq. (14), is identically equal to Eq. (6). Thus, the PDF of the sampled input parameters of the FAST method proposed by Koda et al. (1979) and the invariant PDF of the chaotic sequences from the Logistic map follow the same Chebyshev distribution, which suggests that there exists a strong technical link between the two methods.
Using Eqs. (12) and (14), the Lyapunov exponent λ can be calculated as:

\lambda = \int_0^1 p_y(y) \ln |f'(y)|\, dy = \ln 2 > 0.    (15)

Lyapunov exponents can be viewed as generalizations of eigenvalues that are well-defined for chaotic dynamics. Positive values indicate directions of average local exponential expansion, in the sense that the system evolution has a sensitive dependence on initial conditions and neighbouring learning trajectories separate exponentially fast, which is the signature of chaoticity. Hence, Eq. (15) implies that chaotic variables with the nature of pseudo-randomness can be derived from the Logistic map.
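The value λ = ln 2 in Eq. (15) can also be estimated directly from an orbit, by averaging ln|f′(yₙ)| = ln|4 − 8yₙ| along the iteration; a minimal sketch with an arbitrary seed and orbit length:

```python
import math

# Orbit-based estimate of the Lyapunov exponent of the Logistic map (mu = 4):
# lambda ~ (1/N) * sum ln|f'(y_n)|, with f'(y) = 4 - 8*y. The analytical
# value from Eq. (15) is ln 2 ~ 0.6931. Seed and length are arbitrary.
y, N = 0.123, 200_000
acc = 0.0
for _ in range(N):
    acc += math.log(abs(4.0 - 8.0 * y))
    y = 4.0 * y * (1.0 - y)

lyapunov = acc / N   # expected to be close to ln 2
```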
In an analogy to FAST, along with the Logistic map as an ergodic (i.e., space-filling) search curve, we suppose that the chaotic sequences {y_n} derived from Eq. (10) are used to model input parameters. Then, according to the arguments developed above, one may conclude that the performance (i.e., global-learning capacity and efficiency) of chaos search based on the Logistic map is essentially equivalent to that of the FAST method based on Eq. (5). Indeed, with the same PDF for the sampled sequences, the basic search characteristics of the two methods would be identical, since the distribution of the sampled sequences is in both cases expressed as Eq. (6), namely, the Chebyshev function.
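This equivalence can be checked empirically: the mass that each sampler places in a given sub-interval should agree between parameters sampled along Eq. (5) and a Logistic-map orbit, both following the arcsine (Chebyshev) law. The seed, frequency, and sample counts below are arbitrary illustrative choices.

```python
import math

# Empirical check that FAST samples from Eq. (5) and Logistic-map iterates
# share the same Chebyshev (arcsine) marginal distribution. The arcsine CDF
# F(x) = (2/pi)*asin(sqrt(x)) gives F(0.1) ~ 0.2048. Seed, frequency and
# sample counts are arbitrary illustrative choices.
N, w = 100_000, 3
fast_samples = [0.5 * (1.0 + math.sin(w * (-math.pi + 2.0 * math.pi * j / N)))
                for j in range(N)]

logistic = []
y = 0.3141
for _ in range(N):
    y = 4.0 * y * (1.0 - y)
    logistic.append(y)

def tail_fraction(xs):
    return sum(x < 0.1 for x in xs) / len(xs)

f_fast = tail_fraction(fast_samples)   # ~0.205 under the arcsine law
f_map = tail_fraction(logistic)        # ~0.205 as well
```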

Chaos search based on the Tent map by the random sampling method
In order to expand the class of search curves involved in FAST, we propose the following learning function h based on the Logistic map:

x = h(y) = \frac{2}{\pi} \arcsin \sqrt{y},    (16)

where y is the chaotic variable derived from Eqs. (11) and (12). From Eq. (16), y can be immediately recovered as

y = h^{-1}(x) = \sin^2\!\left(\frac{\pi x}{2}\right),    (17)

where h^{-1} denotes the inverse function of h. Note that the functional form of Eq. (17) is similar to the closed-form solution of the Logistic map given in Eq. (13).
It is well known that the Logistic map and the Tent map can be transformed into each other, and there is a relationship of topological conjugation between them. Using Eq. (16) in Eqs. (11) and (12), we can formally transform the Logistic map to the Tent map as follows:

x_{n+1} = T(x_n),    (18)

where x_n (with 0 \le x_n \le 1) denotes the chaotic variable at the n-th iteration, and:

T(x) = \begin{cases} 2x, & 0 \le x \le 1/2, \\ 2(1 - x), & 1/2 < x \le 1, \end{cases}    (19)

which is the piecewise-linear (tent-shaped) function. Then, in an analogy to FAST, along with the Tent map as the ergodic search function, the chaotic sequences {x_n} derived from Eq. (18) may be utilized to model input parameters.
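The topological conjugacy can be verified numerically for a short stretch of orbit: mapping each Logistic iterate through h of Eq. (16) should reproduce the Tent-map iterates of Eq. (18). Only a few steps are compared, since the positive Lyapunov exponent roughly doubles floating-point error at each step; the seed is arbitrary.

```python
import math

# Verify the conjugacy x_{n+1} = T(x_n) with x_n = h(y_n) = (2/pi)*asin(sqrt(y_n)),
# where y_n follows the Logistic map (mu = 4). Only a short orbit is checked,
# because chaos amplifies floating-point error exponentially. Arbitrary seed.
def tent(x):
    return 2.0 * x if x <= 0.5 else 2.0 * (1.0 - x)

def h(y):
    return 2.0 * math.asin(math.sqrt(y)) / math.pi

y = 0.2718                 # Logistic-map state
x = h(y)                   # conjugate Tent-map state
max_gap = 0.0
for _ in range(20):
    y = 4.0 * y * (1.0 - y)
    x = tent(x)
    max_gap = max(max_gap, abs(h(y) - x))  # should stay at round-off level
```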
It may be noticed that Eqs. (18) and (19) can be obtained directly from the period-doubling property of the chaotic sequences based on the Logistic map as:

x_{n+1} = h(y_{n+1}) = \frac{2}{\pi} \arcsin(\sin \pi x_n) = T(x_n).    (20)

Further, the invariant PDF of the Tent map is uniform:

p_x(x) = 1, \quad 0 < x < 1.    (21)

Note that Eq. (21) can be easily obtained through direct differentiation of Eq. (17) as

p_x(x) = p_y(y) \left| \frac{dy}{dx} \right| = \frac{1}{\pi \sqrt{y(1-y)}} \cdot \frac{\pi}{2} \sin \pi x = 1,    (22)

where p_y(y) is the invariant PDF of the Logistic map given in Eq. (14). Hence, the invariant PDF of the Tent map is proven to be a uniform function, which is also the case for the PDF associated with the extended FAST.
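The uniform invariant PDF of Eq. (21) can be observed empirically. One numerical caveat, well known but worth flagging: iterating Eq. (19) directly in binary floating point collapses toward 0 after roughly the mantissa length in steps, because each iteration shifts mantissa bits out. The sketch below therefore generates the Tent sequence through the conjugacy x_n = h(y_n) with a Logistic-map orbit; seed and sample size are arbitrary.

```python
import math

# Empirical check of the uniform invariant PDF of the Tent map (Eq. (21)).
# The Tent sequence is generated through the conjugacy x_n = (2/pi)*asin(sqrt(y_n))
# with a Logistic-map orbit, since direct binary iteration of the Tent map is
# known to collapse to 0 after ~50 steps. Seed and sample size are arbitrary.
N = 100_000
y = 0.3141
tent_sequence = []
for _ in range(N):
    y = 4.0 * y * (1.0 - y)
    tent_sequence.append(2.0 * math.asin(math.sqrt(y)) / math.pi)

# Under a uniform law, each sub-interval receives mass equal to its width.
frac_low = sum(x < 0.5 for x in tent_sequence) / N          # ~0.5
frac_mid = sum(0.4 <= x < 0.6 for x in tent_sequence) / N   # ~0.2
```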
It is well known that the Tent map can be extended to a class of piecewise-linear functions, including a set of straight lines oscillating periodically on (0,1)×(0,1) at a search frequency ω_i. For integer frequencies, plots of the piecewise-linear function (i.e., learning function) and the corresponding search frequency are given in Fig. 3. Note that the Tent map is shown in Fig. 3 with the unit characteristic frequency, i.e., ω = 1. In the subsequent analysis, we take into consideration the search frequencies {ω_i} that shall be assigned to the input parameters, and propose a new chaos search procedure based on the Tent map by the random sampling method.
Since the Chebyshev-type density concentrates sample points in the vicinity of the major uncertainty locations near the ends of the parameter interval, chaos search based on the Logistic map may be useful for problems with non-uniform parameter distributions of the Chebyshev type. Most often, however, the complex model structure and the location of the measured disturbances will not be available beforehand, and it is difficult to predict the exact parameter uncertainties of actual problems. Hence, it may be most practical to use chaos search based on the Tent map with a uniform PDF, or equivalently, the extended FAST.
A sample trajectory of chaotic variables derived from the Logistic or Tent map can move over the entire space of interest ergodically. The unique properties of chaotic variables are ergodicity, pseudo-randomness, and irregularity. The present study is the first attempt to exploit them in a practical implementation of FAST. One of the major problems associated with the periodic sampling approach of the traditional FAST would be the non-ergodicity of the periodic search curves involved. The present study can help us better understand the FAST methodology and provide a fundamental basis to exploit the ergodic property of chaos for further advancements of the method.

CONCLUSION
In this contribution, we have investigated the dynamical characterization of the learning functions involved in the Fourier Amplitude Sensitivity Test (FAST), and expanded the class of search curves to include the chaotic Logistic and Tent maps. A new random sampling approach is also proposed based on the chaotic sequences derived from the Tent map. In particular, it is shown for the first time that the performance of chaos search based on the Logistic map is essentially equivalent to that of the FAST method proposed by Koda et al. (1979). It is also demonstrated that a representative implementation of chaos search based on the Tent map by the grid sampling method coincides with the extended FAST procedure proposed by Saltelli et al. (1999). Thus, we have established the general link that exists between chaos search and FAST.
We may note that the present formalism may be straightforwardly extended to problems where input parameters are modelled by chaotic variables derived from chaotic maps of other types. It is expected that the trend towards increased use of FAST will continue, driven by the growing needs for more advanced global-sensitivity analysis tools which can learn from complex dynamical systems. The present approach based on chaos search has proven to be promising for further advancements of the learning algorithms exploiting the unique properties (e.g., ergodicity, pseudo-randomness, etc.) of chaos.
Since non-chaotic learning trajectories are usually asymptotic and too crude for complex problems, we expect that the chaos search will become more popular in AI applications.

Figure 2. Bifurcation diagram of the Logistic map.

Figure 3. Piecewise-linear functions with corresponding search frequencies.
In deriving Eq. (21), we have used the relationship that holds for the density transformation between the invariant PDFs of the Tent and Logistic maps, i.e., p_x(x) = p_y(y) |dy/dx|.