EMPIRICAL MODE DECOMPOSITION BASED ON THETA METHOD FOR FORECASTING DAILY STOCK PRICE

Forecasting is a challenging task, as time series data exhibit many features that cannot be captured by a single model. Therefore, many researchers have proposed various hybrid models to accommodate these features and improve forecasting results. This work proposes a hybrid of the Empirical Mode Decomposition (EMD) and Theta methods, motivated by their combined forecasting potential. EMD and Theta are each efficient methods in their own domains, decomposition and forecasting respectively, so combining them to obtain a synergistic outcome deserves consideration. EMD decomposed the training data from each of five Financial Times Stock Exchange 100 Index (FTSE 100 Index) companies' stock price time series into Intrinsic Mode Functions (IMFs) and a residue. The Theta method then forecasted each decomposed subseries. Considering different forecast horizons, the effectiveness of this hybridisation was evaluated through conventional error measures computed between the test data and the forecasts, the latter obtained by adding the forecast results for all components extracted by the EMD process. This study found that the proposed method produced better forecast accuracy than three classic methods and the hybrid EMD-ARIMA model.


INTRODUCTION
The challenging task of time series forecasting is a very active and important research area. In many phenomena, past, present, and future events are intrinsically correlated with varying degrees of randomness. Some events are highly unpredictable, whereas others are relatively easy to predict (Makridakis, 1986). Better forecasting depends both on capturing the characteristics of the data and on fitting models appropriate to those characteristics. Many statistical as well as machine learning models now exist for time series forecasting, but hybrid models are also proving promising in many cases, and this work proposes an Empirical Mode Decomposition (EMD)-Theta hybrid model.
Choosing or finding the best model for a particular or similar type of time series data is a challenging but essential consideration. Chatfield (1988) discussed competitive effectiveness, encompassing the strengths and weaknesses of models and approaches for judgmental, univariate, multivariate, automatic, and non-automatic forecasting, with a focus on forecasting competitions. That work drew a broad landscape of contemporary forecasting research and pointed towards future approaches for finding better forecasting models.
There is still much scope for, and importance in, improving forecasting in econometrics and finance, where hybridisation deserves particular consideration. In this article, the proposed EMD-Theta hybrid model is presented with an evaluation on five Financial Times Stock Exchange 100 Index (FTSE 100 Index) companies, along with a comparison of performance against several classical methods and the hybrid EMD-Autoregressive Integrated Moving Average (ARIMA) method. The proposed model evidently performed better than the other models, overcoming limitations observed in them. One potential limitation of the EMD-Theta model is that EMD can be inapplicable for some time series that suffer from the end effect and mode mixing, which can be remedied by considering its improved variants. Overall, EMD-Theta hybridisation offers robust forecasting performance, which indicates its potential for further research and application.
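As a rough sketch of the hybrid recipe described above, the following toy pipeline shows how component-wise forecasts are added back together. It is purely illustrative: a moving-average split stands in for a real EMD implementation, a naive drift forecaster stands in for the Theta method, and the names `toy_decompose`, `drift_forecast`, and `hybrid_forecast` are hypothetical, not from the paper.

```python
import numpy as np

def toy_decompose(x, window=5):
    # Stand-in for EMD: split the series into a smooth component
    # (moving average) and an oscillatory remainder, so that the
    # parts sum back to the original series, as EMD's IMFs + residue do.
    kernel = np.ones(window) / window
    pad = np.pad(x, (window // 2, window - 1 - window // 2), mode="edge")
    trend = np.convolve(pad, kernel, mode="valid")
    return [x - trend, trend]

def drift_forecast(x, h):
    # Stand-in univariate forecaster (naive with drift); in the paper
    # each component is instead forecast by the Theta method.
    slope = (x[-1] - x[0]) / (len(x) - 1)
    return x[-1] + slope * np.arange(1, h + 1)

def hybrid_forecast(x, h):
    # Decompose, forecast each subseries, then add the component
    # forecasts together -- the EMD-Theta recipe in outline.
    return sum(drift_forecast(c, h) for c in toy_decompose(x))

# Synthetic "price" series: linear growth plus an oscillation.
prices = np.linspace(100.0, 120.0, 60) + 2.0 * np.sin(np.arange(60) / 3.0)
fc = hybrid_forecast(prices, h=5)
```

The key property exploited by the hybrid is additivity: because the decomposition reconstructs the original series exactly, summing the component forecasts yields a forecast of the original series.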

LITERATURE REVIEW
Time series forecasting belongs to many research fields, with financial and economic applications holding broad and widespread concern. The earliest models developed in time series forecasting are the Autoregressive (AR) and Moving Average (MA) models. A more developed combined approach, ARIMA, was then introduced, notably through the Box-Jenkins approach (Box & Jenkins, 1970; Cholette, 1982). Later, further modifications evolved and many other models were developed. ARIMA extends the Autoregressive Moving Average (ARMA) model, which emerged from further works (Wold, 1938; Whittle, 1951) as a suitable method for stationary series. ARMA is itself a combination of two other methods: AR, introduced by Yule (1927), and MA, the work of Slutzky (1937). The method is written as ARIMA(p, d, q), where p, d, and q denote the number of autoregressive terms, the order of integration (differencing), and the number of moving average terms, respectively. The value of d is obtained by differencing the series one or more times until it becomes stationary.
The differenced series is then modelled by the ARMA process, which can generally be written as Equation 1:

X_t = c + φ_1 X_{t−1} + φ_2 X_{t−2} + ⋯ + φ_p X_{t−p} + ε_t − θ_1 ε_{t−1} − θ_2 ε_{t−2} − ⋯ − θ_q ε_{t−q} (1)

where the X_{t−i} and ε_{t−j} are past values and past deviations or errors, respectively. Using the lag operator L, which operates as L X_t = X_{t−1}, Equation (1) can be written as Equation 2:

(1 − φ_1 L − φ_2 L² − ⋯ − φ_p L^p) X_t = c + (1 − θ_1 L − θ_2 L² − ⋯ − θ_q L^q) ε_t (2)

Along with obtaining the values of p, d, and q, and the fitted values of the coefficients of all terms, the Box-Jenkins approach is followed through some necessary steps for appropriate model selection and forecasting. Although the ARIMA method is a classic approach, it is still being applied in new research areas, such as the recent work of Zhao et al. (2019) on resource prediction for Kubernetes, an open-source cluster management software. One of the merits of ARIMA is that it performs quite satisfactorily for non-stationary time series that can be easily and firmly transformed into stationary ones. However, a notable demerit is that ARIMA tends to fail for highly non-stationary and non-linear time series data, especially those with turbulent characteristics and dynamic curvature.
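To make the role of d concrete, a short NumPy check (an illustrative example, not from the paper) shows that a series with a deterministic quadratic trend becomes constant, and hence trivially stationary, after differencing twice (d = 2):

```python
import numpy as np

t = np.arange(10, dtype=float)
y = 0.5 * t**2 + 3.0 * t + 7.0   # deterministic quadratic trend

d1 = np.diff(y)        # first difference: still trending (linear in t)
d2 = np.diff(y, 2)     # second difference: constant (= 1.0 here), stationary
```

For real data the choice of d is made with stationarity diagnostics rather than by inspection, but the mechanism is the same: each difference removes one polynomial order of trend.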
Smoothing methods for better data fitting as well as forecasting were developed in the seminal works of Brown (1956), Holt (2004), and Winters (1960). Other notable contributions to this family of models were made by Gardner (1985) and Gardner (2006). The Exponentially Weighted Moving Average (EWMA), a smoothing technique for time series data fitting, was developed by Brown (1956); it historically originated in the 17th century with Denis Poisson, in dealing with a numerical analysis problem related to weighted averaging and exponential windowing. A general EWMA model is represented by Equations 3 and 4:

s_1 = x_1 (3)
s_t = α x_t + (1 − α) s_{t−1}, for t > 1 (4)

where α, x_t, and s_t are respectively the smoothing parameter, the original sequence terms, and the smoothed sequence terms, the latter found as convex combinations with the original terms. By recursive use, Equations (3) and (4) can jointly be rewritten as Equation 5:

s_t = α Σ_{k=0}^{t−2} (1 − α)^k x_{t−k} + (1 − α)^{t−1} x_1 (5)

Equation (5) reveals the weight w(k) = α(1 − α)^k, which decreases exponentially towards distant past values. By adding the smoothing weights, the related cumulative distribution function takes the form F(k) = 1 − (1 − α)^{k+1}. For effective or optimal smoothing in the EWMA process, the best value of α is required to minimise the deviation or error between the fitted and original data. The Marquardt procedure and other conventional search approaches, as well as manual tuning, are employed to obtain this value. Relevantly, the EWMA method was further extended to double and triple smoothing by Holt (1957) and Winters (1960).
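The equivalence between the recursion of Equations (3)-(4) and the expanded weighting of Equation (5) can be checked numerically. The sketch below is illustrative (it assumes the s_1 = x_1 initialisation used above; the function names are hypothetical):

```python
import numpy as np

def ewma(x, alpha):
    # Recursive form, Equations (3)-(4): s_1 = x_1, then
    # s_t = alpha * x_t + (1 - alpha) * s_{t-1}.
    s = [x[0]]
    for v in x[1:]:
        s.append(alpha * v + (1 - alpha) * s[-1])
    return np.array(s)

def ewma_expanded(x, alpha):
    # Expanded form, Equation (5): weights alpha * (1 - alpha)^k decay
    # exponentially into the past; the oldest value x_1 retains
    # weight (1 - alpha)^(t-1).
    t = len(x)
    w = alpha * (1 - alpha) ** np.arange(t - 1)   # weights for x_t .. x_2
    recent = x[::-1][: t - 1]                     # x_t, x_{t-1}, ..., x_2
    return float(np.dot(w, recent) + (1 - alpha) ** (t - 1) * x[0])

series = np.array([3.0, 1.0, 4.0, 1.5, 9.0])
s_rec = ewma(series, alpha=0.3)[-1]          # last smoothed value, recursive
s_exp = ewma_expanded(series, alpha=0.3)     # same value from Equation (5)
```

Both routes give the same smoothed value, which is exactly what the recursive rewriting in Equation (5) asserts.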
The Theta method was developed, introduced, and described along with background mathematics by Assimakopoulos and Nikolopoulos (2000) and Hyndman and Billah (2003). The scholars simplified and suggested different approaches for derivation with the same or similar result found. They claimed that the performance of the Theta method was similar to simple exponential smoothing (SES) with a drift. Some other research contributions involving the Theta method are works of Pagourtzi et al. (2008), Nikolopoulos et al. (2011), and Thomakos and Nikolopoulos (2014). In a work by Petropoulos et al. (2019) of inventory performance considering different forecasting methods where � and � are past values and past deviations or errors, respectively. Using lag operator � that ope as � � = ��� , Equation (1) can be written as Equation (2): Along with obtaining the values of p, d, and q, and the fitted values for the coefficients of all t for better or appropriate model selection and forecasting, the Box-Jenkins approach is followed thro some necessary steps. Although the ARIMA method is a classic approach, it is still being applied in diffe new research areas like a recent work of Zhao et al. (2019) for resource prediction on Kubernetes, an o source cluster management software. One of the merits of ARIMA is that it performs quite satisfactoril non-stationary time series, which can be easily and firmly transformed into stationary. However mentionable demerit is that ARIMA tends to fail for highly non-stationary and non-linear time series especially with turbulent characteristics and dynamic curvature.
Smoothing methods for better data fitting as well forecasting were developed by the seminal w of Brown (1956), Holt (2004), and Winters (1960). Other mentionable contributions in this model made by Gardner (1985) and Gardner (2006). Exponentially Weighted Moving Average (EWMA smoothing technique for time series data fitting, is a work developed by Brown (1956) that histori originated in the 17 th century by Denis Poisson in dealing with his numerical analysis problem relate weighted averaging and exponential windowing. A general EWMA model is represented by Equation and (4): where , � , and � are respectively smoothing parameter, original sequence terms, and exponent decreasing sequence terms, which are found by convex combinations with original terms. By recursive Equations (3) and (4) can jointly be rewritten as Equation (5): Equation (5) reveals ( ) = (1 − ) � , which has exponential value decrease property tow distant past values. By adding all smoothing weight, the related cumulative distribution function can b the form ( ) = 1 − (1 − ) � . For effective or optimal smoothing in the EWMA process, the best valu is a requirement for least deviation or error of fitted data with original data. The Marquardt procedure other conventional advanced search approaches, as well as manual tuning, are employed to obtain this v Relevantly, the EWMA method was further developed for an extension for double and triple smoothin Holt (1957) and Winters (1960).
The Theta method was developed, introduced, and described along with background mathem by Assimakopoulos and Nikolopoulos (2000) and Hyndman and Billah (2003). The scholars simplified suggested different approaches for derivation with the same or similar result found. They claimed tha performance of the Theta method was similar to simple exponential smoothing (SES) with a drift. S other research contributions involving the Theta method are works of Pagourtzi et al. (2008), Nikolopo Along with obtaining the values of p, d, and q, and the fitted values for the coefficients o for better or appropriate model selection and forecasting, the Box-Jenkins approach is followe some necessary steps. Although the ARIMA method is a classic approach, it is still being applied i new research areas like a recent work of Zhao et al. (2019) for resource prediction on Kubernetes source cluster management software. One of the merits of ARIMA is that it performs quite satisfa non-stationary time series, which can be easily and firmly transformed into stationary. Ho mentionable demerit is that ARIMA tends to fail for highly non-stationary and non-linear time s especially with turbulent characteristics and dynamic curvature.
Smoothing methods for better data fitting as well forecasting were developed by the sem of Brown (1956), Holt (2004), and Winters (1960). Other mentionable contributions in this m made by Gardner (1985) and Gardner (2006). Exponentially Weighted Moving Average (E smoothing technique for time series data fitting, is a work developed by Brown (1956) that h originated in the 17 th century by Denis Poisson in dealing with his numerical analysis problem weighted averaging and exponential windowing. A general EWMA model is represented by Eq and (4): where , � , and � are respectively smoothing parameter, original sequence terms, and exp decreasing sequence terms, which are found by convex combinations with original terms. By rec Equations (3) and (4) can jointly be rewritten as Equation (5): Equation (5) reveals ( ) = (1 − ) � , which has exponential value decrease proper distant past values. By adding all smoothing weight, the related cumulative distribution function the form ( ) = 1 − (1 − ) � . For effective or optimal smoothing in the EWMA process, the be is a requirement for least deviation or error of fitted data with original data. The Marquardt pro other conventional advanced search approaches, as well as manual tuning, are employed to obtain Relevantly, the EWMA method was further developed for an extension for double and triple sm Holt (1957) and Winters (1960).
The Theta method was developed, introduced, and described along with background m by Assimakopoulos and Nikolopoulos (2000) and Hyndman and Billah (2003). The scholars sim suggested different approaches for derivation with the same or similar result found. They claim performance of the Theta method was similar to simple exponential smoothing (SES) with a d other research contributions involving the Theta method are works of Pagourtzi et al. (2008), Nik Along with obtaining the values of p, d, and q, and the fitted values for the coef for better or appropriate model selection and forecasting, the Box-Jenkins approach i some necessary steps. Although the ARIMA method is a classic approach, it is still being new research areas like a recent work of Zhao et al. (2019) for resource prediction on Ku source cluster management software. One of the merits of ARIMA is that it performs qui non-stationary time series, which can be easily and firmly transformed into station mentionable demerit is that ARIMA tends to fail for highly non-stationary and non-line especially with turbulent characteristics and dynamic curvature.
Smoothing methods for better data fitting as well forecasting were developed by of Brown (1956), Holt (2004), and Winters (1960). Other mentionable contributions made by Gardner (1985) and Gardner (2006). Exponentially Weighted Moving Av smoothing technique for time series data fitting, is a work developed by Brown (195 originated in the 17 th century by Denis Poisson in dealing with his numerical analysis weighted averaging and exponential windowing. A general EWMA model is represente and (4): where , � , and � are respectively smoothing parameter, original sequence terms, decreasing sequence terms, which are found by convex combinations with original term Equations (3) and (4) can jointly be rewritten as Equation (5): Equation (5) reveals ( ) = (1 − ) � , which has exponential value decreas distant past values. By adding all smoothing weight, the related cumulative distribution the form ( ) = 1 − (1 − ) � . For effective or optimal smoothing in the EWMA proces is a requirement for least deviation or error of fitted data with original data. The Marqu other conventional advanced search approaches, as well as manual tuning, are employed Relevantly, the EWMA method was further developed for an extension for double and t Holt (1957) and Winters (1960).
The Theta method was developed, introduced, and described along with backg by Assimakopoulos and Nikolopoulos (2000) and Hyndman and Billah (2003). The scho suggested different approaches for derivation with the same or similar result found. Th performance of the Theta method was similar to simple exponential smoothing (SES) other research contributions involving the Theta method are works of Pagourtzi et al. (20 Along with obtaining the values of p, d, and q, and the fitted values for the coef for better or appropriate model selection and forecasting, the Box-Jenkins approach i some necessary steps. Although the ARIMA method is a classic approach, it is still being new research areas like a recent work of Zhao et al. (2019) for resource prediction on Ku source cluster management software. One of the merits of ARIMA is that it performs qui non-stationary time series, which can be easily and firmly transformed into station mentionable demerit is that ARIMA tends to fail for highly non-stationary and non-line especially with turbulent characteristics and dynamic curvature.
Smoothing methods for better data fitting as well forecasting were developed by of Brown (1956), Holt (2004), and Winters (1960). Other mentionable contributions made by Gardner (1985) and Gardner (2006). Exponentially Weighted Moving Av smoothing technique for time series data fitting, is a work developed by Brown (195 originated in the 17 th century by Denis Poisson in dealing with his numerical analysis weighted averaging and exponential windowing. A general EWMA model is represente and (4): where , � , and � are respectively smoothing parameter, original sequence terms, decreasing sequence terms, which are found by convex combinations with original term Equations (3) and (4) can jointly be rewritten as Equation (5): Equation (5) reveals ( ) = (1 − ) � , which has exponential value decreas distant past values. By adding all smoothing weight, the related cumulative distribution the form ( ) = 1 − (1 − ) � . For effective or optimal smoothing in the EWMA proces is a requirement for least deviation or error of fitted data with original data. The Marqu other conventional advanced search approaches, as well as manual tuning, are employed Relevantly, the EWMA method was further developed for an extension for double and t Holt (1957) and Winters (1960).
The Theta method was developed, introduced, and described along with backg by Assimakopoulos and Nikolopoulos (2000) and Hyndman and Billah (2003). The scho suggested different approaches for derivation with the same or similar result found. Th performance of the Theta method was similar to simple exponential smoothing (SES) other research contributions involving the Theta method are works of Pagourtzi et al. (2 Along with obtaining the values of p, d, and q, and the fitted values for the coefficients o for better or appropriate model selection and forecasting, the Box-Jenkins approach is follow some necessary steps. Although the ARIMA method is a classic approach, it is still being applied i new research areas like a recent work of Zhao et al. (2019) for resource prediction on Kubernete source cluster management software. One of the merits of ARIMA is that it performs quite satisfa non-stationary time series, which can be easily and firmly transformed into stationary. Ho mentionable demerit is that ARIMA tends to fail for highly non-stationary and non-linear time s especially with turbulent characteristics and dynamic curvature.
Smoothing methods for better data fitting as well forecasting were developed by the sem of Brown (1956), Holt (2004), and Winters (1960). Other mentionable contributions in this m made by Gardner (1985) and Gardner (2006). Exponentially Weighted Moving Average (E smoothing technique for time series data fitting, is a work developed by Brown (1956) that h originated in the 17 th century by Denis Poisson in dealing with his numerical analysis problem weighted averaging and exponential windowing. A general EWMA model is represented by Eq and (4): where , � , and � are respectively smoothing parameter, original sequence terms, and exp decreasing sequence terms, which are found by convex combinations with original terms. By rec Equations (3) and (4) can jointly be rewritten as Equation (5): Equation (5) reveals ( ) = (1 − ) � , which has exponential value decrease proper distant past values. By adding all smoothing weight, the related cumulative distribution functio the form ( ) = 1 − (1 − ) � . For effective or optimal smoothing in the EWMA process, the be is a requirement for least deviation or error of fitted data with original data. The Marquardt pro other conventional advanced search approaches, as well as manual tuning, are employed to obtain Relevantly, the EWMA method was further developed for an extension for double and triple sm Holt (1957) and Winters (1960).
The Theta method was developed, introduced, and described along with background m by Assimakopoulos and Nikolopoulos (2000) and Hyndman and Billah (2003). The scholars sim suggested different approaches for derivation with the same or similar result found. They claim performance of the Theta method was similar to simple exponential smoothing (SES) with a d other research contributions involving the Theta method are works of Pagourtzi et al. (2008), Nik Along with obtaining the values of p, d, and q, and the fitted values for the coefficients o for better or appropriate model selection and forecasting, the Box-Jenkins approach is follow some necessary steps. Although the ARIMA method is a classic approach, it is still being applied new research areas like a recent work of Zhao et al. (2019) for resource prediction on Kubernete source cluster management software. One of the merits of ARIMA is that it performs quite satisf non-stationary time series, which can be easily and firmly transformed into stationary. Ho mentionable demerit is that ARIMA tends to fail for highly non-stationary and non-linear time especially with turbulent characteristics and dynamic curvature.
Smoothing methods for better data fitting as well forecasting were developed by the sem of Brown (1956), Holt (2004), and Winters (1960). Other mentionable contributions in this m made by Gardner (1985) and Gardner (2006). Exponentially Weighted Moving Average (E smoothing technique for time series data fitting, is a work developed by Brown (1956) that originated in the 17 th century by Denis Poisson in dealing with his numerical analysis problem weighted averaging and exponential windowing. A general EWMA model is represented by Eq and (4): where , � , and � are respectively smoothing parameter, original sequence terms, and ex decreasing sequence terms, which are found by convex combinations with original terms. By rec Equations (3) and (4) can jointly be rewritten as Equation (5): Equation (5) reveals ( ) = (1 − ) � , which has exponential value decrease proper distant past values. By adding all smoothing weight, the related cumulative distribution functio the form ( ) = 1 − (1 − ) � . For effective or optimal smoothing in the EWMA process, the be is a requirement for least deviation or error of fitted data with original data. The Marquardt pro other conventional advanced search approaches, as well as manual tuning, are employed to obtain Relevantly, the EWMA method was further developed for an extension for double and triple sm Holt (1957) and Winters (1960).
The Theta method was developed, introduced, and described along with background m by Assimakopoulos and Nikolopoulos (2000) and Hyndman and Billah (2003). The scholars sim suggested different approaches for derivation with the same or similar result found. They claim performance of the Theta method was similar to simple exponential smoothing (SES) with a d other research contributions involving the Theta method are works of Pagourtzi et al. (2008), Ni for better or appropriate model selection some necessary steps. Although the ARIM new research areas like a recent work of source cluster management software. One non-stationary time series, which can mentionable demerit is that ARIMA tend especially with turbulent characteristics a Smoothing methods for better da of Brown (1956), Holt (2004), and Win made by Gardner (1985) and Gardner smoothing technique for time series dat originated in the 17 th century by Denis P weighted averaging and exponential win and (4): where , � , and � are respectively sm decreasing sequence terms, which are fou Equations (3) and (4) can jointly be rewr is a requirement for least deviation or e other conventional advanced search appro Relevantly, the EWMA method was furth Holt (1957) and Winters (1960).
The Theta method was develope by Assimakopoulos and Nikolopoulos (2 suggested different approaches for deriva performance of the Theta method was s other research contributions involving th Along with obtaining the values of p, d, and q, and the fitted values for the coefficients of all te for better or appropriate model selection and forecasting, the Box-Jenkins approach is followed thro some necessary steps. Although the ARIMA method is a classic approach, it is still being applied in diffe new research areas like a recent work of Zhao et al. (2019) for resource prediction on Kubernetes, an op source cluster management software. One of the merits of ARIMA is that it performs quite satisfactorily non-stationary time series, which can be easily and firmly transformed into stationary. However mentionable demerit is that ARIMA tends to fail for highly non-stationary and non-linear time series d especially with turbulent characteristics and dynamic curvature.
Smoothing methods for better data fitting as well forecasting were developed by the seminal wo of Brown (1956), Holt (2004), and Winters (1960). Other mentionable contributions in this model w made by Gardner (1985) and Gardner (2006). Exponentially Weighted Moving Average (EWMA smoothing technique for time series data fitting, is a work developed by Brown (1956) that historic originated in the 17 th century by Denis Poisson in dealing with his numerical analysis problem relate weighted averaging and exponential windowing. A general EWMA model is represented by Equations and (4): where , � , and � are respectively smoothing parameter, original sequence terms, and exponenti decreasing sequence terms, which are found by convex combinations with original terms. By recursive Equations (3) and (4) can jointly be rewritten as Equation (5): Equation (5) reveals ( ) = (1 − ) � , which has exponential value decrease property tow distant past values. By adding all smoothing weight, the related cumulative distribution function can b the form ( ) = 1 − (1 − ) � . For effective or optimal smoothing in the EWMA process, the best valu is a requirement for least deviation or error of fitted data with original data. The Marquardt procedure other conventional advanced search approaches, as well as manual tuning, are employed to obtain this va Relevantly, the EWMA method was further developed for an extension for double and triple smoothing Holt (1957) and Winters (1960).
The Theta method was developed, introduced, and described along with its background mathematics by Assimakopoulos and Nikolopoulos (2000) and Hyndman and Billah (2003). The latter scholars simplified it and suggested different approaches for its derivation, with the same or similar results found. They claimed that the performance of the Theta method was similar to simple exponential smoothing (SES) with a drift. Some other research contributions involving the Theta method are the works of Pagourtzi et al. (2008), Nikolopoulos et al. (2011), and Thomakos and Nikolopoulos (2014). In a work by Petropoulos et al. (2019) on inventory performance considering different forecasting methods and approaches that participated in the M3-Competition, they showed the performance of the Theta method. The work of Spiliotis et al. (2019) embroiled valuable conceptual insight from the Theta method for decomposition, which was extended to non-linear trends and modified and improved into a better hybrid model; they presented promising results with the M3-Competition data. Papacharalampous et al. (2018) applied automatic forecasting methods to monthly time series data of temperature as well as precipitation. They compared predictability, where the Theta method held an insignificant position among the other models. Different perspectives of the Theta method, encompassing application and theoretical concepts, were discussed and explained by Nikolopoulos and Thomakos (2019), a work wholly dedicated to the Theta method. Since Theta is one of the winners and a significant part of the forecasting competitions or M-Competitions, a recent brief historical sketch by Hyndman (2020) contained this model.
The Theta method (Assimakopoulos & Nikolopoulos, 2000) is an approach for local curvature modification of a time series, where the second difference of a newly derived series is related to the second difference of the original series by a scale factor of change or modification. The name of the method is adopted from the related Greek letter \theta (Theta) used in the model equation. A time series {Y_1, Y_2, Y_3, ..., Y_n} of original data and the Theta-method-based new time series {Y_{\theta,1}, Y_{\theta,2}, Y_{\theta,3}, ..., Y_{\theta,n}} are related by the following second-order difference equation as in Equation (6):

\nabla^2 Y_{\theta,t} = \theta \, \nabla^2 Y_t, \quad t = 3, 4, ..., n,    (6)

where \nabla^2 Y_t = Y_t - 2 Y_{t-1} + Y_{t-2}. Gradually reducing \theta deflates the local curvatures of the time series towards zero, at which point there is no curvature. Therefore, smaller values of \theta contribute towards a more substantial deflation in the curvature pattern. When the \theta value is zero, it produces a straight line or a linear regression line. If \theta = 1, there is no change in curvature. The authors showed that this approach of curvature modification does not modify or change the mean of the original series.
As per solving the method of second-order difference equations (Kelley & Peterson, 2001), Equation (6), as presented by Hyndman and Billah (2003), has a solution of the form of Equation (7):

Y_{\theta,t} = a_\theta + b_\theta (t - 1) + \theta Y_t.    (7)

Equation (7) is called a Theta-line as per the original literature; for \theta = 0, Y_{0,t} contains a linear trend for the time series. The squared error produces Equation (8):

\sum_{t=1}^{n} [Y_t - Y_{\theta,t}]^2 = \sum_{t=1}^{n} [(1 - \theta) Y_t - a_\theta - b_\theta (t - 1)]^2.    (8)

For least square error, by minimisation of Equation (8), Equations (9) and (10) are derived:

b_\theta = \frac{6 (1 - \theta)}{n^2 - 1} \left( \frac{2}{n} \sum_{t=1}^{n} t Y_t - (n + 1) \bar{Y} \right),    (9)

a_\theta = (1 - \theta) \bar{Y} - b_\theta \, \frac{n - 1}{2}.    (10)

Averaging both sides of Equation (7) produces Equation (11):

\frac{1}{n} \sum_{t=1}^{n} Y_{\theta,t} = a_\theta + b_\theta \, \frac{n - 1}{2} + \theta \bar{Y}.    (11)

Putting the value of a_\theta from Equation (10) into Equation (11) gives \bar{Y}_\theta = \bar{Y}. Therefore, curvature change through the Theta method does not change the mean of the time series dataset.
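The theta-line construction of Equations (7)-(10) and the mean-preservation property can be verified numerically. The sketch below follows the coefficient formulas of Hyndman and Billah (2003); the function and variable names are illustrative, and the drifting series is synthetic.

```python
import numpy as np

# A sketch of the theta-line of Equation (7), with the least-squares
# coefficients of Equations (9)-(10) (per Hyndman and Billah, 2003).
def theta_line(x, theta):
    n = x.size
    t = np.arange(1, n + 1)
    xbar = x.mean()
    b = (6.0 * (1.0 - theta) / (n ** 2 - 1)) * ((2.0 / n) * np.sum(t * x) - (n + 1) * xbar)
    a = (1.0 - theta) * xbar - b * (n - 1) / 2.0
    return a + b * (t - 1) + theta * x  # Equation (7)

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(0.3, 1.0, size=120))  # a synthetic drifting series

y0 = theta_line(x, 0.0)  # theta = 0: the linear regression line
y2 = theta_line(x, 2.0)  # theta = 2: doubled local curvature

# Properties shown by the authors: every theta line keeps the original mean,
# and the theta-0 and theta-2 lines average back to the original series.
print(np.allclose(y0.mean(), x.mean()), np.allclose(0.5 * (y0 + y2), x))
```

The second check reflects that the theta-0 and theta-2 coefficients cancel pairwise, so their lines decompose the series exactly.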
It can be and is shown by the authors that a_0 + a_2 = 0 and b_0 + b_2 = 0, so that (Y_{0,t} + Y_{2,t})/2 = Y_t. Considering \theta = 0 and \theta = 2 in this method as per the authors, the h-step forecast regarding a time series with n data points is \hat{Y}_{n+h} = \frac{1}{2} [\hat{Y}(0, h) + \hat{Y}(2, h)], where \hat{Y}(0, h) is found through linear extrapolation and \hat{Y}(2, h) through simple exponential smoothing. Hyndman and Billah (2003) found the equivalent or same result as Assimakopoulos and Nikolopoulos (2000). One expected merit on behalf of the Theta method is that it can capture local curvature occurring due to a new vibration of the underlying time series. A possible demerit of this approach is that it may not embroil global average curvature, which in many cases contributes towards the measuring trend.
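The forecast combination described above can be sketched as follows. This is a minimal illustration, not the study's implementation: the SES smoothing parameter alpha is a fixed illustrative choice rather than an optimised one, and the input series is invented.

```python
import numpy as np

# A minimal sketch of the classical Theta forecast: average a linear
# extrapolation of the theta = 0 line with a simple-exponential-smoothing (SES)
# forecast of the theta = 2 line.
def theta_forecast(x, h, alpha=0.3):
    n = x.size
    t = np.arange(1, n + 1)
    xbar = x.mean()

    # theta = 0 line: Equations (9)-(10) reduce to linear regression on time
    b0 = (6.0 / (n ** 2 - 1)) * ((2.0 / n) * np.sum(t * x) - (n + 1) * xbar)
    a0 = xbar - b0 * (n - 1) / 2.0
    lin = a0 + b0 * (n + np.arange(1, h + 1) - 1)  # extrapolate the trend line

    # theta = 2 line via Y(0) + Y(2) = 2Y; the SES forecast is flat at the last level
    y2 = 2.0 * x - (a0 + b0 * (t - 1))
    level = y2[0]
    for v in y2[1:]:
        level = alpha * v + (1.0 - alpha) * level
    ses = np.full(h, level)

    return 0.5 * (lin + ses)

x = np.array([3.0, 4.0, 5.5, 5.0, 6.5, 7.0, 8.2, 8.0])
print(theta_forecast(x, 3).round(2))
```

In practice, alpha (and sometimes an initial level) would be fitted by minimising the in-sample error of the SES component.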
The seminal work of Huang et al. (1998) introduced the widely applicable Empirical Mode Decomposition (EMD) method as a contribution to signal processing, which was later applied in diversified fields of research including economics, finance, meteorology, demography, etc. For important indication and inspiration of EMD in financial time series, the work of Huang et al. (2003) occupied an essential place for general guidelines. For the conceptual understanding and explanation of EMD, Rilling and Flandrin (2008) contributed along with insightful illustrations. IMFs, a vital part of EMD, or of the Hilbert-Huang transform in the extended case, were emphasised with a focus on the concern of the adaptive approach for data analysis in the work of Wang et al. (2010). EMD, an adaptive decomposition, has a close connection with other theoretical methods, namely Fourier transforms and Hilbert transforms. The EMD process divides a signal or sequence of a dataset into some sub-signals or sub-sequences of the original data, and these decomposed components are known as Intrinsic Mode Functions (IMF), whereas the last one is called the residue. The number of these components is at most equal to or less than log_2(N), N being the total quantity of data.

IMFs are produced through an algorithmic procedure named the sifting process, which follows the basic concept of Hilbert transforms. The decomposed components are of the same size as the original and virtually form an orthogonal family of sub-signal datasets. In the EMD sifting process, local mean modal values or empirical modes are found by averaging the upper envelope and the lower envelope, which are cubic splines fitted above and below the original signal. Obtaining mean envelopes and subtracting them from the immediate remainder dataset is continued in the sifting process until the process ends by satisfying any of the stopping criteria, namely the standard deviation (SD) criterion, tracking of energy difference, the threshold method, and the S-number criterion. Consequently, IMFs are extracted sequentially by following the algorithmic steps.
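One sifting step can be sketched as below. This is a deliberately simplified illustration: it uses linear interpolation in place of the cubic splines used in practice, and anchors the envelopes at the signal endpoints, a common simplification for handling boundaries.

```python
import numpy as np

# A simplified sketch of one sifting step: locate interior extrema, interpolate
# upper and lower envelopes (linear interpolation here, standing in for cubic
# splines), and subtract their mean from the signal.
def sift_once(x):
    t = np.arange(x.size)
    maxima = [i for i in range(1, x.size - 1) if x[i] > x[i - 1] and x[i] > x[i + 1]]
    minima = [i for i in range(1, x.size - 1) if x[i] < x[i - 1] and x[i] < x[i + 1]]
    # include the endpoints so the envelopes cover the whole signal
    up = np.interp(t, [0] + maxima + [x.size - 1], x[[0] + maxima + [x.size - 1]])
    lo = np.interp(t, [0] + minima + [x.size - 1], x[[0] + minima + [x.size - 1]])
    mean_env = 0.5 * (up + lo)  # the local mean of the two envelopes
    return x - mean_env, mean_env

t = np.linspace(0.0, 1.0, 400)
signal = np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)
candidate, mean_env = sift_once(signal)

# After one step the fast oscillation dominates the candidate IMF, while the
# removed mean envelope tracks the slow component.
print(candidate.shape == signal.shape)
```

A full EMD implementation would repeat this step until a stopping criterion (such as the SD criterion mentioned above) is met, then subtract the extracted IMF and continue on the remainder.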
For any signal, let e_u(t) and e_l(t) be the upper and lower cubic spline envelopes; then their mean envelope is obtained from Equation (12):

m_1(t) = \frac{e_u(t) + e_l(t)}{2}.    (12)

To present the sifting process, let X(t) be a signal and m_1 the mean envelope. The following are the steps of the sifting process:

It can be and is shown by the authors that
where � (0, ℎ) is found through linear extrapolat and � (2, ℎ) is through simple exponential smoothing. Hyndman and Billah (2003) found the equivalen same result as Assimakopoulos and Nikolopoulos (2000). One expected merit on behalf of the Theta meth is that it can capture local curvature occurring due to a new vibration of underlying time series. A possi demerit of this approach is that it may not embroil global average curvature, which in many cases contribu towards the measuring trend. The seminal work of Huang et al. (1998) introduced the widely applicable Empirical Mo Decomposition (EMD) method as a contribution to signal processing, which later on was applied diversified fields of research including economics, finance, meteorology, demography, etc. For import indication and inspiration of EMD in financial time series, the work of Huang et al. (2003) occupied essential place for general guidelines. For the conceptual understanding and explanation of EMD, Rill and Flandrin (2008) contributed along with insightful illustrations. IMFs, which are a vital part of EMD Hilbert-Huang transform in the extended case that was emphasized, focused on the concern of the adapt approach for data analysis in the work of Wang et al. (2010). EMD, an adaptive decomposition, has a cl connection with other theoretical methods, namely Fourier transforms and Hilbert transforms. The EM process divides a signal or sequence of dataset into some sub-signals or sub-sequences of the original d and these decomposed components are known as Intrinsic Mode Functions (IMF), whereas the last on called the residue. The number of these components are at most equal to or less than � ( ), being total quantity of data.
IMFs are produced through the algorithmic process named sifting process, which follows the ba concept of Hilbert transforms. Therefore, the decomposed or components are of the same size and form orthogonal family of sub-signal datasets virtually. In the EMD sifting process, local mean modal values empirical modes are found through averaging the upper envelope and the lower envelope, which are cu splines fitted above and below the original signal. Obtaining mean envelops and subtracting them from immediate remainder dataset are continued in the sifting process until the process ends by satisfying any the stopping criteria, namely standard deviation (SD), Tracking of Energy Difference, Threshold Meth and S-Number Criterion. Consequently, IMFs are extracted sequentially by following algorithmic steps For any signal, let � and � be the upper and lower cubic spline envelops, then their mean envel is obtained from Equation (12): Putting the value of from Equation (10) into Equation (11) where � (0, ℎ) is and � (2, ℎ) is through simple exponential smoothing. Hyndman and B same result as Assimakopoulos and Nikolopoulos (2000). One expected is that it can capture local curvature occurring due to a new vibration o demerit of this approach is that it may not embroil global average curvatu towards the measuring trend.
The seminal work of Huang et al. (1998) introduced the w Decomposition (EMD) method as a contribution to signal processin diversified fields of research including economics, finance, meteorolog indication and inspiration of EMD in financial time series, the work o essential place for general guidelines. For the conceptual understandin and Flandrin (2008) contributed along with insightful illustrations. IMF Hilbert-Huang transform in the extended case that was emphasized, foc approach for data analysis in the work of Wang et al. (2010). EMD, an connection with other theoretical methods, namely Fourier transforms process divides a signal or sequence of dataset into some sub-signals or and these decomposed components are known as Intrinsic Mode Func called the residue. The number of these components are at most equal t total quantity of data.
IMFs are produced through the algorithmic process named sifti concept of Hilbert transforms. Therefore, the decomposed or componen orthogonal family of sub-signal datasets virtually. In the EMD sifting p empirical modes are found through averaging the upper envelope and t splines fitted above and below the original signal. Obtaining mean enve immediate remainder dataset are continued in the sifting process until th the stopping criteria, namely standard deviation (SD), Tracking of Ene and S-Number Criterion. Consequently, IMFs are extracted sequentiall Putting the value of from Equation (10) into Equation (11) where � (0, ℎ) is and � (2, ℎ) is through simple exponential smoothing. Hyndman and B same result as Assimakopoulos and Nikolopoulos (2000). One expected is that it can capture local curvature occurring due to a new vibration o demerit of this approach is that it may not embroil global average curvatu towards the measuring trend.
The seminal work of Huang et al. (1998) introduced the w Decomposition (EMD) method as a contribution to signal processin diversified fields of research including economics, finance, meteorolog indication and inspiration of EMD in financial time series, the work o essential place for general guidelines. For the conceptual understandin and Flandrin (2008) contributed along with insightful illustrations. IMF Hilbert-Huang transform in the extended case that was emphasized, foc approach for data analysis in the work of Wang et al. (2010). EMD, an connection with other theoretical methods, namely Fourier transforms process divides a signal or sequence of dataset into some sub-signals or and these decomposed components are known as Intrinsic Mode Funct called the residue. The number of these components are at most equal t total quantity of data.
IMFs are produced through the algorithmic process named sifti concept of Hilbert transforms. Therefore, the decomposed or componen orthogonal family of sub-signal datasets virtually. In the EMD sifting p empirical modes are found through averaging the upper envelope and t splines fitted above and below the original signal. Obtaining mean enve immediate remainder dataset are continued in the sifting process until th the stopping criteria, namely standard deviation (SD), Tracking of Ene and S-Number Criterion. Consequently, IMFs are extracted sequentially (11) ) into Equation (11), �( ) = ̅ . Therefore, curvature change the mean of time series dataset.

ors that
this method as per the authors, ℎ-step forecast regarding a time � (2, ℎ)], where � (0, ℎ) is found through linear extrapolation smoothing. Hyndman and Billah (2003) found the equivalent or poulos (2000). One expected merit on behalf of the Theta method ing due to a new vibration of underlying time series. A possible mbroil global average curvature, which in many cases contributes al. (1998) introduced the widely applicable Empirical Mode tribution to signal processing, which later on was applied in nomics, finance, meteorology, demography, etc. For important ncial time series, the work of Huang et al. (2003) occupied an the conceptual understanding and explanation of EMD, Rilling insightful illustrations. IMFs, which are a vital part of EMD or ase that was emphasized, focused on the concern of the adaptive ang et al. (2010). EMD, an adaptive decomposition, has a close , namely Fourier transforms and Hilbert transforms. The EMD aset into some sub-signals or sub-sequences of the original data, own as Intrinsic Mode Functions (IMF), whereas the last one is mponents are at most equal to or less than � ( ), being the orithmic process named sifting process, which follows the basic he decomposed or components are of the same size and form an irtually. In the EMD sifting process, local mean modal values or ing the upper envelope and the lower envelope, which are cubic signal. Obtaining mean envelops and subtracting them from the in the sifting process until the process ends by satisfying any of iation (SD), Tracking of Energy Difference, Threshold Method, Putting the value of from Equation through the Theta method does not cha It can be and is shown by the and � (2, ℎ) is through simple exponen same result as Assimakopoulos and Nik is that it can capture local curvature oc demerit of this approach is that it may no towards the measuring trend. 
The seminal work of Huang Decomposition (EMD) method as a diversified fields of research including indication and inspiration of EMD in essential place for general guidelines. F and Flandrin (2008) contributed along Hilbert-Huang transform in the extende approach for data analysis in the work o connection with other theoretical meth process divides a signal or sequence of and these decomposed components are called the residue. The number of these total quantity of data.
IMFs are produced through the concept of Hilbert transforms. Therefor orthogonal family of sub-signal dataset empirical modes are found through ave splines fitted above and below the origi immediate remainder dataset are contin the stopping criteria, namely standard and S-Number Criterion. Consequently the value of from Equation (10) into Equation (11), �( ) = ̅ . Therefore, curvature change the Theta method does not change the mean of time series dataset.

It can be and is shown by the authors that
where � (0, ℎ) is found through linear extrapolation 2, ℎ) is through simple exponential smoothing. Hyndman and Billah (2003) found the equivalent or sult as Assimakopoulos and Nikolopoulos (2000). One expected merit on behalf of the Theta method t can capture local curvature occurring due to a new vibration of underlying time series. A possible t of this approach is that it may not embroil global average curvature, which in many cases contributes s the measuring trend. The seminal work of Huang et al. (1998) introduced the widely applicable Empirical Mode position (EMD) method as a contribution to signal processing, which later on was applied in ied fields of research including economics, finance, meteorology, demography, etc. For important on and inspiration of EMD in financial time series, the work of Huang et al. (2003) occupied an l place for general guidelines. For the conceptual understanding and explanation of EMD, Rilling ndrin (2008) contributed along with insightful illustrations. IMFs, which are a vital part of EMD or -Huang transform in the extended case that was emphasized, focused on the concern of the adaptive h for data analysis in the work of Wang et al. (2010). EMD, an adaptive decomposition, has a close tion with other theoretical methods, namely Fourier transforms and Hilbert transforms. The EMD divides a signal or sequence of dataset into some sub-signals or sub-sequences of the original data, se decomposed components are known as Intrinsic Mode Functions (IMF), whereas the last one is he residue. The number of these components are at most equal to or less than � ( ), being the antity of data. IMFs are produced through the algorithmic process named sifting process, which follows the basic t of Hilbert transforms. Therefore, the decomposed or components are of the same size and form an nal family of sub-signal datasets virtually. 
In the EMD sifting process, local mean modal values or al modes are found through averaging the upper envelope and the lower envelope, which are cubic fitted above and below the original signal. Obtaining mean envelops and subtracting them from the iate remainder dataset are continued in the sifting process until the process ends by satisfying any of ping criteria, namely standard deviation (SD), Tracking of Energy Difference, Threshold Method, umber Criterion. Consequently, IMFs are extracted sequentially by following algorithmic steps. For any signal, let � and � be the upper and lower cubic spline envelops, then their mean envelope (11) 0) into Equation (11), �( ) = ̅ . Therefore, curvature change e the mean of time series dataset.
this method as per the authors, ℎ-step forecast regarding a time � (2, ℎ)], where � (0, ℎ) is found through linear extrapolation l smoothing. Hyndman and Billah (2003) found the equivalent or poulos (2000). One expected merit on behalf of the Theta method ring due to a new vibration of underlying time series. A possible mbroil global average curvature, which in many cases contributes al. (1998) introduced the widely applicable Empirical Mode tribution to signal processing, which later on was applied in onomics, finance, meteorology, demography, etc. For important ancial time series, the work of Huang et al. (2003) occupied an the conceptual understanding and explanation of EMD, Rilling h insightful illustrations. IMFs, which are a vital part of EMD or ase that was emphasized, focused on the concern of the adaptive ang et al. (2010). EMD, an adaptive decomposition, has a close s, namely Fourier transforms and Hilbert transforms. The EMD taset into some sub-signals or sub-sequences of the original data, own as Intrinsic Mode Functions (IMF), whereas the last one is mponents are at most equal to or less than � ( ), being the gorithmic process named sifting process, which follows the basic the decomposed or components are of the same size and form an irtually. In the EMD sifting process, local mean modal values or ing the upper envelope and the lower envelope, which are cubic l signal. Obtaining mean envelops and subtracting them from the d in the sifting process until the process ends by satisfying any of iation (SD), Tracking of Energy Difference, Threshold Method, Fs are extracted sequentially by following algorithmic steps. upper and lower cubic spline envelops, then their mean envelope Putting the value of from Equation (10) into Equation (11) where � (0 and � (2, ℎ) is through simple exponential smoothing. Hyndman same result as Assimakopoulos and Nikolopoulos (2000). 
One ex is that it can capture local curvature occurring due to a new vibra demerit of this approach is that it may not embroil global average c towards the measuring trend.
The seminal work of Huang et al. (1998) introduced Decomposition (EMD) method as a contribution to signal pr diversified fields of research including economics, finance, met indication and inspiration of EMD in financial time series, the essential place for general guidelines. For the conceptual unders and Flandrin (2008) contributed along with insightful illustration Hilbert-Huang transform in the extended case that was emphasiz approach for data analysis in the work of Wang et al. (2010). EM connection with other theoretical methods, namely Fourier tran process divides a signal or sequence of dataset into some sub-sig and these decomposed components are known as Intrinsic Mode called the residue. The number of these components are at most total quantity of data.
IMFs are produced through the algorithmic process nam concept of Hilbert transforms. Therefore, the decomposed or com orthogonal family of sub-signal datasets virtually. In the EMD si empirical modes are found through averaging the upper envelop splines fitted above and below the original signal. Obtaining mea immediate remainder dataset are continued in the sifting process the stopping criteria, namely standard deviation (SD), Tracking and S-Number Criterion. Consequently, IMFs are extracted sequ For any signal, let � and � be the upper and lower cubic is obtained from Equation (12): (11) ̅ . Therefore, curvature change t.
, ℎ)] = � , since � + ��� = 0 , ℎ-step forecast regarding a time ound through linear extrapolation lah (2003) found the equivalent or erit on behalf of the Theta method underlying time series. A possible e, which in many cases contributes dely applicable Empirical Mode , which later on was applied in , demography, etc. For important Huang et al. (2003) occupied an and explanation of EMD, Rilling , which are a vital part of EMD or sed on the concern of the adaptive aptive decomposition, has a close nd Hilbert transforms. The EMD ub-sequences of the original data, ns (IMF), whereas the last one is or less than � ( ), being the g process, which follows the basic s are of the same size and form an ocess, local mean modal values or e lower envelope, which are cubic ops and subtracting them from the process ends by satisfying any of y Difference, Threshold Method, n (10) into Equation (11), �( ) = ̅ . Therefore, curvature change ange the mean of time series dataset.
authors that 0 in this method as per the authors, ℎ-step forecast regarding a time , ℎ) + � (2, ℎ)], where � (0, ℎ) is found through linear extrapolation ntial smoothing. Hyndman and Billah (2003) found the equivalent or kolopoulos (2000). One expected merit on behalf of the Theta method ccurring due to a new vibration of underlying time series. A possible ot embroil global average curvature, which in many cases contributes et al. (1998) introduced the widely applicable Empirical Mode contribution to signal processing, which later on was applied in g economics, finance, meteorology, demography, etc. For important financial time series, the work of Huang et al. (2003) occupied an For the conceptual understanding and explanation of EMD, Rilling with insightful illustrations. IMFs, which are a vital part of EMD or ed case that was emphasized, focused on the concern of the adaptive of Wang et al. (2010). EMD, an adaptive decomposition, has a close hods, namely Fourier transforms and Hilbert transforms. The EMD f dataset into some sub-signals or sub-sequences of the original data, e known as Intrinsic Mode Functions (IMF), whereas the last one is e components are at most equal to or less than � ( ), being the e algorithmic process named sifting process, which follows the basic re, the decomposed or components are of the same size and form an ts virtually. In the EMD sifting process, local mean modal values or eraging the upper envelope and the lower envelope, which are cubic IMFs are produced through the algorithmic process named sifting process, which follows the basic concept of Hilbert transforms. Therefore, the decomposed or components are of the same size and form an orthogonal family of sub-signal datasets virtually. In the EMD sifting process, local mean modal values or empirical modes are found through averaging the upper envelope and the lower envelope, which are cubic splines fitted above and below the original signal. 
Obtaining mean envelops and subtracting them from the immediate remainder dataset are continued in the sifting process until the process ends by satisfying any of the stopping criteria, namely standard deviation (SD), Tracking of Energy Difference, Threshold Method, and S-Number Criterion. Consequently, IMFs are extracted sequentially by following algorithmic steps.
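The mean envelope of Equation (12) can be sketched numerically. The snippet below is an illustrative sketch, not the authors' implementation: it fits cubic splines through the local maxima and minima of a sample signal using SciPy, and the simplified endpoint handling (pinning both splines to the signal's first and last samples) is an assumption of this sketch.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def mean_envelope(x):
    """Mean of the upper and lower cubic-spline envelopes, Equation (12)."""
    t = np.arange(len(x))
    # Interior local maxima/minima; endpoints are appended so the splines span the signal.
    maxima = np.concatenate(([0], argrelextrema(x, np.greater)[0], [len(x) - 1]))
    minima = np.concatenate(([0], argrelextrema(x, np.less)[0], [len(x) - 1]))
    e_upper = CubicSpline(maxima, x[maxima])(t)  # e_U(t)
    e_lower = CubicSpline(minima, x[minima])(t)  # e_L(t)
    return (e_upper + e_lower) / 2.0             # m_1 = (e_U + e_L) / 2

t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * t          # oscillation riding on a trend
m1 = mean_envelope(x)
print(m1.shape)  # (200,)
```

For this signal the mean envelope roughly tracks the slow trend, which is exactly what the sifting process subtracts away at each step.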
To present the sifting process, let X(t) be a signal and m_1 its mean envelope. The following steps are implemented in the process (using Equations (13)-(16)):

(a) Having obtained the mean envelope m_1, the process calculates the current remainder h_1 by subtracting m_1 from X(t):

h_1 = X(t) - m_1    (13)

(b) In the second step, the current remainder h_1 is treated as the data and, by applying the same procedure of upper and lower cubic splines, a new mean envelope m_11 is found from h_1:

h_11 = h_1 - m_11    (14)

(c) This step is repeated, say, k times, until h_1k satisfies a stopping criterion (Equation (17)):

h_1k = h_1(k-1) - m_1k    (15)

(d) When h_1k satisfies the stopping criterion, it is regarded as the first IMF component of the original data, denoted by c_1 = h_1k. Then c_1 is separated from the original data:

X(t) - c_1 = r_1    (16)

This procedure is performed repeatedly on the successive remainders to extract all possible IMFs, say n of them, and the final residue r_n:

r_1 - c_2 = r_2, ..., r_(n-1) - c_n = r_n
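Steps (a)-(d) can be sketched as a loop extracting the first IMF. The code below is an illustrative implementation under stated assumptions, not the authors' code: spline endpoints are pinned to the signal's endpoints, a small epsilon guards division by zero in the SD test, and the iteration cap and SD threshold of 0.3 are choices made for this sketch.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def _mean_envelope(h):
    """Mean of upper and lower cubic-spline envelopes (Equation (12))."""
    t = np.arange(len(h))
    maxima = np.concatenate(([0], argrelextrema(h, np.greater)[0], [len(h) - 1]))
    minima = np.concatenate(([0], argrelextrema(h, np.less)[0], [len(h) - 1]))
    return (CubicSpline(maxima, h[maxima])(t) + CubicSpline(minima, h[minima])(t)) / 2.0

def sift_first_imf(x, sd_threshold=0.3, max_iter=50):
    """Extract c_1 via Equations (13)-(15); return (c_1, r_1) with r_1 = X(t) - c_1 (Eq. (16))."""
    h_prev = x - _mean_envelope(x)                    # Eq. (13): h_1 = X(t) - m_1
    for _ in range(max_iter):
        h_curr = h_prev - _mean_envelope(h_prev)      # Eqs. (14)-(15): subtract new mean envelope
        sd = np.sum((h_prev - h_curr) ** 2 / (h_prev ** 2 + 1e-12))  # SD test, Eq. (17)
        h_prev = h_curr
        if sd < sd_threshold:
            break
    c1 = h_prev                                       # first IMF component
    return c1, x - c1                                 # Eq. (16): remainder r_1

t = np.linspace(0, 1, 400)
x = np.sin(2 * np.pi * 20 * t) + np.sin(2 * np.pi * 2 * t)  # fast tone + slow tone
c1, r1 = sift_first_imf(x)
print(np.allclose(c1 + r1, x))  # True: the decomposition is exactly additive
```

Running the same routine on r_1, r_2, and so on yields the remaining IMFs and, finally, the residue, mirroring the recursion r_1 - c_2 = r_2, ..., r_(n-1) - c_n = r_n.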
The stopping criterion based on SD is formulated as follows:

SD = sum over t of |h_1(k-1)(t) - h_1k(t)|^2 / h_1(k-1)(t)^2    (17)

The sifting process is stopped if SD has a value less than a pre-set minimum value.
, − 1 − = g criterion based on SD is formulated as follows: ta, and by applying a similar procedure is found from ℎ1: ℎ1 , which is satisfied with stopping (15) e first IMF component of the original 1 from the original data: ( ) − 1 = ible or say, n IMFs and : ainder ℎ1 is treated as the data, and by applying a similar procedure nes, new mean envelope 11 is found from ℎ1: repeatedly, say, k times, until ℎ1 , which is satisfied with stopping ing criterion, it is regarded the first IMF component of the original y 1 = ℎ1 . Then, separate 1 from the original data: ( ) − 1 = repeatedly to extract all possible or say, n IMFs and : formulated as follows: The stopping criterion based on SD is formulated using Equation 17: The sifting process is stopped if has a value less than a pre-set minimum value.
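The sifting loop above can be sketched in Python. This is a minimal illustration under stated assumptions, not a production EMD: extrema detection, envelope construction, and boundary handling are simplified, the SD of Equation (17) is computed in sum form to avoid division by near-zero points, and the tolerance `sd_tol` is an assumed value.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(h):
    """One sifting pass: subtract the mean of the upper and lower
    cubic-spline envelopes, as in Equations (14)-(15)."""
    t = np.arange(len(h))
    # interior local maxima and minima
    maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
    minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
    if len(maxima) < 2 or len(minima) < 2:
        return None  # too few extrema: h is a residue, not an IMF candidate
    upper = CubicSpline(maxima, h[maxima])(t)
    lower = CubicSpline(minima, h[minima])(t)
    m = (upper + lower) / 2.0  # mean envelope m_{1k}
    return h - m               # h_{1k} = h_{1(k-1)} - m_{1k}

def extract_imf(x, sd_tol=0.3, max_sift=50):
    """Sift until the SD stopping criterion (Equation (17), in sum
    form to dodge pointwise near-zero division) falls below sd_tol."""
    h_prev = x.astype(float)
    for _ in range(max_sift):
        h = sift_once(h_prev)
        if h is None:
            return h_prev
        sd = np.sum((h_prev - h) ** 2) / np.sum(h_prev ** 2 + 1e-12)
        if sd < sd_tol:
            return h
        h_prev = h
    return h_prev

# Usage: extract the first IMF of a fast cycle riding on a slow trend.
t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 25 * t) + 0.5 * t  # fast oscillation plus trend
c1 = extract_imf(x)
r1 = x - c1                                # residue after removing IMF 1
```

The first IMF captures the fast oscillation, leaving the slow trend in the residue, which is what makes the subsequent per-component forecasting tractable.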
Time series share similarities with signals in signal processing, encompassing patterns and noise along with non-stationarity and non-linearity in some cases. Huang et al. (1998) applied and explained the use of EMD in financial time series. Later on, much of the literature followed this approach and focused on applying EMD to analyse, and hence forecast, time series data in many research areas. Although EMD is a powerful decomposition approach, it has some limitations, such as the end effect and mode mixing, for which it may not be applicable to every time series; however, improved EMD variants exist to overcome these limitations.
Many researchers have recommended hybridising EMD with different combinations of models. One such work is that of Wang et al. (2014), who used an EMD-ARIMA combined approach for predicting traffic speed over short-term forecast horizons. Abadan and Shabri (2014) then aimed at rice price forecasting with EMD-ARIMA hybridisation. Later, Nava et al. (2018) applied EMD-Support Vector Regression (SVR) to forecast the Standard and Poor's 500 Index financial time series. Meanwhile, Awajan et al. (2017) worked on EMD-MA to forecast the daily stock market index. Next, Nai et al. (2017) focused on an EMD-SARIMA-based model for forecasting air traffic. For short-term speed prediction of vehicle-type-specific traffic, Wang et al. (2016) applied a hybrid EMD-ARIMA framework. Recently, Zhong et al. (2020) worked on EMD-ARIMA to predict Service Invention Patents in Agricultural Machinery.
In the EMD-ARIMA hybrid method, all of the extracted IMFs along with the residue are forecasted using the ARIMA approach, and these component forecasts are added to produce the forecast results for the original time series. Both EMD and ARIMA are efficient in their own right. Therefore, EMD-ARIMA (presented by the procedural diagram in Figure 1 and Algorithm 1) can sometimes be a suitable forecasting hybrid method to gain better accuracy. Nevertheless, when they do not accord well with the intrinsic properties of the underlying time series, especially when the IMFs are only weakly stationary, their hybridisation can be less satisfactory.

Algorithm 1:
Step 1 : Begin
Step 2 : Read y
Step 3 : Split y into y.train and y.test
Step 4 : Set h ← |y.test|
Step 5 : Compute forecast.other ← other-method(y.train, h)
Step 6 : Define Error(·, ·)
Step 7 : Compute accuracy.other ← Error(y.test, forecast.other)
Step 8 : Implement EMD(y.train)
Step 9 : Store IMFs.y.train and residue.y.train
Step 10 : For i ← 1 to |IMFs.y.train| do
Step 11 : …
Step 16 : Print accuracy.EMD-ARIMA, accuracy.other
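The decompose-forecast-recombine structure of Algorithm 1 can be sketched as follows. This is only an illustrative skeleton: `decompose` is a moving-average stand-in for EMD and `naive_drift_forecast` a stand-in for the per-component ARIMA fits, both chosen to keep the sketch self-contained; the function names and the `window` parameter are hypothetical, not from the paper.

```python
import numpy as np

def decompose(y, window=30):
    """Stand-in for EMD: split the series into a smooth component and
    its remainder. A real run would use EMD's IMFs plus the residue."""
    kernel = np.ones(window) / window
    pad = np.r_[np.full(window - 1, y[0]), y]      # pad so output matches len(y)
    smooth = np.convolve(pad, kernel, mode="valid")
    return [y - smooth, smooth]                    # [fast part, slow part]

def naive_drift_forecast(series, h):
    """Stand-in for a per-component ARIMA fit: extrapolate the average
    step (a random-walk-with-drift forecast)."""
    drift = (series[-1] - series[0]) / (len(series) - 1)
    return series[-1] + drift * np.arange(1, h + 1)

def hybrid_forecast(y_train, h):
    """Algorithm 1's core loop: forecast every component separately,
    then add the component forecasts to obtain the final forecast."""
    components = decompose(y_train)
    return sum(naive_drift_forecast(c, h) for c in components)

# Usage on a synthetic trend-plus-cycle series with a small noise term.
rng = np.random.default_rng(0)
t = np.arange(300)
y = 0.1 * t + np.sin(2 * np.pi * t / 20) + 0.1 * rng.standard_normal(300)
y_train, y_test = y[:290], y[290:]
fc = hybrid_forecast(y_train, h=len(y_test))
mape = np.mean(np.abs((y_test - fc) / y_test))
```

The design point is that each component is simpler than the raw series, so even a weak per-component forecaster can outperform a single model fitted to the undecomposed data.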

THE PROPOSED METHOD
In this work, the proposed hybrid method was implemented alongside other conventional methods, including EMD-ARIMA (Abadan & Shabri, 2014; Wang et al., 2016; Zhong et al., 2020). The hybrid model was performed on daily stock closing price data of Royal Dutch Shell (RDSB), AstraZeneca (AZN), Unilever (ULVR), Reckitt Benckiser Group (RB), and Smith & Nephew (SN), all of which are in the FTSE 100 Index. This study utilised 1,111 data points for each of the companies (from January 01, 2015, to April 05, 2019), of which the first 1,100 were separated as training data and the remainder as test data. The 11 test data points were also divided into six different forecast horizons of 1, 3, 5, 7, 9, and 11 days. In order to obtain a primary picture of the datasets, basic descriptive statistics are presented in Table 1, encompassing the mean, median, minimum, maximum, coefficient of variation (COV), skewness, and kurtosis of the stock price training data for all five companies. Since visualisation aids the quick acquisition of data patterns, the time series are also presented in Figure 2.


Proposed EMD-Theta Method
EMD is very useful in dissecting time series data into nearly orthogonal subseries of different characteristic frequency densities, where high-frequency subseries are generally stationary and low-frequency subseries tend to be non-stationary. Therefore, EMD gives sequential decomposed components. On the other hand, the Theta method combines the averaging of a linear trend and simple exponential smoothing through Theta lines that modify the curvature, but not the mean, of the time series data, which can be very useful in some cases. Therefore, hybridisation between EMD and Theta methods, briefly the EMD-Theta model (presented in Figure 3 with procedural diagram and Algorithm 2), is a potentially useful approach in time series forecasting. After splitting the datasets, this study applied EMD on the training sets to extract the IMFs along with the residue, and then applied the Theta method on all IMFs as well as the residue for fitting and forecasting to a forecast horizon h. Upon completion of the forecast on all component subseries, the final forecast is found by adding these component forecasts. Finally, error or accuracy measures are used to assess the performances, which are also compared with forecast results produced by the ARIMA, EWMA, Theta, and EMD-ARIMA methods. Here, the present study worked on six forecast horizons of h = 1, 3, 5, 7, 9, and 11.

Figure 3. Procedural diagram of EMD-Theta method.
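A minimal sketch of the classical Theta forecaster applied to each component may help, assuming the standard two-line variant (theta = 0 and theta = 2) with a fixed smoothing parameter rather than an optimised one; `ses_forecast` and `theta_forecast` are illustrative names, not from the paper.

```python
import numpy as np

def ses_forecast(y, h, alpha=0.5):
    """Simple exponential smoothing; the flat SES forecast repeats the
    final smoothed level. alpha is fixed here; a real fit optimises it."""
    level = y[0]
    for v in y[1:]:
        level = alpha * v + (1 - alpha) * level
    return np.full(h, level)

def theta_forecast(y, h, alpha=0.5):
    """Classical Theta: average the extrapolated theta=0 line (linear
    trend) with the SES forecast of the theta=2 line (2*y - trend)."""
    t = np.arange(len(y))
    b, a = np.polyfit(t, y, 1)              # theta=0 line: a + b*t
    future = np.arange(len(y), len(y) + h)
    theta0_fc = a + b * future              # pure trend extrapolation
    theta2 = 2.0 * y - (a + b * t)          # line with doubled curvature
    theta2_fc = ses_forecast(theta2, h, alpha)
    return 0.5 * (theta0_fc + theta2_fc)

# Usage: forecast 5 steps ahead of a linear series y_t = 3 + 0.5 t.
y = 3.0 + 0.5 * np.arange(50)
fc = theta_forecast(y, h=5)
```

Since the theta = 0 line carries the long-run trend and the theta = 2 line amplifies short-run curvature, their averaged forecasts balance extrapolation against smoothing, which suits the slowly varying IMFs and residue produced by EMD.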
Intrinsic characteristics and external factors are always responsible for the shape of the data. Provided no uncertainty or external factors are involved, a method can produce a relatively better forecast only when it can capture all the intrinsic features of the dataset and extrapolate them to future horizons accordingly. The ULVR dataset (with MAPE 0.004, 0.003, 0.004, 0.004, 0.005, and 0.008, respectively, for h = 1, 3, 5, 7, 9, and 11) was best forecastable, and RDSB (with MAPE 0.027, 0.03, 0.031, 0.027, 0.023, and 0.02, respectively, for h = 1, 3, 5, 7, 9, and 11) was least forecastable by the proposed method as well as all other methods, except the EMD-ARIMA method at h = 1. EMD-ARIMA performed best of all methods at forecast horizon h = 1, with the least MAPE for only two companies, i.e. AZN (0.005) and RB (0.008). At h = 1, EMD-ARIMA also forecasted better than the other methods for ULVR (0.015), but not better than EMD-Theta, whose MAPE for ULVR was 0.013.
As per the proposed method, for forecast horizon h = 1, the forecastability of the five datasets based on MAPE could be ordered from best to worst (i.e. from smallest to largest error values) as ULVR < RB < SN < AZN < RDSB, where the first (ULVR) was the best case and the last (RDSB) the worst case among them. As per the MAPE values, the forecastability orders for the other forecast horizons h = 3, 5, 7, 9, and 11 were ULVR < SN < AZN < RB < RDSB, ULVR < SN < AZN < RB < RDSB, ULVR < SN < RB < RDSB < AZN, ULVR < SN < RB < RDSB < AZN, and ULVR < SN < RB < RDSB < AZN, respectively. The overall result tends to indicate that EMD-ARIMA can be the right choice only for the single-point immediate forecast horizon h = 1. Beyond the single-point forecast, however, EMD-Theta is best of all the models considering the relative accuracy measures.
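The MAPE values quoted above follow the usual definition, which can be sketched as:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, the measure used to rank
    forecastability here (smaller is better)."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean(np.abs((actual - forecast) / actual))

# Usage: a forecast off by 1% at every point gives MAPE of about 0.01.
actual = np.array([100.0, 200.0, 400.0])
val = mape(actual, actual * 1.01)
```

Because MAPE is scale-free, it allows the five companies' prices, which sit on different levels, to be compared directly.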

CONCLUSION
With all the results and discussion in this study, it is evident that the proposed EMD-Theta hybrid method performed better across all five non-linear and non-stationary time series datasets and six different forecast horizons, based on five types of error or accuracy measures. The ULVR dataset was best forecastable, and RDSB was least forecastable. Future uncertainty and the involvement of external factors are primarily responsible for poor forecast results in time series, mainly for short-term, high-frequency data. Therefore, different data carry different degrees of forecastability, and some of these phenomena are shown through the selected time series datasets. Nevertheless, for all five time series datasets, the synergised performance of the EMD and Theta methods was better than the other methods, as the capturing, fitting, and forecasting suited such data patterns and characteristics. In future, further forecasting work is expected to be extended to machine learning as well as deep learning tools. Future studies are also suggested to focus on unexplored classical model-based hybridisation.