A HYBRID METHOD BASED ON CUCKOO SEARCH ALGORITHM FOR GLOBAL OPTIMIZATION PROBLEMS

The cuckoo search algorithm (CSA) is considered one of the promising metaheuristic algorithms and has been applied to numerous problems in different fields. However, it suffers from premature convergence on high-dimensional problems because the algorithm converges too rapidly. Therefore, we propose a robust hybrid approach to address this issue: a combination of the cuckoo search algorithm and hill climbing, called CSAHC, which escapes many local optimum traps by coupling local and global searches, even when the local search method alone would be trapped at a local minimum. In other words, CSAHC is able to balance the global exploration of the CSA with the deep exploitation of the hill climbing (HC) method. Performance is validated on 13 benchmark functions. The results of the experimental simulations demonstrate the improvement in efficiency, the effect of the cooperation strategy, and the promise of CSAHC.


INTRODUCTION
Optimization resides in many domains, such as engineering, energy, economics, medicine, and computer science (Mustaffa, Yusof, & Kamaruddin, 2013). It is mainly concerned with finding the optimal values of several decision variables that together form a solution to an optimization problem. A solution is considered optimal when the decision maker is satisfied with it. An optimization problem is the minimization or maximization of a suitable objective, normally handled by approximation methods. The principle of decision making entails choosing between several alternatives; the result of this choice is the selection of the best decision from all choices (Mohammed, Khader, & Al-Betar, 2016). Optimization algorithms developed from nature-inspired ideas deal with selecting the best alternative in the sense of a given objective function. An optimization algorithm can be either a heuristic or a metaheuristic approach. Heuristic approaches are problem-specific: each optimization problem has its own heuristic methods, which are not applicable to other kinds of optimization problems. A metaheuristic-based algorithm, by contrast, is a general solver template that can be adapted to various kinds of
optimization problems by properly tweaking its operators and configuring its parameters (Hasan, Quo, & Shamsuddin, 2012). As shown in Fig. 1, optimization algorithms can be categorized into three classes: evolutionary algorithms (EAs), swarm-based algorithms, and trajectory-based algorithms. Examples of EAs include genetic algorithms (GAs) (Holland, 1975), genetic programming (GP) (Koza, 1994), and differential evolution (DE) (Storn & Price, 1996). Examples of swarm-based algorithms include artificial bee colony (ABC) (Karaboga, 2005), particle swarm optimization (PSO) (James & Russell, 1995), and the cuckoo search algorithm (CSA) (Yang & Deb, 2009). Examples of trajectory-based algorithms include tabu search (TS) (Glover, 1977), simulated annealing (SA) (Kirkpatrick, Gelatt, Vecchi, et al., 1983), and hill climbing (Schaerf & Meisels, 1999; Shehab, Khader, & Al-Betar, 2017). The performance of population-based algorithms is measured by checking their ability to establish a proper trade-off between exploration and exploitation: an algorithm with a weak balance between exploration and exploitation is more likely to become trapped in local optima, converge prematurely, or stagnate (Shehab, Khader, & Al-Betar, 2016).
A population-based search algorithm is normally very powerful at exploring several regions of the problem search space. However, it has difficulty locating the local optima within each region. By contrast, the deep search of a local search-based algorithm is very efficient in a single region of the search space, but not across several regions (McMinn, 2004). Thus, it is sometimes very beneficial to hybridize a local and a population-based search method so that their advantages complement each other in a single optimization framework. Through such hybridization, the search can strike a balance between wide exploration and nearby exploitation of the problem search space. In this context, the CSA has been hybridized with other local search-based algorithms to improve its performance on complex optimization problems.
The linear least squares problem was solved by a hybrid of the Newton method (NM) and the CSA, called CSANM (Abdel-Baset & Hezam, 2016). The authors benefited from the CSA's fast convergence and global search, as well as from the NM's strong local search. The experimental results showed the convergence efficiency and computational accuracy of CSANM in comparison with the basic CSA and HS combined with NM (HSNM).
A novel CSA based on the Gauss distribution (GCSA) was proposed by Zheng et al. (2012). Although the basic CSA finds the optimum solution, its search depends entirely on random walks, so fast convergence and precision cannot be guaranteed. GCSA was therefore introduced to address the low convergence rate of the basic CSA. It has been applied to standard test functions and engineering design optimization problems, and the obtained results showed that GCSA achieves better solutions than the basic CSA. Wang et al. (2016) proposed a hybrid algorithm combining the CSA and HS (HS/CSA) for continuous optimization problems. In the HS/CSA method, the pitch adjustment of HS was used in the update process of the CSA, which increases population diversity, and an improved elitism scheme was used to retain the best individuals in the cuckoo population. The performance of HS/CSA was evaluated on a set of benchmark functions, and the obtained results showed that it achieved better outcomes than ACO, PSO, GA, HS, DE, and the basic CSA.
Quadratic assignment problems (QAPs) are considered NP-hard and cannot easily be solved by exact methods. Therefore, Dejam et al. (2012) proposed a hybrid of the CSA and TS (i.e., CSA-TS) to solve QAPs. In their research, the QAPs were initially tackled using the CSA and then refined with TS, whose local search increases optimization precision. The experimental results indicated that the proposed algorithm performs better than ABC and GA.
In this work, a new hybrid optimization approach is developed by combining the cuckoo search algorithm with hill climbing to solve global optimization problems. The proposed approach is evaluated on thirteen benchmark functions carefully selected from the literature. Experimental results demonstrate that CSAHC performs better than krill herd (KH) (Gandomi & Alavi, 2012), harmony search (HS) (Geem, Kim, & Loganathan, 2001), the bat algorithm (BA) (Yang, 2010a), GA, and the basic CSA.
The paper is organized as follows. The next section describes the CSA and HC in brief. The Proposed Methodology section presents the CSAHC approach in detail. Subsequently, our method is evaluated on 13 benchmarks and compared with 5 methods in the Experimental Results Analysis section. Finally, conclusions and future work are given in the last section.

Cuckoo Search Algorithm
The use of the CSA in an optimization context was proposed by Yang and Deb (2009). To date, work on this algorithm has increased significantly, and the CSA has earned its rightful place among other optimization methodologies (Fister Jr, Yang, Fister, & Fister, 2014). The algorithm is based on the obligate brood parasitism of some cuckoo species, in combination with the Levy flight behavior observed in some birds and fruit flies. The CSA is an efficient metaheuristic swarm-based algorithm that strikes a balance between local nearby exploitation and global wide exploration of the search space (Shehab, Khader, & Laouchedi, 2017).
The cuckoo has a specific way of laying its eggs that distinguishes it from other birds (Yang & Deb, 2014). The following three idealized rules describe the standard cuckoo search:

o Each cuckoo lays one egg at a time and dumps it in a randomly chosen nest.
o The best nests with high-quality eggs are carried over to the next generations.
o The number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability Pα ∈ (0, 1). In this case, the host bird can either get rid of the egg or simply abandon the nest and build a completely new one. Equivalently, a fraction Pα of the n host nests is replaced by new nests.

Figure 2 shows the pseudo code of the CSA search process. Similar to other swarm-based algorithms, the CSA starts with an initial population of n host nests. These initial host nests attract cuckoos with eggs, which perform random Levy flights to lay them. Thereafter, nest quality is evaluated and compared against another random host nest; if the new nest is better, it replaces the old one. This new solution holds the egg laid by a cuckoo. If the host bird discovers the egg, with probability Pα ∈ (0, 1), it either throws the egg out or abandons the nest and builds a new one. This step is implemented by replacing the abandoned solutions with new random solutions.
Yang and Deb used a simple, concrete representation in the implementation, with each egg representing a solution. Since a cuckoo lays only one egg, it too represents one solution. The purpose is to increase the diversity of new, and probably better, cuckoos (solutions) and substitute them for the worst solutions. The CSA can also be made more complicated by using multiple eggs in each nest to represent a set of solutions.

The CSA, like the bat algorithm (Yang, 2010a) and FA (Yang, 2010b), balances exploration and exploitation, and it integrates Levy flights into the search. When generating a new solution x_i^(t+1) for, say, a cuckoo i, a Levy flight is performed:

x_i^(t+1) = x_i^t + α ⊕ Levy(λ),   (1)

where α > 0 is the step size, which should be related to the scale of the problem of interest. In most cases, we can use α = 1. The x_i^t in the equation above represents the current location, which is the only input used to determine the next location x_i^(t+1); this is called a random walk or a Markov chain. The product ⊕ means entry-wise multiplication. This entry-wise product is similar to those used in PSO, but here the random walk via Levy flight is more efficient at exploring the search space because its step lengths are much longer in the long run. The globally explorative random walk draws its Levy steps from

Levy ∼ u = t^(−λ),   1 < λ ≤ 3,   (2)

where λ is the exponent of the power-law step-length distribution. The steps essentially form a random walk process with a power-law step-length distribution and a heavy tail. Some of the new solutions should be generated by a Levy walk around the best solution obtained so far; this speeds up the local search. However, a substantial fraction of the new solutions should be generated by far-field randomization, with locations far enough from the current best solution; this ensures that the system is not trapped in a local optimum.
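The two update rules above can be sketched in Python (a simplified illustration of Equations (1) and (2), not the authors' MATLAB implementation; the Levy-distributed step is drawn with Mantegna's algorithm, and all function and parameter names are our own):

```python
import math

import numpy as np

def levy_step(dim, lam=1.5, rng=None):
    """Draw a heavy-tailed Levy step via Mantegna's algorithm (1 < lam <= 3)."""
    if rng is None:
        rng = np.random.default_rng()
    beta = lam
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)          # power-law step lengths

def cuckoo_update(x, best, alpha=1.0, rng=None):
    """Equation (1): x^(t+1) = x^t + alpha (entry-wise) Levy(lam).

    Scaling the step by (x - best) is a common practical variant that
    biases the Levy walk around the best solution found so far.
    """
    x = np.asarray(x, dtype=float)
    step = levy_step(x.size, rng=rng)
    return x + alpha * step * (x - np.asarray(best, dtype=float))
```

Note that the occasional very long steps produced by `levy_step` are what give the CSA its far-field randomization, while most steps stay short and refine locally.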

Hill Climbing
Hill climbing (HC) is a mathematical optimization technique belonging to the family of local search methods (Schaerf & Meisels, 1999). It searches for a better solution in the neighborhood of the current state by evaluating candidates. If the current state is a goal state, it is returned and the search stops. Otherwise, the current state keeps being updated while possible: the algorithm loops until a solution is found or no new operators are left to apply to the current state. Inside the loop there are two steps: first, select an operator that has not yet been applied to the current state and apply it to produce a new state; second, evaluate the new state. Figure 3 shows the pseudo-code of the HC algorithm, which illustrates the simplicity of hill climbing.
Based on the above, the basic idea of HC is to always head towards a state that is better than the current one, so it always improves the quality of the solution (Burke & Newall, 2002).
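The loop described above can be sketched as follows (a minimal Python illustration written for minimization, to match the benchmark functions used later; the neighbour-generation scheme is an assumption, since the paper does not fix one):

```python
def hill_climbing(f, x0, neighbours, max_iters=1000):
    """Minimise f starting from x0 by moving to a better neighbour each step."""
    current, f_current = x0, f(x0)
    for _ in range(max_iters):
        improved = False
        for cand in neighbours(current):
            f_cand = f(cand)
            if f_cand < f_current:        # head toward a better state
                current, f_current = cand, f_cand
                improved = True
                break
        if not improved:                   # no better neighbour: local optimum
            break
    return current, f_current
```

For example, with `f(x) = x**2` and integer neighbours `x - 1` and `x + 1`, the search started from 10 walks straight downhill to the minimum at 0.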

Figure 3. Pseudo code of the Hill Climbing method
HC has several advantages: it can easily be adjusted to the problem at hand, and almost any aspect of the algorithm may be changed and customized. For example, it can be used in continuous as well as discrete domains (Alajmi et al., 2011; Rubio & Gámez, 2011).

THE PROPOSED METHODOLOGY: CSA-HILL CLIMBING
Based on the introduction of the CSA and HC in the previous sections, this section provides a detailed description of the proposed cuckoo search algorithm with hill climbing (CSAHC).

The CSA is based on the obligate brood parasitism of some cuckoo species, in combination with Levy flight, a type of random walk with a power-law, heavy-tailed step-length distribution, inspired by the behavior of some birds and fruit flies (Yang & Deb, 2009). Levy flights are used for global exploration and have proved efficient, achieving good results (Pavlyukevich, 2007; Yang & Deb, 2013). Thus, the CSA is considered an efficient metaheuristic swarm-based algorithm that strikes a balance between local nearby exploitation and global wide exploration of the search space (Roy & Chaudhuri, 2013b). However, it sometimes exploits solutions poorly and converges slowly. For that reason, the proposed algorithm improves the search ability of the basic CSA by combining it with the HC method to deepen exploitation; the resulting CSAHC algorithm is used to optimize the benchmark functions (refer to Figure 4). CSAHC starts the search by applying the standard cuckoo search for a number of iterations. The best solution obtained is then passed to the HC method to accelerate the search and overcome the slow convergence of the standard cuckoo search algorithm. HC is an iterative algorithm that starts
with an arbitrary solution to a problem and subsequently attempts to determine a better solution by incrementally changing a single element of the solution.
When the change produces a better solution, another incremental change is made to the new solution, and this is repeated until no further improvements can be found. The resulting solution is then returned to the CSA and checked against the discovery probability Pα.
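The hybrid procedure described above can be sketched as follows (a schematic in Python, not the authors' MATLAB implementation; the heavy-tailed Cauchy step stands in for a true Levy flight, and the step sizes and HC neighbourhood scale are illustrative assumptions):

```python
import numpy as np

def csahc(f, bounds, n_nests=20, p_a=0.25, n_iters=100, hc_iters=200, rng=None):
    """Sketch of CSAHC: a CSA phase for exploration, then HC refinement."""
    if rng is None:
        rng = np.random.default_rng()
    lo, hi = (np.asarray(b, dtype=float) for b in bounds)
    dim = lo.size
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fitness = np.array([f(x) for x in nests])
    for _ in range(n_iters):
        best = nests[fitness.argmin()].copy()
        # CSA phase: heavy-tailed random-walk moves biased around the best nest
        for i in range(n_nests):
            cand = nests[i] + 0.1 * rng.standard_cauchy(dim) * (nests[i] - best)
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fitness[i]:
                nests[i], fitness[i] = cand, fc
        # abandon a fraction p_a of the nests (the current best is kept)
        best_i = int(fitness.argmin())
        for i in range(n_nests):
            if i != best_i and rng.random() < p_a:
                nests[i] = rng.uniform(lo, hi)
                fitness[i] = f(nests[i])
    # HC phase: deepen exploitation around the best solution found so far
    best = nests[fitness.argmin()].copy()
    f_best = float(fitness.min())
    for _ in range(hc_iters):
        cand = np.clip(best + rng.normal(0.0, 0.05, dim), lo, hi)
        fc = f(cand)
        if fc < f_best:
            best, f_best = cand, fc
    return best, f_best
```

The design choice mirrors the text: the CSA phase supplies wide exploration, and the HC phase performs the incremental, greedy refinement of the single best solution before it is handed back.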

THE EXPERIMENTAL RESULTS ANALYSIS
In this section, the proposed CSAHC is tested through an array of experiments. For testing purposes, we implemented the original version of the CSA. We compared the results of CSAHC with those of other methods; the comparisons are shown in the tables in this section.
All the experiments were conducted on a computer with an Intel(R) Core(TM) i7-6700K CPU at 4.00 GHz, 16 GB of RAM, and 64-bit Microsoft Windows 10 Pro. The source code was implemented in MATLAB (R2015a).

Benchmark Functions
To test the performance of CSAHC, 13 well-known benchmark functions are used for comparison. Table 1 describes these benchmark functions in terms of the optimum solution after a predefined number of iterations and the rate of convergence to the optimum solution. Further information about all the benchmark functions can be found in Yao, Liu, & Lin (1999), Simon (2008), and Jamil & Yang (2013).
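Three of the benchmarks named in the figure captions below (Ackley, Sphere, and Step) can be written, in their common textbook forms, as follows; the paper's exact definitions and domains are those of Table 1, so treat these as illustrative:

```python
import numpy as np

def ackley(x):
    """Ackley function: multimodal, global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)

def sphere(x):
    """Sphere function: unimodal, global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def step(x):
    """Step function: flat plateaus, minimum 0 on the cell around the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(np.floor(x + 0.5) ** 2))
```

The Step function's flat plateaus are what make it a useful probe of exploration: gradient-free moves that stay inside one plateau give no feedback, so an algorithm must take large enough steps to escape.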

A. Comparisons with other methods
CSAHC was first compared, on the global optimization problems, with five optimization algorithms, namely, KH, HS, GA, BA, and the CSA.

In our simulations, the usual parameters for the CSA have been used: the number of host nests n = 20 and the probability of discovery Pα = 0.25. The tests were run on 10, 25, 50, and 100 dimensions for a maximum of 100,000 function evaluations, and each test was run 100 times. Tables 2 and 3 show the different scales used to normalize the values and illustrate the differences among the six methods.
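The paper does not spell out its exact normalization scheme for Tables 2 and 3; one simple possibility, shown purely as an illustration, is to divide each benchmark's row of results by its best (smallest) value so that the winning method scores 1 and the others show their relative gap:

```python
import numpy as np

def normalize_rows(results):
    """Normalize a (benchmarks x methods) array of mean function optima.

    Each row is divided by its minimum, so the best method on each
    benchmark gets 1.0 and larger values mean proportionally worse results.
    """
    results = np.asarray(results, dtype=float)
    return results / results.min(axis=1, keepdims=True)
```

This kind of per-benchmark rescaling is what makes methods comparable across functions whose raw optima differ by many orders of magnitude.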
Table 2 shows that CSAHC performs best on 11 of the 13 benchmarks, namely F1-F4, F6-F10, and F12-F13. The CSA is the second most effective, performing best on benchmarks F1-F2, F4-F5, and F13, followed by GA, KH, BA, and HS, respectively. Table 3 presents the averaged results, where it can be observed that CSAHC is the most effective at finding the objective function minimum on 10 of the 13 benchmarks: F2-F4, F6-F9, and F11-F13. The CSA and GA are the second most effective, performing best on benchmarks F4-F5, F10, and F13 for the CSA, and on F2, F11-F12, and F13 for the GA, followed by KH, BA, and HS, respectively. Further, the most representative convergence curves are provided (see Figures 5-10). The values in the figures are the mean function optima, which are the true values.
From Figure 5, CSAHC is clearly capable of finding better solutions than all the other methods. HS converges sharply in the first search stage but soon gets trapped in a sub-minimum, after which the objective value decreases only slightly. In addition, on this function BA is close to CSAHC in the first stage, but the gap widens in the second stage. BA, CSA, GA, and KH all move toward good solutions initially, but later the CSA converges to a better minimum than the others, and CSAHC is the best of all. Figure 6 shows that CSAHC has the best performance among the six methods, with the CSA ranking second. GA has the third best performance, with a relatively slow and stable convergence rate.
Figures 7, 8, and 9 show that CSAHC is capable of finding better solutions than all the other methods. In Figure 7, the CSA achieves the best solutions from the beginning until the 25th generation, GA holds the best solutions from the 26th until the 43rd generation, and CSAHC then holds the best solutions until the end. The results in Figure 8 are almost the same as those in Figure 7, but in Figure 8 the results of CSAHC, GA, CSA, and KH lie close together, with a slight edge for CSAHC. Figure 9 shows the same ranking as Figures 7 and 8; however, in Figure 9 CSAHC clearly outperforms the other methods.
Figure 10 shows that CSAHC achieved the best solutions, especially in the first part of the run, where the basic CSA has only a slight advantage. However, GA outperforms both the basic CSA and CSAHC, especially in the last part. Overall, an analysis of Figures 5 to 10 reveals that the proposed metaheuristic CSAHC method greatly outperforms the other methods.

B. Influence of control parameter
Parameter setting plays an important role in the performance of metaheuristic methods when solving different problems. In this article, the number of host nests (population size n) and the probability of discovery (Pα) are thoroughly studied over 100 trials on the problems above, in terms of the best solution and the mean, as shown in Tables 4, 5, 6, and 7.

From Tables 4 and 5, we can see the superior performance of CSAHC when n = 20, while performance decreases as n increases. This is because increasing the value of n means enlarging the search space, and therefore the performance of CSAHC decreases. From Table 6, it can clearly be seen that CSAHC performs best when Pα = 0.1 and 0.2. In particular, for F1 through F5, CSAHC performs similarly; that is, the elitism parameter Pα has little influence on these benchmark functions. Furthermore, for Pα = 0 and for values from 0.3 through 0.8, CSAHC achieves almost the same results, while the worst results occur at Pα = 0.9 and 1. In Table 7, there is a clear superiority for CSAHC when Pα = 0.2, followed by Pα = 0.1 and 0.3 with almost the same results; all other values of Pα achieve nearby results. In short, CSAHC performs best when Pα = 0.2.

Journal of ICT, 17, No. 3 (July) 2018, pp: 469-491

CONCLUSION AND FUTURE WORK
In the present work, a novel metaheuristic CSAHC method is proposed for solving global optimization tasks. We improved the CSA by combining it with HC and evaluated the performance of CSAHC on 13 benchmark functions. The hybridization enhances the basic CSA by using HC, which excels at local search, to deepen exploitation. Compared with other search methods, namely the original CSA, the original BA, GA, HS, and KH, CSAHC shows improved efficiency and effectiveness. In future work, CSAHC can be applied to more benchmark functions, including some real-world optimization problems, for further examination.

Figure 2. Pseudo code of the Cuckoo Search Algorithm


Figure 5. Performance comparison for the F1 Ackley function.


Figure 8. Performance comparison for the F9 Schwefel 1.2 function.

Figure 9. Performance comparison for the F12 Sphere function.

Figure 10. Performance comparison for the F13 Step function.


Table 1
Benchmark Functions

Table 2
Best normalized optimization results

Table 3
Mean normalized optimization results

Table 5
Best normalized optimization results with different n

Table 6
Best normalized optimization results with different Pα

Table 7
Mean normalized optimization results with different Pα