HYBRID CAT SWARM OPTIMIZATION AND SIMULATED ANNEALING FOR DYNAMIC TASK SCHEDULING ON CLOUD COMPUTING ENVIRONMENT

Journal of ICT, 17, No. 3 (July) 2018, pp. 435–467. Published: 12 June 2018.

The unpredictable number of tasks arriving at the cloud datacentre and the rescaling of virtual processing elements can affect the provisioning of better Quality of Service (QoS) expectations during task scheduling in cloud computing. Existing researchers have contributed several task scheduling algorithms to provide better QoS expectations, but these are characterized by entrapment at the local search and high-dimensional breakdown due to slow convergence speed and an imbalance between global and local search, resulting from a lack of scalability. Dynamic task scheduling algorithms that can adjust to long-time changes and continue facilitating the provisioning of better QoS are necessary for the cloud computing environment. In this study, a Cloud Scalable Multi-Objective Cat Swarm Optimization-based Simulated Annealing (CSM-CSOSA) algorithm is proposed. In the proposed method, the orthogonal Taguchi approach is applied to enhance the Simulated Annealing (SA), which is incorporated into the local search of the proposed CSM-CSOSA algorithm for scalability performance. A multi-objective QoS model based on execution time and execution cost criteria is presented to evaluate the efficiency of the proposed algorithm on the CloudSim tool with two different datasets. Quantitative analysis of the algorithm is carried out with metrics of execution time, execution cost, QoS and performance improvement rate percentage. Meanwhile, the scalability analysis of the proposed algorithm using the Isospeed-efficiency scalability metric is also reported. The results of the experiment show that the proposed CSM-CSOSA outperformed the Multi-Objective Genetic Algorithm, Multi-Objective Ant Colony Optimization and Multi-Objective Particle Swarm Optimization by returning minimum execution time and execution cost as well as a better scalability acceptance rate of 0.4811−0.8990.
The proposed solution, when implemented in a real cloud computing environment, could possibly meet customers' QoS expectations as well as those of the service providers.


INTRODUCTION
The evolution of cloud computing has reshaped Information Technology (IT) consumption through the provisioning of high-performance computing as well as massive resource storage that are continually channelled across a medium called the Internet. The paradigm permits the execution of large-scale applications, where distributed collaborative resources managed by several autonomous domains are made available (Khajehvand et al., 2014; Gabi, 2014). Trends toward the development of cloud computing date far back to when computers were first connected, when networking among computers moved to distributed computing, which led in turn to cluster computing, then grid computing and eventually cloud computing (Rani et al., 2015). Presently, services provided by cloud computing are available at affordable cost, with high availability and scalability for all scales of businesses (Hassan et al., 2017). These services include: Software as a Service (SaaS), providing users with opportunities to run applications remotely from the cloud; Infrastructure as a Service (IaaS), providing virtualized computing services that ensure better processing power with reserved bandwidth for storage; and Platform as a Service (PaaS), providing operating systems and the required services for a particular application (Furkt, 2010; Raza et al., 2015; Cui et al., 2017). All these services function within the delivery models of cloud computing: the public cloud, which permits dynamic allocation of computing resources over the Internet through web applications; the private cloud, built to provide full control over data, security and quality of service; and the hybrid cloud, which controls the distribution of applications across both public and private clouds (Furkt, 2010). One of the fundamental challenges of cloud computing is that the level of Quality of Service (QoS) satisfaction has become insufficient to meet consumer and service provider expectations. The number of tasks
arriving at the cloud datacentre is alarming, and the rescaling of virtual machine processing elements to meet each task's expectations is a complex scheduling problem (Ibrahim et al., 2015). Cloud consumers send tasks to cloud virtual resources (virtual machines). Each task is characterized by QoS objective(s) expected to be met. The cloud consumer demands that a submitted task be processed in a short time with a low cost of execution. The service provider facilitates the provisioning of the required service that can meet this expectation while demanding better pay. This problem can be referred to as a multi-objective NP-hard problem (Kalra & Singh, 2015). It has become necessary to develop a task scheduling algorithm that considers the dynamicity of the cloud computing environment to facilitate efficient mapping of each task onto a suitable resource and to order the tasks on each resource to satisfy performance criteria (Monika & Jindal, 2016; Kalra & Singh, 2015; Zhang et al., 2014; Letort et al., 2015). Therefore, dynamic optimization algorithms are the potential solution for distributing tasks amongst virtual machines at run-time while considering the current state of Virtual Machine (VM) capacity information to fast-track the next distribution decision (Gabi et al., 2015; Mustaffa et al., 2013; Ibrahim et al., 2016). To date, it remains vital to design a low-complexity dynamic optimization algorithm that adapts to the dynamicity of cloud tasks and resources while maintaining better QoS performance.
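To make the multi-objective mapping described above concrete, the sketch below scores a candidate task-to-VM assignment by a weighted sum of execution time (makespan) and execution cost. This is a hypothetical illustrative formulation, not the paper's exact QoS model; the function name, weights and per-second pricing are assumptions.

```python
# Hypothetical weighted-sum fitness for a task-to-VM assignment.
# Task lengths are in million instructions (MI), VM speeds in MIPS,
# and cost is charged per second of VM use. Lower fitness is better.

def schedule_fitness(assignment, task_mi, vm_mips, vm_cost_per_sec,
                     w_time=0.5, w_cost=0.5):
    """assignment[i] = index of the VM that runs task i."""
    vm_busy = [0.0] * len(vm_mips)   # accumulated busy seconds per VM
    total_cost = 0.0
    for task, vm in enumerate(assignment):
        seconds = task_mi[task] / vm_mips[vm]
        vm_busy[vm] += seconds
        total_cost += seconds * vm_cost_per_sec[vm]
    makespan = max(vm_busy)          # overall execution time of the schedule
    return w_time * makespan + w_cost * total_cost

# Two tasks mapped to the faster VM, one to the slower, cheaper VM:
score = schedule_fitness([0, 0, 1], task_mi=[4000, 2000, 1000],
                         vm_mips=[1000, 500], vm_cost_per_sec=[0.03, 0.01])
```

A scheduler would evaluate many candidate assignments with such a function and keep the one with the smallest score, trading time against cost through the weights.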
Swarm Intelligence (SI) techniques are relatively new and promising approaches for solving combinatorial optimization problems because of their ability to handle large-scale problems and produce results in just one run. These techniques are inspired by the collective intelligence of the social behavioural models of insects and other animals (Singh et al., 2017). With an SI technique, information is shared easily among multiple swarms for co-evolution while searching the solution space. Large-scale optimization becomes practical with these techniques because multiple agents can be parallelised easily (Singh et al., 2017; Mustaffa et al., 2015). Some examples of SI techniques used by existing researchers to address the task scheduling problem are: Particle Swarm Optimization (PSO) (Ramezaini et al., 2013; Awad et al., 2015; Jena, 2015), Ant Colony Optimization (ACO) (Shengjun et al., 2015; Anradha & Selvakumar, 2015), Artificial Bee Colony (ABC) (Kumar & Gunasekaram, 2014; Li & Pan, 2015; Gao et al., 2015), the BAT algorithm (Gandomi & Yang, 2014; Jacob, 2014; George, 2015) and Cat Swarm Optimization (CSO) (Bilgaiyan et al., 2015; Gabi et al., 2016).
Cat Swarm Optimization (CSO) is one of the SI approaches, introduced in (Chu & Tsai, 2007) to address continuous optimization problems. The technique converges faster than Particle Swarm Optimization (PSO) (Chu & Tsai, 2007). Exploring this technique to address a discrete optimization problem, especially the cloud task scheduling problem, is a potential solution. The CSO has both a global and a local search, known as the seeking and tracing modes, and a mixed ratio (MR) that determines the mode of each cat (Gabi et al., 2016; Chu & Tsai, 2007). Its local search (tracing mode) can be enhanced to search for optimality in a multi-dimensional problem. Simulated Annealing (SA) is a type of local search and an easy-to-implement probabilistic approximation algorithm, introduced in (Kirkpatrick et al., 1983) to solve NP-hard optimization problems (Wang et al., 2016). It uses a neighbourhood function and a fitness function to avoid being trapped at the local optimum, thereby finding a solution closer to the global optimum (Jonasson & Norgren, 2016; Abdullahi & Ngadi, 2016; Černý, 1985). The strength of the SA when searching for an optimal solution can be further enhanced when a method like the orthogonal Taguchi approach is introduced (Taguchi et al., 2000). In this study, we propose a Cloud Scalable Multi-Objective Cat Swarm Optimization based Simulated Annealing (CSM-CSOSA) algorithm to address the task scheduling problem in cloud computing. To determine the effectiveness of the algorithm, a multi-objective QoS task scheduling model is presented and solved using the proposed CSM-CSOSA algorithm.
Several contributions are made in this study: the development of a multi-objective model based on execution time and execution cost objectives for optimal task scheduling in a cloud computing environment; the development of the CSM-CSOSA task scheduling algorithm to solve the multi-objective task scheduling model; the implementation of the CSM-CSOSA task scheduling algorithm on the CloudSim tool; and the performance comparison of the proposed CSM-CSOSA task scheduling algorithm with the multi-objective genetic algorithm (Budhiraja & Singh, 2014), the multi-objective scheduling optimization method based on ant colony optimization (Zuo et al., 2015) and multi-objective particle swarm optimization (Ramezaini et al., 2013) based on execution time, execution cost, QoS and performance improvement rate percentage.

RELATED WORK
Several authors have put forward task scheduling optimization algorithms to solve the task scheduling problem in cloud computing. Some of these are discussed as follows. Zuo et al. (2015) introduced a multi-objective optimization scheduling method based on an ant colony. The authors' aim was to optimise both the performance and cost objectives. The authors conducted experiments via simulation to show the effectiveness of their proposed algorithm. The result of the experiment shows that their method achieved a 56.6% increase in the best-case scenario as compared to other algorithms. However, local trapping is an issue with the ant colony method as the ants traverse toward a solution, and the pheromone updating process can lead to long computation times. Besides, the number of tasks used for the experiment may not be significant enough to justify whether their proposed method is scalable to handle large task sizes. Similarly, Zuo et al. (2016) proposed a multi-objective task scheduling method based on Ant Colony Optimization (MOSACO). The objective of the study was to address deadline and cost in a hybrid cloud computing environment. The researchers measured the effectiveness of their proposed MOSACO algorithm using metrics of task completion time, cost, the number of deadline violations, and the degree of private resource utilization.
The results of the simulation show that their proposed MOSACO task scheduling algorithm can provide the highest optimality. However, scalability may be an issue due to the number of tasks used for the experiment, especially when considering the dynamicity of cloud computing. In another development, Dandhwani and Vekariya (2016) put forward a multi-objective scheduling algorithm for cloud computing environments. Their objective was to minimize the execution time and makespan of tasks scheduled on computing resources. The authors reported that the simulation results of their proposed method can minimize the execution time and makespan effectively. However, the greedy approach may be insufficient to handle the large-scale task scheduling problem, especially in a dynamic cloud environment. Khajehvand et al. (2013) dwelled on a heuristic scalable cost-time trade-off scheduling algorithm for grid computing environments to solve the workflow scheduling problem. The study makes use of three scheduling constraints (i.e. the task sizes, task parallelism, and heterogeneous resources) to evaluate their proposed method. The authors revealed that simulation results show their heuristic method outperforming the comparison method on performance and scalability with different workflow sizes. However, the heuristic-based approach performs better when a centralized scheduling environment is considered, where task arrival is known in advance and scheduling is done on the capacity of the virtual machines to handle the task demand. Besides, its performance in a dynamic cloud environment could be an issue due to the volume of tasks and the heterogeneity of cloud computing resources. As a result, determining the right resource to execute the task demand will be a very complex decision. In another development, Lakra and Yadav (2015) introduced a multi-objective task scheduling algorithm to increase throughput and minimize resource execution cost. The experimental result via simulation shows that their proposed method can yield better performance in terms of cost and improve throughput. However, its application to large task sizes under elastic resource conditions is still an issue that needs to be addressed. Yue et al. (2016) presented an improved multi-objective niched Pareto genetic algorithm (NPGA) method. The objective of the study was to minimize the time consumption and financial cost of handling users' tasks. The results of the experiment via simulation show that their proposed algorithm can maintain the diversity and distribution of Pareto-optimal solutions in cloud task scheduling, under the same population size and evolution generation, better than the comparison algorithm. However, long computation time is bound to occur due to the mutation process characteristic of the genetic algorithm. Besides, the global solution-finding merit of the genetic algorithm is insufficient to find an optimal solution due to the nature of its chromosome selection using the probability function.
In their part, Budhiraja and Singh (2014) introduced a multi-objective task scheduling algorithm using the genetic technique. The objective of the study was to reduce the cost of execution and the execution time, and to ensure scalability performance. The result of the simulation, as stated by the authors, shows that their method can obtain a better optimum in terms of makespan and cost of resource usage. However, it is hard to draw a conclusion on their proposed algorithm, since no comparison technique was considered. Hua et al. (2016) presented a PSO-based adaptive multi-objective task scheduling (AMTS) algorithm for the cloud computing environment. The objective of their study was to minimize the processing time and the transmission time of scheduled tasks in the cloud datacentre. The results of the experiment via simulation show that their PSO-based AMTS algorithm can obtain better quasi-optimal solutions in task completion time, average cost, and energy consumption compared to the genetic algorithm. However, the global search process of the PSO is insufficient to handle the task scheduling optimization problem without incorporating a local search optimization technique. Besides, the number of iterations used in the experiments is insufficient to justify the performance of the proposed algorithm. On the other hand, Letort et al. (2015) presented a greedy-based scheduling algorithm that handles the task scheduling problem based on resource and precedence constraints. The experimental results via simulation show a significant increase across several cumulative constraints. However, the greedy approach performs better when considering a small-scale network environment with small task sizes. Leena et al. (2016) proposed a bi-objective task scheduling algorithm based on the genetic algorithm for hybrid cloud environments. The objective of the study was to minimize the execution time and execution cost of tasks scheduled on computing resources. The authors made use of two single-objective algorithms, one each for execution time and execution cost, to show the effectiveness of their proposed method. The result of the experiment via simulation shows that their proposed method can reduce the execution time and execution cost of all tasks scheduled on computing resources as compared to the single-objective optimization algorithms. However, local entrapment can still be an issue with the genetic technique. Ramezani et al. (2013) introduced a multi-objective algorithm to solve three conflicting objectives: task execution time, task transfer time and task execution cost. The result of the experiment via simulation on the CloudSim tool shows more remarkable performance than the other comparative algorithms. However, the PSO can easily get entrapped in the local optima region.

Findings from the Existing Methods
Findings show that the heuristic (greedy) task scheduling algorithms are applicable to small-size scheduling problems. Although some degree of success in addressing the NP-completeness of task scheduling can be achieved by returning a feasible solution, the dynamic nature of the cloud computing environment prevents the heuristic approach from satisfying scheduling optimization objectives such as makespan and execution cost. The metaheuristic techniques are more promising than the heuristic techniques. However, the metaheuristic techniques used in the existing literature for the multi-objective task scheduling problem exhibit both global and local search optimization processes. Global search optimization alone cannot guarantee optimality, and local search optimization often gets trapped at the local optima. Hence, intensification and diversification will focus the exploration of the search space in a local region using a combination of several methods, helping to achieve global optimality for both the execution time and execution cost objectives. This will also increase the scalability to handle the dynamically changing task and resource conditions (i.e. the virtual machine processing elements).

Cat Swarm Optimization

Chu and Tsai (2007) introduced the Cat Swarm Optimization (CSO) technique, which mimics the common behaviour of a natural cat. As observed by the authors, cats always remain alert while spending most of their time resting, and move slowly when observing their environment. Two modes were actualized to represent the behaviour of a cat (Gabi et al., 2016), i.e. the seeking mode and the tracing mode. The seeking mode is the global search process of the CSO technique. Four attributes are associated with this mode: the Seeking Memory Pool (SMP), which indicates the memory size sought by the cat; the Seeking Range of the selected Dimension (SRD), for selecting cat dimensions; the Counts of Dimension to Change (CDC), used for disclosing how many dimensions per cat are varied; and Self-Position Considering (SPC), a Boolean variable that unveils whether the position at which the cat is presently standing can be chosen as a candidate position to move into (Gabi et al., 2016). Algorithm 1 shows the procedure for the seeking mode (Chu & Tsai, 2007). The tracing mode is the local search procedure of the CSO technique. Algorithm 2 shows the pseudocode for the CSO tracing mode (Gabi et al., 2016):

1. Update the velocity of every cat using Equation 1:

v_{k,d} = v_{k,d} + r × c × (x_{best,d} − x_{k,d})    (1)

where c is the constant value of acceleration and r is a uniformly distributed random number in the range [0, 1].

2. Add the new velocity by computing the current (new) position of the cat using Equation 2:

x_{k,d} = x_{k,d} + v_{k,d}    (2)

3. Calculate the fitness values of all cats.
4. Update and return the best cats with the best fitness.
End
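The tracing-mode update in Equations 1 and 2 can be sketched in a few lines. The continuous position encoding and default acceleration constant below are illustrative assumptions; only the update rule itself follows the description above.

```python
import random

def tracing_mode_step(position, velocity, best_position, c=2.0):
    """One CSO tracing-mode update per Equations 1 and 2:
    v_{k,d} = v_{k,d} + r * c * (x_{best,d} - x_{k,d});  x_{k,d} = x_{k,d} + v_{k,d}."""
    new_pos, new_vel = [], []
    for d in range(len(position)):
        r = random.random()                    # uniform random number in [0, 1]
        v = velocity[d] + r * c * (best_position[d] - position[d])
        new_vel.append(v)
        new_pos.append(position[d] + v)        # Equation 2: move the cat
    return new_pos, new_vel

# Move a cat at the origin toward the best cat found so far:
pos, vel = tracing_mode_step([0.0, 0.0], [0.1, 0.1], [1.0, -1.0])
```

Each dimension is pulled toward the best cat's position, scaled by the acceleration constant and a fresh random factor, which is what gives the tracing mode its local, exploitative character.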

Limitations of Cat Swarm Optimization to Solve Cloud Task Scheduling Problem
Although the CSO technique has proven to be more efficient than PSO in both computation time and convergence speed (Chu & Tsai, 2007), its application in cloud computing may require improvement to solve the complex task scheduling optimization problem. The global search optimization process of the CSO is quite promising. However, this global search alone cannot guarantee an optimal solution without the support of the local search optimization process.
The CSO suffers local entrapment even while its global solution-finding merit is preserved. This is because the number of cats going into the seeking mode (global search) always exceeds those in the tracing mode (local search). This may cause the mutation process of the CSO in the tracing (local search) mode to affect performance, ending up not achieving an optimal solution for the cloud task scheduling optimization problem (Gabi et al., 2016). Similarly, at every iteration the seeking (global search) mode and tracing (local search) mode of the CSO are carried out independently, causing its position and velocity updates to exhibit a similar process. As a result, a very high computation time is bound to occur (Pradhan & Panda, 2012). Therefore, a local search optimization algorithm incorporated at the local search of the CSO is sufficient to address these limitations.

Simulated Annealing
Simulated Annealing (SA) is a local search probabilistic approximation algorithm introduced by Kirkpatrick et al. (1983).The algorithm uses a neighbourhood and a fitness function to avoid being trapped at the local optima (Jonasson & Norgre, 2016).The SA algorithm often begins with an initial solution according to some neighbourhood function with an updated solution created .As to how the particle tend to adopt a state which is an improvement over current one, the algorithm generates a solution when the fitness value becomes lower than .However, assume has the higher fitness, it will occasionally be accepted if the defined probability shown in equation 3 is satisfied (Abdullahi & Ngadi, 2016). (3) Where is the fitness evaluation functions and the current solutions of the neighbour accordingly; and represents the control parameter called the temperature.This parameter is determined according to the cooling rate used in (Abdullahi & Ngadi, 2016). (4) Where: = temperature descending rate, ; the number of times which neighbour solutions have been generated so far; initial temperature; final temperature.When the initial value of the temperature is low, the algorithm 9 timization process of the CSO is quite promising.However, this global search alone can not arantee an optimal solution without the support of the local search optimization process.The O suffered local entrapment while its global solution finding merit is preserved.This is cause the number of cats going into seeking mode (global search) all the time always exceed ones with tracing mode (local search mode).This may cause the mutation process of the CSO tracing (local search) mode to affect performance and may end up not achieving an optimal lution for cloud task scheduling optimization problem (Gabi et al., 2016) (1983).The algorithm uses a neighbourhood and a fitness function to avoid ing trapped at the local optima (Jonasson & Norgre, 2016).The SA algorithm often begins th an initial solution  according to 
some neighbourhood function  with an updated solution created.As to how the particle tend to adopt a state which is an improvement over current one, algorithm generates a solution when the fitness value ( * ) becomes lower than ().
wever, assume  * has the higher fitness, it will occasionally be accepted if the defined bability shown in equation 3 is satisfied (Abdullahi & Ngadi, 2016).
here ( * ) is the fitness evaluation functions and () the current solutions of the neighbour cordingly; and  represents the control parameter called the temperature.This parameter is termined according to the cooling rate used in (Abdullahi & Ngadi, 2016).
9 ptimization process of the CSO is quite promising.However, this global search alone can not uarantee an optimal solution without the support of the local search optimization process.The SO suffered local entrapment while its global solution finding merit is preserved.This is ecause the number of cats going into seeking mode (global search) all the time always exceed he ones with tracing mode (local search mode).This may cause the mutation process of the CSO t tracing (local search) mode to affect performance and may end up not achieving an optimal olution for cloud task scheduling optimization problem (Gabi et al., 2016).Similarly, for every  (1983).The algorithm uses a neighbourhood and a fitness function to avoid eing trapped at the local optima (Jonasson & Norgre, 2016).The SA algorithm often begins ith an initial solution  according to some neighbourhood function  with an updated solution ′ created.As to how the particle tend to adopt a state which is an improvement over current one, he algorithm generates a solution when the fitness value ( * ) becomes lower than ().
owever, assume  * has the higher fitness, it will occasionally be accepted if the defined robability shown in equation 3 is satisfied (Abdullahi & Ngadi, 2016).
here ( * ) is the fitness evaluation functions and () the current solutions of the neighbour ccordingly; and  represents the control parameter called the temperature.This parameter is etermined according to the cooling rate used in (Abdullahi & Ngadi, 2016).

Simulated Annealing
Simulated Annealing (SA) is a local search probabilistic approximation al Kirkpatrick et al. (1983).The algorithm uses a neighbourhood and a fit being trapped at the local optima (Jonasson & Norgre, 2016).The SA a with an initial solution  according to some neighbourhood function  w  ′ created.As to how the particle tend to adopt a state which is an improve the algorithm generates a solution when the fitness value ( * ) becom However, assume  * has the higher fitness, it will occasionally be ac probability shown in equation 3 is satisfied (Abdullahi & Ngadi, 2016).
Where ( * ) is the fitness evaluation functions and () the current solu accordingly; and  represents the control parameter called the temperat determined according to the cooling rate used in (Abdullahi & Ngadi, 2016  =   *   +   9 guarantee an optimal solution without the support of the local search optimization proces CSO suffered local entrapment while its global solution finding merit is preserved. because the number of cats going into seeking mode (global search) all the time always the ones with tracing mode (local search mode).This may cause the mutation process of th at tracing (local search) mode to affect performance and may end up not achieving an solution for cloud task scheduling optimization problem (Gabi et al., 2016).Similarly, fo iteration, the seeking (global search) mode and tracing (local search) mode of CSO were out independently, causing its position and velocity update to exhibit similar process.As a a very high computation time is bound to occur (Pradhan & Panda, 2012).Therefore, search optimization algorithm incorporated at the local search of the CSO is sufficient to its limitations.

Simulated Annealing
Simulated Annealing (SA) is a local search probabilistic approximation algorithm introdu Kirkpatrick et al. (1983).The algorithm uses a neighbourhood and a fitness function t being trapped at the local optima (Jonasson & Norgre, 2016).The SA algorithm often with an initial solution  according to some neighbourhood function  with an updated s  ′ created.As to how the particle tend to adopt a state which is an improvement over curre the algorithm generates a solution when the fitness value ( * ) becomes lower than However, assume  * has the higher fitness, it will occasionally be accepted if the probability shown in equation 3 is satisfied (Abdullahi & Ngadi, 2016).
Where ( * ) is the fitness evaluation functions and () the current solutions of the ne accordingly; and  represents the control parameter called the temperature.This param determined according to the cooling rate used in (Abdullahi & Ngadi, 2016).
guarantee an optimal solution without the support of the local search optimization p CSO suffered local entrapment while its global solution finding merit is preser because the number of cats going into seeking mode (global search) all the time al the ones with tracing mode (local search mode).This may cause the mutation proces at tracing (local search) mode to affect performance and may end up not achievin solution for cloud task scheduling optimization problem (Gabi et al., 2016).Similar iteration, the seeking (global search) mode and tracing (local search) mode of CSO out independently, causing its position and velocity update to exhibit similar process a very high computation time is bound to occur (Pradhan & Panda, 2012).There search optimization algorithm incorporated at the local search of the CSO is sufficie its limitations.

Simulated Annealing
Simulated Annealing (SA) is a local search probabilistic approximation algorithm i Kirkpatrick et al. (1983).The algorithm uses a neighbourhood and a fitness funct being trapped at the local optima (Jonasson & Norgre, 2016).The SA algorithm with an initial solution  according to some neighbourhood function  with an upd  ′ created.As to how the particle tend to adopt a state which is an improvement over the algorithm generates a solution when the fitness value ( * ) becomes lower However, assume  * has the higher fitness, it will occasionally be accepted if probability shown in equation 3 is satisfied (Abdullahi & Ngadi, 2016).
Where ( * ) is the fitness evaluation functions and () the current solutions of th accordingly; and  represents the control parameter called the temperature.This determined according to the cooling rate used in (Abdullahi & Ngadi, 2016).ess, it will occasionally be accepted if the defined (Abdullahi & Ngadi, 2016).
tions and () the current solutions of the neighbour parameter called the temperature.This parameter is ed in (Abdullahi & Ngadi, 2016). (4)

9
On its own, however, SA becomes limited in locating the global optimal solution, as the computation time of the algorithm is believed to be shorter (Jonasson & Norgre, 2016; Gabi et al., 2017b).
At each iteration performed by the SA algorithm, a comparison between the currently obtained solution and a newly selected solution is carried out. A solution that shows improvement is always accepted (Moschakis & Karatza, 2015). Non-improving solutions are still accepted occasionally, since doing so makes it possible to escape being trapped at local optima while searching for a global optimal solution. Based on the probability defined in Equation 3, the acceptance of the non-improving ones is determined by the temperature parameter (Nikolaev & Jacobson, 2010). This makes the SA algorithm one of the most powerful optimization mechanisms.
The basic SA procedure is represented in Algorithm 3.
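The SA loop described above can be sketched in Python as follows. The neighbourhood function, initial temperature, cooling coefficient and iteration counts below are illustrative assumptions, not values taken from the paper; the sketch only shows the Metropolis acceptance rule and a geometric cooling schedule of the kind Equation 4 describes.

```python
import math
import random

def simulated_annealing(fitness, neighbour, x0, t0=100.0, alpha=0.95,
                        t_min=1e-3, iters_per_temp=20):
    """Minimise `fitness` starting from x0 using the Metropolis rule."""
    x, fx = x0, fitness(x0)
    best, fbest = x, fx
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            x_new = neighbour(x)
            f_new = fitness(x_new)
            delta = f_new - fx
            # Improvements are always accepted; worse moves are accepted
            # with probability exp(-delta / T), as in Equation 3.
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x, fx = x_new, f_new
                if fx < fbest:
                    best, fbest = x, fx
        t *= alpha  # geometric cooling schedule (Equation 4 style)
    return best, fbest

# Toy usage: minimise f(x) = x^2 over the integers
random.seed(1)
sol, val = simulated_annealing(lambda x: x * x,
                               lambda x: x + random.choice([-1, 1]),
                               x0=25)
```

As the temperature falls, the acceptance probability for worsening moves shrinks toward zero, so the search gradually turns from exploration into greedy descent.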

Limitation of Simulated Annealing to Cloud Task Scheduling
Although the SA has been regarded as a powerful local search probabilistic algorithm (Abdullahi & Ngadi, 2016), it iterates a number of times before finding an optimal or near-optimal solution. The repeated iterations may affect the computational complexity of the algorithm in solving the cloud task scheduling problem, thereby increasing the computational time. Similarly, the SA can get entrapped at the local optimal region, especially when the problem size is very large. Its ability to enhance the local search region without the support of a global search may not guarantee optimality (Wang et al., 2016). Therefore, it can be a powerful local search optimization process when combined with a greedy method to overcome its weaknesses.

Orthogonal Taguchi Method
The Orthogonal Taguchi is a greedy-based method developed by Dr. Genichi Taguchi of the Nippon Telegraph and Telephone Company in Japan (Gabi et al., 2016). One potential benefit of using the Taguchi method is its ability to solve complex problems while drastically reducing the computation time. The Taguchi method is used to address both single- and multi-objective optimization problems (Tsai et al., 2012; Tsai et al., 2013). Taguchi proposed a general formula for establishing an orthogonal array with two levels of Z factors using Equation 5 (Chang et al., 2015):

L_n(2^(n-1))   (5)

Where n - 1 symbolizes the number of columns in the two-level orthogonal array; n = 2^k is the number of experiments, corresponding to the n rows and n - 1 columns; 2 is the number of required levels for each factor Z; and k is a positive integer (k > 1). According to Taguchi, for any column pair, the combination of all factors at each level occurs an equal number of times. Algorithm 4 shows the pseudocode for the Taguchi optimization method (Gabi et al., 2017a).
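A two-level array of the L_n(2^(n-1)) form in Equation 5 can be generated programmatically. The parity-of-bitwise-AND construction below is one standard way to build such an array and is offered as an illustrative sketch; it is not claimed to be the construction used by the authors.

```python
def taguchi_two_level_array(k):
    """Build an L_n(2^(n-1)) two-level orthogonal array, n = 2^k (k > 1).

    Entry (r, c) is the parity of the bitwise AND of the row index r
    and column index c -- a standard two-level orthogonal array
    construction.
    """
    n = 2 ** k
    return [[bin(r & c).count("1") % 2 for c in range(1, n)]
            for r in range(n)]

# L4(2^3): 4 experiments, 3 two-level factors.
oa = taguchi_two_level_array(2)
# Taguchi's balance property holds: over any pair of columns, every
# level combination (0,0), (0,1), (1,0), (1,1) occurs n/4 times.
```

The balance property is exactly the statement in the text that, for any column pair, every factor-level combination occurs an equal number of times.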

Definition 1.1
Given D as the solution search space, let f : D → ℜ represent an objective function defined on the solution search space. Find X* ∈ D such that f(X*) ≤ f(X) ∀ X ∈ D.

Algorithm 4: Taguchi optimization method (Gabi et al., 2016).

The mutation process of the CSO at tracing (local search) mode is bound to affect performance, and this may end up not achieving an optimal solution for the cloud task scheduling optimization problem (Gabi et al., 2016). Similarly, for every iteration, the seeking (global search) mode and tracing (local search) mode of CSO are carried out independently, causing its position and velocity updates to exhibit a similar process. As a result, a very high computation time is bound to occur (Pradhan & Panda, 2012).
Although the chances of locating the global optima increase during the global search process, the algorithm may lose the ability to converge faster in tracing mode, and that may have a significant effect on solution finding. Hence, a special mechanism needs to be incorporated into the tracing (local search) mode procedure of the CSO to improve its convergence velocity, scalability, and quality of solution (Abdullahi & Ngadi, 2016). As a powerful local search optimization algorithm, Simulated Annealing (SA) employs a certain probability as prevention from being trapped at the local optima, although it may iterate a number of times before a near-optimal solution can be found. To overcome this, a Taguchi experimental design procedure can be used to enhance its performance by reducing the number of iterations. With the combination of the SA and Taguchi methods in CSO, a CSM-CSOSA algorithm for scheduling independent non-preemptive tasks in the cloud datacentre, for the purpose of ensuring consumers' QoS expectations, is proposed. The methodology that describes this process is elaborated in the next subsection.

CSM-CSOSA SA Local Search with Taguchi Method
With the proposed CSM-CSOSA algorithm, the tracing (local) search process can now move out of the local optima region (Abdullahi & Ngadi, 2016).
To control the performance parameters of the proposed CSM-CSOSA algorithm, the tracing search procedure was further enhanced with the Taguchi method and simulated annealing. Two sets of candidate velocities V_k,d1(t) and V_k,d2(t) (Gabi et al., 2016; Gabi et al., 2017a) were generated using the Taguchi method as shown in Equation 6. Details about the Taguchi method can be found in (Taguchi et al., 2000). The velocities control the efficiency and accuracy of the algorithm towards achieving an optimum solution:

V_k,d1(t) = V_k,d(t+1) + c1 × r1 × (X_best,d(t+1) - X_k,d(t+1))
V_k,d2(t) = V_k,d(t+1) + c1 × r1 × (X_best,d(t+1) - X_k,d(t+1))   (6)

Where V_k,d(t) is the velocity of the cat; c1 is the constant value of acceleration; r1 is a random number in the range [0, 1]; and t symbolizes the iteration number.
A non-dominant velocity among the generated velocities is selected to update the new position of the algorithm using the rule in Equation 7. At each iteration, a comparison between the currently obtained solution and a newly selected solution is carried out; a solution that improves is always accepted. The probability of accepting a neighbour solution into a new generation of cats using SA is obtained using Equation 11 (Abdullahi & Ngadi, 2016). The velocity set with the best convergence speed is selected by the CSM-CSOSA algorithm to update the new position of the next cat, provided the condition in Equation 8 is satisfied (Zuo et al., 2016), where r is a random number in [0, 1]. The position of the cat represents the solution of the cat. The cat with the best fitness is stored in an n × m archive at each run of the algorithm and is compared with the initial best solution in the archive based on a dominance strategy. Assume the i-th and j-th cats occupy positions in a D-dimensional search space given as X_i = (x_i1, x_i2, ..., x_id, ..., x_iD) and X_j = (x_j1, x_j2, ..., x_jd, ..., x_jD) respectively. A non-dominance strategy is adopted to determine the best fitness when the conditions in Equations 9 and 10 are satisfied (Abdullahi & Ngadi, 2016).
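The two candidate velocity updates of Equation 6 and the non-dominance test of Equations 9 and 10 can be sketched as follows. The acceleration constant c1 = 2.05 and the use of one independent random draw per candidate are illustrative assumptions, not parameter values from the paper.

```python
import random

def candidate_velocities(v, x, x_best, c1=2.05):
    """Two candidate velocity vectors in the spirit of Equation 6.

    Each candidate pulls the cat toward the best-known position; the
    Taguchi step would then pick the better-performing candidate.
    """
    r1, r2 = random.random(), random.random()
    v1 = [vd + c1 * r1 * (xb - xd) for vd, xd, xb in zip(v, x, x_best)]
    v2 = [vd + c1 * r2 * (xb - xd) for vd, xd, xb in zip(v, x, x_best)]
    return v1, v2

def dominates(f_a, f_b):
    """Pareto dominance (Equations 9-10, minimisation): solution a
    dominates b if it is no worse in every objective and strictly
    better in at least one."""
    return (all(a <= b for a, b in zip(f_a, f_b)) and
            any(a < b for a, b in zip(f_a, f_b)))
```

A non-dominated archive then keeps only solutions for which `dominates` returns `False` against every other archive member.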

Where f(.) denotes the fitness evaluation function. If the fitness value f(X′) is better than that of f(X_i), then for the minimization process the new fitness is accepted for an update with the probability defined in Equation 11, in which f(X′) and f(X_i) denote the fitness functions of the cat and the current solution respectively, and T represents the control parameter, which is the temperature. The CSM-CSOSA algorithm is illustrated in Algorithm 5.

Generate an empty non-dominant archive of size (n × m), initialized with uniform random numbers in [0, 1]
Output: Best solution with minimum total execution time and minimum total execution cost. Identify the best optimal solution for the trade-off values.

18. If Δf ≤ 0 or exp(-Δf/T) > rand(0, 1), accept the new solution // rand(0, 1) is a uniformly generated random number between 0 and 1
19. Apply the new fitness selection strategy based on Pareto dominance according to Equations 9 and 10
20. Reduce the temperature using Equation 4

Problem Description

In cloud computing, the attributes associated with the task scheduling problem are the Cloud Information System (CIS), the Cloud Broker (CB) and the Virtual Machines (VMs). Tasks are referred to as cloudlets in cloud computing. The CIS receives cloudlets {c1, c2, c3, ..., cn} from the cloud consumers, which are sent to the CB. A query is generated from the CIS to the CB in each datacentre for the required service to execute the received cloudlets. Assume {v1, v2, v3, ..., vm} represent heterogeneous VMs (which vary in capacity in both speed and memory) for executing each cloudlet; then the time a cloudlet spends executing on the VMs will determine the total cost per time quantum on all VMs. Therefore, the following assumptions are considered necessary for the scheduling: (i) two datacentres are considered sufficient for the task schedule; (ii) the two datacentres belong to the same service provider; (iii) transmission cost is ignored; (iv) cloudlets are dynamically assigned to VMs, where each VM handles at most one cloudlet at a time and the total number of all possible schedules is considered to be (n!)^m (Zuo et al., 2015) for problems with n cloudlets and m VMs; (v) a pre-emptive allocation policy is not allowed; (vi) the cost of using a VM for a time quantum varies from one VM to another per hour (/hr). Hence, the Expected Time to Compute (ETC) and the Expected Cost to Compute (ECC) matrices will be used for the scheduling decision.
The modelling of the execution time and execution cost objectives is as follows. Let C_i, ∀ i = {1, 2, ..., n}, denote the set of cloudlets that are independent of one another, scheduled on virtual machines V_j, ∀ j = {1, 2, ..., m}. The total execution time Texe_ij for all cloudlets executed on V_j can be calculated using Equation 12, and the execution time of cloudlets C_i, ∀ i = {1, 2, ..., n}, on V_j is computed using Equation 13:

Texe_ij = Σ_{i=1}^{n} exe_ij   (12)

exe_ij = C_i / (Vmips_j × Pe_num_j)   (13)

Where exe_ij is the execution time of running cloudlets on one virtual machine; C_i is the set of cloudlets in Million Instructions (MI) assigned to the virtual machine V_j; Vmips_j is the virtual machine speed in Million Instructions per Second (MIPS); and Pe_num_j is the number of processing elements (Gabi et al., 2016). Equation 15 is used to compute the cost of executing all cloudlets on V_j if and only if the cost of a virtual machine per time quantum is given per hour (/hr) (Ramezani et al., 2013), while Equation 16 computes the cost of executing cloudlets on V_j.
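The execution-time model of Equations 12 and 13 amounts to an Expected Time to Compute (ETC) matrix: each entry divides a cloudlet's length in MI by the effective VM speed (MIPS times processing elements). The sketch below assumes that standard reading; the example values are illustrative.

```python
def etc_matrix(cloudlets_mi, vm_mips, vm_pes):
    """Expected Time to Compute: exe[i][j] = C_i / (Vmips_j * Pe_num_j),
    in seconds, for cloudlet lengths in MI and VM speeds in MIPS."""
    return [[ci / (mips * pes) for mips, pes in zip(vm_mips, vm_pes)]
            for ci in cloudlets_mi]

# Illustrative example: two cloudlets of 1000 and 2000 MI on two VMs
# (500 MIPS with 1 PE, and 1000 MIPS with 2 PEs).
etc = etc_matrix([1000, 2000], vm_mips=[500, 1000], vm_pes=[1, 2])
```

The column sums of this matrix give the total execution time per VM in the sense of Equation 12.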

TTexecost_ij = Σ_{i=1}^{n} execost_ij   (15)

Where TTexecost_ij is the total cost of executing all cloudlets on V_j, and execost_ij is the cost of executing cloudlets on V_j (Ramezani et al., 2013):

execost_ij = exe_ij × Vcost_j   (16)

Where Vcost_j is the monetary cost of one unit of V_j in US dollars per hour. A mathematical model for the multi-objective task scheduling problem can be expressed as follows:

F(x) = min {Texe_ij, TTexecost_ij}   (17)

The fitness for the QoS, when the trade-off factors for the time and cost for the consumer service preference are considered, can be expressed as follows (Zuo et al., 2015; Beegom & Rajasree, 2015):

QoS = θ × Texe_ij + (1 - θ) × TTexecost_ij   (18)
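One hedged reading of Equations 15 and 16 is that the Expected Cost to Compute (ECC) entry multiplies an execution time by the VM's hourly price; converting execution times from seconds to hours before pricing is an added assumption here, since the paper quotes VM cost per hour.

```python
def ecc_matrix(etc_seconds, vm_cost_per_hr):
    """Expected Cost to Compute: cost[i][j] = exe_ij (in hours) * Vcost_j.

    `etc_seconds` is an ETC matrix in seconds; `vm_cost_per_hr` gives
    each VM's price in US dollars per hour. The seconds-to-hours
    conversion is an assumption, not stated in the source.
    """
    return [[(t / 3600.0) * c for t, c in zip(row, vm_cost_per_hr)]
            for row in etc_seconds]

# One cloudlet taking 3600 s on VM0 ($0.10/hr) or 1800 s on VM1 ($0.20/hr).
ecc = ecc_matrix([[3600.0, 1800.0]], vm_cost_per_hr=[0.10, 0.20])
```

Here both placements happen to cost the same, while the faster VM halves the execution time, which is exactly the kind of time/cost trade-off the multi-objective model of Equation 17 captures.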
Where θ ∈ [0, 1] is the control factor for the selection of the consumer service preference based on the time and cost objectives.
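The θ-weighted trade-off can be sketched as a weighted sum of the two objectives. The min-max normalisation below is an assumption added so that time and cost are on comparable scales; the bounds passed in would in practice come from the observed objective ranges.

```python
def qos_fitness(total_time, total_cost, theta, time_bounds, cost_bounds):
    """Weighted time/cost QoS fitness (lower is better).

    theta in [0, 1] expresses the consumer preference: theta = 1 weighs
    time only, theta = 0 weighs cost only. `time_bounds`/`cost_bounds`
    are (min, max) pairs used for min-max normalisation -- an added
    assumption, not stated in the source.
    """
    t_lo, t_hi = time_bounds
    c_lo, c_hi = cost_bounds
    t_norm = (total_time - t_lo) / (t_hi - t_lo)
    c_norm = (total_cost - c_lo) / (c_hi - c_lo)
    return theta * t_norm + (1 - theta) * c_norm

# A schedule at half the time range and half the cost range scores 0.5
# regardless of theta.
f = qos_fitness(5.0, 50.0, theta=0.7, time_bounds=(0, 10), cost_bounds=(0, 100))
```

Sweeping θ from 0 to 1 traces out the consumer's preference between the two objectives on the non-dominated front.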

Evaluation Metrics
The metrics used for evaluation are the execution time and execution cost, using the models presented in Equations 12 and 15, and the QoS (fitness) model in Equation 18, as well as a statistical analysis based on the percentage improvement rate (PIR%) using Equation 19:

PIR(%) = ((f_other - f_proposed) / f_other) × 100   (19)
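On a standard reading, the percentage improvement rate of Equation 19 is the relative reduction achieved by the proposed algorithm against a baseline; that reading is an assumption here, made consistent with the improvement percentages reported later.

```python
def pir_percent(baseline, proposed):
    """Percentage improvement rate of `proposed` over `baseline`
    for a minimisation metric such as execution time or cost."""
    return (baseline - proposed) / baseline * 100.0

# e.g. a baseline of 120 s reduced to 80 s is a one-third improvement.
improvement = pir_percent(120.0, 80.0)
```

A value of 0 means no improvement; negative values would indicate the proposed algorithm performed worse than the baseline.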

RESULTS AND DISCUSSION
The CloudSim simulator tool (Buyya et al., 2010) is used for the experiment. The CloudBroker policy of CloudSim is used to implement the algorithm, which is run with two (2) different datasets. The settings for each algorithm are shown in Table 1. The multi-objective task scheduling algorithms discussed in the introduction were used for comparison, i.e. the Multi-Objective Genetic Algorithm (MOGA) (Budhiraja & Singh, 2014), the Multi-Objective scheduling method based on Ant Colony Optimization (MOSACO) (Zuo et al., 2016) and the Multi-Objective Particle Swarm Optimization (MOPSO) (Ramezani et al., 2013).
Table 1. The parameter settings for the four task scheduling algorithms.


The parameter settings for the datacentres (as shown in Table 2) were based on (Gabi et al., 2016; Abdullahi & Ngadi, 2016).


The performance of the proposed CSM-CSOSA (on minimization of task execution time and execution cost) with the variation of its control parameters for consumer service selection preference is evaluated. The results are compared on the objectives of execution time and execution cost, extremely critical parameters for consumer QoS, for varying numbers of tasks, namely 100-1000 respectively. These experiments use two benchmark datasets, i.e. the normally distributed dataset and the HPC2N dataset (Abdullahi & Ngadi, 2016), and the experimental results are compared with three task scheduling algorithms (MOGA, MOSACO and MOPSO). Each algorithm is run for 30 simulation times and the average value is taken for the comparison. In Tables 3 and 4, the conducted experiments show the effectiveness of the scheduling algorithms; the results are summarized via an average value over the 30 simulation runs. According to the average values illustrated in Tables 3 and 4, it is clear that for the execution time and execution cost multi-objectives, the proposed CSM-CSOSA algorithm has balanced both the total execution time and total execution cost as the consumer requirement, which makes it superior to MOGA, MOSACO and MOPSO. In both Tables 3 and 4, based on the two different datasets used, it can be seen that with CSM-CSOSA task scheduling, the execution time and execution cost spent to complete the tasks are much lower than those spent with the MOGA, MOSACO and MOPSO algorithms.
It is shown that the execution time obtained has an influence on the cost performance.
Moreover, to give a better sense of the performance of the algorithms, some figures are illustrated to show the performance more explicitly. Figures 1-4 are plotted for execution time and execution cost based on the two different datasets used respectively. According to these figures, as the number of tasks keeps increasing from 100 to 1000, both the execution time and the execution cost of all four scheduling algorithms increase as well.
On execution time and execution cost minimization, the proposed CSM-CSOSA task scheduling algorithm operates better and outperforms the MOGA, MOSACO and MOPSO task scheduling algorithms. The increase in task size and the performance obtained also show that the proposed CSM-CSOSA is scalable, as well as capable of scheduling huge numbers of tasks with the lowest execution time in a heterogeneous environment. It also confirms that the CSM-CSOSA algorithm increases the quality of its solutions by balancing tasks on the best virtual machines with minimum execution time and execution cost. In addition, the fitness (QoS) function in Equation 18 is used to guide the optimization of the global best in the CSM-CSOSA algorithm and in the MOGA, MOSACO and MOPSO task scheduling algorithms.
The result is shown in Table 5; in all cases, CSM-CSOSA shows the best performance. In Table 6, the improvement of the proposed CSM-CSOSA algorithm over the three comparative scheduling algorithms using the normally distributed dataset shows that our proposed algorithm achieves improvements of 34.59%, 30.37% and 17.87% in terms of total average execution time. A similar analysis using the HPC2N dataset is reported in Table 7 in terms of execution time, where the performance improvements achieved by the four scheduling algorithms are given. The analysis shows that CSM-CSOSA was able to reduce the execution time by 47.86%, 42.68% and 11.31% compared to MOGA, MOSACO and MOPSO respectively. The performance recorded by our proposed algorithm is due to the combination of Simulated Annealing (SA) and the Taguchi approach incorporated at the local search of the CSM-CSOSA, which guides the algorithm toward position updating without affecting the computational complexity. This approach also helps our proposed algorithm return the local best solution as fast as possible, which is also attributed to the significant choice of velocity.
The CSM-CSOSA has also been shown to improve the quality of its solutions at the later stage of the search procedure, making it more efficient for cloud task scheduling (Gabi et al., 2018).

Scalability Analysis of the Scheduling Algorithms
To further unveil the performance of our proposed CSM-CSOSA task scheduling algorithm together with the three comparative algorithms, a scalability analysis is conducted. This process enables us to gain insight into the scalability of the proposed algorithm towards scaling with large workloads and with changes in the number of virtual processing elements (Chen et al., 2008). Kumar and Kao (1987) put forward a measuring criterion known as the Isoefficiency metric to account for the scalability of a system. In the context of cloud computing, scalability can be seen as an algorithm-Virtual Machine (VM) combination.
According to Sun and Rover (1994), an algorithm-VM combination is scalable when the average execution time exhibited remains constant even when a rescaling of the processing elements and problem size occurs. Hence, considering the heterogeneity of cloud computing resources, the Isospeed-efficiency scalability metric proposed in (Chen et al., 2008) for calculating the scalability of an algorithm based on machine dependence is adopted for the scalability investigation. In this study, the expected value for the scalability performance is considered to be in the range 0 < S < 1. The Isospeed-efficiency scalability function S(W, W′) for computing the scalability is illustrated in Equation 20 (Chen et al., 2008):

S(W, W′) = (C × W′) / (C′ × W)   (20)

Where C is the initial execution time achieved by the algorithms based on the configured number of processing elements on the virtual machines; C′ is the scaled execution time when the processing elements increase on the virtual machines; W is the initial workload (tasks) assigned on the virtual machines; and W′ is the rescaled workload (tasks) assigned on the virtual machines. To compute the scalability of the proposed algorithm, one Parallel Workload, i.e. the HPC2N dataset with 527,371 tasks, was considered, and 5000-14000 task instances drawn from the workload were used in the experiment. Processing elements from 5-50 are assigned to virtual machines. The results associated with each algorithm based on the obtained execution time are shown in Table 8, while the computed scalability performance is reported in Table 9. The scalability computation for each algorithm is carried out using the example for the MOGA algorithm shown in Equation 21.
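Using the quantities defined above (initial time C and workload W, rescaled time C′ and workload W′), the Isospeed-efficiency scalability of Equation 20 reduces to a single ratio. The sketch below assumes that form; the workload and timing figures in the example are illustrative, not values from Tables 8 and 9.

```python
def isospeed_e_scalability(c_initial, w_initial, c_scaled, w_scaled):
    """Isospeed-efficiency scalability S(W, W') = (C * W') / (C' * W).

    A value of 1.0 means average speed per unit workload is fully
    maintained after rescaling; values in (0, 1) indicate partial
    scalability, as in the 0.4811-0.8990 range reported in the study.
    """
    return (c_initial * w_scaled) / (c_scaled * w_initial)

# Illustrative example: workload doubles from 5000 to 10000 tasks while
# execution time grows from 10 s to 25 s -> partial scalability.
s = isospeed_e_scalability(10.0, 5000, 25.0, 10000)
```

If execution time grows exactly in proportion to workload, the metric returns 1.0; the further time growth outpaces workload growth, the closer the value falls toward 0.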
Where,   is the initial execution time achieved by the algorithms based on config processing elements on virtual machines,is the scaled execution time when element increases on virtual machine,   is the initial workload (tasks) assig machine,  is the rescaled workload (tasks) assigned on virtual machines.T scalability of the proposed algorithm, one Parallel Workload, i.e., the HPC2N da 371 tasks were considered and 5000−14000 tasks instances drawn from the work 24 Kumar & Kao (1987) put forward measuring criteria known as Isoefficiency met for the scalability of a system.In the context of cloud computing, scalability can algorithm-Virtual Machine (VM) combination.According to Sun & Rover (1994), an algorithm in relation to VM combination is when an average execution time exh constant even when a re-scaled in processing element and problem size oc considering the heterogeneity of cloud computing resources, an Isospeed-efficien metric (,  ′ ) proposed in (Chen et al., 2008) for calculating the scalability of based on machine dependance is adopted for the scalability investigation.In t expected value for the scalability performance is considered to be in the ranges 0 Isospeed-efficiency scalability function  (,  ′ ) for computing the scalability is equation 20 (Chen et al., 2008).
Where,   is the initial execution time achieved by the algorithms based on configur processing elements on virtual machines,is the scaled execution time when th element increases on virtual machine,   is the initial workload (tasks) assign machine,  is the rescaled workload (tasks) assigned on virtual machines.To scalability of the proposed algorithm, one Parallel Workload, i.e., the HPC2N data 371 tasks were considered and 5000−14000 tasks instances drawn from the worklo in the experiment.Processing elements from 5−50 are assigned to virtual machin associated with each algorithm based on the obtained execution time is shown in computed scalability performance is reported in Table 9.The scalability compu algorithm is carried out using the following example for MOGA algorithm shown  In the aforementioned Table 9, the proposed CSM-CSOSA algorithm is able to maintain better scalability performance by returning an acceptable value of 0.4811, 0.6986, 0.8630, 0.8990 for the HPC2N dataset compared to that of MOGA, MOSACO and MOPSO task scheduling algorithms.These values, however, shows that the proposed algorithm can respond to the dynamic changing cloud task and resource condition than the comparative algorithms under consideration.
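As a minimal sketch of Equation 20, assuming the isospeed-efficiency form ψ(W, W′) = (T × W′)/(T′ × W) with the variables defined above (the workload and timing numbers below are illustrative, not taken from the experiments):

```python
# Sketch (not the paper's code): Isospeed-efficiency scalability,
# psi = (T * W') / (T' * W), using the variable definitions above.
def isospeed_efficiency(t_initial, w_initial, t_scaled, w_scaled):
    """Return the scalability value psi for one initial/scaled pair.

    t_initial: execution time on the initial VM configuration (T)
    w_initial: initial workload, number of tasks (W)
    t_scaled:  execution time after the processing elements are re-scaled (T')
    w_scaled:  re-scaled workload, number of tasks (W')
    """
    return (t_initial * w_scaled) / (t_scaled * w_initial)

# Hypothetical numbers: workload grows 5000 -> 14000 tasks while execution
# time grows from 100 s to 280 s.
psi = isospeed_efficiency(100.0, 5000, 280.0, 14000)
print(psi)  # 1.0 here, because time grew exactly in proportion to work
```

A value of ψ = 1 corresponds to ideal scaling; values between 0 and 1, as reported in Table 9, indicate how closely each algorithm approaches that ideal.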

CONCLUSION
The unpredictable number of tasks arriving at a cloud datacentre and the rescaling of virtual machine processing elements during task scheduling affect the provisioning of better QoS expectations. Dynamic task scheduling algorithms are considered effective for addressing this kind of problem but are truly complex to develop. Previous authors have contributed immensely through the provision of several task scheduling algorithms, but at the expense of scalability. In this study, we introduce Cloud Scalable Multi-Objective Cat Swarm Optimization based on Simulated Annealing (CSM-CSOSA), which considers the dynamicity of the cloud computing environment to improve QoS. The effectiveness of the algorithm is evaluated using a multi-objective model for the time and cost criteria. The novelty of the proposed method lies in the use of SA and the Taguchi method to enhance the local search procedure of the algorithm in exploring a larger search space, which eventually yields better optimum solutions. The performance of CSM-CSOSA is compared with some existing metaheuristic task scheduling algorithms (MOPSO, MOSACO and MOGA) on one dataset and one parallel workload. The results obtained show that the proposed method achieves remarkable performance, returning good QoS as well as better scalability performance of 0.4811, 0.6986, 0.8630 and 0.8990 compared to the comparative algorithms. In the future, the study aims to look at privacy-aware scheduling that protects the sensitive information associated with tasks.

Algorithm 1: Pseudocode for CSO seeking mode
Do
1. Generate N copies of cat_k.
2. Change at random the dimensions of the cats as per CDC by applying the mutation operator.
3. Determine all changed cats' fitness values.
4. Discover the most suitable (non-dominated) cats based on their fitness values.
5. Replace the position of cat_k after picking a candidate at random.
While the stopping condition is not exceeded.

Algorithm 2: Pseudocode for CSO tracing mode
Begin
1. Compute and update the cat velocity using the new velocity in Equation 1.
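The tracing-mode velocity update referenced in Algorithm 2 (the paper's Equation 1) can be sketched as follows; the constant c = 2.0 and the two-dimensional cat are illustrative assumptions, not values from the paper:

```python
import random

# Sketch of the CSO tracing-mode step: each cat's velocity is pulled toward
# the best cat found so far, then the cat moves with the updated velocity.
def tracing_mode_step(position, velocity, best_position, c=2.0):
    new_velocity = []
    new_position = []
    for x, v, x_best in zip(position, velocity, best_position):
        r = random.random()               # r ~ U(0, 1), drawn per dimension
        v_new = v + r * c * (x_best - x)  # velocity update toward the best cat
        new_velocity.append(v_new)
        new_position.append(x + v_new)    # position update with new velocity
    return new_position, new_velocity

# A cat at the origin with zero velocity is pulled toward the best cat (1, 1).
pos, vel = tracing_mode_step([0.0, 0.0], [0.0, 0.0], [1.0, 1.0])
```

Since the starting position and velocity are zero here, each new coordinate equals its new velocity component and lies between 0 and 2.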

At each iteration, the seeking (global search) mode and the tracing (local search) mode of CSO are carried out independently, causing its position and velocity updates to exhibit a similar process. As a result, a very high computation time is bound to occur (Pradhan & Panda, 2012). Therefore, a local search optimization algorithm incorporated at the local search of the CSO is sufficient to address its limitations.

Simulated Annealing

Simulated Annealing (SA) is a local search probabilistic approximation algorithm introduced by Kirkpatrick et al. (1983).
The CSO cannot guarantee an optimal solution without the support of a local search optimization. The CSO suffers local entrapment despite its global solution finding merit, because the number of cats going into seeking mode (global search) all the time exceeds the ones in tracing mode (local search mode). This may cause the mutation at the tracing (local search) mode to affect performance and may end up not finding an optimal solution for the cloud task scheduling optimization problem (Gabi et al., 2016).

Algorithm 3: SA pseudocode
INPUT: Initial temperature T_0, final temperature T_f, temperature change counter t = 0, cooling schedule α, number of iterations N_iter
OUTPUT: Best optimum solution found
1. Generate an initial solution s ∈ S
2. Repeat
3.   Initialize the repetition counter n ← 0
4.   Repeat
5.     Generate a new solution s′ ∈ N(s), where N(s) is the neighbourhood of s
6.     Compute Δ according to Equation 3
7.     If Δ ≥ 0, decide whether to accept or reject the new solution based on the acceptance probability P(s, s*, T)
8.     Repetition counter n ← n + 1
9.     Memorize the optimum solution found so far
10.  Until n = N_iter
11.  t ← t + 1
12. Until the stopping criterion is exceeded
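The acceptance step and cooling loop of Algorithm 3 can be sketched as below; the geometric cooling schedule T ← αT and the parameter values are common defaults assumed here, not necessarily the paper's Equation 3:

```python
import math
import random

# Sketch of the SA acceptance rule (minimization): an improving move is
# always accepted; a worsening move is accepted with probability exp(-dE/T).
def accept(delta, temperature):
    if delta < 0:                      # new solution is better: accept
        return True
    return random.random() < math.exp(-delta / temperature)

# Sketch of the outer/inner loop structure of Algorithm 3.
def anneal(cost, neighbour, s, t0=100.0, tf=0.01, alpha=0.9, n_iter=50):
    best = s
    t = t0
    while t > tf:                      # outer loop: cool until T_f is reached
        for _ in range(n_iter):        # inner loop: N_iter moves at this T
            s_new = neighbour(s)
            delta = cost(s_new) - cost(s)
            if accept(delta, t):
                s = s_new
                if cost(s) < cost(best):
                    best = s           # memorize the optimum found so far
        t *= alpha                     # cooling schedule: T <- alpha * T
    return best
```

For example, minimizing the one-dimensional cost f(x) = (x − 3)² with a ±1 uniform neighbourhood converges close to x = 3.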

Algorithm 5: Proposed CSM-CSOSA Algorithm
Begin
Input: Initialize the cat parameters: create the population of cats cat_k ∀ k = {1, 2, 3, …, N}; initialize the velocity and flag number. Initialize the SA parameters: initial temperature T_0, final temperature T_f, rate of cooling α. Generate an empty non-dominant archive of (n × m) size of uniform random numbers [0, 1].
Output: Best solution with minimum total execution time and minimum total execution cost. Identify the best optimal solution for the trade-off values.
(continued)

Figure 3. Average execution time − HPC2N dataset

SA uses a neighbourhood and a fitness function to avoid entrapment at the local optima (Johnson & Norgre, 2016). The SA algorithm often begins from an initial state, generating an updated solution through the neighbourhood function N(s) and adopting a state which is an improvement over the current one when the fitness value f(s*) becomes lower than f(s).

Cloud Scalable Multi-Objective Cat Swarm Optimization Based Simulated Annealing
X is the vector of optimization variables X = {x1, x2, …, xn}. Therefore, each function associated with solution X has an optimal solution X* that optimizes f. Several swarm intelligence techniques get entrapped at the local optima (Habibi & Navimipour, 2016). The real CSO technique is no different. As rightly highlighted, the CSO has a control variable called the Mixed Ratio (MR) that defines the cat position (seeking or tracing mode). Assume the MR is set to 1; this allows 10% of the cats into tracing mode (local search) while 90% of the cats move into seeking (global search) mode. The number of cats that go into seeking mode (global search) always exceeds that of tracing mode (local search mode).

Taguchi Optimization Algorithm
Begin
1. Select a two-level orthogonal array for the matrix experiments such that L_n(2^{N−1}), where N represents the task numbers.
End
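A two-level orthogonal array of the kind selected in step 1 can be written out directly; the L4(2^3) array below is the smallest example (four experiments, three two-level factors) and is shown only to illustrate the orthogonality property:

```python
# Sketch: the two-level orthogonal array L4(2^3), with levels written as 0/1.
# In an orthogonal array, every pair of columns contains each of the four
# level combinations (0,0), (0,1), (1,0), (1,1) exactly once.
L4 = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]

# Orthogonality check over all column pairs.
for a in range(3):
    for b in range(a + 1, 3):
        pairs = {(row[a], row[b]) for row in L4}
        assert pairs == {(0, 0), (0, 1), (1, 0), (1, 1)}
```

The Taguchi step uses such an array to test factor-level combinations with far fewer experiments than a full factorial design.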
17. Compute Δ = f(s′) − f(s) (Gabi et al., 2016)

where ET_ij is the execution time of running cloudlets on one virtual machine; Size_i is the cloudlet size in Million Instructions (MI) assigned on the virtual machine VM_j; VM_speed_j is the virtual machine speed in Million Instructions per Second (MIPS); and PE_num is the number of processing elements (Gabi et al., 2016). Equation 15 is used to compute the cost of executing all cloudlets on all VMs if and only if the cost of a virtual machine per time quantum is given per hour (Ramezani et al., 2013), while Equation 16 computes the cost of executing cloudlets on one virtual machine:

EC = Σ_{j=1}^{m} EC_j    (15)
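The execution-time and execution-cost quantities described above can be sketched as follows; the function names, the per-hour unit cost, and the cloudlet sizes are illustrative assumptions, not values from the paper:

```python
# Sketch: a cloudlet's execution time on a VM is its size in Million
# Instructions divided by the VM's effective speed (MIPS x processing
# elements); the total cost assumes the VM is billed per hour of use.
def execution_time(size_mi, vm_speed_mips, pe_num):
    return size_mi / (vm_speed_mips * pe_num)

def execution_cost(times_sec, cost_per_hour):
    total_seconds = sum(times_sec)
    return (total_seconds / 3600.0) * cost_per_hour

# Three hypothetical cloudlets on one VM (1000 MIPS, 2 processing elements).
times = [execution_time(s, 1000.0, 2) for s in (4000.0, 6000.0, 8000.0)]
cost = execution_cost(times, cost_per_hour=0.5)
```

For instance, a 4000 MI cloudlet on this VM takes 4000 / (1000 × 2) = 2 seconds; the cost follows by converting the summed seconds to hours.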

Table 3
Comparison on Execution Time (sec) and Execution Cost (/hr) − Normal Distributed Dataset

Table 5
Comparison on Estimated total QoS Minimized

Table 6
Comparison on Improvement (%) based on Execution Time − Normal Distributed Dataset