SELF-ADAPTIVE MODEL BASED ON GOAL-ORIENTED REQUIREMENTS ENGINEERING FOR HANDLING SERVICE VARIABILITY

Service systems currently face environmental complexity problems, such as the need for distributed, heterogeneous, decentralized, and interdependent systems that operate dynamically and unpredictably. This condition requires a service system to be able to adapt in order to sustain its functions. The success of service adaptation is determined by its ability to handle variability at runtime. The purpose of this research is to realize service flexibility through variability modeling, extending previous work to enrich the adaptability view. The methodology was developed through the monitor-analyze-plan-execute-knowledge control-loop approach integrated into the adaptive service (service level) element of the adaptive enterprise service system metamodel based on goal-oriented requirements engineering. Service adaptation scenarios were prepared through proactive and reactive adaptation mechanisms. For evaluation, the model was applied to the case of a configuration management system. The experimental results show that the model is able to adapt to runtime variability and accommodates the growth of service component items, as shown by the description of the system's scalability. The proposed model thus offers a better alternative design for analyzing variability.

Journal of ICT, 19, No. 2 (April) 2020, pp. 225-250. Received: 31/10/2018; Revised: 2/5/2019; Accepted: 8/5/2019; Published: 31/3/2020.


INTRODUCTION
The service system has now become an important part of various activities, where different elements of the real-world system can interact with it. The involvement of various elements and activities raises complexity issues in its development, for example characteristics of system entities related to rapid organizational growth, ubiquitous hardware, and a dynamic and unpredictable environment. These conditions require the service system to be able to adapt to environmental characteristics and uncertainty at runtime. The main factor behind this uncertainty is runtime variability, which refers to changes in the system requirements, the environment, related systems, and the system itself (Abbas & Anderson, 2017). In our previous work (Surendro, Aradea, & Supriana, 2016), we introduced a requirements engineering for cloud computing adaptive (RECCA) model focused on cloud service variability. In that work, we proposed three views, namely the architectural view, the alignment view, and the adaptability view, through which the requirements engineering process captures the service system requirements. However, the adaptability view only focused on providing external services, namely cloud services. Meanwhile, a service system within a real enterprise will also require services provided by internal parties. This research extends the capability of the adaptability view, where the service flexibility factor becomes the main focus so that the system can adapt to service requirements provided by both external and internal parties. Providing these services addresses runtime variability and the growth of service items.
Based on a review of related works, there are still some missing pieces that have motivated us to conduct this research. For example, the results of Qureshi, Jureta, and Perini (2012), Clark, Warnier, and Brazier (2011), Morandini, Penserini, and Marchetto (2017), and Mendoca, Rodrigues, Alves, Ali, and Baresi (2016) indicate the need to further investigate the dynamic evolutionary needs of the requirements model for service variability. Meanwhile, Abuseta and Swesi (2015), Arciani, Riccobene, and Scandura (2015), Knauss, Damian, Franch, Rook, Muller, and Thomo (2016), and Paz and Arboleda (2016) have utilized the advantages of autonomic computing to develop adaptation mechanisms. However, the representation of domain models (goal models) in this concept has yet to be investigated. Based on these facts, we see an opportunity to address limitations in the adaptive enterprise service system (AESS) metamodel proposed by Surendro et al. (2016). A more detailed discussion of this gap is presented in the related works section. This paper introduces the handling of service variability, where the lifecycle of the adaptive service element in the AESS metamodel is formulated as a monitor-analyze-plan-execute-knowledge (MAPE-K) pattern through goal-oriented requirements engineering (GORE). Adaptation mechanisms are developed through two strategies. The first is proactive adaptation, prepared through a set of variability rules to anticipate changes in the service context. The second is reactive adaptation, prepared through a set of evolution rules to follow up on the need for additions or changes to new service system functions at runtime.

RELATED WORKS
There have been several works on the concept of self-adaptive services. For example, Perini (2012) discussed various challenges related to requirements engineering for self-adaptive service-based applications, in which challenge viewpoints are defined for design-time and runtime requirements. The model proposed in this paper may be regarded as one answer to that challenge. Qureshi and Perini (2010) proposed a framework for continuous adaptive requirements engineering (CARE) supporting self-adaptive service-based applications, using the Techne language to map the goal model into domain ontologies. This concept can help in detailing the behavior of the system to meet its goals and adaptation actions; however, the reasoning mechanism for changes in domain assumptions, preferences, and contexts still requires further research. Meanwhile, our research proposes a dynamic rule model so that runtime reasoning can be done automatically. Clark et al. (2011) introduced self-adaptive monitoring services that adapt to changes based on risk levels. This model focused on service monitoring capabilities to respond to change. Our proposed model is not only prepared for handling changes but also accommodates system evolution requirements, as the growth of service items also becomes one of the actions resulting from monitoring. Anna et al. (2019) proposed a requirements engineering model for adaptive systems based on the goal model, and Mendoca et al. (2016) proposed a model of contextualized runtime goals through a probabilistic approach. However, dynamic evolution requirements are still not covered in these works, while our model provides this capability through the plug-and-play model. The expansion of autonomic computing (Abeywickrama & Ovaska, 2017) has now become a major concern of researchers in developing self-adaptive models. Arciani et al.
(2015) introduced a framework for modeling and validating distributed self-adaptive service-oriented applications using formal methods. Further, Knauss et al. (2016) introduced a model of contextual requirements using machine learning and data mining approaches. Paz and Arboleda (2016) also proposed a model to guide dynamic adaptation planning with formal methods. These works focused on the generic functions of the MAPE-K control loop for reasoning at runtime. In our model, however, entities from the problem domain are represented as a domain model (goal model) as an additional concern, which gives it advantages in capturing the requirements domain. Meanwhile, Surendro et al. (2016) adopted the AESS metamodel to handle service variability in service catalogs, limited to providing external services.
Based on these related works, we argue that handling service variability can be improved through the ability to realize the dynamic evolution of service requirements based on adaptation patterns embedded in the service level elements of the AESS metamodel. Service requirements are defined through GORE to represent domain models. Meanwhile, adaptation strategies are realized through a generic function of the MAPE-K pattern. The proposed method section discusses in more detail the approach used in this paper.

RESEARCH METHODOLOGY
This research is divided into five phases, as presented in Table 1. Phase 1 reviews related research to identify gaps and define the research problem. In Phase 2, the research problem is formulated, namely how to handle service variability in the AESS metamodel. Phase 3 defines the elements the model needs by mapping the AESS metamodel into the requirements model and its control needs, obtaining three views of the model: architecture, alignment, and adaptability. The results of the third phase are then used in Phase 4 to improve the adaptability view by introducing approaches at each level of the AESS metamodel through the integration of goal-oriented requirements engineering and the MAPE-K adaptation cycle. Finally, in Phase 5, an evaluation is carried out to show that the proposed model provides relevant contributions. The empirical evaluation is conducted through a case study discussion using domain Quality Attribute Scenarios (dQAS) and the Adaptive Capability Maturity Model (ACMM).

PROPOSED METHOD
In order to realize self-adaptation capabilities for service variability, we utilized methods from previous research. For example, Abuseta and Swesi (2015) proposed design patterns for a MAPE-K model; some of these patterns are used in our proposed model and extended with a plug-and-play capability as a form of service system evolution. The research of Nakagawa, Ohsuga, and Honiden (2012) also inspired the proposed model, whose features our model enriches. Further, Morandini et al. (2017) proposed Tropos4AS, in which the primitives of the goal model are used as the requirements description; we adopt and extend this in our model. That work has advantages in capturing context variability, and we equip it with domain assumptions through the rule editor and embed the control-loop approach. The configuration developed in the proposed model extends our previous work (Aradea, Supriana, Surendro, & Darmawan, 2017a; 2017b) to enrich the adaptability view of the RECCA model. We introduced three requirements views in that model: architecture, alignment, and adaptability. The architectural view serves to understand the environment; the results of that understanding are then mapped into the service system requirements through the alignment view; finally, the adaptability view determines the adaptation mechanism. The adaptability view was realized through an event-condition-action (ECA) method that represents the MAPE-K concept, but it did not explicitly define a mechanism for mapping the goal model as a requirements description. In addition, the function of the adaptability view was focused only on cloud service variability. The proposed model is constructed to complement the adaptability view by preparing a more flexible adaptation mechanism for service variability.
In the AESS metamodel, the design principles consist of agility, a living system (system of systems), and service principles, where the core elements are divided into three levels: adaptive enterprise service system (enterprise level), adaptive service system (capability level), and adaptive service (service level) (Gill, 2015). The enterprise level is a conceptual element of the adaptive enterprise architecture metamodel, while the capability level is defined as a system abstraction that can represent a human individual, function, business unit, department, team, etc. In the RECCA model, both levels are configured through the architectural view, to capture environmental elements, and the alignment view, to define their service requirements. In this paper, we focus on the adaptability view for the service-level extension that is context aware and continually evolving and self-adapting. Figure 1 illustrates an overview of the proposed self-adaptive model, an extension of the RECCA model based on the principles of the AESS metamodel. The enterprise and capability levels are defined as the domain model by adopting goal-oriented requirements engineering (goal model), and the service level is realized as a control strategy through the MAPE-K adaptation cycle, which consists of scan-and-sense (monitor), analyze, plan, and execute functions governed by shared knowledge. The decomposition of the goal model (functional) represents the service requirements (R) in every sub-goal; each sub-goal is influenced by its properties and makes a positive or negative (++/+ or -/--) contribution to a soft-goal (non-functional). The role of the control strategy (MAPE-K) is to identify and monitor possible changes in the service.
The decomposition model adopts the concept of component mapping (Nakagawa et al., 2012) to software components (Hirsch, Kramer, Magee, & Uchitel, 2006), utilizes several design patterns (Abuseta & Swesi, 2015), and modifies them in accordance with the requirements of service systems. Figure 2 illustrates the model for transforming goals into software components. Each parent goal with an AND-decomposition is defined as a goal to analyze and plan (AP), while every child goal is defined as a goal to monitor (M) and execute (E), all fully regulated in the knowledge (K).
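The goal-to-component mapping in Figure 2 can be sketched in code. A minimal illustration, assuming a goal tree whose AND-decomposed parents become AP components and whose leaf goals become M and E components (the class and function names are hypothetical, not the authors' implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    children: list = field(default_factory=list)  # AND-decomposition when non-empty

def map_goals(goal, components=None):
    """Map a goal tree to MAPE-K component roles (parent -> AP, child -> M and E)."""
    if components is None:
        components = {"M": [], "AP": [], "E": []}
    if goal.children:                       # parent goal with AND-decomposition
        components["AP"].append(goal.name)  # analyze-and-plan component
        for child in goal.children:
            map_goals(child, components)
    else:                                   # leaf goal
        components["M"].append(goal.name)   # monitor component
        components["E"].append(goal.name)   # execute component
    return components

# Example: a "service delivery" parent goal with two sub-goals
root = Goal("service delivery", [Goal("role detection"), Goal("service release")])
print(map_goals(root))
```

Each composite component in Figure 4 would then wrap one AP together with the M and E components of its child goals.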

Figure 2. Goal Mapping to MAPE-K Components.
The control strategy for adjusting each component starts with the M (monitor) component function, as shown in Algorithm 1. A number of properties (P) on the goal (G) model (m) must be monitored concurrently. This activity represents the runtime states, which are time-triggered or event-triggered to respond to requests or events. The state (S) of the system at runtime is represented by a combination of internal and external property values. A violation of the state is detected as any violation of the threshold of a goal property, and the new state is stored in the system state log to be analyzed.
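Algorithm 1 can be sketched as runnable code. A minimal illustration (the sensor stubs, property names, and thresholds are assumptions, not the paper's actual properties):

```python
# Sketch of the M (monitor) component: read each goal property, detect
# threshold violations, and log the new runtime state S for analysis.

def monitor(properties, thresholds, state_log):
    """properties: {name: sensor_fn}; thresholds: {name: limit}."""
    state = {name: read() for name, read in properties.items()}  # runtime state S
    violations = [name for name, value in state.items()
                  if value > thresholds[name]]                   # threshold check
    state_log.append(state)                                      # persist for AP
    return state, violations

# Example: two monitored properties with stubbed sensor readings
props = {"cpu_load": lambda: 0.91, "response_time": lambda: 0.3}
limits = {"cpu_load": 0.80, "response_time": 1.0}
log = []
state, violated = monitor(props, limits, log)
print(violated)  # → ['cpu_load']
```

The violations list corresponds to the symptoms handed to the AP (analyze and plan) component described below.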

Algorithm 3. Execute of Plan

for all δ found do
    a ← construct correctiveAction(addAction)
    changePlan ← newChangePlan(a_n)
    send changePlan to one or more executors
    for each a in executor do
        actuator ← update(a_n)                      // one or more actuators
        S.system ← reconfiguration m with actuator  // set new value for C(G.Node)
        systemStateLog ← saveState(S.system)
    end for
end for

A violation of the system goal is analyzed based on the symptoms list. If the analysis detects symptoms, the system accepts the adaptation request and then reconfigures based on the rule engine. Algorithm 2 shows the reconfiguration algorithm for the AP (analyze and plan) component. The rule engine contains high-level goals that control the operation and functions of the related systems; its general form is event-condition-action (ECA) rules. In our version, the rule engine is extended with a rule editor model, where specification changes can be made by editing the knowledge base directly or by feeding changes back into the system. Each adaptation request is represented as a system state (S). A set of S contains the context (goal model) and the expected action for the target system. The change plan contains the adaptation actions to be executed by the E (execute) component. The execute component (Algorithm 3) uses a number of actuators to set the new values of the target system properties. Adaptation strategies are developed through two rule sets, namely a set of variability rules for proactive adaptation and a set of evolution rules for reactive adaptation. Proactive adaptation is prepared to anticipate changes in context information, identified based on arising symptoms or events.
The type of adaptation is formulated through ECA rules as follows:

WHEN <event>              ; current situation when there is a change in service
IF <condition>            ; certain events that occur so that the appropriate action is activated
THEN <action>             ; adjustments to service changes for reactive adaptation behavior
VALID-TIME <time_period>  ; validity period for the service adaptation

Reactive adaptation is prepared to follow up on the need for service updates based on the results of operations from proactive adaptation. This type of adaptation utilizes the service-level scheme in the AESS metamodel, where each service instance in the service catalog is generated based on the results of the MAPE-K pattern analysis, so that service requirements can be activated according to the prevailing conditions. The rule specifications for both proactive and reactive adaptation can be defined through the rule editor according to stakeholders' preferences and requirements.
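A minimal sketch of how such WHEN/IF/THEN/VALID-TIME rules could be evaluated at runtime (the engine and rule names are illustrative, not the paper's rule editor):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EcaRule:
    when: str                          # event name (WHEN)
    condition: Callable[[dict], bool]  # guard on the context (IF)
    action: Callable[[dict], None]     # adaptation action (THEN)
    valid_time: tuple                  # (start, end) period of applicability

def fire(rules, event, context, now):
    """Run every rule matching the event whose condition holds within VALID-TIME."""
    fired = []
    for r in rules:
        if (r.when == event and r.valid_time[0] <= now <= r.valid_time[1]
                and r.condition(context)):
            r.action(context)
            fired.append(r)
    return fired

# Example: switch the active service instance when a device change is detected
ctx = {"device": "mobile", "service": "default"}
rule = EcaRule(
    when="device_change",
    condition=lambda c: c["device"] == "mobile",
    action=lambda c: c.update(service="mobile_ui"),
    valid_time=(0, 24),
)
fire([rule], "device_change", ctx, now=10)
print(ctx["service"])  # → mobile_ui
```

Editing the rule list at runtime would play the role of the rule editor described above.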

EVALUATION AND RESULTS
The discussion presented in this section is an extension of the configuration management system case based on the ITIL framework (OGC, 2007), which is now widely used by large companies for the provision of IT services. The main target of this experiment is scalability, which accommodates users' requirements in accessing an application service under change, assesses the characteristics of quality attributes in handling variability at runtime, and measures the adaptive capability maturity level. The goal modeling (GORE) is shown in Figure 3, while the mapping of the system components is shown in Figure 4. The mapping follows Figure 2: each AND-decomposition in the goal model generates a composite component, so that three composite components are obtained, namely the user interface, service application, and service delivery. Each composite component represents the adaptation cycle through three types of primitive components, namely M (monitor), AP (analyze and plan), and E (execute). The links of the composite and primitive components are defined by two types of ports, provider service ports and required service ports, based on the links formed from the goal modeling results. The mapping in Figure 4 generates an adaptability pattern for user requirements and service requirements, represented by the service delivery function in the service application.

Experiment
The property monitored in the context element is the service item, known as the configuration item (CI), which consists of hardware, software, peripherals, and network equipment. Handling these changes can be classified into two types of adaptation: proactive adaptation and reactive adaptation. Proactive adaptation determines which components need to be updated, added, and/or deleted; it can be assigned to a symptom or event that can be identified. The settings of all these events can be expressed as rules, for example: access device events, when a new device is detected or a device becomes unavailable; authority events, when a permission mismatch or a change in the user's role is detected; and feature and service time events, when unavailability of features and/or service time beyond the threshold is detected in the service catalog.
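The event categories above can be written down as a small rule table. The encoding below is purely illustrative; the trigger descriptions follow the text, while the actions are assumptions:

```python
# Proactive-adaptation events for configuration items (CI), encoded as a rule table.

ci_event_rules = {
    "access_device": {
        "trigger": "new device detected, or a device becomes unavailable",
        "action": "update/add/delete the affected CI component",
    },
    "authority": {
        "trigger": "permission mismatch detected, or the user's role changes",
        "action": "re-evaluate the user's authority for the service",
    },
    "feature_service_time": {
        "trigger": "feature unavailable or service time beyond threshold",
        "action": "reconfigure the service catalog entry",
    },
}

def classify(event_name):
    """Look up which proactive-adaptation rule handles a detected event."""
    rule = ci_event_rules.get(event_name)
    return rule["action"] if rule else "no rule: candidate for reactive adaptation"

print(classify("authority"))
```

Events with no matching rule would fall through to the reactive-adaptation path described next.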
As an example, consider the rule triggered when a device access event appears. Based on the "role detection (M)" and "user authority (E)" component functions, there are a number of "access services (E)" that users will access via the interface options. The goal decomposition of this service access is an OR-decomposition, showing variability related to the resource (device_type). In addition, based on the "feature detection (M)" and "change detection (M)" component functions, the service may change due to unexpected events or errors (event_error). Based on this description, the plan can be represented as plan(device_type, event_error). This plan creates an alternative behavior for dealing with context variability; for example, the plans for the "access method" and the "determination of the status" must use the "service delivery (AP)" component function, because it makes a fully positive contribution (++) toward the "relevance" and "response time" soft-goals, whereas the alternative analyze-and-plan (AP) components each contribute only a single positive (+), or even a negative (-), contribution. Thus, the system has grounds to analyze and plan (AP) for the "user interface" and "service application". The collection of these property values sets the input variables of the "service delivery (AP)" component. The resulting behavior settings can be mapped into ECA rules, as shown in Table 2, yielding four action plans (Pn) as alternative solutions. Meanwhile, reactive adaptation can be carried out by determining procedures for handling service disruptions. For example, handling service disruptions identifies single points of failure, as shown in Figure 5, in order to obtain the configuration items (CI) shown in Table 3. The data are obtained through the monitor (M) functions "role detection" and "feature detection".
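The contribution-based choice described above (prefer the component with ++ contributions over alternatives with + or -) can be sketched as a simple scoring rule. The numeric weights for the GORE contribution labels are an assumed encoding, not part of the paper:

```python
# Choosing among alternative plans by summing soft-goal contribution labels.
WEIGHTS = {"++": 2, "+": 1, "-": -1, "--": -2}  # assumed encoding of GORE labels

def best_plan(plans):
    """Return the plan whose contributions to the soft-goals score highest."""
    def score(contribs):  # contribs: {"relevance": "++", "response time": "+"}
        return sum(WEIGHTS[label] for label in contribs.values())
    return max(plans, key=lambda name: score(plans[name]))

plans = {
    "service delivery (AP)":    {"relevance": "++", "response time": "++"},
    "user interface (AP)":      {"relevance": "+",  "response time": "-"},
    "service application (AP)": {"relevance": "+",  "response time": "+"},
}
print(best_plan(plans))  # → service delivery (AP)
```

With weights of this shape, the fully positive (++/++) alternative always dominates mixed or negative contributors, matching the reasoning in the text.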
Then, the "service observation (M)" component performs detection to determine which CI components can be considered critical. The availability-service (As) in Table 6 consists of several CI components with different levels of availability-component (Ac). The calculations of service availability for stand-alone and redundant CI are formulated differently (OGC, 2007). The availability of a service with a number of stand-alone CI is calculated by the equation As = Ac1 × Ac2 × Ac3 × … × Acn. Thus, based on statistical data of MTBF (mean time between failures) and MTRS (mean time to restore service), the service availability of the single points of failure in the availability column of Table 3 has a total value of 8.79%. The availability of each CIn is obtained by Equations 1, 2, and 3:

MTBF = (available time - total downtime) / number of breaks                        (1)
MTRS = total downtime / number of breaks                                           (2)
Availability (%) = ((agreed service time - downtime) / agreed service time) × 100  (3)

where MTBF denotes the average time that a configuration item (CI) or IT service can perform its agreed function without interruption. This is measured from when the CI or IT service starts working until it next fails (Lloyd & Rudd, 2011).
where MTRS denotes the average time taken to restore a configuration item (CI) or IT service after a Failure. MTRS is measured from when the CI or IT service fails until it is fully restored and delivering its normal functionality (Lloyd & Rudd, 2011).
where Availability is the ability of a service, component or configuration item (CI) to perform its agreed function when required (Lloyd & Rudd, 2011).
Meanwhile, the service availability of a number of redundant CI is calculated by the following equations: As = Ac1 + ((1 - Ac1) × Ac2) for one CI with one redundant CI, and As(n) = As(n-1) + ((1 - As(n-1)) × Acn) for n redundant CI. The service availability of redundant CI with shared dependencies can be seen in Table 7, with the number of redundant components (n) varying between 2 and 4. The total value for the redundant CI is 30.47%. These data are used as input variables for the "service delivery (AP)" component in determining which CI can be considered critical. Thus, the list of critical CI statuses is obtained, as shown in Table 4; there are 10 critical CI with shared dependencies requiring reconfiguration actions. The change in SLA percentage (%) for each CI is illustrated in Figure 6, with the total service availability increasing by 21.68 percentage points, from 8.79% to 30.47%.
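The availability arithmetic above can be checked with a short script. This is a sketch using made-up CI values rather than those of Table 3, and the common relation A = MTBF / (MTBF + MTRS) for a single CI, which follows from Equations 1-3 when downtime per break equals MTRS:

```python
from functools import reduce

def availability(mtbf, mtrs):
    """Availability of one CI as a percentage, from MTBF and MTRS."""
    return 100.0 * mtbf / (mtbf + mtrs)

def serial_availability(acs):
    """Stand-alone (serial) CIs: As = Ac1 * Ac2 * ... * Acn."""
    return reduce(lambda a, b: a * b, acs)

def redundant_availability(acs):
    """Redundant CIs: As(n) = As(n-1) + (1 - As(n-1)) * Acn."""
    a = acs[0]
    for ac in acs[1:]:
        a = a + (1 - a) * ac
    return a

# Example (illustrative numbers): two CIs at 90% and 80% availability
print(round(serial_availability([0.90, 0.80]), 4))     # → 0.72
print(round(redundant_availability([0.90, 0.80]), 4))  # → 0.98
print(round(availability(mtbf=990, mtrs=10), 2))       # → 99.0
```

The two functions illustrate why the redundant configuration (30.47% in Table 7) exceeds the serial single-point-of-failure total (8.79% in Table 3): multiplying availabilities only decreases them, while redundancy only increases them.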
ability is the ability of a service, component or configuration item (CI) to perform its agreed n required (Lloyd & Rudd, 2011).  Based on data of critical CI, the system will then do a reconfiguration through the components of the "service reconfiguration (E)". Finally, the "service release (E)" component will deliver new services. For example: ! CI 1 is detected as critical CI, where "Server-1" is based on monitoring CPU usage necessary to Based on data of critical CI, the system will then do a reconfiguration through the components of the "service reconfiguration (E)". Finally, the "service release (E)" component will deliver new services. For example: CI -1 is detected as critical CI, where "Server-1" is based on monitoring CPU usage necessary to improve and to avoid over utilization and contention. Thus, the system will add adaptation functions using a load balancing system. CI -7 is detected as critical CI, where "Application-1" is based on the monitoring of facilities provided which require the model to establish baseline performance through the addition of new features. Thus, the system will add new features through the cloud service so that the system will determine the cloud adoption mechanism. Treatment of any other critical CI is adjusted based on the event detected respectively.
The dashed line in Figure 7 shows the newly added functions of the components. Previous works discussing the adaptation process of load balancing functions include Abuseta and Swesi (2015) and Darmawan and Aradea (2017); the mapping of the goal model into system components for load balancing is illustrated in Figure 8. The properties that should be monitored are shown in Tables 8 and 9. The system will analyze and plan (AP) to organize "user's access", and will also analyze and plan (AP) to set the "performance of server farm". The combination of the property values in Tables 8 and 9 provides the input variables for the system to "manage load (AP)" through the "workload observation (M)" component, which computes the total workload through Equation 4:

x = c1 + c2 + … + cn    (4)

where x = server (task); n = number of clients; c = client,
in order to obtain the total task to be executed, that is, 3209 tasks. In the next stage, the "capability observation (M)" component will determine the ability of each server relative to the total task to be processed, through Equation 5,
where x = server (task); n = number of clients; c = client; d = distance; m = memory; v = speed.
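The workload-observation step of Equation 4 is simply the sum of the task requests over all clients; a minimal sketch, in which the per-client task counts are hypothetical:

```python
def observe_workload(client_tasks) -> int:
    """Workload observation (M): total tasks x = c1 + ... + cn (Equation 4)."""
    return sum(client_tasks)


# Hypothetical task counts for four clients
total = observe_workload([120, 75, 60, 44])
print(total)
```

In the paper's experiment the same summation over 50 clients yields the total of 3209 tasks that the later stages distribute across servers.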
Then, the servers capable of performing the task are sorted, as quickly as possible, based on the total task. The constraint for any desired process is set as k = 2 ms. The setting of system behavior to manage the server load is associated with rules that respond to symptoms or events, for example: a high load event, when the server load is detected to be larger than 80%; an unresponsive or very low load event, when it is detected that the server does not perform the process.
Journal of ICT, 19, No. 2 (April) 2020, pp: 225-250

Furthermore, the system performs "server activation (E)" by considering the high load and very low load / unresponsive events, through a calculation of the servers used, and determines the balance value by dividing the smallest fitness value by the number of servers used, as in Equation 6. Thus, the system can adjust and balance the ability of the servers and specify the number of servers needed, as can be seen in Table 7. A total of 50 clients with 3209 tasks needs 16 servers; the average load balance of each server is 76%. An illustration of these functions is shown in Figure 9; the top picture is the real condition, or maximum CPU capability. After the balancing process, the required number of servers with their respective load balancing is obtained, as presented in the following table.
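The activation step can be sketched as choosing the fewest servers that keep the average load at or below the 80% high-load threshold. The per-server capacity of 264 tasks below is a hypothetical value chosen so that the sketch reproduces the paper's figures (16 servers at roughly 76% load); in the actual model, per-server capability comes from Equation 5:

```python
import math

HIGH_LOAD = 0.80  # high-load threshold from the monitoring rules


def activate_servers(total_tasks: int, capacity_per_server: int):
    """Server activation (E): enable the fewest servers whose average
    load stays at or below the high-load threshold."""
    n = math.ceil(total_tasks / (HIGH_LOAD * capacity_per_server))
    avg_load = total_tasks / (n * capacity_per_server)
    return n, avg_load


# 3209 tasks from 50 clients; capacity 264 is an assumed uniform value
servers, load = activate_servers(3209, 264)
print(servers, round(load * 100))
```

Under these assumptions the sketch yields 16 active servers with an average load of about 76%, matching Table 7.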

Evaluation
The evaluation consists of three activities: first, illustrating service scalability; second, comparing models to assess design support in handling variability using the domain Quality Attribute Scenarios (dQAS); and third, evaluating adaptation maturity levels using the Adaptive Capability Maturity Model (ACMM). The scalability of the service system is related to growth in the number of each CI in the service catalog at runtime. As an example, the scalability of the load balancing system is represented by growth in the number of clients and the load of each client, which can continue to grow and change at runtime. As shown in Figure 10, the total of 3209 tasks from 50 clients requires 16 servers with an average load of 76%; but if the total tasks of the clients change, increasing or decreasing the need for servers, then the average load balance is adjusted. For example, with the maximum number of tasks for 45 clients, 14 servers with an average load balance are activated automatically; if the maximum number of tasks is for 24 clients, only 7 servers are enabled; if the maximum number of tasks is for 30 clients, only 9 servers are enabled, and so on. Thus, the evaluation results show that the scale is linear in the number of clients and the server load used for balancing, and that the system is able to handle change and growth in its context. Furthermore, we evaluated the model through dQAS (Abbas, Andersson, & Weyns, 2012) to compare the adaptability of the RECCA model with the proposed model on the same case study.
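The linear scaling described above can be sketched with the same activation rule. The workload totals below are hypothetical stand-ins for growing client populations; the sketch only demonstrates that the number of enabled servers grows monotonically with the workload:

```python
import math

HIGH_LOAD = 0.80   # high-load threshold
CAPACITY = 264     # assumed uniform per-server task capacity


def servers_needed(total_tasks: int) -> int:
    """Fewest servers keeping the average load at or below the threshold."""
    return math.ceil(total_tasks / (HIGH_LOAD * CAPACITY))


# Hypothetical total-task figures for increasing numbers of clients
workloads = [1500, 1900, 2900, 3209]
counts = [servers_needed(w) for w in workloads]
print(counts)
```

As the workload grows, the count of enabled servers rises in step, which is the linear-scaling behavior Figure 10 illustrates.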
The dQAS characterized the quality attributes in the configuration management system experiment (Figure 3) through nine dQAS elements, as shown in Table 8. The evaluation results showed that the proposed model provided more response alternatives in each variant (VC). From six combinations (variant VC), there were 20 total responses that could serve as alternative solutions for all stimuli. In addition, the responses were given for both normal and overload operating conditions. Meanwhile, the RECCA model had 13 total responses and was applied only under normal operating conditions, and there were some unsupported response requirements, such as R2, R5 and R6. This suggests that the proposed model can reduce the uncertainty caused by variability at runtime, since requirements can be realized through alternative designs.
In the RECCA model, we describe the evaluation based on the criteria and controls of the ACMM (Gill, 2015). The evaluation results indicated that the maturity level of the adaptation model varies (level-4 or 5) depending on the adaptation cycle applied. Meanwhile, based on the proposed model and the artifacts generated from the RECCA model, adaptation maturity is definitely at level-5 (adapting), the highest level of the adaptive capability maturity model. Figure 11 shows the maturity criteria of each level in the ACMM; the achievement of level-5 is made through the interaction cycle, with scan-and-sense patterns of changes and adjustments between the context and rationalization (service level) based on the MAPE-K pattern, i.e., the ability to monitor, assess, and respond to changes for continuous adaptation is realized through the integration of goal models as contextual environments (target system) and adaptation cycles of the MAPE-K pattern.
There is integrated engagement and governance for adaptation through artifacts generated from the architecture and alignment views, which are managed through an adaptability view, and there is good support for adaptation through autonomic computing mechanisms at the service level.

CONCLUSION AND FUTURE WORK
This paper introduces an adaptation model to address service variability. The lifecycle of each service element within the AESS metamodel is formulated as a control loop (MAPE-K) pattern based on the description of requirements. Adaptation mechanisms are realized through proactive and reactive adaptation scenarios, both of which treat service requirements as a set of variants selected at runtime through a concept of variability rules for service change and evolution. The evaluation results showed that the proposed model is able to describe the scalability of services related to the change and growth of new service requirements. The proposed model offers an alternative design that is better than the previous work in variability modeling, where an alternative response in each variant (VC) is capable of handling any stimulus under normal and overload operating conditions. In addition, the adaptive capability maturity of the proposed model also improves on the results of previous work. Future research could detail additional features to enrich the description of service system requirements, as well as expand the context inference mechanism of the rule editor to accommodate more sophisticated conflict resolution. Approaches such as machine learning strategies and requirements reflection could be taken into account in further studies.