PAPERmaking! Vol8 Nr1 2022

Processes 2021 , 9 , 274


Algorithm 1 Cont.

46.     Mutation operation is applied to y with probability Pm to generate a new y'.
47.   end for
48.   Update EP: select a new non-dominated set from EP and the new population as the new EP.
49.   if the stopping criterion is not satisfied
50.     Continue.
51.   else
52.     Stop and output EP.
53.   end if
54. end while
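The EP update in step 48 keeps only mutually non-dominated solutions from the old archive and the new population. A minimal sketch, assuming both objectives (makespan and energy cost) are minimized; `objectives` is a hypothetical helper returning a solution's objective tuple:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_ep(ep, population, objectives):
    """Step 48: select the non-dominated set from EP plus the new population."""
    candidates = ep + population
    new_ep = []
    for sol in candidates:
        dominated = any(dominates(objectives(other), objectives(sol))
                        for other in candidates if other is not sol)
        if not dominated and sol not in new_ep:
            new_ep.append(sol)
    return new_ep
```

For example, with objective pairs (1, 5), (2, 2), (3, 3), and (5, 1), the point (3, 3) is dominated by (2, 2) and is dropped, while the other three form the new EP.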

The weight vectors determine the quality of the solutions obtained by MOEA/DTL. With uniformly distributed weight vectors, the solutions obtained by MOEA/DTL are more evenly distributed and closer to the Pareto optimal front. Thus, a method for generating evenly distributed weight vectors is of great importance to MOEA/DTL. In this study, the weight vectors are generated as follows:

Step (1) Set γ = {γ^1, γ^2, ..., γ^(N+1)}, where γ^i = (γ^i_1, γ^i_2). For each i = 0 to N, do: γ^i_1 = i/N, γ^i_2 = 1 − i/N.

Step (2) Generate the weight vectors λ = {λ^1, λ^2, ..., λ^N} by randomly selecting N vectors from γ.

The parameter Ps is one of the key factors affecting the convergence speed of the algorithm. If Ps is too small, most teachers are drawn from the global optimal individual, and the algorithm easily falls into a local optimum. If Ps is too large, most teachers come from the best individuals in the neighborhood, and convergence becomes slower. A value of Ps between 0.7 and 0.9 is reasonable.

The decomposition approach is also important for MOEA/DTL. The makespan and the energy cost are combined into a single objective using the weighted sum approach. Because the makespan and the energy cost are two scalar objectives measured on different scales, the decomposition method must include normalization. The decomposition approach is as below:

g_i = ∑_{j=1}^{m} λ^i_j × ( f_j(x_i) − f_j^min ) / ( f_j^max − f_j^min )    (27)

where g_i is the scalar optimization objective of solution x_i, f_j^max is the maximum of the j-th objective, and f_j^min is the minimum of the j-th objective.
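Equation (27) can be evaluated as in the following sketch, where `f` holds the objective values (makespan, energy cost) of solution x_i and `f_min`, `f_max` hold the per-objective extremes over the current population (all names are illustrative):

```python
def scalar_objective(lam, f, f_min, f_max):
    """Normalized weighted-sum decomposition of Eq. (27).

    Each objective is rescaled to [0, 1] before weighting, so the
    makespan and the energy cost become comparable scalars.
    """
    g = 0.0
    for j in range(len(f)):
        span = f_max[j] - f_min[j]
        normalized = (f[j] - f_min[j]) / span if span > 0 else 0.0
        g += lam[j] * normalized
    return g
```

For instance, with weights (0.5, 0.5), objectives (10, 200), minima (0, 0), and maxima (20, 400), both objectives normalize to 0.5 and g_i = 0.5.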
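The weight-vector construction of Steps (1) and (2) earlier in this subsection can likewise be sketched for the two-objective case considered here (a minimal illustration, not the paper's implementation):

```python
import random

def generate_weight_vectors(N, seed=None):
    """Generate N weight vectors for the two-objective case.

    Step (1): build N+1 evenly spaced candidates gamma^i = (i/N, 1 - i/N).
    Step (2): randomly select N of them as the weight vectors lambda.
    """
    gamma = [(i / N, 1 - i / N) for i in range(N + 1)]
    rng = random.Random(seed)
    return rng.sample(gamma, N)
```

Every generated vector sums to 1, so the weights are spread evenly along the line λ_1 + λ_2 = 1.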

3.2. Variable Neighborhood Search

With the advantages of high efficiency and simple implementation, the Variable Neighborhood Search (VNS) algorithm has been successfully applied in many engineering fields [29,30]. The VNS algorithm searches alternately through neighborhood structures defined by different moves, thus achieving a good balance between intensification and diversification. First, the search is carried out in a small neighborhood. If the quality of the solution cannot be improved, the search moves to a larger neighborhood; when a better solution is found, it returns to a smaller neighborhood. This cycle repeats until the algorithm terminates. The VNS algorithm is simple in principle, easy to implement, and has good optimization performance. The procedure of the VNS algorithm used in this paper is shown in Algorithm 2:
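As a generic illustration of this expand-on-failure, reset-on-improvement cycle (independent of the specific moves in the paper's Algorithm 2), a minimal sketch; `neighborhoods` is a hypothetical list of move operators, ordered from smallest to largest, and `cost` is the objective to minimize:

```python
def variable_neighborhood_search(solution, cost, neighborhoods, max_iters=100):
    """Generic VNS skeleton: search small neighborhoods first, move to a
    larger one only when no improvement is found, and restart from the
    smallest neighborhood after every improvement."""
    best = solution
    for _ in range(max_iters):
        k = 0
        improved = False
        while k < len(neighborhoods):
            candidate = neighborhoods[k](best)  # best solution found in the k-th neighborhood
            if cost(candidate) < cost(best):
                best = candidate
                k = 0            # improvement: shrink back to the smallest neighborhood
                improved = True
            else:
                k += 1           # no improvement: try a larger neighborhood
        if not improved:
            break                # no neighborhood improves the solution: stop
    return best
```

For example, minimizing x² over the integers with a ±1 neighborhood and a ±5 neighborhood, the search steps down to the optimum at x = 0.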
