

Suppose the modeling sample data set is $\{(\mathbf{t}_i, y_i)\}_{i=1}^{n}$, where $\mathbf{t}_i \in R^d$ is the input vector of the $i$-th sample in the latent vector space spanned by the $d$-dimensional pivot elements, and $y_i \in R$ is the target output variable of effluent COD of the papermaking wastewater. In the high-dimensional feature space constructed by the nonlinear mapping function $\varphi(\mathbf{t})$, establishing the model between the output variable and the input variable amounts to finding the best fitting function:

$$ y = \mathbf{w}^{\mathrm{T}} \varphi(\mathbf{t}) + b \qquad (3) $$

where $\mathbf{w}$ is the weight coefficient vector to be estimated in the high-dimensional feature space and $b$ is the constant deviation term. For the LSSVM method, the parameter estimation in the above formula can be transformed to satisfy the constraint of Formula (4):

$$ y_i = \mathbf{w}^{\mathrm{T}} \varphi(\mathbf{t}_i) + b + \xi_i, \quad i = 1, 2, \ldots, n \qquad (4) $$

The minimization optimization problem is formulated as below:

$$ \min_{\mathbf{w}, b, \boldsymbol{\xi}} J(\mathbf{w}, b, \boldsymbol{\xi}) = \frac{1}{2} \mathbf{w}^{\mathrm{T}} \mathbf{w} + \frac{\gamma}{2} \sum_{i=1}^{n} \xi_i^{2} \qquad (5) $$

In the formula, $\gamma$ is a penalty factor used to balance the complexity and the approximation accuracy of the model, and $\xi_i$ is the training error of the $i$-th sample point. The Lagrange multiplier $\alpha_i$ is now introduced to transform the above-mentioned constrained optimization problem into an unconstrained optimization problem:

$$ L(\mathbf{w}, b, \boldsymbol{\xi}, \boldsymbol{\alpha}) = J(\mathbf{w}, b, \boldsymbol{\xi}) - \sum_{i=1}^{n} \alpha_i \left( \mathbf{w}^{\mathrm{T}} \varphi(\mathbf{t}_i) + b + \xi_i - y_i \right) \qquad (6) $$

Using the KKT optimality conditions to solve the above formula (Zhou Xinran, 2012), that is, setting the partial derivatives of $L$ with respect to $\mathbf{w}$, $b$, $\xi_i$ and $\alpha_i$ to zero, we can obtain:

$$ \begin{cases} \dfrac{\partial L}{\partial \mathbf{w}} = 0 \;\rightarrow\; \mathbf{w} = \sum\limits_{i=1}^{n} \alpha_i \varphi(\mathbf{t}_i) \\[2mm] \dfrac{\partial L}{\partial b} = 0 \;\rightarrow\; \sum\limits_{i=1}^{n} \alpha_i = 0 \\[2mm] \dfrac{\partial L}{\partial \xi_i} = 0 \;\rightarrow\; \alpha_i = \gamma \xi_i, \quad i = 1, 2, \ldots, n \\[2mm] \dfrac{\partial L}{\partial \alpha_i} = 0 \;\rightarrow\; \mathbf{w}^{\mathrm{T}} \varphi(\mathbf{t}_i) + b + \xi_i - y_i = 0, \quad i = 1, 2, \ldots, n \end{cases} \qquad (7) $$

Eliminating $\mathbf{w}$ and $\xi_i$ from the above equation set, we obtain the following linear equation set:

$$ \begin{bmatrix} 0 & \mathbf{1}_v^{\mathrm{T}} \\ \mathbf{1}_v & \mathbf{K} + \gamma^{-1}\mathbf{I} \end{bmatrix} \begin{bmatrix} b \\ \boldsymbol{\alpha} \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{y} \end{bmatrix} \qquad (8) $$

where $\mathbf{1}_v = [1, 1, \ldots, 1]^{\mathrm{T}}$, $\boldsymbol{\alpha} = [\alpha_1, \alpha_2, \ldots, \alpha_n]^{\mathrm{T}}$, $\mathbf{y} = [y_1, y_2, \ldots, y_n]^{\mathrm{T}}$, $K_{ij} = K(\mathbf{t}_i, \mathbf{t}_j) = \varphi(\mathbf{t}_i)^{\mathrm{T}} \varphi(\mathbf{t}_j)$, $i, j = 1, 2, \ldots, n$, and $\mathbf{I}$ is the unit matrix. After solving Formula (8) for the parameters $\alpha_i$ and $b$ via the least squares method, the LSSVM model is obtained as below:

$$ \hat{y} = f(\mathbf{t}) = \sum_{i=1}^{n} \alpha_i K(\mathbf{t}, \mathbf{t}_i) + b \qquad (9) $$
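As a concrete illustration of Formulas (8) and (9), the following sketch (written in Python with NumPy; not part of the original paper) assembles the kernel matrix, solves the linear system for $b$ and $\boldsymbol{\alpha}$, and evaluates the fitted model. The function names and the generic kernel argument are illustrative assumptions only.

import numpy as np

def lssvm_fit(T, y, kernel, gamma):
    # Solve the LSSVM dual linear system of Formula (8).
    # T: (n, d) array of input vectors t_i; y: (n,) array of targets y_i
    # kernel: function kernel(t_a, t_b) -> K(t_a, t_b); gamma: penalty factor of Formula (5)
    n = T.shape[0]
    K = np.array([[kernel(T[i], T[j]) for j in range(n)] for i in range(n)])
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                      # top row:     [0, 1_v^T]
    A[1:, 0] = 1.0                      # left column:  1_v
    A[1:, 1:] = K + np.eye(n) / gamma   # lower block:  K + I / gamma
    rhs = np.concatenate(([0.0], y))    # right-hand side [0, y]^T
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]              # alpha (n,), b (scalar)

def lssvm_predict(t, T, alpha, b, kernel):
    # Evaluate Formula (9): y_hat = sum_i alpha_i * K(t, t_i) + b
    return sum(a * kernel(t, t_i) for a, t_i in zip(alpha, T)) + b

Because the system in Formula (8) is dense, the direct solve costs on the order of $n^3$ operations, which is usually acceptable for the moderate sample sizes typical of process-monitoring data sets.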

If the LSSVM model uses the RBF kernel function $K(\mathbf{t}, \mathbf{t}_i) = \exp\left( -\left\| \mathbf{t} - \mathbf{t}_i \right\|^2 / \sigma^2 \right)$, the different values of the kernel width $\sigma$ and the penalty factor $\gamma$ in Formula (5) will affect the actual performance of the LSSVM model (Zhao et al., 2000). To this end, this paper optimizes the two parameters by taking the minimum of the sum of squared errors.
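A minimal sketch of this tuning step is given below (again Python/NumPy, reusing the lssvm_fit and lssvm_predict sketches above); the grid of candidate values and the use of a held-out validation set are illustrative assumptions, since the paper does not specify these details here.

import numpy as np

def rbf_kernel(sigma):
    # K(t, t_i) = exp(-||t - t_i||^2 / sigma^2)
    return lambda ta, tb: np.exp(-np.sum((ta - tb) ** 2) / sigma ** 2)

def tune_lssvm(T_train, y_train, T_val, y_val,
               sigmas=(0.1, 0.5, 1.0, 5.0, 10.0),
               gammas=(1.0, 10.0, 100.0, 1000.0)):
    # Pick the (sigma, gamma) pair that minimizes the sum of squared errors
    # on the validation data.
    best = None
    for sigma in sigmas:
        for gamma in gammas:
            kernel = rbf_kernel(sigma)
            alpha, b = lssvm_fit(T_train, y_train, kernel, gamma)
            pred = np.array([lssvm_predict(t, T_train, alpha, b, kernel) for t in T_val])
            sse = np.sum((pred - y_val) ** 2)
            if best is None or sse < best[0]:
                best = (sse, sigma, gamma)
    return best  # (minimum SSE, best sigma, best gamma)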
