Battery SOH Simulation Series: Battery SOH Estimation Based on an LSTM Neural Network
Battery SOH Estimation Based on an LSTM Neural Network
Unlike a BP (back-propagation) neural network, a recurrent neural network (RNN) considers not only the current input but also carries a memory of information from earlier time steps. Although RNNs can achieve high accuracy on sequence data, they suffer from the vanishing-gradient problem. A series of improved RNN architectures has been proposed to address this, among which the long short-term memory (LSTM) network is one of the most effective. The LSTM-based battery SOH estimation method is described below.
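For reference, the standard LSTM cell computes its gates and states as follows; the MATLAB listing later in this article implements a close variant of this scheme (during training it feeds the previous cell state, rather than the hidden state, into the gate inputs):

$$
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{(input gate)}\\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{(forget gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{(output gate)}\\
\tilde c_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{(candidate state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde c_t, \qquad h_t = o_t \odot \tanh(c_t)
\end{aligned}
$$

Here $\sigma$ is the sigmoid function and $\odot$ denotes element-wise multiplication; the forget-gated cell state $c_t$ is what lets information and gradients flow across many time steps, mitigating the vanishing-gradient problem.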
(1) Lithium-ion battery cycle-life data
The data come from a lithium-ion battery test platform built by a NASA research center; battery No. 5 (rated capacity 2 Ah) is selected. The cycling tests were run at room temperature: the battery was charged at a constant current of 1.5 A until the 4.2 V charge cutoff voltage was reached, then charged at constant voltage until the current fell to 20 mA; it was discharged in 2 A constant-current (CC) mode until the voltage dropped to 2.7 V (the cutoffs 2.7 V, 2.5 V, 2.2 V, and 2.5 V apply to batteries 5, 6, 7, and 18 of the dataset, respectively). The experiment stopped when the battery reached the end-of-life (EOL) criterion, a 30% fade of the rated capacity (from 2 Ah to 1.4 Ah).
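As a minimal sketch of how the per-cycle discharge capacity (and from it the SOH) can be pulled from this dataset, assuming the commonly distributed B0005.mat layout (verify the field names against your copy):

% Extract discharge capacity and SOH from the NASA B0005 dataset.
% Field names assume the commonly distributed B0005.mat; check your copy.
load('B0005.mat');                       % loads the struct B0005
rated_capacity = 2;                      % Ah, rated capacity of battery No. 5
capacity = [];
for k = 1:numel(B0005.cycle)
    if strcmp(B0005.cycle(k).type, 'discharge')
        capacity(end+1) = B0005.cycle(k).data.Capacity; %#ok<AGROW>
    end
end
soh = capacity / rated_capacity;         % SOH = current capacity / rated capacity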
(2) Data preprocessing
Min-max normalization is used to scale the data into the interval [0, 1]:

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$$

where $x_{\max}$ is the maximum and $x_{\min}$ the minimum of the input data $x$.
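The code below performs this scaling with the legacy Neural Network Toolbox functions premnmx, tramnmx, and postmnmx. Two caveats: premnmx actually maps data to [-1, 1] rather than [0, 1], and in current MATLAB releases these functions are superseded by mapminmax. A minimal usage sketch with made-up numbers:

% Legacy min-max scaling round trip (values are illustrative only)
x = [2.0 1.9 1.8 1.7];                  % example capacity sequence (Ah)
[xn, xmin, xmax] = premnmx(x);          % scale training data to [-1, 1]
yn = tramnmx([1.95 1.75], xmin, xmax);  % apply the same scaling to new data
y  = postmnmx(yn, xmin, xmax);          % map scaled values back to Ah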
(3) Simulation analysis
The charge capacity from the NASA lithium-ion battery test data serves as the model input, and the model output is the battery SOH referenced to the discharge capacity (i.e., the ratio of the current discharge capacity to the rated capacity). Through training, the network's weights and biases are obtained iteratively.
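The main function listed below calls two helpers, LSTM_data_process and LSTM_updata_weight, that the article does not include. The data-processing helper's role can be sketched as follows; this is a hypothetical reconstruction that assumes it builds sliding lag-window input/output pairs (one sample per row, matching the transposes applied before premnmx), and the original may differ:

function [inputs, outputs] = LSTM_data_process(d, train_data, lag)
% Hypothetical sketch: build sliding-window samples from the first d points.
% inputs(k,:) holds lag consecutive values; outputs(k,1) holds the next one.
series = train_data(1:d);
num = d - lag;                 % number of window samples
inputs = zeros(num, lag);
outputs = zeros(num, 1);
for k = 1:num
    inputs(k,:) = series(k:k+lag-1);
    outputs(k,1) = series(k+lag);
end
end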
As the resulting prediction figure shows, the model tracks the overall trend of battery capacity fade very accurately. The main function is as follows:
function pre_data = LSTM_main(d, h, train_data, test_data)
%% Preprocessing
lag = 8;
% d = 51;
[train_input, train_output] = LSTM_data_process(d, train_data, lag); % build lag-window samples
[train_input, min_input, max_input, train_output, min_output, max_output] = premnmx(train_input', train_output'); % scale to [-1, 1]
input_length = size(train_input, 1);   % length of each input sample
output_length = size(train_output, 1); % length of each output sample
train_num = size(train_input, 2);      % number of training samples
test_num = size(test_data, 2);         % number of test samples
%% Network parameter initialization
% Node counts
input_num = input_length;
cell_num = 10;
output_num = output_length;
% Gate biases
bias_input_gate = rand(1, cell_num);
bias_forget_gate = rand(1, cell_num);
bias_output_gate = rand(1, cell_num);
% Weight initialization (small random values)
ab = 20;
weight_input_x = rand(input_num, cell_num)/ab;
weight_input_h = rand(output_num, cell_num)/ab;
weight_inputgate_x = rand(input_num, cell_num)/ab;
weight_inputgate_h = rand(cell_num, cell_num)/ab;
weight_forgetgate_x = rand(input_num, cell_num)/ab;
weight_forgetgate_h = rand(cell_num, cell_num)/ab;
weight_outputgate_x = rand(input_num, cell_num)/ab;
weight_outputgate_h = rand(cell_num, cell_num)/ab;
% Hidden-to-output weights
weight_preh_h = rand(cell_num, output_num);
% State initialization
cost_gate = 1e-6; % error threshold for early stopping
h_state = rand(output_num, train_num + test_num);
cell_state = rand(cell_num, train_num + test_num);
%% Training
for iter = 1:3000  % number of iterations
    yita = 0.01;   % learning rate (weight-update step size)
    for m = 1:train_num
        % Forward pass
        if (m == 1)
            gate = tanh(train_input(:,m)' * weight_input_x);
            input_gate_input = train_input(:,m)' * weight_inputgate_x + bias_input_gate;
            output_gate_input = train_input(:,m)' * weight_outputgate_x + bias_output_gate;
            for n = 1:cell_num
                input_gate(1,n) = 1/(1 + exp(-input_gate_input(1,n)));   % input gate (sigmoid)
                output_gate(1,n) = 1/(1 + exp(-output_gate_input(1,n))); % output gate (sigmoid)
            end
            forget_gate = zeros(1, cell_num);
            forget_gate_input = zeros(1, cell_num);
            cell_state(:,m) = (input_gate .* gate)';
        else
            gate = tanh(train_input(:,m)' * weight_input_x + h_state(:,m-1)' * weight_input_h);
            input_gate_input = train_input(:,m)' * weight_inputgate_x + cell_state(:,m-1)' * weight_inputgate_h + bias_input_gate;
            forget_gate_input = train_input(:,m)' * weight_forgetgate_x + cell_state(:,m-1)' * weight_forgetgate_h + bias_forget_gate;
            output_gate_input = train_input(:,m)' * weight_outputgate_x + cell_state(:,m-1)' * weight_outputgate_h + bias_output_gate;
            for n = 1:cell_num
                input_gate(1,n) = 1/(1 + exp(-input_gate_input(1,n)));
                forget_gate(1,n) = 1/(1 + exp(-forget_gate_input(1,n)));
                output_gate(1,n) = 1/(1 + exp(-output_gate_input(1,n)));
            end
            cell_state(:,m) = (input_gate .* gate + cell_state(:,m-1)' .* forget_gate)';
        end
        pre_h_state = tanh(cell_state(:,m)') .* output_gate;
        h_state(:,m) = (pre_h_state * weight_preh_h)';
        % Error calculation
        Error = h_state(:,m) - train_output(:,m);
        Error_Cost(1,iter) = sum(Error.^2);  % sum of squared errors over the output points
        if (Error_Cost(1,iter) < cost_gate)  % stop once the error threshold is met
            flag = 1;
            break;
        else
            % Backpropagate and update all weights (helper not listed in the article)
            [weight_input_x, weight_input_h, weight_inputgate_x, weight_inputgate_h, ...
             weight_forgetgate_x, weight_forgetgate_h, weight_outputgate_x, ...
             weight_outputgate_h, weight_preh_h] = LSTM_updata_weight(m, yita, Error, ...
                weight_input_x, weight_input_h, weight_inputgate_x, weight_inputgate_h, ...
                weight_forgetgate_x, weight_forgetgate_h, weight_outputgate_x, ...
                weight_outputgate_h, weight_preh_h, cell_state, h_state, ...
                input_gate, forget_gate, output_gate, gate, train_input, pre_h_state, ...
                input_gate_input, output_gate_input, forget_gate_input, input_num, cell_num);
        end
    end
    if (Error_Cost(1,iter) < cost_gate)
        break;
    end
end
%% Test phase
% Seed the input window with the last lag points of the training data
test_input = train_data(end-lag+1:end);
test_input = tramnmx(test_input', min_input, max_input);
% test_input = mapminmax('apply', test_input', ps_input);
% Forward pass, feeding each prediction back in as the next input
for m = train_num+1:train_num+test_num
    gate = tanh(test_input' * weight_input_x + h_state(:,m-1)' * weight_input_h);
    input_gate_input = test_input' * weight_inputgate_x + h_state(:,m-1)' * weight_inputgate_h + bias_input_gate;
    forget_gate_input = test_input' * weight_forgetgate_x + h_state(:,m-1)' * weight_forgetgate_h + bias_forget_gate;
    output_gate_input = test_input' * weight_outputgate_x + h_state(:,m-1)' * weight_outputgate_h + bias_output_gate;
    for n = 1:cell_num
        input_gate(1,n) = 1/(1 + exp(-input_gate_input(1,n)));
        forget_gate(1,n) = 1/(1 + exp(-forget_gate_input(1,n)));
        output_gate(1,n) = 1/(1 + exp(-output_gate_input(1,n)));
    end
    cell_state(:,m) = (input_gate .* gate + cell_state(:,m-1)' .* forget_gate)';
    pre_h_state = tanh(cell_state(:,m)') .* output_gate;
    h_state(:,m) = (pre_h_state * weight_preh_h)';
    % Use the current prediction as part of the next input window
    test_input = postmnmx(test_input, min_input, max_input);
    now_prepoint = postmnmx(h_state(:,m), min_output, max_output);
    % test_input = mapminmax('reverse', test_input, ps_input);
    test_input = [test_input(2:end); now_prepoint];
    test_input = tramnmx(test_input, min_input, max_input);
end
pre_data = postmnmx(h_state(:, train_num+h:h:train_num+test_num), min_output, max_output);
all_pre = postmnmx(h_state(:, 1:h:train_num+test_num), min_output, max_output);
% Plot
figure
title('LSTM prediction')
hold on
plot(1:size([train_data test_data], 2), [train_data test_data], 'o-', 'color', 'r', 'linewidth', 1);
plot(size(train_data,2)+h:h:size([train_data test_data], 2), pre_data, '*-', 'color', 'b', 'linewidth', 1);
plot([size(train_data,2) size(train_data,2)], [-0.01 0.01], 'g-', 'LineWidth', 4);
legend({'Measured', 'Predicted'});
end
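A hypothetical driver showing how LSTM_main might be invoked; the synthetic fade curve, split ratio, and parameter values are illustrative, not from the article:

% Hypothetical driver with a synthetic fade curve standing in for real data;
% replace 'capacity' with per-cycle capacities extracted from B0005.mat.
capacity = 2 - 0.004*(0:167);           % 168 cycles, 2 Ah fading linearly
split = round(0.7 * numel(capacity));   % illustrative 70/30 split
train_data = capacity(1:split);
test_data  = capacity(split+1:end);
d = numel(train_data);                  % points handed to LSTM_data_process
h = 1;                                  % read-out stride for pre_data
pre_data = LSTM_main(d, h, train_data, test_data);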
For more related simulations, follow my WeChat official account.