

Analysis of a Typical RAC Node Host Reboot Caused by an Occupied Heartbeat IP


The following is an analysis of another typical GRID software restart on a RAC node caused by a network heartbeat anomaly. With the 11gR2 Rebootless Restart feature, the default behaviour is to restart GRID without rebooting the host; in this case, however, the private interconnect IP was occupied by another host, GRID was restarted several times, and one of those shutdowns did not complete cleanly, which triggered a host reboot. This is the failure analysis report written for the customer at the time, shared here for reference.

I. Service Overview

Node 2 of a customer's CRM system RAC cluster rebooted unexpectedly. After receiving the fault report, ** engineers responded promptly and, through an in-depth analysis of the relevant logs and other information, compiled this document.


II. Analysis of the Unexpected Reboot of Database Cluster Node 2

A review of the cluster logs shows that, starting at 2019-03-08 15:30:22, the network heartbeat between the two cluster nodes became abnormal, and at 2019-03-08 15:30:36.797 node 2 was evicted from the cluster in accordance with the cluster's split-brain handling. The relevant cluster logs are as follows:

1. Logs showing node 2 being evicted from the cluster

1.1 Cluster node 1 log information


Node 1 cluster alert log:

2019-03-08 15:30:22.278: [cssd(5543)]CRS-1612:Network communication with node swcrm2 (2) missing for 50% of timeout interval. Removal of this node from cluster in 14.516 seconds
2019-03-08 15:30:30.282: [cssd(5543)]CRS-1611:Network communication with node swcrm2 (2) missing for 75% of timeout interval. Removal of this node from cluster in 6.512 seconds
2019-03-08 15:30:34.284: [cssd(5543)]CRS-1610:Network communication with node swcrm2 (2) missing for 90% of timeout interval. Removal of this node from cluster in 2.510 seconds
2019-03-08 15:30:36.797: [cssd(5543)]CRS-1607:Node swcrm2 is being evicted in cluster incarnation 433013533; details at (:CSSNM00007:) in /oracle/11.2.0/grid/log/crm1/cssd/ocssd.log.

1.2 Cluster node 2 log information

Node 2 cluster alert log:

2018-09-27 01:50:21.772: [crsd(10018)]CRS-2772:Server 'swcrm1' has been assigned to pool 'ora.crm'.
2019-03-08 15:30:30.568: [cssd(9624)]CRS-1612:Network communication with node swcrm1 (1) missing for 50% of timeout interval. Removal of this node from cluster in 14.018 seconds
2019-03-08 15:30:37.130: [cssd(9624)]CRS-1608:This node was evicted by node 1, swcrm1; details at (:CSSNM00005:) in /oracle/11.2.0/grid/log/crm2/cssd/ocssd.log.
2019-03-08 15:30:37.131: [cssd(9624)]CRS-1656:The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00012:) in /oracle/11.2.0/grid/log/crm2/cssd/ocssd.log
2019-03-08 15:30:37.132: [cssd(9624)]CRS-1652:Starting clean up of CRSD resources.


1.3 Details from the node 1 CSSD process

Node 1 CSSD process log:

2019-03-08 15:30:34.284: [    CSSD][39]clssnmPollingThread: node swcrm2 (2) at 90% heartbeat fatal, removal in 2.510 seconds
............
2019-03-08 15:30:36.796: [    CSSD][41]clssnmDoSyncUpdate: Terminating node 2, swcrm2, misstime(30000) state(5)
2019-03-08 15:30:36.796: [    CSSD][41]clssnmDoSyncUpdate: Wait for 0 vote ack(s)
2019-03-08 15:30:36.796: [    CSSD][41]clssnmCheckDskInfo: Checking disk info...
2019-03-08 15:30:36.796: [    CSSD][41]clssnmCheckSplit: Node 2, swcrm2, is alive, DHB (1552030236, 4056057127) more than disk timeout of 27000 after the last NHB (1552030206, 4056027274)
2019-03-08 15:30:36.796: [    CSSD][41]clssnmCheckDskInfo: My cohort: 1
2019-03-08 15:30:36.796: [    CSSD][41]clssnmRemove: Start
2019-03-08 15:30:36.796: [    CSSD][41](:CSSNM00007:)clssnmrRemoveNode: Evicting node 2, swcrm2, from the cluster in incarnation 433013533, node birth incarnation 433013530, death incarnation 433013533, stateflags 0x264000 uniqueness value 1537978854
2019-03-08 15:30:36.797: [    CSSD][1]clssgmQueueGrockEvent: groupName(IGCRMSYS$USERS)
............
2019-03-08 15:30:44.814: [    CSSD][42]clssnmUpdateNodeState: node swcrm1, number 1, current state 3, proposed state 3, current unique 1537984150, proposed unique 1537984150, prevConuni 0, birth 433013532
2019-03-08 15:30:44.814: [    CSSD][42]clssnmUpdateNodeState: node swcrm2, number 2, current state 5, proposed state 0, current unique 1537978854, proposed unique 1537978854, prevConuni 1537978854, birth 433013530
2019-03-08 15:30:44.814: [    CSSD][42]clssnmDeactivateNode: node 2, state 5
2019-03-08 15:30:44.814: [    CSSD][42]clssnmDeactivateNode: node 2 (swcrm2) left cluster

1.4 Details from the node 2 CSSD process

Node 2 CSSD process log:

At 2019-03-08 15:30:30.569 the log shows that the network heartbeat with node 1 has been lost while the disk heartbeat is still normal ("node 1, swcrm1, has a disk HB, but no network HB"):

2019-03-08 15:30:29.812: [    CSSD][40]clssnmSendingThread: sending status msg to all nodes
2019-03-08 15:30:29.812: [    CSSD][40]clssnmSendingThread: sent 4 status msgs to all nodes
2019-03-08 15:30:29.815: [    CSSD][5]clssgmpcBuildNodeList: nodename for node 0 is NULL
2019-03-08 15:30:30.569: [    CSSD][39]clssnmPollingThread: node swcrm1 (1) at 50% heartbeat fatal, removal in 14.018 seconds
2019-03-08 15:30:30.569: [    CSSD][39]clssnmPollingThread: node swcrm1 (1) is impending reconfig, flag 2229260, misstime 15982
2019-03-08 15:30:30.569: [    CSSD][39]clssnmPollingThread: local diskTimeout set to 27000 ms, remote disk timeout set to 27000, impending reconfig status(1)
2019-03-08 15:30:30.569: [    CSSD][29]clssnmvDHBValidateNcopy: node 1, swcrm1, has a disk HB, but no network HB, DHB has rcfg 433013533, wrtcnt, 50763125, LATS 4056051052, lastSeqNo 50762918, uniqueness 1537984150, timestamp 1552030230/4135610688
2019-03-08 15:30:30.569: [    CSSD][32]clssnmvDHBValidateNcopy: node 1, swcrm1, has a disk HB, but no network HB, DHB has rcfg 433013533, wrtcnt, 50763126, LATS 4056051052, lastSeqNo 50763087, uniqueness 1537984150, timestamp 1552030230/4135611038
2019-03-08 15:30:30.570: [    CSSD][35]clssnmvDHBValidateNcopy: node 1, swcrm1, has a disk HB, but no network HB, DHB has rcfg 433013533, wrtcnt, 50763127, LATS 4056051052, lastSeqNo 50762920, uniqueness 1537984150, timestamp 1552030230/4135611056
2019-03-08 15:30:30.621: [    CSSD][30]clssnmvDiskPing: Writing with status 0x3, timestamp 1552030230/4056051104
2019-03-08 15:30:30.635: [    CSSD][33]clssnmvDiskPing: Writing with status 0x3, timestamp 1552030230/4056051118
2019-03-08 15:30:30.907: [    CSSD][5]clssscSelect: cookie accept request 100a27af0
............
2019-03-08 15:30:37.131: [    CSSD][28](:CSSNM00005:)clssnmvDiskKillCheck: Aborting, evicted by node swcrm1, number 1, sync 433013533, stamp 4135617359
2019-03-08 15:30:37.131: [    CSSD][28]clssnmRemoveNodeInTerm: node 2, swcrm2 terminated due to Normal Shutdown. Removing from member and connected bitmaps
2019-03-08 15:30:37.131: [    CSSD][30]clssnmvDiskPing: Writing with status 0x3, timestamp 1552030237/4056057614
2019-03-08 15:30:37.131: [    CSSD][28]###################################
2019-03-08 15:30:37.131: [    CSSD][28]clssscExit: CSSD aborting from thread clssnmvKillBlockThread
2019-03-08 15:30:37.131: [    CSSD][28]###################################
2019-03-08 15:30:37.131: [    CSSD][28](:CSSSC00012:)clssscExit: A fatal error occurred and the CSS daemon is terminating abnormally
2019-03-08 15:30:37.131: [    CSSD][28]clssnmSendMeltdownStatus: node swcrm2, number 2, has experienced a failure in thread number 10 and is shutting down
2019-03-08 15:30:37.132: [    CSSD][28]clssscExit: Starting CRSD cleanup
............ (CRSD resource cleanup starts here)
2019-03-08 15:30:40.767: [    CSSD][28]clssscExit: CRSD cleanup status 0
2019-03-08 15:30:40.768: [    CSSD][28]clssscExit: CRSD cleanup successfully completed


2. Analysis of the node 2 operating system reboot

About the rebootless restart feature of 11.2 clusterware:

Starting with clusterware version 11.2.0.2, the Oracle rebootless restart feature makes the Grid Infrastructure (GI) restart the cluster stack, rather than reboot the node, in the following situations:

1. A node loses network heartbeats continuously for longer than misscount.

2. A node cannot access the majority of the voting files (VF).

3. A member kill is escalated to a node kill.

In earlier versions, the clusterware (CRS) would reboot the node directly in these situations.

Before restarting the cluster stack, GRID first performs a graceful shutdown of the cluster; the basic steps are as follows.

1. Stop all heartbeats on the local node (network heartbeat, disk heartbeat and local heartbeat).

2. Notify the cssd agent that ocssd.bin is about to stop.

3. Stop all I/O-capable processes registered with CSS, for example lmon.

4. cssd notifies crsd to stop all resources; if crsd cannot stop all resources successfully, a node reboot still occurs.

5. cssd waits for all I/O-capable processes to exit; if these processes cannot all exit within the short I/O timeout, a node reboot still occurs.

6. Notify the cssd agent that all I/O-capable processes have exited.

7. ohasd restarts the cluster stack.

8. The local node notifies the other nodes to perform a cluster reconfiguration.

In special cases, for example when ocssd.bin itself is in trouble (e.g. hung) or the reboot is caused by operating system performance problems, rebootless restart cannot take effect, because in those situations ocssd.bin is no longer working properly and a node reboot remains unavoidable.
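The countdown messages quoted in 1.1 and 1.2 (CRS-1612/1611/1610 at 50%/75%/90%) are driven by the CSS misscount, i.e. the network heartbeat timeout (30 seconds here, matching the misstime(30000) entry in the node 1 CSSD log), while the disk heartbeat is governed by the disk timeout (shortened to 27000 ms during the impending reconfiguration, as the node 2 CSSD log shows). A minimal sketch for reading these settings on a node is given below; it assumes crsctl can be invoked (adjust CRSCTL to the GRID home if it is not on PATH) and that the one-line 11.2-style output applies.

#!/usr/bin/env python
# Minimal sketch: read the CSS heartbeat thresholds behind the CRS-1612/1611/1610
# countdown. Assumes it runs on a cluster node as a user allowed to execute crsctl.
import re
import subprocess

CRSCTL = "crsctl"  # e.g. "/oracle/11.2.0/grid/bin/crsctl" if not on PATH

def css_setting(name):
    """Return an integer CSS setting such as misscount or disktimeout (seconds)."""
    out = subprocess.check_output([CRSCTL, "get", "css", name]).decode().strip()
    # Expected 11.2-style output, e.g.:
    # "CRS-4678: Successful get misscount 30 for Cluster Synchronization Services."
    match = re.search(r"(\d+)\D*$", out)
    if not match:
        raise ValueError("unexpected crsctl output: %r" % out)
    return int(match.group(1))

if __name__ == "__main__":
    misscount = css_setting("misscount")      # network heartbeat timeout, default 30s
    disktimeout = css_setting("disktimeout")  # disk heartbeat timeout, default 200s
    print("misscount=%ds disktimeout=%ds" % (misscount, disktimeout))
    for pct in (50, 75, 90):
        print("CRS warning at %d%%: about %.1fs after the last network heartbeat"
              % (pct, misscount * pct / 100.0))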


This cluster was running database clusterware version 11.2.0.4.0. When a node is evicted because of lost network heartbeats, normally only the cluster software is restarted and the operating system is not rebooted. Analysis of the CSSD process log on cluster node 2 shows that after node 2 was evicted from the cluster at 2019-03-08 15:30, the GRID cluster software on node 2 was started again four times, and the host rebooted before the last of these starts. The detailed log analysis follows.

2.1 Number of CSSD process starts

2019-03-08 15:30:48.991: [    CSSD][1]clssscmain: Starting CSS daemon, version 11.2.0.4.0, in (clustered) mode with uniqueness value 1552030248  =====>>>>> no host reboot before this start
2019-03-08 16:06:37.698: [    CSSD][1]clssscmain: Starting CSS daemon, version 11.2.0.4.0, in (clustered) mode with uniqueness value 1552032397  =====>>>>> no host reboot before this start
2019-03-08 16:17:27.769: [    CSSD][1]clssscmain: Starting CSS daemon, version 11.2.0.4.0, in (clustered) mode with uniqueness value 1552033047  =====>>>>> no host reboot before this start
2019-03-08 16:27:26.269: [    CSSD][1]clssscmain: Starting CSS daemon, version 11.2.0.4.0, in (clustered) mode with uniqueness value 1552033646  =====>>>>> host reboot before this start
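For reference, the start events above and the CRSD cleanup results examined in 2.2 can be extracted from ocssd.log with a small scan along the following lines; the log path is the node 2 path quoted earlier, and the patterns simply match the 11.2 log lines shown in this analysis.

#!/usr/bin/env python
# Minimal sketch: list CSS daemon starts and CRSD cleanup results from ocssd.log.
# The path below is the node 2 ocssd.log quoted earlier; adjust as needed.
import re

OCSSD_LOG = "/oracle/11.2.0/grid/log/crm2/cssd/ocssd.log"

TS = r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+):"
START_RE = re.compile(TS + r".*clssscmain: Starting CSS daemon.*uniqueness value (\d+)")
CLEANUP_RE = re.compile(TS + r".*clssscExit: CRSD cleanup status (\d+)")

with open(OCSSD_LOG) as log:
    for line in log:
        m = START_RE.match(line)
        if m:
            print("CSS start    %s  (uniqueness %s)" % (m.group(1), m.group(2)))
            continue
        m = CLEANUP_RE.match(line)
        if m:
            status = int(m.group(2))
            note = "clean shutdown" if status == 0 else "cleanup failed - host reboot possible"
            print("CRSD cleanup %s  status %d (%s)" % (m.group(1), status, note))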

2.2 Shutdown log information before the GRID restarts:

The first cluster shutdown on node 2 completed normally:

2019-03-08 15:30:40.767: [    CSSD][28]clssscExit: CRSD cleanup status 0
2019-03-08 15:30:40.768: [    CSSD][28]clssscExit: CRSD cleanup successfully completed

The second cluster shutdown on node 2 completed normally:

2019-03-08 16:06:32.101: [    CSSD][28]clssscExit: CRSD cleanup status 0
2019-03-08 16:06:32.102: [    CSSD][5]clssgmProcClientReqs: Checking RPC Q
2019-03-08 16:06:32.102: [    CSSD][5]clssgmProcClientReqs: Checking dead client Q
2019-03-08 16:06:32.102: [    CSSD][5]clssgmProcClientReqs: Checking dead proc Q
2019-03-08 16:06:32.102: [    CSSD][28]clssscExit: CRSD cleanup successfully completed

The third cluster shutdown on node 2 did not complete normally: during this shutdown of the cluster software the CRSD resource cleanup failed, so the host rebooted.

2019-03-08 16:20:29.538: [    CSSD][28]clssscExit: CRSD cleanup status 184
2019-03-08 16:20:29.538: [    CSSD][28]clssscExit: CRSD cleanup failed with 184

In detail:

2019-03-08 16:20:29.296: [    CSSD][28]clssnmRemoveNodeInTerm: node 2, swcrm2 terminated due to Normal Shutdown. Removing from member and connected bitmaps
2019-03-08 16:20:29.296: [    CSSD][28]###################################
2019-03-08 16:20:29.296: [    CSSD][28]clssscExit: CSSD aborting from thread clssnmvKillBlockThread
2019-03-08 16:20:29.296: [    CSSD][28]###################################
2019-03-08 16:20:29.296: [    CSSD][28](:CSSSC00012:)clssscExit: A fatal error occurred and the CSS daemon is terminating abnormally
2019-03-08 16:20:29.296: [    CSSD][28]clssnmSendMeltdownStatus: node swcrm2, number 2, has experienced a failure in thread number 10 and is shutting down
2019-03-08 16:20:29.296: [    CSSD][34](:CSSNM00005:)clssnmvDiskKillCheck: Aborting, evicted by node swcrm1, number 1, sync 433013537, stamp 4138609451
2019-03-08 16:20:29.296: [    CSSD][34]clssnmRemoveNodeInTerm: node 2, swcrm2 terminated due to Normal Shutdown. Removing from member and connected bitmaps
2019-03-08 16:20:29.296: [    CSSD][34]clssscExit: abort already set 1
2019-03-08 16:20:29.296: [    CSSD][28]clssscExit: Starting CRSD cleanup
............
2019-03-08 16:20:29.538: [    CSSD][28]clssscExit: CRSD cleanup status 184
2019-03-08 16:20:29.538: [    CSSD][28]clssscExit: CRSD cleanup failed with 184
2019-03-08 16:27:26.268: [    CSSD][1]clsu_load_ENV_levels: Module = CSSD, LogLevel = 2, TraceLevel = 0

III. Summary and follow-up recommendations

At 2019-03-08 15:30:40, the cluster nodes lost network heartbeats with each other and node 2 was evicted from the cluster as a result. Node 2 then automatically restarted the cluster software, repeatedly attempting to restart the cluster stack at 2019-03-08 15:30:48, 16:06:37 and 16:17:27, but it could not rejoin the cluster because the network heartbeat was still abnormal.

After each failed attempt to restart GRID and rejoin the cluster, node 2 was repeatedly evicted by the cluster's split-brain mechanism; thanks to the rebootless restart feature introduced in 11.2.0.2, only the GRID cluster software was restarted and the host was not rebooted. At 16:20:29, however, the CRSD resources could not be cleaned up successfully during the cluster shutdown, which triggered the operating-system reboot mechanism and caused the host to reboot as well.

The root cause of the network heartbeat anomaly was that another IP on the network conflicted with the cluster's private interconnect IP address. Once this problem had been dealt with, the cluster started normally after the host reboot, and the cluster status is currently normal.
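To reduce the chance of this kind of private-IP conflict recurring, the interconnect address can be probed for a duplicate before the clusterware is brought up. Below is a minimal sketch, assuming the iputils arping utility is installed, root privileges, and example interface and address values (replace them with the actual private interconnect settings); arping -D performs duplicate address detection and reports whether another MAC answers for the address.

#!/usr/bin/env python
# Minimal sketch: duplicate-address check for the private interconnect IP.
# Interface and IP below are placeholders. Requires root and the iputils
# "arping" utility; "arping -D" exits non-zero when another host answers.
import subprocess
import sys

PRIVATE_IF = "eth1"            # example private interconnect interface
PRIVATE_IP = "192.168.10.2"    # example private interconnect IP of this node

def ip_is_duplicated(interface, ip):
    cmd = ["arping", "-D", "-c", "3", "-w", "3", "-I", interface, ip]
    result = subprocess.call(cmd)
    # iputils arping -D: exit status 0 means no reply was received (address free);
    # non-zero means another MAC answered, i.e. the IP is already in use elsewhere.
    return result != 0

if __name__ == "__main__":
    if ip_is_duplicated(PRIVATE_IF, PRIVATE_IP):
        print("WARNING: %s on %s is answered by another host - "
              "resolve the conflict before starting GRID" % (PRIVATE_IP, PRIVATE_IF))
        sys.exit(1)
    print("No duplicate detected for %s on %s" % (PRIVATE_IP, PRIVATE_IF))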
