Oracle ASM: Theory and Practice
ASM
Before we talk about ASM, let's first look at RAID0 and RAID1:
RAID0: needs at least two disks. Take two 100 GB disks as an example: together they form a 200 GB array, and user data is striped across the full 200 GB. If either disk fails, the whole RAID0 array becomes unusable. Safety is poor, but read/write throughput is high and the full capacity is usable.
RAID1: also needs at least two disks. With two 100 GB disks you get a 100 GB array: user data is only written into that 100 GB, meaning every write goes to both disks, so the two disks hold identical data and each one is a complete copy of the other. If one disk fails, the second keeps serving, so safety is high. Write performance does not scale, because every write has to reach all mirrors; throughput stays roughly at the level of a single member disk no matter how many disks you add. Capacity does not grow either: adding a third 100 GB disk to the two 100 GB mirrors still leaves the array at 100 GB.
- ASM is, in spirit, similar to RAID1. On Windows or Linux, building RAID yourself carries real cost: both the hardware needed for high availability and the learning curve are significant. Oracle provides ASM (Automatic Storage Management), which can build highly available disk groups for you using nothing more than Oracle commands.
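To make the RAID1 comparison concrete, here is a minimal sketch (not part of the installation below, which uses EXTERNAL redundancy) of ASM doing its own mirroring: a disk group created with NORMAL redundancy keeps two copies of every extent in separate failure groups, much like RAID1. The disk group name, failure group names and device paths are placeholders, not values taken from this article.
su - grid
sqlplus / as sysasm
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/asmdisks/asm-s0d1'
  FAILGROUP fg2 DISK '/dev/asmdisks/asm-s1d2';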
That is a quick overview of what ASM is. While researching how to set up 11g ASM, I found an article that covers the Grid Infrastructure build in great detail, so this post will not walk through the Grid setup step by step; you can study it via the following link (11g 11.2.0.4 ASM + single-instance silent install):
Oracle ASM單實例靜默安裝+升級_總結、分享、交流-CSDN博客_oracle靜默安裝asm
Here are some of the problems I ran into while installing ASM:
1. It is best not to partition ASM disks (use the raw device) or to create only a single partition. ASM disks do not need to be mounted.
2. After partitioning a raw disk you may find that udev does not recognize it; try re-attaching the disk or switching to a different rule-binding method. If that still fails, fall back to using raw devices as the ASM disk group members.
3. Keep the ASM partitions (or raw devices) in a disk group the same size. For example, with three disks, make them equal in size; this makes it easier for the database to recognize them and to balance writes across them.
4. When preparing ASM disks, you can either use the RPM package approach (ASMLib) or, if you are comfortable with Linux udev, bind the disks with udev rules. I recommend udev first, since it ships with the OS and covers every installation scenario.
5. Disks larger than 2 TB must first be converted to a GPT partition table with parted, otherwise the ASM installer will not recognize them.
Those are the issues I hit during the ASM installation; I installed silently on Linux, and if you have run into other problems, feel free to add them so we can work through them together.
/*
Oracle ASM installation steps
*/
-- Format and partition the disks
Use parted for disks larger than 2 TB (they must be converted to a GPT partition table) and fdisk for smaller disks.
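As a reference, a minimal partitioning sketch; /dev/sdc stands for a hypothetical disk larger than 2 TB and /dev/sdb for a smaller one, so substitute your own device names:
-- disk larger than 2 TB: convert to GPT and create one partition spanning the whole disk
parted -s /dev/sdc mklabel gpt
parted -s /dev/sdc mkpart primary 1MiB 100%
-- smaller disk: interactive fdisk (n to create a partition, w to write the table)
fdisk /dev/sdb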
-- Bind the disk partitions with udev rules. The rules file lives at the path below; create it if it does not exist:
/etc/udev/rules.d/99-oracle-asmdevices.rules
-- Query the disk's unique ID, and bind the udev rule by that ID (Linux 6):
/sbin/scsi_id -g -u -d /dev/sdb
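On Linux 7 and later the binary lives under /usr/lib/udev instead of /sbin; a hedged equivalent of the command above would be:
/usr/lib/udev/scsi_id -g -u -d /dev/sdb
-- or with the long options
/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb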
/*
Here I partitioned three disks. Two of them are solid-state (NVMe) disks, which ended up with 6 partitions each because of the partition-table layout; the remaining mechanical disk, for the same reason, got 8 partitions.
The keys used in my rules are:
-- KERNEL    (partition/disk name; a group pattern such as sdb[1-5] matches sdb1, sdb2 ... sdb5)
-- SUBSYSTEM (the device's subsystem; for example, sda belongs to the block subsystem)
-- PROGRAM   (an external command used to identify the device; you can match via the parent disk or via the partition itself)
-- RESULT    (the unique ID the rule must match, i.e. the output returned by PROGRAM)
-- NAME      (the alias; once the rule matches, the device appears under /dev with this name)
-- OWNER     (the OS user that owns the ASM disk)
-- GROUP     (the OS group of the ASM disk)
-- MODE      (permissions on the device node; 0660 in this setup)
*/
-- Set up the rules
[root@sv133-db1 dev]# vim /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="nvme0n1p1",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="SNVMe_HWE32P43032M000031YSRFSK5000287",NAME="asmdisks/asm-s0d1",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="nvme0n1p2",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="SNVMe_HWE32P43032M000031YSRFSK5000287",NAME="asmdisks/asm-s0d2",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="nvme0n1p3",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="SNVMe_HWE32P43032M000031YSRFSK5000287",NAME="asmdisks/asm-s0d3",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="nvme0n1p4",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="SNVMe_HWE32P43032M000031YSRFSK5000287",NAME="asmdisks/asm-s0d4",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="nvme0n1p5",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="SNVMe_HWE32P43032M000031YSRFSK5000287",NAME="asmdisks/asm-s0d5",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="nvme1n1p2",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="SNVMe_HWE32P43032M000031YSRFSJ7000570",NAME="asmdisks/asm-s1d2",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="nvme1n1p3",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="SNVMe_HWE32P43032M000031YSRFSJ7000570",NAME="asmdisks/asm-s1d3",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="nvme1n1p4",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="SNVMe_HWE32P43032M000031YSRFSJ7000570",NAME="asmdisks/asm-s1d4",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="nvme1n1p5",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="SNVMe_HWE32P43032M000031YSRFSJ7000570",NAME="asmdisks/asm-s1d5",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="nvme1n1p6",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="SNVMe_HWE32P43032M000031YSRFSJ7000570",NAME="asmdisks/asm-s1d6",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdb2",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="36b44326cacf94000265f888ea03e21b6",NAME="asmdisks/asm-h0d2",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdb3",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="36b44326cacf94000265f888ea03e21b6",NAME="asmdisks/asm-h0d3",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdb4",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="36b44326cacf94000265f888ea03e21b6",NAME="asmdisks/asm-h0d4",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdb5",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="36b44326cacf94000265f888ea03e21b6",NAME="asmdisks/asm-h0d5",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdb6",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="36b44326cacf94000265f888ea03e21b6",NAME="asmdisks/asm-h0d6",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdb7",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="36b44326cacf94000265f888ea03e21b6",NAME="asmdisks/asm-h0d7",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="sdb8",SUBSYSTEM=="block",PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent",RESULT=="36b44326cacf94000265f888ea03e21b6",NAME="asmdisks/asm-h0d8",OWNER="grid",GROUP="asmadmin",MODE="0660"
[root@sv133-db1 dev]# ll /dev/asmdisks/
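If the aliases do not appear under /dev/asmdisks right after the rules file is saved, the rules usually need to be reloaded first; a rough sketch (the exact commands depend on the release):
-- RHEL/OEL 6
start_udev
-- RHEL/OEL 7 and later
udevadm control --reload-rules
udevadm trigger --type=devices --action=change
-- then re-check owner, group and permissions of the aliases
ls -l /dev/asmdisks/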
-- Edit the Grid installation response file
vim /opt/soft/grid/response/grid_install.rsp
-- Most of the response file can simply be filled in from your Grid environment settings; the parameters below need special attention:
oracle.install.option=HA_CONFIG -- I am installing ASM for a single-instance database, so HA_CONFIG is the right choice here; if you need a cluster, choose CRS_CONFIG instead, depending on your situation.
oracle.install.crs.config.storageOption=ASM_STORAGE -- as the parameter name suggests, we want ASM_STORAGE here rather than FILE_SYSTEM_STORAGE.
oracle.install.asm.diskGroup.name=SDATA -- note that my disk aliases fall into two groups: names starting with s are the solid-state disks and names starting with h are the mechanical disks. ASM manages devices best when they are of the same type and size (that combination gives the most consistent performance), so the two SSDs form the first ASM disk group (SDATA); the mechanical disk can be added after the Grid installation as a second group named HDATA.
oracle.install.asm.diskGroup.redundancy=EXTERNAL -- the disk group's redundancy level has three options: EXTERNAL, NORMAL and HIGH. EXTERNAL means ASM does no mirroring of its own and relies on outside protection such as hardware RAID, while NORMAL and HIGH add two-way and three-way mirroring. EXTERNAL is used here.
oracle.install.asm.diskGroup.AUSize=1 -- the ASM allocation unit size in MB; 1 is the minimum (11.2 accepts 1, 2, 4, 8, 16, 32 or 64). Choose according to your workload.
oracle.install.asm.diskGroup.disks=/dev/asmdisks/asm-s0d1,/dev/asmdisks/asm-s0d2,/dev/asmdisks/asm-s0d3,/dev/asmdisks/asm-s0d4,/dev/asmdisks/asm-s0d5,/dev/asmdisks/asm-s1d2,/dev/asmdisks/asm-s1d3,/dev/asmdisks/asm-s1d4,/dev/asmdisks/asm-s1d5,/dev/asmdisks/asm-s1d6 -- the member disks of the group, referenced by their udev aliases.
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asmdisks/* -- the discovery string ASM scans for candidate disks; a wildcard pattern is enough, there is no need to list every device path.
-- Check the response file
[grid@sv133-db1 response]$ cat grid_install.rsp | grep -v ^# | tr -s '\n'   # strip comments and squeeze blank lines
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v11_2_0
ORACLE_HOSTNAME=sv133-db1
INVENTORY_LOCATION=/opt/app/oraInventory
SELECTED_LANGUAGES=en,zh_CN
oracle.install.option=HA_CONFIG
ORACLE_BASE=/opt/app/grid
ORACLE_HOME=/opt/app/grid/11.2.0/grid_home
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.gpnp.scanName=
oracle.install.crs.config.gpnp.scanPort=
oracle.install.crs.config.clusterName=
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=
oracle.install.crs.config.autoConfigureClusterNodeVIP=
oracle.install.crs.config.clusterNodes=
oracle.install.crs.config.networkInterfaceList=
oracle.install.crs.config.storageOption=ASM_STORAGE
oracle.install.crs.config.sharedFileSystemStorage.diskDriveMapping=
oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations=
oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=NORMAL
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=
oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=NORMAL
oracle.install.crs.config.useIPMI=false
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
oracle.install.asm.SYSASMPassword=oracle
oracle.install.asm.diskGroup.name=SDATA
oracle.install.asm.diskGroup.redundancy=EXTERNAL
oracle.install.asm.diskGroup.AUSize=1
oracle.install.asm.diskGroup.disks=/dev/asmdisks/asm-s0d1,/dev/asmdisks/asm-s0d2,/dev/asmdisks/asm-s0d3,/dev/asmdisks/asm-s0d4,/dev/asmdisks/asm-s0d5,/dev/asmdisks/asm-s1d2,/dev/asmdisks/asm-s1d3,/dev/asmdisks/asm-s1d4,/dev/asmdisks/asm-s1d5,/dev/asmdisks/asm-s1d6
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/asmdisks/*
oracle.install.asm.monitorPassword=oracle
oracle.install.crs.upgrade.clusterNodes=
oracle.install.asm.upgradeASM=false
oracle.installer.autoupdates.option=
oracle.installer.autoupdates.downloadUpdatesLoc=
AUTOUPDATES_MYORACLESUPPORT_USERNAME=
AUTOUPDATES_MYORACLESUPPORT_PASSWORD=
PROXY_HOST=
PROXY_PORT=
PROXY_USER=
PROXY_PWD=
PROXY_REALM=
-- Install the Grid Infrastructure software
[grid@sv133-db1 grid]$ ./runInstaller -silent -showProgress -ignoreSysPrereqs -ignorePrereq -responseFile /opt/soft/grid/response/grid_install.rsp
Starting Oracle Universal Installer…
Checking Temp space: must be greater than 120 MB. Actual 7505 MB Passed
Checking swap space: must be greater than 150 MB. Actual 16383 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-09-07_02-39-55PM. Please wait …[grid@sv133-db1 grid]$ [WARNING] [INS-30011] The SYS password entered does not conform to the Oracle recommended standards.
CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
ACTION: Provide a password that conforms to the Oracle recommended standards.
[WARNING] [INS-30011] The ASMSNMP password entered does not conform to the Oracle recommended standards.
CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
ACTION: Provide a password that conforms to the Oracle recommended standards.
[WARNING] [INS-32016] The selected Oracle home contains directories or files.
CAUSE: The selected Oracle home contained directories or files.
ACTION: To start with an empty Oracle home, either remove its contents or choose another location.
You can find the log of this install session at:
/opt/app/oraInventory/logs/installActions2020-09-07_02-39-55PM.log
Prepare in progress.
… 9% Done.
Prepare successful.
Copy files in progress.
… 14% Done.
… 19% Done.
… 24% Done.
… 29% Done.
… 35% Done.
Copy files successful.
… 57% Done.
Link binaries in progress.
Link binaries successful.
… 73% Done.
Setup files in progress.
… 89% Done.
Setup files successful.
The installation of Oracle Grid Infrastructure 11g was successful.
Please check ‘/opt/app/oraInventory/logs/silentInstall2020-09-07_02-39-55PM.log’ for more details.
… 94% Done.
Execute Root Scripts in progress.
As a root user, execute the following script(s):
1. /opt/app/oraInventory/orainstRoot.sh
2. /opt/app/grid/11.2.0/grid_home/root.sh
… 100% Done.
Execute Root Scripts successful.
As install user, execute the following script to complete the configuration.
1. /opt/app/grid/11.2.0/grid_home/cfgtoollogs/configToolAllCommands RESPONSE_FILE=<response_file>
Successfully Setup Software.
-- If you see output like the above, the installation has essentially completed, but two scripts still have to be run. Open a second terminal (do not close the window running the installer; let it finish and exit on its own) and run the following as the root user:
/opt/app/oraInventory/orainstRoot.sh
/opt/app/grid/11.2.0/grid_home/root.sh
-- When the second script has finished, output like the following indicates success:
Using configuration parameter file: /opt/app/grid/11.2.0/grid_home/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user ‘grid’, privgrp ‘oinstall’…
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user ‘root’, privgrp ‘root’…
Operation successful.
CRS-4664: Node sv133-db1 successfully pinned.
Adding Clusterware entries to upstart
sv133-db1 2020/09/07 14:43:53 /opt/app/grid/11.2.0/grid_home/cdata/sv133-db1/backup_20200907_144353.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
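At this point a quick sanity check of the Oracle Restart stack can be done as the grid user; a hedged sketch:
crsctl check has
-- should report that Oracle High Availability Services is online
crs_stat -t -v
-- lists the registered resources (ora.asm, ora.cssd, the listener, ...)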
-- Create the file that stores the ASM passwords
vi /opt/soft/cfgrsp.properties -- the passwords here must match the ones in the response file
oracle.assistants.asm|S_ASMPASSWORD=oracle
oracle.assistants.asm|S_ASMMONITORPASSWORD=oracle
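Since this properties file holds the SYSASM and ASMSNMP passwords in clear text, it is worth tightening its permissions (and deleting it once the configuration step has finished), for example:
chmod 600 /opt/soft/cfgrsp.properties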
-- Following the installer's instruction ("As install user, execute the following script to complete the configuration."), continue as the install (grid) user:
/opt/app/grid/11.2.0/grid_home/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/opt/soft/cfgrsp.properties -- run it against the password file created above
perform - mode is starting for action: configure
perform - mode finished for action: configure
You can see the log file: /opt/app/grid/11.2.0/grid_home/cfgtoollogs/oui/configActions2020-09-07_02-52-01-PM.log
-- Once the lines above have been printed, check the log.
[grid@sv133-db1 opt]$ cat /opt/app/grid/11.2.0/grid_home/cfgtoollogs/oui/configActions2020-09-07_02-52-01-PM.log
###################################################
The action configuration is performing
The plug-in Update Inventory is running
/opt/app/grid/11.2.0/grid_home/oui/bin/runInstaller -nowait -noconsole -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true “CLUSTER_NODES={}” ORACLE_HOME=/opt/app/grid/11.2.0/grid_home
Starting Oracle Universal Installer…
Checking swap space: must be greater than 500 MB. Actual 16383 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /opt/app/oraInventory
The plug-in Update Inventory has successfully been performed
The plug-in Oracle Cluster Verification Utility is running
Performing post-checks for Oracle Restart configuration
Checking Oracle Restart integrity…
Oracle Restart integrity check passed
Checking OLR integrity…
Checking OLR config file…
OLR config file check successful
Checking OLR file attributes…
OLR file check successful
WARNING:
This check does not verify the integrity of the OLR contents. Execute ‘ocrcheck -local’ as a privileged user to verify the contents of OLR.
OLR integrity check passed
Post-check for Oracle Restart configuration was successful.
The plug-in Oracle Cluster Verification Utility has successfully been performed
The action configuration has successfully completed
##################################################
-- If the log ends like this, the WARNING can be ignored and the ASM / Grid Infrastructure installation has completed successfully.
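Before moving on, you can also confirm that the ASM instance itself is running; a hedged check as the grid user:
srvctl status asm
-- expected output along the lines of: ASM is running on sv133-db1
ps -ef | grep asm_pmon | grep -v grep
-- the +ASM background process (asm_pmon_+ASM) should be listed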
【Removing ASM】
1. Delete all files under every path referenced by the grid user's environment variables.
2. Find out from the installation scripts where the OCR and OLR registry files were placed and delete those as well; there are usually two locations:
OCR_LOC=/var/opt/oracle/ocr.loc
OCR_LOC=/etc/oracle/ocr.loc
3. Then simply reinstall.
4. This assumes that no disk group has been created yet and you only want to remove the ASM software itself.
5. dd if=/dev/zero of=/dev/asmdisks/asm-s1d6 bs=8192 count=128000 -- zero out the ASM disk header so the disk can be reused when ASM is reinstalled (a loop over all the aliases is sketched below).
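As referenced in step 5, a small loop (a sketch only; double-check the alias pattern before running it, because it destroys the disk contents) that wipes every bound ASM device in one go:
-- zero roughly the same amount per disk as the single-disk command above
for d in /dev/asmdisks/asm-*; do
  dd if=/dev/zero of="$d" bs=1M count=1000
done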
【Problems encountered during the installation】
1. The ASM software installed successfully, but the disk groups were not created according to the response file. They can be created manually with the following commands from $ORACLE_HOME/bin; the flags and values match what is in the response file:
-- create the SSD disk group first
./asmca -silent -configureASM -diskString '/dev/asmdisks/*' -diskGroupName SDATA -diskList /dev/asmdisks/asm-s0d1,/dev/asmdisks/asm-s0d2,/dev/asmdisks/asm-s0d3,/dev/asmdisks/asm-s0d4,/dev/asmdisks/asm-s0d5,/dev/asmdisks/asm-s1d2,/dev/asmdisks/asm-s1d3,/dev/asmdisks/asm-s1d4,/dev/asmdisks/asm-s1d5,/dev/asmdisks/asm-s1d6 -redundancy EXTERNAL -au_size 1
-- then the mechanical-disk group
./asmca -silent -configureASM -diskString '/dev/asmdisks/*' -diskGroupName HDATA -diskList /dev/asmdisks/asm-h0d2,/dev/asmdisks/asm-h0d3,/dev/asmdisks/asm-h0d4,/dev/asmdisks/asm-h0d5,/dev/asmdisks/asm-h0d6,/dev/asmdisks/asm-h0d7,/dev/asmdisks/asm-h0d8 -redundancy EXTERNAL -au_size 1
2. Finally, if the cluster status command below does not list the following resources (whether they are ONLINE or not does not matter), the installation did not succeed:
[grid@sv133-db1 bin]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
ora.HDATA.dg ora…up.type 0/5 0/ ONLINE ONLINE sv133-db1
ora…ER.lsnr ora…er.type 0/5 0/ ONLINE ONLINE sv133-db1
ora.SDATA.dg ora…up.type 0/5 0/ ONLINE ONLINE sv133-db1
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE sv133-db1
ora.cssd ora.cssd.type 0/5 0/5 ONLINE ONLINE sv133-db1
ora.diskmon ora…on.type 0/10 0/5 OFFLINE OFFLINE
ora.evmd ora.evm.type 0/10 0/5 ONLINE ONLINE sv133-db1
ora.ons ora.ons.type 0/3 0/ OFFLINE OFFLINE
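Once both disk groups exist they can also be checked from the ASM side; a hedged sketch as the grid user:
asmcmd lsdg
-- or via SQL*Plus
sqlplus / as sysasm
select name, state, type, total_mb, free_mb from v$asm_diskgroup;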