Silent Installation of 11gR2 RAC on AIX

November 8, 2019

This article walks through the silent installation of 11gR2 RAC on AIX, covering basic usage, practical tips, and the underlying mechanics. I hope it is helpful.

Installing 11gR2 RAC on AIX

 

1.1  BLOG Document Structure


 

 

1.2  Preface

 

1.2.1  Reader's Guide and Notes

Dear fellow technologists: after reading this article you will have mastered the following skills, and you may also pick up some other knowledge you did not know before ~O(∩_∩)O~:

① Installing RAC on AIX (the focus of this article)

② Silently installing the RAC software

③ Silently creating a RAC database with dbca

 

  Tips

       ① If the code formatting in this article appears garbled, try the QQ, Sogou, or 360 browser, or download the PDF version of this document instead. PDF download address: http://yunpan.cn/cdEQedhCs2kFz (access code: ed9b)

       ② In this BLOG, the parts of command output that deserve special attention are marked with a gray background and pink font. In the sample below, for instance, the points to note are that the highest archived log sequence number is 33 for thread 1 and 43 for thread 2. Commands themselves generally use a yellow background and red font, and comments on code or its output are generally in blue.

 

  List of Archived Logs in backup set 11

  Thrd Seq     Low SCN    Low Time            Next SCN   Next Time

  ---- ------- ---------- ------------------- ---------- ---------

  1    32      1621589    2015-05-29 11:09:52 1625242    2015-05-29 11:15:48

  1    33      1625242    2015-05-29 11:15:48 1625293    2015-05-29 11:15:58

  2    42      1613951    2015-05-29 10:41:18 1625245    2015-05-29 11:15:49

  2    43      1625245    2015-05-29 11:15:49 1625253    2015-05-29 11:15:53

 

 

 

 


If there are any errors or gaps in this article, please do point them out, either in the ITPUB comments or over QQ; your corrections are the greatest motivation for my writing.

 

 

1.2.2  Related Article Links

Building RAC in a Linux environment:

Step-by-step oracle 11gR2 rac + dg: prequel (1)  http://blog.itpub.net/26736162/viewspace-1290405/

Step-by-step oracle 11gR2 rac+dg: environment preparation (2)  http://blog.itpub.net/26736162/viewspace-1290416/

Step-by-step oracle 11gR2 rac+dg: shared disk setup (3)  http://blog.itpub.net/26736162/viewspace-1291144/

Step-by-step oracle 11gR2 rac+dg: grid installation (4)  http://blog.itpub.net/26736162/viewspace-1297101/

Step-by-step oracle 11gR2 rac+dg: database installation (5)  http://blog.itpub.net/26736162/viewspace-1297113/

Step-by-step 11gR2 rac+dg: resolving problems during rac installation (6)  http://blog.itpub.net/26736162/viewspace-1297128/

Step-by-step 11gR2 rac+dg: DG machine configuration (7)  http://blog.itpub.net/26736162/viewspace-1298733/

Step-by-step 11gR2 rac+dg: configuring a single-instance DG (8)  http://blog.itpub.net/26736162/viewspace-1298735/

Step-by-step 11gR2 rac+dg: DG SWITCHOVER (9)  http://blog.itpub.net/26736162/viewspace-1328050/

Step-by-step 11gR2 rac+dg: wrap-up (10)  http://blog.itpub.net/26736162/viewspace-1328156/

[RAC] How to make Oracle RAC crs_stat display complete output  http://blog.itpub.net/26736162/viewspace-1610957/

How to create ASM disks  http://blog.itpub.net/26736162/viewspace-1401193/

Uninstalling RAC on Linux: http://blog.itpub.net/26736162/viewspace-1630145/

 

 

[RAC] RAC For W2K8R2 installation -- overall planning (1): http://blog.itpub.net/26736162/viewspace-1721232/

[RAC] RAC For W2K8R2 installation -- OS environment configuration (2): http://blog.itpub.net/26736162/viewspace-1721253/

[RAC] RAC For W2K8R2 installation -- shared disk configuration (3): http://blog.itpub.net/26736162/viewspace-1721270/

[RAC] RAC For W2K8R2 installation -- grid installation (4): http://blog.itpub.net/26736162/viewspace-1721281/

[RAC] RAC For W2K8R2 installation -- RDBMS software installation (5): http://blog.itpub.net/26736162/viewspace-1721304/

[RAC] RAC For W2K8R2 installation -- creating ASM disk groups (6): http://blog.itpub.net/26736162/viewspace-1721314/

[RAC] RAC For W2K8R2 installation -- creating the database with dbca (7): http://blog.itpub.net/26736162/viewspace-1721324/

[RAC] RAC For W2K8R2 installation -- uninstalling (8): http://blog.itpub.net/26736162/viewspace-1721331/

[RAC] RAC For W2K8R2 installation -- problems encountered during installation (9): http://blog.itpub.net/26736162/viewspace-1721373/

[RAC] RAC For W2K8R2 installation -- wrap-up (10): http://blog.itpub.net/26736162/viewspace-1721378/

 

[Recommended] [DBCA -SILENT] Silent installation: creating a RAC database  http://blog.itpub.net/26736162/viewspace-1586352/

 

1.2.3  About This Article

Although I had already installed RAC many times, it was always on Linux or Windows; I had never done it on AIX. Recently I found some spare time to study a RAC installation on AIX, and since I am already very familiar with RAC installation, I dropped the graphical interface and performed the whole installation in command-line (silent) mode.

In addition, the scripts used in this article can be downloaded from: http://yunpan.cn/cdEQedhCs2kFz (access code: ed9b)

 

---------------------------------------------------------------------------------------------------------------------

 

Chapter 2  Installation Preparation

2.1  Software Environment

Database:

p10404530_112030_AIX64-5L_1of7.zip

p10404530_112030_AIX64-5L_2of7.zip

Cluster software (the clusterware of 11G, i.e. Grid Infrastructure):

            p10404530_112030_AIX64-5L_3of7.zip

   Operating system:

7100-03-03-1415

 

Note: when unzipping, p10404530_112030_AIX64-5L_1of7.zip and p10404530_112030_AIX64-5L_2of7.zip must be extracted into the same directory, while p10404530_112030_AIX64-5L_3of7.zip is extracted into a different directory.
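That layout can be sketched as follows; the staging paths (`/tmp/stage/...`) and the `STAGE_DB`/`STAGE_GRID` names are illustrative assumptions for this sketch, not paths used later in this article:

```shell
# Illustrative staging layout: the two database zips extract into ONE
# directory, the grid zip into a DIFFERENT one.
STAGE_DB=${STAGE_DB:-/tmp/stage/db}
STAGE_GRID=${STAGE_GRID:-/tmp/stage/grid}
mkdir -p "$STAGE_DB" "$STAGE_GRID"
# On the real host the extraction would look like:
#   unzip -q p10404530_112030_AIX64-5L_1of7.zip -d "$STAGE_DB"
#   unzip -q p10404530_112030_AIX64-5L_2of7.zip -d "$STAGE_DB"
#   unzip -q p10404530_112030_AIX64-5L_3of7.zip -d "$STAGE_GRID"
echo "database stage: $STAGE_DB"
echo "grid stage:     $STAGE_GRID"
```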

 

2.2  Network Planning (/etc/hosts)

vi /etc/hosts

22.188.187.148   ZFFR4CB1101

222.188.187.148  ZFFR4CB1101-priv

22.188.187.149   ZFFR4CB1101-vip

 

22.188.187.158   ZFFR4CB2101

222.188.187.158  ZFFR4CB2101-priv

22.188.187.150   ZFFR4CB2101-vip

 

22.188.187.160   ZFFR4CB2101-scan

 

Configure the private network:

HOST=`hostname`;IP=`host $HOST | awk '{print "2"$NF}'`;chdev -l 'en1' -a netaddr=$IP -a netmask='255.255.255.0' -a state='up'   # derive the private IP by prefixing "2" to the public IP, then bring up en1

[ZFPRMDB2:root]:/>smitty tcpip

     

      Minimum Configuration & Startup

 

* Internet ADDRESS (dotted decimal)                 [222.188.187.148]

  Network MASK (dotted decimal)                      [255.255.255.0]

 

Node 1:

[ZFFR4CB1101:root]/]> ifconfig -a

en0: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 22.188.187.148 netmask 0xffffff00 broadcast 22.188.187.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

en1: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 222.188.187.148 netmask 0xffffff00 broadcast 222.188.187.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>

        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255

        inet6 ::1%1/0

         tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1

[ZFFR4CB1101:root]/]>

[ZFFR4CB1101:root]/]>

 

Node 2:

[ZFFR4CB2101:root]/]> ifconfig -a

en0: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 22.188.187.158 netmask 0xffffff00 broadcast 22.188.187.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

en1: flags=1e084863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>

        inet 222.188.187.158 netmask 0xffffff00 broadcast 222.188.187.255

         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

lo0: flags=e08084b,c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>

        inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255

        inet6 ::1%1/0

         tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1

[ZFFR4CB2101:root]/]>

[ZFFR4CB2101:root]/]>

 

All 4 public and private IPs should be pingable; the other 3 addresses (the two VIPs and the SCAN) must NOT respond to ping yet. That is the expected state before installation.
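Before copying the fragment to /etc/hosts on both nodes, a quick duplicate check can save a failed runcluvfy later. A small sketch (the fragment is inlined here for illustration):

```shell
# Check a hosts fragment for duplicate IPs or duplicate hostnames.
HOSTS_FRAGMENT='
22.188.187.148   ZFFR4CB1101
222.188.187.148  ZFFR4CB1101-priv
22.188.187.149   ZFFR4CB1101-vip
22.188.187.158   ZFFR4CB2101
222.188.187.158  ZFFR4CB2101-priv
22.188.187.150   ZFFR4CB2101-vip
22.188.187.160   ZFFR4CB2101-scan
'
DUP_IPS=$(printf '%s\n' "$HOSTS_FRAGMENT"   | awk 'NF {print $1}' | sort | uniq -d)
DUP_NAMES=$(printf '%s\n' "$HOSTS_FRAGMENT" | awk 'NF {print $2}' | sort | uniq -d)
[ -z "$DUP_IPS" ]   && echo "no duplicate IPs"       || echo "duplicate IPs: $DUP_IPS"
[ -z "$DUP_NAMES" ] && echo "no duplicate hostnames" || echo "duplicate hostnames: $DUP_NAMES"
```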

 

2.3  Hardware Environment Check

Taking ZFFR4CB2101 as an example:

 

[ZFFR4CB2101:root]/]> getconf REAL_MEMORY

4194304

[ZFFR4CB2101:root]/]> /usr/sbin/lsattr -E -l sys0 -a realmem

realmem 4194304 Amount of usable physical memory in Kbytes False

[ZFFR4CB2101:root]/]> lsps -a

Page Space      Physical Volume   Volume Group    Size %Used Active  Auto  Type Chksum

hd6             hdisk0            rootvg        8192MB     0   yes   yes    lv     0

[ZFFR4CB2101:root]/]> getconf HARDWARE_BITMODE

64

[ZFFR4CB2101:root]/]> bootinfo -K

64

[ZFFR4CB2101:root]/]>

[ZFFR4CB2101:root]/]> df -g

Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on

/dev/hd4           4.25      4.00    6%    12709     2% /

/dev/hd2          10.00      4.57   55%   118820    11% /usr

/dev/hd9var        4.50      4.24    6%     1178     1% /var

/dev/hd3           4.25      4.23    1%      172     1% /tmp

/dev/hd1           1.00      1.00    1%       77     1% /home

/dev/hd11admin      0.25      0.25    1%        7     1% /admin

/proc                 -         -    -         -     -  /proc

/dev/hd10opt       4.50      4.37    3%     2567     1% /opt

/dev/livedump      1.00      1.00    1%        6     1% /var/adm/ras/livedump

/dev/Plv_install      1.00      1.00    1%        4     1% /install

/dev/Plv_mtool      1.00      1.00    1%        4     1% /mtool

/dev/Plv_audit      2.00      1.99    1%        5     1% /audit

/dev/Plv_ftplog      1.00      1.00    1%        5     1% /ftplog

/dev/Tlv_bocnet     50.00     49.99    1%        4     1% /bocnet

/dev/Tlv_WebSphere     10.00      5.71   43%    45590     4% /WebSphere

/dev/TLV_TEST_DATA    100.00     99.98    1%        7     1% /lhr

/dev/tlv_softtmp     30.00     20.30   33%     5639     1% /softtmp

ZTDNETAP3:/nfs   1240.00     14.39   99%   513017    14% /nfs

/dev/tlv_u01      50.00     32.90   35%    51714     1% /u01

[ZFFR4CB2101:root]/]> cat /etc/.init.state

2

[ZFFR4CB2101:root]/]> oslevel -s

7100-03-03-1415

[ZFFR4CB2101:root]/]> lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.perfstat bos.perf.libperfstat bos.perf.proctools

  Fileset                      Level  State      Description        

  ----------------------------------------------------------------------------

Path: /usr/lib/objrepos

  bos.adt.base              7.1.3.15  COMMITTED  Base Application Development

                                                 Toolkit

  bos.adt.lib               7.1.2.15  COMMITTED  Base Application Development

                                                 Libraries

  bos.adt.libm               7.1.3.0  COMMITTED  Base Application Development

                                                 Math Library

  bos.perf.libperfstat      7.1.3.15  COMMITTED  Performance Statistics Library

                                                 Interface

  bos.perf.perfstat         7.1.3.15  COMMITTED  Performance Statistics

                                                 Interface

 

Path: /etc/objrepos

  bos.adt.base              7.1.3.15  COMMITTED  Base Application Development

                                                 Toolkit

  bos.perf.libperfstat      7.1.3.15  COMMITTED  Performance Statistics Library

                                                 Interface

  bos.perf.perfstat         7.1.3.15  COMMITTED  Performance Statistics

                                                 Interface

lslpp: 0504-132  Fileset bos.perf.proctools  not installed.
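Note that lslpp reports bos.perf.proctools as not installed; it is on Oracle's prerequisite fileset list for 11.2 on AIX, so it should be added from the AIX install media before running the installer. The numeric checks above can be distilled into a small sketch; the 4 GB RAM and swap-at-least-RAM thresholds are my reading of the 11.2 requirements, so verify them against Oracle's install guide:

```shell
# Distilled memory/swap checks (values taken from the lsattr/lsps output above).
REALMEM_KB=4194304        # lsattr -El sys0 -a realmem  (KB)
SWAP_MB=8192              # lsps -a
REALMEM_GB=$(( REALMEM_KB / 1024 / 1024 ))
if [ "$REALMEM_GB" -ge 4 ]; then
  echo "RAM ok: ${REALMEM_GB} GB"
else
  echo "RAM below the 4 GB guideline: ${REALMEM_GB} GB"
fi
if [ "$SWAP_MB" -ge $(( REALMEM_GB * 1024 )) ]; then
  echo "swap ok: ${SWAP_MB} MB (>= RAM)"
else
  echo "swap smaller than RAM: ${SWAP_MB} MB"
fi
```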

 

 

2.4  OS Parameter Tuning

Shell script:

vi os_pre_lhr.sh

_chlimit(){   # rewrite /etc/security/limits with unlimited stanzas for root, oracle and grid

  [ -f /etc/security/limits.org ] || { cp -p /etc/security/limits /etc/security/limits.org; }

  cat /etc/security/limits.org |egrep -vp "root|oracle|grid" > /etc/security/limits

  echo "root:

        core = -1

        cpu = -1

        data = -1

        fsize = -1

        nofiles = -1

        rss = -1

        stack = -1

        core_hard = -1

        cpu_hard = -1

        data_hard = -1

        fsize_hard = -1

        nofiles_hard = -1

        rss_hard = -1

        stack_hard = -1

 

oracle:

        core = -1

        cpu = -1

        data = -1

        fsize = -1

        nofiles = -1

        rss = -1

        stack = -1

        cpu_hard = -1

        core_hard = -1

        data_hard = -1

        fsize_hard = -1

        nofiles_hard = -1

        rss_hard = -1

        stack_hard = -1

 

grid:

        core = -1

        cpu = -1

        data = -1

        fsize = -1

        nofiles = -1

        rss = -1

        stack = -1

        core_hard = -1

        cpu_hard = -1

        data_hard = -1

        fsize_hard = -1

        nofiles_hard = -1

        rss_hard = -1

        stack_hard = -1" >> /etc/security/limits

}

 

_chospara(){   # tune VMM, network and sys0 parameters

  vmo -p -o minperm%=3

  echo "yes"|vmo -p -o maxperm%=90

  echo "yes" |vmo -p -o maxclient%=90

  echo "yes"|vmo -p -o lru_file_repage=0

  echo "yes"|vmo -p -o strict_maxclient=1

  echo "yes" |vmo -p -o strict_maxperm=0

  echo "yes\nno" |vmo -r -o page_steal_method=1;

  ioo -a|egrep -w "aio_maxreqs|aio_maxservers|aio_minservers"

  /usr/sbin/chdev -l sys0 -a maxuproc=16384 -a ncargs=256 -a minpout=4096 -a maxpout=8193 -a fullcore=true

  echo "check sys0 16384 256"

  lsattr -El sys0 |egrep "maxuproc|ncargs|pout|fullcore" |awk '{print $1,$2}'

 

  /usr/sbin/no -p -o sb_max=41943040

  /usr/sbin/no -p -o udp_sendspace=2097152

  /usr/sbin/no -p -o udp_recvspace=20971520

  /usr/sbin/no -p -o tcp_sendspace=1048576

  /usr/sbin/no -p -o tcp_recvspace=1048576

  /usr/sbin/no -p -o rfc1323=1

  /usr/sbin/no -r -o ipqmaxlen=512

  /usr/sbin/no -p -o clean_partial_conns=1

 

  cp -p /etc/environment /etc/environment.`date '+%Y%m%d'`

  cat /etc/environment.`date '+%Y%m%d'` |awk '/^TZ=/{print "TZ=BEIST-8"} !/^TZ=/{print}' >/etc/environment

  _chlimit

 

}

 

_chlimit

_chospara

 

stopsrc -s xntpd

startsrc -s xntpd -a "-x"

 

sh os_pre_lhr.sh
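After the script has run (and after a fresh login, since /etc/security/limits only applies to new sessions), the stanzas it wrote can be sanity-checked. `check_unlimited` below is a hypothetical helper, a sketch that parses the stanza format written by _chlimit above:

```shell
# Succeed only if every "name = value" line in user $1's stanza of the
# limits file $2 is -1 (unlimited).
check_unlimited() {
  awk -v user="$1" '
    /^[A-Za-z_]+:/            { cur = substr($1, 1, length($1) - 1) }
    cur == user && $2 == "="  { if ($3 != "-1") bad = 1 }
    END                       { exit bad }
  ' "$2"
}
# Example against a small sample stanza:
printf 'oracle:\n\tcore = -1\n\tfsize = -1\n' > /tmp/limits.sample
check_unlimited oracle /tmp/limits.sample && echo "oracle: all unlimited"
```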

2.5  Creating File Systems

 

/usr/lpp/EMC/Symmetrix/bin/inq.aix64_51 -showvol -sid

lspv

mkvg -S -y t_u01_vg -s 128   hdisk22

 

mklv -t jfs2 -y tlv_u01 -x 1024 t_u01_vg 400

crfs -v jfs2 -d tlv_u01 -m /u01 -A yes

mount /u01

 

mklv -t jfs2 -y tlv_softtmp -x 1024 t_u01_vg 240

crfs -v jfs2 -d tlv_softtmp -m /softtmp -A yes

mount /softtmp
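The PP counts in the mklv commands translate directly into sizes: `mkvg -s 128` set a 128 MB physical partition, so 400 PPs give 50 GB for /u01 and 240 PPs give 30 GB for /softtmp, matching the df -g output in the transcript below. As arithmetic:

```shell
# LV size = number of PPs * PP size (mkvg -s 128 => 128 MB partitions)
PP_MB=128
echo "/u01:     $(( 400 * PP_MB / 1024 )) GB"   # 400 PPs
echo "/softtmp: $(( 240 * PP_MB / 1024 )) GB"   # 240 PPs
```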

 

Taking ZFFR4CB2101 as an example:

[ZFFR4CB2101:root]/]> /usr/lpp/EMC/Symmetrix/bin/inq.aix64_51 -showvol -sid

Inquiry utility, Version V7.3-1214 (Rev 0.1)      (SIL Version V7.3.0.1 (Edit Level 1214)

Copyright (C) by EMC Corporation, all rights reserved.

For help type inq -h.

 

.........................

 

------------------------------------------------------------------------------------------------

DEVICE        :VEND    :PROD            :REV   :SER NUM    :Volume  :CAP(kb)        :SYMM ID   

------------------------------------------------------------------------------------------------

/dev/rhdisk0  :AIX     :VDASD           :0001  :hdisk5     :   00000:   134246400  :N/A        

/dev/rhdisk1  :EMC     :SYMMETRIX       :5876  :640250a000 :   0250A:        2880  :000492600664

/dev/rhdisk2  :EMC     :SYMMETRIX       :5876  :640250b000 :   0250B:        2880  :000492600664

/dev/rhdisk3  :EMC     :SYMMETRIX       :5876  :640250c000 :   0250C:        2880  :000492600664

/dev/rhdisk4  :EMC     :SYMMETRIX       :5876  :640250d000 :   0250D:        2880  :000492600664

/dev/rhdisk5  :EMC     :SYMMETRIX       :5876  :64026f6000 :   026F6:   134246400  :000492600664

/dev/rhdisk6  :EMC     :SYMMETRIX       :5876  :64026fe000 :   026FE:   134246400  :000492600664

/dev/rhdisk7  :EMC     :SYMMETRIX       :5876  :6402706000 :   02706:   134246400  :000492600664

/dev/rhdisk8  :EMC     :SYMMETRIX       :5876  :640270e000 :   0270E:   134246400  :000492600664

/dev/rhdisk9  :EMC     :SYMMETRIX       :5876  :6402716000 :   02716:   134246400  :000492600664

/dev/rhdisk10 :EMC     :SYMMETRIX       :5876  :640271e000 :   0271E:   134246400  :000492600664

/dev/rhdisk11 :EMC     :SYMMETRIX       :5876  :6402726000 :   02726:   134246400  :000492600664

/dev/rhdisk12 :EMC     :SYMMETRIX       :5876  :640272e000 :   0272E:   134246400  :000492600664

/dev/rhdisk13 :EMC     :SYMMETRIX       :5876  :6402736000 :   02736:   134246400  :000492600664

/dev/rhdisk14 :EMC     :SYMMETRIX       :5876  :640273e000 :   0273E:   134246400  :000492600664

/dev/rhdisk15 :EMC     :SYMMETRIX       :5876  :6402746000 :   02746:   134246400  :000492600664

/dev/rhdisk16 :EMC     :SYMMETRIX       :5876  :640274e000 :   0274E:   134246400  :000492600664

/dev/rhdisk17 :EMC     :SYMMETRIX       :5876  :6402756000 :   02756:   134246400  :000492600664

/dev/rhdisk18 :EMC     :SYMMETRIX       :5876  :640275e000 :   0275E:   134246400  :000492600664

/dev/rhdisk19 :EMC     :SYMMETRIX       :5876  :6402766000 :   02766:   134246400  :000492600664

/dev/rhdisk20 :EMC     :SYMMETRIX       :5876  :640276e000 :   0276E:   134246400  :000492600664

/dev/rhdisk21 :EMC     :SYMMETRIX       :5876  :6402776000 :   02776:   134246400  :000492600664

/dev/rhdisk22 :EMC     :SYMMETRIX       :5876  :640277e000 :   0277E:   134246400  :000492600664

/dev/rhdisk23 :EMC     :SYMMETRIX       :5876  :6402786000 :   02786:   134246400  :000492600664

/dev/rhdisk24 :EMC     :SYMMETRIX       :5876  :640278e000 :   0278E:   134246400  :000492600664

[ZFFR4CB2101:root]/]> lspv

hdisk0          00c49fc434da2434                    rootvg          active     

hdisk1          00c49fc461fc76b2                    None                       

hdisk2          00c49fc461fc76f5                    None                       

hdisk3          00c49fc461fc7739                    None                       

hdisk4          00c49fc461fc777a                    None                       

hdisk5          00c49fc461fc77bd                    None                       

hdisk6          00c49fc461fc77fe                    None                       

hdisk7          00c49fc461fc783f                    None                       

hdisk8          00c49fc461fc7880                    None                       

hdisk9          00c49fc461fc78c5                    None                       

hdisk10         00c49fc461fc7908                    None                       

hdisk11         00c49fc461fc7958                    None                       

hdisk12         00c49fc461fc79a0                    None                       

hdisk13         00c49fc461fc79ea                    None                       

hdisk14         00c49fc461fc7a2f                    None                       

hdisk15         00c49fc461fc7a71                    None                       

hdisk16         00c49fc461fc7ab1                    None                       

hdisk17         00c49fb4e3a8fc12                    None                       

hdisk18         00c49fc461fc7b3b                    T_NET_APP_vg    active     

hdisk19         00c49fc461fc7b7d                    None                       

hdisk20         00c49fc461fc7bbe                    None                       

hdisk21         00c49fc461fc7bff                    None                       

hdisk22         00c49fc461fc7c40                    None                       

hdisk23         00c49fc461fc7c88                    T_TEST_LHR_VG   active     

hdisk24         00c49fc461fc7cca                    T_TEST_LHR_VG   active

 

 

[ZFFR4CB2101:root]/]> df -g

Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on

/dev/hd4           4.25      4.00    6%    12643     2% /

/dev/hd2          10.00      4.58   55%   118785    10% /usr

/dev/hd9var        4.50      4.08   10%     1175     1% /var

/dev/hd3           4.25      3.75   12%     1717     1% /tmp

/dev/hd1           1.00      1.00    1%       17     1% /home

/dev/hd11admin      0.25      0.25    1%        7     1% /admin

/proc                 -         -    -         -     -  /proc

/dev/hd10opt       4.50      4.37    3%     2559     1% /opt

/dev/livedump      1.00      1.00    1%        6     1% /var/adm/ras/livedump

/dev/Plv_install      1.00      1.00    1%        4     1% /install

/dev/Plv_mtool      1.00      1.00    1%        4     1% /mtool

/dev/Plv_audit      2.00      1.99    1%        5     1% /audit

/dev/Plv_ftplog      1.00      1.00    1%        5     1% /ftplog

/dev/Tlv_bocnet     50.00     49.99    1%        4     1% /bocnet

/dev/Tlv_WebSphere     10.00      5.71   43%    45590     4% /WebSphere

/dev/TLV_TEST_DATA    100.00     99.98    1%        7     1% /lhr

ZTDNETAP3:/nfs   1240.00     14.39   99%   512924    14% /nfs

ZTINIMSERVER:/sharebkup   5500.00   1258.99   78%  2495764     1% /sharebkup

 

 

[ZFFR4CB2101:root]/]> mklv -t jfs2 -y tlv_u01 -x 1024 t_u01_vg 400

tlv_u01

[ZFFR4CB2101:root]/]> crfs -v jfs2 -d tlv_u01 -m /u01 -A yes

File system created successfully.

52426996 kilobytes total disk space.

New File System size is 104857600

[ZFFR4CB2101:root]/]> mount /u01

[ZFFR4CB2101:root]/]> df -g

Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on

/dev/hd4           4.25      4.00    6%    12648     2% /

/dev/hd2          10.00      4.58   55%   118785    10% /usr

/dev/hd9var        4.50      4.08   10%     1176     1% /var

/dev/hd3           4.25      3.75   12%     1717     1% /tmp

/dev/hd1           1.00      1.00    1%       17     1% /home

/dev/hd11admin      0.25      0.25    1%        7     1% /admin

/proc                 -         -    -         -     -  /proc

/dev/hd10opt       4.50      4.37    3%     2559     1% /opt

/dev/livedump      1.00      1.00    1%        6     1% /var/adm/ras/livedump

/dev/Plv_install      1.00      1.00    1%        4     1% /install

/dev/Plv_mtool      1.00      1.00    1%        4     1% /mtool

/dev/Plv_audit      2.00      1.99    1%        5     1% /audit

/dev/Plv_ftplog      1.00      1.00    1%        5     1% /ftplog

/dev/Tlv_bocnet     50.00     49.99    1%        4     1% /bocnet

/dev/Tlv_WebSphere     10.00      5.71   43%    45590     4% /WebSphere

/dev/TLV_TEST_DATA    100.00     99.98    1%        7     1% /lhr

ZTDNETAP3:/nfs   1240.00     14.39   99%   512924    14% /nfs

ZTINIMSERVER:/sharebkup   5500.00   1258.99   78%  2495764     1% /sharebkup

/dev/tlv_u01      50.00     49.99    1%        4     1% /u01

[ZFFR4CB2101:root]/]>

 

 

[ZFFR4CB2101:root]/]> mklv -t jfs2 -y tlv_softtmp -x 1024 t_u01_vg 240

tlv_softtmp

[ZFFR4CB2101:root]/]> crfs -v jfs2 -d tlv_softtmp -m /softtmp -A yes

File system created successfully.

31456116 kilobytes total disk space.

New File System size is 62914560

[ZFFR4CB2101:root]/]> mount /softtmp

[ZFFR4CB2101:root]/]> df -g

Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on

/dev/hd4           4.25      4.00    6%    12650     2% /

/dev/hd2          10.00      4.58   55%   118785    10% /usr

/dev/hd9var        4.50      4.08   10%     1177     1% /var

/dev/hd3           4.25      3.75   12%     1717     1% /tmp

/dev/hd1           1.00      1.00    1%       17     1% /home

/dev/hd11admin      0.25      0.25    1%        7     1% /admin

/proc                 -         -    -         -     -  /proc

/dev/hd10opt       4.50      4.37    3%     2559     1% /opt

/dev/livedump      1.00      1.00    1%        6     1% /var/adm/ras/livedump

/dev/Plv_install      1.00      1.00    1%        4     1% /install

/dev/Plv_mtool      1.00      1.00    1%        4     1% /mtool

/dev/Plv_audit      2.00      1.99    1%        5     1% /audit

/dev/Plv_ftplog      1.00      1.00    1%        5     1% /ftplog

/dev/Tlv_bocnet     50.00     49.99    1%        4     1% /bocnet

/dev/Tlv_WebSphere     10.00      5.71   43%    45590     4% /WebSphere

/dev/TLV_TEST_DATA    100.00     99.98    1%        7     1% /lhr

ZTDNETAP3:/nfs   1240.00     14.39   99%   512924    14% /nfs

ZTINIMSERVER:/sharebkup   5500.00   1258.99   78%  2495764     1% /sharebkup

/dev/tlv_u01      50.00     49.99    1%        4     1% /u01

/dev/tlv_softtmp     30.00     30.00    1%        4     1% /softtmp

[ZFFR4CB2101:root]/]>

 

When creating the volume group, take care to check which disks are actually free before grabbing one; anyone who runs AIX knows the pain of stepping on a disk that is already in use. Enough said.

 

2.6  Creating Installation Directories

Copy, paste and execute directly:

mkdir -p  /u01/app/11.2.0/grid

chmod -R 755 /u01/app/11.2.0/grid

mkdir -p /u01/app/grid

chmod -R 755 /u01/app/grid

mkdir -p  /u01/app/oracle

chmod -R 755 /u01/app/oracle

 

[ZFFR4CB2101:root]/]>  mkdir -p  /u01/app/11.2.0/grid                                                      

[ZFFR4CB2101:root]/]>  chmod -R 755 /u01/app/11.2.0/grid                                                         

[ZFFR4CB2101:root]/]>  mkdir -p /u01/app/grid                                                                    

[ZFFR4CB2101:root]/]>  chmod -R 755 /u01/app/grid                                                                

[ZFFR4CB2101:root]/]>  mkdir -p  /u01/app/oracle                                                                 

[ZFFR4CB2101:root]/]>  chmod -R 755 /u01/app/oracle

[ZFFR4CB2101:root]/]>

[ZFFR4CB2101:root]/]> cd /u01/app

[ZFFR4CB2101:root]/u01/app]> l

total 0

drwxr-xr-x    3 root     system          256 Mar 08 16:11 11.2.0

drwxr-xr-x    2 root     system          256 Mar 08 16:11 grid

drwxr-xr-x    2 root     system          256 Mar 08 16:11 oracle

[ZFFR4CB2101:root]/u01/app]>

 

 

 

2.7  Creating Users and Groups

Copy, paste and execute directly:

mkgroup -A id=1024 dba

mkgroup -A id=1025 asmadmin

mkgroup -A id=1026 asmdba

mkgroup -A id=1027 asmoper

mkgroup -A id=1028 oinstall

 

 

mkuser -a id=1025 pgrp=oinstall groups=dba,asmadmin,asmdba,asmoper,oinstall home=/home/grid fsize=-1 cpu=-1 data=-1 core=-1 rss=-1 stack=-1 stack_hard=-1  capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE  grid

echo "grid:grid" |chpasswd

pwdadm -c grid

 

mkuser -a id=1024 pgrp=dba groups=dba,asmadmin,asmdba,asmoper,oinstall  home=/home/oracle fsize=-1 cpu=-1 data=-1 core=-1 rss=-1 stack=-1 stack_hard=-1  capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE  oracle

echo "oracle:oracle" |chpasswd

pwdadm -c oracle

 

 

chown -R grid:dba  /u01/app/11.2.0

chown grid:dba  /u01/app

chown grid:dba  /u01/app/grid

chown -R oracle:dba  /u01/app/oracle

chown oracle:dba  /u01

 

/usr/sbin/lsuser  -a  capabilities grid

/usr/sbin/lsuser  -a  capabilities oracle  

 

 

 

 

 

[ZFFR4CB2101:root]/u01/app]> mkgroup -A id=1024 dba  

[ZFFR4CB2101:root]/u01/app]> mkuser -a id=1025 pgrp=dba groups=dba home=/home/grid fsize=-1 cpu=-1 data=-1 core=-1 rss=-1 stack=-1 stack_hard=-1  capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE  grid                                                    

[ZFFR4CB2101:root]/u01/app]> passwd  grid

Changing password for "grid"

grid's New password:

Enter the new password again:

[ZFFR4CB2101:root]/u01/app]>

[ZFFR4CB2101:root]/u01/app]> mkuser -a id=1024 pgrp=dba groups=dba home=/home/oracle fsize=-1 cpu=-1 data=-1 core=-1 rss=-1 stack=-1 stack_hard=-1  capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE  oracle                                                    

[ZFFR4CB2101:root]/u01/app]> passwd  oracle

Changing password for "oracle"

oracle's New password:

Enter the new password again:

[ZFFR4CB2101:root]/u01/app]>    chown -R grid:dba  /u01/app/11.2.0                                       

[ZFFR4CB2101:root]/u01/app]>    chown grid:dba  /u01/app                                                                     

[ZFFR4CB2101:root]/u01/app]>    chown grid:dba  /u01/app/grid                                                               

[ZFFR4CB2101:root]/u01/app]>    chown -R oracle:dba  /u01/app/oracle                                                         

[ZFFR4CB2101:root]/u01/app]>    chown oracle:dba  /u01

[ZFFR4CB2101:root]/u01/app]> /usr/sbin/lsuser  -a  capabilities grid

grid capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE

[ZFFR4CB2101:root]/u01/app]> /usr/sbin/lsuser  -a  capabilities oracle

oracle capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE

[ZFFR4CB2101:root]/u01/app]>

 

 

Verify on both nodes:

[ZFFR4CB1101:root]/]> id grid

uid=1025(grid) gid=1028(oinstall) groups=1024(dba),1025(asmadmin),1026(asmdba),1027(asmoper)

[ZFFR4CB1101:root]/]> id oracle

uid=1024(oracle) gid=1024(dba) groups=1025(asmadmin),1026(asmdba),1027(asmoper),1028(oinstall)

[ZFFR4CB1101:root]/]>

 

2.8  Configuring .profile for grid and oracle

--------- Configure on each of the two nodes separately; note that ORACLE_SID must be set to +ASM1 on node 1 and +ASM2 on node 2.

su - grid

vi .profile

 

umask 022  

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/11.2.0/grid

export ORACLE_SID=+ASM   # set to +ASM1 on node 1, +ASM2 on node 2

export ORACLE_TERM=vt100

export ORACLE_OWNER=grid

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/u01/app/oracle/product/11.2.0/dbhome_1/lib32

export LIBPATH=$LIBPATH:/u01/app/oracle/product/11.2.0/dbhome_1/lib

export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK

export PATH=$PATH:/bin:/usr/ccs/bin:/usr/bin/X11:$ORACLE_HOME/bin 

export NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'

 

set -o vi

export EDITOR=vi 

alias l='ls -l'

export PS1='[$LOGNAME@'`hostname`:'$PWD'']$ '

export AIXTHREAD_SCOPE=S

export ORACLE_TERM=vt100

export TMP=/tmp

export TMPDIR=/tmp

export LANG=en_US

export PS1='[$LOGNAME@'`hostname`:'$PWD'']$ '

export DISPLAY=22.188.216.97:0.0

 

 

su - oracle

vi .profile

umask 022

export ORACLE_SID=ora11g   # on each RAC node use the local instance SID, e.g. ora11g1 / ora11g2

export ORACLE_BASE=/u01/app/oracle

export GRID_HOME=/u01/app/11.2.0/grid

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1

export PATH=$ORACLE_HOME/bin:$GRID_HOME/bin:$PATH:$ORACLE_HOME/OPatch

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/rdbms/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib

export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK

export NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS'

export ORACLE_OWNER=oracle

 

 

set -o vi

export EDITOR=vi 

alias l='ls -l'

export AIXTHREAD_SCOPE=S

export ORACLE_TERM=vt100

export TMP=/tmp

export TMPDIR=/tmp

export LANG=en_US

export PS1='[$LOGNAME@'`hostname`:'$PWD'']$ '

export DISPLAY=22.188.216.97:0.0

 

 

 

Run `. ~/.profile` to make the environment variables take effect in the current session:

 

[ZFFR4CB1101:root]/]> . ~/.profile

 

 

2.9  Preparing ASM Disks

  Execute on both nodes: change the ownership and attributes of the ASM disks; otherwise root.sh will fail with errors like the following:

Disk Group OCR creation failed with the following message:

ORA-15018: diskgroup cannot be created

ORA-15031: disk specification '/dev/rhdisk10' matches no disks

ORA-15025: could not open disk "/dev/rhdisk10"

ORA-15056: additional error message

 

 

chown grid.asmadmin /dev/rhdisk10

chown grid.asmadmin /dev/rhdisk11

chmod 660  /dev/rhdisk10

chmod 660  /dev/rhdisk11

 

lquerypv -h /dev/hdisk10

 

chdev -l hdisk10 -a reserve_policy=no_reserve -a algorithm=round_robin -a queue_depth=32 -a pv=yes

chdev -l hdisk11 -a reserve_policy=no_reserve -a algorithm=round_robin -a queue_depth=32 -a pv=yes

 

lsattr -El hdisk10
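With more than two ASM LUNs the per-disk commands get tedious; the same steps can be looped. A dry-run sketch (the DISKS list and the echo prefix are illustrative; clear RUN to actually execute on the node):

```shell
DISKS="hdisk10 hdisk11"   # adjust to your ASM candidate disks
RUN="echo"                # dry-run: print the commands; set RUN="" to execute
for d in $DISKS; do
  $RUN chown grid:asmadmin "/dev/r$d"
  $RUN chmod 660 "/dev/r$d"
  $RUN chdev -l "$d" -a reserve_policy=no_reserve -a algorithm=round_robin \
       -a queue_depth=32 -a pv=yes
done
```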

 

 

[ZFFR4CB2101:root]/]> lsattr -El hdisk10

PCM             PCM/friend/MSYMM_VRAID           Path Control Module              True

PR_key_value    none                             Persistant Reserve Key Value     True

algorithm       fail_over                        Algorithm                        True

clr_q           yes                              Device CLEARS its Queue on error True

dist_err_pcnt   0                                Distributed Error Percentage     True

dist_tw_width   50                               Distributed Error Sample Time    True

hcheck_cmd      inquiry                          Health Check Command             True

hcheck_interval 60                               Health Check Interval            True

hcheck_mode     nonactive                        Health Check Mode                True

location                                         Location Label                   True

lun_id          0x9000000000000                  Logical Unit Number ID           False

lun_reset_spt   yes                              FC Forced Open LUN               True

max_coalesce    0x100000                         Maximum Coalesce Size            True

max_retries     5                                Maximum Number of Retries        True

max_transfer    0x100000                         Maximum TRANSFER Size            True

node_name       0x50000978080a6000               FC Node Name                     False

pvid            00c49fc461fc79080000000000000000 Physical volume identifier       False

q_err           no                               Use QERR bit                     True

q_type          simple                           Queue TYPE                       True

queue_depth     32                               Queue DEPTH                      True

reserve_policy  single_path                      Reserve Policy                   True

rw_timeout      40                               READ/WRITE time out value        True

scsi_id         0xce0040                         SCSI ID                          False

start_timeout   180                              START UNIT time out value        True

timeout_policy  retry_path                       Timeout Policy                   True

ww_name         0x50000978080a61d1               FC World Wide Name               False

[ZFFR4CB2101:root]/]> chdev -l hdisk10 -a reserve_policy=no_reserve -a algorithm=round_robin -a queue_depth=32 -a pv=yes

hdisk10 changed

[ZFFR4CB2101:root]/]> chdev -l hdisk11 -a reserve_policy=no_reserve -a algorithm=round_robin -a queue_depth=32 -a pv=yes

hdisk11 changed

[ZFFR4CB2101:root]/]> lsattr -El hdisk11

PCM             PCM/friend/MSYMM_VRAID           Path Control Module              True

PR_key_value    none                             Persistant Reserve Key Value     True

algorithm       round_robin                      Algorithm                        True

clr_q           yes                              Device CLEARS its Queue on error True

dist_err_pcnt   0                                Distributed Error Percentage     True

dist_tw_width   50                               Distributed Error Sample Time    True

hcheck_cmd      inquiry                          Health Check Command             True

hcheck_interval 60                               Health Check Interval            True

hcheck_mode     nonactive                        Health Check Mode                True

location                                         Location Label                   True

lun_id          0xa000000000000                  Logical Unit Number ID           False

lun_reset_spt   yes                              FC Forced Open LUN               True

max_coalesce    0x100000                         Maximum Coalesce Size            True

max_retries     5                                Maximum Number of Retries        True

max_transfer    0x100000                         Maximum TRANSFER Size            True

node_name       0x50000978080a6000               FC Node Name                     False

pvid            00c49fc461fc79580000000000000000 Physical volume identifier       False

q_err           no                               Use QERR bit                     True

q_type          simple                           Queue TYPE                       True

queue_depth     32                               Queue DEPTH                      True

reserve_policy  no_reserve                       Reserve Policy                   True

rw_timeout      40                               READ/WRITE time out value        True

scsi_id         0xce0040                         SCSI ID                          False

start_timeout   180                              START UNIT time out value        True

timeout_policy  retry_path                       Timeout Policy                   True

ww_name         0x50000978080a61d1               FC World Wide Name               False

[ZFFR4CB2101:root]/]>

[ZFFR4CB2101:root]/]> lquerypv -h  /dev/rhdisk10

00000000   00000000 00000000 00000000 00000000  |................|

00000010   00000000 00000000 00000000 00000000  |................|

00000020   00000000 00000000 00000000 00000000  |................|

00000030   00000000 00000000 00000000 00000000  |................|

00000040   00000000 00000000 00000000 00000000  |................|

00000050   00000000 00000000 00000000 00000000  |................|

00000060   00000000 00000000 00000000 00000000  |................|

00000070   00000000 00000000 00000000 00000000  |................|

00000080   00000000 00000000 00000000 00000000  |................|

00000090   00000000 00000000 00000000 00000000  |................|

000000A0   00000000 00000000 00000000 00000000  |................|

000000B0   00000000 00000000 00000000 00000000  |................|

000000C0   00000000 00000000 00000000 00000000  |................|

000000D0   00000000 00000000 00000000 00000000  |................|

000000E0   00000000 00000000 00000000 00000000  |................|

000000F0   00000000 00000000 00000000 00000000  |................|
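The all-zero header above confirms the disk is clean and ready for ASM. If a candidate disk still carries an old PVID or ASM header, it is common practice to wipe the beginning of the device before handing it to ASM. A sketch (rhdisk10 and the 100 MB count are examples; double-check the device name first, since dd against the wrong disk is destructive):

```
# Zero the first 100 MB of the candidate ASM disk to clear any stale
# PVID/ASM header, then re-check the header with lquerypv.
dd if=/dev/zero of=/dev/rhdisk10 bs=1024k count=100
lquerypv -h /dev/rhdisk10
```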

二.10  Configure SSH Connectivity

SSH equivalence can be configured either with a shell script or manually; the shell-script approach is recommended.

二.10.1  Shell script (run on both nodes)

Note: edit the highlighted values before running. oth is the hostname of the other node. Run cfgssh.sh to configure SSH, then testssh.sh to verify connectivity. The script works on both AIX and Linux. To configure only a single node, set oth to the same value as hn:

 

vi cfgssh.sh

echo "config ssh..."

grep "^LoginGraceTime 0" /etc/ssh/sshd_config

[ $? -ne 0 ] && { cp -p /etc/ssh/sshd_config /etc/ssh/sshd_config.org; echo "LoginGraceTime 0" >>/etc/ssh/sshd_config; }

 

export hn=`hostname`

export oth=ZFFR4CB2101

export p_pwd=`pwd`

su - grid -c "$p_pwd/sshUserSetup.sh -user grid -hosts $oth -noPromptPassphrase"

su - grid -c "ssh $hn hostname"

su - grid -c "ssh $oth hostname"

 

su - oracle -c "$p_pwd/sshUserSetup.sh -user oracle -hosts $oth -noPromptPassphrase"

su - oracle -c "ssh $hn hostname"

su - oracle -c "ssh $oth hostname"

 

vi sshUserSetup.sh

(The contents of sshUserSetup.sh were shown as an image in the original post. The script ships with the grid installation media under grid/sshsetup/sshUserSetup.sh; copy it into the same directory as cfgssh.sh.)

vi testssh.sh

export hn=`hostname`

export oth=ZFFR4CB2101

su - grid -c "ssh $hn pwd"

su - grid -c "ssh $oth pwd"

su - oracle -c "ssh $hn pwd"

su - oracle -c "ssh $oth pwd"

 

chmod 777 *.sh

sh cfgssh.sh

二.10.2  Manual configuration

Configure SSH for the grid and oracle users separately (the steps below show the oracle user; repeat them for grid):

----------------------------------------------------------------------------------

[root@node1 : /]# su - oracle

[oracle@node1 ~]$ mkdir ~/.ssh

[oracle@node1 ~]$ chmod 700 ~/.ssh

[oracle@node1 ~]$ ssh-keygen -t rsa   (press Enter three times to accept the defaults)

[oracle@node1 ~]$ ssh-keygen -t dsa   (press Enter three times to accept the defaults)

 

-----------------------------------------------------------------------------------

[root@node2 : /]# su - oracle

[oracle@node2 ~]$ mkdir ~/.ssh

[oracle@node2 ~]$ chmod 700 ~/.ssh

[oracle@node2 ~]$ ssh-keygen -t rsa   (press Enter three times to accept the defaults)

[oracle@node2 ~]$ ssh-keygen -t dsa   (press Enter three times to accept the defaults)

 

-----------------------------------------------------------------------------------

 

[oracle@node1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

[oracle@node1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

[oracle@node1 ~]$ ssh ZFFR4CB2101 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  (enter node2's password)

[oracle@node1 ~]$ ssh ZFFR4CB2101 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys  (enter node2's password)

[oracle@node1 ~]$ scp ~/.ssh/authorized_keys ZFFR4CB2101:~/.ssh/authorized_keys    (enter node2's password)

 

-----------------------------------------------------------------------------------

Test connectivity between the two nodes:

 

[oracle@node1 ~]$ ssh ZFFR4CB1101 date

[oracle@node1 ~]$ ssh ZFFR4CB2101 date

[oracle@node1 ~]$ ssh ZFFR4CB1101-priv date

[oracle@node1 ~]$ ssh ZFFR4CB2101-priv date

 

[oracle@node2 ~]$ ssh ZFFR4CB1101 date

[oracle@node2 ~]$ ssh ZFFR4CB2101 date

[oracle@node2 ~]$ ssh ZFFR4CB1101-priv date

[oracle@node2 ~]$ ssh ZFFR4CB2101-priv date

第三章 Grid Installation

三.1  Prepare the installation media

 

Upload the installation zip files to the /softtmp directory (the original post shows a screenshot of the upload here):

[ZFFR4CB2101:root]/softtmp]> l

total 9644872

drwxr-xr-x    2 root     system          256 Mar 08 16:10 lost+found

-rw-r-----    1 root     system   1766307597 Mar 02 04:05 p10404530_112030_AIX64-5L_1of7.zip

-rw-r-----    1 root     system   1135393912 Mar 02 04:03 p10404530_112030_AIX64-5L_2of7.zip

-rw-r-----    1 root     system   2036455635 Mar 02 04:06 p10404530_112030_AIX64-5L_3of7.zip

[ZFFR4CB2101:root]/softtmp]> unzip p10404530_112030_AIX64-5L_3of7.zip

Archive:  p10404530_112030_AIX64-5L_3of7.zip

   creating: grid/

   creating: grid/stage/

  inflating: grid/stage/shiphomeproperties.xml 

   creating: grid/stage/Components/

   creating: grid/stage/Components/oracle.crs/

   creating: grid/stage/Components/oracle.crs/11.2.0.3.0/

   creating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/

   creating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/

  inflating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/filegroup5.jar 

  inflating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/filegroup4.jar 

  inflating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/filegroup3.jar 

  inflating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/filegroup2.jar 

  inflating: grid/stage/Components/oracle.crs/11.2.0.3.0/1/DataFiles/filegroup1.jar 

   creating: grid/stage/Components/oracle.has.crs/

<<<< ...... output truncated for brevity ...... >>>>

  inflating: grid/doc/server.11203/E18951-02.mobi 

  inflating: grid/welcome.html      

   creating: grid/sshsetup/

  inflating: grid/sshsetup/sshUserSetup.sh 

  inflating: grid/readme.html       

[ZFFR4CB2101:root]/softtmp]>

[ZFFR4CB2101:root]/softtmp]> l

total 9644880

drwxr-xr-x    9 root     system         4096 Oct 28 2011  grid

drwxr-xr-x    2 root     system          256 Mar 08 16:10 lost+found

-rw-r-----    1 root     system   1766307597 Mar 02 04:05 p10404530_112030_AIX64-5L_1of7.zip

-rw-r-----    1 root     system   1135393912 Mar 02 04:03 p10404530_112030_AIX64-5L_2of7.zip

-rw-r-----    1 root     system   2036455635 Mar 02 04:06 p10404530_112030_AIX64-5L_3of7.zip

[ZFFR4CB2101:root]/softtmp]> cd grid

[ZFFR4CB2101:root]/softtmp/grid]> l

total 168

drwxr-xr-x    9 root     system         4096 Oct 10 2011  doc

drwxr-xr-x    4 root     system         4096 Oct 21 2011  install

-rwxr-xr-x    1 root     system        28122 Oct 28 2011  readme.html

drwxrwxr-x    2 root     system          256 Oct 21 2011  response

drwxrwxr-x    3 root     system          256 Oct 21 2011  rootpre

-rwxr-xr-x    1 root     system        13369 Sep 22 2010  rootpre.sh

drwxrwxr-x    2 root     system          256 Oct 21 2011  rpm

-rwxr-xr-x    1 root     system        10006 Oct 21 2011  runInstaller

-rwxrwxr-x    1 root     system         4878 May 14 2011  runcluvfy.sh

drwxrwxr-x    2 root     system          256 Oct 21 2011  sshsetup

drwxr-xr-x   14 root     system         4096 Oct 21 2011  stage

-rw-r--r--    1 root     system         4561 Oct 10 2011  welcome.html

 

三.2  Run the runcluvfy.sh pre-installation check

[grid@ZFFR4CB2101:/softtmp/grid]$ /softtmp/grid/runcluvfy.sh stage -pre crsinst -n  ZFFR4CB2101,ZFFR4CB1101 -verbose -fixup

 

Performing pre-checks for cluster services setup

 

Checking node reachability...

 

Check: Node reachability from node "ZFFR4CB2101"

  Destination Node                      Reachable?             

  ------------------------------------  ------------------------

  ZFFR4CB2101                           yes                    

  ZFFR4CB1101                           yes                    

Result: Node reachability check passed from node "ZFFR4CB2101"

 

 

Checking user equivalence...

 

Check: User equivalence for user "grid"

  Node Name                             Status                 

  ------------------------------------  ------------------------

  ZFFR4CB2101                           passed                 

  ZFFR4CB1101                           passed                 

Result: User equivalence check passed for user "grid"

 

Checking node connectivity...

 

Checking hosts config file...

  Node Name                             Status                 

  ------------------------------------  ------------------------

  ZFFR4CB2101                           passed                 

  ZFFR4CB1101                           passed                 

 

Verification of the hosts config file successful

 

 

Interface information for node "ZFFR4CB2101"

Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  

------ --------------- --------------- --------------- --------------- ----------------- ------

en0    22.188.187.158  22.188.187.0    22.188.187.158  22.188.187.1    C6:03:AE:03:97:83 1500 

en1    222.188.187.158 222.188.187.0   222.188.187.158 22.188.187.1    C6:03:A7:3E:FE:01 1500 

 

 

Interface information for node "ZFFR4CB1101"

Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  

------ --------------- --------------- --------------- --------------- ----------------- ------

en0    22.188.187.148  22.188.187.0    22.188.187.148  UNKNOWN         FE:B6:72:EF:12:83 1500 

en1    222.188.187.148 222.188.187.0   222.188.187.148 UNKNOWN         FE:B6:7D:9F:6C:01 1500 

 

 

Check: Node connectivity of subnet "22.188.187.0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  ZFFR4CB2101[22.188.187.158]     ZFFR4CB1101[22.188.187.148]     yes            

Result: Node connectivity passed for subnet "22.188.187.0" with node(s) ZFFR4CB2101,ZFFR4CB1101

 

 

Check: TCP connectivity of subnet "22.188.187.0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  ZFFR4CB2101:22.188.187.158      ZFFR4CB1101:22.188.187.148      passed         

Result: TCP connectivity check passed for subnet "22.188.187.0"

 

 

Check: Node connectivity of subnet "222.188.187.0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  ZFFR4CB2101[222.188.187.158]    ZFFR4CB1101[222.188.187.148]    yes            

Result: Node connectivity passed for subnet "222.188.187.0" with node(s) ZFFR4CB2101,ZFFR4CB1101

 

 

Check: TCP connectivity of subnet "222.188.187.0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  ZFFR4CB2101:222.188.187.158     ZFFR4CB1101:222.188.187.148     passed         

Result: TCP connectivity check passed for subnet "222.188.187.0"

 

 

Interfaces found on subnet "22.188.187.0" that are likely candidates for VIP are:

ZFFR4CB2101 en0:22.188.187.158

ZFFR4CB1101 en0:22.188.187.148

 

Interfaces found on subnet "222.188.187.0" that are likely candidates for VIP are:

ZFFR4CB2101 en1:222.188.187.158

ZFFR4CB1101 en1:222.188.187.148

 

WARNING:

Could not find a suitable set of interfaces for the private interconnect

Checking subnet mask consistency...

Subnet mask consistency check passed for subnet "22.188.187.0".

Subnet mask consistency check passed for subnet "222.188.187.0".

Subnet mask consistency check passed.

 

Result: Node connectivity check passed

 

Checking multicast communication...

 

Checking subnet "22.188.187.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "22.188.187.0" for multicast communication with multicast group "230.0.1.0" passed.

 

Checking subnet "222.188.187.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "222.188.187.0" for multicast communication with multicast group "230.0.1.0" passed.

 

Check of multicast communication passed.

 

Check: Total memory

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   4GB (4194304.0KB)         2GB (2097152.0KB)         passed   

  ZFFR4CB1101   48GB (5.0331648E7KB)      2GB (2097152.0KB)         passed   

Result: Total memory check passed

 

Check: Available memory

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   2.3528GB (2467056.0KB)    50MB (51200.0KB)          passed   

  ZFFR4CB1101   43.8485GB (4.5978476E7KB)  50MB (51200.0KB)          passed   

Result: Available memory check passed

 

Check: Swap space

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   8GB (8388608.0KB)         4GB (4194304.0KB)         passed   

  ZFFR4CB1101   8GB (8388608.0KB)         16GB (1.6777216E7KB)      failed   

Result: Swap space check failed
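cluvfy asks for 16 GB of swap on ZFFR4CB1101 (the node with 48 GB of RAM) but only 8 GB is configured. On AIX the paging space can be enlarged online with chps. A sketch (hd6 and the partition count are assumptions; check lsps output and rootvg's PP size first):

```
# List current paging spaces and their sizes.
lsps -a
# Grow the default paging space hd6 by 32 logical partitions
# (size added = 32 * the PP size of rootvg), then re-check.
chps -s 32 hd6
lsps -a
```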

 

Check: Free disk space for "ZFFR4CB2101:/tmp"

  Path              Node Name     Mount point   Available     Required      Status     

  ----------------  ------------  ------------  ------------  ------------  ------------

  /tmp              ZFFR4CB2101   /tmp          3.5657GB      1GB           passed     

Result: Free disk space check passed for "ZFFR4CB2101:/tmp"

 

Check: Free disk space for "ZFFR4CB1101:/tmp"

  Path              Node Name     Mount point   Available     Required      Status     

  ----------------  ------------  ------------  ------------  ------------  ------------

  /tmp              ZFFR4CB1101   /tmp          18.4434GB     1GB           passed     

Result: Free disk space check passed for "ZFFR4CB1101:/tmp"

 

Check: User existence for "grid"

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  ZFFR4CB2101   passed                    exists(1025)           

  ZFFR4CB1101   passed                    exists(1025)           

 

Checking for multiple users with UID value 1025

Result: Check for multiple users with UID value 1025 passed

Result: User existence check passed for "grid"

 

Check: Group existence for "oinstall"

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  ZFFR4CB2101   passed                    exists                 

  ZFFR4CB1101   passed                    exists                 

Result: Group existence check passed for "oinstall"

 

Check: Group existence for "dba"

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  ZFFR4CB2101   passed                    exists                 

  ZFFR4CB1101   passed                    exists                 

Result: Group existence check passed for "dba"

 

Check: Membership of user "grid" in group "oinstall" [as Primary]

  Node Name         User Exists   Group Exists  User in Group  Primary       Status     

  ----------------  ------------  ------------  ------------  ------------  ------------

  ZFFR4CB2101       yes           yes           yes           yes           passed     

  ZFFR4CB1101       yes           yes           yes           yes           passed     

Result: Membership check for user "grid" in group "oinstall" [as Primary] passed

 

Check: Membership of user "grid" in group "dba"

  Node Name         User Exists   Group Exists  User in Group  Status         

  ----------------  ------------  ------------  ------------  ----------------

  ZFFR4CB2101       yes           yes           yes           passed         

  ZFFR4CB1101       yes           yes           yes           passed         

Result: Membership check for user "grid" in group "dba" passed

 

Check: Run level

  Node Name     run level                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   2                         2                         passed   

  ZFFR4CB1101   2                         2                         passed   

Result: Run level check passed

 

Check: Hard limits for "maximum open file descriptors"

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  ZFFR4CB2101       hard          9223372036854776000  65536         passed         

  ZFFR4CB1101       hard          9223372036854776000  65536         passed         

Result: Hard limits check passed for "maximum open file descriptors"

 

Check: Soft limits for "maximum open file descriptors"

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  ZFFR4CB2101       soft          9223372036854776000  1024          passed         

  ZFFR4CB1101       soft          9223372036854776000  1024          passed         

Result: Soft limits check passed for "maximum open file descriptors"

 

Check: Hard limits for "maximum user processes"

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  ZFFR4CB2101       hard          16384         16384         passed         

  ZFFR4CB1101       hard          16384         16384         passed         

Result: Hard limits check passed for "maximum user processes"

 

Check: Soft limits for "maximum user processes"

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  ZFFR4CB2101       soft          16384         2047          passed         

  ZFFR4CB1101       soft          16384         2047          passed         

Result: Soft limits check passed for "maximum user processes"

 

Check: System architecture

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   powerpc                   powerpc                   passed   

  ZFFR4CB1101   powerpc                   powerpc                   passed   

Result: System architecture check passed

 

Check: Kernel version

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   7.1-7100.03.03.1415       7.1-7100.00.01.1037       passed   

  ZFFR4CB1101   7.1-7100.02.05.1415       7.1-7100.00.01.1037       passed   

 

WARNING:

PRVF-7524 : Kernel version is not consistent across all the nodes.

Kernel version = "7.1-7100.02.05.1415" found on nodes: ZFFR4CB1101.

Kernel version = "7.1-7100.03.03.1415" found on nodes: ZFFR4CB2101.

Result: Kernel version check passed

 

Check: Kernel parameter for "ncargs"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   256                       128                       passed   

  ZFFR4CB1101   256                       128                       passed   

Result: Kernel parameter check passed for "ncargs"

 

Check: Kernel parameter for "maxuproc"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   16384                     2048                      passed   

  ZFFR4CB1101   16384                     2048                      passed   

Result: Kernel parameter check passed for "maxuproc"

 

Check: Kernel parameter for "tcp_ephemeral_low"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   32768                     9000                      failed (ignorable)

  ZFFR4CB1101   32768                     9000                      failed (ignorable)

Result: Kernel parameter check passed for "tcp_ephemeral_low"

 

Check: Kernel parameter for "tcp_ephemeral_high"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   65535                     65500                     failed (ignorable)

  ZFFR4CB1101   65535                     65500                     failed (ignorable)

Result: Kernel parameter check passed for "tcp_ephemeral_high"

 

Check: Kernel parameter for "udp_ephemeral_low"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   32768                     9000                      failed (ignorable)

  ZFFR4CB1101   32768                     9000                      failed (ignorable)

Result: Kernel parameter check passed for "udp_ephemeral_low"

 

Check: Kernel parameter for "udp_ephemeral_high"

  Node Name     Current                   Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   65535                     65500                     failed (ignorable)

  ZFFR4CB1101   65535                     65500                     failed (ignorable)

Result: Kernel parameter check passed for "udp_ephemeral_high"
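The four ephemeral-port failures above are flagged ignorable, but they can be cleared by setting the ranges Oracle expects with the AIX no command. A sketch (-p makes the change persistent across reboots; run as root on both nodes):

```
/usr/sbin/no -p -o tcp_ephemeral_low=9000 -o tcp_ephemeral_high=65500
/usr/sbin/no -p -o udp_ephemeral_low=9000 -o udp_ephemeral_high=65500
```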

 

Check: Package existence for "bos.adt.base"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.adt.base-7.1.3.15-0   bos.adt.base-...          passed   

  ZFFR4CB1101   bos.adt.base-7.1.3.15-0   bos.adt.base-...          passed   

Result: Package existence check passed for "bos.adt.base"

 

Check: Package existence for "bos.adt.lib"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.adt.lib-7.1.2.15-0    bos.adt.lib-...           passed   

  ZFFR4CB1101   bos.adt.lib-7.1.2.15-0    bos.adt.lib-...           passed   

Result: Package existence check passed for "bos.adt.lib"

 

Check: Package existence for "bos.adt.libm"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.adt.libm-7.1.3.0-0    bos.adt.libm-...          passed   

  ZFFR4CB1101   bos.adt.libm-7.1.3.0-0    bos.adt.libm-...          passed   

Result: Package existence check passed for "bos.adt.libm"

 

Check: Package existence for "bos.perf.libperfstat"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.perf.libperfstat-7.1.3.15-0  bos.perf.libperfstat-...  passed   

  ZFFR4CB1101   bos.perf.libperfstat-7.1.3.15-0  bos.perf.libperfstat-...  passed   

Result: Package existence check passed for "bos.perf.libperfstat"

 

Check: Package existence for "bos.perf.perfstat"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.perf.perfstat-7.1.3.15-0  bos.perf.perfstat-...     passed   

  ZFFR4CB1101   bos.perf.perfstat-7.1.3.15-0  bos.perf.perfstat-...     passed   

Result: Package existence check passed for "bos.perf.perfstat"

 

Check: Package existence for "bos.perf.proctools"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   bos.perf.proctools-7.1.3.15-0  bos.perf.proctools-...    passed   

  ZFFR4CB1101   bos.perf.proctools-7.1.3.15-0  bos.perf.proctools-...    passed   

Result: Package existence check passed for "bos.perf.proctools"

 

Check: Package existence for "xlC.aix61.rte"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   xlC.aix61.rte-12.1.0.1-0  xlC.aix61.rte-10.1.0.0    passed   

  ZFFR4CB1101   xlC.aix61.rte-12.1.0.1-0  xlC.aix61.rte-10.1.0.0    passed   

Result: Package existence check passed for "xlC.aix61.rte"

 

Check: Package existence for "xlC.rte"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   xlC.rte-12.1.0.1-0        xlC.rte-10.1.0.0          passed   

  ZFFR4CB1101   xlC.rte-12.1.0.1-0        xlC.rte-10.1.0.0          passed   

Result: Package existence check passed for "xlC.rte"

 

Check: Operating system patch for "Patch IZ87216"

  Node Name     Applied                   Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   Patch IZ87216:devices.common.IBM.mpio.rte  Patch IZ87216             passed   

  ZFFR4CB1101   Patch IZ87216:devices.common.IBM.mpio.rte  Patch IZ87216             passed   

Result: Operating system patch check passed for "Patch IZ87216"

 

Check: Operating system patch for "Patch IZ87564"

  Node Name     Applied                   Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   Patch IZ87564:bos.adt.libmIZ87564:bos.adt.prof  Patch IZ87564             passed   

  ZFFR4CB1101   Patch IZ87564:bos.adt.libmIZ87564:bos.adt.prof  Patch IZ87564             passed   

Result: Operating system patch check passed for "Patch IZ87564"

 

Check: Operating system patch for "Patch IZ89165"

  Node Name     Applied                   Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   Patch IZ89165:bos.rte.bind_cmds  Patch IZ89165             passed   

  ZFFR4CB1101   Patch IZ89165:bos.rte.bind_cmds  Patch IZ89165             passed   

Result: Operating system patch check passed for "Patch IZ89165"

 

Check: Operating system patch for "Patch IZ97035"

  Node Name     Applied                   Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   Patch IZ97035:devices.vdevice.IBM.l-lan.rte  Patch IZ97035             passed   

  ZFFR4CB1101   Patch IZ97035:devices.vdevice.IBM.l-lan.rte  Patch IZ97035             passed   

Result: Operating system patch check passed for "Patch IZ97035"

 

Checking for multiple users with UID value 0

Result: Check for multiple users with UID value 0 passed

 

Check: Current group ID

Result: Current group ID check passed

 

Starting check for consistency of primary group of root user

  Node Name                             Status                 

  ------------------------------------  ------------------------

  ZFFR4CB2101                           passed                 

  ZFFR4CB1101                           passed                 

 

Check for consistency of root user's primary group passed

 

Starting Clock synchronization checks using Network Time Protocol(NTP)...

 

NTP Configuration file check started...

The NTP configuration file "/etc/ntp.conf" is available on all nodes

NTP Configuration file check passed

 

Checking daemon liveness...

 

Check: Liveness for "xntpd"

  Node Name                             Running?               

  ------------------------------------  ------------------------

  ZFFR4CB2101                           yes                    

  ZFFR4CB1101                           yes                    

Result: Liveness check passed for "xntpd"

Check for NTP daemon or service alive passed on all nodes

 

Checking NTP daemon command line for slewing option "-x"

Check: NTP daemon command line

  Node Name                             Slewing Option Set?    

  ------------------------------------  ------------------------

  ZFFR4CB2101                           yes                    

  ZFFR4CB1101                           no                     

Result:

NTP daemon slewing option check failed on some nodes

PRVF-5436 : The NTP daemon running on one or more nodes lacks the slewing option "-x"

Result: Clock synchronization check using Network Time Protocol(NTP) failed
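PRVF-5436 can be resolved by adding the slewing option to xntpd on the node that lacks it, here ZFFR4CB1101. A sketch using the AIX SRC commands (run as root; verify with ps that "-x" is present afterwards):

```
# Add "-x" to the xntpd startup arguments and restart the daemon.
chssys -s xntpd -a "-x"
stopsrc -s xntpd
startsrc -s xntpd
# Confirm the slewing option took effect.
ps -ef | grep xntpd
```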

 

Checking Core file name pattern consistency...

Core file name pattern consistency check passed.

 

Checking to make sure user "grid" is not in "system" group

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  ZFFR4CB2101   passed                    does not exist         

  ZFFR4CB1101   passed                    does not exist         

Result: User "grid" is not part of "system" group. Check passed

 

Check default user file creation mask

  Node Name     Available                 Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  ZFFR4CB2101   022                       0022                      passed   

  ZFFR4CB1101   022                       0022                      passed   

Result: Default user file creation mask check passed

Checking consistency of file "/etc/resolv.conf" across nodes

 

File "/etc/resolv.conf" does not exist on any node of the cluster. Skipping further checks

 

File "/etc/resolv.conf" is consistent across nodes

 

Check: Time zone consistency

Result: Time zone consistency check passed

Result: User ID < 65535 check passed

 

Result: Kernel 64-bit mode check passed

 

[grid@ZFFR4CB2101:/softtmp/grid]$

 

三.2.1  Silent Installation of the Grid Software

Run the following as root:

/softtmp/grid/rootpre.sh

 

[ZFFR4CB2101:root]/]> /softtmp/grid/rootpre.sh

/softtmp/grid/rootpre.sh output will be logged in /tmp/rootpre.out_16-03-09.09:47:33

 

Checking if group services should be configured....

Nothing to configure.

[ZFFR4CB2101:root]/]>

 

./runInstaller -silent  -force -noconfig -IgnoreSysPreReqs -ignorePrereq  -showProgress \

INVENTORY_LOCATION=/u01/app/oraInventory \

SELECTED_LANGUAGES=en \

ORACLE_BASE=/u01/app/grid \

ORACLE_HOME=/u01/app/11.2.0/grid \

oracle.install.asm.OSDBA=asmdba \

oracle.install.asm.OSOPER=asmoper \

oracle.install.asm.OSASM=asmadmin \

oracle.install.crs.config.storageOption=ASM_STORAGE \

oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=EXTERNAL \

oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=EXTERNAL \

oracle.install.crs.config.useIPMI=false \

oracle.install.asm.diskGroup.name=OCR \

oracle.install.asm.diskGroup.redundancy=EXTERNAL \

oracle.installer.autoupdates.option=SKIP_UPDATES \

oracle.install.crs.config.gpnp.scanPort=1521 \

oracle.install.crs.config.gpnp.configureGNS=false \

oracle.install.option=CRS_CONFIG \

oracle.install.asm.SYSASMPassword=lhr \

oracle.install.asm.monitorPassword=lhr \

oracle.install.asm.diskGroup.diskDiscoveryString=/dev/rhdisk* \

oracle.install.asm.diskGroup.disks=/dev/rhdisk10 \

oracle.install.crs.config.gpnp.scanName=ZFFR4CB2101-scan \

oracle.install.crs.config.clusterName=ZFFR4CB-cluster \

oracle.install.crs.config.autoConfigureClusterNodeVIP=false \

oracle.install.crs.config.clusterNodes=ZFFR4CB2101:ZFFR4CB2101-vip,ZFFR4CB1101:ZFFR4CB1101-vip \

oracle.install.crs.config.networkInterfaceList=en0:22.188.187.0:1,en1:222.188.187.0:2 \

ORACLE_HOSTNAME=ZFFR4CB2101
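Instead of passing every parameter on the command line, the same keys can go into a response file passed with -responseFile. The 11.2 grid media ships a full template at response/crs_install.rsp; the fragment below is only an illustrative subset of the keys used above (a real response file must be based on the complete template), and the file name grid_install.rsp is arbitrary:

```shell
# Illustrative subset only -- base a real file on response/crs_install.rsp
# from the installation media; /tmp/grid_install.rsp is an arbitrary name.
cat > /tmp/grid_install.rsp <<'EOF'
oracle.install.option=CRS_CONFIG
INVENTORY_LOCATION=/u01/app/oraInventory
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/11.2.0/grid
SELECTED_LANGUAGES=en
oracle.install.asm.diskGroup.name=OCR
oracle.install.asm.diskGroup.redundancy=EXTERNAL
oracle.install.asm.diskGroup.disks=/dev/rhdisk10
EOF

# Then (not executed here):
#   ./runInstaller -silent -ignorePrereq -showProgress -responseFile /tmp/grid_install.rsp
wc -l < /tmp/grid_install.rsp
```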

 

Run the silent installation from the command line. When copying the script, be careful not to add an extra newline at the end, and do not run anything else in this window. It takes a while to start. The places that need to be modified for your environment are the ones I have marked with a yellow background:

[grid@ZFFR4CB2101:/softtmp/grid]$ ./runInstaller -silent  -force -noconfig -IgnoreSysPreReqs -ignorePrereq  -showProgress \

> INVENTORY_LOCATION=/u01/app/oraInventory \

> SELECTED_LANGUAGES=en \

> ORACLE_BASE=/u01/app/grid \

> ORACLE_HOME=/u01/app/11.2.0/grid \

> oracle.install.asm.OSDBA=asmdba \

> oracle.install.asm.OSOPER=asmoper \

> oracle.install.asm.OSASM=asmadmin \

> oracle.install.crs.config.storageOption=ASM_STORAGE \

> oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=EXTERNAL \

> oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=EXTERNAL \

> oracle.install.crs.config.useIPMI=false \

> oracle.install.asm.diskGroup.name=OCR \

> oracle.install.asm.diskGroup.redundancy=EXTERNAL \

> oracle.installer.autoupdates.option=SKIP_UPDATES \

> oracle.install.crs.config.gpnp.scanPort=1521 \

> oracle.install.crs.config.gpnp.configureGNS=false \

> oracle.install.option=CRS_CONFIG \

> oracle.install.asm.SYSASMPassword=lhr \

> oracle.install.asm.monitorPassword=lhr \

> oracle.install.asm.diskGroup.diskDiscoveryString=/dev/rhdisk* \

> oracle.install.asm.diskGroup.disks=/dev/rhdisk10 \

> oracle.install.crs.config.gpnp.scanName=ZFFR4CB2101-scan \

> oracle.install.crs.config.clusterName=ZFFR4CB-cluster \

> oracle.install.crs.config.autoConfigureClusterNodeVIP=false \

> oracle.install.crs.config.clusterNodes=ZFFR4CB2101:ZFFR4CB2101-vip,ZFFR4CB1101:ZFFR4CB1101-vip \

> oracle.install.crs.config.networkInterfaceList=en0:22.188.187.0:1,en1:222.188.187.0:2 \

> ORACLE_HOSTNAME=ZFFR4CB2101

********************************************************************************

 

Your platform requires the root user to perform certain pre-installation

OS preparation.  The root user should run the shell script 'rootpre.sh' before

you proceed with Oracle installation.  rootpre.sh can be found at the top level

of the CD or the stage area.

 

Answer 'y' if root has run 'rootpre.sh' so you can proceed with Oracle

installation.

Answer 'n' to abort installation and then ask root to run 'rootpre.sh'.

 

********************************************************************************

 

Has 'rootpre.sh' been run by root on all nodes? [y/n] (n)

y

 

Starting Oracle Universal Installer...

 

Checking Temp space: must be greater than 190 MB.   Actual 4330 MB    Passed

Checking swap space: must be greater than 150 MB.   Actual 8192 MB    Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-03-10_04-54-07PM. Please wait ...[grid@ZFFR4CB2101:/softtmp/grid]$

[grid@ZFFR4CB2101:/softtmp/grid]$

[grid@ZFFR4CB2101:/softtmp/grid]$

[grid@ZFFR4CB2101:/softtmp/grid]$

[grid@ZFFR4CB2101:/softtmp/grid]$ [WARNING] [INS-30011] The SYS password entered does not conform to the Oracle recommended standards.

   CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].

   ACTION: Provide a password that conforms to the Oracle recommended standards.

[WARNING] [INS-30011] The ASMSNMP password entered does not conform to the Oracle recommended standards.

   CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].

   ACTION: Provide a password that conforms to the Oracle recommended standards.

You can find the log of this install session at:

/u01/app/oraInventory/logs/installActions2016-03-10_04-54-07PM.log

 

Prepare in progress.

..................................................   5% Done.

 

Prepare successful.

 

Copy files in progress.

..................................................   10% Done.

..................................................   15% Done.

........................................

Copy files successful.

..................................................   27% Done.

 

Link binaries in progress.

 

Link binaries successful.

..................................................   34% Done.

 

Setup files in progress.

 

Setup files successful.

..................................................   41% Done.

 

Perform remote operations in progress.

..................................................   48% Done.

 

Perform remote operations successful.

The installation of Oracle Grid Infrastructure was successful.

Please check '/u01/app/oraInventory/logs/silentInstall2016-03-10_04-54-07PM.log' for more details.

..................................................   97% Done.

 

Execute Root Scripts in progress.

 

As a root user, execute the following script(s):

        1. /u01/app/oraInventory/orainstRoot.sh

        2. /u01/app/11.2.0/grid/root.sh

 

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:

[ZFFR4CB2101, ZFFR4CB1101]

Execute /u01/app/11.2.0/grid/root.sh on the following nodes:

[ZFFR4CB2101, ZFFR4CB1101]

 

..................................................   100% Done.

 

Execute Root Scripts successful.

As install user, execute the following script to complete the configuration.

        1. /u01/app/11.2.0/grid/cfgtoollogs/configToolAllCommands

 

        Note:

        1. This script must be run on the same system from where installer was run.

        2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).

 

 

Successfully Setup Software.

 

[grid@ZFFR4CB2101:/softtmp/grid]$
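The installer output above says configToolAllCommands "needs a small password properties file". Per the 11.2 Grid Infrastructure installation guide, the keys for the ASM passwords look like the sketch below; the path /tmp/cfgrsp.properties is arbitrary and the passwords are placeholders matching this install:

```shell
# Password properties file for configToolAllCommands (key names per the
# 11.2 Grid Infrastructure installation guide); path and passwords are placeholders.
cat > /tmp/cfgrsp.properties <<'EOF'
oracle.assistants.asm|S_ASMPASSWORD=lhr
oracle.assistants.asm|S_ASMMONITORPASSWORD=lhr
EOF
chmod 600 /tmp/cfgrsp.properties   # the file holds passwords

# Then, as the grid user (not executed here):
#   /u01/app/11.2.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/tmp/cfgrsp.properties
grep -c 'PASSWORD' /tmp/cfgrsp.properties
```

Delete the properties file once the configuration assistants have finished.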

 

 

 

 

On the node where the installer is running:

[ZFFR4CB2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

6.80    /u01/app/11.2.0/grid

[ZFFR4CB2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

7.41    /u01/app/11.2.0/grid

[ZFFR4CB2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

8.03    /u01/app/11.2.0/grid

[ZFFR4CB2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

8.61    /u01/app/11.2.0/grid

[ZFFR4CB2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

9.80    /u01/app/11.2.0/grid

[ZFFR4CB2101:root]/u01/app]> du -sg /u01/app/11.2.0/grid

9.80    /u01/app/11.2.0/grid

 

 

When the installer reaches "Perform remote operations in progress.", you can watch the size of the grid home on the other node to judge whether the copy is still progressing or has hung:

[ZFFR4CB1101:root]/u01/app/11.2.0/grid/bin]> du -sg .

1.78    .

[ZFFR4CB1101:root]/u01/app/11.2.0/grid/bin]> cd

[ZFFR4CB1101:root]/]> du -sg /u01/app/11.2.0/grid

2.90    /u01/app/11.2.0/grid

[ZFFR4CB1101:root]/]> du -sg /u01/app/11.2.0/grid

3.41    /u01/app/11.2.0/grid

[ZFFR4CB1101:root]/]> du -sg /u01/app/11.2.0/grid

7.25    /u01/app/11.2.0/grid

[ZFFR4CB1101:root]/]> du -sg /u01/app/11.2.0/grid

8.76    /u01/app/11.2.0/grid

[ZFFR4CB1101:root]/]> du -sg /u01/app/11.2.0/grid

9.81    /u01/app/11.2.0/grid

[ZFFR4CB1101:root]/]>
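The repeated du calls above can be wrapped in a small polling loop. Note that du -sg (gigabytes) is AIX syntax; the sketch below uses the portable du -sk, a hypothetical GRID_HOME default, and a short sleep so it is easy to try — use sleep 60 during a real install:

```shell
# Poll the Grid home size to see whether the remote copy is still progressing.
# GRID_HOME and the sample count are illustrative.
GRID_HOME=${GRID_HOME:-/u01/app/11.2.0/grid}
for sample in 1 2 3; do
    size_kb=$(du -sk "$GRID_HOME" 2>/dev/null | awk '{print $1}')
    echo "sample $sample: ${size_kb:-0} KB in $GRID_HOME"
    sleep 1   # sleep 60 during a real install
done
```

If the reported size stops growing for several minutes, the remote copy is probably stuck.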

 

 

三.2.1.1  Run root.sh

As a root user, execute the following script(s):

        1. /u01/app/oraInventory/orainstRoot.sh

        2. /u01/app/11.2.0/grid/root.sh

 

 

[ZFFR4CB2101:root]/]> /u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

 

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

[ZFFR4CB2101:root]/]> /u01/app/11.2.0/grid/root.sh

Check /u01/app/11.2.0/grid/install/root_ZFFR4CB2101_2016-03-10_17-08-45.log for the output of root script

 

After pressing Enter the script appears to hang; it is finished only when the prompt returns on its own. Open a separate window to follow the log:

[ZFFR4CB2101:root]/softtmp]>  tail -2000f /u01/app/11.2.0/grid/install/root_ZFFR4CB2101_2016-03-10_17-08-45.log

 

Performing root user operation for Oracle 11g

 

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/11.2.0/grid

 

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

User grid has the required capabilities to run CSSD in realtime mode

OLR initialization - successful

  root wallet

  root wallet cert

  root cert export

  peer wallet

  profile reader wallet

  pa wallet

  peer wallet keys

  pa wallet keys

  peer cert request

  pa cert request

  peer cert

  pa cert

  peer root cert TP

  profile reader root cert TP

  pa root cert TP

  peer pa cert TP

  pa peer cert TP

  profile reader pa cert TP

  profile reader peer cert TP

  peer user cert

  pa user cert

Adding Clusterware entries to inittab

CRS-2672: Attempting to start 'ora.mdnsd' on 'zffr4cb2101'

CRS-2676: Start of 'ora.mdnsd' on 'zffr4cb2101' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'zffr4cb2101'

CRS-2676: Start of 'ora.gpnpd' on 'zffr4cb2101' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'zffr4cb2101'

CRS-2672: Attempting to start 'ora.gipcd' on 'zffr4cb2101'

CRS-2676: Start of 'ora.gipcd' on 'zffr4cb2101' succeeded

CRS-2676: Start of 'ora.cssdmonitor' on 'zffr4cb2101' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'zffr4cb2101'

CRS-2672: Attempting to start 'ora.diskmon' on 'zffr4cb2101'

CRS-2676: Start of 'ora.diskmon' on 'zffr4cb2101' succeeded

CRS-2676: Start of 'ora.cssd' on 'zffr4cb2101' succeeded

 

ASM created and started successfully.

 

Disk Group OCR created successfully.

 

clscfg: -install mode specified