Redbooks Paper
Dino Quintero
Sven Meissner
Andrei Socoliuc
Hardware Management Console (HMC) Case Configuration Study for LPAR Management
This IBM® Redpaper provides Hardware Management Console (HMC) configuration considerations and describes case studies about how to use the HMC in a production environment. This document does not describe how to install the HMC or how to set up LPARs; we assume that you are familiar with the HMC. Rather, the case studies presented in this Redpaper provide a
                    
 Automation
 High availability considerations for HMCs

Introduction and overview

The Hardware Management Console (HMC) is a dedicated workstation that allows you to configure and manage partitions. To perform maintenance operations, a graphical user interface (GUI) is provided. Functions performed by the HMC include:
 Creating and maintaining a multiple partition environment
 Displaying a virtual operating system session terminal for each partition
 Displaying a virtual operator panel of
                    
Table 1   Types of HMCs

Type                       Supported managed systems   HMC code version
7315-CR3 (rack mount) 1    POWER4 or POWER5            HMC 3.x, HMC 4.x, or HMC 5.x
7315-C04 (desktop) 1       POWER4 or POWER5            HMC 3.x, HMC 4.x, or HMC 5.x
7310-CR3 (rack mount)      POWER5                      HMC 4.x or HMC 5.x
7310-C04 (desktop)         POWER5                      HMC 4.x or HMC 5.x

1 - Licensed Internal Code (FC0961) is needed to upgrade these HMCs to manage POWER5 systems.

A single HMC cannot be used to manage a mixed environment of POWER4 and POWER5 systems. The HMC 3.x cod
                    
The maximum number of HMCs supported by a single POWER5 managed system is two. The number of LPARs managed by a single HMC has been increased from earlier versions of the HMC to the current supported release, as shown in Table 3.

Table 3   HMC history

HMC code   No. of HMCs   No. of servers   No. of LPARs   Other information
4.1.x      1             4                40             iSeries only
4.2.0      2             16               64             p5 520, 550, 570
4.2.1      2             32               160            OpenPower 720
4.3.1      2             32               254            p5 590, 595
4.4.0      2             32               254            p5 575, HMC 7310-CR3/C04
4.5.0      2             32/48            254            48 for non
                    
menus. However, not all POWER5 servers support this mechanism of allocation. Currently, p575, p590, and p595 servers support only DHCP.

Note: Either eth0 or eth1 can be a DHCP server on the HMC.

 HMC to partitions: The HMC requires a TCP/IP connection to communicate with the partitions for functions such as dynamic LPAR and Service Focal Point.
 Service Agent (SA) connections: SA is the application running on the HMC that reports hardware failures to the IBM support center. It uses a modem for
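Once command line access to the HMC is available (see "Enabling ssh access to HMC" later in this paper), the network configuration described above can be displayed with the lshmc command. The following is a minimal sketch, assuming the hmctot184 HMC and hscroot user from the examples that follow:

root@julia/>ssh hscroot@hmctot184 "lshmc -n"
# Displays the HMC network settings, including the host name and the
# IP configuration of the eth0 and eth1 interfaces.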
                    
multi-threading. SMT is a feature supported only in AIX 5.3 and Linux at an appropriate level.

 Multiple operating system support: Logical partitioning allows a single server to run multiple operating system images concurrently. On a POWER5 system, the following operating systems can be installed: AIX 5L™ Version 5.2 ML4 or later, SUSE Linux Enterprise Server 9 Service Pack 2, Red Hat Enterprise Linux ES 4 QU1, and i5/OS.

Additional memory allocation in a partitioned environment

Three memor
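As a brief illustration of the SMT support mentioned above: on an AIX 5.3 partition, the smtctl command displays and changes the SMT mode. A minimal sketch:

root@julia/>smtctl                    # display the current SMT status
root@julia/>smtctl -m on -w now       # enable SMT immediately, without a reboot
root@julia/>smtctl -m off -w boot     # disable SMT at the next operating system boot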
                    
To calculate your desired and maximum memory values accurately, we recommend that you use the LVT tool. This tool is available at:
http://www.ibm.com/servers/eserver/iseries/lpar/systemdesign.htm

Figure 1 shows an example of how you can use the LPAR Validation Tool to verify a memory configuration. In Figure 1, there are four partitions (P1..P4) defined on a p595 system with a total of 32 GB of memory.

Figure 1   Using LVT to validate the LPAR configuration

Hardware Management Console (
                    
The memory allocated to the hypervisor is 1792 MB. When we change the maximum memory parameter of partition P3 from 4096 MB to 32768 MB, the memory allocated to the hypervisor increases to 2004 MB, as shown in Figure 2.

Figure 2   Memory used by the hypervisor

Figure 3 is another example of using LVT, this time to verify an incorrect memory configuration. Note that the total amount of allocated memory is 30 GB, but the maximum limits for the partitions require more hypervisor memory.

Figure 3   An examp
                    
Micro-Partitioning

With POWER5 systems, increased flexibility is provided for allocating CPU resources by using Micro-Partitioning features. The following parameters can be set up on the HMC:
 Dedicated/shared mode, which allows a partition to allocate either a full CPU or partial units. The minimum CPU allocation unit for a partition is 0.1.
 Minimum, desired, and maximum limits for the number of CPUs allocated to a dedicated partition.
 Minimum, desired, and maximum limits for processor u
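These Micro-Partitioning settings can also be inspected from the HMC command line with lshwres. The following is a minimal sketch, assuming the managed system p550_itso1 used later in this paper; the -F attribute names follow the POWER5 HMC conventions:

root@julia/>ssh hscroot@hmctot184 "lshwres -r proc -m p550_itso1 --level lpar \
> -F lpar_name:curr_proc_mode:curr_proc_units:curr_sharing_mode"
# For each partition: dedicated or shared mode, currently assigned
# processing units, and capped/uncapped sharing mode.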
                    
Note: Take into consideration that changes in the profile will not be activated unless you power off and start up your partition. Rebooting the operating system is not sufficient.

Capacity on Demand

Capacity on Demand (CoD) for POWER5 systems offers multiple options, including:
 Permanent Capacity on Demand:
– Provides system upgrades by activating processors and/or memory.
– No special contracts and no monitoring are required.
– The purchase agreement is fulfilled using activation keys.
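As a sketch of how the permanent option is applied: the activation key is entered on the HMC with the chcod command. The key value below is a hypothetical placeholder:

root@julia/>ssh hscroot@hmctot184 "chcod -o e -k <activation_key> -m p550_itso1"
# -o e enters a CoD activation code on the managed system.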
                    
HMC sample scenarios

The following examples illustrate POWER5 advanced features.

Examples of using capped/uncapped, weight, dynamic LPAR and CoD features

Our case study describes different ways to take advantage of the Micro-Partitioning features and CoD, assuming a failover/fallback scenario based on two independent servers. The scenario does not address a particular clustering mechanism used between the two nodes. We describe the operations by using both the WebSM GUI and the command line interface.
                    
Figure 4   Initial configuration (two servers managed by HMC 1 and HMC 2: a p550 with 2 CPUs and 8 GB running node nils in production with 2 dedicated CPUs and 7 GB, clustered with a p550 with 4 CPUs and 8 GB running node julia on standby with 0.2 shared CPU and 1024 MB, node oli in production with 1 dedicated CPU and 5120 MB, and the VIO server nicole_vio with 0.8 shared CPU and 1024 MB)

Table 5 shows our configuration in detail. Our test system has only one 4-pack DASD available. Therefore, we installed a VIO server to have sufficient disks available for our partitions.

Table 5   CPU and memory allocation table

Partition C
                    
Table 6   Memory allocation

                        Memory (MB)
Partition name   Min    Desired   Max
nicole_vio       512    1024      2048
oli              1024   5120      8192
julia            512    1024      8192

Enabling ssh access to HMC

By default, the ssh server on the HMC is not enabled. The following steps configure ssh access for node julia on the HMC. The procedure allows node julia to run HMC commands without providing a password.

 Enable remote command execution on the HMC. In the management area of the HMC main panel, select HMC Management → HMC Configura
                    
HMC Configuration. In the right panel, select Customize Network Setting, click the LAN Adapters tab, choose the interface used for remote access, and click Details. In the new window, select the Firewall tab. Check that the ssh port is allowed for access (see Figure 6).

Figure 6   Firewall settings for the eth1 interface

 Install the ssh client on the AIX node. The packages can be found on the AIX 5L Bonus Pack CD. To get the latest release packages, access the following URL:
http://sourceforge.ne
                    
  openssh.msg.en_US       3.8.0.5302    C     F    Open Secure Shell Messages

 Log in to the user account used for remote access to the HMC. Generate the ssh keys using the ssh-keygen command. In Example 2, we used the root user account and specified the RSA algorithm for encryption. The security keys are saved in the /.ssh directory.

Example 2   ssh-keygen output
root@julia/>ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (//.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in //.ssh/id_rsa.
Your public key has been saved in //.ssh/id_rsa.pub.
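The generated public key must then be registered with the hscroot user on the HMC so that password-less logins are accepted. A minimal sketch using the mkauthkeys command provided by the HMC (you are prompted for the hscroot password one last time):

root@julia/>KEY=`cat /.ssh/id_rsa.pub`
root@julia/>ssh hscroot@hmctot184 "mkauthkeys --add '$KEY'"

After this step, node julia can run HMC commands over ssh without providing a password.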
                    
Now, we force node nils to fail and prepare to start the takeover scenario (see Figure 7).

Figure 7   CoD and dynamic LPAR operations during takeover (1 - failover from node nils to node julia, which takes over production with 2 shared CPUs and 7 GB; 2 - node julia remotely activates CoD and performs dynamic LPAR operations via the HMC; node oli remains in production with 1 dedicated CPU and 5120 MB, and the VIO server nicole_vio keeps 0.8 shared CPU and 1024 MB)
                    
Figure 8   Activating the On/Off CoD

Activating On/Off CoD using the command line interface: Example 4 shows how node julia activates 2 CPUs and 8 GB of RAM for 3 days by running the chcod command on the HMC via ssh.

Example 4   Activating CoD using command line interface
CPU:
root@julia/.ssh>ssh hscroot@hmctot184 "chcod -m p550_itso1 -o a -c onoff -r proc -q 2 -d 3"
Memory:
root@julia/.ssh>ssh hscroot@hmctot184 "chcod -m p550_itso1 -o a -c onoff -r mem -q 8192 -d 3"

Perform the
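The result of the activation can be checked by querying the CoD status with the lscod command; a minimal sketch for the processor side (replace -r proc with -r mem for memory):

root@julia/.ssh>ssh hscroot@hmctot184 "lscod -t cap -c onoff -m p550_itso1 -r proc"
# Shows the On/Off CoD processor capacity currently activated.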
                    
Note: If you use Reserve CoD instead of On/Off CoD to temporarily activate processors, you can assign the CPUs to shared partitions only.

In order for node julia to operate with the same resources as node nils had, we have to add 1.8 processing units and 6.5 GB of memory to this node.

 Allocation of processor units:
– Using the graphical user interface. In the Server and Partition panel on the HMC, right-click partition julia and select Dynamic Logical Partitioning → Processor Resources → Add. I
                    
Example 5   Perform the CPU addition from the command line
root@julia/>lsdev -Cc processor
proc0 Available 00-00 Processor
root@julia/>ssh hscroot@hmctot184 lshwres -r proc -m p550_itso1 --level \
> lpar --filter "lpar_names=julia" -F lpar_name:curr_proc_units:curr_procs \
> --header
lpar_name:curr_proc_units:curr_procs
julia:0.2:1
root@julia/>ssh hscroot@hmctot184 chhwres -m p550_itso1 -o a -p julia \
> -r proc --procunits 1.8 --procs 1
root@julia/>lsdev -Cc processor
proc0 Available 00-00 Processor
                    
Figure 10   Add memory to partition

– Using the command line. Example 6 shows how to allocate 6 GB of memory to partition julia.

Example 6   Memory allocation using command line interface
root@julia/>lsattr -El mem0
goodsize 1024 Amount of usable physical memory in Mbytes False
size     1024 Total amount of physical memory in Mbytes  False
root@julia/>ssh hscroot@hmctot184 lshwres -r mem -m p550_itso1 --level \
> lpar --filter "lpar_names=julia" -F lpar_name:curr_mem --header
lpar_name:curr_mem
julia:1024
root@julia/>ssh hscroot@hmctot184 chhwres -m p550_itso1 -o a -p julia \
> -r mem -q 6144
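To avoid typing these steps by hand during a takeover, the CoD activation and the two dynamic LPAR additions can be collected into one script on node julia. The following is a minimal sketch under the assumptions of this scenario only (the host name, managed system, and quantities are the ones used above):

#!/bin/ksh
# Hypothetical takeover helper for node julia; values match this scenario.
HMC="hscroot@hmctot184"
SYS="p550_itso1"
# 1. Temporarily activate 2 CPUs and 8 GB of memory with On/Off CoD for 3 days.
ssh $HMC "chcod -m $SYS -o a -c onoff -r proc -q 2 -d 3"
ssh $HMC "chcod -m $SYS -o a -c onoff -r mem -q 8192 -d 3"
# 2. Add 1.8 processing units, one virtual processor, and 6 GB to partition julia.
ssh $HMC "chhwres -m $SYS -o a -p julia -r proc --procunits 1.8 --procs 1"
ssh $HMC "chhwres -m $SYS -o a -p julia -r mem -q 6144"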