
Architecting a Citrix Virtualization Solution - Design: Virtualization Infrastructure Design

Let's refresh where we are! We are discussing how to architect a Citrix environment from an architect's perspective. There are two parts to it:

  1)    Assessment
  2)    Design

Assessment is further divided into its own sub-phases.

Design is further divided into:
·   Virtualization Infrastructure Design
·   Access Design
·   Delivery Optimization
In this blog, we will discuss the design of the virtualization infrastructure.

The virtualization infrastructure is the underlying platform that will support the desktop and application virtualization design. This platform includes the virtualization hypervisor, server hardware configuration, virtual networking configuration, storage configuration and resource pool design.
Virtualization Infrastructure Design Scenario
A Citrix Certified Integration Architect has been given the task of designing a virtualization infrastructure solution for a hospital piloting virtual desktops to some of its users.
During the assessment and previous design discussions, the architect documented the following information:
  • Doctors should have private desktop images as they demand a unique desktop environment with persistent application installations and customizations.
  • The hospital's medical imaging application will be installed on the doctor's desktop image.
  • Because of the criticality of their work, doctors' desktops must be highly available.
  • Nurses move from workstation to workstation within the hospital and access XenApp hosted shared desktops.
  • The hospital's patient care application is hosted on XenApp. XenApp hosted shared desktops are published to nurses, while only the application is published to doctors.
  • XenApp servers hosting shared desktops should be highly available.
  • Office staff use standard productivity and billing applications.
  • High availability of changes or customizations for office staff desktops is not a priority.
  • The hospital has a Hitachi FC-SAN.
  • XenServer hosts have 10,000 RPM drives configured for RAID 0.
  • Provisioning Services will run as a physical server workload.
  • Good virtual desktop performance is a higher priority than management ease.
  
| User Group   | Number | Desktop Type           | vDisk Type |
|--------------|--------|------------------------|------------|
| Doctors      | 50     | Hosted                 | Private    |
| Nurses       | 125    | Hosted Shared (XenApp) | Standard   |
| Office staff | 150    | Hosted                 | Standard   |

| vDisk Group          | Description                                   | vDisk Size | Write Cache Size |
|----------------------|-----------------------------------------------|------------|------------------|
| Doctors desktop      | 32-bit Windows 7 vDisk                        | 25 GB      | N/A              |
| XenApp servers       | 64-bit XenApp 5 for Windows Server 2008 vDisk | 30 GB      | 8 GB             |
| Office staff desktop | 32-bit Windows 7 vDisk                        | 25 GB      | 5 GB             |
An architect can use the information gathered during the assessment and design discussions, together with the topics that follow, to design a virtualization infrastructure solution.
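The scenario tables above translate directly into rough storage figures. Below is a minimal back-of-the-envelope sketch in Python; the XenApp server count is a made-up assumption for illustration (the scenario does not state it), and the other numbers come straight from the vDisk table.

```python
# Rough storage sizing for the hospital scenario (illustrative only).
doctors_count, doctors_vdisk_gb = 50, 25      # private vDisks, no write cache
office_count, office_cache_gb = 150, 5        # shared vDisk + per-VM write cache
office_vdisk_gb = 25
xenapp_servers = 10                           # ASSUMED number of XenApp VMs
xenapp_vdisk_gb, xenapp_cache_gb = 30, 8

# Private mode: every doctor gets a full copy of the vDisk.
doctors_gb = doctors_count * doctors_vdisk_gb
# Standard mode: one master vDisk plus a write cache per VM.
office_gb = office_vdisk_gb + office_count * office_cache_gb
xenapp_gb = xenapp_vdisk_gb + xenapp_servers * xenapp_cache_gb

total = doctors_gb + office_gb + xenapp_gb
print(f"Doctors (private vDisks): {doctors_gb} GB")
print(f"Office staff (1 vDisk + caches): {office_gb} GB")
print(f"XenApp (1 vDisk + caches): {xenapp_gb} GB")
print(f"Estimated total before growth/RAID overhead: {total} GB")
```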


Server Hardware
Architects should ensure that the server model will meet the overall project goals and design requirements. All XenServer host hardware should be listed on the current XenServer Hardware Compatibility List (HCL). Ideally, the server hardware should support the organization's initiatives for server consolidation, redundancy and capacity.
All servers in a common XenServer resource pool must be configured with processors capable of executing a common instruction set. XenServer has a mandatory 64-bit processor requirement. If Windows workloads will be virtualized, the processor also must support virtualization technology, such as Intel VT and AMD-V. Furthermore, servers running Intel Nehalem processors can allow for greater density for virtualization workloads.
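On any candidate host booted into Linux, the presence of hardware virtualization support can be spot-checked from /proc/cpuinfo before the hardware is committed to a pool. A minimal sketch; the vmx and svm flag names correspond to Intel VT and AMD-V respectively, though BIOS settings can still disable the feature even when the flag is present.

```python
# Quick check for Intel VT (vmx) or AMD-V (svm) support on a Linux host.
def has_hw_virt(cpuinfo_path="/proc/cpuinfo"):
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    # Note: the flag only shows CPU capability; VT/AMD-V may still be
    # disabled in the BIOS and must be verified there as well.
    return "vmx" in flags or "svm" in flags

if __name__ == "__main__":
    print("Hardware virtualization supported:", has_hw_virt())
```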




[Figure: Hardware Considerations]



Network Configuration
When designing a XenServer virtualization environment, architects must consider the networking implications of running several virtual machines on a single server. Network I/O performance and NIC redundancy are important as poor performance or a NIC failure potentially can impact multiple virtual workloads and users. The risk can be even greater on blade systems as blade servers within an enclosure share common NICs. Understanding the supported workloads and server hardware configuration will help architects create a virtualization design that minimizes networking risks.
Addressing and VLANs
Because several virtual machines will be running on each physical XenServer host, these virtual machines must share the physical NICs on the server or blade enclosure. To overcome the networking and IP addressing limitations this poses, virtual LANs (VLANs) can be defined on the XenServer hosts, which will allow for network traffic segregation. Segregating network traffic can improve performance by preventing traffic collisions and provide greater control and easier management. If possible, physically segregating management traffic is a best practice; otherwise, management traffic should be segregated through VLANs.
Only physical NIC bonding can be used to bond the management and high availability network. VLAN bonding for this traffic is not supported.
Both the physical XenServer management interfaces and the virtual machines should be assigned to separate VLANs or IP subnets. Virtual machine resources must be assigned to VLANs to ensure users can reach them. IP addressing requirements for both the XenServer hosts and the virtual machines should be accounted for in the design. Architects also should consider the impact to DNS and DHCP given the number of virtual machines that can be started in a very short time.
Citrix XenServer supports the use of tagged VLANs. Using VLANs will require additional analysis as each XenServer in a pool needs to have its physical NICs connected to specific VLAN trunk ports to allow the correct routing of VLAN tagged traffic. Trunking can add complexity to the network design if every VLAN must be trunked to the servers in every resource pool.
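For reference, a tagged VLAN network can also be scripted against the XenAPI (XenServer's management API, which ships with Python bindings). This is a hedged sketch: the host address, credentials, VLAN tag and the choice of PIF are all placeholders, and in practice the correct trunked PIF must be selected per host.

```python
import XenAPI  # Python bindings shipped with the XenServer SDK

session = XenAPI.Session("https://xenserver-host")   # placeholder address
session.xenapi.login_with_password("root", "password")
try:
    # Create a new network object for the VLAN.
    net = session.xenapi.network.create({
        "name_label": "VM-VLAN-120",
        "name_description": "Virtual machine traffic, VLAN tag 120",
        "MTU": "1500",          # int64 values are passed as strings
        "other_config": {},
    })
    # Attach the network to a physical interface with a VLAN tag.
    # Simplification: pick the PIF carrying the trunk in a real script,
    # e.g. by filtering PIF.get_all_records() on host and device.
    pif = session.xenapi.PIF.get_all()[0]
    session.xenapi.VLAN.create(pif, "120", net)      # "120" is an example tag
finally:
    session.xenapi.session.logout()
```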

From the Architect

Organizations that already have a virtualization platform likely have the expertise needed for segregating VLANs, tagging and trunking traffic, and addressing IPs. Architects should confirm whether the organization currently leverages NIC teaming, traffic segmentation and VLAN trunking and if a process currently exists for fulfilling these network requirements. Architects also should confirm that the organization has adequate addressing space for the virtualized workloads and desktops.
NIC Performance and Redundancy
Because several virtual machines will share common physical NICs, NIC redundancy is critical to ensure the virtualization environment is fault tolerant and has sufficient bandwidth. NIC bonding can be implemented to provide redundancy and better performance. Citrix XenServer uses Source Load Balancing (SLB) to share load across bonded network interfaces.
XenServer provides the ability to bond network adapters to provide redundancy. Deciding whether to bond or not may depend on how many different physical networks need to be supported for each resource pool. While each XenServer host can be configured with six physical NIC bonds, the use of bonded interfaces will limit the overall number of physical networks that can be supported. Redundancy requirements also should extend to the physical network infrastructure. Bonded NICs should be diversely connected to external switches or within a blade chassis to eliminate the risk of single switch failure.
Bonds
In order to provide redundancy across network cards and better performance, NIC bonds can be created. Typically, two NICs will be bonded for redundancy and used purely for management traffic and high availability, while two or more of the remaining NICs will be bonded and used for virtual machine traffic. The physical network configuration will depend on whether rack or blade servers are implemented; however, servers ideally should have a minimum of four NICs. Below is an example of a typical NIC configuration.
Most blade servers do not support more than four NICs, whereas many rack servers can be upgraded with additional NICs.
                
| Physical NIC    | Host Network         | NIC Bonding | Speed/Duplex    | Device |
|-----------------|----------------------|-------------|-----------------|--------|
| Integrated NIC1 | XenServer Management | Yes         | 1Gb/Full-Duplex | eth0   |
| Integrated NIC1 | Virtual Machine VLAN | Yes         | 1Gb/Full-Duplex | eth1   |
| Integrated NIC2 | XenServer Management | Yes         | 1Gb/Full-Duplex | eth2   |
| Integrated NIC2 | Virtual Machine VLAN | Yes         | 1Gb/Full-Duplex | eth3   |
Based on this example, the following bonds will be created.
Bond0+2
The primary NIC bond will be used for all management, high availability and XenMotion traffic. The bond will be formed using port 0 from the first dual-port integrated NIC and port 0 from the second dual-port integrated NIC, which are recognized as eth0 and eth2 on each XenServer host.
Bond1+3
The second bond will be used by the various VLANs configured for virtual machine traffic. The bond will be formed using port 1 from the first dual-port integrated NIC and port 1 from the second dual-port integrated NIC, which are recognized as eth1 and eth3 on each XenServer host.
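The eth0+eth2 and eth1+eth3 bonds described above can also be created programmatically via the XenAPI Bond.create call. A sketch under assumptions: the address, credentials and device names are placeholders, and a real script would repeat this per host in the pool.

```python
import XenAPI

session = XenAPI.Session("https://xenserver-host")   # placeholder address
session.xenapi.login_with_password("root", "password")
try:
    # Map device names (eth0..eth3) to PIF references.
    # Simplification: a multi-host pool needs this filtered per host.
    pifs = {session.xenapi.PIF.get_device(p): p
            for p in session.xenapi.PIF.get_all()}

    # Network object that the management bond will back.
    mgmt_net = session.xenapi.network.create({
        "name_label": "Bond0+2 management",
        "name_description": "Management / HA / XenMotion traffic",
        "MTU": "1500",
        "other_config": {},
    })
    # Bond.create(network, member PIFs, MAC); an empty MAC string
    # lets XenServer choose the bond's MAC address.
    session.xenapi.Bond.create(mgmt_net, [pifs["eth0"], pifs["eth2"]], "")
finally:
    session.xenapi.session.logout()
```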

From the Architect

Each of the NICs in a bond should be diversely connected to different network switches within each blade enclosure, and each blade enclosure can be diversely connected to separate switches on the backbone. This ensures that no single failure of a physical NIC, blade enclosure switch or distribution switch will cause a connectivity failure for any XenServer host or virtual workload.
Internal and External Networks
When defining the network design, architects need to determine if the virtual machines need to communicate with resources outside the XenServer host. Virtual machines can be deployed within internal networks that are segregated from the physical network. Virtual machines on internal networks can only communicate with other virtual machines on the same XenServer host. This configuration often is used for lab environments. Alternatively, virtual machines can be deployed within external networks that are connected to the physical network. Virtual machines on external networks can communicate with other resources connected to the physical network.

Internal Network
The simplest network configuration is an internal network. No PIF is required as there is no logical connection to a physical network adapter and therefore no external connectivity. This network is only available to a single XenServer host and cannot be assigned to other pooled hosts. VMs which are connected to internal-only networks cannot be migrated between pooled hosts. VMs on a common XenServer can share this type of network in order to isolate specific traffic.





External Network
The creation of an external network which uses a physical adapter will create the required PIFs. For example, two VMs, each with its own VIF, are attached to the same virtual switch, which is logically connected to the PIF and therefore to the physical network adapter.



Bonded Interfaces
Two physical network interfaces can be connected to the same network segment to create a bond for resilience or additional throughput.



VLANs
Virtual switches are created, which bridge virtual machines to the physical NIC or NIC bond. VLANs can be defined to segment virtual machine traffic across the same physical NIC bond. VMs are logically connected to separate VLANs by the same physical network card.
Virtual Network Combinations
Depending on the needs of the virtual machines, architects can combine internal and external networks in order to achieve a configuration that meets the requirements of the design. In the following example, a XenServer host has three NICs, two of which are bonded. The bonded network is used by five virtual machines: three VMs attach to different VLANs, which leverage the bond for resilience, and the other two VMs attach directly to the bond itself. A third NIC provides a non-resilient network, which is used by a sixth virtual machine.



Storage Considerations
Storage is a critical component within the virtualization environment, and architects should be knowledgeable about the performance, redundancy, capacity and cost of the storage repository types supported by Citrix XenServer. The performance and high availability requirements of each virtual workload typically dictate the type of storage required. Organizations with enterprise storage likely have a SAN team responsible for managing shared storage requirements. Architects may need to engage the SAN team to:
  • Identify the storage type and disk array type used for each storage repository. Disk arrays determine the performance and redundancy of the storage solution. Typically RAID1 or RAID5 arrays are used.
  • Determine the number of virtual machines sharing a common storage repository.
  • Ensure the design provides the expected levels of redundancy across all storage components.
Each virtual workload may have its own storage capacity and performance requirements, which dictate the category of storage which should be used. For example, a resource pool may contain several storage repositories based upon FC-SAN, NFS and iSCSI, and each workload will be assigned the most appropriate type. Some workloads may require storage that provides high performance while others require high capacity but no redundancy. XenServer allows individual virtual workloads to be assigned a ceiling for disk I/O throughput. This should be considered in scenarios where lower cost, lower performance storage is used, or where specific workloads are likely to cause a disk I/O bottleneck. Architects should convey the disk requirements to the SAN engineer, who will create LUNs in the configuration needed.
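XenServer exposes the per-workload disk I/O ceiling mentioned above as QoS settings on each virtual block device (VBD). A hedged sketch via the XenAPI follows; the VM name is hypothetical, and the ionice scheduler and priority values shown are assumptions that vary by XenServer version, so the admin guide for the deployed release should be checked.

```python
import XenAPI

session = XenAPI.Session("https://xenserver-host")   # placeholder address
session.xenapi.login_with_password("root", "password")
try:
    # Hypothetical VM whose disk I/O should be deprioritized.
    vm = session.xenapi.VM.get_by_name_label("office-desktop-001")[0]
    for vbd in session.xenapi.VM.get_VBDs(vm):
        if session.xenapi.VBD.get_type(vbd) == "Disk":
            # Apply the ionice QoS algorithm to this virtual disk.
            session.xenapi.VBD.set_qos_algorithm_type(vbd, "ionice")
            # sched/class values are version-dependent assumptions.
            session.xenapi.VBD.set_qos_algorithm_params(
                vbd, {"sched": "best-effort", "class": "4"})
finally:
    session.xenapi.session.logout()
```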
Storage Types
Architects should be familiar with the various local and shared storage options supported by XenServer. The following table provides the advantages and disadvantages of each.
| Storage Type | Description | Advantages | Disadvantages |
|---|---|---|---|
| Local (lvm) | Logical Volume Manager for local disk storage; includes locally attached USB external drives | High performance; ease of use; low cost; dynamic virtual disk image resizing support | Cannot be used as shared storage as it only supports a single host |
| Local (ext) | Local disk storage formatted using the EXT3 file system | Ease of use; low cost; VHD file format; thin provisioning support; fast cloning support | Cannot be used as shared storage as it only supports a single host; must be managed from the CLI |
| Local (udev) | Locally attached CD-ROM or USB hard drives | Portability; backup of virtual disk images and metadata using USB storage | Lower performance, as USB-attached drives rarely perform as well as internal drives; must be managed from the CLI |
| Software iSCSI (lvmoiscsi) | iSCSI handled by onboard NICs | Low-cost shared storage features | Potentially lower performance and scalability; higher impact on the host CPU; uses the network, which limits its performance |
| NFS | NFS over UDP or TCP, handled by onboard NICs | Low-cost shared storage features; thin provisioning and fast cloning support; VHD file format | Generally lower performance than SAN, depending on the configuration; higher impact on the host CPU |
| Hardware iSCSI (lvmohba) | iSCSI over TCP/IP handled by dedicated host bus adapters with TCP/IP offload capabilities | Generally better performance than software iSCSI; dynamic virtual disk image resizing support | Higher cost; requires dedicated iSCSI HBAs; uses the network, which limits its performance |
| Fibre Channel (lvmohba) | SAN-attached storage LUNs | High performance and scalability; dynamic virtual disk image resizing support | Higher cost; requires access to an existing SAN infrastructure |
Citrix XenServer natively supports thin provisioning, dynamic virtual disk image resizing, native snapshot functionality and fast cloning for NetApp and Dell EqualLogic storage solutions. However, Citrix StorageLink can be used to abstract storage functionality from XenServer and extend it to storage solutions from other vendors such as HP, Hitachi and EMC.
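As one concrete example of attaching shared storage, an NFS storage repository can be created through the XenAPI SR.create call. A sketch under assumptions: the NFS server name and export path are placeholders, and the device_config keys shown ("server", "serverpath") are those used by the NFS SR driver on XenServer releases contemporary with this article.

```python
import XenAPI

session = XenAPI.Session("https://xenserver-host")   # placeholder address
session.xenapi.login_with_password("root", "password")
try:
    host = session.xenapi.host.get_all()[0]          # any host in the pool
    sr = session.xenapi.SR.create(
        host,
        {"server": "nfs-filer.example.com",          # placeholder NFS server
         "serverpath": "/export/xenserver"},         # placeholder export path
        "0",                        # physical_size: 0 lets the driver size it
        "NFS SR",                   # name_label
        "Shared NFS storage repository",
        "nfs",                      # SR driver type
        "",                         # content_type
        True,                       # shared across the pool
        {})                         # sm_config
finally:
    session.xenapi.session.logout()
```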


Storage Comparison
Each storage type has its advantages and disadvantages, and architects should determine the type of storage to be used based on the organization's performance, redundancy and cost requirements. While Fibre Channel and NFS storage solutions generally can provide high performance, it ultimately depends on the overall configuration. For each type of storage repository, architects should understand the protocol, underlying disk RAID levels and partitioning schemes to ensure the solution provides the right balance between cost, performance and flexibility.



Capacity and Storage Repositories
The redundancy, performance and space requirements of the virtual machines will dictate the overall storage capacity required for the virtualization project. The design should account for both current and projected storage requirements so that appropriately sized storage repositories can be implemented. If possible, the design should avoid the need to undertake disruptive storage repartitioning in the short term.
Virtual machines leveraging Provisioning Services do not need virtual disk images to boot, but they do need a storage repository for the write cache.
The number of storage repositories required and the number of virtual disk images (VDIs) in each storage repository will affect performance and, subsequently, the user experience. Although many VDIs can be contained in a single storage repository, a balance must be found between ease of storage management and virtual machine performance. Architects must ensure that the design does not saturate the storage path, whether the design is for one large LUN containing all VDIs or several smaller LUNs with fewer VDIs in each LUN.


From the Architect

Storing too many VDIs in a single storage repository can cause contention issues among VDIs and increase I/O overhead from XenServer hosts communicating to storage queues. Although the number of VDIs contained in each repository can vary, a best practice guideline for architects is 16 VDIs in each storage repository. Proper testing should be conducted to find the ideal balance of performance and management.
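Applying the 16-VDI guideline is simple arithmetic. A quick sketch for the hospital scenario's 200 hosted desktops (50 doctors plus 150 office staff), under the simplifying assumption of one VDI per desktop:

```python
import math

vdis = 200          # 50 doctors + 150 office staff, one VDI each (assumed)
vdis_per_sr = 16    # best-practice guideline from above

# ceil(200 / 16) = 13 storage repositories as a starting point.
print(math.ceil(vdis / vdis_per_sr), "storage repositories")
```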


Resource Pools
Resource pools provide the backbone of every XenServer infrastructure and define how virtual machines will be segmented across XenServer hosts. Resource pools enforce a logical management boundary while providing a mechanism to organize virtual machines into logical groups that share common resource requirements or business processes.
Resource pools consist of XenServer hosts that share common storage and networking connectivity, which allows virtual machines to be dynamically moved across the hosts in each pool. XenServer currently has a recommended limit of 16 hosts for each resource pool.
When designing the resource pool configuration, architects should note that only XenServers with similar processor architectures and hardware components can be used safely in a common resource pool. During a XenMotion migration event, the source and destination XenServer host processors must be capable of executing the same instruction set to ensure that the virtual machine does not fail during the XenMotion process. Hosts with processors from different vendors, such as AMD and Intel, and processors with different core instruction sets, such as SSEx, VT and XD, cannot be mixed in the same resource pool. Architects should determine whether the organization has hardware that is similar enough to create the needed resource pools. If not, the resource pool design might have to be based on the available hardware sets.
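Joining compatible hosts into a pool can be scripted with the XenAPI pool.join call, executed against the host that is joining. A sketch with placeholder addresses and credentials; the CPU-compatibility rules above still apply, and the join fails on mismatched hardware.

```python
import XenAPI

# Connect to the host that should JOIN the pool, not to the master.
session = XenAPI.Session("https://new-host")         # placeholder joining host
session.xenapi.login_with_password("root", "password")
try:
    # Address and credentials of the existing pool master (placeholders).
    session.xenapi.pool.join("pool-master.example.com", "root", "master-password")
finally:
    session.xenapi.session.logout()
```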

From the Architect

A pool master is a designated XenServer host that handles management tasks on behalf of the pool. The pool master monitors the XenServer host resource usage and issues power commands to the virtual machines. The pool master should be dedicated or assigned a slightly lighter workload than other pool hosts to take into account this additional activity. Dedicating a pool master might not be necessary for server workloads such as XenApp, but is a best practice for XenDesktop hosted desktop pools.
Some organizations may even want to dedicate a pool master for each resource pool. In a resource pool of 16 XenServers, dedicating a pool master reduces the number of virtual machine hosts to 15, which subsequently reduces the hardware resources available for hosting virtual machines. Architects should note whether this might impact virtual machine capacity. To mitigate potential capacity and performance risks, architects can design resource pools with fewer than 16 XenServer hosts; however, this adds administrative overhead as more resource pools must be created and managed.


Virtualized Components
When designing resource pools for the Citrix virtualization environment, architects need to determine which infrastructure components should be virtualized. Because many of the components stress only one server resource, leveraging the server virtualization infrastructure layer increases utilization, allowing for greater scalability without the need for more server hardware. Furthermore, virtualization can improve availability and simplify management through XenMotion and high-availability functionality. Each virtual machine can be moved to another physical server without incurring downtime, which allows the organization to perform ongoing hardware maintenance before hardware failures occur. If a critical hardware failure does occur, XenServer will automatically restart virtual machines on another host server as needed.
Adding the virtualization layer underneath the operating system can slightly impact system performance depending on the component being virtualized. The following table provides the general recommendations for XenDesktop component virtualization. When designing the virtualization environment, architects should weigh the advantages and disadvantages of virtualizing these components based on the environment and design requirements.
Desktop Delivery Controller (Virtualize? Yes)
The primary Desktop Delivery Controller incurs a significant processor impact when hundreds or thousands of virtual desktops are started concurrently. Once the startup sequence is completed, overall processor utilization falls to negligible levels. Meanwhile, the secondary Desktop Delivery Controllers, which handle virtual desktop registrations, experience minimal utilization regardless of the time. Each of these functions occurs without user involvement or interaction, so any performance impact that virtualization adds to the controllers is not experienced by the user.

Provisioning Services (Virtualize? No)
If possible, Provisioning Services should be a physical server workload. Provisioning Services maximizes network I/O on the server. Virtualizing these servers on XenServer can result in a Dom0 bottleneck and lower network performance, as XenServer does not officially support link aggregation. These drawbacks can have a negative impact on the scalability of Provisioning Services.

XenApp (Virtualize? Yes/No)
Virtualizing XenApp servers is a trade-off between manageability and user density. Depending on the applications hosted, XenApp servers will maximize either processor or memory, but rarely both at the same time. Many XenApp servers are not fully utilized, which means that virtualizing them may have little impact on the overall hardware requirements for the environment. Virtualizing XenApp may reduce user density due to virtualization overhead; however, virtualization allows more 32-bit XenApp servers to run on each host, taking advantage of server hardware resources. Running several XenApp servers on robust physical hosts might mitigate any impact virtualization has on per-server user density.

Web Interface (Virtualize? Yes)
Web Interface servers typically are idle except during user connection surges, such as in the morning. Adding a layer of virtualization has little impact on the overall hardware requirements as these servers already are underutilized. In addition, XenMotion allows these servers to be dynamically moved from one physical server to another, allowing them to continue operating or to be restarted on another device, even if the previous server fails.

Zone Data Collector (Virtualize? Yes)
Zone Data Collectors are similar to Desktop Delivery Controllers in that they incur a significant processor impact when hundreds or thousands of users log onto XenApp concurrently. However, once the logon sequence is completed, overall processor utilization falls to negligible levels.

License Server (Virtualize? Yes)
The Citrix License Server is a single-threaded process which typically is underutilized.

Secure Gateway (Virtualize? Yes)
The processor typically is the only bottleneck; however, these servers usually are underutilized.
Access Gateway can be run as a virtualized workload with the Access Gateway VPX virtual appliance. For more information, see the citrix.com web site.

Ask the Architect

Should XenDesktop be implemented in a small environment and if so, how? To find out more, view the "Design XenDesktop for the Small Business" video on the www.citrix.com/tv web site.
XenDesktop Resource Pool Considerations
Virtualized XenDesktop infrastructure components must belong to a XenServer resource pool in order to utilize the high-availability options. The design decision is whether the resource pools will be created for a shared infrastructure or segregated infrastructure. The following table describes the advantages and disadvantages for each type.
| Type | Description | Advantages | Disadvantages |
|---|---|---|---|
| Shared | Each resource pool contains virtual servers for any XenDesktop component. | Server computing resources are more likely to be utilized equally. | High-availability configuration is more complex when many component types share each pool. |
| Segregated | Each resource pool contains virtual servers for only one or a few XenDesktop components. | Multi-resource pool designs with more high-availability options are easier to manage. | Server resources are not fully utilized, as a single component typically maximizes only one resource. |
Depending on the size of the XenDesktop environment, the decision between shared and segregated might be a non-issue. For smaller environments, a single, shared resource pool potentially can provide enough capacity for the entire XenDesktop environment. However, for larger implementations (5000+ hosted desktops), a shared pool model can add complexity given the number of resource pools that need to be created. Configuring high availability across all pools can be challenging especially if business requirements dictate that a single component is always available.

Ask the Architect

Does the environment require that certain virtual desktops always start from a specific set of host servers? If so, the number of resource pools needed may be affected. To find out more, view the "Allocate Virtual Desktops to Specific Hypervisors" video on the www.citrix.com/tv web site.
XenDesktop Resource Pool Best Practices
When designing XenDesktop resource pools, separating the hosted desktop resource pools from the infrastructure component resource pools is a best practice. This dictates two resource pools at a minimum:
  • One resource pool includes all virtualized XenDesktop components except the actual hosted desktops. This pool may allow the physical servers to be better utilized while simplifying high-availability configurations for the XenDesktop infrastructure components. The XenDesktop infrastructure components are able to reside within a single resource pool with plenty of extra capacity available, if required for other services.
  • A second resource pool includes only hosted desktops. For larger environments (5000+ hosted desktops), several pools will be required for the hosted desktops. Because the resource pool only contains hosted desktops, high availability and regular maintenance configurations are simplified.
Ideally, XenApp servers should be separated from the XenDesktop infrastructure resource pool and added to a third resource pool.


Virtual Machine Sizing
Determining the virtual processor, memory and disk requirements for virtual machines requires the proper balance of scalability and server resource utilization. Architects should base virtual machine sizing on the particular application workload the virtual machine will support. For example, a group of task workers might require virtual machines with 1GB RAM and 1 vCPU whereas power users might require Windows 7 virtual machines with 3GB RAM and 2 vCPUs.
However, true sizing and capacity planning requires performance and scalability testing. This testing is essential for validating any sizing assumptions. Architects should define the testing scope by the number of applications and the number of workflows performed within those applications. For example, a group of users whose primary application is SAP might perform several workflows within SAP that are resource intensive and can impact performance.
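As a starting point before that testing, per-host density can be estimated from the sizing figures above. A sketch under stated assumptions: the 32 GB host memory and 4 GB control-domain reservation are hypothetical numbers, and only scalability testing can validate the result.

```python
host_ram_gb = 32        # ASSUMED host memory
dom0_reserved_gb = 4    # ASSUMED control domain + hypervisor overhead
usable_gb = host_ram_gb - dom0_reserved_gb

# Per-VM memory from the text: task workers 1 GB, power users 3 GB.
profiles = {"task worker": 1, "power user": 3}
for name, vm_gb in profiles.items():
    # Memory-bound estimate only; CPU and disk I/O may bind first.
    print(f"{name}: ~{usable_gb // vm_gb} VMs per host")
```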

From the Architect

Virtual machine sizing should be based more on users' application sets and corresponding workflows than on their existing client device specifications or classification, such as task worker or power user. However, if sizing is going to be based on existing client device utilization, architects should be conservative and base virtual machine requirements on peak utilization instead of average utilization. If processor and memory resources on XenServer hosts are overcommitted and virtual machine utilization increases, performance and the user experience may suffer.
Architects should avoid providing specific sizing and capacity measurements without using performance and scalability testing. However, if the organization strongly desires using specific numbers based on assumptions instead of testing, architects must explain that the organization may learn later that it needs to buy more hardware, allocate more storage or create more resource pools. In all cases, conducting a phased rollout with pilot users is a best practice.


XenDesktop High-Availability Considerations
When virtualizing XenDesktop components, architects must consider the impact that a XenServer host failure has on the environment. The following XenServer virtual machine settings are recommended for virtualized XenDesktop component high availability:
  • Configure the protection level for the primary XenDesktop controller to "protect" and the secondary controllers to "restart if possible." This configuration guarantees that at least one controller is always available. Additional XenDesktop controllers are recommended for scalability and fault tolerance.
  • Configure the Web Interface servers the same as the XenDesktop controllers. It is important to guarantee that at least one Web Interface server is available to allow users to access desktops and applications. Additional Web Interface servers are recommended for scalability and fault tolerance.
  • Configure the XenDesktop data store and Citrix License Server to "protect." The data store is critical to the functionality of the XenDesktop farm and must be running to create new virtual desktop connections. The License Server is critical to the proper functionality of XenDesktop, Provisioning Services and XenApp.
  • Configure the hosted desktop resource pool to "do not restart." If a user's hosted desktop fails, the user can connect to another hosted desktop. While the user is connecting, the XenDesktop controllers start the appropriate number of hosted desktops based on the desktop group's idle settings. However, if a virtual desktop is assigned rather than pooled, the user must wait for that hosted desktop to start.
As many as four Provisioning Services servers can act in a high availability configuration.
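These protection levels map to the ha_restart_priority setting on each virtual machine. A hedged sketch via the XenAPI follows; the VM names are hypothetical, and the priority value strings are assumptions that vary between XenServer releases, so consult the documentation for the deployed version.

```python
import XenAPI

session = XenAPI.Session("https://xenserver-host")   # placeholder address
session.xenapi.login_with_password("root", "password")
try:
    # Example: fully protect the primary controller, best-effort restart
    # for a secondary. VM names are hypothetical.
    primary = session.xenapi.VM.get_by_name_label("ddc-primary")[0]
    secondary = session.xenapi.VM.get_by_name_label("ddc-secondary")[0]
    # Value strings ("restart" / "best-effort" / "") are version-dependent.
    session.xenapi.VM.set_ha_restart_priority(primary, "restart")
    session.xenapi.VM.set_ha_restart_priority(secondary, "best-effort")
finally:
    session.xenapi.session.logout()
```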


