Monday, June 30, 2014

Citrix Provisioning Services Architecture

In a XenServer-only implementation, each virtual machine has its own virtual disk files in a storage location. Organizations with a significant number of virtual machines therefore incur increased storage costs as well as maintenance costs.
Provisioning Services complements XenServer by providing simplified virtual machine management and reduced storage requirements. Provisioning Services allows an engineer to create one standard virtual machine image that is stored and updated. This standard image is streamed to server or desktop target devices. Resource mappings stored on a Provisioning Services server identify which virtual machine images are assigned to each target device. Workloads can easily be repurposed by updating a vDisk mapping and rebooting the target device.

Provisioning Services stores each target device's vDisk changes in a cache file. Storing only the changes for each target device that shares a common virtual machine image requires far less storage space than maintaining individual virtual machine files for every provisioned server or desktop.
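As a rough illustration of that storage saving, the sketch below compares the space required for individually provisioned virtual machines against a single shared vDisk plus per-device write caches. The disk and cache sizes are illustrative assumptions, not Citrix sizing guidance.

# Rough comparison of storage needed with and without Provisioning Services.
# All sizes are illustrative assumptions (GB), not Citrix sizing guidance.

def traditional_storage_gb(devices, disk_size_gb=40):
    # Every virtual machine keeps its own full virtual disk file.
    return devices * disk_size_gb

def pvs_storage_gb(devices, vdisk_size_gb=40, write_cache_gb=5):
    # One shared standard-image vDisk plus a small write cache per target device.
    return vdisk_size_gb + devices * write_cache_gb

if __name__ == "__main__":
    for devices in (50, 200, 500):
        print(devices, "devices:",
              traditional_storage_gb(devices), "GB traditional vs",
              pvs_storage_gb(devices), "GB with Provisioning Services")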



Traditional Disk Architecture


In a traditional disk architecture, input, such as keystrokes or mouse clicks, is passed to the server and stored in the server RAM. The processor requests instructions from the hard disk controller; these requests are then passed to the hard disk. The hard disk finds the requested data and passes it to the hard disk controller, which passes it back to the RAM. The RAM sends the data and instructions to the processor, where they are processed into output that is sent to the user in the form of screen updates.

Provisioning Services Disk Architecture
In a Provisioning Services architecture, input, such as keystrokes or mouse clicks, is passed to the target device and stored in the target device RAM. Unlike a traditional disk architecture, the request for instructions from the hard drive is sent to the NIC on the target device. The request is then sent to the hard disk controller, which in this case is the Provisioning Services server. The Provisioning Services server locates the appropriate hard disk for that target device on a storage device and forwards the data request. The correct data is found on the hard disk, sent back to the Provisioning Services server, and then sent back to the NIC on the target device. That data is written to the RAM and processed into output in the form of screen updates.






Provisioning Services Farm Layout
Although a Provisioning Services farm layout can vary based on individual organization requirements, a farm always consists of a grouping of Provisioning Services servers, target devices and vDisks that are connected to the same database and license server. Provisioning Services servers maintain constant database connectivity to stream vDisks to target devices; therefore the database should be located in physical proximity to the Provisioning Services servers to minimize latency and ensure optimal target device performance.
The database can be placed on a LAN or WAN as long as all Provisioning Services servers can access it. However, streaming vDisks to target devices requires a robust data link between all Provisioning Services servers and target devices in a farm, such as a LAN, to ensure optimal performance.




Traffic Isolation
When possible, vDisk streaming traffic should be isolated from normal production network traffic, such as Internet browsing, printing and file sharing. It is best to use multiple NICs—a single NIC for PXE traffic and bonded NICs for streaming the vDisks to target devices. This separation provides more consistent performance to the streamed operating systems and also prevents conflicts between the streaming traffic and the production network traffic.



Configuring NICs to Separate Stream Service from LAN Traffic
An engineer can use the following procedure to connect separate NICs to a PXE network and a LAN.
  1. Verify that the network interfaces are on isolated subnets so that the PXE and LAN traffic are routed separately and do not conflict with each other.
  2. Run the Provisioning Services Configuration Wizard to bind the Stream Service to the network interface that will be used for vDisk streaming.
  3. Specify the network interface that will be used for the PXE services from the Provisioning Services server
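As a quick sanity check for step 1, the sketch below uses Python's standard ipaddress module to confirm that the streaming (PXE) subnet and the LAN subnet do not overlap. The subnet values are placeholders for your own network design.

import ipaddress

# Placeholder subnets; replace with the streaming (PXE) and LAN subnets in your design.
streaming_subnet = ipaddress.ip_network("10.10.10.0/24")
lan_subnet = ipaddress.ip_network("192.168.1.0/24")

if streaming_subnet.overlaps(lan_subnet):
    print("Warning: streaming and LAN subnets overlap; traffic will not be isolated.")
else:
    print("Subnets are isolated; PXE/streaming and LAN traffic will be routed separately.")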
NIC Teaming
Network I/O on the Provisioning Services server can be a limiting factor in server scalability. Teaming two 1 Gbps NICs provides the server with a maximum of 2 Gbps of network I/O, which increases network performance and helps alleviate a potential bottleneck.
Teaming the NICs also eliminates the single point of failure that exists when only one NIC is available.
The following NIC teaming considerations should be taken into account:
  • Provisioning Services supports Broadcom and Intel NIC teaming drivers.
  • A target device will only fail over to Provisioning Services servers that are on the same subnet as the PXE boot server.
  • Load balancing is not supported in NIC teaming implementations; this must be taken into account for large-scale Provisioning Services implementations.
For a virtualized Provisioning Services server, it is recommended to create a NIC bond at the host server level to mitigate the load balancing limitation and decrease the complexity of the Provisioning Services configuration because the NIC teaming requirements do not have to be met on the virtual machine. 
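To put the 2 Gbps figure in context, a back-of-the-envelope estimate of how many target devices a server can stream to concurrently divides the usable NIC throughput by the bandwidth each device consumes. The per-device figures and utilization factor below are assumptions for illustration only; measure your own workload before sizing.

# Back-of-the-envelope estimate of concurrent target devices per Provisioning Services server.
# Per-device bandwidth values and the utilization factor are illustrative assumptions.

def max_target_devices(team_throughput_mbps, per_device_mbps, utilization=0.6):
    # Only a fraction of raw NIC throughput is usable for streaming in practice.
    usable_mbps = team_throughput_mbps * utilization
    return int(usable_mbps // per_device_mbps)

if __name__ == "__main__":
    team_mbps = 2 * 1000          # two teamed 1 Gbps NICs
    print("Boot storm (approx. 80 Mbps/device):",
          max_target_devices(team_mbps, 80), "devices")
    print("Steady state (approx. 4 Mbps/device):",
          max_target_devices(team_mbps, 4), "devices")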



Write Cache Location
A vDisk write cache can be stored in any of the following locations:
  • Target device RAM
  • Target device local storage
  • Target device shared storage
  • Provisioning Services local storage
  • Provisioning Services shared storage
The choice of write cache location impacts server and target device performance, server scalability and overall cost. The following descriptions provide benefits and considerations for each write cache location to help engineers determine the most appropriate location for their environment.
Target Device RAM
Selecting this write cache location sets aside a portion of the RAM on the target device for the write cache.
Benefits
  • Fastest type of write cache
Considerations
  • RAM is diverted from workload use
  • The cost is greater than using storage
  • Determining the amount of RAM required for the write cache is difficult, yet critical to the stability of the environment
  • The target device fails when the allocated write cache space reaches capacity
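Because the target device fails when the RAM cache fills, sizing the cache is the hard part. Below is a minimal sketch of the kind of estimate involved; the write rate, uptime and safety factor are assumptions for illustration, and real sizing should be based on measured write activity for the workload.

# Minimal RAM write cache estimate for a target device.
# Write rate, reboot interval and safety factor are illustrative assumptions, not Citrix guidance.

def ram_cache_estimate_mb(write_mb_per_hour, hours_between_reboots, safety_factor=1.5):
    # Everything written between reboots accumulates in the cache,
    # so add headroom on top of the expected write volume.
    return int(write_mb_per_hour * hours_between_reboots * safety_factor)

if __name__ == "__main__":
    # Example: a desktop writing ~100 MB/hour and rebooting daily.
    print(ram_cache_estimate_mb(100, 24), "MB of RAM cache suggested")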

Target Device Local Storage
Selecting this write cache location dedicates a portion of the target device local storage for the write cache. The local storage can be either a physical or virtual disk drive.
Benefits
  • Does not require additional resources if physical target devices already have local disks installed and unused
  • Provides fast response times because reads from and writes to the write cache are performed locally
  • Local storage typically provides more than enough space for the write cache, minimizing the risk of underestimating disk requirements
Considerations
  • Determining the size of the write cache is critical to prevent target device failure
  • Live migration is not possible on virtual target devices because the storage is not shared across virtual infrastructure servers
  • Not as fast as target device RAM cache
Target Device Shared Storage
Selecting this write cache location places the write cache on a shared storage device attached to the target device. This write cache type is usually only valid in environments that use virtual target devices, such as Citrix XenServer. The storage is assigned to each virtual machine from a shared storage repository.
Benefits
  • Response times are faster than with a write cache on the Provisioning Services server
  • Storage costs are significantly cheaper than RAM
  • Live migration is possible because the target device cache storage is accessible from multiple virtual machines
Considerations
  • Slower than target device RAM or local disk cache
  • Setup and configuration of a shared storage solution is required if one is not already in place
Provisioning Services Local Storage
Selecting this write cache location stores the write cache on the physical disks on the Provisioning Services server.
Benefits
  • Simplest option to set up
  • No additional resources or configuration within the environment required
  • Disk space is inexpensive
Considerations
  • Performance is the slowest because requests to and from the write cache traverse the network between the target device and the Provisioning Services server
  • Provisioning Services server scalability is reduced because the Stream Service must also service write cache requests
  • Provisioning Services HA is not possible because the write cache storage is not accessible by other Provisioning Services servers
  • The Provisioning Services server will fail if the local storage space is exceeded
Provisioning Services Shared Storage
Selecting this write cache location places the write cache on a shared storage solution that is connected to the Provisioning Services server.
Benefits
  • Provisioning Services HA is possible because all Provisioning Services servers attached to the shared storage can access the write cache
  • Shared storage devices typically hold a large amount of data, which mitigates storage size concerns
Considerations
  • Performance is reduced because requests to and from the write cache must traverse the network between the target device, the Provisioning Services server and the shared storage
  • Provisioning Services server scalability is reduced because the Stream Service must also service write cache requests
  • Setup and configuration of a shared storage solution is required if one is not already in place












XenApp Installation and Configuration

We are assuming that the virtual machines are hosted on Citrix XenServer. Each virtual machine is an independent system called a guest operating system. Citrix XenCenter allows you to connect to the XenServer environment and administer the guest operating systems.

This also applies to Hyper-V and VMware environments.

First, we need to create the XenApp farm SQL database.

2.     Start the virtual machines in the following order:
a.      ABC-DMC1
b.     FILER1
c.      ABC-SQL1
d.     ABC-PVSRV1
e.      ABC-PVSRV2
Wait until each virtual machine is completely powered on and at the authentication screen before powering on the next virtual machine.
7.     Log on as a domain administrator to the ABC-SQL1 virtual machine.
8.     Click Start > All Programs > Microsoft SQL Server 2005 > SQL Server Management Studio. The Connect to Server screen appears.
9.     Click Connect to connect to the ABC-SQL1 server.
10. Right-click ABC-SQL1 > Security and click New > Login. The Login - New screen appears.
11. Type ABC-_XA in the Login name field and click OK.
12. Right-click ABC-SQL1 > Databases and click New Database. The New Database screen appears.
13. Type XenApp-ABC- in the Database name field.
14. Click ... to select a database owner.
15. Type ABC-_XA in the Enter the object names to select pane and click Check Names.
16. Click OK. The Select Database Owner screen closes.
17. Click OK. The New Database screen closes.
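Before running the XenApp installation, it can be worth confirming that the new database accepts Windows-authenticated connections, mirroring the Test Data Source step performed later in the wizard. This is a minimal sketch assuming the third-party pyodbc package, the classic "SQL Server" ODBC driver, and the server and database names used above; run it under an account with access to the database, such as ABC-_XA.

# Optional sanity check: connect to the new data store database with Windows authentication.
# Assumes the third-party pyodbc package and the classic "SQL Server" ODBC driver are available.
import pyodbc

conn_str = (
    "DRIVER={SQL Server};"
    "SERVER=ABC-SQL1;"
    "DATABASE=XenApp-ABC-;"
    "Trusted_Connection=yes;"
)

with pyodbc.connect(conn_str, timeout=5) as conn:
    row = conn.cursor().execute("SELECT DB_NAME(), SUSER_SNAME()").fetchone()
    print("Connected to", row[0], "as", row[1])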
Install XenApp
Use the following procedure to install XenApp on the ABC-XAMaster virtual machine.
1.     Within the XenCenter console, start the ABC-XAMaster virtual machine and log on as a domain administrator.
Confirm that you are using Remote Desktop mode. If you are not using Remote Desktop mode the XenApp installation will take considerably longer to complete. For more information, see Citrix Knowledge Base Article CTX120429.
2.     Select XA_50.ISO from the DVD Drive drop-down list.
3.     Open My Computer.
4.     Right-click CD Drive (D:) and click Open.
5.     Double-click AUTORUN.EXE to launch the XenApp installation. The XenApp installation screen appears.
6.     Click Platinum Edition.
7.     Click Application Virtualization.
8.     Read and respond to the license agreement and click Next.
9.     Review the installation prerequisites and click Next.
10. Click Next to accept the default components.
11. Disregard the error screen and click OK.
12. Click Next in the Component Selection screen.
13. Click Next to accept the default passthrough authentication configuration.
14. Type http://ABC-WI1 in the Server URL field and click Next.
15. Click Next to accept the default license server configuration.
16. Read and respond to the Microsoft Visual C++ license agreement. Microsoft Visual C++ is installed.
17. Click Next in the Access Management Console Installation Welcome screen.
The Access Management Console Installation Welcome screen may appear minimized in the taskbar.
18. Click Next to install all components. The installation starts.
19. Click Finish to complete the component installation. The XenApp Plugin installation starts. When the Plugin installation completes, the Citrix XenApp 5.0 Installation screen appears.
20. Click Next in the Citrix XenApp 5.0 Installation screen.
21. Click Next to accept the default installation components and installation location.
22. Click Next to create a new farm.
23. Type ABC-_XA_Farm in the Farm name field.
24. Select Use the following database on a separate database server and select SQL Server from the drop-down list.
25. Accept the default zone name configuration and click Next. The Create a New Data Source to SQL Server screen appears.
26. Select ABC-SQL1 from the Server drop-down list and click Next.
27. Confirm that With Windows NT authentication using the network login ID is selected and click Next.
28. Select XenApp-ABC- from the Change the default database to drop-down list and click Next.
29. Click Finish.
30. Click Test Data Source to ensure the connection is configured properly. The Test results should return a success message.
31. Click OK in the Test Results screen.
32. Click OK to close the ODBC Microsoft SQL Server Setup screen.
33. Type ABC-_XA in the User Name field, type Password1 into the Password fields and click Next.
34. Click Next to accept the default farm administrator settings.
35. Click Next to accept the default IMA encryption configuration.
36. Type ABC-DMC1 in the Host name field and click Next.
37. Click Next to accept the default shadowing configuration.
38. Click Next to accept the default XML service port configuration.
39. Click Add the list of users from the Users group now.
40. Deselect Add anonymous users also and click Next.
41. Click Finish to begin the installation. The installation starts.
42. Deselect the View the Readme file option and click Close to complete the installation. The XenApp Advanced Configuration installation screen appears.
The XenApp Advanced Configuration installation screen may appear minimized in the taskbar.
43. Click Next in the XenApp Advanced Configuration installation welcome screen.
44. Click Next to accept the default destination folder.
45. Click Next to begin the installation.
46. Click Finish to complete the installation. The Citrix XenApp Document Library installation screen appears.
The Citrix XenApp Document Library installation screen may appear minimized in the taskbar.
47. Click Next in the Citrix XenApp Document Library installation welcome screen.
48. Click Next to accept the default destination folder.
49. Click Finish to complete the installation.
50. Confirm that all of the selected components installed successfully and click Finish to close the Citrix XenApp Components Installation screen.

51. Click Yes to restart the ABC-XAMaster virtual machine.

Saturday, June 28, 2014

Green Datacenter

Note: This blog was written with the help of my friend Rajanikanth Katikitala.


Green IT
•       Use servers with lower power demands. The lower the PUE, the more efficient the data center infrastructure (a worked example follows this list):
        Power Usage Effectiveness (PUE) = Total facility power / Total IT equipment power
•       Reduce floor space by moving to high-density blade servers
•       Use hot and cold aisle containment to avoid mixing hot air with cold air
•       Use rack or row cooling instead of traditional CRAC units
•       Use free cooling; retrofit an outside air supply where possible
•       Right-size the cooling to the IT load to avoid oversizing
•       Focus cooling on high-density areas
•       Distribute high-density racks throughout the layout to mitigate hot spots; use spot rack cooling as necessary
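As a worked example of the PUE formula above, the figures below are purely illustrative: a facility drawing 600 kW in total to run a 400 kW IT load has a PUE of 1.5, meaning every watt of IT work costs half a watt of power and cooling overhead.

# Worked PUE example; the kW figures are illustrative, not from a real facility.
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

print(pue(600, 400))   # 1.5 -> 0.5 W of power/cooling overhead per watt of IT load
print(pue(480, 400))   # 1.2 -> a considerably more efficient facility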




The Uptime Institute "tiers"
The Uptime Institute classifies data centers into "tiers". The following table is taken from the paper "Industry Tier Classifications Define Site Infrastructure Performance" by W. Pitt Turner IV and Kenneth Brill, both of the Uptime Institute.

Characteristic | Tier I | Tier II | Tier III | Tier IV
Number of delivery paths (power, A/C, etc.) | Only 1 | Only 1 | 1 active + 1 passive | 2 active
Redundant components | N | N+1 | N+1 | 2(N+1) or S+S
Support space to raised floor ratio | 20% | 30% | 80-90% | 100%
Initial watts/sq ft | 20-30 | 40-50 | 40-60 | 50-80
Ultimate watts/sq ft | 20-30 | 40-50 | 40-60 | 150+
Raised floor height | 12" | 18" | 30-36" | 30-36"
Floor loading (pounds/sq ft) | 85 | 100 | 150 | 150+
Utility voltage | 208, 480 | 208, 480 | 12-15 kV | 12-15 kV
Months to implement | 3 | 3-6 | 15-20 | 15-20
Year first deployed | 1965 | 1970 | 1985 | 1995
Construction $/sq ft raised floor | $450 | $600 | $900 | $1,100
Annual IT downtime due to site | 28.8 hrs | 22.0 hrs | 1.6 hrs | 0.4 hrs
Site availability | 99.671% | 99.749% | 99.982% | 99.995%
“Even a fault-tolerant and concurrently maintainable Tier IV site will not satisfy an IT requirement of “Five Nines” (99.999%) uptime. The best a Tier IV site can deliver over time is 99.995%, and this assumes a site outage occurs only as a result of a fire alarm or EPO (Emergency Power Off), and that such an event occurs no more than once every five years. Only the top 10 percent of Tier IV sites will achieve this level of performance.  Unless human activity issues are continually and rigorously addressed, at least one additional failure is likely over five years.  While the site outage is assumed to be instantaneously restored (which requires 24 x “forever” staffing), it can still require up to four hours for IT to recover information availability.
Tier IV’s 99.995% uptime is an average over five years.  An alternative calculation using the same underlying data is 100% uptime for four years and 99.954% for the year in which the downtime event occurs.  Higher levels of site uptime can be achieved by protecting against accidental activation or the real need for fire protection and EPOs. Preventatives include high sensitivity smoke detection, limiting fire load, signage, extensive training, staff certification, limiting the number of “outsiders” in critical spaces, and treating people well to increase pride in their work. All of these measures, if taken, can reduce the risk of failures.  Other solutions include placing the redundant parts of the IT computing infrastructure in different site infrastructure compartments so that a site infrastructure event cannot simultaneously affect all IT systems.  Another alternative is focusing special effort on business-critical and mission-critical applications so they do not require four hours to restore.  These operational issues can improve the availability offered by any data center, and are particularly important in a “Four Nines” Tier IV data center housing IT equipment requiring “Five Nines” availability.”
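The downtime and availability rows in the tier table are two views of the same number; the short sketch below converts the availability percentages into annual downtime hours (8,760 hours per year assumed, leap years ignored), including the "Five Nines" target discussed above.

# Convert site availability into annual downtime hours (8,760 hours/year, leap years ignored).
HOURS_PER_YEAR = 8760

def annual_downtime_hours(availability_percent):
    return (1 - availability_percent / 100) * HOURS_PER_YEAR

for tier, availability in [("Tier I", 99.671), ("Tier II", 99.749),
                           ("Tier III", 99.982), ("Tier IV", 99.995),
                           ("Five nines", 99.999)]:
    print(f"{tier}: {annual_downtime_hours(availability):.1f} hrs/year")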
Heating Ventilation and Air Conditioning standards
The Uptime Institute recommends hot and cold aisles. Instead of having the fronts of all racks facing one way, racks in raised-floor data centers should be installed with the fronts facing each other (the cold aisle) and the backs facing each other (the hot aisle).
ASHRAE "TG9 HDEC" is the industry's guiding reference document for data center environments in the US. According to it, the data center ambient temperature should be from 20°C (68°F) to 25°C (77°F), measured in the cold aisle 5 ft above the raised floor at the air intake of the operating equipment. The relative humidity should be from 40 to 55%. The future TIA-942 standard will be comparable to the ASHRAE standard and will call for a maximum rate of change of 5°C (9°F) per hour.
 "Operationally, err on the side of a slightly cooler (21-22°C / 70-72°F) room ambient, measured at an average of 3-5' off the floor in the center of the aisles. Experience has shown that prolonged room temperatures above 75-78°F (24-26°C) reduce HDD life by up to 50% and cause serious internal issues, as interior processor temperatures can exceed 40°C (105°F)."
If you are involved in writing a proposal that calls for data center temperatures, it is strongly recommended that SLAs state that the HVAC system will keep equipment within the temperature ranges specified by OEM equipment manufacturers, rather than the air temperatures mentioned above.  If a loss of city water supply causes a cooling emergency, you will bless that bit of advice.
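If a contract does end up referencing the ambient ranges quoted above, the monitoring check itself is simple. This is a minimal sketch that flags cold-aisle readings outside the 20-25°C and 40-55% relative humidity ranges cited in this post; in practice the thresholds should come from the OEM ranges the SLA actually commits to.

# Flag cold-aisle readings outside the ranges cited above (20-25 C, 40-55% RH).
# Thresholds should really come from the OEM ranges referenced in the SLA.

def check_reading(temp_c, rh_percent):
    issues = []
    if not 20 <= temp_c <= 25:
        issues.append(f"temperature {temp_c} C outside 20-25 C")
    if not 40 <= rh_percent <= 55:
        issues.append(f"humidity {rh_percent}% outside 40-55%")
    return issues or ["within the cited ranges"]

print(check_reading(22.5, 48))   # within range
print(check_reading(27.0, 35))   # both readings out of range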
Rack spacing
Based on studies that Jonathon Jew performed at 12 web hosting data centers, between 30% and 60% of the computer room space can be consumed by power and cooling equipment and circulation (compare with the Uptime table's entries for support space to raised floor ratio). He comments: "... a lot depended on the heat density of the computer room, the shape of the computer room, and the quantity and size of columns."
That reference to circulation concerns rack spacing and other passageways. The ASHRAE and ANSI/TIA recommendations for cabinet/rack spacing are to provide a minimum of 4 feet of clearance in the cold aisles in front of cabinets and 3 feet of clearance in the hot aisles behind cabinets. The Americans with Disabilities Act dictates a minimum of 3 feet for any aisles that wheelchairs need to traverse. Jonathon says: "For cabinets/racks with equipment with a depth no greater than 2 feet, you may be able to get by with rows of cabinets/racks spaced on 5-foot centers. However, the recommended spacing based on ASHRAE and ANSI/TIA is 7-foot centers."
If your customer has HP 42U racks (24" wide and close to 40" deep), with 4 feet of clearance for the cold aisle and 3 feet of clearance in the hot aisle, two racks occupy 164" x 24". That is 13.7 square feet per rack, including the hot and cold aisles but ignoring any other passageways.
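The 13.7 square feet figure works out as follows; the sketch below simply repeats the arithmetic in inches so the assumed rack depth, rack width and aisle clearances are easy to change.

# Reproduce the per-rack floor area arithmetic above (all dimensions in inches).
RACK_DEPTH = 40      # HP 42U rack, close to 40" deep
RACK_WIDTH = 24
COLD_AISLE = 48      # 4 ft of clearance in front
HOT_AISLE = 36       # 3 ft of clearance behind

# Two racks plus one cold aisle and one hot aisle, as in the example above.
pair_depth = COLD_AISLE + RACK_DEPTH + HOT_AISLE + RACK_DEPTH   # 164"
area_sq_ft = (pair_depth * RACK_WIDTH) / 144                    # footprint of the two racks
print(f"{area_sq_ft / 2:.1f} sq ft per rack")                   # ~13.7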
