Tuesday, April 28, 2015

Changing Cloud Market

In this post we will cover the changing face of the cloud market.

The public cloud market is getting crowded. Nowadays everyone wants to provide public cloud services to customers. The shake-up has already started.

HP has announced that it is getting out of the public cloud market. A wise decision.

Since the cloud market thrives on razor-thin margins, with each VM costing only cents, it is increasingly difficult for many vendors to survive in the public cloud. Only the big three remain as dominant players: Google, Amazon, and Microsoft.

Infrastructure as a Service (IaaS) now has a high entry barrier. With huge investments required and minimal returns, it is hard for any new player to join the market.


What should the other players do?

My thinking is that there are only a few places where the money is right now in the cloud.

1) Software as a Service (SaaS): Nowadays anyone can go to a public cloud, publish a COTS or custom application, and offer it as a service. SaaS has huge monetary benefits for software companies.

2) Application Transformation: There is a huge market in application transformation consulting. If an organization wants to move its applications to a public or hybrid cloud environment, it needs to make sure that those applications are cloud-enabled. Application re-architecting, re-design, and cloud enablement is a goldmine.

So the question is: would you like to provide cloud services? If yes, then be prepared to provide something unique.

Saturday, April 18, 2015

Hyper Converged System: EVO RAIL

EVO:RAIL combines VMware compute, networking, and storage resources into a hyper-converged infrastructure appliance to create an all-in-one solution. Getting started takes three steps:
1.         Configure your top-of-rack switch.
2.         Rack, cable, and power on EVO:RAIL.
3.         Connect a client workstation/laptop to the top-of-rack switch and point your browser to EVO:RAIL for configuration and management.



EVO:RAIL is normally accessed from a browser (Firefox/Chrome/IE) on a management workstation or laptop connected to the EVO:RAIL network. Open the Google Chrome browser on your desktop.


EVO:RAIL Configuration is the first thing you see after physical EVO:RAIL deployment.

1.         Click Proceed to the appliance address (unsafe) in the browser certificate warning. You will now see the EVO:RAIL configuration splash page.


Click the Customize Me! button to configure hostnames, networking, passwords, and global settings. All the fields have predefined values to make configuration quick and easy.
Configuration changes are automatically validated and saved when changing between fields or sections.

To customize EVO:RAIL, use the Hostnames tab to define a naming scheme for your ESXi hosts. The hostname consists of an ESXi hostname prefix, a Separator, an Iterator, and a Top-level domain. The Preview field shows the resulting name of the first ESXi host.
•          Enter an ESXi hostname prefix.
•          Select a Separator ("None" or a dash "-") and an Iterator (Alpha, Num X, or Num 0X).
•          Enter a Top-level domain name.
•          Enter a vCenter Server hostname. The top-level domain is automatically applied to the vCenter Server hostname.
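To make the naming scheme concrete, here is a small illustrative sketch (not EVO:RAIL's actual code) of how a prefix, separator, iterator, and top-level domain could expand into the ESXi hostnames the Preview field shows:

```python
# Illustrative only: expands a Hostnames-tab naming scheme into ESXi hostnames.
def expand_hostnames(prefix, separator, iterator, tld, count):
    """Generate 'count' ESXi hostnames from prefix + separator + iterator + TLD."""
    names = []
    for i in range(1, count + 1):
        if iterator == "Alpha":        # a, b, c, ...
            suffix = chr(ord("a") + i - 1)
        elif iterator == "Num 0X":     # zero-padded: 01, 02, ...
            suffix = f"{i:02d}"
        else:                          # "Num X": 1, 2, ...
            suffix = str(i)
        sep = "" if separator == "None" else separator
        names.append(f"{prefix}{sep}{suffix}.{tld}")
    return names

# Example: prefix "esxi-host", dash separator, "Num 0X" iterator
# yields esxi-host-01.vmworld.local, esxi-host-02.vmworld.local, ...
preview = expand_hostnames("esxi-host", "-", "Num 0X", "vmworld.local", 4)
```

The domain "vmworld.local" is just the lab example used later in this walkthrough; substitute your own top-level domain.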
Click Networking to view IP and/or VLAN details for each network type: ESXi Hosts, vSphere vMotion, Virtual SAN, vCenter Server, and VM Networks.

•          In ESXi Hosts, view the starting and ending addresses for the IP pool, the netmask, and the default gateway.

•          Click vMotion to view the starting and ending addresses for the IP pool, the vSphere vMotion VLAN ID, and the netmask.

•          Click Virtual SAN to view the starting and ending addresses for the IP pool, the Virtual SAN VLAN ID, and the netmask.

•          Click vCenter Server to see the IP addresses for EVO:RAIL management and vCenter Server. The netmask and default gateway are automatically copied from ESXi Hosts.

•          Click VM Networks to view the pre-configured virtual machine networks.



Click Passwords to see the predefined passwords for the ESXi hosts and vCenter Server. 
To optionally use Active Directory for authentication, enter the AD domain, AD username, and AD password.


Click Globals to set the time zone, logging, and any existing NTP, DNS, or Proxy servers on your network. Logging can be set to Log Insight or to an existing syslog server on your network.


Click the Validate button. EVO:RAIL quickly verifies the configuration data and checks for conflicts.
After validation is successful, click the Build Appliance button.

Click the Take me to it! button.
You will see a browser message that the site's security certificate is not trusted:
1.         Click Advanced
2.         Then click Proceed to the appliance address (unsafe). You will now see the EVO:RAIL build appliance page.


EVO:RAIL implements data services, creates the new ESXi hosts and a Virtual SAN datastore, and deploys the vCenter Server.

Simplicity Transformed: EVO:RAIL enables power-on to VM creation in minutes, radically easy VM deployment, one-click non-disruptive patches and upgrades, simplified management…you get the idea.
Software-Defined Building Block: EVO:RAIL is a scalable Software-Defined Data Center (SDDC) building block that delivers compute, networking, storage, and management to empower private and hybrid cloud, end-user computing, test/dev, and branch office environments.


When you see Hooray!, click the IP address to continue to EVO:RAIL Management.

In EVO:RAIL you will create virtual machines with only a few clicks to select the guest operating system, VM size, network segment, and security option.
After creating some VMs, here are some of the things you can explore in EVO:RAIL Management:
•          View VMs and manage lifecycle operations such as Clone or Rename.
•          Optionally access vSphere Web Client
•          Monitor EVO:RAIL cluster, appliance, and node health
•          Explore features such as logs, licenses, localization, updates, and tasks

After clicking away from the Hooray! page, you will see the login page for EVO:RAIL Management.
Log in and click the Authenticate button.
1.         Click the Create VM icon in the left sidebar to begin the virtual machine creation process.


1.         In the Create VM called field, enter a name for your virtual machine such as "VMworld VM 1".
2.         Then click the Upload Image button.
3.         In Upload ISO, click the Choose File button.



Click the Upload Image button.
EVO:RAIL simplifies virtual machine sizing by offering single-click small, medium, and large configurations optimized for each Guest OS type.
Select any size and then click the Select VM Size button.


Check one or more network segments (Development, Production, and/or Staging).
Then click the Select Networks button.
Without EVO:RAIL, customers must manually go through a long list of options to secure a Virtual Machine. EVO:RAIL streamlines this process with three pre-defined Risk Profiles to choose from at the time of VM provisioning.
These profiles are collections of Virtual Machine Advanced Settings based on a particular Risk Profile from the vSphere 5.5 Security Hardening Guide.
•          No Policy means that no security configuration options are applied to the VM.
•          Default Policy is Risk Profile 3, which specifies guidelines that should be implemented in all environments. These are VMware best practices for all data centers.
•          Basic Policy is Risk Profile 2, which specifies guidelines for more sensitive environments or small/medium/large enterprises that are subject to strict compliance rules.
•          Secure Policy is Risk Profile 1, which specifies guidelines for the highest security environments, such as top-secret government or military installations, or anyone with extremely sensitive data or in a highly regulated environment.
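Conceptually, each profile is just a bundle of VM advanced settings stamped onto the VM at provisioning time. The sketch below is illustrative only: the setting keys shown are representative examples of vSphere hardening options, not the definitive lists from the Hardening Guide, and the mapping is an assumption for illustration.

```python
# Conceptual model of risk profiles as bundles of VM advanced settings.
# Keys are representative hardening-guide-style options, NOT the exact lists
# EVO:RAIL applies; consult the vSphere 5.5 Security Hardening Guide for those.
RISK_PROFILES = {
    "No Policy":      {},  # no security configuration options applied
    "Default Policy": {    # Risk Profile 3: best practices for all data centers
        "isolation.tools.copy.disable": "true",
        "isolation.tools.paste.disable": "true",
    },
    "Basic Policy": {      # Risk Profile 2: stricter, for sensitive environments
        "isolation.tools.copy.disable": "true",
        "isolation.tools.paste.disable": "true",
        "isolation.tools.diskShrink.disable": "true",
    },
    "Secure Policy": {     # Risk Profile 1: strictest, highest-security environments
        "isolation.tools.copy.disable": "true",
        "isolation.tools.paste.disable": "true",
        "isolation.tools.diskShrink.disable": "true",
        "isolation.device.connectable.disable": "true",
    },
}

def settings_for(policy):
    """Return the advanced settings a given risk profile would apply to a new VM."""
    return RISK_PROFILES[policy]
```

Note how the profiles nest: each stricter profile is a superset of the one below it, which matches the idea of Risk Profile 1 being the most restrictive.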
Select one of the policies and then click Create and Start a New VM.
Wait a minute while EVO:RAIL creates and powers on your VM. You will be returned automatically to the VM dashboard.


Click the VMs icon in the left sidebar and you can see your new VM.

EVO:RAIL Management provides the following capabilities by clicking on the icons on the left side of the management page:
•          VMS - a dashboard to view and manage virtual machines
•          CREATE VM - starts the tool to create a VM
•          HEALTH - monitors the EVO:RAIL cluster, appliances, and nodes
•          CONFIG - accesses log collection, licensing, localization, and update features
•          TASKS - tracks system and user tasks
EVO:RAIL Management displays a dashboard containing all virtual machines. From this page, you can manage the VMs or arrange them with sorting and filtering. The icons at the bottom of each VM allow you to perform operations to manage the virtual machine.  
1.         Click the VMs icon in the left sidebar.
2.         Clone an existing VM by clicking the Clone button.
3.         Create or clone several VMs, then view them with Filter, Sort, and/or Search.
4.         If you are familiar with vSphere Web Client and you would like to use it to explore what EVO:RAIL has created, click the vSphere Web Client icon. The username is administrator@vsphere.local and the password is VMware1!
5.         In the Hands-On Lab, the VM cannot be properly installed and configured with an IP address, so the IP address is “IP Not Available”.

Click the Health icon in the left sidebar.
EVO:RAIL Management simplifies live compute management with health monitors for CPU, memory, storage, and VM usage for entire EVO:RAIL clusters, individual appliances, and individual nodes.
•          Cluster information: Click on OVERALL SYSTEM.
•          Appliance information: Click an appliance ("MAR12345604") either in the menu bar at the top of the window or in the list of EVO:RAIL Appliances below the Live Usage Statistics.
•          Node information: To view information about a specific node, click the appliance first and scroll down to see the nodes.


After you click on the appliance ("MAR12345604"), scroll down from "Live Usage Statistics" to see information about the four nodes in this appliance.


EVO:RAIL revolutionizes scale-out. Increasing compute, network, and storage resources is as easy as powering up a new appliance to join an existing EVO:RAIL cluster.
EVO:RAIL provides auto-discovery capabilities that use the RFC-recognized "Zero Network Configuration" protocol. New EVO:RAIL appliances advertise themselves on a network using the VMware "Loudmouth" service.
The first EVO:RAIL appliance in a cluster creates a new instance of vCenter Server, and additional EVO:RAIL appliances join that first instance. Thus, subsequent appliances in a cluster are built in considerably less time than the first EVO:RAIL appliance. In production environments, the first EVO:RAIL appliance is built in about 15 minutes and additional appliances are built in about 6 minutes each.
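The join behavior described above can be modeled in a few lines. This is an illustrative sketch, not VMware code: the class name and the ~15/~6 minute figures come straight from the paragraph above, everything else is assumed for illustration.

```python
# Illustrative model of EVO:RAIL scale-out: the first appliance in a cluster
# deploys a new vCenter Server (slow); later appliances join it (fast).
class EvoRailCluster:
    FIRST_BUILD_MIN = 15   # approx. build time for the first appliance (deploys vCenter)
    JOIN_BUILD_MIN = 6     # approx. build time for each additional appliance

    def __init__(self):
        self.vcenter = None
        self.appliances = []

    def add_appliance(self, appliance_id):
        """Add an appliance and return its approximate build time in minutes."""
        if self.vcenter is None:
            # First appliance: create a new vCenter Server instance.
            self.vcenter = f"vcenter-for-{appliance_id}"
            build_minutes = self.FIRST_BUILD_MIN
        else:
            # Subsequent appliances join the existing vCenter instance.
            build_minutes = self.JOIN_BUILD_MIN
        self.appliances.append(appliance_id)
        return build_minutes

cluster = EvoRailCluster()
first = cluster.add_appliance("MAR12345604")   # deploys vCenter, ~15 min
second = cluster.add_appliance("MAR45678904")  # joins existing vCenter, ~6 min
```

The appliance tags match the two appliances used later in this lab.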


Click the Add EVO:RAIL Appliance button.



The Networking Pools section shows the IP pools that were reserved for ESXi, vMotion and Virtual SAN during the configuration of the first EVO:RAIL appliance. In general, we recommend allocating 16 IP addresses for each pool to make adding new appliances really simple. On the right side of the page, you can see that EVO:RAIL validates that you have enough IP addresses for the new appliance; otherwise, you would need to add them on this page. Also, note that the new EVO:RAIL appliance tag, "MAR45678904", is displayed in the upper right corner.
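The pool-capacity check EVO:RAIL performs here is simple arithmetic. The sketch below is a hedged illustration of that arithmetic, not EVO:RAIL's internal validation logic; the four-nodes-per-appliance figure reflects the appliance described elsewhere in this post, and the pool addresses are made up for the example.

```python
import ipaddress

# Illustrative check: does an IP pool have enough free addresses for one more
# appliance? Each appliance holds four nodes, and each node consumes one
# address from each pool (ESXi, vMotion, Virtual SAN).
NODES_PER_APPLIANCE = 4

def pool_size(start, end):
    """Number of addresses in an inclusive IP range."""
    return int(ipaddress.ip_address(end)) - int(ipaddress.ip_address(start)) + 1

def can_add_appliance(start, end, addresses_in_use):
    """True if the pool still has at least one appliance's worth of addresses."""
    free = pool_size(start, end) - addresses_in_use
    return free >= NODES_PER_APPLIANCE

# A 16-address pool (the recommended allocation) with 4 addresses consumed by
# the first appliance leaves room for three more appliances.
print(can_add_appliance("192.168.10.1", "192.168.10.16", 4))  # True
```

This also shows why 16 addresses per pool is a convenient recommendation: it covers four appliances of four nodes each without re-planning.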
1.         All you have to enter are the passwords for the existing ESXi hosts and vCenter Server. Both passwords should be VMware1!
2.         Be sure to leave the password fields (tab or click out of them) so the values are validated.
3.         Click the Add EVO:RAIL Appliance button.


1.         Click the Health icon in the left sidebar. Confirm that the Overall Health is Healthy.

2.         Then click the first appliance, MAR12345604. Confirm that its status is Healthy.

3.         Then click the second appliance, MAR45678904. Confirm that its status is Healthy.


Using the vSphere Web Client, we can more closely investigate the status and configuration of the Virtual SAN datastore:

1. Click the Home tab in the center section, then click Hosts & Clusters.

2. Expand MARVIN-Datacenter and expand MARVIN-Virtual-SAN-Cluster. This shows that you now have eight ESXi hosts.

3. With the MARVIN-Virtual-SAN-Cluster selected, click the Manage tab in the center section.

4. Under Virtual SAN, select General.

5. Under Resources, confirm that all eight hosts and their SSD and data disks are shown and eligible, and that the network status is "Normal".


EVO:RAIL handles hardware failures easily. A single node can be replaced in the field without disruption to your workload.
In the event of a node failure, your Qualified EVO:RAIL Partner (QEP) will ship a replacement node with a unique identity stamped in the BIOS that matches that of the failed node. All you need to do is remove the failed node and replace it with the new EVO:RAIL node.
The EVO:RAIL engine detects the new node and alerts the operator to "re-add" it. EVO:RAIL cleans up the old ESXi host and configures a new ESXi host on the replacement node with all previous parameters.


Return to the vSphere Web Client by clicking the vSphere Web Client tab in your Chrome browser or by clicking the vSphere Web Client icon on the EVO:RAIL Management home page.
 To emulate the failure of an EVO:RAIL node, shut down an ESXi host in the vSphere Web Client. This triggers alerts in EVO:RAIL and the vCenter Web Client.
If you still have the vSphere Web Client tab open from previous exercises, you may be able to skip these 5 steps:
1.         From EVO:RAIL Management, click the VMs icon in the left sidebar.
2.         In the top right corner, click the vSphere Web Client icon.
3.         Log in to the vSphere Web Client; username: administrator@vsphere.local, password: VMware1!
4.         From the vSphere Web Client, click the Home tab, then click Hosts & Clusters.
5.         Expand MARVIN-Datacenter and expand MARVIN-Virtual-SAN-Cluster.
1.         Select the eighth ESXi host.
2.         Right-click to select Shut Down.
3.         Click OK.
After waiting a minute or two, the ESXi host will be powered off. You will see a red alert next to the selected ESXi host, confirming the failure in the vSphere Web Client.
1.         Return to the EVO:RAIL Management tab in your browser. You may need to refresh your browser.
2.         Click the Health icon in the left sidebar. You should see that the Overall Health is "Critical".
3.         Click the second appliance, MAR45678904.
Verify that the node (in this example, esx-node08.vmworld.local) has a red X icon indicating a problem.
All other nodes in the EVO:RAIL cluster will be marked with a "Warning" because the EVO:RAIL fault-tolerance policy allows for the failure of one full node.


Each EVO:RAIL appliance has a unique ID, and each node within the appliance has a unique number. The QEP will dispatch a replacement node with the exact same identity as the failed node. The failed node is removed and replaced, and the new node is powered on. Because it has the same identity as the original node, EVO:RAIL recognizes the new node and adds it back into the appliance. EVO:RAIL reconfigures the ESXi host on the new node with the original configuration (Networking, Passwords, Global Settings).
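The identity-matching step above can be sketched as follows. This is a simplified model with assumed names (the function, the node ID format, and the saved-config shape are illustrative, not EVO:RAIL internals):

```python
# Illustrative sketch of node replacement: the replacement node's BIOS identity
# must match a known failed node before its saved configuration is re-applied.
def try_readd(replacement_bios_id, failed_nodes, saved_configs):
    """Re-add a replacement node if its BIOS identity matches a failed node."""
    if replacement_bios_id not in failed_nodes:
        return None  # unknown identity: operator intervention required
    # Re-configure the new ESXi host with the original parameters.
    config = saved_configs[replacement_bios_id]
    return {"node": replacement_bios_id, "restored": config}

# Hypothetical failed node "04" in appliance MAR12345604, with its saved settings.
failed = {"MAR12345604-04"}
configs = {"MAR12345604-04": {"networking": "...", "passwords": "...", "globals": "..."}}
result = try_readd("MAR12345604-04", failed, configs)
```

A node with any other identity would return None, which mirrors why the QEP must stamp the replacement's BIOS with the failed node's exact identity.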
1.         On the left-hand side of the HOL interface you will see a shortcut area labelled "Consoles".
2.         When you click this icon, you will see an icon called "ESX-08A-REPLACEMENT". Click this icon.
3.         In the central pane, click the Power On icon to bring the replacement node online. Please wait for the ESXi host to complete the boot process.


When ESXi finishes booting, the Direct Console UI (DCUI) display (shown in the screenshot) will display the IP address.

Go back to the Control Center by clicking the CONTROLCENTER icon on the left-hand side of the HOL interface.

You can close the Consoles tab by clicking the 'X' in the top left corner.


1.         Return to EVO:RAIL Management in your Chrome browser.
2.         You will soon see that a new node was detected: look for a box in the bottom right-hand corner of the page.
3.         Click the Re-add Existing Node button.
4.         Enter your ESXi host and vCenter Server passwords (VMware1!), and then click the Re-Add Existing Node button.

Wednesday, April 15, 2015

Hyper-Converged Building Blocks in the Open Software-Defined Data Center

Hyper-convergence is an emerging buzzword in the IT infrastructure business. The approach creates integrated appliances that deliver faster time to value, increase operational efficiency, automate installation, scale non-disruptively, and offer a simplified, unified user experience. These innovative appliances extend the benefits of software-defined storage and converged storage technologies to customers of all sizes.

There’s no doubt that hyperconverged infrastructure products have become a compelling option not just for enterprise customers, but for cloud providers as well. Hyperconverged appliances offer a simple scale-out architecture that fits the needs of most shared virtualized environments. These highly engineered appliance-based products remove the complexity of large storage arrays.

Let’s face it: business demands are forcing IT organizations to dramatically change the way they run their business. We refer to this shift as the “New Style of IT.”

Keeping It Open and Simple

HP has been working together with VMware for many years to develop management stacks, converged infrastructure and converged systems that are integrated and optimized.

HP StoreVirtual Virtual Storage Appliance (VSA) is the industry’s leading software-defined storage, with over 1 million licenses shipped. The new ConvergedSystem 200-HC StoreVirtual combines VSA, HP OneView for converged management, and HP OneView for vCenter with robust VMware vSphere integration on ProLiant x86 servers. The new HP OneView InstantON wrapper enables deployment in less than 15 minutes. The HP CS 200-HC StoreVirtual system includes a rich set of standard data services, including:
·         Automated, sub-volume auto-tiering, thin provisioning and space reclamation provide capacity efficiency and lower storage costs without sacrificing performance.
·         Stretch cluster capabilities that keep applications online during appliance, rack-level or site-wide outages to address the need for business continuity.
·         Transparent multi-site high availability and workload mobility built on HP StoreVirtual technology certified by VMware as a vSphere Metro Storage Cluster solution and integrated with VMware vCenter Site Recovery Manager.

·         Data volumes that can be non-disruptively replicated or migrated from one HP CS 200-HC StoreVirtual system to another—or to any x86-based server running any major hypervisor and HP StoreVirtual VSA—using built-in HP Peer Motion to provide flexibility to meet changing business needs.

The HP hyper-converged appliance and StoreVirtual VSA
HP is also announcing the HP ConvergedSystem 200 for VMware EVO:RAIL. This new offering is built around a simplified and intuitive management stack developed by VMware that is installed on an HP ProLiant x86 server. The EVO:RAIL software bundle includes VMware vSphere, vCenter Server, vCenter Log Insight, Virtual SAN for storage, and EVO:RAIL deployment, configuration, and management. The EVO:RAIL bundle works with HP OneView for EVO:RAIL to allow end users to discover, monitor, and control the HP ConvergedSystem 200-HC from the EVO:RAIL user interface or from your vCenter web client. In the future, HP OneView will directly manage the EVO:RAIL system, unifying management of hyper-converged and traditional infrastructure deployments.

 HP ConvergedSystem 200 for VMware EVO:RAIL

How the Pieces Fit in an Open SDDC

With this announcement HP can now fill in more of the pieces of the Open SDDC framework that HP is developing with VMware as well as other partners. By adding these new hyper-converged systems to the HP ConvergedSystem portfolio, HP is providing more customer choice and flexibility. The figure below provides a conceptual overview of how the pieces developed by HP and VMware fit together. Many of the software components were co-developed. The HP hardware components can be managed with VMware software and/or HP software, such as VSA and HP software-defined networking (SDN) software. HP OneView is the key enabling technology that facilitates Infrastructure as a Service (IaaS). It serves as the converged management platform that provides IT administrative control for the HP converged infrastructure. HP OneView also serves as an automation hub that integrates with HP, VMware, and other management software through the REST API.

An overview of the open SDDC framework HP is developing with VMware

Scale Computing’s HC3 platforms are an interesting alternative to the EVO:RAIL design.  There is no standard for measuring VM workloads, but the three HC3 platforms scale up to 400 VMs.  Scale uses a customized version of KVM that is hidden from the user, and a block-level storage architecture as opposed to VSAN’s object-based approach.  Scale’s primary market for the HC3 product line is small- and medium-sized businesses with an eye fixed on simplicity.  While KVM may not have as many features as vSphere, Scale is banking on simplicity of operation along with aggressive pricing compared to the competition.
Nutanix is all over storage and storage features.  Not that the other vendors take storage lightly, but Nutanix has a feel and a premium price that put it in competition with VCE’s Vblock more than with some of the other products.  Nutanix built SAN features into virtual storage appliances that can run on top of any major hypervisor.  This decoupled SAN functionality means that Nutanix’s approach is compatible with vSphere, Hyper-V, and KVM.  Without much effort, an administrator can roll out one of these platforms within an hour of racking and stacking the equipment.
As mentioned, Nutanix is pretty serious about storage and storage features.  Some of these features are available in vSphere, while others are unique to Nutanix, which means that traditional features such as snapshot, clones and deduplication are available across all supported hypervisors. Nutanix is a web-scale solution that competes with the likes of Vblock and can scale to replace a large enterprise data center.

SimpliVity is similar to Nutanix in that its storage platform takes center stage.  In addition to offering a hardware platform, SimpliVity teamed with Cisco to offer an integrated solution with Cisco’s C-Series Rack servers.  This provides an interesting alternative to EVO:RAIL for existing Cisco UCS customers looking to not stray from the UCS platform. The SAN software is key to the company’s future aspirations.  SimpliVity offers many of the storage features you’d expect to see in an enterprise storage array and, like the other hardware appliance vendors, decouples the SAN from the hypervisor.  The decoupling will allow SimpliVity to support more than VMware in the future.
From a cost perspective, you can expect SimpliVity to compete with Nutanix.

Nimboxx is the last appliance provider we’ll examine.  Nimboxx's hypervisor is also a customized version of KVM and is hidden from the user.  Like the other appliance vendors, the storage is decoupled from the hypervisor.  However, Nimboxx isn’t looking to provide multiple hypervisor support.  The company is looking to provide a scale-out hyperconverged product to cloud providers.  Nimboxx’s approach is to provide a web-scale infrastructure without paying licensing and support costs for the hypervisor layer.  In addition to enterprises and cloud providers, Nimboxx has an application OEM offering.  Nimboxx’s application OEM offering targets application providers looking to deploy their VMs on pre-packaged hardware.
With the existing hardware vendors, VMware, and its seven OEM partners, VMware-centric options abound. The number of options surrounding VMware is to be expected given VMware’s market-leader position.  However, if you are looking to expand beyond or even leave the VMware ecosystem, there is no shortage of appliance options.  In addition, I fully expect HP and Cisco to each offer hyperconverged products based on their various data center products and partnerships in the near future.
