The solution
Working in close collaboration with the facilities management department, Data Networks agreed that the strongest and most scalable investment would be to implement two identical Software-Defined Data Centers sharing the workloads in an active/active cluster. Four identically provisioned and configured Dell PowerEdge servers were installed at each location, cutting the total server count by 13 (from 21 to eight). Across the deployment, the eight servers were configured with 384GB of memory each and approximately 167TB of flash storage in total, supported by two high-performance Dell S4128F-ON top-of-rack switches, a Dell S3048-ON switch, and two optical transceivers.
Data Networks deployed VMware NSX Data Center to deliver full network virtualization. A self-service portal lets administrators manage and control the services delivered to applications and application workloads without requiring changes to the university's Central IT networking services. Logical routing in NSX enables communication between workloads on different subnets: east-west (E-W) traffic is optimized by Distributed Logical Routers, while north-south (N-S) traffic is handled by NSX Edge Services Gateways. High availability is provided by two models, Active-Standby and ECMP, which in combination allow the facilities management department to easily build multiple logical topologies.
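To make the east-west/north-south split concrete, here is a minimal, purely illustrative Python sketch; it is not NSX code, and the subnets and addresses are hypothetical. Flows between workload subnets attached to the DLR stay east-west in the hypervisor, while anything bound for the outside world goes north-south through an ESG.

```python
# Toy model (not the NSX API): illustrates the routing split described above.
from ipaddress import ip_address, ip_network

# Hypothetical workload subnets attached to the Distributed Logical Router.
DLR_SUBNETS = [
    ip_network("10.10.1.0/24"),  # e.g. an application tier
    ip_network("10.10.2.0/24"),  # e.g. a database tier
]

def next_hop(src: str, dst: str) -> str:
    """Return which logical router forwards a flow in this toy model."""
    src_ip, dst_ip = ip_address(src), ip_address(dst)
    src_internal = any(src_ip in net for net in DLR_SUBNETS)
    dst_internal = any(dst_ip in net for net in DLR_SUBNETS)
    if src_internal and dst_internal:
        return "DLR (east-west, routed in the hypervisor kernel)"
    return "ESG (north-south, out to the physical/campus network)"

if __name__ == "__main__":
    print(next_hop("10.10.1.5", "10.10.2.9"))  # E-W: stays on the DLR
    print(next_hop("10.10.1.5", "8.8.8.8"))    # N-S: via the ESG
```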
The facilities management department had been using a flash storage array introduced to the market in 2010. Data Networks replaced it with VMware vSAN Enterprise, hosted on vSAN Ready Nodes on the PowerEdge servers, with 800GB of flash storage configured on each server. Key to that choice was vSAN's ability to scale to high capacity over time, which extended the life of the department's investment: only eight of 24 drive slots were in use, leaving two-thirds of the slots free and room to triple both capacity and cache.
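As a quick back-of-the-envelope check on that headroom claim (assuming future drives match the capacity of those already installed), the slot counts from the text work out as follows:

```python
# Headroom arithmetic using the slot counts quoted in the case study.
TOTAL_SLOTS = 24   # drive slots available
USED_SLOTS = 8     # slots populated in the initial build

free_slots = TOTAL_SLOTS - USED_SLOTS
growth_factor = TOTAL_SLOTS / USED_SLOTS  # assumes same-capacity drives

print(f"Free slots: {free_slots}")                      # 16
print(f"Maximum growth: {growth_factor:.0f}x current")  # 3x, i.e. 'triple'
```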
With the twin SDDCs in place, and even with a 62% reduction in the number of servers, the department still nearly doubled its storage, from 85TB to 167TB.
Mission accomplished
The department measures costs in two ways: the capital expense of the hardware and software, and the operational cost of keeping them up and running. In both areas, the SDDCs delivered dramatic improvements. With its old infrastructure, the department was looking at a CAPEX of $1,213,012 over the coming five years. With the SDDC in place, that five-year budget plummeted to $696,803, most of which was the initial investment in new components. The savings? $516,209, or 42%. On the operational side, the old infrastructure was projected to cost $112,762 for power, cooling, and floor space; with the SDDC, that projection fell 67% to $36,987.
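A quick sanity check of those percentages, using only the dollar figures quoted above:

```python
# Verify the five-year savings percentages from the case-study figures.
old_capex, new_capex = 1_213_012, 696_803
old_opex,  new_opex  =   112_762,  36_987

capex_saving = old_capex - new_capex  # $516,209
opex_saving  = old_opex  - new_opex   # $75,775

print(f"CAPEX saving: ${capex_saving:,} ({capex_saving / old_capex:.1%})")
print(f"OPEX saving:  ${opex_saving:,} ({opex_saving / old_opex:.1%})")
# CAPEX saving: $516,209 (42.6%)
# OPEX saving:  $75,775 (67.2%)
```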
Data Networks vetted and selected the colocation host, then mounted and built an identical configuration at the site. Despite the reduced number of servers in the two Software-Defined Data Centers, CPU performance increased by 40%. Equally important, the power demand for the two data centers over the next five years is projected to be cut by 68%.
The department had already invested significantly in VMware management. By keeping all components within the Dell and VMware families, the department leveraged the skills and tribal knowledge the organization had developed. That familiarity helped it avoid soft-cost overruns for training and transitions.
The colocated infrastructure made solid business continuity a reality: the active/active vSAN Stretched Cluster allows each site to act as the failover site for the other. And with NSX virtual networking, the workloads require no reconfiguration regardless of which site fails and which acts as failover.
The department can now replicate entire application environments between its data centers for disaster recovery. Failover and failback between the on-campus and remote sites happen without any administrative changes to the networking configuration and without having to call Central IT Networking Services.
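The practical effect is easiest to see in a toy model. The sketch below is illustrative Python, not vSphere or NSX code; the workload name, address, and segment ID are hypothetical. Because a workload's IP address and logical segment are stretched across both sites, failover changes only which site is active; nothing about the workload's network identity moves.

```python
# Toy illustration of why failover needs no network reconfiguration.
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    ip: str       # stays the same at either site
    segment: str  # logical segment, stretched across both sites

@dataclass
class StretchedCluster:
    sites: tuple = ("campus", "colo")
    active: str = "campus"
    workloads: list = field(default_factory=list)

    def failover(self) -> None:
        # Only the active site flips; every workload keeps its IP/segment.
        self.active = self.sites[1] if self.active == self.sites[0] else self.sites[0]

cluster = StretchedCluster(workloads=[Workload("bms-app", "10.10.1.5", "vni-5001")])
before = [(w.name, w.ip) for w in cluster.workloads]
cluster.failover()
after = [(w.name, w.ip) for w in cluster.workloads]
assert before == after  # no re-addressing on failover
print(f"Active site: {cluster.active}; workloads unchanged: {after}")
```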
Today, the department operates a streamlined, cost-contained, highly flexible infrastructure to support its users. Costs are lower. Performance is higher. Security and continuity are assured. And its infrastructure is ready to grow, with minimal investments, for many years to come.
To ensure that every component of the new architecture performed both individually and collectively, on its own and integrated with the university's Central IT, Data Networks consultants built the solution, brought it to an operational state, and conducted several realistic proofs of concept at the company's state-of-the-art Staging Center, which replicates client environments.
According to Green, “We’ve done a ‘white-glove delivery’ of this solution for Facilities Management. It was ready-to-go at our Staging Center, which mitigated much of the typical risk and made actual implementation at the university’s sites less complicated.”