About Us
About the Mitchell C. Hill SDC
The industry-wide emphasis on cloud computing has created a new focus in Information Systems (IS) education. As demand rises for graduates with solid knowledge and skills in cloud computing, IS educators face the challenge of integrating cloud technology into their curricula. Although public cloud tools and services are available to many students today, educational institutions can build a private, educational cloud to support more practical, interactive, and hands-on learning. This project builds a student-run data center through an industry partnership between Cal Poly Pomona and leading cloud technology firms such as Microsoft, Avanade, Chef, and Juniper. The data center, in conjunction with public cloud infrastructure, serves as a hybrid cloud that engages faculty and students in a highly accessible, experimental cloud environment where they can explore the design, configuration, deployment, management, and use of cloud solutions through real-world experience. This polytechnic approach to cloud curriculum integration also allows the IS department to operate as a simulated modern enterprise with the goal of virtualizing its IT provisioning, giving students a broader, more enterprise-centric view of modern computing.
The Mitchell C. Hill SDC Design
The data center includes computing, networking, and storage systems typical of those used in cloud data centers. The design process included participation from students, faculty, and industry partners to ensure the facility was designed with student and curricular needs in mind while also reflecting industry best practices.
The room that houses the data center is approximately 15' x 20' and has a data-center-grade air conditioning unit mounted in the ceiling. Aside from that ceiling-mounted unit, the only cooling comes from the building-wide system, which runs only when the building is open, so a failure of the in-room air conditioning will require shutting down the computers in the facility. The facility also has no generator backup, so on a loss of power the UPSs in the room simply give the computers enough time to shut down gracefully. Because of these size, cooling, and power constraints, the data center will run only curricular and research workloads. Projects that grow to need large-scale deployment or robust uptime and availability will need to migrate to public cloud infrastructure.
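As an illustration of the graceful-shutdown behavior described above, the following is a minimal sketch of a UPS watcher. It assumes a Network UPS Tools (NUT) setup with a UPS named "rack-ups" and a host that can run the standard shutdown command; the names, grace period, and the monitoring tooling actually used in the SDC are assumptions for illustration only.

#!/usr/bin/env python3
# Illustrative sketch: poll a NUT-managed UPS and halt the host when it is
# running on battery. "rack-ups" and the 5-minute grace period are assumptions.
import subprocess
import time

UPS_NAME = "rack-ups"      # hypothetical NUT identifier for the room UPS
POLL_SECONDS = 30          # how often to check the UPS status
GRACE_MINUTES = 5          # delay before halting, to let work finish

def on_battery() -> bool:
    # `upsc <ups> ups.status` prints the status flags, e.g. "OL" or "OB LB".
    result = subprocess.run(["upsc", UPS_NAME, "ups.status"],
                            capture_output=True, text=True, check=True)
    return "OB" in result.stdout.split()

def main() -> None:
    while True:
        if on_battery():
            # Building power is gone; use the remaining UPS runtime to halt cleanly.
            subprocess.run(["shutdown", "-h", f"+{GRACE_MINUTES}"], check=True)
            return
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    main()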
The data center is being built in phases and, when complete, will include 128 RUs of space for servers in addition to 112 RUs allocated to storage, power backup, cable management, and related equipment. Cal Poly Pomona is currently on the quarter system but is in the midst of a transition to semesters. As a result, the data center is divided into four quadrants, with three quadrants in use at any given time and the fourth under construction.
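For a rough sense of the resulting capacity, the arithmetic below assumes the rack units are split evenly across the four quadrants; the actual per-quadrant allocation may differ.

# Back-of-the-envelope check of the quadrant layout, assuming rack units
# are split evenly across the four quadrants (actual allocation may differ).
SERVER_RUS = 128       # rack units for servers when fully built out
SUPPORT_RUS = 112      # rack units for storage, power backup, cabling, etc.
QUADRANTS = 4
IN_PRODUCTION = 3      # one quadrant is always under (re)construction

server_per_quadrant = SERVER_RUS // QUADRANTS                  # 32 RUs
support_per_quadrant = SUPPORT_RUS // QUADRANTS                # 28 RUs
production_server_rus = server_per_quadrant * IN_PRODUCTION    # 96 RUs

print(f"Per quadrant: {server_per_quadrant} server RUs, {support_per_quadrant} support RUs")
print(f"Server RUs in production at any time: {production_server_rus}")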
In the first two years, one quadrant will be built each semester and added to the production pool. By year three, servers in the first quadrant will be removed and reconditioned or replaced; the following semester, the second quadrant will be rebuilt, and so on. This rotation ensures that a section of the data center is under construction every semester, which means every student will participate in building it. The continual reconstruction will also provide ongoing upgrades and improvements that keep the center aligned with industry best practices and with the evolving curricular and research needs of the university.
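The rotation can be summarized with a short sketch: quadrants are built in order during the first four semesters and then rebuilt in the same order, one per semester. The semester numbering here is illustrative only.

def quadrant_in_construction(semester: int, quadrants: int = 4) -> int:
    # Semesters are numbered from 1; quadrants rotate in order, one per semester.
    return (semester - 1) % quadrants + 1

for semester in range(1, 9):    # two full cycles: build, then rebuild
    phase = "build" if semester <= 4 else "rebuild"
    quadrant = quadrant_in_construction(semester)
    print(f"Semester {semester}: {phase} quadrant {quadrant}")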
Data center hardware will be homogeneous within each quadrant but heterogeneous across quadrants. This blend captures the efficiencies of homogeneous systems while retaining the flexibility and scalability of heterogeneous ones, and it offers the ability to pursue new design approaches each semester while maintaining operational integrity for existing systems.