Cloud infrastructure refers to the hardware and software components -- such as servers, storage, networking, and virtualization software -- that are needed to support the computing requirements of a cloud computing model.
Cloud infrastructure also includes an abstraction layer that virtualizes resources and logically presents them to users through application program interfaces and API-enabled command-line or graphical interfaces.
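The abstraction layer can be pictured as a thin API over a pool of physical capacity: users request and release generic instances without ever seeing the underlying hardware. The sketch below is illustrative only; the class and method names are assumptions, not any real cloud provider's API.

```python
# Minimal sketch of an abstraction layer: a hypothetical API that
# virtualizes a pool of capacity and presents it to users as generic
# "instances" (all names here are illustrative, not a real cloud API).

class InfrastructurePool:
    def __init__(self, capacity):
        self.capacity = capacity   # total virtual instances the pool can back
        self.allocated = {}        # instance id -> owner
        self._next_id = 0          # monotonic counter so ids are never reused

    def provision(self, owner):
        """Allocate a virtual instance without exposing physical hardware."""
        if len(self.allocated) >= self.capacity:
            raise RuntimeError("pool exhausted")
        self._next_id += 1
        instance_id = f"vm-{self._next_id}"
        self.allocated[instance_id] = owner
        return instance_id

    def deprovision(self, instance_id):
        """Release a virtual instance back to the pool."""
        self.allocated.pop(instance_id, None)

pool = InfrastructurePool(capacity=2)
vm = pool.provision("team-a")
print(vm)  # vm-1
```

Real providers expose the same provision/de-provision pattern through REST APIs, CLIs and consoles; the point is that the caller works with logical instances, never with physical servers.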
In a cloud computing architecture, cloud infrastructure refers to the back-end components -- the hardware elements found within most enterprise data centers. These include multi-socket, multicore servers, persistent storage and local area network equipment, such as switches and routers -- but on a much greater scale.
Major public cloud providers, such as Amazon Web Services (AWS) or Google Cloud Platform, offer services based on shared, multi-tenant servers. This model requires massive compute capacity to handle unpredictable changes in user demand and to balance that demand optimally across fewer servers. As a result, cloud infrastructure typically consists of high-density systems with shared power.
Virtualization is the key to sharing resources in a cloud environment, but no single resource or server can satisfy demand on its own. Resources, load balancing and applications must therefore be transparent, so that they can be scaled on demand.
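The load-balancing side of this can be sketched with a simple round-robin policy that spreads incoming requests across several virtualized servers. The server names below are placeholders chosen for illustration.

```python
from itertools import cycle

# Illustrative sketch: spreading requests across multiple virtualized
# servers with round-robin load balancing (server names are placeholders).

servers = ["server-1", "server-2", "server-3"]
rr = cycle(servers)  # endlessly repeats the server list in order

def route(request):
    """Return the server that should handle this request."""
    return next(rr)

assignments = [route(f"req-{i}") for i in range(6)]
print(assignments)
# ['server-1', 'server-2', 'server-3', 'server-1', 'server-2', 'server-3']
```

Production load balancers use richer policies (least connections, weighted, health-aware), but the principle is the same: the client sees one logical endpoint while demand is distributed transparently behind it.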
Scaling up an application delivery solution is not as easy as scaling up an application, because it involves configuration overhead or even re-architecting the network. The application delivery solution therefore needs to be scalable, which requires a virtual infrastructure in which resources can be provisioned and de-provisioned easily.
To achieve transparency and scalability, the application delivery solution must be capable of intelligent monitoring.
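Intelligent monitoring ties the pieces together: a monitor watches a load metric and decides how many instances to keep provisioned. A minimal sketch of that decision, assuming an illustrative metric of requests per second and an assumed per-instance target:

```python
import math

# Sketch of monitoring-driven elastic scaling: compute how many
# instances a monitor should keep running for the observed load.
# The metric (requests/sec) and the per-instance target are assumptions.

def desired_instances(total_load, target_per_instance, max_instances=10):
    """Return the instance count needed to keep each instance at or
    below its target load, capped by the pool's maximum size."""
    needed = max(1, math.ceil(total_load / target_per_instance))
    return min(needed, max_instances)

# Usage: 450 req/s observed, each instance sized for 100 req/s
print(desired_instances(450, 100))  # 5
```

A real autoscaler would smooth the metric over time and add cooldown periods to avoid flapping, but the core loop is this calculation followed by provision or de-provision calls against the virtual infrastructure.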
The mega data center in the cloud should be securely architected, and the control node, the entry point into the mega data center, also needs to be secured.