Source: Nutanix
While interoperability is crucial amongst these multiple cloud environments, the study also found that application mobility is top of mind for organizations. 91% of all companies surveyed said they had moved one or more applications to a new environment – primarily a hybrid multicloud infrastructure – in the last 12 months. Security, performance and control are some of the primary reasons for this shift.
“Enterprise applications and data need to be portable across varying public cloud environments, and also be interoperable with on-premises private clouds,” said Krista Macomber, senior analyst, data protection and multi-cloud data management at Evaluator Group.
Enterprises are more strategic about using IT infrastructure related to application performance and criticality. AI and ML are an integral part of most software today. As the machine does more and the human does less, companies are looking for dynamic infrastructure technologies that bring out the best in both.
In the process, they are moving away from a cloud vs. data center approach and focusing on being “cloud-smart” instead of “cloud-first.” This involves matching every workload to the right cloud environment based on various performance, cost and security considerations.
Serverless Computing
Serverless is a cloud-native application development and service delivery model that allows developers to build and run applications without provisioning or managing servers. The servers are still part of the underlying infrastructure, but they are fully abstracted away from the development life cycle.
The key difference between serverless and other cloud delivery models is that the provider manages app scaling as well as provisioning and maintaining the cloud infrastructure necessary for it. Developers need only package their code in ready-to-deploy containers. All application monitoring and maintenance tasks such as OS management, load balancing, security patching, logging and telemetry are offloaded to the cloud vendor.
Serverless takes pay-as-you-go to its logical conclusion – serverless apps automatically scale up and down with demand and are metered accordingly using an event-driven execution model. In other words, when a serverless function is not being used, it isn't costing the organization anything.
Contrast this with standard cloud computing service models, where clients purchase pre-defined units of application or infrastructure resources from the cloud vendor for a fixed monthly price. The burden of scaling the package up or down rests on the client – the infrastructure necessary to run an application is always active and billed for regardless of whether the app is being used.
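The billing contrast above can be made concrete with a back-of-the-envelope calculation. The sketch below compares a fixed monthly charge for always-on infrastructure against usage-metered serverless billing; all prices, the workload figures, and the `serverless_monthly_cost` helper are hypothetical illustrations, not any provider's actual rates.

```python
# Illustrative cost comparison: always-on infrastructure vs. usage-metered
# serverless billing. All prices and workload numbers are hypothetical.

FIXED_MONTHLY_COST = 300.00       # always-on VM, billed whether used or not
PRICE_PER_INVOCATION = 0.0000002  # hypothetical per-request charge
PRICE_PER_GB_SECOND = 0.0000166   # hypothetical charge per GB of memory per second

def serverless_monthly_cost(invocations, avg_duration_s, memory_gb):
    """Serverless bill scales with actual use; zero traffic costs zero."""
    compute = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    requests = invocations * PRICE_PER_INVOCATION
    return compute + requests

# A bursty workload: 2 million short invocations in a month
busy = serverless_monthly_cost(2_000_000, avg_duration_s=0.2, memory_gb=0.5)

# An idle month: no invocations, no charge -- unlike the fixed VM
idle = serverless_monthly_cost(0, avg_duration_s=0.2, memory_gb=0.5)

print(f"fixed VM:   ${FIXED_MONTHLY_COST:.2f}")   # billed even when idle
print(f"busy month: ${busy:.2f}")
print(f"idle month: ${idle:.2f}")
```

The point of the sketch is the shape of the curve, not the numbers: the fixed bill is flat regardless of traffic, while the serverless bill tracks demand and drops to zero when nothing runs.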
Within a serverless architecture, apps are launched only when needed – an event triggers the app code to run and the cloud provider dynamically allocates resources to the app. The client pays only while the code is executing.
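The event-driven flow described above can be sketched as a minimal function handler, loosely in the style of AWS Lambda. The event shape (`event["name"]`) and the local simulation at the bottom are illustrative assumptions rather than a specific provider's API.

```python
# Minimal sketch of a serverless function: the platform invokes handler()
# only when an event arrives, and execution is metered only while the
# function runs. The event shape below is an illustrative assumption.

import json

def handler(event, context=None):
    """Entry point the platform calls once per triggering event."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local simulation of an event trigger; in production the cloud provider
# delivers the event and tears resources back down after the response.
if __name__ == "__main__":
    print(handler({"name": "Nutanix"}))
```

No server, OS, or scaling logic appears anywhere in the code: the developer ships only the handler, and everything else on the list above (load balancing, patching, telemetry) is the provider's problem.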
It is possible to develop fully or partially serverless apps, according to workload requirements. Overall, serverless apps are leaner, faster and simpler to modify/modernize than their traditional counterparts because they deliver just the precise software functionality needed to complete a workload.
Artificial Intelligence (AI)
AI and the cloud have been evolving and developing in parallel over the course of the last decade. Now the two are so interleaved, it is hard to separate one from the other. From Instagram filters to Google searches, the most common tasks people perform dozens of times a day go to show how inextricably AI is woven into human life.
Google CEO Sundar Pichai described AI as having an effect “more profound than electricity or fire” on society. No surprise then, that the global AI software market is expected to be worth $850 billion by 2030.
It is the cloud that delivers these AI and ML models to humans and machines alike. Machine learning models and AI-based applications typically need high processing power and bandwidth. A hybrid, multicloud deployment can deliver this much better than other architectures.
Not only can AI algorithms understand language, sentiment, and other core aspects of business-consumer interaction, but they can also facilitate application development and train other AI programs.
Edge Computing
Edge computing has emerged as a perfect complement to IoT and ROBO workloads. Organizations increasingly want a distributed IT architecture that controls and coordinates devices and customer interactions at remote locations – while synchronizing them with data and applications across multiple cloud services and on-prem data centers.