01 Organizations of all shapes and sizes are making a conscious effort to embrace AI technology and solutions.
While 90% of respondents say AI is a priority to their organization, there is still a significant amount of work to be done. The nascent nature of this technology segment means there is a dearth of strategic best practices, established guardrails, or even reference architectures. Many organizations are still determining which IT environments are best to run different parts of their AI processes and workloads, or which type of AI application(s) have the most applicability to their industry or business.
02 Today, data security and governance are at the forefront of enterprise AI decision-making – not cost.
Among respondents, data security, data quality, scalability and speed of development were the top considerations related to running their AI workloads. Additionally, over 90% of respondents say that security and reliability are important considerations in their AI strategy. Respondents consistently indicated that data security and governance, including data quality and data protection, are of paramount importance to support AI technologies and services. This may create a windfall for related IT infrastructure markets – specifically for data storage, security, governance and protection – as new and growing AI technology budgets are allocated to these essential functions.
03 Enterprises are seeking accelerated AI deployment options – likely the result of acute skills shortages, combined with the need to maintain a competitive edge.
All organizations surveyed (100%) say they require additional AI skills to support related initiatives over the next 12 months. In the short term, many organizations are likely to encounter skill gaps and shortages in AI modeling and application development. This skills shortage, along with the need to optimize resources, is likely why the vast majority of enterprises (90%) plan to leverage existing generalized large language models, whether commercial or open source. These pre-trained models can be fine-tuned to support specific use cases, allowing organizations to maximize resources and accelerate time to market for new AI applications.
04 AI technology adoption will catalyze a new wave of IT infrastructure modernization, with an emphasis on seamless data mobility across core and edge environments.
Nearly all (99%) respondents say they plan to upgrade their AI applications or infrastructure, with more than half saying they need to improve the transfer of data between cloud, data center, and edge environments to support AI data initiatives. However, many are struggling to identify the most efficient path to modernizing their infrastructure to support AI workloads. Today, private, hybrid, and multicloud deployments are well established and synonymous with modern IT infrastructure workloads. AI technologies, and growing requirements for speed and scale, are likely to bring edge strategies and core infrastructure deployment to the forefront of IT modernization.
90% consider AI a priority
91% agree their IT infrastructure needs to be improved to support AI
83% plan to increase investment in edge strategy to support AI
The Future is Now
There is already a pervasive understanding among enterprises that AI solutions are important to their future. In fact, 90% of respondents agreed that their organization considers AI a priority. The question then becomes: how will these priorities manifest into real-world business initiatives and programs? What are the challenges/roadblocks associated with implementation? And perhaps most importantly, how are new AI programs and applications going to affect people, processes, and budgets?
Today, enterprises are primarily putting AI to work through generative video, text, and image applications, as well as virtual assistant and customer support solutions (Figure 1). AI-based solutions for fraud detection and cybersecurity, as well as a range of image recognition, speech recognition, and computer vision applications, are also high on the list of current and planned use cases.
Generative AI, including video, text, and images
Virtual assistants and customer support (i.e., chatbots)
Fraud detection and cybersecurity
Image recognition and computer vision
Speech recognition and natural language understanding
Recommendation systems
Large Language Models (LLMs)
Natural language processing
Autonomous systems
Voice assistants
Gaming and recreational applications
Healthcare and medical diagnosis
Facial recognition
Figure 1: AI apps/workloads deployed today or planned to deploy in the next 12 months
In terms of deployment environments, respondents indicated that AI inferencing processes are primarily run in private cloud and/or edge locations (Figure 2). This is likely driven by data locality and regulatory requirements of the data being analyzed, in addition to performance and latency requirements. For example, for organizations leveraging sensitive, proprietary, or personally identifiable data, these dedicated infrastructure environments can provide more control.
On-premises data center or private cloud
Managed data center or private cloud
Edge/remote sites
Single public cloud
Multiple public clouds
Figure 2: In which environment(s) does your organization run, or plan to run, AI inferencing workloads?
When it comes to frequency of fine-tuning their AI models, 60% of organizations say they plan to update their AI models on a monthly or quarterly basis (Figure 3). If we imagine a spectrum of enterprise data accessibility spanning from “hot” to “cold,” one might argue that quarterly-to-monthly frequency of access fits close to the middle of that spectrum (i.e., “warm”). The survey results indicate the majority of organizations are going to require a relatively consistent level of access to their AI models and datasets.
Monthly or more frequently
Quarterly
Bi-annually (twice a year)
Annually or less frequently
We don’t have a plan for how often models will be updated
Figure 3: How often does your organization expect to update its AI models?
Finally, in an effort to develop an AI technology baseline, we asked respondents about the compute environment on which they deploy applications: 63% of organizations say they currently deploy AI applications on virtual machines (VMs), while 62% say they currently deploy AI applications on containers (Figure 4). This relatively even distribution of AI application deployment across compute environments is impressive and is a good indication of the wide applicability of AI technology across all facets of enterprise IT environments – both virtualized and cloud-native.
We currently deploy AI applications here
We plan to deploy AI applications here
We don't plan to deploy AI applications here
Virtual Machines
Containers
Figure 4: Do you currently deploy or plan to deploy AI applications in the following environments?
Source: Nutanix ECI AI Report
A Growing Need for IT Modernization and Seamless Data Mobility
AI applications and workloads require unfettered access to data in order to produce meaningful results. This means that beneath every successful AI-driven application or process is an equally important and cohesive data strategy that ensures these processes and workflows maintain secure, performant access to data. Our survey results show that expanding data requirements of AI applications and workloads are reshaping the way enterprises think about their data infrastructure in two key ways:
01 Meeting AI workload demands requires long-term investment in IT infrastructure modernization.
Most respondents (91%) agree their organization's IT infrastructure needs to be improved to more easily support and scale AI workloads. Key challenges for enterprises today include data security, resilience and scalability (Figure 5). Luckily, in many cases there will be budgeted investment to meet this need: 85% of respondents say their organization plans to increase investments in IT infrastructure modernization over the next 1-3 years to support AI workloads.
02 Meeting AI workload demands requires seamless data mobility across data center, cloud, and edge environments.
For many organizations, edge infrastructure deployment and strategy has remained a secondary IT initiative due to the related costs and complexity. By contrast, many organizations have already implemented hybrid and/or multicloud IT architectures, but these don't always include a distributed or “edge” component. AI technology implementation may significantly change this dynamic, as organizations are likely to train models in the public cloud, fine-tune them in their private cloud, and then deploy them at the edge, close to where they are needed to act on data. This elevates the need to move and protect data seamlessly among data center, cloud, and edge locations. Our survey indicates that 93% of respondents agree that developing an edge strategy to support AI plans is important to their program's success. And once again, respondents indicated a willingness to spend on edge initiatives that support AI: 83% of respondents say their organization plans to increase investments in edge strategy over the next 1-3 years to support AI workloads.
Overall, these findings present a positive picture for early adopters. Enterprises understand the need for IT infrastructure modernization and the importance of including edge deployments as a central tenet of that strategy, all to support AI. Perhaps most importantly, enterprises are willing to invest in making this a reality.
Given that enterprises acknowledge the need for infrastructure modernization to support AI, and are willing to allocate dollars to do so, we can conclude that the challenges will mostly lie in effective design and implementation. The survey results provide further insight into the specific areas driving AI infrastructure and application upgrades from a design perspective. Improving data security, infrastructure resilience and uptime, infrastructure management at scale, and infrastructure automation will be top priorities to support AI applications (Figure 5).
Data security
Increased technology resilience and uptime
Improving management at scale
Reduce workload on staff through automation
GPU (graphics processing unit) support
Data insights
Cost
Figure 5: Key drivers of AI application and infrastructure upgrades
Strategic Investment In AI Talent to Accelerate Time to Market
Organizations plan to increase investments across all major areas that support AI solution development and deployment over the next 1-3 years, including the people and skills needed to deploy and support these new technologies and services: 84% of respondents said they plan to increase investments to expand their data science and engineering teams over the coming years. Many organizations seem to be aware of the need to invest in the people and skills needed to support AI initiatives. In fact, generative AI and prompt engineering, and data science/data analytics were identified as the top two areas in need of more AI skills over the next year (Figure 6).
Generative AI and prompt engineering
Data science/ data analytics
Environmental, Social, and Governance (ESG) reporting
Development operations
Research and development
Product development
Logistics/ supply chain planning
Strategic decision making
Platform engineering
Line of business
Figure 6: Which of the following organizational areas do you believe will need more AI skills over the next 12 months?
Survey respondents ranked ESG reporting as a key area requiring AI skills development over the next 12 months. This choice was ranked higher than others, including R&D and product development, which is somewhat surprising.
However, the reality is that sustainability and ESG initiatives have been elevated to the top of many enterprise and executive priority lists over the past 12-24 months. There are several reasons for this rise in popularity, including the need to optimize IT operations and costs during a period of economic uncertainty and inflation. But more importantly, many organizations are being driven to improve their sustainability reporting and ESG measurement capabilities by new regulatory and compliance requirements.
One important example is the United States Securities and Exchange Commission (SEC) rule proposal that would require publicly listed organizations to disclose information about their direct and indirect greenhouse gas emissions, including Scope 1, 2, and 3 emissions reporting.
This is likely a key reason why we see the need for AI-related ESG reporting ranked so high on our list of results. Enterprises understand the massive amounts of energy required to run compute- and GPU-hungry AI algorithms and workloads. The survey results indicate that enterprises are already considering the impact this will have on their ESG initiatives and any associated emissions reporting.
Identifying and remediating skills shortages is a constant challenge when it comes to emerging technologies – and usually something to be expected. However, what is often more difficult to predict is the way in which these people and skills shortages impact the direction of an emerging technology market. In the case of AI solutions and services, our survey uncovered some interesting results. We found that 85% of respondents plan to purchase existing AI models or leverage existing open-source AI models in order to build their AI applications. By contrast, only 10% of respondents indicated they plan to build their own models (Figure 7). The high prevalence of organizations planning to leverage existing pre-trained large language models likely reflects a need to optimize available resources: existing models can be fine-tuned with an organization's proprietary data to best support its needs.
We plan to purchase existing models to build AI applications
We plan to leverage open-source models to build AI applications
We plan to build our own models to build AI applications
We plan to use a combination
Figure 7: Is your organization currently or planning to leverage existing AI models or build your own?
*Note: totals do not sum to 100% due to rounding.
We believe these findings inform an important hypothesis: in the context of “building vs. buying” IT solutions, enterprises will buy or source existing models for their AI application development needs from trusted providers, rather than develop their own. This trend seems likely to occur for two key reasons: (1) organizations want to speed time to market of new AI applications and maintain a competitive edge, and (2) organizations will look for ways to maximize existing resources.
Furthermore, we expect this trend to impact how enterprises make AI infrastructure investments and decisions. The infrastructure required to create and train AI models (e.g., large language models) has vastly different performance characteristics compared to the infrastructure required for inferencing, fine-tuning, and running higher-level applications that depend on these models. When we asked respondents to rank what they believe their greatest challenges will be with AI over the next 1-2 years, data modeling was ranked 5th – behind data security, cost-effective infrastructure delivery, ensuring mission-critical resiliency, and managing workload scale. Our survey data indicates that many enterprises will focus on the infrastructure needed to support AI model implementation, likely in order to maximize return on investment.
Data Security, Data Quality, and Data Governance
AI applications and services have a symbiotic relationship with their underlying datasets, models, and infrastructure. Enterprises are acutely aware of this relationship, and of the importance of developing data security and quality strategies to make their AI technology as reliable and resilient as possible. This trend is made clear in our survey results, where respondents ranked data security and data quality as their #1 and #2 most important considerations when running or planning to run AI workloads – by a significant margin over the other choices in the response set (Figure 8).
Data security
Data quality
Scalability
Speed of deployment
Data sovereignty
Data gravity
Integration into existing systems
GPU (graphics processing unit) support
Data latency
Cost
Data locality
Figure 8: Most important considerations when running or planning to run AI workloads
Media hype and commercial availability of AI tools and services have driven enterprise interest in AI technology to new heights in 2023. Partly as a result of this hype, cost ranked as the second-lowest consideration when it comes to running or planning to run AI workloads. Furthermore, over 90% of respondents agree that their IT costs and cloud spending will both increase due to AI applications.
Early adopters will certainly benefit from this laissez-faire attitude to AI solution spending, finding few cost- and budget-related roadblocks when it comes to project and technology evaluation or implementation.
However, this honeymoon phase will not last forever. We also asked respondents to identify the greatest challenges their organization will face with AI over the next 1-2 years. In this forward-looking question, cost-effectively delivering the necessary infrastructure was chosen as the #2 most important challenge, behind data security. Eventually, budgetary and spending expectations for AI projects and technologies will be brought in line with the rest of the IT portfolio. Today's AI projects should be designed with long-term budgetary and operational efficiency in mind, rather than an expectation that spending will remain unlimited for years to come.
In addition to data security and quality, our survey findings indicate that AI technologies will drive implementation of enhanced data governance practices including implementing data protection practices for critical underlying datasets: 51% of respondents say they plan to add mission-critical/production level data protection and disaster recovery (DR) solutions to support AI data governance. Furthermore, half of organizations say they plan to define data protection and governance strategies within edge environments to support AI. In many emerging technology segments, data governance becomes an afterthought as enterprises race to develop front-end applications and services. Our survey results indicate this may not be the case with AI technologies. Because AI solutions rely so heavily on consistent data access, quality, and scalability in order to effectively operate, protecting and securing this data will be of paramount importance for any AI workload.
85% expect to increase their investment to modernize their IT infrastructure to support AI
Over 90% expect IT costs to increase due to AI
Over 90% expect cloud costs to increase due to AI
2023 brought Artificial Intelligence into the mainstream, but we’re still in the very early days when it comes to AI solution applications within enterprises. Organizations are still identifying the right workloads and use cases, determining best fit, and understanding budget implications. Early adopters will seek to gain short-term competitive advantage through accelerated solution deployment. Meanwhile, other enterprises will take a longer-term approach, accumulating necessary staffing and skills, and plotting a development strategy that includes infrastructure modernization as well as internal AI modeling and application development. It’s too early to guess which approach will yield the best results, but based on our survey results, we offer the following recommendations:
01 Prepare your infrastructure for data modernization and mobility.
Effective implementation of AI technologies and solutions requires data mobility and management across data center, cloud, and edge environments. Each of these environments will play a critical role in supporting an end-to-end AI workflow. For example, requirements for localized data cleansing/processing at the edge can be supported with subsequent transfer to dedicated or cloud data centers for training, inferencing, and mass storage. This is no small task for IT and infrastructure professionals, and in some cases will require significant retooling, training, and support to build modern infrastructure solutions to support AI technologies. Many enterprises will likely look for solutions to help them simplify this effort.
02 Invest in AI skill development and plan for staff shortages.
All organizations surveyed said they need more AI skills across a range of related areas over the next 12 months. This means there will be significant competition over a finite pool of support and development resources for solutions like generative AI, data science and analytics, research and development, platform engineering, and prompt engineering. Organizations choosing to build their own AI models and applications will likely be most affected by shortages.
03 Make data security, data quality, and data protection core tenets of your AI strategy.
The need to implement AI technology and solutions is driving organizations to rethink related data security and protection strategies. Without access to data, many AI-based applications and solutions simply will not function. Expect data security and protection solutions to play a pivotal role in shaping enterprise AI technology ecosystems and strategy.
04 Plan now for future infrastructure cost reduction mandates.
The gold rush mentality to develop AI models, applications and related business results will result in a short-term acceptance of over-budgeting and over-spending on AI initiatives. However, this will be temporary. Organizations should plan to quickly improve (and measure) the cost-effectiveness of their infrastructure to support AI over the next 1-2 years.
Respondents were asked which teams/departments are focusing on AI implementation and adoption within their organization. The #1 choice was IT, and the #2 choice was a tie between DevOps and information security teams. All three of these teams will have significant influence on enterprise AI technology decision-making, strategy, and implementation. However, our survey identified some interesting nuances in the way IT decision makers (ITDMs) and DevOps decision makers are thinking about AI strategy and priorities:
DevOps Drive Open Source Model Development:
33% of ITDMs said they plan to leverage open-source models to build AI applications. By contrast, 48% of DevOps decision makers said they plan to leverage open-source models to build AI applications.
Decision Maker Differences in Sustainability and ESG:
DevOps decision makers ranked ESG reporting as their #2 most important area to improve AI-related skills over the next 12 months, above data science/data analytics and R&D. When we look at the same ranking by ITDMs, ESG reporting dropped all the way down to #5.
Machine Learning Operations (MLOps):
Only 14% of ITDMs expect AI models to be updated on a monthly or more frequent basis. By contrast, this proportion increases to 25% when we look at just DevOps decision makers. This highlights an important difference in expectations when it comes to AI dataset and MLOps requirements.
AI Technology Challenges:
ITDMs say data security is the #1 challenge associated with AI over the next 1-2 years. By contrast, DevOps decision makers ranked ensuring mission-critical resiliency and meeting business SLAs as their top two challenges by a slim margin.
Containers vs. Virtual Machines (VMs):
59% of ITDMs say they currently deploy AI applications on containers. However, this proportion shoots up to 66% when we look at just DevOps decision makers. Clearly, DevOps teams prefer container-based application deployment for AI. Interestingly, this trend was not observed for AI app deployment on virtual machines (VMs) – here both decision-maker camps indicated a similar proportion (62% for ITDMs and 63% for DevOps).
For the State of Enterprise AI report, Nutanix commissioned a global research study to learn about the state of global enterprise artificial intelligence (AI) deployments. In July, August, and September 2023, U.K.-based research firm Vanson Bourne surveyed 650 IT, DevOps, and platform engineering decision makers around the world about various elements of their enterprise AI technology strategy and adoption. The research seeks to establish a baseline picture of current enterprise AI deployment and trends, as well as how planned implementations will affect IT and cloud spending and budgeting. The research also explores some of the key technology, infrastructure, and skills-related challenges organizations face as they develop their AI strategy and implement solutions.
This report confirms there is a fundamental understanding among most enterprises regarding the need to explore AI technologies and solutions in order to maintain a competitive edge. More importantly, the survey uncovered an important theme among enterprises adopting AI solutions: a growing requirement for data mobility and data protection across data center, cloud, and edge infrastructure environments. This theme was pervasive throughout the study and leads us to conclude that the adoption of enterprise AI applications and workloads will catalyze a new wave of IT infrastructure modernization initiatives focused on data mobility, security, and protection. These themes are discussed at length in this report, along with supporting survey results.
The report's respondent base spanned multiple industries, business sizes, and geographies. It included IT as well as DevOps and Platform Engineering decision makers employed in organizations across North and South America; Europe, the Middle East, and Africa (EMEA); and the Asia-Pacific-Japan (APJ) region.