7 trends driving compute infrastructure innovation

Image: Adobe Stock

According to a recent survey, 69% of boards of directors have accelerated their digital business initiatives, while 60% have chosen to improve operational excellence through digital business.

Transformations that would normally have taken years were delivered in months by infrastructure and operations (I&O) teams. The distributed enterprise fuels the need for compute to be available in more locations and environments than ever before.

I&O is now in a position to build on the momentum of its high-profile achievements throughout the pandemic and capitalize on business leaders’ realization that IT is fundamental to the operation of the business, not just another cost center. Even so, I&O teams still have problems to deal with.

Old technology, antiquated processes and fragile workflows are certainly not helping bring organizations to their desired future state of delivering compute platforms everywhere: on-premises, at the edge, in colocation facilities and in the cloud.

The following seven trends and technologies are expected to help organizations drive innovation within their compute infrastructure and reach their desired future state.

Sustainability and energy efficiency tech

Customers and investors are driving organizations to deliver sustainable solutions, and technology is seen as critical to that effort, with the majority of organizations already investing in IT for their sustainability programs. For I&O leaders, this means introducing a strategy to reduce their organization’s carbon footprint (i.e., greenhouse gas emissions), water consumption and resource utilization, and to improve e-waste management in data centers.

By investing in these areas, I&O can help the business build a sustainable growth trajectory on today’s market terms.

Cloud sustainability, carbon footprint measurement and advanced grid management software are among the technologies expected to mature over the next three years.

By 2025, 75% of I&O leaders will be tasked with contributing to enterprise sustainability goals, and 25% will have metrics tied to sustainable progress, specifically around carbon neutrality.

Quantum computing

While quantum computing’s (QC) true potential remains a decade away, there is sufficient progress in the field to justify I&O’s exploration of quantum techniques to address business problems. Quantum is another opportunity for I&O to become a center for innovation rather than simply the maintainer of legacy systems.

I&O should focus on two specific areas within quantum:

  1. Developing quantum computing skill sets
  2. Enabling business units, through quantum computing working groups, to centralize and focus exploratory investments

Gartner expects I&O teams to build quantum management skills ahead of QC opportunities over the next three years, creating QC champions who are equipped to navigate a rapidly evolving quantum ecosystem. In fact, by 2025, 25% of large enterprises will gain an evolutionary advantage over non-quantum-enabled peers through quantum-inspired initiatives.


Artificial intelligence

As they move into the mainstream of compute infrastructure, high-performance computing (HPC) and artificial intelligence (AI) are two areas that I&O needs to prioritize to ensure optimized compute infrastructure delivery. Once seen as required only for specialized needs, HPC is becoming part of the mainstream for many organizations. Likewise, compute infrastructure for AI must become a core I&O capability. Because of the self-service nature of AI and HPC solutions, they are a powerful test bed for automating the provisioning and delivery of immutable compute infrastructure.

I&O will use AI to enable many of the efficiencies that deliver on the other aspects of this roadmap. By leveraging AI for hyperautomation, I&O teams can be freed up to deliver on the innovation and accelerated digital transformation required to meet business needs.

In addition, building AI-generated digital twins of compute and networking infrastructure allows proposed changes to be verified against a model of the existing environment, making changes more robust and reliable and further freeing up I&O resources.
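As a rough illustration of the digital-twin idea, the sketch below (all host names, attributes and policy rules are hypothetical, not from the article) models a small server fleet in memory and checks a proposed change against the model before it would ever touch production:

```python
# Minimal sketch of verifying a change against a digital twin.
# Hosts, attributes and the invariant below are illustrative only.
from copy import deepcopy

# The "twin": an in-memory model of the production environment.
twin = {
    "web-01": {"role": "web", "firmware": "1.4", "memory_gb": 64},
    "web-02": {"role": "web", "firmware": "1.4", "memory_gb": 64},
    "db-01":  {"role": "db",  "firmware": "2.1", "memory_gb": 256},
}

def verify_change(twin, host, updates):
    """Apply the change to a copy of the twin, then check invariants."""
    model = deepcopy(twin)          # never mutate the twin itself
    model[host].update(updates)
    # Example invariant: database hosts must keep at least 128 GB of RAM.
    for name, node in model.items():
        if node["role"] == "db" and node["memory_gb"] < 128:
            return False, f"{name} would violate the memory policy"
    return True, "change is safe against the model"

ok, reason = verify_change(twin, "db-01", {"memory_gb": 64})
print(ok, reason)  # the downsize is rejected before reaching production
```

A real twin would model far more (topology, capacity, dependencies), but the pattern is the same: simulate first, apply only what the model accepts.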

Immutable infrastructure

Immutable infrastructure is a process pattern in which system and application infrastructure, once deployed into production, is never updated in place. Instead, when changes are needed, the infrastructure and applications are simply replaced from the development pipeline.

Threats such as ransomware create a need for infrastructure to be immutable down to the server and networking firmware levels. The result is improved security posture with reduced vulnerabilities and faster time to remediate when new issues are identified.

By delivering the base systems in an automated, immutable fashion, I&O can simply reimage any system so that it is identical to its previous configuration and guaranteed free of infection. This will drive requirements for the immutable storage of golden images and the processes necessary to keep them up to date.
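A minimal sketch of the golden-image guarantee: before a host is reimaged, the image is checked against its recorded digest, so a tampered or corrupted image can never be deployed. The function names and the reimage step are hypothetical placeholders, not a real provisioning API:

```python
# Sketch: integrity-check a "golden image" before reimaging a host.
# The reimage step is a placeholder; a real pipeline would invoke
# the provisioning system here.
import hashlib

def sha256_of(path):
    """Stream the image file through SHA-256 in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def reimage(host, image_path, expected_digest):
    """Refuse to reimage from an image that fails verification."""
    digest = sha256_of(image_path)
    if digest != expected_digest:
        raise ValueError("golden image does not match its recorded digest")
    # In a real pipeline this would trigger the automated reimage;
    # here we just report what would happen.
    return f"{host} reimaged from verified image {digest[:12]}"
```

The recorded digests themselves would live in the immutable storage the article mentions, so that neither the images nor their checksums can be silently altered.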

Confidential computing and secured systems

As data privacy and security requirements proliferate across the globe, I&O must bring the concepts of privacy and security to everything it does. I&O staff will need to be able to deliver confidential computing, which protects data while it is in use on third-party systems.

This removes one of the major obstacles to cloud adoption for any regulated organization, since confidential computing protects data in use in a way that traditional cryptographic methods alone cannot. I&O staff who have mastered cryptographic signing techniques will also be in a better position to ensure firmware, hypervisors and operating systems are not compromised.
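As a loose illustration of signature checking on a firmware image: real platforms use asymmetric signatures anchored in a hardware root of trust (as in UEFI Secure Boot), but the shape of the check is the same. The key and blob below are hypothetical, and a stdlib HMAC stands in for a public-key signature purely to keep the sketch self-contained:

```python
# Sketch: verify that a firmware blob matches its signed digest.
# HMAC with a shared key is a stand-in for the asymmetric signatures
# real firmware signing uses; do not use a hard-coded key in practice.
import hmac
import hashlib

SIGNING_KEY = b"demo-key-not-for-production"  # hypothetical

def sign_firmware(blob: bytes) -> bytes:
    return hmac.new(SIGNING_KEY, blob, hashlib.sha256).digest()

def verify_firmware(blob: bytes, signature: bytes) -> bool:
    expected = hmac.new(SIGNING_KEY, blob, hashlib.sha256).digest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)

blob = b"\x7fFIRMWARE v2.1"
sig = sign_firmware(blob)
print(verify_firmware(blob, sig))         # True
print(verify_firmware(blob + b"!", sig))  # False: tampered image is rejected
```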

New system interconnects

AI and HPC will drive the need for higher-performance accelerators and new input/output (I/O) and memory solutions. Within the next two years, PCIe 5.0 and the Compute Express Link (CXL) interconnect will deliver the opportunity to create new architectures for compute infrastructure based on a switched PCIe 5.0 fabric. Here again, HPC and AI will be the test bed for these solutions, enabling I&O to gauge capabilities and use cases for enterprise solutions.


Memory tiering

As CXL-connected memory becomes available, a method for aggregating different memory types, such as DRAM, PMEM and NVMe, in a cost-effective manner will be needed.

Memory tiering software will enable extremely large memory footprints. I&O leaders should evaluate which platforms will benefit from such technology and how this can be implemented.
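To make the tiering idea concrete, here is a toy two-tier store: a small "fast" tier (a stand-in for DRAM) backed by a larger "slow" tier (a stand-in for CXL-attached or persistent memory). The class, capacities and promotion policy are hypothetical illustrations, not any vendor's software:

```python
# Toy memory-tiering policy: hot pages stay in a small fast tier,
# cold pages are demoted to a large slow tier, and pages are
# promoted back on access. All names and sizes are illustrative.
from collections import OrderedDict

class TieredMemory:
    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = OrderedDict()  # hot pages, least recently used first
        self.slow = {}             # cold pages

    def write(self, page, data):
        self._place_in_fast(page, data)

    def read(self, page):
        if page in self.fast:
            self.fast.move_to_end(page)   # refresh recency
            return self.fast[page]
        data = self.slow.pop(page)        # promote on access
        self._place_in_fast(page, data)
        return data

    def _place_in_fast(self, page, data):
        self.fast[page] = data
        self.fast.move_to_end(page)
        while len(self.fast) > self.fast_capacity:
            cold_page, cold_data = self.fast.popitem(last=False)
            self.slow[cold_page] = cold_data  # demote the coldest page

mem = TieredMemory(fast_capacity=2)
for p in ("a", "b", "c"):
    mem.write(p, f"data-{p}")
print(sorted(mem.fast))  # ['b', 'c'] — 'a' was demoted to the slow tier
print(mem.read("a"))     # 'data-a' — promoted back into the fast tier
```

Real tiering software works on page-access statistics gathered by the OS or hardware rather than simple LRU order, but the promote/demote mechanic is the core of it.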

Taken together, these technologies will help I&O teams deliver computing platforms that can be integrated into a composable business design. Business units will also come to I&O for compute infrastructure resilience, with I&O acting as an advisor to the business, providing recovery strategies and enabling rapid recovery in the event of operational failure.

Tony Harvey

Tony Harvey is a senior research director at Gartner and covers a variety of infrastructure topics.
