Author name: CatYong

AI, Automation

Informatica launches Agentic AI offerings and expands cloud partnerships

AI Agent Engineering, a new service within Informatica’s Intelligent Data Management Cloud platform™, empowers organizations to build, connect and manage intelligent multi-agent AI systems and compose business applications quickly, securely and at scale.

Informatica, an AI-powered enterprise cloud data management leader, announced its comprehensive strategy for Agentic AI, building on the company’s position as the industry’s first AI-powered cloud data management platform. The strategy expands on the company’s history of AI innovation, which includes the launch of CLAIRE® GPT, CLAIRE® Copilot and GenAI blueprints for major cloud ecosystem partners. As the resilient backbone of enterprise data and AI strategies, Informatica provides the critical metadata system of intelligence that enhances decision-making and AI outcomes through a comprehensive catalog of an organization’s data assets.

“As the world of AI agents proliferates, the winners will be those who can connect, govern and manage agents at scale while providing enterprise-wide access to trusted data,” said Amit Walia, CEO at Informatica. “With the launch of CLAIRE® Agents and AI Agent Engineering, we are redefining what is possible in data management and AI orchestration. By combining the deep intelligence of our CLAIRE Agents with a no-code, enterprise-grade foundation, Informatica empowers businesses to turn autonomous agents into a strategic advantage, securely and confidently.
With AI Agent Engineering, we’re enabling organizations to rapidly build, connect and orchestrate intelligent agent workflows across complex hybrid ecosystems, all without writing a single line of code.”

According to Gartner®, “Through 2026, those organizations that don’t enable and support their AI use cases through an AI-ready data practice will see over 60% of AI projects fail to deliver on business SLAs and be abandoned.”[1]

New AI Agent Engineering Service to Build, Connect and Manage Multi-Agent AI Systems

Agents offer more resilient solutions that reason rather than depend on rigid, rule-based code. In an AI agent world, trusted data becomes even more critical, as agents will make autonomous decisions based on it. While many application vendors are building domain-specific agents, solving meaningful enterprise business problems often requires connecting siloed sources of information. As AI agents proliferate across enterprises, the ability to connect, manage and govern them is paramount to ensuring enterprise service quality (performance, scale), security and compliance.

This is where AI Agent Engineering comes in: it provides a unified, no-code environment to seamlessly orchestrate agents across ecosystems like AWS, Azure, Databricks, Google Cloud, Microsoft, Salesforce, Snowflake and more, bridging the gap between innovation and enterprise operations.

Key Capabilities:
- Metadata-aware and context-intelligent: ensures AI agents operate on trusted, governed data for context-rich automation, leveraging Informatica’s metadata system of intelligence.
- Leverages existing investments as skills: exposes mappings, business processes and other assets from Informatica’s cloud platform to third-party agents.
- Enterprise-grade performance: built on Informatica’s proven, scalable and secure AI-ready platform, supporting mission-critical workloads globally.
- No-code, AI-native interface: enables both technical and non-technical users to register and discover enterprise-wide agents, and to build and manage AI agents without writing code.

Availability: AI Agent Engineering is expected to be available globally in fall 2025 as part of Informatica’s cloud data management platform. Early previews and partner engagement opportunities will be announced at Informatica World this week in Las Vegas.

“At Wescom Financial, our mission is to deliver innovative, member-focused solutions while ensuring operational excellence and data integrity across every channel,” said Desigan Reddi, VP IT and Operations at Wescom Financial. “Informatica’s new AI Agent Engineering service is a game-changer for organizations like ours, enabling us to build and orchestrate intelligent AI agent workflows securely and at scale, without the need for complex coding. The ability to connect agents across our hybrid ecosystem, leveraging trusted data, empowers both our technical and business teams to accelerate automation and drive real-time, data-driven decisions. This no-code, metadata-aware approach aligns perfectly with our vision of making advanced AI accessible and actionable, helping us enhance member experiences and streamline operations as we continue to lead in digital transformation for the credit union industry.”

CLAIRE® Agents: The Next Evolution in Autonomous Data Management

Informatica announced CLAIRE® Agents, a suite of autonomous digital assistants designed to augment enterprise data management. These agents harness advanced AI reasoning and planning models to automate complex data operations, ranging from data ingestion and lineage tracking to data quality assurance.
Built on open standards, including support for the Model Context Protocol (MCP), and fully integrated with Informatica’s Intelligent Data Management Cloud platform, CLAIRE Agents deliver a new level of productivity, data accuracy and scale. CLAIRE Agents redesign how users interact with Informatica’s cloud platform: instead of an interface built around specific tasks, the new experience dynamically adapts to the current context, giving users a personalized, fluid experience to achieve the goal at hand.

Addressing Modern Data Challenges with Intelligent Autonomy

Today’s data teams face escalating demands driven by AI adoption, governance complexity and increasing regulatory oversight, all while navigating siloed data environments and budget constraints. CLAIRE Agents address these challenges head-on by executing end-to-end data management goals with intelligent autonomy. Key features include:
- Data Quality Agent: continuously monitors and remediates data quality across cloud warehouses, master data management (MDM) systems and third-party repositories.
- Data Discovery Agent: rapidly identifies relevant, trusted and compliant data assets for analytics and AI.
- Data Lineage Agent: automatically generates granular data lineage across diverse coding environments.
- Data Ingestion Agent: simplifies building complex ingestion and replication pipelines.
- ELT Agent: automates optimized ELT jobs for Snowflake, Databricks, Google BigQuery, Amazon Redshift and Microsoft Fabric.
- Modernization Agent: automates the migration of data engineering and integration workflows to Informatica’s AI-powered cloud data management platform.
- Product Experience Agent: enriches product data attributes from a hierarchy/taxonomy in Informatica MDM.
- Data Exploration Agent: automates goal-based exploration of structured enterprise datasets in cloud data warehouses and data lakes.
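The Data Quality Agent’s “monitor and remediate” behaviour described above can be pictured, in deliberately simplified form, as a rule-driven pass over records. The sketch below is purely illustrative: the field names, rules and remediation step are invented for this example and do not reflect Informatica’s actual implementation.

```python
# Hypothetical sketch of a data quality "monitor and remediate" pass:
# scan records against simple rules, auto-remediate the fixable issues
# (here, stray whitespace), and quarantine rows that still fail.

from typing import Dict, List

def check_record(record: Dict[str, str]) -> List[str]:
    """Return a list of rule violations for one record."""
    issues = []
    if not record.get("customer_id"):
        issues.append("missing customer_id")
    if record.get("email") and "@" not in record["email"]:
        issues.append("malformed email")
    return issues

def remediate(record):
    # Fixable issue in this toy model: stray whitespace in string fields.
    return {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}

def run_quality_pass(records):
    clean, quarantined = [], []
    for rec in records:
        rec = remediate(rec)
        issues = check_record(rec)
        (quarantined if issues else clean).append((rec, issues))
    return clean, quarantined

records = [
    {"customer_id": " C-1 ", "email": "a@example.com"},   # fixable: whitespace
    {"customer_id": "", "email": "not-an-email"},         # two hard failures
]
clean, quarantined = run_quality_pass(records)
print(len(clean), len(quarantined))  # 1 clean record, 1 quarantined
```

A production agent would, of course, learn or be configured with far richer rules and write remediations back to the source system; the point here is only the monitor-remediate-quarantine loop.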
“AI agents hold great promise to transform business models and usher in new ways of working and, through our collaboration with Informatica, our joint clients can unlock the potential of AI agents

Network

Gigamon survey: 91% of security leaders are recalibrating hybrid cloud risk during this AI era

Key findings from Gigamon’s survey highlight how hybrid cloud risk and security priorities are being recalibrated in this AI era, as organisations are challenged with ineffective and inefficient tools, fragmented environments and low visibility.

Gigamon, a leader in deep observability, released its 2025 Hybrid Cloud Security Survey, revealing that hybrid cloud infrastructure is under mounting strain from the growing influence of artificial intelligence (AI). The annual study, now in its third year, surveyed over 1,000 global Security and IT leaders across Australia, France, Germany, Singapore, the UK and the US.

As cyberthreats increase in both scale and sophistication, breach rates have surged to 55 percent during the past year, a 17 percent year-on-year (YoY) rise, with AI-generated attacks emerging as a key driver of this growth. Security and IT teams are being pushed to breaking point, with the economic cost of cybercrime now estimated at $3 trillion worldwide, according to the World Economic Forum. As AI-enabled adversaries grow more agile, organizations are challenged with ineffective and inefficient tools, fragmented cloud environments and limited intelligence.

Key Findings Highlight How AI is Reshaping Hybrid Cloud Security Priorities

AI’s role in escalating network complexity and accelerating risk is evident. The study reveals that 46 percent of Security and IT leaders say managing AI-generated threats is now their top security priority. One in three organizations report that network data volumes have more than doubled in the past two years due to AI workloads, while nearly half of all respondents (47 percent) are seeing a rise in attacks targeting their organization’s large language model (LLM) deployments.
More than half (58 percent) say they’ve seen a surge in AI-powered ransomware, up from 41 percent in 2024, underscoring how adversaries are exploiting AI to outpace and outflank existing defenses.

Compromises highlight continued trade-offs in foundational areas of hybrid cloud security. 96 percent of Singaporean respondents concede that they need to make compromises in securing and managing their hybrid cloud infrastructure. The key challenges that create these compromises include the lack of clean, high-quality data to support secure AI workload deployment (46 percent) and the lack of comprehensive insight and visibility across their environments, including lateral movement in East-West traffic (47 percent).

Public cloud risks prompt industry recalibration. Once considered an acceptable risk in the rush to scale post-COVID operations, the public cloud is now coming under increasingly intense scrutiny. Many organizations are rethinking their cloud strategies in the face of growing exposure, with 71 percent of Security and IT leaders in Singapore now viewing the public cloud as a greater risk than any other environment. As a result, 76 percent of Singaporean respondents report their organization is actively considering repatriating data from public to private cloud due to security concerns, and 54 percent are reluctant to use AI in public cloud environments, citing fears around intellectual property protection.

Visibility is top of mind for Security leaders. As cyberattacks become more sophisticated, the limitations of existing security tools are coming sharply into focus. Organizations are shifting their priorities toward gaining complete visibility into their environments, a capability now seen as crucial for effective threat detection and response. More than half (55 percent) of respondents lack confidence in their current tools’ ability to detect breaches, citing limited visibility as the core issue.
As a result, 64 percent say their number one focus for the next 12 months is achieving real-time threat monitoring, delivered through complete visibility into all data in motion.

“Security teams are struggling to keep pace with the speed of AI adoption and the growing complexity and vulnerability of public cloud environments,” said David Land, Vice President, APAC at Gigamon. “Deep observability addresses this challenge by combining MELT [metrics, events, logs and traces] data with network-derived telemetry such as packets, flows, and metadata, delivering increased visibility and a more informed view of risk. It enables teams to close visibility gaps, regain control, and act proactively with increased confidence. With 93 percent of Security and IT leaders in Singapore agreeing it is critical to securing AI deployments, deep observability is fast becoming a strategic imperative.”

Deep Observability Becomes the New Standard

With AI driving unprecedented traffic volumes, risk, and complexity, nearly nine in 10 (89 percent) Security and IT leaders cite deep observability as fundamental to securing and managing hybrid cloud infrastructure. Executive leadership is taking notice: boards increasingly prioritize complete visibility into all data in motion, with 88 percent of Singaporean respondents confirming that deep observability is now being discussed at board level to better protect hybrid cloud environments.

About the survey

The 2025 Hybrid Cloud Security Survey was commissioned by Gigamon and fielded in collaboration with Vitreous World. The data is based on the findings of an online survey of 1,021 global respondents, conducted February 21 to March 7, 2025.

About Gigamon

Gigamon® offers a deep observability pipeline that efficiently delivers network-derived telemetry to cloud, security, and observability tools. This helps eliminate security blind spots and reduce tool costs, enabling you to better secure and manage your hybrid cloud infrastructure.
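The deep-observability idea David Land describes, correlating MELT-style telemetry with network-derived flow records, can be illustrated with a toy example. Everything below (the hosts, the record shapes, the `unaccounted_flows` helper) is hypothetical; a real pipeline works on packets, flows and metadata at vastly greater scale and with far more sophisticated correlation.

```python
# Hypothetical illustration of correlating application-level log events
# (the L in MELT) with network-derived flow records, and flagging flows
# that no application log accounts for -- a crude stand-in for spotting
# East-West blind spots.

app_logs = [  # simplified MELT-style log events
    {"host": "10.0.0.5", "event": "db_query"},
    {"host": "10.0.0.7", "event": "api_call"},
]

flows = [  # simplified network flow records
    {"src": "10.0.0.5", "dst": "10.0.0.9", "bytes": 4096},
    {"src": "10.0.0.8", "dst": "10.0.0.9", "bytes": 1_048_576},  # no matching log
]

def unaccounted_flows(flows, app_logs):
    """Return flows whose source host shows no application-level activity."""
    logged_hosts = {event["host"] for event in app_logs}
    return [flow for flow in flows if flow["src"] not in logged_hosts]

suspect = unaccounted_flows(flows, app_logs)
print(suspect)  # the 10.0.0.8 flow has no application-level footprint
```

The design point the example tries to make is the one in the quote: log-based telemetry alone misses traffic from hosts that emit no logs, which is exactly where network-derived telemetry adds visibility.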
Gigamon serves more than 4,000 customers worldwide, including over 80 percent of Fortune 100 enterprises, nine of the 10 largest mobile network providers, and hundreds of governments and educational organisations. To learn more, please visit gigamon.com.

AI, Data Centres

Dell Tech World 2025: How industry leaders delivered enterprise AI value at scale

A Dell Technologies customer panel reveals enterprise AI value being realised after three years of experimentation, DIY projects with makeshift tools, and massive ideation.

Dell Tech World 2025 unfolded in a way that many would appreciate, thanks to the diverse examples from customers, that is, businesses, about how they approach AI and operationalise it in their workplaces. A panel session moderated by Dell Technologies’ CTO and Chief AI Officer, John Roese, unpacked these examples. John, alongside AI and IT leaders from CSX Corporation, Dauntless XR, Fluidstack and Worley, talked about their experiences deploying AI at scale across industries as diverse as rail transportation, mixed-reality tech for aerospace and aviation, AI cloud platform services, and engineering and construction.

The common thread among the panellists’ businesses was that each was on an AI journey, regardless of how wide or narrow the scope of its AI implementation. John opined that this was a very good hint that “…we’re starting to see scale.”

Starting with experimentation: 3 years later

The panellists’ insights highlighted how experimentation figured largely in their early efforts. Year 1 was mostly about experiments and proposed projects, while Year 2 saw many do-it-yourself projects with initial implementations that had “interesting outcomes.” John shared his frank comment: “To be perfectly honest, Year 1 was kind of a waste of time. We had a couple of tools available that were not enterprise-grade, but what they did was spawn massive ideation.” Year 3, this year, saw industry announcements from the likes of Cohere and Glean as the ecosystem coalesced in ways that made generative AI tech easier to consume. The panel discussion served as a gathering of people who have walked the journey and achieved outcomes in ways that signal to other enterprises: it can be done.

Enterprise AI today?
Enterprise AI is the application of AI to an organisation’s most impactful processes to improve productivity. “At Dell, we probably have a million processes in the company, and the first ones we went after were four, and within them, maybe 20 processes. The result was a significant impact on the company,” John explained.

AI came into action for CSX when the rail operator focused its initiatives on safety and employee engagement. CSX’s senior director of innovation, Bill Jacobs, admitted that AI is not new to the organisation: it has been working on what he likes to call classical AI for 12 years, specifically on machine vision to monitor and inspect train tracks and prevent equipment-caused injuries. “What we have seen the last year is that generative AI is fundamentally changing how we approach those problems.” In the past, a particular problem would have taken a few months and at least two engineers to solve. Now, it can be 80% resolved in 20 minutes.

Care about where your AI-trained models come from

As an AI cloud platform provider, Fluidstack plays a crucial role in powering some of the largest AI research labs and supporting large-scale AI projects for sovereigns and enterprises. Product VP Mike McDonald shared, “Our goal is to enable customers to get access to reliable price-performance infrastructure at scale.” The company leverages AI to optimise the design and management of massive GPU clusters, and is exploring the use of agentic site reliability engineers (SREs) to augment human staff in maintaining infrastructure. Mike observed, “The large-scale deployment trend continues, right? And we are starting to see a massive shift towards inference – the larger clusters are getting larger although there are fewer of them.
As we start enabling everyone who is doing computer vision at scale, or moving your AI to the edge, there are massive data centres powering it.” To this, John quipped that “the hybrid multi-cloud environment for AI is different from the hybrid multi-cloud environment for traditional workloads.” This has led to the emergence of a whole new class of infrastructure players like Fluidstack, “…that are actually initially going after the large training environments.”

Worley, the global engineering and construction firm, used AI to transition from a document-centric to a data-centric business. By embedding generative AI into processes such as bid evaluation and technical specification, Worley aims to double workforce productivity and address the global shortage of skilled engineers. The company highlighted the need to balance cloud and on-premises infrastructure, both to manage costs and to protect intellectual property. Worley VP Peter Downey said, “We need to apply generative AI across all of the knowledge we have, to supercharge our engineers, so 50,000 can do the work of 100,000.”

Generative AI = more than just LLMs

Dauntless XR CEO Lori-Lee Elliot explained how her company uses generative AI not only to speed up development, but also to incorporate it into its products. One of these is a mixed-reality, hands-free guided workflow app that integrates with headsets or smart glasses for field workers in construction and heavy industries. “Eventually people that are smarter than us and with more PhDs can create models to run different simulations and scenarios for things like space weather and air traffic control and so on,” she said. The key capability that brings it all together is real-time object recognition, which segments an image and identifies the objects within it.
Another Dauntless XR product is immersive digital twins of very large spaces; for example, the inner solar system, which it worked on with NASA to provide a snapshot view of satellites in space. “Where we see this going is us being able to provide a single platform where all the data lives. Eventually

Network

Ant International launches tokenised solutions for real-time treasury management with HSBC

Lewis Sun, Global Head of Domestic and Emerging Payments, Global Payments Solutions, HSBC (fourth from right), and Kelvin Li, General Manager of Platform Tech, Ant International (third from left), together with other senior executives.

The Tokenised Deposit Service is the first bank-led, blockchain-based settlement service in Hong Kong. The Whale platform is a next-generation treasury management solution that utilises blockchain technology.

Ant International announced its collaboration with HSBC on the bank’s new Tokenised Deposit Service in Hong Kong. The service will support treasury management with real-time, always-on HKD and USD payments between corporate wallets held by a corporate client at HSBC Hong Kong. The launch comes after a successful pilot between HSBC and Ant International on Ant’s Whale platform, which has been rolling out blockchain-based payment and tokenised deposit solutions with various bank partners. Following the pilot, HSBC adapted the learnings from the joint innovation into its Tokenised Deposit Service, with Ant International as the pioneer client.

As the first client to use the Tokenised Deposit Service, Ant International has successfully completed an instant intra-group fund transfer via the service. It initiated the transaction via its internal global treasury management platform, the Whale platform, digitising its USD deposits with HSBC into digital tokens on the bank’s secure distributed ledger.

One of the core products developed by the Platform Tech team under Ant International’s Embedded Finance business, the Whale platform is a next-generation treasury management solution that utilises blockchain technology, including advanced encryption and AI, to improve the efficiency and transparency of fund transfers between Ant International’s intragroup entities for real-time global treasury management.
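The mint-transfer-redeem flow behind a tokenised deposit can be sketched in a few lines. The class, method and wallet names below are invented for illustration; an actual service such as HSBC’s runs on a permissioned distributed ledger with encryption and multi-party verification, none of which is modelled here.

```python
# Hypothetical, heavily simplified sketch of a tokenised-deposit flow:
# a bank mints tokens against a deposit, intragroup wallets transfer them
# instantly on a shared ledger, and tokens can be redeemed back into the
# underlying deposit.

class TokenisedDepositLedger:
    def __init__(self):
        self.balances = {}

    def mint(self, wallet, usd_amount):
        # One token per USD of deposit, in this toy model.
        self.balances[wallet] = self.balances.get(wallet, 0) + usd_amount

    def transfer(self, src, dst, amount):
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient token balance")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

    def redeem(self, wallet, amount):
        # Tokens return to the bank; USD is released back to the deposit.
        self.transfer(wallet, "__bank__", amount)
        return amount

ledger = TokenisedDepositLedger()
ledger.mint("treasury-sg", 1_000_000)                    # digitise a USD deposit
ledger.transfer("treasury-sg", "treasury-hk", 250_000)   # instant intragroup move
print(ledger.balances["treasury-hk"])  # 250000
```

What the toy model does capture is why the approach enables "always-on" treasury movement: a transfer is just a ledger update between wallets, with no interbank settlement step in the critical path.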
In 2024, more than a third of Ant International’s transactions were processed on-chain via the Whale platform. The platform currently supports multiple tokenised assets from leading banks and institutions around the world, enabling interoperability across diverse blockchain networks. By leveraging privacy-preserving cryptographic techniques such as homomorphic encryption and zero-knowledge proofs, the Whale platform encrypts transaction information and enables multi-party verification in an encrypted state, ensuring secure, confidential and seamless cross-chain transactions.

Ant International’s collaboration with HSBC on tokenised deposits expands a partnership that dates back to 2020. In October 2024, Ant International and HSBC also completed a successful HKD-denominated cross-bank experiment under the Hong Kong Monetary Authority’s Ensemble Sandbox.

“We are very excited to work with an industry leader like HSBC, who shares the belief that tokenisation is the key to bridging the stability of traditional banking with the efficiency of blockchain, to enable real-time treasury management,” said Kelvin Li, General Manager of Platform Tech at Ant International. “As a tech connector in the fast-evolving financial services industry, our banking partnerships are expanding from tokenisation to AI-driven global FX and liquidity initiatives. We look forward to working with more public and private-sector partners to unlock more transparent, accessible and efficient treasury management solutions for businesses worldwide.”

About Ant International

With headquarters in Singapore and main operations across Asia, Europe, the Middle East and Latin America, Ant International is a leading global digital payment, digitisation and financial technology provider.
Through collaboration across the private and public sectors, our unified techfin platform helps financial institutions and merchants of all sizes achieve inclusive growth through a comprehensive range of cutting-edge digital payment and financial services solutions. To learn more, please visit https://www.ant-intl.com/ (Adapted from a press release).

Cloud

Cloud security at the crossroads: Malaysia’s digital transformation opportunity

As Malaysia stands on the cusp of significant cloud adoption, the cybersecurity landscape is poised for dramatic transformation, accelerated by rapid digitalisation and ambitions to become a data centre and cloud hub. In an exclusive interview with Enterprise IT News, Shailesh Rao, President of Cortex at Palo Alto Networks, offered a compelling perspective on the challenges and opportunities that lie ahead for organisations navigating this digital frontier.

The triple threat of cloud transformation

Drawing on his experience working with more than 500 companies over the past 18 months, Shailesh identifies three fundamental shifts that will reshape Malaysia’s cybersecurity posture as cloud adoption accelerates.

“While there will be a lot of benefits coming from the cloud – compute, efficiency, cost benefits and scale benefits – the attack surface expands as well,” explained Shailesh, whose division develops AI-powered platforms for security operations centres and the cloud. This expansion creates vulnerabilities that traditional security approaches simply weren’t designed to address. Shailesh pointed to other nations that have already undergone similar digital transformations. “As more and more data centres come online, I anticipate just a sheer volume of companies that move to the cloud. This creates a whole new paradigm for security to be provided,” he stated.

The industry veteran pointed to the growing trend of hybrid deployments as a particular challenge. Organisations are increasingly spreading workloads across on-premises systems and multiple cloud environments – a strategy that complicates security but is necessary. “One of the benefits of being in the cloud is that you don’t have to put all your eggs in one basket,” he observed.
“And when you have a multi-cloud environment, that is where we (Palo Alto Networks) shine and do a very good job of protecting, because we are a neutral third party that can provide cybersecurity services.”

The second transformation involves the evolving nature of applications that require protection. As organisations develop cloud-native applications designed to leverage elasticity and AI capabilities, security requirements fundamentally change. These applications operate differently from traditional software, demanding specialised protection strategies. Perhaps most concerning is Shailesh’s third observation: cloud environments have become prime targets for sophisticated threat actors. As valuable data and critical workloads migrate to these platforms, attackers are rapidly adapting their techniques to exploit vulnerabilities in these new environments.

The talent deficit

Having traversed the globe consulting with organisations on their security posture and the benefits of platformisation, Shailesh highlighted a persistent challenge that exacerbates security risks: the scarcity of qualified cybersecurity professionals. “Machine learning and AI are not only hugely beneficial but possess the potential to change the world from a cybersecurity perspective,” he explained. These technologies have become essential precisely because human expertise alone cannot scale to meet current threats – especially given the talent shortage. This deficit affects even major cloud providers, who must prioritise their resources. “Due to scarcity of resources and engineering talent, cloud service providers will very likely pay attention to their own technology stack,” he noted.
While providers secure their infrastructure, this focus creates gaps that organisations must address quickly.

Shared responsibility misconceptions

The interview reveals that despite the robust security capabilities of hyperscalers like Microsoft, AWS and Google, two critical factors frequently undermine cloud security. First, the multi-cloud trend introduces complexity that makes consistent security difficult to achieve. “The very fact that a company is in the cloud means it has the option to be multi-cloud. So, you don’t have to put your risk into one place,” the Cortex chief explained. While this approach reduces dependency on any single provider, it requires sophisticated security strategies that work across disparate environments. Second, and perhaps more troubling, is human error. Shailesh pointed out that many recent high-profile breaches have occurred not because of technological failures but because of misconfigurations, poor access controls and other mistakes made by users and administrators. Even perfectly designed systems remain vulnerable to this unpredictable variable.

Southeast Asia’s cloud evolution

Compared to their Western counterparts, Southeast Asian nations have adopted a more measured approach to cloud migration. Shailesh described a three-phase evolution that characterises this journey: initially, organisations move from on-premises environments to local cloud providers; as businesses gain confidence with cloud concepts, they explore more sophisticated implementations; finally, they adopt major public cloud platforms while addressing data residency requirements. “As regulations evolve, as people get more comfortable and as market dynamics dictate, I think we will move towards more global clouds with lesser boundaries,” he predicted.
“This is what’s happening in most western economies, and it will happen here as well.”

Navigating regulatory waters

Malaysia’s recent cybersecurity legislation represents a significant development in the country’s digital governance framework. When asked about these requirements, Shailesh offered a pragmatic perspective informed by his company’s global experience. “We’ve managed the presence of regulations in all the countries that we operate in, and for the most part regulations have evolved over time,” he explained. “As that happens, the market evolves and our awareness evolves. So, I expect that to happen in Malaysia as well.” As head of a division within a company that he described as “the largest pure-play cybersecurity company in the world with operations in over 70 countries,” Shailesh emphasised Palo Alto Networks’ commitment to regulatory compliance while enabling cloud adoption.

Looking forward

For organisations navigating Malaysia’s cloud transformation, Shailesh underscored the importance of partnering with experienced security providers. “You need a partner that has been there, done that, has continued to innovate and can provide the most comprehensive, pre-integrated, most complete solution set in the market,” he advised. As Malaysia and the rest of the region continue their digital transformation journey, Shailesh’s insights offer valuable guidance for organisations seeking to balance innovation with cybersecurity. This perspective reflects a fundamental reality of modern cybersecurity: as cloud adoption accelerates, the traditional boundaries between on-premises and cloud security

AI, Events

Intelligence and architecture at the core of Dell Technologies World 2025

Sitting among the audience during the Day 1 keynote of Dell Technologies World 2025 (DTW 2025), it became clear to this journalist that the guest speakers sharing the stage with Michael Dell are all in the business of pure intelligence. JP Morgan’s Larry Feinsmith shared how his company is exploring AI agent orchestration, while Lowe’s Seemantini Godbole discussed equipping sales associates with AI-powered access to expertise beyond their speciality. Data intelligence forms the core of their operations. Michael Dell had also emphasised that “AI will follow the data” before introducing Larry as someone who “embraces an AI-augmented future for the enterprise” and was “putting the data to work.”

Companies in the business of pure intelligence

As a financial institution handling $10 trillion in daily payments, JP Morgan operates in over 100 markets globally with more than 300,000 employees. “This requires us to build and deliver technology at scale,” Larry noted, adding that the firm had announced an $18 billion tech budget that would likely fund its main priorities: delivering best-in-class digital experiences for clients and employees by leveraging its exabyte of data with AI capabilities integrated throughout. This requires modern, resilient and scalable architecture, which is where Dell fulfils its role, as it has done for the past 30 years.

New infrastructure requirements

A conversation between NVIDIA CEO Jensen Huang and Michael Dell revealed their thoughts about the hardware required to power AI in organisations and potentially worldwide. Both anticipate AI agents augmenting the human workforce in areas like cybersecurity, software engineering, marketing, sales, operations, forecasting, and supply chain management.
Besides working with cloud service providers and “new GPU cloud companies focused on AI native startups and AI native cloud companies,” Jensen mentioned that NVIDIA is preparing for one of its largest opportunities: enterprise AI. “These are companies essentially building a digital workforce of AI agents… some want to do it in the cloud, but many want to do it on-premises.”

Michael noted that with substantial data being created at the edge or on endpoint devices, customers increasingly want to bring AI to the data rather than the reverse. “All these new capabilities require significant innovation.” He claimed that together with improvements in compute, storage, and networking, the Dell AI Factory with NVIDIA, version 2.0, can help address these new requirements.

New architecture and form factors

Michael Dell predicts these AI factories will grow from thousands today to millions in the coming years, calling it an “intelligence explosion.” With AI-optimised servers, advanced networking solutions, high-performance file systems, and data management tools, the AI Factory was conceptualised to make AI accessible as well as scalable to enterprises.

“This is unquestionably the single biggest platform shift, and we talk about how every layer of the computing tech stack gets reinvented,” said Jensen. “The over 500,000 enterprise companies worldwide that have built their IT and data centres over the last 30 years have built them in the old way. And they need to be brought into today’s world of AI.”

But Jensen rightly observed that legacy environments within enterprises might slow down wide-scale deployment of AI factories. So, what if there were a way to close the hardware and processing-power gap while meeting an emerging trend for localised computing, without taking up too much space?
During GTC Spring 2025, NVIDIA introduced the DGX Spark, a small form factor that belies the AI power and benefits it can bring to AI ecosystems. Despite its compact size, DGX Spark will enable developers, researchers, data scientists, and students to accelerate generative AI workloads. Powered by the NVIDIA Grace Blackwell platform, DGX Spark will allow developers to prototype, fine-tune, and run inference on the latest generation of reasoning AI models and seamlessly deploy them to data centres or the cloud.

New expression

AI isn’t the product, but it can power an organisation’s purpose. Both Jensen and Michael observed how customers over the years have created intelligence with their proprietary data to enhance their businesses. Michael concluded, “When companies use their proprietary data and express it in an AI agent, they are expanding their ability to express their competitive advantage.”

(This journalist attended DTW 2025 as a guest of Dell Technologies.)