Author name: Editorial Team

Data Centres

The future of data centres: Speed, sustainability, and AI-readiness at scale

This article was written by Leona Lo, EITN's editor-at-large.

Asia Pacific's data centre landscape is undergoing a major shift. As AI workloads grow heavier and deployment timelines tighten, operators are also facing mounting pressure to meet sustainability targets. At Siemens' Data Center Industry Analyst Day on 17 July 2025, leaders from WT Partnership, Red Engineering, and Exyte shared their views on how the market is evolving, and what it takes to build smarter, safer, and more resilient infrastructure in today's climate.

Asia's data surge and the rise of modular construction

"Asia is home to nearly 60% of the global population — an enormous market to serve," said Jodi Pieterse, Director at WT Partnership. "We've seen massive digitalization in the region, and data center builds now regularly exceed 40–50 megawatts, increasing in capacity year-on-year. In Thailand alone, we're seeing multiple data centre developments exceeding 200-plus megawatts in total capacity."

Pieterse noted that traditional procurement and construction models are being re-evaluated. "Project developers used to focus on getting the best value through competitive tenders. But today, due to supply chain constraints, they are being forced to engage directly — often single source — with key vendors to secure resources."

He added that prefabricated and modular designs — once considered untested — are rapidly gaining ground in Thailand. "This shift is being driven by a shortage of skilled labor and the need to reduce build times. As the market matures, we're also starting to see cost efficiencies emerge."

Design bottlenecks and hardware limitations

Joe Ong, Technical Director at Red Engineering, highlighted how the design process is under pressure. "For any data centre project in a design phase, critical decisions are made in the initial weeks. There is no luxury of time at all." He spoke of the challenge in building up design talent.
"Data centre design and delivery requires deep technical skills and knowledge that cut across multiple disciplines of architecture, structure, electrical, mechanical, plumbing, fire protection, info-communications technology, and construction methodologies. It takes a minimum of three to five years for any graduate data centre engineer to acquire a reasonable level of competency to contribute independently in just one design discipline. The runway is long and challenging, but undoubtedly a very rewarding journey for the engineer."

Ong also addressed concerns about infrastructure planning, particularly when designing for AI workloads. "For an AI data centre, more often than not, end users require mechanical and electrical infrastructure to be built on the basis that the data halls are completely filled with AI high-density compute racks, which have a very high power draw. However, in reality, only an estimated 60–65% of the total rack count in an AI data centre comprises AI high-density compute racks. The remaining racks are networking, storage, and normal compute, which have a significantly lower power draw. This gives rise to an inconvenient truth: there will be a marked difference between the provisioned capacities of mechanical and electrical infrastructure versus actual demand in operations."

To reduce time to market, he said, clients are increasingly exploring edge and containerised solutions. "We need to carefully watch the industry through the lens of global leaders of the community such as Nvidia and the Open Compute Project (OCP). In future AI data centres, it is anticipated that multiples of compute capacity will be compressed into the same rack space, driving up demand for power and cooling. White space will shrink; grey spaces for mechanical and electrical infrastructure will expand.
Ultimately, the roof of any data centre will be the limit, because the roof will limit how many cooling towers may be installed for heat rejection."

Engineering for speed and sustainability

Joshua Hunt, Exyte's Director of Design & Engineering, described how prefabrication and parallel construction are gaining momentum. "If you can prefabricate and test modules offsite while the shell is being built, you're saving months. That's the future — though the re-engineering cycles can add extra time that isn't always in the project schedules." Exyte is also looking ahead to new technologies. "Cryogenic cooling, LNG-supplied cooling, quantum computing readiness — these are becoming mainstream design conversations."

Rethinking data centres for the AI age

Data centres in Asia Pacific are no longer just about uptime. They must be flexible, efficient, and built to meet growing expectations around environmental and operational accountability. The region's varying regulations, workforce limitations, and infrastructure gaps add layers of complexity. "Successful data centre project delivery is now more than ever about rethinking everything from supply chain to operations," Pieterse said.

As demand continues to grow, partners like Siemens are helping to enable this next phase with integrated systems that support safety, performance, and sustainability. With modular construction, smarter design, and a growing focus on lifecycle efficiency, the data centre of the future is taking shape — faster than ever.
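Ong's "inconvenient truth" about provisioned versus actual capacity can be made concrete with a back-of-the-envelope calculation. The rack count and per-rack power figures below are illustrative assumptions (the article supplies only the 60–65% rack-mix estimate), sketched in Python:

```python
# Back-of-the-envelope sketch: provisioned vs. actual power demand in an AI
# data hall. All figures below are illustrative assumptions, not numbers from
# the article (which gives only the 60-65% rack-mix estimate).

TOTAL_RACKS = 1000          # assumed hall size
AI_RACK_KW = 130.0          # assumed draw of a high-density AI compute rack
OTHER_RACK_KW = 15.0        # assumed draw of networking/storage/general racks
AI_RACK_SHARE = 0.625       # midpoint of the 60-65% estimate quoted by Ong

# If M&E infrastructure is sized as though every rack were an AI rack:
provisioned_kw = TOTAL_RACKS * AI_RACK_KW

# Actual demand with a realistic rack mix:
actual_kw = (TOTAL_RACKS * AI_RACK_SHARE * AI_RACK_KW
             + TOTAL_RACKS * (1 - AI_RACK_SHARE) * OTHER_RACK_KW)

utilisation = actual_kw / provisioned_kw
print(f"Provisioned: {provisioned_kw / 1000:.1f} MW")
print(f"Actual:      {actual_kw / 1000:.1f} MW ({utilisation:.0%} of provisioned)")
```

With these assumed numbers, infrastructure sized for an all-AI hall would run at roughly two-thirds of its provisioned power; the real gap depends entirely on the actual rack mix and power draws of a given deployment.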

AI

VAST Amplify to Help Organisations Multiply Effective Flash Capacity

VAST Data, the AI Operating System company, has announced a new capacity optimisation program, VAST Amplify. The new customer engagement is designed to help enterprises and service providers increase effective flash capacity using the SSDs they already own.

As storage lead times and allocation windows tighten, VAST Amplify provides a structured path for customers to identify underutilised flash, rapidly qualify and repurpose existing hardware, and consolidate capacity into the VAST AI Operating System's unified Disaggregated Shared Everything (DASE) architecture, delivering up to 6× or more effective capacity depending on workload characteristics and the existing environment.

Across the industry, capacity planning has become increasingly constrained by supply availability, and SSD shortages are exposing architectural inefficiencies across modern data platforms, including replication-heavy protection models, fragmented data stacks, and performance approaches that depend on overprovisioned flash. VAST Amplify addresses these constraints by helping organisations reclaim stranded SSD capacity and convert installed flash into a unified, globally accessible pool, increasing usable capacity and sustaining AI and analytics growth even as flash becomes scarcer and more expensive, without waiting for new procurement cycles.
"Storage scarcity is forcing organisations into impossible trade-offs – delay programs, ration capacity, or accept whatever allocation they can get," said Phil Manez, Vice President, GTM Execution at VAST Data. "With VAST Amplify, we're giving customers a practical alternative: reclaim the flash you already have, consolidate it into a modern architecture, and materially increase the usable capacity and performance you can deliver to the business."

As organisations scale AI from training into higher-volume inference, emerging approaches that persist and reuse inference state, including key-value (KV) cache, are adding demand for fast storage, further increasing the importance of maximising effective capacity from existing flash.

VAST Amplify is designed to meet customers where they are, operationally and technically, while aligning each engagement phase to measurable efficiency gains at scale. By consolidating flash into a unified VAST environment, customers can reduce the friction of workload-by-workload capacity planning and instead support a broader range of use cases from a shared pool, including data-intensive analytics, real-time pipelines, modern databases, and AI infrastructure needs such as high-performance repositories for model data and emerging inference-era requirements.

(Adapted from press release)
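A rough sketch of where an effective-capacity multiplier like the "up to 6× or more" figure can come from: the protection overheads, stranding percentage, and data-reduction ratio below are invented for illustration and are not VAST's published methodology.

```python
# Illustrative sketch of how consolidating flash can multiply *effective*
# capacity. The overheads and reduction ratios below are assumptions chosen
# to show the levers (protection overhead, stranded capacity, data reduction),
# not figures published by VAST.

raw_tb = 100.0                     # installed raw flash

# Fragmented estate: 3-way replication and ~30% stranded/underutilised flash.
replicated_effective = raw_tb * (1 - 0.30) / 3.0

# Consolidated pool: wide erasure coding (~15% overhead), no stranding,
# plus an assumed 2:1 global data-reduction ratio.
pooled_effective = raw_tb * (1 - 0.15) * 2.0

gain = pooled_effective / replicated_effective
print(f"Fragmented estate: {replicated_effective:.1f} TB effective")
print(f"Unified pool:      {pooled_effective:.1f} TB effective ({gain:.1f}x)")
```

Under these assumptions the same 100 TB of raw flash yields roughly seven times more usable capacity; real gains will vary with the workload's reducibility and how much flash was stranded to begin with.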

AI

MIT Technology Review unveils full agenda for EmTech AI 2026

The longest-running AI business conference returns April 21–23, 2026, on the MIT campus and online.

CAMBRIDGE, Mass., Jan. 7, 2026 /PRNewswire/ — MIT Technology Review today announced the release of the full agenda for EmTech AI 2026, the leading AI business conference for executives, researchers, and innovators. The event will be held April 21–23, 2026, on the MIT campus and online via livestream.

Now in its 14th year, EmTech AI brings together researchers and executives to explore how intelligence is moving from pilots to production — from isolated wins to enterprise-wide impact. This year's theme, "The Great Integration," analyzes the implications and impact of this critical moment as intelligence is woven into workflows, systems, and decision-making.

Curated by the MIT Technology Review editorial team, the agenda provides insights grounded in research, real-world application, and forward-looking strategy. Attendees will leave with the clarity to act, equipped to integrate AI responsibly, scale initiatives with confidence, and transform insights into measurable business outcomes. To access the full detailed agenda, visit EmTech-AI.com.

"2026 is the year organizations must operationalize AI to stay competitive," said Brian Bryson, director of event content and experiences at MIT Technology Review. "As AI becomes infrastructure, the effects ripple across every function. This year's agenda helps leaders understand the strategic and operational impact of the Great Integration and how these changes will redefine workflows, roles, and results."

"The future isn't built with AI — it's built on AI," he added.

Tickets and early access offer

Tickets are limited for this exclusive event and will sell out. Secure your seat today at early access pricing, available until January 23, 2026. For the full agenda and to secure your spot, visit EmTech-AI.com.
Anchored by the editorial expertise of MIT Technology Review, EmTech AI's in-person event will feature exclusive experiences, thought-provoking interviews and keynotes, and strategy-setting case studies. Attendees sit side by side with leaders and peers across all industries in interactive Q&A sessions and experience unparalleled networking opportunities, all in an intimate setting at the MIT Media Lab on the MIT campus. The virtual experience will include an interactive event hub, featuring livestreamed sessions, videos on demand, live chat and Q&A, and additional networking opportunities.

Members of the media may obtain additional information and request press credentials by emailing [email protected].

About MIT Technology Review

MIT Technology Review is an independent media company owned by MIT. The print publication, established in 1899, was the first-ever technology magazine; today, MIT Technology Review publishes in multiple digital formats every day, including on our site, in email newsletters, and across all major social channels. We also produce a multi-award-winning bimonthly print magazine and run one of the industry's most highly regarded events brands, EmTech. Our goal is to become the destination for those seeking to understand how technology is shaping our world.

Media Contact: MIT Technology Review, [email protected]

AI

How AI-driven, human-centred manufacturing will shape SEA region in 2026

By Marcelo Tarkieltaub, Regional Director, Southeast Asia, Rockwell Automation

Southeast Asia's manufacturing sector had a challenging start to 2025, but a positive end. As the uncertainty settles, 2026 presents an opportunity to move forward with intelligence, foresight and the ability to scale digital capabilities across diverse markets. As global supply chains continue to rebalance and the region strengthens its position in electronics, automotive, food production and advanced materials, manufacturers are asking a new question: how do we build operations that can anticipate, not just react?

Four forces will shape this next chapter: predictive capability, edge intelligence, workforce evolution and embedded sustainability. Underpinning these forces is the accelerating use of AI across Southeast Asia's manufacturing sector, not as a standalone lever but as an enabler of smarter decisions, faster responses and more adaptable operations. Together, these shifts reflect a broader regional movement toward adaptive, insight-driven manufacturing. While countries such as Singapore and Malaysia offer early examples, the implications span the entire Southeast Asian landscape.

Predictive manufacturing becomes a regional differentiator

Predictive manufacturing is fast becoming the region's competitive advantage. Manufacturers across ASEAN are accelerating their use of digital twins, simulation environments and AI-enabled forecasting to anticipate issues before they occur. Several factors are driving this shift. Rockwell Automation's 10th Annual State of Smart Manufacturing Report highlights that 94% of APAC manufacturers are investing in or planning adoption of AI/ML tools. Predictive technologies are at the center of these investments, reflecting a global shift from manual troubleshooting to data-driven optimization.
Research reinforces this momentum: studies of predictive maintenance practice in 2025 show consistent reductions in downtime, energy use and maintenance costs across digitally advanced plants. Early movers across Southeast Asia are already applying predictive insights to optimize equipment lifecycles, balance energy loads and simulate production decisions before making physical adjustments. This capability allows manufacturers to navigate uncertainty with greater confidence, an advantage that will grow more critical as supply chains adjust across the region.

Edge intelligence accelerates smart manufacturing at scale

While cloud platforms remain essential for enterprise-wide governance and analytics, the next wave of industrial gains in Southeast Asia will be unlocked at the edge. Edge intelligence enables data to be processed directly on the production floor, supporting faster, more accurate decisions. IDC's Asia/Pacific Future of Operations Survey estimates that 40% of operational data in the region will be generated and processed at the edge by 2027. This reflects the realities of Southeast Asian manufacturing: distributed production networks, varying connectivity environments, and a mix of advanced and legacy equipment.

Manufacturers are increasingly using a blended approach in which big-picture analysis happens in the cloud while fast, real-time decisions happen on the factory floor. This hybrid architecture is becoming the norm in markets such as Vietnam, Thailand, Malaysia and Indonesia, allowing manufacturers to modernize at their own pace while maintaining unified oversight. For enterprise IT leaders, this signals a strategic shift: edge is no longer a hardware consideration but a foundational element of digital resilience.

People will define the success of digital factories

Despite rapid advances in automation and AI, human capability remains central to manufacturing competitiveness.
A recurring challenge across ASEAN is workforce readiness: senior technicians are retiring, younger talent is gravitating toward digital-first industries, and new technologies require new skills. But across the region, manufacturers are reframing workforce transformation from a labor-shortage problem into a skills-acceleration opportunity. The latest Rockwell State of Smart Manufacturing Report indicates that 42% of Asia Pacific manufacturers are using digital tools to redesign roles and create more engaging work environments. This underscores the region's broader pivot toward workforce augmentation, not replacement.

Sustainability becomes an operational lever, not a reporting task

Southeast Asian manufacturers are increasingly integrating sustainability into production, moving beyond compliance into real-time operational optimization. AI-enabled energy orchestration, automated load balancing and waste-reduction analytics are allowing factories to reduce consumption while maintaining throughput. As customers tighten expectations around transparency, sustainability performance is increasingly intertwined with competitiveness. The manufacturers that treat sustainability as a measurable, data-driven performance metric will lead the next phase of regional growth.

A predictive, connected and people-centred future

Southeast Asia's manufacturing sector will continue to diversify in 2026, but one theme will cut across all markets: intelligence will define advantage. Predictive tools will shape strategic decisions, edge intelligence will enable responsiveness at scale, workforce transformation will become a shared priority, and sustainability will integrate into daily operations. For manufacturing leaders across the region, the goal is not to adopt every emerging technology, but to build digital foundations that connect systems, empower people and deliver insights that scale across varied environments.
Those who do will navigate the region’s rapid transformation and shape its next era.
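The blended edge/cloud pattern described above can be sketched as a simple routine that acts locally on raw readings and forwards only a compact aggregate upstream. The threshold, payload fields, and "slow_line" action are invented for illustration:

```python
# Minimal sketch of hybrid edge/cloud processing: fast, local decisions are
# made at the edge, while only a compact aggregate is forwarded for
# cloud-side, big-picture analytics. Thresholds and payloads are invented.

from statistics import mean

VIBRATION_LIMIT = 8.0  # assumed alarm threshold (mm/s)

def process_at_edge(readings: list[float]) -> dict:
    """Decide locally, then summarise for the cloud."""
    alarms = [r for r in readings if r > VIBRATION_LIMIT]
    return {
        "local_action": "slow_line" if alarms else "none",  # real-time decision
        "cloud_payload": {                                   # compact aggregate
            "count": len(readings),
            "mean": round(mean(readings), 2),
            "peak": max(readings),
            "alarm_count": len(alarms),
        },
    }

result = process_at_edge([3.1, 4.7, 9.2, 5.0])
print(result["local_action"])   # -> slow_line (9.2 exceeds the limit)
print(result["cloud_payload"])
```

The design choice this illustrates: the latency-sensitive decision never leaves the factory floor, while the cloud still receives enough summarised data for fleet-wide analysis.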

Cybersecurity

From AI hype to trusted autonomy: Five ways APAC cyber resilience will change in 2026

By Martin Creighan, Vice President, Asia Pacific at Commvault

As APAC economies enter the era of agentic AI, resilience and sovereignty are no longer technical concepts – they are the foundations of leadership, trust, and competitiveness. Artificial intelligence has matured from pilots to purpose. IDC describes an "agentic future" in which humans and AI act with autonomy and intention. Across APAC, AI-related investments are projected to grow around 1.7 times faster than overall digital technology spending, generating an estimated US$1.6 trillion in economic impact by 2027, while in Singapore more than 70% of companies report adopting AI in some form.

The most visible shift is in AI assistants powering customer engagement, operations, and even cyber response. But these systems are only as trustworthy as the data they learn from. In 2026, AI integrity will become a central pillar of resilience, with the ability to trace, verify, and restore the truth in machine learning models. What's emerging next is the use of conversational interfaces to run resilience itself. Instead of navigating dashboards and scripts, teams will ask – in natural language – to protect a workload, check a policy, or validate recovery readiness across SaaS, multi-cloud, and hybrid environments. Resilience begins to feel like an always-on, conversational control layer over critical services.

From Singapore's Digital Sovereignty Framework to India's Data Protection Act, cloud sovereignty has become the new strategic frontier. Forrester expects that by 2026, roughly half of APAC enterprises will make sovereignty-based controls – such as in-region infrastructure and data residency – a top criterion for cloud and AI platforms. Sovereignty is about control and choice.
In a multi-cloud, multi-region world, enterprises need the freedom to decide where data resides – on-premises, in a private cloud, a local hyperscaler region, or a global cloud – while maintaining visibility into whose laws it sits under and how it can be recovered without crossing borders. Architectures are becoming sovereignty-aware by default, with encryption, access policies, and compliance rules moving with the data – across borders and clouds. When sovereignty is built into design, compliance becomes a competitive advantage. In 2026, this combination of sovereignty and freedom of choice will allow organisations to innovate confidently within trusted boundaries.

As digital ecosystems become borderless, identity is replacing infrastructure as the perimeter of security. In Singapore, phishing attempts surged by about 49% to more than 6,100 cases in 2024, with banking, government, and e-commerce among the most spoofed sectors – a reminder that most attacks now begin with stolen or abused identities. IDC anticipates that by 2026, cyber-resilient organisations will merge identity, data, and recovery policies into one continuous security fabric. Continuity is incomplete if identities remain corrupted. The ability to restore verified user integrity – not just restore systems – will become a cornerstone of operational assurance.

This matters even more as AI starts talking to AI – autonomous agents initiating actions, sharing data, and making decisions on their own. In this AI-centric world, a trusted identity becomes the first checkpoint of safety, and recovery plans must prove that compromised identities have been reset, re-verified, and re-linked to clean data.

In 2026, enterprises will recognise that AI initiatives stall not from lack of data, but from the inability to safely access and prepare the data they already have.
Across APAC, multiple surveys show that data quality, security, and governance – not enthusiasm for AI – are the primary bottlenecks to scaling projects beyond pilots, with many organisations citing fragmented data estates and compliance concerns as the main reasons initiatives slow or stall. Historical data will be reframed from "backup insurance" into a strategic intelligence asset, if activated responsibly. This will accelerate the rise of sovereign, resilience-aware data rooms: secure environments that connect governed backup data directly to AI platforms and data lakes without risky, ad-hoc workflows. By providing controlled, self-service access with built-in classification, lineage, and compliance, data rooms will turn protected data into clean, compliant, AI-ready fuel that can power analytics and AI without breaching local data protection rules.

While AI dominates today's headlines, quantum computing defines tomorrow's cryptographic risk. Post-quantum cryptography (PQC) readiness is now a resilience imperative: data protected under today's algorithms (RSA, ECC) may be vulnerable within a decade. Forward-looking enterprises are beginning crypto-inventory audits, deploying quantum-safe algorithms, and redesigning backup and recovery systems with cryptographic agility – for example, trialling QKD and PQC over quantum-safe national networks, or working with telcos that now offer such networks. Quantum readiness is about ensuring that sovereignty, encryption, and recovery will still hold when quantum attacks inevitably occur. For heavily regulated sectors and high-IP manufacturers, that means treating crypto-agility as part of core resilience architecture today.

The Architecture of Trusted Leadership

Governance, sovereignty, and resilience are converging into a single mandate: proof of trust. Boards no longer accept assurances – they expect evidence.
Recovery metrics, audit trails, and cleanroom validations are becoming the language of accountability across highly regulated sectors worldwide. As that shift continues, traditional measures such as Recovery Time Objective (RTO) and Recovery Point Objective (RPO) will not be enough on their own, because they say little about whether restored data is truly trustworthy. Measures such as Mean Time to Clean Recovery (MTCR) – the time needed to bring critical applications, infrastructure and validated-clean data back to a trusted state – will increasingly shape how APAC leaders judge whether their cyber-resilience investments are working.

By 2030, half of the region's digital value will come from organisations that scale AI responsibly. That responsibility rests on three pillars. Enterprises that embed these pillars into their design will be best placed to move from AI hype to trusted autonomy. They will operate across borders without compromise, turn compliance into credibility, and give both humans and AI systems a foundation of data they can safely depend on.
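The difference between RTO and an MTCR-style measure comes down to where the clock stops: at "systems back online" or at "restored data validated clean". A minimal sketch with invented timestamps:

```python
# Sketch of why MTCR extends RTO/RPO: RTO stops the clock when systems are
# back online, while Mean Time to Clean Recovery also counts the time spent
# validating that restored data is actually trustworthy. All timestamps are
# invented for illustration.

from datetime import datetime

incident  = datetime(2026, 3, 1, 2, 0)    # outage begins
restored  = datetime(2026, 3, 1, 8, 0)    # systems back online
validated = datetime(2026, 3, 1, 14, 30)  # cleanroom/integrity checks pass

rto_hours  = (restored - incident).total_seconds() / 3600
mtcr_hours = (validated - incident).total_seconds() / 3600

print(f"RTO-style recovery time: {rto_hours:.1f} h")
print(f"Time to clean recovery:  {mtcr_hours:.1f} h")
```

In this invented scenario the organisation would report a 6-hour recovery by the traditional measure, yet clean, trusted data was not available until 12.5 hours after the incident, which is the gap MTCR is meant to expose.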

AI

Accelerating biomedical insights: How AI can speed up literature analysis in genomics

The ability to synthesise and curate vast amounts of scientific literature is becoming a critical differentiator for biomedical organisations seeking to stay ahead. For Aaqib Alavy, a UCL Masters candidate specialising in artificial intelligence, this challenge is at the heart of his current work. Aaqib, who works on synthesised genetic variant information, is focused on streamlining literature analysis and data curation in genomics.

"I'm doing a lot of work within that space, looking at how AI and machine learning can aid and support the medical field," he explained. "Right now, I've been working on a project within genomics, but more specifically, literature analysis and data curation within that field."

Crucial streamlining

The stakes are high. As Aaqib noted, "With the flow of genomics, a lot of it is advanced by studies, and scientific literature comes out every day; there are multiple articles produced regularly. There's a lot of research being done, but not enough resources to digest that research. Trying to speed up that process is part of what I'm looking into and working on."

In the field of biomedicine and healthcare, genomics companies aim to provide medical professionals with updates on genetic variants – mutations in DNA that can be linked to certain diseases. "You can think of it like an update in the form of synthesised information that is a very easy and digestible review for medical professionals to look at and be able to understand," Aaqib said.

Profound implications

Streamlining workflows and synthesising scientific literature has profound implications for the biomedical research community – for example, challenging or consolidating consensus around a particular scientific finding, which can impact patient care from risk assessment to preventative strategies.
Aaqib explained, "If a study comes out presenting evidence that an existing benign variant now shows signs of actually being pathogenic, this is now an important area to potentially direct more resources and focus towards further consolidating that finding; however, with the sheer volume and rate of publication, findings like these can often go undetected for extended periods of time." This is why accelerating the process of curating and reviewing new findings can help the biomedical community more rapidly detect relevant findings and align their efforts with the latest evidence.

Challenges

The technical challenge, however, is formidable. "One of the biggest bottlenecks is the computational cost, and when it comes to optimising the resources needed whilst maintaining results that are both accurate and robust," Aaqib shared. "Right now, I've been working on a locally developed implementation where you can use consumer-grade GPUs, as well as using models and technology that are both a lot more accessible, and more traceable and transparent when it comes to verifying results and how they were achieved."

He described how he uses large language models (LLMs) to analyse structured and formatted data such as complex tables, where semantic context is limited. A heuristic-based relation extraction approach identifies co-occurrences of genomic entities within text and assesses their potential relationships using scoring models. These associations are then validated or expanded by cross-referencing known databases.
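The heuristic co-occurrence approach described above can be sketched in a few lines. The entity lexicons, proximity-scoring rule, and example sentence below are invented for illustration; a production pipeline would use curated lexicons and validate candidate pairs against variant databases such as ClinVar.

```python
# Minimal sketch of heuristic co-occurrence relation extraction: find
# sentences where a gene and a disease term appear together and score the
# pair by proximity. Lexicons, scoring rule, and example text are invented;
# real pipelines validate hits against databases such as ClinVar.

import re

GENES = {"BRCA1", "TP53"}                        # assumed entity lexicon
DISEASES = {"breast cancer", "li-fraumeni syndrome"}

def extract_relations(text: str) -> list[tuple[str, str, float]]:
    relations = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        lower = sentence.lower()
        for gene in GENES:
            g_pos = lower.find(gene.lower())
            if g_pos == -1:
                continue
            for disease in DISEASES:
                d_pos = lower.find(disease)
                if d_pos == -1:
                    continue
                # Simple proximity score: closer mentions score higher.
                distance = abs(g_pos - d_pos)
                relations.append((gene, disease, round(1 / (1 + distance), 3)))
    return relations

text = "Pathogenic BRCA1 variants are strongly associated with breast cancer."
print(extract_relations(text))
```

This captures only the co-occurrence step; the scoring models and database cross-referencing Aaqib mentions would sit on top of output like this, filtering and ranking the candidate pairs.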
APIs (application programming interfaces) play a crucial role in Aaqib's work by providing powerful tools for data extraction and analysis. A specific example he highlighted is LlamaIndex's extraction tool, LlamaExtract. However, Aaqib noted significant challenges with API-based LLM solutions. "Because it's an LLM, it's not as deterministic and transparent. There's also the issue of hallucinations, where LLMs will make up information for the sake of answering a prompt or completing a task."

Explainability and robustness

Another critical issue is explainability – an extremely important requirement in the medical field. "Because it's the medical field, genomics, that is probably one of the most crucial and vital components – being able to understand why these insights are the insights that they are is extremely important for both medical professionals and the larger biomedicine community," Aaqib emphasised. "If you want to make conclusive decisions, you need to understand why a certain tool, model, or technology has yielded the insights that it has."

Despite these challenges, Aaqib sees broad potential for the technology. "There's a lot of benefit for other industries, because it's essentially information synthesis. Being able to have that for any industry, any application, is extremely useful."

One significant technical hurdle is extracting and processing PDF documents, a problem that stems from their inconsistent nature: "PDFs are extremely varied across the board in terms of structure, layout, and semantics," he explained. And as the volume of biomedical literature continues to grow, the need for advanced tools to curate and synthesise information will only intensify. "The advancement of information synthesis and literature analysis within the field of genomics and the wider medical field can really aid the whole world of biomedicine," Aaqib emphasised.
"Because the main bottleneck is moving findings down the pipeline… being able to produce and create tools that can aid in that review and streamline the process is extremely useful."

Ultimately, the ability to effectively parse documents in challenging formats could have implications beyond biomedicine; as Aaqib suggested, solving the PDF extraction challenge could benefit multiple industries, enhancing knowledge sharing across these sectors.

Automation

Finshape accelerates global growth with new CEO

Finshape, a leading provider of digital banking solutions, has announced the appointment of Neil Budd as the company's new CEO.

PRAGUE, Jan. 7, 2026 /PRNewswire/ — Budd joins Finshape as an experienced executive with 25+ years in banking, technology and consulting. He will focus on the company's continued international growth, strengthening value delivery for clients, and expanding activities primarily across Western Europe, the Middle East and the APAC region. Current CEO Petr Koutný will move to the position of Chairman of the Board.

Financial institutions today are working out how to turn their investments in digitalisation into measurable outcomes, while understanding how to harness the potential of artificial intelligence in a meaningful way.

"Banks are looking for technology partners they can trust for the long term. Finshape has a strong, relevant product portfolio, experienced teams, and stable, trusted relationships with banks. With our ADBO System and new capabilities in loyalty and personalisation, we will continue to help banks deliver tangible value to their customers and accelerate our growth journey to new markets," said Neil Budd.

Finshape has been growing for the long term, particularly in CEE, where it works with major financial institutions including Erste Group, Raiffeisen Bank International AG, OTP Bank Group and Banca Transilvania. In 2025, Finshape generated EUR 55 million in revenue according to its consolidated results and grew 30% year-on-year.

The company is also expanding its international footprint. Further growth has been supported by the recent acquisition of the loyalty platform Realtime-XLS from Collinson Group, and a strategic partnership with Dubai Islamic Bank, the largest Islamic bank in the United Arab Emirates. These steps strengthen Finshape's role in global digital banking.
Finshape intends to be a market leader in implementing technologies, with a specific focus on the meaningful integration of AI into its operations and client offerings to deliver measurable outcomes for banks.

Before joining Finshape, Budd worked at various leading global consulting and advisory firms. At Finastra, he served as Vice President, Global Head of Strategic Partnerships, Ecosystem and Alliances, as well as Global Head of Managed Services. At Accenture, he was a Managing Director.

Today, Finshape serves more than 100 financial institutions in 25 countries across four continents. The company’s team includes more than 600 specialists, supporting tens of millions of end users worldwide. Finshape’s growth is driven by a combination of organic development and targeted acquisitions.

Johnson Controls and Thamrin Nine now power Southern Hemisphere’s tallest building with green technology

Johnson Controls, the global leader for smart, safe, healthy and sustainable buildings, celebrated the successful completion of a multi-year collaboration with Thamrin Nine, a flagship multi-purpose complex in Jakarta’s central business district spanning commercial, hospitality and retail spaces. The project, which helps reduce energy use by up to 30 percent, recently achieved BCA (Building and Construction Authority) Green Mark Platinum certification, recognizing its outstanding environmental performance.

As part of the project, Johnson Controls provided chiller plant design and implementation, engineering services and building automation systems for two of Thamrin Nine’s skyscrapers: the Autograph Tower, the tallest building in the Southern Hemisphere at 382.9 meters, and the Luminary Tower. With building systems fully deployed across both towers, Johnson Controls will continue supporting the development through ongoing service and maintenance to ensure long-term efficiency and comfort, and to extend equipment lifespan.

Smart Building Technologies for a Future-Ready Urban Icon

Johnson Controls designed, supplied and installed a full suite of integrated building solutions tailored to the complex’s high-performance needs. Thamrin Nine marks the company’s first chiller plant optimization project in Indonesia, which aims to make cooling systems work smarter through advanced technologies and improved operations. Through turnkey project management that covered design, implementation and integration across systems, Johnson Controls delivered significant energy and cost savings.

“This project is a defining milestone for urban development in Jakarta,” said Michael Wiener, Design Director, Thamrin Nine.
“Our goal was to change the way people live, work and play in Jakarta, delivering an integrated destination that combines scale, choice, convenience, and sustainability. Johnson Controls has been the ideal partner to help us achieve that vision, and we appreciate their continued support in maintaining performance at the highest standards.”

“Our partnership with Thamrin Nine exemplifies how future-forward engineering and digital optimisation can deliver not just iconic buildings, but greener, smarter cities. By integrating advanced building technologies, we’re creating healthier, more sustainable indoor environments that reduce operational costs, lower carbon emissions, and enhance occupant well-being,” said Wibawa Jati Kusuma, General Manager, Malaysia & Indonesia, Johnson Controls. “With ongoing service and support, we’re committed to helping Thamrin Nine maintain optimised performance and set the pace for sustainable urban development in Indonesia.”

A Model for Sustainable Development in ASEAN

The Thamrin Nine partnership builds on Johnson Controls’ 140-year legacy of innovation and leadership in smart, safe, healthy and sustainable technologies, and is part of its expanding portfolio of sustainability-driven projects across Southeast Asia. These proven technologies and approaches, adapted for the Jakarta context, support Indonesia’s Net Zero Emissions by 2060 vision, demonstrating how global expertise can address local environmental challenges at scale.

As it celebrates its 140th anniversary in 2025, Johnson Controls continues to redefine building performance, driving the next era for commercial buildings, transforming industries and powering its customers’ missions. (Adapted from press release)

Tata Communications’ key predictions shaping Asia’s tech landscape in 2026

By Amitabh Sarkar, Vice President & Head of Asia Pacific and Japan – Enterprise at Tata Communications

As we move into 2026, the stakes for getting data foundations right have shifted dramatically in Asia Pacific. AI has already moved beyond proof-of-concept, with AI-related investments in Asia/Pacific expected to grow 1.7x faster than overall digital technology spending, according to IDC. Data remains the first bottleneck and the biggest multiplier. Organisations will need to invest early in data governance, quality management and building a single source of truth across silos. This will go hand-in-hand with investing in the right digital infrastructure: environments that are elastic, scalable and performant, and able to support the heavy compute demands of AI across cloud and edge. As more real-time use cases emerge, we will see a stronger push toward processing data closer to where it is generated, reducing latency and improving the speed of insight.

Equally important is culture. AI programmes only succeed when people understand how to work with these systems: when there is clear ownership, the right skills in place, and a mindset that encourages experimentation, learning and responsible use.

Voice AI is entering a new phase in Asia Pacific, one shaped by three converging forces: the surge in enterprise AI investment, the region’s linguistic and cultural complexity, and rising expectations for real-time, empathetic customer service. While AI innovation has historically focused on text-based chat and automation pilots, enterprises are no longer experimenting with conversational AI; they are deploying systems that handle live dialogue, context retention, sentiment understanding and multilingual switching as the default, not the exception.
A report on 2026 Asia Pacific predictions notes that over 90% of enterprises will prioritise quantum-safe security, reflecting growing recognition that future-proofing data and communications is essential for competitiveness and trust. Building on the momentum of previous years, organisations in finance, healthcare and other data-sensitive industries are moving beyond proofs-of-concept to implement quantum-resilient algorithms, quantum key distribution (QKD) and secure quantum communication channels. In 2026, this shift signals a turning point: quantum technologies are no longer niche research projects; they will become a core part of enterprise risk strategy, compliance frameworks and long-term innovation roadmaps across APAC.

As enterprises across Asia Pacific scale AI adoption, threat actors are increasingly leveraging AI to automate attacks, craft hyper-targeted exploits and weaponise synthetic content. In 2026, cybersecurity will become inseparable from the infrastructure that powers AI and cloud workloads. Zero-trust architectures and continuous verification are now critical foundations for enterprise security, particularly as AI workloads expand across cloud, edge and hybrid environments. Networks themselves are evolving to support both performance and security at scale: large-scale AI-ready networks, for example, show how high-capacity, low-latency infrastructure can enable compute-intensive AI applications while embedding robust access controls, data integrity measures and compliance standards.
