Google bets on AI as a lever for hypergrowth in business

Miguel Ángel Gombau, Tech Marketing Manager at SNGULAR

April 14, 2025

Las Vegas once again became the epicenter of technological innovation as host of Google Cloud Next 2025.

This year’s edition of Google Cloud’s most notable event dazzled with its bold vision and groundbreaking announcements, positioning Artificial Intelligence (AI) not just as a tool, but as the driving core of growth for companies and developers.

Under the theme “Reimagining how we work, build software, and create intelligent agents,” Google Cloud unveiled a wealth of innovations promising to redefine the future of cloud computing and the way we professionally interact with technology.

The message was clear: Google is investing massively in AI across its entire tech stack, from the underlying infrastructure to cutting-edge models and development platforms. This commitment is embodied in a planned $75 billion investment in 2025 dedicated to servers and data centers, crucial for powering AI computing and its cloud business.

The scale of this bet was evident in the advancements showcased, ranging from Ironwood, the seventh generation of Tensor Processing Units (TPUs), which delivers a major leap in performance and energy efficiency, to Cloud WAN, Google’s global private enterprise network, optimized for application performance with promised speed improvements and cost reductions.

These infrastructure innovations lay the foundation for the next wave of AI-powered applications, capable of processing data at unprecedented speed and scale.

Sngular and Google Cloud support companies on their journey toward digital transformation, helping them implement innovative solutions like those presented at Google Cloud Next 2025.

Interoperability and productivity: Google’s vision for intelligent enterprise agents

One of the key pillars of Google Next 2025 was the unveiling of a comprehensive ecosystem for the creation, deployment, and management of AI agents. Google’s vision centers on multi-agent systems as the future of intelligent automation and enterprise productivity. To make this vision a reality, key components were introduced to foster a robust, open, and interoperable environment.

Vertex AI, Google Cloud’s comprehensive platform for production AI, is further consolidated by orchestrating models, data, and, prominently, agents. Within Vertex AI, the highlight is the Agent Engine, an enterprise-grade managed runtime designed to deploy agents securely, with memory management and evaluation tools. This provides companies with the infrastructure needed to take their AI agents from development to production with the confidence of a robust and scalable platform.

To radically simplify the building of these agents, Google introduced the Agent Development Kit (ADK). This new open-source Python framework is designed to build sophisticated agents and multi-agent systems with granular control and flexibility, supporting debugging, versioning, and testing. The ADK integrates seamlessly with Vertex AI Agent Engine for managed deployments or with Cloud Run for those seeking greater control. The ADK’s openness, complete with documentation, GitHub repo, and quick-start guides, underscores Google’s commitment to an open, collaborative development ecosystem.
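
Conceptually, an ADK agent combines an instruction, a model, and a set of tools. The sketch below illustrates that pattern in plain, framework-agnostic Python; the class and function names are invented for illustration and are not the ADK’s actual API:

```python
# Framework-agnostic sketch of the "agent = instruction + model + tools"
# pattern described for the ADK. All names here are illustrative, not ADK APIs.
from dataclasses import dataclass, field
from typing import Callable

def get_weather(city: str) -> str:
    """Example tool: a stub that would normally call a weather API."""
    return f"Sunny in {city}"

@dataclass
class Agent:
    name: str
    instruction: str  # system prompt guiding the agent's behavior
    tools: dict[str, Callable] = field(default_factory=dict)

    def call_tool(self, tool_name: str, **kwargs) -> str:
        # In a real framework the model decides which tool to invoke;
        # here we dispatch directly to keep the sketch self-contained.
        if tool_name not in self.tools:
            raise KeyError(f"Unknown tool: {tool_name}")
        return self.tools[tool_name](**kwargs)

agent = Agent(
    name="weather_agent",
    instruction="Answer weather questions using the available tools.",
    tools={"get_weather": get_weather},
)
print(agent.call_tool("get_weather", city="Las Vegas"))  # Sunny in Las Vegas
```

In the real ADK, the framework wires the tool registry to the model so the agent chooses tools autonomously; the sketch only captures the structure of that contract.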

Inter-agent interoperability, a crucial aspect of building complex and heterogeneous multi-agent systems, is addressed with the launch of the Agent2Agent (A2A) protocol. This new open protocol aims to become the standard that allows AI agents developed with different tools or by various providers to communicate and collaborate securely and in a standardized way. A2A is backed by more than 50 partners and services, signaling likely broad industry adoption. Built on existing web standards (HTTP, JSON-RPC, SSE), the protocol is security-first, modality-agnostic, and capable of handling long-running tasks. Its functionality includes agent capability discovery (Agent Card), coordinated task management, and output format negotiation. It’s worth noting that A2A complements the Model Context Protocol (MCP): while MCP connects an agent to its tools, A2A connects agents to each other for collaboration. Google’s ambition is for A2A to achieve widespread adoption, becoming the de facto standard for agent communication, similar to HTTP for the web.
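
The Agent Card mentioned above is the JSON document an A2A agent publishes so that other agents can discover its capabilities before negotiating a task over JSON-RPC. The sketch below builds and reads such a card with the standard library; the field names are modeled on the protocol as publicly described and should be treated as illustrative, not as the normative schema:

```python
import json

# Illustrative A2A Agent Card. Field names follow the protocol's public
# description (name, capabilities, skills) but are not the normative schema.
agent_card = json.dumps({
    "name": "invoice-agent",
    "description": "Extracts line items from uploaded invoices.",
    "url": "https://agents.example.com/invoice",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True},          # e.g. SSE task updates
    "skills": [
        {"id": "extract", "name": "Extract line items",
         "description": "Parses an invoice and returns structured items."}
    ],
})

# A client agent would fetch the card over HTTP, inspect the advertised
# skills, then send the chosen task via JSON-RPC.
card = json.loads(agent_card)
skill_ids = [skill["id"] for skill in card["skills"]]
print(skill_ids)  # ['extract']
```

The point of the card is exactly this discovery step: a consuming agent never needs to know how the invoice agent is built, only what it advertises.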

To facilitate employee interaction with business information and AI agents, Google introduced Agentspace, an enterprise hub that combines Google’s multimodal search power with agents’ ability to act. Agentspace offers unified search that connects to popular enterprise applications like Drive, SharePoint, Jira, Confluence, and ServiceNow, enabling centralized multimodal search. It allows users to ask complex questions, receive summaries, syntheses, and recommendations based on enterprise knowledge, all while respecting access permissions. Furthermore, Agentspace supports interactions with pre-built Google agents (like Deep Research, Idea Generation, and NotebookLM Plus) or with custom agents to automate tasks and workflows. Built on Google Cloud’s secure infrastructure, Agentspace uses knowledge graphs to personalize the experience and respects access controls. Its integration with the Chrome browser further streamlines access to enterprise data and agents directly from the search bar.

Google’s agent strategy is multifaceted, aiming to catalyze an interoperable ecosystem by simultaneously launching an open-source development framework (ADK), an open communication protocol (A2A), a managed deployment platform (Agent Engine), and an enterprise user interface (Agentspace). The vision is that as more developers and enterprises adopt these tools and standards, the ecosystem’s value will grow exponentially. However, this new paradigm also brings challenges in governance and security across distributed multi-agent environments.

Gemini fully integrates into the software development lifecycle

Gemini, Google’s family of AI models, is not only establishing itself as a powerhouse in its own right but also as a cross-cutting capability embedded throughout the entire Google Cloud ecosystem. At Google Next 2025, its evolution was showcased as an active collaborator across the entire application lifecycle, from development to operations.

Gemini Code Assist goes beyond simple code completion, introducing advanced agent capabilities that allow it to handle complex development tasks, transforming it from a passive tool into an active collaborator. Its new features include tool integration, enabling easy connection to external services and APIs (Atlassian, Sentry, Snyk) using the @ symbol in chat. Additionally, it’s now available in Android Studio for professional development, joining the already supported VS Code, JetBrains IDEs, Cloud Workstations, and Cloud Shell Editor. Furthermore, Gemini Code Assist can now interact via a Kanban board, displaying the tasks it’s working on and enabling developers to interact directly with it—redefining software development collaboration.

In the area of cloud management, Gemini Cloud Assist simplifies operational tasks by assisting teams throughout the application lifecycle. A standout new feature is accelerated troubleshooting (Investigations), which uses AI to intelligently analyze logs, metrics, configuration changes, and runbooks, rapidly identifying root causes of issues and proposing solutions. Cost optimization is another key area, with Gemini Cloud Assist offering custom insights and recommendations to optimize cloud spend, identifying inefficiencies and suggesting improvements in resource utilization, with integration into tools like Cloud Hub and FinOps Hub.

Even in application design, Gemini Cloud Assist integrates with Application Design Center, allowing users to describe the desired infrastructure in natural language to automatically generate architecture diagrams and Terraform code. The contextual integration of AI assistance across multiple Google Cloud services (Storage, Observability, Firebase, Databases, Networking, Security, IAM) provides relevant help based on the specific task at hand.

The Gemini model family also expands with Gemini 2.5 Pro, Google’s most intelligent model to date, noted for its advanced reasoning capabilities. At the time of the event, it topped the Chatbot Arena leaderboard. Its ability to simulate a Rubik’s Cube with adjustable dimensions and generate complex physics simulations demonstrates a significant leap in reasoning and the production of robust interactive code. Also announced was Gemini 2.5 Flash, a low-latency, cost-efficient model with built-in reasoning capabilities that lets users control how much the model reasons, balancing quality against latency and budget. These models are available in AI Studio, Vertex AI, and the Gemini app.

Gemini’s power also extends into media generation, with Veo 2 as the leading video generation model, capable of creating multi-minute 4K videos with SynthID watermarks. It features new editing tools, such as camera presets, foreground/background control, and dynamic inpainting and outpainting. For audio generation, Lyria makes Google the first hyperscaler to offer a text-to-music model, transforming text prompts into 30-second music clips. These media generation capabilities are available in Vertex AI.

Discover Sngular’s Artificial Intelligence solutions to optimize processes and create new business opportunities.

Platforms and tools for application-centered development

At Google Cloud Next 2025, Google Cloud introduced a range of platforms and tools designed to abstract away infrastructure complexity and provide an integrated environment that enables teams to focus on building and operating applications cohesively.

Firebase Studio emerges as a new agentic, cloud-based development environment (IDE), an evolution of Project IDX, specifically designed to accelerate the creation and deployment of modern full-stack applications, especially those integrating AI. It enables rapid prototyping using the App Prototyping agent to generate functional web app prototypes (initially Next.js) from natural language descriptions, images, or sketches. The entire environment is based on Code OSS with integrated Gemini assistance, emulators, a terminal, and access to Open VSX extensions. Its native integration with Firebase and Vertex AI services streamlines the creation of complete experiences. Firebase Studio allows importing existing projects (Git), creating custom templates, and deploying to Firebase App Hosting, Cloud Run, or private infrastructure.

Google Kubernetes Engine (GKE) reinforces its position as a fundamental platform for running AI workloads at scale, with a particular focus on enabling more efficient inference. The new GKE Inference Gateway capability dramatically optimizes performance and cost when serving AI models on GKE, delivering significant reductions in consumption and latency, and increasing throughput through model-aware intelligent load balancing and dynamic routing. Simultaneously, Cluster Director for GKE simplifies the management of large VM clusters with accelerators (GPU/TPU) as a single entity, facilitating large-scale training and inference.

Cloud Run continues to be a cornerstone of Google Cloud’s serverless ecosystem, ideal for deploying containerized APIs and backends, including those created with Firebase Studio or as part of agent-based architectures. Specific policies (Guardrails) can be defined to control deployment behavior.

App Hub introduces a new way of organizing Google Cloud resources by logically grouping them into "applications" that reflect the business structure, rather than managing isolated individual resources. This simplifies visibility, troubleshooting, and cost optimization by providing consistent application context, integrating natively with tools like Application Design Center and Gemini Cloud Assist.

The convergence of these tools (Firebase Studio, GKE Inference Gateway, App Hub, along with Gemini Cloud Assist) signals a fundamental shift in cloud management, moving away from detailed infrastructure configuration toward managing the entire application lifecycle. This will allow teams to focus on the functional value of their applications and adopt advanced technologies with less operational friction.

Putting developers at the center of change

The Google Next 25 Developer Keynote focused on empowering developers to build the next generation of applications. The keynote highlighted three key areas of innovation: the ability to build intelligent agents, the increase in software engineering productivity with Code Assist and Cloud Assist, and the potential of Gemini models with their massive context window and multimodal support.

The presentation of the Agent Development Kit (ADK) demonstrated the simplicity and power of building agents with instructions, tools, and models. The ability to create an agent that generates a professional proposal in PDF format from ideas and blueprints highlights the ADK’s potential to automate complex tasks and save developers significant time.

Another important aspect explored was the creation of multi-agent systems using ADK and Vertex AI, showcasing how to orchestrate multiple specialized agents to handle an end-to-end process. A demonstration of debugging a multi-agent system with Cloud Investigations highlighted the tools Google provides to ease the development and maintenance of complex agent-assisted applications. The launch of the Agent2Agent (A2A) protocol and its potential to enable communication and collaboration between agents built on different platforms and by different providers reinforces Google’s commitment to an open and interoperable ecosystem.
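
The orchestration pattern demonstrated in the keynote, a coordinator delegating steps of an end-to-end process to specialized agents, can be sketched in plain Python. This is a deliberately simplified illustration of the pattern, not the ADK or Vertex AI API; the agent functions and routing plan are invented for the example:

```python
# Minimal sketch of a coordinator delegating work to specialized agents.
# In a real multi-agent system each "agent" would be an LLM-backed worker
# and the coordinator would decide the routing itself; here both are stubs.
from typing import Callable

def research_agent(task: str) -> str:
    return f"[research] findings for: {task}"

def writer_agent(task: str) -> str:
    return f"[writer] draft for: {task}"

class Coordinator:
    """Routes each step of a workflow to the agent registered for its role."""
    def __init__(self, agents: dict[str, Callable[[str], str]]):
        self.agents = agents

    def run(self, plan: list[tuple[str, str]]) -> list[str]:
        # plan: ordered (role, task) pairs; fixed here for clarity.
        return [self.agents[role](task) for role, task in plan]

coordinator = Coordinator({"research": research_agent, "writer": writer_agent})
results = coordinator.run([
    ("research", "market sizing"),
    ("writer", "executive summary"),
])
print(results[0])  # [research] findings for: market sizing
```

What the managed platforms add on top of this skeleton is precisely what the sketch omits: model-driven routing, shared memory between agents, and the debugging and evaluation tooling mentioned above.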

Google also emphasized the flexibility offered to developers by allowing them to use Gemini in their preferred IDEs, such as Windsurf, and by providing access to a wide range of models through Vertex AI Model Garden, including third-party models like Llama 4. This underscores choice and openness as core principles of Google Cloud’s developer strategy.

The invisible muscle behind the AI revolution

Behind all these innovations lies a robust and secure global infrastructure, optimized for the demands of AI.

Google’s AI Hypercomputer is a supercomputing system designed to simplify AI deployment, enhance performance, and optimize costs. It combines leading hardware platforms with a unified software stack and consumption model, allowing users to choose the hardware that best suits their needs and transition smoothly between systems. In addition to the Ironwood TPUs, Google Cloud has enhanced its GPU portfolio with the availability of A4X and A4 VMs, powered by NVIDIA’s GB200 and B200 Blackwell GPUs, becoming the first cloud provider to offer both. It was also announced that Google Cloud will be among the first to offer NVIDIA’s next-generation Vera Rubin GPUs, delivering up to 15 exaflops of FP4 inference performance per rack.

To meet the intense storage needs of AI workloads, new storage innovations were introduced. Hyperdisk Exapools offer the highest aggregate performance and capacity per AI cluster of any hyperscaler. Anywhere Cache keeps data close to accelerators, improving storage latency by up to 70% to reduce training time. Rapid Storage, Google’s first zonal object storage solution, delivers five times lower latency for random reads and writes compared to the fastest comparable cloud alternative.

In the area of security, Google introduced Google Unified Security, which integrates visibility, threat detection, and response into a single interface covering the entire attack surface. Additionally, AI security agents powered by Gemini were announced to automate critical tasks within Google Security Operations (smart alert triage) and Google Threat Intelligence (advanced malware analysis). Google also announced a definitive agreement to acquire Wiz, a leading multi-cloud security platform, to provide better cybersecurity options for companies and governments worldwide.

Customers are also driving innovation with Google Cloud technology

Throughout Google Next 2025, numerous examples were shared of how customers across industries are using Google Cloud’s AI to drive innovation and transform their businesses. McDonald's is integrating AI at the core of its operations to improve customer experience and streamline team workflows. Intuit is simplifying tax preparation with Document AI, part of Vertex AI. Honeywell has embedded Gemini in its product development to optimize lifecycle management for millions of products. Deutsche Bank has created DB Lumina, a research agent powered by Gemini and Vertex AI, to improve productivity and data analysis in a highly regulated industry.

In customer service, Reddit launched Reddit Answers, a new AI-powered way to get information and recommendations based on real user conversations. Lowe’s is revolutionizing product discovery with Vertex AI Search. Mercado Libre has deployed Vertex AI Search across millions of listings to help customers find products they love faster. Verizon is enhancing its customer experience with Google Cloud’s Customer Engagement Suite, using AI to provide personalized insights to service representatives.

Google Cloud’s creative AI capabilities are also being leveraged by companies like WPP, Monks.Flow, and The Brandtech Group to revolutionize marketing and content production. The partnership with Adobe to integrate Imagen 3 and Veo 2 into applications like Adobe Express promises to further democratize access to cutting-edge creative tools. The immersive Wizard of Oz experience at the Las Vegas Sphere, enriched by Google AI and Veo 2, exemplifies the potential of AI to reimagine entertainment.

In the data domain, Mattel is using Gemini to synthesize millions of consumer feedback points and gain insights to improve its products. Spotify relies on BigQuery to manage its massive data scale and deliver personalized experiences to millions of users. The Nevada Department of Employment, Training and Rehabilitation is using agents powered by BigQuery and Vertex AI to accelerate unemployment claim processing.

These examples demonstrate the tangible impact of Google Cloud AI across industries, driving efficiency, enhancing customer experiences, fostering innovation, and unlocking new possibilities.

Discover Sngular’s success stories in implementing AI and Cloud solutions, among others, for leading companies.

A future powered by Artificial Intelligence

Google Next 2025 made it clear that Artificial Intelligence is the driving force shaping the future of technology and digital transformation. 

Google’s commitment to investing in infrastructure, cutting-edge models, open development platforms, and an interoperable agent ecosystem positions the company as a leader in this new era. Furthermore, its vision of democratizing access to AI and empowering developers and businesses to build innovative solutions was the guiding thread of an event that promises to have a lasting impact on how we interact with technology and build the digital future. 

The next edition, scheduled for April 22–24, 2026, in Las Vegas, is already generating anticipation for the new innovations Google Cloud will surely unveil.

Miguel Ángel Gombau, Tech Marketing Manager at SNGULAR

Experienced engineer and marketing manager with a demonstrated history of working in enterprise and corporate business, solutions, technological innovation, and strategic and digital marketing.

