
Mastering Google's Interactions API: A Unified Gateway to Gemini Models and Deep Research Agent

12 min read
Raphaël MANSUY
Elitizon Ltd

The AI development landscape is shifting from stateless request-response patterns to stateful, multi-turn agentic workflows. Google's new Interactions API provides a unified interface designed specifically for this new era—offering a single gateway to both raw Gemini models and the fully managed Deep Research Agent.

In one sentence: The Interactions API is a unified endpoint for interacting with Gemini models and agents, featuring server-side state management, background execution for long-running tasks, and native support for the Deep Research Agent.
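To make that concrete, here is a minimal, hypothetical sketch of the pattern over REST. The base URL, resource path, and field names are illustrative assumptions rather than the actual API surface; only the shape of the workflow (create a server-side interaction, run it in the background, poll for the result) reflects the description above.

```python
import time
import requests

# Hypothetical sketch only: the base URL, resource names, and JSON fields below
# are illustrative assumptions, not the real Interactions API surface.
BASE = "https://example.googleapis.com/v1"
API_KEY = "YOUR_API_KEY"

# Start a server-side interaction; conversation state lives on the server.
created = requests.post(
    f"{BASE}/interactions",
    params={"key": API_KEY},
    json={
        "agent": "deep-research",  # or a raw Gemini model name
        "input": "Survey recent work on context engineering for AI agents.",
        "background": True,        # long-running task, executed asynchronously
    },
).json()

# Poll the server-managed state until the background task finishes.
while True:
    status = requests.get(
        f"{BASE}/interactions/{created['id']}", params={"key": API_KEY}
    ).json()
    if status.get("state") in ("COMPLETED", "FAILED"):
        break
    time.sleep(5)

print(status.get("output"))
```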

Context Engineering: Inside Google's Architecture for Production AI Agents

22 min read
Raphaël MANSUY
Elitizon Ltd

The progression of Generative AI from novelty to enterprise cornerstone has necessitated a fundamental shift in system construction methodology. Early LLM adoption emphasized "prompt engineering"—ad-hoc string concatenation, trial-and-error phrasing, and minimal state management. While sufficient for simple chatbots, this approach collapses under production demands: reliability, observability, latency, and cost-efficiency.

The Google Gen AI Agent Development Kit (ADK) signals the arrival of Context Engineering—a discipline that treats context not as a mutable string buffer but as a compiled view over rich stateful systems.
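As a rough illustration of the idea (not the ADK's actual API), the sketch below compiles the model's input as a view over structured session state instead of appending to a mutable string; every type and field name here is a hypothetical stand-in.

```python
from dataclasses import dataclass, field

# Illustrative stand-ins only; the ADK's real session and state objects differ.
@dataclass
class SessionState:
    system_instruction: str
    memory_snippets: list[str] = field(default_factory=list)  # retrieved long-term facts
    events: list[dict] = field(default_factory=list)          # full turn-by-turn history

def compile_context(state: SessionState, max_recent_events: int = 10) -> list[dict]:
    """Compile the model's input as a view over structured state, not a mutable string."""
    messages = [{"role": "system", "content": state.system_instruction}]
    if state.memory_snippets:
        messages.append({
            "role": "system",
            "content": "Relevant memory:\n" + "\n".join(state.memory_snippets),
        })
    # Only the most recent events are projected into the view; the rest stay in
    # state, where they remain available for retrieval or summarization.
    messages.extend(state.events[-max_recent_events:])
    return messages
```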

Industrialization of Agency

Fast-track Your GenAI Agents: Deep Dive into the Google Cloud Agent Starter Pack

5 min read
Raphaël MANSUY
Elitizon Ltd

Building a GenAI agent prototype on your laptop is magic. You write a few lines of Python, hook up an LLM, and suddenly you’re chatting with your data. But taking that magic from a Jupyter notebook to a production environment—secure, scalable, and observable—is where the real headache begins.

Enter the Google Cloud Agent Starter Pack.

This open-source repository is Google’s answer to the "prototype purgatory" problem. It’s a comprehensive toolkit designed to bootstrap production-ready generative AI agents on Google Cloud Platform (GCP) in minutes, not months.

Observing ADK Agents: OpenTelemetry Tracing with Jaeger

7 min read
Raphaël MANSUY
Elitizon Ltd

You build an AI agent with Google ADK. It works. But when you ask "Why did the agent choose that tool?" or "Which LLM call took 5 seconds?" – you're flying blind.

Enter distributed tracing: Jaeger visualizes every step your agent takes, from reasoning to tool execution to LLM calls. ADK has built-in OpenTelemetry support, making this a breeze... once you understand one crucial gotcha.

This post shows you the complete picture: what to do, why it matters, and the one thing that trips up most developers.

Jaeger UI showing traces from an ADK agent
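For orientation, here is a minimal sketch of pointing the standard OpenTelemetry Python SDK at a local Jaeger instance; the service name and OTLP endpoint are assumptions, and the specific gotcha the post covers is not reproduced here.

```python
# pip install opentelemetry-sdk opentelemetry-exporter-otlp
# Minimal sketch: export traces to a local Jaeger via OTLP gRPC (default port 4317).
# The service name and endpoint are assumptions for illustration.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "adk-agent"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
# Register the provider early so spans created while the agent runs attach to it.
trace.set_tracer_provider(provider)
```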

Optimize Your Google ADK Agent's SOP with GEPA: Stop Manual Tweaking

6 min read
Raphaël MANSUY
Elitizon Ltd

Your agent's instructions are its Standard Operating Procedure (SOP). In Google ADK, this SOP lives in the agent's prompt—the detailed instructions that guide every decision, every tool call, every response.

The problem? Writing the perfect SOP manually is nearly impossible. You add rules to fix failures. Each new rule breaks something else. Your agent becomes unpredictable. Your SOP becomes a mess of band-aids.

The solution? GEPA (Genetic-Pareto prompt evolution)—automatic SOP optimization that learns from failures and evolves better instructions through real testing.
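To show the shape of that loop (and only the shape; this is not GEPA's actual implementation), here is a conceptual sketch in which `run_agent` and `mutate_instruction` are hypothetical stand-ins for executing the ADK agent on a test case and asking an LLM to rewrite a failing instruction.

```python
import random

# Conceptual sketch of an evolutionary SOP-optimization loop; NOT the real GEPA code.
# `run_agent(instruction, case)` and `mutate_instruction(instruction, failures)` are
# hypothetical stand-ins: the first runs the ADK agent with a candidate instruction,
# the second asks an LLM to rewrite the instruction in light of observed failures.

def score(instruction, test_cases, run_agent):
    """Fraction of test cases the agent passes with this instruction."""
    return sum(run_agent(instruction, c) == c["expected"] for c in test_cases) / len(test_cases)

def evolve_sop(seed, test_cases, run_agent, mutate_instruction,
               generations=10, children=3):
    best = seed
    for _ in range(generations):
        failures = [c for c in test_cases if run_agent(best, c) != c["expected"]]
        if not failures:
            break  # nothing left to learn from
        # Breed candidates by rewriting the current best against sampled failures.
        candidates = [best] + [
            mutate_instruction(best, random.sample(failures, min(3, len(failures))))
            for _ in range(children)
        ]
        best = max(candidates, key=lambda ins: score(ins, test_cases, run_agent))
    return best
```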

Gemini Enterprise: Why Your AI Agents Need Enterprise-Grade Capabilities

22 min read
Raphaël MANSUY
Elitizon Ltd

The BIG Question: Why Should You Care?

Your AI agents work great in development. They handle complex workflows, reason through problems, and integrate with your tools. But in production, you face scale, security, compliance, and reliability demands that standard setups cannot guarantee.

Gemini Enterprise changes this.

When building AI agents for enterprises with data privacy concerns or for regulated industries, you need to understand the gap between standard AI models and enterprise-grade solutions.

TIL: Context Compaction with Google ADK 1.16

9 min read
Raphaël MANSUY
Elitizon Ltd

Long agent conversations accumulate thousands of tokens. After 100+ exchanges, sending the entire history to the model becomes expensive and slow. Context Compaction fixes this by intelligently summarizing old interactions.

In one sentence: Context Compaction automatically summarizes older conversation events using an LLM, reducing token usage while preserving conversational context.
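As a rough sketch of the idea (not ADK 1.16's actual compaction API), the function below replaces older events with a single LLM-written summary and keeps recent turns verbatim; `summarize` is a hypothetical stand-in for that LLM call.

```python
# Conceptual sketch of context compaction; this is not ADK 1.16's actual API.
# `summarize` is a hypothetical stand-in for an LLM call that condenses old turns.

def compact_history(events: list[dict], keep_recent: int, summarize) -> list[dict]:
    """Replace older events with one LLM-written summary; keep recent turns verbatim."""
    if len(events) <= keep_recent:
        return events  # nothing to compact yet
    older, recent = events[:-keep_recent], events[-keep_recent:]
    summary = summarize(older)  # e.g. "User compared two vendors; agent drafted an email..."
    return [{"role": "system", "content": f"Summary of earlier conversation: {summary}"}] + recent
```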