Agent Patterns
Reusable patterns for agents that consume and produce WireLog analytics.
Pattern 0: Schema discovery
Before writing any queries, inspect the project’s data to understand what events exist and what properties they carry.
```
# What event types exist? What properties does each have?
inspect * | last 30d

# Deep dive into a specific event's schema
inspect signup | last 7d
```

The agent reads the inspect output to learn event names, property keys, coverage percentages, data types, and sample values. This replaces guesswork with data-driven query construction.
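If the inspect output arrives as a Markdown table, the agent needs to turn it into structured data before it can reason over it. A minimal parser sketch, assuming a simple property/type/coverage column layout (the column names here are illustrative, not WireLog's documented format):

```python
# Parse a Markdown table (e.g. inspect output) into a list of row dicts.
# NOTE: the column layout in `sample` is an assumption for illustration.

def parse_markdown_table(text: str) -> list[dict]:
    lines = [l for l in text.strip().splitlines() if l.strip().startswith("|")]
    if len(lines) < 3:
        return []
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[2:]:  # skip the |---| separator row
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append(dict(zip(header, cells)))
    return rows

sample = """
| property | type   | coverage |
|----------|--------|----------|
| plan     | string | 98%      |
| seats    | number | 72%      |
"""
schema = parse_markdown_table(sample)
```

With the schema in hand, the agent can check coverage before filtering on a property (a 72%-coverage property silently drops rows).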
Pattern 1: Morning briefing
Agent runs discovery and key metric queries on a schedule, summarizes trends.
```
# Discover what happened
* | last 24h | count by event_type | top 10

# Weekly signup trend
signup | last 7d | count by day

# Funnel health
funnel signup -> activate | last 7d
```

The agent reads the Markdown tables, compares to prior periods, and produces a natural-language summary: “Signups up 12% WoW. Activation rate steady at 65%. No anomalies.”
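The "compare to prior periods" step can be sketched as a small helper that turns two counts into the one-line summary (counts below are illustrative):

```python
# Summarize a metric's week-over-week change as one sentence.
def wow_summary(metric: str, current: int, previous: int) -> str:
    change = (current - previous) / previous * 100
    direction = "up" if change >= 0 else "down"
    return f"{metric} {direction} {abs(change):.0f}% WoW ({previous} -> {current})."

line = wow_summary("Signups", 1120, 1000)
```

The agent concatenates one such line per key metric to build the briefing.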
Pattern 1b: Script-tag-only baseline
When a project has only the browser Script Tag installed, start analysis with page_view and sessions queries before assuming custom product events exist.
```
# Traffic trend
page_view | last 7d | count by day

# Top pages
page_view | last 30d | count by _path | top 20

# Focused path allowlist
page_view | where _path in ["/","/pricing","/docs"] | last 7d | count

# Navigation flow
paths from page_view | last 30d | by _path

# Acquisition quality
sessions | last 30d | count by session.utm_source
```

This avoids overfitting to non-existent custom events and gives an immediate baseline on traffic, journeys, and acquisition channels.
Pattern 2: Agent self-tracking
The agent tracks its own actions as events. Useful for auditing agent behavior, measuring tool usage, and debugging failures.
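A minimal sketch of emitting such an event over HTTP, assuming a generic JSON ingest endpoint; the URL, headers, and send mechanics are placeholders, not WireLog's documented API:

```python
import json
import time
import urllib.request

INGEST_URL = "https://example.invalid/ingest"  # placeholder endpoint, not a real API

def build_agent_action(action: str, input_q: str, output: str, duration_ms: int) -> dict:
    """Assemble an agent_action event payload."""
    return {
        "event_type": "agent_action",
        "user_id": "agent-001",
        "event_properties": {
            "action": action,
            "input": input_q,
            "output": output,
            "duration_ms": duration_ms,
        },
    }

def send_event(event: dict) -> None:
    # Fire-and-forget POST; production code should add auth, errors, retries.
    req = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

start = time.monotonic()
# ... run the analytics query here ...
event = build_agent_action(
    "query_analytics",
    "signup | last 7d | count",
    "142 signups",
    int((time.monotonic() - start) * 1000),
)
```

Wrapping every tool call in this pattern gives the agent a queryable audit trail for free.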
Track:
```
{
  "event_type": "agent_action",
  "user_id": "agent-001",
  "event_properties": {
    "action": "query_analytics",
    "input": "signup | last 7d | count",
    "output": "142 signups",
    "duration_ms": 340,
    "model": "claude-opus-4-6"
  }
}
```

Query own performance:

```
agent_action | last 7d | count by event_properties.action

agent_action | where event_properties.action = "query_analytics" | last 7d | avg event_properties.duration_ms
```

Pattern 3: Anomaly detection
Agent compares current period to previous period and flags significant changes.
Current week:
```
signup | last 7d | count by day
```

Previous week:

```
signup | from 2026-02-08 to 2026-02-15 | count by day
```

The agent compares the two tables row-by-row. If any day drops more than 20% below the same day in the prior week, it flags the anomaly and investigates:

```
signup | where _platform = "mobile" | last 7d | count by day
signup | where _platform = "web" | last 7d | count by day
```

“Mobile signups dropped 35% on Tuesday. Web was flat. Investigate the mobile signup flow.”
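The row-by-row comparison can be sketched as follows (day counts are illustrative):

```python
# Flag days where the current count drops more than `threshold`
# below the same weekday in the prior week.
def find_anomalies(current: dict, previous: dict, threshold: float = 0.20) -> list[str]:
    flagged = []
    for day, prev_count in previous.items():
        cur_count = current.get(day, 0)
        if prev_count > 0 and (prev_count - cur_count) / prev_count > threshold:
            flagged.append(f"{day}: {prev_count} -> {cur_count} "
                           f"({(cur_count - prev_count) / prev_count:+.0%})")
    return flagged

previous_week = {"Mon": 100, "Tue": 120, "Wed": 95}
current_week = {"Mon": 104, "Tue": 78, "Wed": 97}
alerts = find_anomalies(current_week, previous_week)
```

Each alert then triggers the segmented follow-up queries above to localize the drop.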
Pattern 4: User investigation
Agent drills into a specific user’s activity timeline.
```
user "alice@acme.org" | last 30d | list
```

Returns all events for that user in chronological order. The agent reads the timeline, identifies patterns, and answers questions: “Alice signed up on Feb 1, activated on Feb 3, but hasn’t returned since Feb 10.”
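Spotting the "hasn't returned since" gap from the timeline can be sketched as (event shape and timestamps are illustrative):

```python
from datetime import date

# Given a chronological event timeline, report days since last activity.
def days_inactive(timeline: list[dict], today: date) -> int:
    last = max(date.fromisoformat(e["time"][:10]) for e in timeline)
    return (today - last).days

timeline = [
    {"time": "2026-02-01T09:12:00Z", "event_type": "signup"},
    {"time": "2026-02-03T14:30:00Z", "event_type": "activate"},
    {"time": "2026-02-10T08:05:00Z", "event_type": "page_view"},
]
gap = days_inactive(timeline, today=date(2026, 2, 20))
```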
Drill into a company:
```
* | where user.email_domain = "acme.org" | last 30d | count by event_type
* | where user.email_domain = "acme.org" | last 30d | unique distinct_id
```

Pattern 5: Automated funnel analysis
Agent discovers events, builds funnels, and identifies conversion bottlenecks.
```
# Step 1: Discover events and their properties
inspect * | last 30d

# Step 2: Build the funnel
funnel signup -> activate -> purchase | last 30d

# Step 3: Break down by platform
funnel signup -> activate -> purchase | last 30d | by _platform

# Step 4: Break down by acquisition channel
funnel signup -> activate -> purchase | last 30d | by user.acquisition_channel
```

The agent identifies the biggest drop-off, segments by relevant dimensions, and recommends action: “Activation is the bottleneck (66% -> 25%). Mobile users convert at half the rate of web. Prioritize mobile onboarding improvements.”
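Identifying the biggest drop-off from the funnel counts can be sketched as (step counts are illustrative):

```python
# Find the funnel step with the worst step-to-step conversion rate.
def biggest_dropoff(steps: list[tuple[str, int]]) -> str:
    worst_rate, worst_pair = 1.0, None
    for (a, n_a), (b, n_b) in zip(steps, steps[1:]):
        rate = n_b / n_a if n_a else 0.0
        if rate < worst_rate:
            worst_rate, worst_pair = rate, (a, b)
    a, b = worst_pair
    return f"Biggest drop-off: {a} -> {b} ({worst_rate:.0%} convert)"

funnel = [("signup", 1000), ("activate", 660), ("purchase", 250)]
report = biggest_dropoff(funnel)
```

The agent then re-runs the same computation per segment (platform, channel) to find where the drop concentrates.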
Pattern 6: Feedback loops
Agent tracks the outcomes of its own recommendations, then measures whether they worked.
Track the recommendation:
```
{
  "event_type": "agent_recommendation",
  "user_id": "agent-001",
  "event_properties": {
    "recommendation": "add_mobile_onboarding",
    "target_metric": "mobile_activation_rate",
    "baseline": "32%"
  }
}
```

Later, query whether the target metric improved:

```
funnel signup -> activate | where _platform = "mobile" | last 7d
```

Compare to the baseline. The agent closes the loop: “Mobile activation improved from 32% to 41% after onboarding changes were deployed.”
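The before/after comparison that closes the loop can be sketched as:

```python
# Compare a target metric against its recorded baseline and summarize.
def close_loop(recommendation: str, baseline: float, current: float) -> str:
    delta = (current - baseline) * 100  # percentage points
    verdict = "improved" if delta > 0 else "did not improve"
    return (f"{recommendation}: metric {verdict} "
            f"from {baseline:.0%} to {current:.0%} ({delta:+.0f}pp)")

summary = close_loop("add_mobile_onboarding", 0.32, 0.41)
```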
Query recommendation outcomes:
```
agent_recommendation | last 30d | list
```

The agent reviews its past recommendations and their outcomes, learning which types of suggestions produce results.
Next steps
- Agents overview — integration paths (skills, HTTP)
- Query language — full DSL reference