POD Charter -- AI Support Copilot

Engagement: AI Support Copilot Pilot
Version: 1.0
Date: 2026-04-30
Framework ref: Doc 01, Section 6

Signatories:

| Role                   | Name     | Signature            | Date |
|------------------------|----------|----------------------|------|
| POD Lead               | Amit     | ____________________ |      |
| Implementation Manager | Shivani  | ____________________ |      |
| Client Sponsor         | Prasanna | ____________________ |      |

1. Mission

Deliver a working AI copilot that enables support agents to resolve tickets faster and more consistently by automating classification, knowledge retrieval, action recommendation, and response drafting -- targeting 85% accuracy and autonomous handling of 30% of recurring tickets, while maintaining human approval on every response. The pilot will validate feasibility, establish baseline metrics, and produce a complete knowledge transfer package for production deployment.


2. Success Criteria

| # | Metric                         | Type       | Target | Measurement Method                                                          |
|---|--------------------------------|------------|--------|-----------------------------------------------------------------------------|
| 1 | Classification accuracy        | AI Quality | >= 85% | Exact match on category + priority against golden dataset                   |
| 2 | Retrieval accuracy             | AI Quality | >= 85% | Expected KB article appears in top-K retrieved set                          |
| 3 | Action recommendation accuracy | AI Quality | >= 85% | Exact match on recommended action (Reply / Ask / Escalate)                  |
| 4 | Response acceptance rate       | Business   | >= 70% | % of drafts agents accept without major rewriting                           |
| 5 | Auto-answer coverage           | Business   | 30%    | % of recurring tickets the copilot handles autonomously (with human approval) |

Go-live validation: 1,000 synthetic questions evaluated by client's support team lead before production readiness sign-off.
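The two AI-quality measurements above (exact-match classification and top-K retrieval hits) can be sketched as simple scoring functions. This is illustrative only; the function and field names are assumptions, not the pilot's actual evaluation harness API.

```python
def classification_accuracy(predictions, golden):
    """Exact match on (category, priority) against the golden dataset."""
    hits = sum(
        1 for pred, gold in zip(predictions, golden)
        if pred["category"] == gold["category"]
        and pred["priority"] == gold["priority"]
    )
    return hits / len(golden)


def retrieval_accuracy(retrieved_sets, expected_articles, k=5):
    """Expected KB article appears anywhere in the top-K retrieved set."""
    hits = sum(
        1 for top_k, expected in zip(retrieved_sets, expected_articles)
        if expected in top_k[:k]
    )
    return hits / len(expected_articles)
```

Both return a fraction in [0, 1], compared directly against the 0.85 threshold when gating a release.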


3. Scope and Out-of-Scope

In Scope (Phase 1 Pilot)

  • Ticket classification (category, priority, sentiment)
  • KB article retrieval via hybrid search (vector + BM25)
  • Action recommendation (Reply / Ask for more info / Escalate) with reasoning
  • Response drafting grounded in KB articles with citations
  • Confidence scoring per output
  • Feedback loop (agent rates/edits responses, system stores corrections)
  • Guardrails (profanity filter, misuse prevention, graceful failure)
  • Standalone web application (three-panel dashboard: ticket queue, detail, copilot sidebar)
  • Evaluation harness with automated scoring
  • 1,000-question synthetic evaluation run
  • Complete documentation and knowledge transfer package
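The hybrid search item above combines a keyword ranking (BM25) with a vector-similarity ranking. The charter does not mandate a fusion method, so as one hedged example, reciprocal rank fusion (RRF) merges the two ranked lists without needing to normalize their scores:

```python
from collections import defaultdict


def reciprocal_rank_fusion(bm25_ranking, vector_ranking, k=60):
    """Merge two ranked lists of KB article IDs into one hybrid ranking.

    Each document scores 1 / (k + rank) per list it appears in; documents
    ranked highly by both retrievers rise to the top. RRF is one common
    strategy, shown here for illustration only.
    """
    scores = defaultdict(float)
    for ranking in (bm25_ranking, vector_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

A design note: the constant k=60 damps the influence of any single list's top rank, which makes RRF robust when BM25 and vector scores live on incompatible scales.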

Out of Scope (Phase 1)

  • Freshdesk API integration (architecture designed for it, not built)
  • Multi-language support (English only)
  • Customer-facing AI (agent-facing only)
  • Auto-send / autonomous resolution without human approval
  • Live KB refresh (static dataset for pilot)
  • Production deployment, scaling, load testing
  • Phase 2 features (not yet defined)

4. Operating Cadence

| Ceremony            | Frequency       | Duration                                 | Participants             | Owner   |
|---------------------|-----------------|------------------------------------------|--------------------------|---------|
| Sprint              | 2 sprints total | Sprint 1: May 1-10; Sprint 2: May 11-16  | Full POD                 | Shivani |
| Weekly status email | Weekly          | Async (written)                          | Prasanna, POD            | Shivani |
| Weekly sync call    | Weekly          | 30 min (Google Meet)                     | Prasanna, Amit, Shivani  | Shivani |
| Sprint demo         | Per sprint      | 30-45 min                                | Prasanna, full POD       | Amit    |
| POD standup         | Daily           | 15 min (internal)                        | Full POD                 | Amit    |
| Sprint retro        | Per sprint      | 30 min (internal)                        | Full POD                 | Amit    |

Channels: Email for written updates, Google Meet for calls.

Decision turnaround: The client commits to a turnaround of 2 hours to 1 business day for all decisions.


5. Decision Rights

| Tier            | Authority              | Examples                                                                                                   | Escalation needed?                     |
|-----------------|------------------------|------------------------------------------------------------------------------------------------------------|----------------------------------------|
| POD-Internal    | POD Lead + owning role | Library choice, prompt structure, code style, sprint task ordering                                         | No                                     |
| POD Lead        | Amit, informed by POD  | Architecture patterns, model selection, evaluation thresholds, release readiness, tech stack changes       | No                                     |
| Client Approval | Prasanna + Shivani     | Scope changes, milestone shifts, data access changes, success criteria changes, production deployment decisions | Yes -- Shivani raises to Prasanna  |
| Gyde Leadership | Engineering Director   | Deviation from framework non-negotiables, commercial changes                                               | Yes -- Amit raises to Gyde leadership  |

6. Escalation Paths

| Escalation Type       | Gyde Contact               | Client Contact |
|-----------------------|----------------------------|----------------|
| Technical             | Amit (POD Lead)            | Prasanna       |
| Delivery / Commercial | Shivani (PM)               | Prasanna       |
| Governance / Security | Shubham (Governance Eng)   | Prasanna       |
| 2nd Level (Gyde)      | Shubham (Escalation SPOC)  | --             |

Escalation protocol: Same-day transparency. If any risk materializes, the client hears about it within the same business day via email, not deferred to the weekly update.


7. Definition of Done

An increment is releasable to the client environment when ALL of the following are met:

| # | Gate                                                   | Verified by     |
|---|--------------------------------------------------------|-----------------|
| 1 | All acceptance criteria for committed stories are met  | Nishka (QA)     |
| 2 | Evaluation metrics are at or above target thresholds   | Nishka + Amit   |
| 3 | No critical or high-severity bugs open                 | Nishka          |
| 4 | Code reviewed and merged to main branch                | Amit            |
| 5 | All prompts and data versioned in source control       | Atharva + Nancy |
| 6 | Security review passed (no blocking findings)          | Shubham         |
| 7 | Documentation updated (architecture, ADRs, runbooks)   | Amit            |
| 8 | Demo-ready in staging environment                      | Amit            |

8. Risks and Assumptions

Top 5 Risks

| # | Risk                                                                  | Likelihood | Impact | Mitigation                                                                                      | Owner            |
|---|-----------------------------------------------------------------------|------------|--------|-------------------------------------------------------------------------------------------------|------------------|
| 1 | Low dataset diversity (11 unique scenarios) limits model generalization | High     | High   | Generate diverse synthetic data early; flag limitation to client                                | Nishka + Atharva |
| 2 | Tight timeline (16 days) with hard deadlines leaves no buffer         | High       | High   | Ruthless prioritization; cut polish, not core capabilities; daily standup to catch blockers early | Shivani + Amit |
| 3 | Gemini accuracy may not reach 85% on first pass for all metrics       | Medium     | High   | Build LLM-agnostic architecture; keep fallback to GPT-4o or Claude; iterate prompts rapidly     | Atharva + Amit   |
| 4 | Reporting category has zero training data but is in eval set          | Medium     | Medium | Add 2-3 synthetic Reporting tickets; accept and document cold-start performance                 | Nancy            |
| 5 | On-prem handover requirements may surface late constraints            | Low        | Medium | Document all dependencies and infrastructure early; use only self-hostable components           | Amit             |

Assumptions

| # | Assumption                                                              | Impact if wrong                                   |
|---|-------------------------------------------------------------------------|---------------------------------------------------|
| 1 | Excel dataset is representative of production ticket patterns           | Pilot accuracy won't predict production accuracy  |
| 2 | Prasanna is available for decisions within 1 business day               | Sprint velocity drops; scope may slip             |
| 3 | GCP Vertex AI APIs are stable and available throughout the pilot        | Need to switch LLM provider mid-sprint            |
| 4 | 85% accuracy is achievable with the provided KB content                 | May need to renegotiate thresholds or expand KB   |
| 5 | Team members are dedicated to this engagement (no competing priorities) | Deliverables at risk; may miss hard deadlines     |

Non-Negotiable Adherence

Per Doc 01, Section 5.1, this engagement adheres to all five framework non-negotiables:

| # | Non-Negotiable                        | How We Fulfill It                                                                                  |
|---|---------------------------------------|----------------------------------------------------------------------------------------------------|
| 1 | Threat modeling and secrets management | Shubham delivers threat model; all API keys via GCP Secret Manager or env vars, never in code     |
| 2 | Evaluation before production          | Eval harness gates every release; 1,000-question run before go-live                                |
| 3 | Versioned data and prompts            | All prompts, datasets, and configs in Git; every change is a tracked commit                        |
| 4 | Audit trail for AI decisions          | Every copilot decision logged with input, output, confidence, reasoning, and sources               |
| 5 | Incident response readiness           | Runbooks for top failure modes included in knowledge transfer package                              |
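Non-negotiable #4 requires each copilot decision to be logged with input, output, confidence, reasoning, and sources. A minimal sketch of such a record follows; the class and field names are illustrative assumptions, not the pilot's actual logging schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class CopilotAuditRecord:
    """One logged copilot decision (illustrative schema)."""
    ticket_id: str
    decision_type: str   # e.g. "classify", "retrieve", "recommend", "draft"
    input_text: str      # what the copilot saw
    output: str          # what it produced
    confidence: float    # per-output confidence score, 0.0-1.0
    reasoning: str       # model's stated rationale
    sources: list = field(default_factory=list)  # cited KB article IDs
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_log_entry(self) -> dict:
        """Serialize to a plain dict suitable for structured logging."""
        return asdict(self)
```

Emitting one such record per decision gives the audit trail the charter requires and doubles as raw material for the feedback loop and incident runbooks.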

POD Composition

| Role                   | Name    | Key Responsibilities                                                  |
|------------------------|---------|-----------------------------------------------------------------------|
| POD Lead               | Amit    | Architecture, UI, code review, tech decisions, demos                  |
| AI Engineer            | Atharva | LLM prompts, retrieval pipeline, confidence scoring, feedback loop    |
| Data Engineer          | Nancy   | Data ingestion, KB indexing, embeddings, vector store, data quality   |
| QA                     | Nishka  | Eval harness, golden dataset, synthetic data, adversarial testing     |
| Governance Engineer    | Shubham | Threat model, guardrails, security review, compliance                 |
| Implementation Manager | Shivani | Charter, sprint planning, status reports, risk register, client comms |

This charter is effective upon signature by all three signatories and remains in force for the duration of the engagement. Any material changes require written agreement from all parties.