The problem
An internal LLM trained on older sales materials was giving answers that conflicted with current guidance. Sales reps were following outdated advice, creating inconsistency in pitches and closing strategies. More critically, the AI's incorrect answers were undermining trust in the knowledge base itself: when the AI contradicted the KB, reps began to assume the KB, not the AI, was out of date.
The tension
- Speed vs. accuracy: The Gen AI team wanted to launch quickly to show value. Sales wanted perfect accuracy.
- Convenience vs. authority: AI is appealing for its speed. But in sales, authority matters—reps need to trust the source.
- Training data drift: The model was trained on Q2 materials. Q3 and Q4 updates never made it into retraining.
Approach
Rather than choose between speed and accuracy, I advocated for a hybrid model that prioritized accuracy while maintaining AI convenience.
Root cause analysis
- Discovered the training data was already 4+ months old at launch.
- Identified 23 specific pieces of guidance where AI conflicted with current KB.
- Conducted interviews with sales leaders to understand trust issues.
- Built a framework for how different content types should be handled (product specs vs. playbooks vs. objection handling).
The hybrid approach
Instead of relying purely on AI-generated answers, we built a system where:
- AI retrieves and summarizes, but every answer includes a link to the source material in the KB.
- High-confidence answers (product specs, pricing) are pre-vetted by Product before going into the training set.
- Lower-confidence answers (strategy, playbooks) always suggest "read the full article" as a secondary action.
- Feedback loops flag when AI answers conflict with KB updates, triggering retraining.
- Transparency: The AI discloses its training date and confidence level in answers.
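The routing logic above can be sketched in a few lines. This is a minimal illustration, not the production system; the content-type names, the `Answer` fields, and the two-tier split are hypothetical stand-ins for the framework described in the root cause analysis.

```python
from dataclasses import dataclass

# Hypothetical content-type tiers: high-confidence content (product
# specs, pricing) is pre-vetted by Product; everything else is treated
# as lower confidence and always points reps at the full KB article.
HIGH_CONFIDENCE = {"product_spec", "pricing"}

@dataclass
class Answer:
    summary: str
    source_url: str          # every answer links back to its KB article
    content_type: str
    training_date: str       # disclosed so reps know model freshness
    confidence: str = ""
    secondary_action: str = ""

def build_answer(summary: str, source_url: str,
                 content_type: str, training_date: str) -> Answer:
    """Assemble an AI answer carrying the transparency fields above."""
    ans = Answer(summary, source_url, content_type, training_date)
    if content_type in HIGH_CONFIDENCE:
        ans.confidence = "high"          # pre-vetted before training
    else:
        ans.confidence = "lower"
        ans.secondary_action = "Read the full article"
    return ans
```

The point of the sketch is that transparency is structural: the source link, training date, and confidence tier are required fields of every answer, not optional extras the model may or may not emit.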
Execution
- Partnered directly with the Gen AI engineering team to troubleshoot retrieval logic and implement fixes.
- Worked with the same team to redesign the training pipeline so KB updates flow into the model in near real time.
- Created a "source linking" feature so every AI answer could cite back to KB articles.
- Established a cross-functional governance model: Product owns accuracy, Sales owns use cases, Gen AI owns performance.
- Built monitoring dashboards to catch accuracy drift in real time.
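The drift check behind the monitoring dashboards can be sketched as follows. The cutoff date, threshold, and article schema here are hypothetical; the idea is simply that any KB article updated after the model's training cutoff gets flagged, and enough flags trigger retraining.

```python
from datetime import date

TRAINING_CUTOFF = date(2024, 6, 30)   # hypothetical training cutoff
RETRAIN_THRESHOLD = 10                # hypothetical flag count

def stale_articles(kb_articles: list[dict],
                   cutoff: date = TRAINING_CUTOFF) -> list:
    """Return ids of KB articles updated after the training cutoff."""
    return [a["id"] for a in kb_articles if a["updated"] > cutoff]

def needs_retraining(kb_articles: list[dict],
                     cutoff: date = TRAINING_CUTOFF,
                     threshold: int = RETRAIN_THRESHOLD) -> bool:
    """Trigger retraining once enough articles have drifted."""
    return len(stale_articles(kb_articles, cutoff)) >= threshold
```

A check like this is what turns "training data drift" from a silent failure into a visible, actionable metric: the Q3/Q4 gap that caused the original problem would have been flagged on day one.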
Results
Adoption & engagement
- 3.41% of overall KB traffic now comes through AI queries.
- 45% of AI-answered questions result in a click-through to the full KB article, indicating reps trust the answers enough to verify them and read deeper.
- Zero reported instances of AI providing information that conflicts with current KB guidance since launch.
- Reps report higher confidence in AI answers because they link back to source.
Business impact
- Maintained trust: Sales leaders see the AI as a supplement, not a replacement for the knowledge base.
- Faster scaling: The hybrid model allows us to scale AI to new product lines without manual accuracy review.
- Informed decisions: Governance model has already prevented launching AI guidance on 3 evolving product areas.
- Precedent for Gen AI integration: This framework became the template for other Gen AI initiatives across Meta sales.
Key learnings
1. AI is not a replacement for systems. A good AI tool is only as good as its underlying data. Invest in the source of truth first.
2. Speed vs. accuracy is a false choice. When the options look like fast-and-wrong versus slow-and-right, there is often a third: fast AND right, with transparency about the trade-offs.
3. Governance matters for Gen AI. Without clear ownership of accuracy, bias, and updates, AI systems drift and fail silently. Cross-functional governance prevented that here.
4. Users want transparency. Reps don't mind AI-generated answers if they know the AI's confidence level and can access the source. Build trust through transparency.