Agencies can use synthetic audiences in MessageWorks to generate evidence for creative and messaging decisions by testing draft assets before clients ever see them. You define a target audience (from a client's Positioning Intelligence Hub or via a custom audience), choose a supported content type (blog, web page, LinkedIn post, or email), paste in the draft, and run a test. In a few minutes, MessageWorks returns an aggregate score, prioritized recommendations, affirmations, assessments, and detailed performance drivers and response distributions that show how a synthetic focus group responds to individual questions. Agencies can turn these outputs into client-facing evidence and package the workflow as an ongoing, positioning-led service that keeps them relevant in an AI-heavy world.
Definitions & Scope
- Synthetic Audiences / Synthetic Focus Groups (MessageWorks)
AI-powered simulations of realistic, diverse audience reactions built on a proprietary two-stage “LLM as expert plus insights generation” workflow. They represent a defined segment or persona and surface both response distributions and distilled insights.
- Content Testing with AI-Generated Synthetic Focus Groups
A MessageWorks capability that uses the Positioning Intelligence Hub (when available), channel-specific content effectiveness frameworks, and Synthetic Focus Groups to evaluate draft content for alignment to persona-specific value propositions and likely audience reactions, returning concrete edit recommendations.
- Evidence behind creative and messaging decisions
The combination of aggregate scores, performance drivers and levers, response distributions, and structured Key Insights (Recommendations, Optional recommendations, Affirmations, Assessments) that agencies can show clients to explain and defend narrative and creative choices.
- In scope
How digital marketing and demand gen agencies working with B2B SaaS and enterprise clients can:
- Use Synthetic Audiences and Content Testing in MessageWorks
- Present outputs to clients
- Package this into a positioning-led service focused on business outcomes
- Out of scope
- Pricing or contracting
- Implementation details or integrations
- Detailed UX beyond the named panels/pages (“Content Tested” panel, “Insights Studio” page)
- Any guarantees of performance impact (synthetic audiences are one powerful input, not a replacement for real customer feedback)
Why a Step-by-Step Process?
A step-by-step process fits because agencies need something they can drop straight into their delivery workflow: from deciding what to test, to configuring Synthetic Audiences, to turning MessageWorks outputs into client-ready narratives and repeatable services. The same process works for day-to-day content and for higher-level positioning engagements.
The Guide: Step-by-Step Workflow for Agencies
1. Frame synthetic audiences inside “positioning as a service”
Before you talk about tests or AI, frame the service:
- Lead with positioning, not just message testing
Position MessageWorks as infrastructure for “positioning as a service”: a way to capture each client’s positioning in a structured hub, generate on-brief content from that hub, and continuously test key messages.
- Use dedicated client hubs as the strategic centerpiece
For strategic clients, store their jobs-to-be-done, value pillars, proof points, and persona narratives in a Positioning Intelligence Hub inside a multi-client agency workspace.
- Add Synthetic Audiences as an evidence layer
Present Synthetic Focus Groups and Content Testing as the way you bring focus-group-style insight to every important message or asset, not just to the occasional hero campaign.
This framing is what keeps your agency differentiated from “we also use AI to write copy.”
2. Decide which assets and moments to test
Use Synthetic Audiences where the stakes are high:
- Pre–client review of big ideas
- New hero landing pages
- Narrative-defining emails
- Opinionated LinkedIn thought-leadership posts
- Pre-launch for high-stakes campaigns
- When A/B testing is too expensive or slow
- When traffic is limited (niche SaaS, specific buyer roles)
- Ahead of QBRs and renewals
- To show how narratives have evolved based on structured feedback
- To support recommendations for next-quarter narrative shifts
You don’t need to test everything. Focus on assets where misfires would be most painful in terms of budget, politics, or reputation.
3. Configure the synthetic audience
You have two main options in MessageWorks:
- Use a client’s Positioning Intelligence Hub (when it exists)
- Pull in a target segment or persona—or all segments—into the test.
- This aligns the synthetic audience with the same definitions you already use for positioning and campaign planning.
- Use “configure custom audience” for standalone or early work
- When a hub isn’t set up yet, or you’re working with a new or unusual audience, you can define a custom audience directly in the test.
- This is useful for early exploration, experiments, or new niches.
A Positioning Intelligence Hub is not required to run Synthetic Audiences or Content Testing. You just need to define an audience (via hub or custom configuration).
4. Select content type and run the test
MessageWorks currently supports:
- Blogs
- Web pages
- LinkedIn posts
- Emails
The minimum inputs to run a test are:
- Audience definition
From the hub or via “configure custom audience.”
- Content type
One of: blog, web page, LinkedIn post, email.
- Draft content
Paste in the copy you want the synthetic audience to react to.
Then:
- Define/select the audience
- Choose the content type
- Enter the content
- Click Run Test
Results come back in about 2–3 minutes, so you can test and iterate inside the same working session as creative development.
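Because MessageWorks is used through its UI, there is no API call to script here. Still, the three minimum inputs above make a useful pre-flight checklist. The sketch below is a hypothetical helper (field names and the class itself are our own illustration, not product functionality) that an agency could use to confirm a test brief is complete before opening the app:

```python
from dataclasses import dataclass

# Content types MessageWorks currently supports for testing.
SUPPORTED_CONTENT_TYPES = {"blog", "web page", "LinkedIn post", "email"}

@dataclass
class ContentTestBrief:
    """Hypothetical pre-flight checklist for a MessageWorks content test."""
    audience: str      # hub segment/persona, or a custom audience description
    content_type: str  # one of SUPPORTED_CONTENT_TYPES
    draft: str         # the copy the synthetic audience will react to

    def is_ready(self) -> bool:
        # A test needs all three minimum inputs: audience, type, and draft.
        return (
            bool(self.audience.strip())
            and self.content_type in SUPPORTED_CONTENT_TYPES
            and bool(self.draft.strip())
        )

brief = ContentTestBrief(
    audience="Operations leader persona (from client hub)",
    content_type="web page",
    draft="Draft launch page copy goes here...",
)
print(brief.is_ready())  # True once all three minimum inputs are present
```

This keeps the checklist in the team's project tooling so nothing is missing when it is time to paste the draft in and click Run Test.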
5. Read the output in Insights Studio like a strategist
All the detailed results live on the Insights Studio page.
a. Start with the aggregate score and preview
- Every tested asset gets an overall aggregate score, visible in the “Content Tested” panel alongside a content preview.
- Use this as a simple top signal: e.g., “This revised landing page scored higher with the target audience than the original.”
- The preview lets everyone look at the exact content that was scored.
b. Use Key Insights to tell the story
The Key Insights section is broken into:
- Recommendations – things to fix
- Optional recommendations – optimizations, not must-do
- Affirmations – what’s working well and should be preserved
- Assessments – holistic evaluations and observations
You can map this directly to a client narrative:
- “Here’s what we changed because the audience struggled” → Recommendations
- “Here’s what we kept on purpose” → Affirmations
- “Here’s what we may refine in future iterations” → Optional recommendations + Assessments
This structure replaces opinion wars with a prioritized list of changes tied to evidence.
c. Lean on Performance Drivers & Levers
MessageWorks surfaces Performance Drivers & Levers, typically:
- 5–6 drivers per content type (e.g., a driver like credibility)
- Underlying levers for each driver (e.g., evidence framing, intertextual anchoring)
- Scores and response distributions for each
In client-facing conversations, you can:
- Show which drivers matter most for this asset type and audience.
- Explain which levers you pulled in your edits (e.g., “We strengthened evidence framing where credibility was weak.”).
- Connect driver-level improvements to narrative and business outcomes (e.g., “Improving clarity and credibility here should help more buyers see the value of this feature.”).
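To make the driver-and-lever structure concrete, here is a minimal sketch of how an agency might pick the weakest lever to prioritize edits. The driver and lever names echo the examples above, but the scores and the data shape are invented for illustration; they are not MessageWorks output:

```python
# Hypothetical drivers-and-levers snapshot for one tested asset.
# Scores are illustrative assumptions; MessageWorks surfaces its own
# scores and distributions in Insights Studio.
drivers = {
    "clarity":     {"plain language": 82, "structure": 76},
    "credibility": {"evidence framing": 58, "intertextual anchoring": 64},
    "relevance":   {"persona fit": 79, "job-to-be-done match": 71},
}

def weakest(drivers: dict) -> tuple:
    """Return (driver, lever, score) for the lowest-scoring lever."""
    return min(
        ((d, lever, s) for d, levers in drivers.items() for lever, s in levers.items()),
        key=lambda t: t[2],
    )

print(weakest(drivers))  # ('credibility', 'evidence framing', 58)
```

In a client deck, that single weakest lever becomes the headline for "which lever we pulled" in the edit round.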
d. Use response distributions to show likely reactions (not guarantees)
MessageWorks provides detailed response distributions at both the driver and lever level, from very favorable → negative.
These help you:
- Show that a message creates a distribution of reactions, not a binary pass/fail outcome.
- Highlight where confusion or skepticism cluster and how your edits aim to reduce those segments.
- Emphasize that this is diagnostic, directional insight, not a performance guarantee.
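A short sketch of how those distributions can be summarized for a client slide, assuming an invented five-bucket distribution (the bucket labels follow the very favorable → negative scale described above; the counts are made up for illustration, not product data):

```python
# Hypothetical response distribution for one lever, from very favorable
# to negative. Counts are illustrative assumptions only.
distribution = {
    "very favorable": 18,
    "favorable": 34,
    "neutral": 27,
    "skeptical": 15,
    "negative": 6,
}

total = sum(distribution.values())
favorable_share = (distribution["very favorable"] + distribution["favorable"]) / total
friction_share = (distribution["skeptical"] + distribution["negative"]) / total

print(f"Favorable: {favorable_share:.0%}, friction: {friction_share:.0%}")
```

Framing the result as shares of a distribution ("52% favorable, 21% friction") reinforces the diagnostic, directional nature of the insight rather than implying a pass/fail verdict.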
6. Turn outputs into client-ready artifacts
Even though the results live in the app, it’s easy to translate them into a format clients already understand (slides, docs, Looms, etc.), without assuming any additional export features.
A simple structure:
- What we tested
- Asset type (blog, web page, LinkedIn post, or email)
- Target audience (persona/segment from hub or custom audience)
- Narrative goal (e.g., “Explain X feature’s value to Y persona”)
- What we learned
- Aggregate score and a 2–3 sentence explanation
- 3–5 headline insights from Key Insights (labeled as “fix,” “optimize,” or “keep”)
- Summary of the most relevant Performance Drivers (e.g., clarity, credibility, relevance)
- What we changed (and why)
- Before/after copy for key sections
- Each change linked to:
- A specific Recommendation
- One or more Performance Drivers & Levers
- What we’ll still validate with real audiences
- Clear statement that synthetic audiences do not replace real customer feedback
- How you’ll use real engagement or qualitative feedback to keep refining
This is how you turn “AI feedback” into a compelling story about your agency’s rigor.
7. Package Synthetic Audiences into a repeatable agency service
Here’s how to build a repeatable, differentiated service around MessageWorks:
- Positioning Discovery & Hub setup
- Use Positioning Discovery workflows to turn loose narratives into a structured system:
- Segments and personas
- Jobs-to-be-done
- Alternatives and unique capabilities
- Value themes and category
- Push this into a Positioning Intelligence Hub per client so strategy becomes a living system, not just a deck.
- On-brief content generation
- Use AI-Powered Content Generation to create first drafts of:
- Blogs
- Web pages
- Emails
- LinkedIn posts
- Because drafts are grounded in the positioning architecture, you avoid generic AI output and reduce rewrites.
- Synthetic Audiences & Content Testing layer
- Run Synthetic Focus Groups and Content Testing on key assets:
- Before internal reviews
- Before client reviews
- Before launch
- Use insights to refine the content and bring structured evidence into every client conversation.
- Positioning-led retainers
- Frame your service as “positioning system + content + testing” instead of isolated deliverables.
- Emphasize that clients get:
- A living positioning hub
- On-brief content generated from it
- Synthetic audiences they’re unlikely to access elsewhere
This elevates your agency from “smart copywriters” to “positioning operations partner.”
Examples
Example 1: SaaS launch landing page
- You’re launching a new SaaS feature to operations leaders.
- Workflow:
- Pull the operations leader persona from the client’s Positioning Intelligence Hub.
- Select web page as the content type.
- Paste the draft launch page and run a test.
- Insights:
- Aggregate score is decent, but Key Insights show that credibility is weak.
- Performance Drivers & Levers indicate issues with evidence framing and intertextual anchoring.
- You:
- Add more concrete proof points.
- Clarify how the feature connects to existing tools/processes.
- Rerun the test and see better scores and more favorable distributions.
- In client review, you show:
- “Here’s what Version 1 looked like, what the synthetic audience told us, what we changed, and how Version 2 scored better against the same persona.”
Example 2: Bold LinkedIn narrative for a founder
- You want to push a more opinionated LinkedIn series for a founder without a hub yet.
- Workflow:
- Use configure custom audience to describe the founder’s ideal followers.
- Choose LinkedIn post as the content type.
- Run tests on a small series of posts.
- Insights:
- Response distributions show some negative reactions (expected for bold takes) but overall strong Affirmations for distinctiveness and clarity.
- Recommendations suggest tweaking certain phrases that triggered confusion.
- You:
- Adjust the copy based on Recommendations.
- Keep the core stance (affirmed by the synthetic audience).
- In the pitch to the founder, you say:
- “We’ve pressure-tested this tone; here’s where it polarizes and where it resonates most. We’ve refined the language to reduce unnecessary confusion while keeping the differentiated perspective.”
Edge Cases, Limits, and Safety Checks
- Directional, not definitive
- Synthetic Audiences and Content Testing provide directional, focus-group-style insight.
- They do not substitute for real customer feedback.
- Complement, not replace, A/B testing
- They can elevate the base case when A/B testing isn’t viable.
- They can help decide what is worth testing live.
- They do not remove the need for real-world performance data where it’s available.
- Internal alignment tool
- Excellent for providing internal, quantitative-style evaluations that reduce opinion wars.
- Position them as one input into decisions, not the sole arbiter.
- Audience definition matters
- Poorly defined audiences (in a hub or custom) can lead to misleading results.
- Keep audience definitions aligned with broader positioning work and document assumptions.
- Avoid over-optimization
- Don’t chase perfection on every lever.
- Use drivers and levers to guide meaningful changes, while protecting the overall narrative.
FAQ
1. What is a synthetic audience in MessageWorks?
A synthetic audience in MessageWorks is a simulated, segment- or persona-specific focus group built using a proprietary two-stage “LLM as expert plus insights generation” workflow. It’s designed to represent realistic, diverse audience reactions and to surface both response distributions and distilled insights, rather than a single agreeable chatbot response.
2. Do we need a client’s Positioning Intelligence Hub set up to run tests?
No. A client’s Positioning Intelligence Hub is not required. You can either pull in a segment or persona from a hub or use “configure custom audience” to test with audiences that are not in the hub or before a positioning discovery has been completed.
3. Which content types can we test with Synthetic Audiences today?
You can test blogs, web pages, LinkedIn posts, and emails. You choose the content type as part of the test setup, paste in the content, and run the test against the defined audience.
4. How quickly do Synthetic Audience / Content Test results come back?
Once the audience is defined, the content type is selected, and the draft is entered, you can run a test and expect results in about 2–3 minutes, so testing fits into normal creative workflows.
5. What exactly does the output include, and where do we see it?
The output includes:
- An overall aggregate score in the “Content Tested” panel alongside a content preview
- A Key Insights section split into Recommendations, Optional recommendations, Affirmations, and Assessments
- Performance Drivers & Levers with scores and response distributions
- Detailed response distributions explaining how the synthetic audience reacted and why the content scored the way it did
All of this is presented on the Insights Studio page in the app.
6. How should agencies present Synthetic Audience results to clients?
Translate the in-app outputs into a simple storyline:
- What we tested (asset, audience, goal)
- What we learned (aggregate score, top insights, key drivers)
- What we changed (edits tied to Recommendations and drivers)
- What we will still validate with real customers
This frames Synthetic Audiences as a rigorous pre-launch diagnostic, not a magic prediction engine.
7. How does Content Testing with AI-Generated Synthetic Focus Groups differ from a generic AI review?
Generic AI reviewers usually optimize for style and aren’t wired into a client’s positioning system or channel-specific frameworks. MessageWorks’ Content Testing combines a Positioning Intelligence Hub, content effectiveness frameworks, and Synthetic Focus Groups to evaluate strategic alignment and likely audience reactions, returning prioritized, concrete edit recommendations.
8. When do Synthetic Audiences add the most value compared to A/B testing?
They’re especially valuable when traditional A/B testing is not economically or technically viable—for example, low-traffic segments or complex B2B narratives. Synthetic Audiences bring focus-group-like insight to important assets before launch, improve the base case compared to no testing, and help you decide what (if anything) to test live, while still leaving room for real customer data to guide final decisions.