A B2B SaaS team can test new positioning angles before a big launch or fundraise by first encoding competing narratives in a structured Positioning Intelligence Hub, then turning those narratives into concrete assets using AI-powered content generation, and finally running Synthetic Focus Groups and AI Content Testing on each asset. Each test evaluates one piece of content against a defined audience and content type, returning an overall score plus detailed insights into what is clear, confusing, persuasive, or weak. Teams run separate tests for each angle and compare the results—scores, drivers, and recommendations—to decide which narrative to advance and how to refine it. This process is best suited to customer-facing positioning for launches, campaigns, new audience bets, and upsell or cross-sell plays, rather than dedicated investor narratives.
Definitions & Scope
- New positioning angles
Any meaningful shift in how you frame your product or value to the market, including:
- Launching a new product or module
- Launching a new campaign
- Going after new audiences
- Reframing value for the same audience with a new value proposition or narrative
- Upsell/cross-sell campaigns into the existing base
- Competing narratives
Alternative ways of telling the story at company, product, segment, and persona levels (for example, emphasizing different jobs-to-be-done, value themes, or alternatives) that are encoded as structured narratives in the Positioning Intelligence Hub.
- Positioning Intelligence Hub
A living, hierarchical messaging and positioning architecture that spans company-, segment-, and persona-level narratives, including jobs-to-be-done, value pillars, proof points, pain points, and buyer KPIs, in a single system that is both human-readable and machine-usable.
- AI-powered content generation
Content generation that draws directly from the positioning architecture so drafts for blogs, LinkedIn posts, emails, and web copy are grounded in the correct segments, personas, value propositions, and proof points instead of generic prompts.
- Synthetic Focus Groups / Synthetic Audiences
An AI-based method that uses a two-stage workflow to simulate realistic, diverse audience reactions for a defined segment or persona, then distills them into clear drivers of effective messaging and practical levers the author can pull.
- AI Content Testing
A feature that uses Synthetic Focus Groups to test a specific piece of content pre-launch, combining positioning-aware evaluation with simulated audience reactions to provide scores and prioritized recommendations.
- Scope of this guide
- In scope: Customer/buyer positioning for B2B SaaS launches, campaigns, upsell/cross-sell plays, and strategic narrative shifts.
- Out of scope: Dedicated investor/board narratives, UI walkthroughs, and guarantees of specific performance outcomes.
How This Guide Is Organized
Testing new positioning angles lends itself to a linear sequence: define narratives → generate content → run tests → compare results → operationalize the learning. The steps below follow that sequence.
The Guide: Step-by-Step Process
Step 1: Clarify what you’re testing and where it lives in your positioning
Before running any tests, decide what kind of new angle you are exploring and how it maps into your positioning system.
- Identify the triggering moment
New positioning angles often appear when:
- A new product or module is launching
- A new campaign theme is being introduced
- You’re targeting a new audience or segment
- You’re reframing value for the same audience (e.g., efficiency vs growth)
- You’re designing upsell/cross-sell stories into the existing base
- Locate the angle inside the Positioning Intelligence Hub
Express the new angle as an alternative narrative at the relevant levels, for example:
- Company-level value theme shifts
- Product-level value pillars
- Segment-level jobs-to-be-done and alternatives
- Persona-level outcomes, pain points, and proof points
- The specific artifact type is implementation-dependent, but each angle should exist as a coherent, structured narrative connecting JTBDs, value themes, and proof points for a given segment/persona.
- Make each competing narrative internally consistent
For its target persona, each narrative should clearly answer:
- What job are they trying to get done?
- Which alternatives are they using or considering now?
- What unique capabilities and value themes are you emphasizing?
- What proof points support this story?
- This coherence is what makes the narrative testable—more than just a tagline, it’s a full story.
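The four questions above can be captured as a structured record so each competing narrative is complete before testing. The sketch below is illustrative only: the `Narrative` class and its fields are hypothetical, not a MessageWorks schema.

```python
# Hypothetical sketch: one way to encode a competing narrative as a
# structured, checkable record. Field names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Narrative:
    """One competing narrative, expressed as a coherent, testable story."""
    name: str                # e.g., "Angle A: operational efficiency"
    persona: str             # who this story is written for
    job_to_be_done: str      # the job the persona is trying to get done
    alternatives: list[str]  # what they use or consider today
    value_themes: list[str]  # capabilities and value you emphasize
    proof_points: list[str]  # evidence that supports this story

    def is_testable(self) -> bool:
        # A narrative is ready to test only when every element is filled in.
        return all([self.persona, self.job_to_be_done, self.alternatives,
                    self.value_themes, self.proof_points])


angle_a = Narrative(
    name="Angle A: operational efficiency",
    persona="VP Operations",
    job_to_be_done="Cut time spent assembling weekly reports",
    alternatives=["Spreadsheets", "In-house BI dashboards"],
    value_themes=["Automation", "Time saved"],
    proof_points=["Case study: a customer halved reporting time"],
)
print(angle_a.is_testable())  # True
```

A record like this makes gaps obvious: a narrative missing proof points or alternatives is a tagline, not a testable story.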
Step 2: Turn competing narratives into testable content
Synthetic Focus Groups and AI Content Testing work on actual content, not abstract message maps.
- Choose one or two representative content types
Select content types that naturally expose the positioning angle, such as:
- Landing page or industry/solution page
- Outbound or lifecycle email
- LinkedIn post
- Blog introduction or summary
- Use the same content type and similar structure across narratives to make comparisons meaningful.
- Use AI-powered content generation to draft each asset
For each narrative:
- Select the relevant segment and persona from the Positioning Intelligence Hub
- Ensure the generation draws from that narrative’s jobs-to-be-done, value pillars, and proof points
- Generate a first draft for the chosen channel (e.g., email, landing copy, LinkedIn post)
- Because the generator is wired to the positioning system, each draft carries the full context of the narrative, the solutions, and the audience, not just isolated lines of copy.
- Apply human review before testing
Treat AI-generated content as a draft:
- Check factual accuracy about product capabilities
- Ensure no narrative over-promises beyond what the product actually delivers
- Confirm each asset accurately represents the specific angle you want to test
- This ensures Synthetic Focus Group results reflect intentional messaging choices.
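The drafting step above can be sketched as assembling one grounded generation context per narrative, with a shared content type so later comparisons stay fair. This is illustrative only: `build_generation_context` and the narrative fields are hypothetical, not a real MessageWorks API.

```python
# Illustrative only: collecting everything a draft should be grounded in
# for one narrative, so the drafting step (whatever tool performs it)
# carries the full narrative context rather than a bare prompt.
def build_generation_context(narrative: dict, content_type: str) -> dict:
    return {
        "content_type": content_type,  # e.g., "email", "landing_page"
        "segment": narrative["segment"],
        "persona": narrative["persona"],
        "jobs_to_be_done": narrative["jobs_to_be_done"],
        "value_pillars": narrative["value_pillars"],
        "proof_points": narrative["proof_points"],
    }


narratives = [
    {"name": "Angle A", "segment": "Mid-market", "persona": "VP Operations",
     "jobs_to_be_done": ["Reduce reporting effort"],
     "value_pillars": ["Efficiency"], "proof_points": ["Customer case study"]},
    {"name": "Angle B", "segment": "Mid-market", "persona": "VP Operations",
     "jobs_to_be_done": ["Align teams on shared metrics"],
     "value_pillars": ["Strategic visibility"], "proof_points": ["Analyst report"]},
]

# Same content type across narratives keeps the later comparison meaningful.
contexts = [build_generation_context(n, "email") for n in narratives]
print(len(contexts))  # 2
```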
Step 3: Run Synthetic Focus Groups & AI Content Testing on each asset
With narrative-aligned assets in hand, test them one by one.
- Provide the required inputs
Every Synthetic Focus Group requires:
- Target audience
  - Segment, persona, full hub, or a new audience not yet in the hub
- Content type
  - For example, blog post, email, etc.
- Content text
  - The actual headline and body (or equivalent copy) for that asset
- These inputs are mandatory, because the method simulates reactions from a defined audience to a specific asset.
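As a small illustrative guard, the three mandatory inputs can be checked before each run. The function and field names below are assumptions for illustration, not the tool's actual API.

```python
# Hypothetical pre-flight check for the three mandatory test inputs.
def validate_test_inputs(target_audience: str, content_type: str,
                         content_text: str) -> list[str]:
    """Return the names of missing inputs; an empty list means ready to run."""
    missing = []
    if not target_audience.strip():
        missing.append("target_audience")  # segment, persona, full hub, or new audience
    if not content_type.strip():
        missing.append("content_type")     # e.g., "blog post", "email"
    if not content_text.strip():
        missing.append("content_text")     # the actual headline and body copy
    return missing


print(validate_test_inputs("VP Operations persona", "email",
                           "Subject: Cut reporting time in half"))  # []
print(validate_test_inputs("", "email", "some copy"))  # ['target_audience']
```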
- Understand what you get back
Each AI Content Testing run (using Synthetic Focus Groups) returns:
- Overall aggregate score
  - A single score for the asset, visible alongside a content preview
- Key Insights section, broken into:
  - Recommendations – high-priority changes to make
  - Optional recommendations – non-critical improvements
  - Affirmations – what is working well and should be kept
  - Assessments – high-level evaluations of the content
- Performance Drivers & Levers
  - Typically 5–6 drivers per content type (e.g., credibility)
  - Each driver has underlying levers (specific actions the author can take, such as how evidence is framed)
  - Each driver and lever has a score and a response distribution (very favorable → negative)
- Detailed distributions and explanations
  - Show how the synthetic audience reacted at driver and lever levels
  - Explain why the asset received its overall score
- This provides both a verdict and a diagnostic view of what’s helping or hurting performance.
- Diagnose confusion and weak persuasion
For each asset:
- Scan drivers with lower scores (e.g., clarity, credibility, relevance)
- Look at lever-level distributions to see where reactions are split or skew negative
- Use Recommendations and Optional recommendations to identify specific edit levers
- Use Affirmations to protect elements that are clearly working
- This makes it easier to see whether an angle is failing because the problem framing is unclear, the proof is thin, or the outcomes don’t resonate.
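One way to sketch this triage step, assuming a hypothetical result structure with drivers, levers, and response distributions (not the tool's actual output format):

```python
# Illustrative triage of one test's results: flag low-scoring drivers and
# levers whose response distributions skew negative. The result shape and
# thresholds here are assumptions; adapt them to your tool's real output.
def weak_spots(result: dict, driver_threshold: float = 70.0,
               negative_share: float = 0.3) -> dict:
    flagged_drivers = []
    flagged_levers = []
    for driver in result["drivers"]:
        if driver["score"] < driver_threshold:
            flagged_drivers.append(driver["name"])
        for lever in driver["levers"]:
            # distribution: share of responses per bucket, very_favorable..negative
            if lever["distribution"].get("negative", 0.0) > negative_share:
                flagged_levers.append(lever["name"])
    return {"drivers": flagged_drivers, "levers": flagged_levers}


result = {
    "score": 68,
    "drivers": [
        {"name": "clarity", "score": 62, "levers": [
            {"name": "problem framing",
             "distribution": {"very_favorable": 0.1, "negative": 0.4}},
        ]},
        {"name": "credibility", "score": 81, "levers": [
            {"name": "evidence framing",
             "distribution": {"very_favorable": 0.5, "negative": 0.1}},
        ]},
    ],
}
print(weak_spots(result))
# {'drivers': ['clarity'], 'levers': ['problem framing']}
```

A flagged clarity driver with a negatively skewed problem-framing lever, as in this example, points at the framing rather than the proof as the thing to fix.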
Step 4: Compare narratives and choose a direction
MessageWorks evaluates one piece of content at a time; there is no built-in multi-variant comparison engine. Comparison happens across separate tests.
- Run separate tests for each narrative-derived asset
- Keep the audience, content type, and overall structure consistent
- Ensure each asset is a clean expression of a single narrative angle
- Compare scores and insights across tests
For each asset, compare:
- The overall aggregate score
- Which drivers are strong vs weak
- Which levers triggered confusion or negative sentiment
- Which recommendations repeat across narratives
- While the tool doesn’t automatically stack these side-by-side, you can compare test outputs to see which angle is more promising and why.
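Because comparison across tests is a manual step, a small helper can rank separate results side by side. This is an illustrative sketch with assumed field names, not a built-in comparison feature.

```python
# Illustrative ranking of separate test results: higher aggregate score
# first, ties broken by fewer high-priority recommendations.
def rank_angles(results: dict[str, dict]) -> list[tuple[str, float, int]]:
    rows = [(name, r["score"], len(r["recommendations"]))
            for name, r in results.items()]
    return sorted(rows, key=lambda row: (-row[1], row[2]))


results = {
    "Angle A (efficiency)": {
        "score": 71,
        "recommendations": ["Clarify overlap with existing tools",
                            "Sharpen the headline"],
    },
    "Angle B (visibility)": {
        "score": 84,
        "recommendations": ["Add a supporting proof point"],
    },
}
print(rank_angles(results))
# [('Angle B (visibility)', 84, 1), ('Angle A (efficiency)', 71, 2)]
```

The ranking is a starting point, not a verdict: the driver- and lever-level detail behind each score still decides whether to pick, combine, or drop an angle.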
- Decide how to proceed
Based on patterns across tests, you might:
- Pick the angle with stronger scores and fewer critical recommendations as the primary narrative
- Combine high-performing elements from multiple narratives into a new, refined angle
- Deprioritize an angle that consistently generates confusion or negative reactions
- Feed learnings back into the Positioning Intelligence Hub
- Update jobs-to-be-done, value themes, and proof points based on insights
- Adjust persona narratives where certain outcomes or objections proved more important
- Ensure the hub reflects the final narrative you’ll take to market so it can power ongoing content generation and testing
Step 5: Embed testing into your launch and campaign motions
To make this stick, integrate it into how launches and campaigns run.
- Use testing as a standard pre-launch checkpoint
For high-stakes assets (landing pages, key emails, strategic narratives):
- Generate drafts from the Positioning Intelligence Hub
- Run Synthetic Focus Groups via AI Content Testing
- Apply the prioritized Recommendations and Optional recommendations before final sign-off
- Apply the same approach to upsell and cross-sell stories
For campaigns into your existing customer base:
- Start from existing segment and persona narratives in the hub
- Define alternative value themes (e.g., consolidation vs collaboration)
- Generate separate assets for each angle and test them with Synthetic Focus Groups
- This shows which framing resonates more with current customers and why.
- Use simulation to complement, not replace, real-world data
- AI Content Testing helps when real-world testing would be too slow or expensive
- It’s designed to accelerate iteration before or alongside A/B tests
- Teams should still rely on their expertise and real customer feedback when making final decisions
Examples
Example 1: New product launch narrative choice
A multi-product B2B SaaS company is launching an analytics module. Two angles emerge for the same ICP:
- Angle A: Emphasize operational efficiency and time saved
- Angle B: Emphasize strategic visibility and cross-team alignment
The team encodes both angles in the Positioning Intelligence Hub as alternative narratives and uses AI content generation to create email and landing page copy for each. After human edits, they run Synthetic Focus Groups and AI Content Testing for all four assets, targeting the same persona.
The results show:
- Angle B consistently scores higher on relevance and credibility drivers
- Angle A reveals confusion about overlap with existing tools
They refine Angle B based on Recommendations and adopt it as the primary launch narrative, updating the Positioning Intelligence Hub to reflect this choice.
Example 2: Upsell positioning into the existing base
The same company wants customers to adopt an add-on. They test:
- Angle C: Consolidate tools into a single platform
- Angle D: Improve collaboration across the buying committee
Using the Positioning Intelligence Hub, they define both angles at the persona level and generate two lifecycle email drafts with AI-powered content generation. Each email is tested with Synthetic Focus Groups.
Results:
- Angle C: Strong clarity, but skepticism about switching from current tools
- Angle D: Strong positive reactions to outcomes, weaker proof
They reinforce proof points in Angle D based on Recommendations and use the refined narrative as their upsell story, again updating the hub so future assets stay aligned.
Edge Cases, Limits, and Safety Checks
- Human review is mandatory for AI-generated content before publishing, especially for launches and strategic narratives.
- Synthetic Focus Groups provide simulated, directional feedback, not guaranteed real-world outcomes; they work best as pre-launch diagnostics and iteration accelerators.
- Quality in = quality out: Poorly defined segments, personas, or value themes in the Positioning Intelligence Hub will limit the usefulness of testing.
- Single-asset evaluation only: AI Content Testing evaluates one asset at a time; multi-variant comparisons require manual side-by-side review of separate test results.
- Avoid over-reliance on a single score: Teams should look at drivers, levers, and narrative-level recommendations, not just the aggregate score.
- Investor/board narratives are outside the intended scope here; this process is focused on customer and buyer positioning.
FAQ
1. How can a B2B SaaS team test new positioning angles before a big launch?
They can capture competing narratives in a Positioning Intelligence Hub, generate narrative-aligned assets with AI-powered content generation, and then run Synthetic Focus Groups and AI Content Testing on each asset. Each test returns an aggregate score plus detailed insights on drivers, levers, and recommendations, so teams can compare multiple angles and refine their positioning before launch.
2. What counts as a new positioning angle in this context?
New positioning angles include any substantial change in how you frame value to the market, such as launching a new product, introducing a new campaign theme, shifting focus to new audiences, reframing value for the same audience, or crafting new upsell and cross-sell stories for your existing base.
3. How does the Positioning Intelligence Hub help with testing narratives?
The Positioning Intelligence Hub creates a structured, hierarchical messaging system that links segments, personas, jobs-to-be-done, value pillars, proof points, and alternatives. By encoding each competing narrative in this hub, you ensure that AI-powered content generation and AI Content Testing both draw from the same canonical, persona-specific story rather than one-off, ad-hoc copy.
4. What inputs are required to run a Synthetic Focus Group in MessageWorks?
Each Synthetic Focus Group requires three inputs:
- A target audience (segment, persona, full hub, or a new audience),
- A content type (e.g., blog post or email), and
- The content text itself (such as a headline and body).
These are mandatory so the system can simulate realistic reactions from a defined audience to a specific asset.
5. What outputs do Synthetic Focus Groups and AI Content Testing provide?
They provide an overall aggregate score for the asset, a Key Insights section with Recommendations, Optional recommendations, Affirmations, and Assessments, a set of performance drivers and underlying levers with scores and response distributions, and detailed explanations of how the synthetic audience reacted at both driver and lever levels. Together, these reveal what is working, what is unclear, and where persuasion is weak.
6. Can MessageWorks automatically A/B test two narratives against each other?
No. MessageWorks assesses one piece of content at a time using Synthetic Focus Groups and AI Content Testing. To compare narratives, you run separate tests for each asset derived from each narrative and then manually compare aggregate scores, driver and lever scores, and recommendations.
7. Is AI Content Testing a replacement for live customer research or A/B testing?
No. AI Content Testing is intended to make pre-launch validation feasible for assets where real-world testing would otherwise be too slow or expensive, and to accelerate iteration before or alongside traditional A/B testing. Teams should still use their own expertise and real-world data to make final decisions.
8. How should teams handle AI-generated content for high-stakes launches?
Treat AI-generated content as a strong starting point, not a finished asset. Review and edit every piece for accuracy, brand alignment, and appropriate risk level before both testing and publishing, especially for high-stakes launches or major narrative shifts.
9. Can this process be used to test narratives for fundraising or board meetings?
This guide focuses on customer and buyer positioning, not dedicated investor or board narratives. Similar steps could, in principle, be adapted by treating investors as a distinct audience, but specific investor persona models are not described here, so any such use would require additional design and judgment.
10. How does this approach help multi-product B2B SaaS teams in particular?
Multi-product B2B SaaS teams often struggle with portfolio sprawl and scattered positioning. By using a Positioning Intelligence Hub as a unified positioning operating system, then generating and testing content grounded in that system, they can keep narratives consistent across products and segments while still experimenting with new angles for launches, campaigns, and upsell or cross-sell plays.