AI-Assisted Positioning Work: A How-To Guide


This is a companion piece to Using AI for Positioning Work Without Outsourcing Judgment. It is a how-to guide for the techniques discussed in the main article, containing prompts you can adapt, what to look for in the output, and where the exercise tends to break down.

None of these prompts work well without good inputs. The quality of the context you provide, such as your ICP definition, your actual draft messaging, and your real objection set, determines most of what you get back.


1. Pressure-testing a value proposition

The goal: Find the structural weaknesses in your existing positioning before they surface in the field.

What to provide:

  • Your current value proposition or core message
  • A brief description of your primary buyer (role, company type, key priorities)
  • One or two named competitors, if relevant

Prompts to try:

"Here is our current value proposition: [paste]. Our primary buyer is [describe]. Assume the role of a skeptical [title] at a [company type] who has evaluated similar solutions in the past 12 months. What is this message missing? What would you need to hear to find it credible? What would make you dismiss it immediately?"

"Read this value proposition: [paste]. Which of our direct competitors — [Competitor A], [Competitor B] — could make the same claim without changing more than a few words? Where specifically does our differentiation break down?"

"Here is our messaging framework: [paste]. List every implicit assumption this message makes about what our buyer already believes or already wants. Flag any assumption that may not hold across our full ICP."

What to look for: The model will often identify the places where your message is doing rhetorical work — words like "seamless," "unified," or "purpose-built" that signal differentiation without specifying it. Pay attention to those flags. Also note where it generates objections your team has discussed internally but hasn't addressed in the message. Those are the gaps worth prioritizing.

Where it breaks down: If your value proposition is very generic, the output will be correspondingly generic. The model can only challenge what's specific enough to challenge. If you're getting vague feedback, that's usually a signal the message itself needs more specificity before pressure-testing is useful.


2. Synthesizing customer interview transcripts

The goal: Pull structured signal from raw qualitative research faster than manual review.

What to provide:

  • Three to ten interview transcripts or summaries (paste directly or summarize key exchanges)
  • The buyer persona each interview represents, if you have them segmented
  • The specific question you're trying to answer (don't leave this open-ended)

Prompts to try:

"Here are summaries from [N] customer interviews: [paste]. Identify the three to five most commonly expressed concerns or frustrations, and note which buyer types raised each one. Quote the most illustrative language where possible."

"Read these interview excerpts: [paste]. Map each expressed concern to one of these categories: product capability, implementation complexity, vendor trust, internal change management, pricing/value perception, or competitive comparison. Flag anything that doesn't fit cleanly."

"Here are win/loss interview notes from the past quarter: [paste]. What patterns distinguish accounts we won from accounts we lost? Focus specifically on how buyers described our differentiation — or failed to."

"Based on these transcripts: [paste]. Which objections appear most frequently? For each, note whether the buyer seemed to be expressing a factual concern, a perception concern, or a process concern — and explain your reasoning."

What to look for: The categorization of objections by type (factual vs. perception vs. process) is often the most useful output, because the response differs depending on the type. A factual concern might require a product change or proof point. A perception concern requires a messaging or sales motion response. A process concern is often about how you're being introduced or evaluated, not the product itself.
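Once the model has labeled objections by type, tallying them and routing each type to the right kind of response is mechanical. Here is a minimal sketch in plain Python; the objection names and the response-owner mapping are illustrative assumptions, not data from any real synthesis run:

```python
from collections import Counter

# Hypothetical labels produced by the synthesis step: one
# (objection, type) pair per occurrence across the transcripts.
labeled_objections = [
    ("integration effort", "factual"),
    ("unclear ROI", "perception"),
    ("procurement delays", "process"),
    ("unclear ROI", "perception"),
    ("integration effort", "perception"),
]

# Each concern type implies a different response, per the guide:
# factual -> product change or proof point; perception -> messaging
# or sales motion; process -> how you're being introduced/evaluated.
RESPONSE_OWNER = {
    "factual": "product change or proof point",
    "perception": "messaging / sales motion",
    "process": "introduction / evaluation process",
}

def tally(objections):
    """Count objections per (name, type) pair and per type."""
    by_pair = Counter(objections)
    by_type = Counter(kind for _, kind in objections)
    return by_pair, by_type

by_pair, by_type = tally(labeled_objections)
for (name, kind), n in by_pair.most_common():
    print(f"{name} ({kind}, x{n}) -> {RESPONSE_OWNER[kind]}")
```

Note that the same objection can appear under two types (here, "integration effort" shows up as both factual and perception), which is itself a useful signal: the concern may need both a proof point and a messaging response.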

Where it breaks down: The model will find patterns in whatever you give it. If your interview set is small, or skewed toward a particular segment or outcome type, the patterns will reflect that skew without flagging it. Don't treat synthesis output as representative unless your underlying research is.


3. Identifying narrative gaps

The goal: Find what your messaging is not addressing across your ICP's full range of concerns.

What to provide:

  • Your current messaging framework or key customer-facing assets
  • A defined ICP with the primary buyer roles involved in a typical decision
  • Your known objection set (if you have one documented)

Prompts to try:

"Here is our current messaging framework: [paste]. Our buying committee typically includes [list roles]. For each role, identify which of their likely concerns are addressed in this messaging and which are absent. Be specific about what's missing, not just that something is missing."

"Here are our most common sales objections: [paste]. Review this messaging: [paste]. For each objection, assess whether the current messaging addresses it directly, addresses it indirectly, or doesn't address it at all."

"We sell to [ICP description]. Our messaging focuses primarily on [describe current emphasis]. What categories of buyer concern are we most likely underweighting? What would a buyer in this segment be thinking about that our messaging doesn't speak to?"

"Read this one-pager: [paste]. Assume our buyer has already read three competitor one-pagers with similar claims. What questions would they still have after reading ours? What would they not yet know that they'd need to know before moving forward?"

What to look for: Pay attention to the gap between technical/capability messaging and business outcome messaging. Most enterprise SaaS messaging over-indexes on the former. The model will often surface the absence of answers to questions like "what happens if this doesn't work," "how long does value realization actually take," or "what does success look like in year two." These tend to be the concerns that stall late-stage deals.

Where it breaks down: This exercise assumes your ICP and objection map are accurate. If your ICP definition is aspirational rather than grounded in actual won deals, the gap analysis will reflect the aspirational version. Run this against your documented objection set if you have one, not against a generic buyer description.


4. Generating and stress-testing messaging variants

The goal: Produce alternatives for specific message components, then evaluate them against a consistent standard.

What to provide:

  • The specific component you want to vary (headline, positioning statement, differentiation claim)
  • The buyer context and what they care about
  • The constraint you're working within (word count, channel, stage of funnel)

Prompts to try:

"Here is our current positioning statement: [paste]. Generate five alternatives that make the same core claim but lead with [business outcome / risk reduction / speed to value / total cost of ownership] instead. Keep each under 30 words."

"Here are three headline variations for our homepage: [paste]. Evaluate each against these criteria: specificity of claim, believability without additional context, and differentiation from category-generic language. Score each 1–3 on each criterion and explain your reasoning."

"We're writing an outbound sequence targeting [buyer role] at [company type]. Here is our current subject line and opening: [paste]. Rewrite both to lead with [specific pain point or context] rather than our product capabilities. Keep the opening under three sentences."

What to look for: When generating variants, look for the model to diverge from your existing language patterns, not just rephrase them. If the variants all feel like versions of the same sentence, prompt more specifically: "generate one that leads with a risk framing, one that leads with an efficiency framing, and one that leads with a competitive displacement framing."

When asking the model to evaluate variants, the scoring is less important than the reasoning. Read the explanations. They'll often identify the specific word or phrase doing the most work, or the most damage.

Where it breaks down: Generation is easy; evaluation is harder. The model will produce variants that are grammatically clean and structurally sound regardless of whether they'd actually resonate with your specific buyer. Don't evaluate AI-generated variants using AI alone. Get them in front of a sales rep who runs discovery calls. Their reaction is more informative than the model's assessment.


A few general notes

Provide more context than you think is necessary. A two-sentence ICP description produces worse output than a paragraph that includes the buyer's actual priorities, typical objections, and how they evaluate vendors in your category.

Use the outputs as a starting point for conversation, not a deliverable. The most useful prompt outputs are the ones you bring into a team discussion: "the model flagged these three gaps — do we agree? Which one matters most right now?"

Save prompts that produce useful outputs. When a prompt reliably surfaces good signal, document it. Over time, a small library of prompts calibrated to your specific positioning context is more valuable than any individual output.
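A prompt library can be as simple as named templates with explicit placeholders, so each prompt documents what inputs it needs. A minimal sketch using only the Python standard library; the structure is an assumption, and the template text paraphrases prompts from this guide:

```python
from string import Template

# A tiny prompt library: each entry names the exercise and makes
# its required inputs explicit via $placeholders.
PROMPTS = {
    "pressure_test": Template(
        "Here is our current value proposition: $value_prop. "
        "Our primary buyer is $buyer. Assume the role of a skeptical "
        "$title at a $company_type who has evaluated similar solutions "
        "in the past 12 months. What is this message missing?"
    ),
    "gap_check": Template(
        "Here are our most common sales objections: $objections. "
        "Review this messaging: $messaging. For each objection, assess "
        "whether the messaging addresses it directly, indirectly, "
        "or not at all."
    ),
}

def render(name: str, **inputs) -> str:
    """Fill a named prompt; substitute() raises KeyError if an
    expected input is missing, so gaps fail loudly."""
    return PROMPTS[name].substitute(**inputs)

prompt = render(
    "pressure_test",
    value_prop="[paste]",
    buyer="[describe]",
    title="VP of Operations",
    company_type="mid-market logistics company",
)
print(prompt)
```

The point of the structure is less automation than documentation: the placeholder names record exactly which context each prompt depends on, which is what makes a prompt reusable by someone other than its author.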

The limiting factor is almost always the quality of what you put in. Vague inputs produce fluent, useless outputs. Specific inputs, like real buyer language, actual messaging, and documented objections, produce outputs worth working with.
