
15 Questions to Ask Before Hiring an AI Agency or Vendor

March 27, 2026 · 12 min read · Ryan McDonald
#AI vendor selection · #procurement · #due diligence · #AI partnerships · #risk management · #implementation · #business strategy

Key Points

  • Roughly 60% of AI projects fail, and the primary cause is choosing the wrong partner, not the wrong technology. Use this 15-question checklist to separate serious partners from overpromisers.
  • Disqualifying red flags: guaranteed results, vague pricing, unwillingness to share references, proprietary claims with no specifics, and artificial urgency.
  • Green flags: asking hard questions about your goals, recommending against AI when it doesn't fit, sharing case studies with quantified impact, prioritizing data quality audits, and working from a clearly documented methodology.

You're ready to bring AI into your business. The market is flooded with agencies promising the moon, and it's hard to tell the honest brokers from the overpromisers.

Here's the uncomfortable truth: 60% of AI projects fail, and the primary reason isn't the technology—it's choosing the wrong partner.

This checklist exists to protect you. Print it. Bring it to every vendor meeting. These are the questions that separate serious partners from vendors who'll drain your budget and disappear when results don't materialize.


Why This Matters: The AI Vendor Landscape Today

The AI services market is a gold rush. Every marketing agency, software shop, and consultant is suddenly an "AI expert." The barrier to entry is low (thanks to public models like GPT-4 and Claude), so the barrier to credibility has collapsed too.

What you're buying isn't AI—you're buying implementation expertise, domain knowledge, and accountability. Most vendors sell the sizzle and hope you don't ask about the steak.

The 15 questions below are organized into four categories. Use them as a filter to identify partners who:

  • Know what they're doing
  • Have proven experience
  • Price fairly and transparently
  • Stand behind their work

Category 1: Technical Fit (Questions 1–4)

These questions determine whether the vendor can actually solve your problem, not just solve problems in general.

Question 1: What Specific AI Models and Platforms Do You Use, and Why?

What you're listening for: Specific names (GPT-4, Claude, Llama 2, fine-tuned models). Technical reasoning tied to your use case, not generic recommendations.

Red flag: "We use proprietary AI that we've built internally." (Ask to see it. Most can't.)

Green flag: "For your use case, we'd use Claude for reasoning and GPT-4 for speed. Here's why..."

The vendor should explain trade-offs: cost vs. accuracy, speed vs. reasoning depth, open-source flexibility vs. proprietary performance. If they can't articulate this, they're guessing.

Question 2: Can You Show Me a Working Demo Using My Type of Data?

What you're listening for: A real prototype. Not a polished PowerPoint. Not a generic chatbot. Something that works on your actual data or a faithful simulation of it.

Red flag: "We'll build the demo after we sign the contract."

Green flag: They ask detailed questions about your data format, volume, and sensitivity—then show you something functional in the conversation.

This separates vendors who understand your domain from vendors who are selling templates.

Question 3: How Do You Handle Data Privacy and Security?

What you're listening for: Specifics about encryption (in transit and at rest), data retention policies, compliance frameworks (SOC 2, ISO 27001, GDPR, HIPAA if relevant), and whether data ever leaves your infrastructure.

Red flag: "We keep everything secure." (Not an answer.)

Green flag: "We use AES-256 encryption, zero-knowledge architecture where your data doesn't reach our servers, and here's our SOC 2 Type II certification."

If your data is sensitive (healthcare, financial, customer records), this is non-negotiable.

Question 4: What Happens to My Data After the Project Ends?

What you're listening for: Clear deletion timelines, data portability options, and whether they retain anything for training or monitoring.

Red flag: "We may keep some data for quality assurance." (For how long? In what form?)

Green flag: "We delete all your data within 30 days unless you request longer retention. You have full access to export it anytime."

This matters legally and competitively. Your data is your asset.


Category 2: Track Record (Questions 5–8)

Past performance doesn't guarantee future results, but it reveals patterns. Ask for evidence.

Question 5: Can I Talk to a Client in My Industry?

What you're listening for: A reference with a similar business model, similar project scope, and willingness to discuss it candidly.

Red flag: "We can't share client references for confidentiality." (Some secrecy is reasonable, but complete silence is suspicious.)

Green flag: They introduce you to a client who had a comparable challenge and can speak to both wins and struggles.

Call the reference. Ask:

  • Did the project stay on budget?
  • Did it deliver on the promised timeline?
  • Would they hire them again?

Question 6: What's Your Typical Project Timeline—and What Causes Delays?

What you're listening for: Realistic timelines (most AI projects take 3–6 months minimum) and honest assessment of what derails schedules.

Red flag: "We can ship a full AI solution in 4 weeks."

Green flag: "For a solution like yours, expect 4–6 months. Delays typically happen when data quality is worse than expected, or when stakeholders can't align on success metrics. Here's how we prevent that..."

They should acknowledge that every project is unique. If they have a fixed timeline, they're not accounting for real-world complexity.

Question 7: Show Me a Project That Failed. What Happened?

What you're listening for: Candor and learning. Vendors who've never had a setback are either new or lying.

Red flag: "All our projects succeed." (Impossible.)

Green flag: "We built a recommendation engine for a retail client that didn't meet the accuracy target. We hadn't accounted for how seasonal demand shifts—we updated the model, but it extended the timeline by 6 weeks. Here's what we learned..."

This is the most important question. A vendor's honesty about failure reveals maturity.

Question 8: What's Your Team's Actual AI Experience—Beyond Marketing Buzzwords?

What you're listening for: Credentials, specific project work, and evidence of depth. Not just "we have a machine learning team."

Red flag: "Our team includes a Chief AI Officer." (What's their background? How many people actually work on AI?)

Green flag: "Our lead engineer spent 4 years at [reputable AI lab], our data scientist has published on model optimization, and our project manager has shipped 12 AI implementations."

Ask them to introduce you to the actual people who'll work on your project—not just the salespeople. Evaluate their depth directly.


Category 3: Business Terms (Questions 9–12)

Money talks. How they price and structure deals reveals whether they're protecting themselves or protecting you.

Question 9: What's the Total Cost, Including Implementation, Training, and Ongoing Support?

What you're listening for: An itemized breakdown. Implementation cost, training cost, monthly or annual support, and infrastructure costs if they're hosting.

Red flag: "We'll charge by the hour at our standard rate." (Open-ended hourly billing incentivizes slowness.)

Green flag: "Implementation will cost $150K, training and handoff $20K, and ongoing support (model updates, monitoring, optimization) is $8K per month."

Demand a complete picture. Include everything: data preparation, infrastructure, licenses, support, SLAs. Hidden costs are the #1 source of vendor conflict.

Question 10: Do I Own the IP and Custom Models You Build?

What you're listening for: Clear language stating that you own all work product and any custom models.

Red flag: "We retain rights to the underlying technology." (So they can sell your solution to your competitors.)

Green flag: "You own 100% of the custom code and models we build. We retain the right to use our frameworks and methodologies with other clients."

This is crucial. You should own what you pay for. Get this in writing.

Question 11: What Does Ongoing Maintenance Cost After Launch?

What you're listening for: A realistic estimate for model updates, infrastructure costs, and monitoring. This is often where "cheap" vendors become expensive.

Red flag: "Maintenance is minimal. Maybe $2K per month." (Model drift is real; a credible vendor details how they'll address it.)

Green flag: "We provide quarterly model retraining ($5K per quarter), 24/7 monitoring, and SLA-backed uptime guarantees. If performance degrades, we investigate and retrain at no extra cost—up to 2 retraining cycles per year."

Don't treat the launch as the end. Ongoing costs often exceed implementation costs over 3 years. Plan for it.
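To see why, plug in the green-flag figures from Questions 9 and 11. A minimal sketch in Python (the dollar amounts are the illustrative numbers quoted above, not benchmarks):

```python
# 3-year total cost of ownership, using the illustrative green-flag
# figures from Questions 9 and 11 (not real-world benchmarks).
implementation = 150_000      # one-time build cost
training = 20_000             # training and handoff
support_monthly = 8_000       # ongoing support, monitoring, optimization
retraining_quarterly = 5_000  # quarterly model retraining

years = 3
one_time = implementation + training
ongoing = support_monthly * 12 * years + retraining_quarterly * 4 * years

print(f"One-time: ${one_time:,}")                    # One-time: $170,000
print(f"Ongoing over {years} years: ${ongoing:,}")   # Ongoing over 3 years: $348,000
print(f"Ongoing exceeds one-time: {ongoing > one_time}")  # True
```

Even with these modest support fees, three years of ongoing costs roughly double the one-time build cost, which is why the maintenance line in the contract deserves as much scrutiny as the implementation quote.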

Question 12: What's Your Exit Strategy if We Part Ways?

What you're listening for: How easily you can switch vendors or bring the system in-house. Look for signs of vendor lock-in.

Red flag: "The custom models are built on our proprietary platform—you'll need to stay with us." (Or you start from scratch.)

Green flag: "We'll provide full documentation, model exports, and a 2-month transition support period to help you hand off to another team or integrate in-house."

Partnerships should feel optional, not forced. A vendor confident in their work won't trap you.


Category 4: Results & Accountability (Questions 13–15)

This is where theory meets reality. How will you know if this worked?

Question 13: How Do You Measure Success, and What Metrics Will You Report?

What you're listening for: Specific KPIs tied to your business goals, not vanity metrics.

Red flag: "We'll improve efficiency." (By how much? How do you measure it?)

Green flag: "For your use case, we'll track: time-to-resolution (currently 48 hours, target 4 hours), customer satisfaction on AI-assisted interactions (baseline 72%, target 88%), and cost per interaction (baseline $12, target $3). We'll report these monthly."

They should connect AI metrics to business outcomes. Revenue impact, cost reduction, time savings—something you actually care about.

Question 14: What Happens if the Project Doesn't Deliver the Promised ROI?

What you're listening for: Accountability. Do they have a recourse plan?

Red flag: "AI is unpredictable. There are no guarantees." (True, but not an excuse for no accountability.)

Green flag: "If we don't hit 75% of the projected ROI within 6 months post-launch, we'll do additional tuning at no cost, or issue a credit against your support contract."

Not all vendors will offer guarantees (some markets are too uncertain), but they should articulate a response plan if results fall short.
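A threshold like the one in the green flag is easy to make contract-precise. A sketch with hypothetical figures (only the 75% threshold comes from the answer above; the $200K projection is invented for illustration):

```python
# Check whether realized ROI clears a contractual threshold.
# The 75% threshold mirrors the green-flag answer; the dollar
# figures in the examples are hypothetical.
def remedy_triggered(projected_roi: float, actual_roi: float,
                     threshold: float = 0.75) -> bool:
    """True if actual ROI falls below threshold * projected ROI,
    triggering free tuning or a support-contract credit."""
    return actual_roi < threshold * projected_roi

print(remedy_triggered(200_000, 140_000))  # $140K < $150K floor -> True
print(remedy_triggered(200_000, 160_000))  # $160K clears the floor -> False
```

The point is less the code than the discipline: if the projection, the threshold, and the remedy are all written down before signing, "didn't deliver" stops being a matter of opinion.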

Question 15: What Does Post-Launch Support Look Like?

What you're listening for: A clear support structure: response times for issues, escalation paths, regular check-ins, and proactive monitoring.

Red flag: "We'll be available via email." (What's the response time? Who owns escalations?)

Green flag: "We provide 24/7 monitoring with 1-hour response time for critical issues, weekly check-in calls for the first quarter, and monthly performance reviews thereafter."

The post-launch period is where mediocre vendors ghost you. Demand clarity on support structure and SLAs.


Red Flags That Mean "Walk Away"

Even if they ace some questions, these are disqualifying:

Guaranteed results. "We guarantee 50% efficiency gains." (AI is probabilistic. Honest vendors hedge.)

No references or unwillingness to share them. (Zero transparency, likely zero reputation.)

Vague pricing. "We'll figure out costs after discovery." (Budget control matters. If they can't estimate, they're not experienced.)

"Proprietary AI" with zero specifics. (Likely a thin wrapper around a public model, sold at a premium.)

Pressure to sign quickly. "This pricing is only good for 48 hours." (Real partners don't use artificial urgency.)


Green Flags That Indicate a Good Partner

They ask you hard questions. "What happens if this doesn't work? How will you measure it? Who owns success?" (Partners who push back on vague goals are less likely to disappoint.)

They recommend against doing AI when it doesn't fit. "For your problem, a basic workflow automation might work better and cost 80% less." (Confidence shows in honest assessment, not overselling.)

They share case studies with numbers. "We reduced manual data entry by 65%, saving 800 hours per year." Not "we improved efficiency."

They talk about your data quality first. "Before we build anything, let's audit your data. That determines what's possible." (Data is the foundation. They know it.)

They have a clearly documented methodology. They can walk you through how they approach projects, what phases look like, and what you'll be involved in.


At Rotate, We Welcome These Questions

This checklist isn't designed to make vendors uncomfortable—it's designed to make you confident. The vendors worth hiring should welcome every single one of these questions, because answering them honestly is how trust gets built.

We've seen too many businesses burned by vendors who oversold AI and underdelivered. The goal isn't to find the cheapest partner. It's to find the honest one.


Next Steps: Build Your Selection Process

Use this as your starting point. Before you take meetings, clarify what success looks like for your organization. Then ask these 15 questions in a structured way:

  • Phase 1: Use the technical fit questions (1–4) to narrow to vendors who understand your problem.
  • Phase 2: Verify their track record (5–8) by talking to references and evaluating their team.
  • Phase 3: Align on business terms (9–12) and get everything in writing.
  • Phase 4: Define success metrics (13–15) and only sign if accountability is clear.

For more on evaluating and choosing the right AI partner, see our guides on AI vendor selection, avoiding common AI implementation mistakes, and securing your AI investment with proper governance.

Ready to find a partner you can trust? We're here to answer these questions—and any others you have.

Let's Talk About Your AI Goals

The right vendor makes all the difference. If you're evaluating partners or want a second opinion on your AI strategy, we're here to help.

Schedule a consultation with our team to discuss your specific needs. No pressure. No pitch. Just honest conversation about what's possible.
