AI Readiness: 5 Questions Before Your First Pilot
Every software vendor has an AI strategy now. Your accounting platform has AI features. Your CRM has AI features. Your email client is offering to write your messages. Somewhere between the hype and the headlines, real businesses are trying to figure out how much of this, if any, actually applies to them.
The good news is that AI pilots do not have to be expensive or risky. The bad news is that most of the AI project failures we see stem from problems that could have been surfaced in a single afternoon of honest questions, before any money was spent.
These are five of those questions.
1. What specific process are you trying to improve?
"We want AI" is not a project. It is a wish. Every successful AI pilot we have seen starts with a specific, measurable process that someone wants to change.
The difference in framing looks like this:
- Too vague: "We want to use AI to be more productive."
- Right level: "We want to reduce the time we spend categorising and coding supplier invoices from three hours a day to under thirty minutes."
The right-level version tells you what to build, what to measure, and whether the pilot succeeded. The vague version leads to a six-month deployment with no way to know if anything improved.
If you cannot name the specific process in one sentence, you are not ready for a pilot. You are ready for a scoping exercise.
2. Is your data clean and accessible?
AI systems are pattern-matching engines. They need data to work with, and that data needs to be structured enough to feed in. In the AI context, garbage in, garbage out is not a cliché; it is the entire ballgame.
Before you pilot anything, walk through these questions with whoever owns the target process:
- Where does the data live? Is it in one system, or spread across a shared drive, an email inbox, and someone's head?
- Is it structured? A database of standardised records behaves differently from a folder of PDFs with varied layouts.
- Is it current? If the data is stale or out of sync with reality, the AI will confidently produce stale, out-of-sync answers.
- Can the AI tool actually access it? Many AI offerings assume you will paste content in. That does not scale.
If the data is a mess, fix the data first. An AI pilot on top of bad data produces confident nonsense faster than you can say "hallucination".
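If someone on your team is comfortable with a little Python, a quick profiling pass over an export of the target data will tell you a lot about how ready it is. The sketch below is purely illustrative: the file name and column names are made up, so substitute whatever your accounting or CRM system actually exports.

```python
# A rough first look at an exported invoice file (hypothetical example).
# Assumes a CSV export with columns like "supplier", "date", "amount",
# "category" -- rename these to match your own system.
import pandas as pd

df = pd.read_csv("invoices_export.csv")

print(f"Rows: {len(df)}")
print("Missing values per column:")
print(df.isna().sum())

# Inconsistent supplier names ("Acme Pty Ltd" vs "ACME") are a common
# sign of messy data that will confuse any categorisation tool.
print("Distinct supplier names:", df["supplier"].nunique())

# How current is the data?
df["date"] = pd.to_datetime(df["date"], errors="coerce", dayfirst=True)
print("Most recent invoice:", df["date"].max())

# How many records already carry a category the tool could learn from?
print("Uncategorised invoices:", df["category"].isna().sum())
```

Ten minutes with a report like this answers most of the questions in the list above: where the gaps are, how stale the records are, and whether there is enough labelled history to pilot against.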
3. Who owns the outcome?
AI projects need a business sponsor, not just an IT enthusiast. The person who owns the outcome should be the person whose job gets easier if the pilot works, and whose numbers get embarrassing if it does not.
A common failure mode: IT hears about a new tool, runs a pilot, demos it to the business, and the business says "interesting, but we never asked for this". The pilot dies on the vine because nobody was ever going to change their workflow for a tool they did not request.
Before you start, nominate one person by name, in a specific role, who will decide whether the pilot was a success. That person should be available for the pilot, have authority to change the process, and care about the outcome.
4. What does success look like in 90 days?
If you cannot define success, you are not ready to start.
Good pilot success criteria share three features. They are specific, measurable, and time-boxed. A few examples:
- "Invoice processing time drops from 3 hours to 45 minutes per day, measured across a 10-day window, by end of month three."
- "At least 70% of customer support tickets are accurately categorised on first pass, with no more than 5% misrouted."
- "Five salespeople save a total of 15 hours per week on meeting notes, verified by time tracking for two weeks."
Notice what these have in common: you can tell, unambiguously, whether the pilot worked. Contrast with "improves the customer experience" or "makes us more competitive". Those are ambitions, not success criteria.
5. What is your fallback if it does not work?
A pilot means you can turn it off. If you cannot turn it off, it was not a pilot; it was a deployment.
Before you start, sketch out what happens when (not if) the AI produces wrong answers, or the vendor changes the pricing, or the tool is quietly deprecated in 18 months. Where does the existing process sit? Who retains the skills and the access? What is the rollback plan?
This is not pessimism; it is professionalism. Cloud services get killed. Models get deprecated. Vendors get acquired. Design the pilot so that your business is not structurally dependent on the specific tool you are testing.
What to do with these answers
If you can answer all five, you are in a small minority of businesses that are genuinely ready to pilot. Run the smallest, cheapest version you can. Measure honestly. Decide on facts.
If you cannot answer some of them, that is the work. AI readiness is mostly about clarity of intent, clean data, and honest accounting for outcomes. The AI part is the easy bit.
We help Australian businesses think through AI adoption before they spend money on it. Our AI Strategy & Integration service starts with a two-week readiness assessment: a workflow audit, three to five prioritised opportunities, and an honest roadmap with effort and ROI estimates.