AI Guidance & Advisory
Someone in your corner while you figure out AI.
Every business is being told they need AI. Not all of them do — at least not yet, not in the way they’re being sold it. Cigma helps you cut through the noise, understand what’s genuinely useful, and avoid the mistakes that cost real money.
Independent advice.
From people who’ve seen it all.
We’re not here to sell you AI tools. We don’t have vendor partnerships or referral arrangements. We sit on your side of the table — asking the right questions, challenging the wrong assumptions, and helping you make decisions you can stand behind.
Where to start
Most businesses don’t need a sprawling AI strategy. They need a straight answer to a simple question: where, specifically, would AI create genuine value here — and where would it just create noise?
- An honest assessment of where AI fits your business — and where it doesn’t
- Use-case identification without the vendor hype
- A clear-eyed view of your data, your processes, and your readiness
- A board-ready brief that explains the opportunity and the risk in plain English
- A prioritised shortlist of what to try first — and what to leave alone for now
Making the right decisions
The AI market is full of impressive demos and confident salespeople. We help you evaluate what you’re actually being sold, ask the questions vendors don’t want to answer, and choose tools that fit your business — not their pipeline.
- Vendor evaluation and challenge — on your behalf, not theirs
- Honest assessment of build vs. buy vs. wait
- Due diligence on AI tools before you commit budget
- Contract and commercial sense-checking
- A second opinion when the internal pressure to say yes gets loud
Getting governance right
Once AI is running in your business, someone needs to own it — and be able to answer the board’s questions in plain English. Most businesses haven’t worked out who that is. We help you fix that before it becomes a problem.
- Named ownership for every AI system and its outputs
- Clear policies on what staff can and can’t use AI for
- GDPR and AI Act obligations explained in language that’s actually useful
- Board-ready reporting on AI risk and performance
- An incident plan for when something goes wrong — not after it does
We don’t sell AI tools and we don’t get commission from the ones we recommend. Our only job is to make sure the decisions you make about AI are the right ones for your business.
What works.
What doesn’t.
After working through dozens of UK AI projects — the ones that delivered and the ones that didn’t — the patterns are clear. Here’s the honest version of what separates them.
What every AI adoption should include
Start with the problem, not the platform
The right question is never “which AI tool should we buy?” It’s “what specific problem are we actually trying to solve?” Define that clearly first. Everything else follows.
Get independent advice before you buy anything
The people selling you AI tools have a strong interest in you buying them. Get someone in your corner who doesn’t — before you sign a contract or commit a budget.
Name a human owner before anything goes live
Every AI system needs a person who is accountable for what it produces. That person needs to be named, briefed, and empowered before the system is switched on. No owner means no governance.
Be honest with your board about the risks
Boards are under pressure to approve AI investment. The ones that go wrong are often the ones where nobody presented the downside clearly. Your board needs the full picture — not just the vendor’s version of it.
Tell your staff what AI can and can’t do
Staff who don’t understand what a tool is doing — or what it gets wrong — will either avoid it entirely or trust it too much. Clear guidance and honest training make the difference.
Plan for something going wrong
Every AI system will eventually produce an output that causes a problem. The businesses that handle it well are the ones who planned for it. The ones who didn’t are the ones who end up in the press.
The mistakes we see every week
Buy something because a competitor has it
FOMO drives more failed AI projects than any technical problem. What works at a competitor — with different people, different data, and a different business — may be entirely wrong for you.
Let the vendor define what success looks like
If the company selling you the tool is also the one measuring whether it worked, you have a problem. Define your own success criteria before any contract is signed — in your language, not theirs.
Assume AI output doesn’t need a human check
AI produces confident answers. It also produces confidently wrong ones. Anything consequential — anything going to a client, a regulator, or your board — needs a human to review it. Not skim it. Review it.
Let the pressure to “do AI” override good judgment
The board wants action. The investors want progress. The team wants to look modern. None of that is a good reason to rush a decision that will be hard and expensive to unpick later.
Treat a successful demo as a solved problem
A polished demo is a sales tool, not evidence of production readiness. The gap between “it looks great in a meeting room” and “it works reliably with real data at real scale” is exactly where projects quietly fail.
Think governance is someone else’s problem
GDPR obligations, the incoming AI Act, and your own liability exposure don’t go away because the technology is exciting. Understanding your obligations before you deploy is always cheaper than dealing with them afterwards.
You don’t need to be
a technology business.
Every sector is being reshaped by AI. The businesses that navigate it well are usually the ones that ask the right questions early.
Businesses feeling the pressure to “do AI”
You know AI is important. You are not sure what the right move is. You want a straight conversation and an honest view of where to start.
Boards that need to get on top of AI risk
Your leadership team is making commitments. You need to understand what is actually being approved — and have a clear answer for what happens when something goes wrong.
Businesses that tried AI and it didn’t land
A failed project does not mean AI will not work for you. It usually means the decisions around it were weak at the start.
Regulated businesses with real compliance exposure
Financial services, healthcare, legal, and public sector organisations often need clearer governance before AI becomes a risk conversation.
Leaders who just want a straight answer
You have had the vendor briefings and read the articles. You still do not feel confident you are making the right call.
Teams inheriting an AI system they didn’t choose
If you have inherited a tool, supplier, or half-formed AI programme, we help establish what is running, who owns it, and what should happen next.
AI Readiness Review.
Two weeks. Plain English output.
A structured assessment of where your business actually stands with AI — what is worth pursuing, what to avoid, where the risks sit, and what leadership needs to know.
Don’t overpay for services you don’t need. We shape each engagement around your business, your current level of AI maturity, and the depth of support leadership actually needs.
Book a discovery call
Ongoing AI advisory
Ongoing senior counsel for leadership teams making live AI decisions, managing vendors, and putting workable governance around what is already in motion.
Best suited to
Businesses already evaluating, deploying, or inheriting AI and needing steady senior judgement rather than one-off commentary.