AI-Confident Procurement Is a Practice
Common-sense guardrails, a simple playbook, and four practical steps you can take this week
In Post 1, I argued that “AI-ready” isn’t a technical identity but a behavioral one. This post is the companion piece: how do we put those ideas into practice?
The point of this post is to be practical - not to present shiny tools or some grand transformation program. We have enough of those elsewhere.
My goal is to present a practical way to begin the move from AI-curious (sporadic experimentation) to AI-confident (repeatable, outcome-driven use), while keeping the two things that Procurement can’t outsource: judgment and accountability.
The discussion will be divided into four parts:
Common-sense guardrails
Practical playbook
Useful workflows
Getting started this week
First: Common-Sense Boundaries (AKA Don’t Do Something Dumb at Speed)
Let’s start with a reality check.
AI can accelerate your work but it can also accelerate your mistakes. It’s not tuned to do the ‘right things’ all the time, so the onus is on you.
So before we get going, let’s lay down some common-sense boundaries for ourselves. (I know, I know, this is like one of those disclaimers at the beginning of every self-help book: consult your doctor/financial advisor/legal professional/etc.)
Here are the guardrails I’d apply at this stage, whether you’re a category manager, an analyst, or a CPO:
1) Treat public tools like glass conference rooms
If you wouldn’t say it on speakerphone in a crowded airport, don’t paste it into a public AI tool. In practice, that means:
No regulated data (PII, export-controlled, etc.)
No confidential supplier data
No non-public pricing, rate cards, rebate terms
No contract language covered by NDAs
No customer-sensitive info
No proprietary strategies or negotiation positions
I know plenty of AI tools have options to protect your data but, for now, I would still err on the side of caution. If in doubt, treat the data as confidential and don’t input it into the tool.
2) Follow Company Policy
Again, this goes without saying, but follow your company policy.
And if your company doesn’t have one, or has a very loose set of guidelines, then assume the strictest stance until your company actually does develop one.
A lack of (or even a loose) AI policy is not permission to do whatever you want, certainly not with company information.
Until you have clear rules, behave like you’re operating in a regulated environment:
Stick to approved tools only or, where there is no guidance, choose your tools carefully
Redact all inputs thoroughly and appropriately
Experiment only with no-risk, low-stakes work
Document everything you do
3) Don’t Treat AI Outputs as “Answers”
AI can be a very fast, very capable intern: highly confident, with uneven judgment. It will reinforce what you want to hear and will sometimes tell you things that simply aren’t true.
So, don’t take it for granted. Don’t outsource your thinking and judgment:
Verify facts
Sanity-check logic
Ask for sources, assumptions, and alternatives
Pressure-test the output the way you would a supplier claim
Converse with the tool, push back, question its ‘thinking’. At the end of the day, it’s your output and you will be on the hook for it.
4) Keep the Human in the Loop
This is related to point 3, but keep yourself in the loop, especially where it matters - on decision-making, judgment calls, etc. If the output affects money, risk, reputation, or legal exposure, the bar should be even higher.
AI can help you think, draft, and explore but you must still own:
Decisions
Communication
Accountability
Ask yourself: am I comfortable defending this output in front of my boss?
5) Build Muscle Safely
If you’re new to this, don’t start with the crown jewels. Start with non-sensitive use cases that show you the power and capability of the tools.
Start with:
Meeting prep
Stakeholder emails
First-pass research frameworks
Neutral summaries
Checklists and question banks
All of these should be more than enough to build confidence without creating risk.
How to Move Up the Curve (And Still Have a Life)
The point of this whole exercise is to get beyond dabbling (AI-curious) and standardize AI into your everyday workflows (AI-confident).
The simplest way I know to get there is to make steady progress without getting overwhelmed:
Step 1: Pick One “Lane” for 30 Days
Choose one part of your job where you want leverage. For example:
Contracting support
Supplier intelligence
Supplier risk insights
Stakeholder management
Start small. Get results. Embed into your daily work. Expand later.
Step 2: Run Two Reps Per Week
Each rep should take no more than 15–30 minutes:
Try a prompt (not a one-liner; imagine a conversation)
Produce an output you can actually use
Improve the prompt next time (“What could I have said or asked that would have given the tool more context to produce a better output?”)
That’s it. I know there are plenty of folks who will tell you to do more and immerse yourself even more deeply - and you can do that. But at least start here. Small reps compound.
Step 3: Keep an “AI Wins Log”
As the saying goes, “If you don’t track it, it never becomes a practice”.
Make a point of tracking what you’ve worked on, what the issues were, what value you saw, etc. You can do this as thoroughly as you like, for example:
Date / workflow lane
What I was trying to do
What I fed the tool (redacted)
Output I got
What I changed / validated
Time saved (or quality improved)
What I’ll reuse next time
OR just keep it simple: note what you did, what you learned, what value you received, and how you could have done better. Make this a personal operating system of sorts.
The point is to capture insights and learn - to move from “I tried AI once” to “I work differently now”.
Step 4: Define “Better” in Procurement Terms
Stay focused on the practical, tangible, applicable value. Not just “this is really cool output”, but what it means for your work and how you could (and why you should) deploy this on an ongoing basis.
In other words, “Better” means:
Faster cycle time
Clearer stakeholder alignment
Sharper negotiation options
Fewer risk blind spots
Better supplier conversations
Anchor your practice to the stuff that you (Procurement) care about. The more it ‘enables’ you, the better you will be.
Four Procurement workflows where AI can create real leverage
OK - let’s get started.
What follows are practical tasks and patterns you can use immediately, without pretending that AI is some magical, mythical tool.
For each workflow, I’ll suggest:
What AI is good for
What you must verify
A prompt you can reuse
NOTE: I have drafted the prompts to provide guidance for junior as well as senior folks. It goes without saying that if you already have some experience, then tailor this as appropriate to your experience level.
In addition, if any details in the prompts below run afoul of the common-sense boundaries laid out above, then adjust or edit them as appropriate.
1) Contracting:
The focus here is on faster comprehension, better questions, and cleaner negotiation preparation.
Where AI helps
Summarize long clauses quickly
Create a “risk heatmap” of key provisions
Draft redline questions and negotiation talking points
Generate fallback language options (as ideas, not legal advice)
What you must verify
Legal interpretations
Company- and jurisdiction-specific implications
Defined terms and cross-references
Anything that affects liability, indemnity, termination, IP, data, compliance
Again, AI can accelerate your first pass. It cannot and should not replace counsel or your own scrutiny.
Reusable prompt:
You are a procurement contracts analyst.
My company is a [mid-size buyer] with [moderate] leverage. This is a 3-year agreement valued at approximately $X. We have [one/multiple] alternative suppliers.
I’m reviewing a contract for [category/service type] with a supplier. Here are the [redacted] clauses for your review.
1. Summarize each clause in plain English.
2. Identify the top risks for the buyer.
3. For each risk, propose questions to ask the supplier.
4. Help me identify acceptable fallback positions for the risks identified in 3 above.
5. Flag any ambiguous language and suggest how to clarify it.
6. Identify any standard clauses that are missing and explain why they matter.
7. Note where any terms deviate significantly from market standard for this category. (If you don’t have market data, label this as a hypothesis.)
8. Flag any clauses that should be reviewed by legal counsel rather than handled by procurement alone.
9. Provide the output as a plain-English summary.
Key Note: The goal, as I’ve said above, isn’t “AI reviewed the contract” but that you are able to walk into a legal/stakeholder review with a deeper comprehension and sharper questions.
2) Supplier Intelligence:
The goal here is to use AI to prepare better for supplier conversations and to move your sourcing strategy forward.
Where AI helps
Structure and develop a supplier profile quickly
Turn scattered information into a coherent narrative
Draft supplier interview questions
Generate hypotheses about strengths/weaknesses and differentiators
Build an initial supplier landscape by segment
What you must verify
Factual claims (revenue, ownership, capabilities, certifications)
Marketing fluff vs actual valid insights
Anything that becomes part of a sourcing decision record
Reusable prompt:
You are supporting a sourcing initiative in [category].
Create a supplier intelligence brief for [Name of Supplier (ideally)] or [Supplier Type (less ideal but still workable)]. Do not assume facts. Provide citations (and if you can’t, say so). Label all assumptions.
Include:
What the supplier likely does well (hypotheses)
How differentiated these strengths are relative to its competition
The typical cost structure and where pricing leverage exists for the buyer.
Common risks in this supplier type
What creates dependency or switching costs with this supplier type, and how can we structure the engagement to minimize lock-in?
12 due diligence questions (commercial + operational + ESG + cyber/data)
What would make us not choose them
What should we look for and ask about when requesting customer references?
What to listen for in discovery calls:
Green flags (signals of a good partner)
Yellow flags (things that need follow-up)
Red flags (signals to walk away)
Keep it concise, bullet-based, and designed for a stakeholder readout. Include a one-paragraph executive summary at the top with a preliminary recommendation or stance.
Key Note: The point here is to use AI to generate structured thinking that you can then validate with real data and supplier calls.
3) Supplier Risk Insights:
AI tools can be great for helping identify early warning signals and develop sharper mitigation plans. The key, as always, is to use them thoughtfully, with your own judgment central to the analysis.
Where AI helps
Create a risk taxonomy for your category
Develop “what could go wrong” scenarios
Draft monitoring questions and risk dashboard elements
Generate mitigation options you might not have considered
What you must verify
Company-specific qualifiers/disqualifiers
Real-world risk signals
Financial exposure
Operational dependencies
Any recommendation that affects supply continuity
Reusable prompt:
You are a procurement risk advisor.
For [category] with suppliers in [region(s)], create a risk assessment framework.
List major risk types (financial, operational, geopolitical, compliance, cyber, ESG, logistics).
Rank risk types by severity and likelihood for this specific category-region combination, and explain your reasoning. Not all risk types are equally relevant — deprioritize where appropriate.
For each risk type, define leading indicators we can monitor. Suggest specific free or low-cost data sources a procurement team could use to monitor each indicator.
For each leading indicator, recommend a monitoring frequency (daily/weekly/monthly/quarterly).
Create a simple scoring model (1–5) with definitions for each score. For each score level, provide a concrete example relevant to this category so the user can calibrate their assessments.
For each risk type, define a threshold score that should trigger an escalation or action, and describe what that action looks like.
Provide mitigation strategies (dual source, inventory buffers, contractual protections, audit cadence, etc.). Note those strategies that are proportionate for a contract of [approximate value], and flag where the cost of mitigation may exceed the expected cost of the risk event.
In addition to any descriptive output for the points above, also provide a summary dashboard table (risk type, severity ranking, top indicator, primary mitigation).
Key Point: The point here is not to be exhaustive but to help you see the breadth of the major risks. You will still need to apply judgment about what’s plausible, material, and actionable.
4) Stakeholder Management:
This is not a flashy use case, but it provides real value in the form of stronger communication, clearer alignment, fewer rework loops, and much faster (and more credible) decisions.
Where AI helps
Draft crisp stakeholder updates
Tailor messages to different stakeholder types
Prepare for tough conversations
Turn messy meetings into clean decision memos
Generate options and trade-offs summaries
What you must verify
Tone and political nuance
Commitments, timelines, and approvals
Anything that could be interpreted as binding
Reusable prompt:
You are helping me manage a stakeholder in [function].
Context: [short description].
Goal: [what I need from them].
Constraints: [timeline/budget/risk].
Considerations: [Any history with the stakeholder; key ideas and preferences]
Stakeholder’s influence level: [decision-maker/influencer/gatekeeper/end-user]
Stakeholder’s likely priority: [cost/speed/quality/risk/control].
Suggested tone of communication: [assertive/collaborative/deferential/urgent/relationship-building]
Draft:
A 6-sentence email that is clear, calm, and action-oriented.
A one-paragraph “decision memo” summary with options and recommended next step.
5 objections they might raise and how I should respond.
For each objection, provide the underlying concern driving it, your recommended response, and any phrases to avoid.
If I need to compromise, identify the one thing I should protect and the one thing I can concede.
Recommend whether this conversation is better handled via email, a brief call, or an in-person meeting, and explain why.
Key Point: The point here is that you’re using AI to remove friction and enhance credibility, so you can spend your energy on judgment and relationship development.
The Difference Between “Using AI” and “Becoming AI-Confident”
At this point, you might notice an underlying theme: none of this requires you to become technical. What it does require is:
Comfort in experimentation
Thoughtfulness (and appropriateness) in the prompt structure
Discipline to verify
A bias toward turning experiments into habits
The humility to treat outputs as drafts, not truth
A Short Note for Leaders
If you lead a team, your most impactful move is to make “responsible practice” the norm. You can do this by taking four simple actions:
Publish guardrails people can actually follow - what’s off-limits, what requires review, what’s fair game
Create a safe space for experimentation - off-limits categories (if any), anonymized data, etc.
Reward small, verified wins tied to outcomes - e.g. a better supplier question, a faster risk assessment, a key nugget of insight that moved a conversation or deal forward, etc. - and not “I used the AI tool”
Make sharing the norm - via regular forums where people show what worked, what didn’t, and what they learned
The point here is to give your people clarity, safety, and permission to practice.
What To Do This Week
If you’re reading this and thinking, “OK - where do I start?”, here’s one suggestion:
Pick one lane
Run two reps this week
Start your AI Wins Log
Share one safe win with someone on your team
That’s literally it - just take simple steps to start becoming the kind of practitioner who can work with these tools, and incorporate them into your workflow.
In a post-AI world, confidence isn’t a result of the tech, but of your ability to work with it - safely, consistently, and with (your) judgment.



