Q1
We can clearly describe where work starts, where it ends, and what "done" means.
Q2
Ownership is clear at each stage, with no ambiguous "someone should…" steps.
Q3
Most core processes are well documented, saved in a shared place, have a clear owner, and are updated regularly.
Q4
Exceptions are defined: what happens when something is blocked, urgent, or off-process.
Q5
Approvals have clear rules: who approves what, and by when.
Q6
Reviews are captured in one place, not scattered across email, Slack, and docs.
Q7
If an approval stalls, there is a consistent follow-up method: reminders, escalation, or agreed nudges.
Q8
Handoffs between people or teams happen in a defined way, not through ad hoc messages.
Q9
The team has a reliable view of work in progress, blockers, and next steps, without status chasing.
Q10
Reporting is mostly repeatable and low-effort, not rebuilt manually every week or month.
Q11
Most people in the organisation use a project management tool as part of their day-to-day work.
Q12
The project management tool is used consistently enough that it reflects reality, not "half the work lives elsewhere".
Q13
People use more than the basics: templates, forms, automations, dashboards, integrations.
Q14
Key decisions and context live with the work (briefs, approvals, assets, links), rather than being scattered.
Q15
We can identify 1–3 recurring workflows that are stable enough to automate, with clear inputs, steps, and owners.
Q16
We know what "good output" looks like and how we would monitor accuracy over time: quality checks, error handling, maintenance.
Q17
AI is embedded in the workflow itself, not used as a standalone side tool.
Q18
AI outputs are captured and stored in the same place as the work: trackable, reusable, reviewable.
Q19
There is an agreed human review step for AI outputs where it matters, with quality control built into the flow.
Q20
The organisation actively improves AI output over time: feedback loops, prompt/version updates, examples of good vs bad output.