Research Methodology
What each dimension measures, what high and low scores mean, and how dimensions work together
A single composite score reduces a complex set of responses to one number. It is useful for comparison but it obscures the underlying pattern. Two people can have identical composite scores and completely different profiles because they achieved that score through different combinations of high and low dimensions.
Dimensions are the real unit of insight. They show which specific facets of your relationship with AI are most prominent, where your strengths are concentrated, and where the gaps lie. Archetypes are assigned from the combination of dimension scores, not from the composite. Reading your dimension scores alongside your archetype gives you a more complete picture than either alone.
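To make the point concrete, here is a minimal sketch. The 0-100 scale, the dimension names, and the plain-mean composite are all assumptions for illustration, not the study's actual formula:

```python
# Two hypothetical respondents: identical composite, different profiles.
# Dimension names are paraphrased from the text; scores are invented.

def composite(scores):
    """Unweighted mean of dimension scores on an assumed 0-100 scale."""
    return sum(scores.values()) / len(scores)

person_a = {"task_exposure": 80, "skill_replaceability": 30,
            "role_evolution": 70, "org_buffer": 40}
person_b = {"task_exposure": 40, "skill_replaceability": 70,
            "role_evolution": 30, "org_buffer": 80}

# Identical composites (55.0), yet the dimension patterns — and
# therefore the assigned archetypes — are entirely different.
assert composite(person_a) == composite(person_b) == 55.0
```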
Four dimensions capture different aspects of how vulnerable a role is to AI disruption. Each is independent: a strong score on one does not predict any particular score on another.
How much of your daily work overlaps with what AI can already do.
A low score indicates production-focused work: creating first drafts, running defined processes, generating outputs from templates or existing data. These tasks sit closest to AI's current capabilities.
A high score indicates curation-focused work: evaluating, selecting, synthesising, and making judgements about quality and relevance. AI can assist but cannot substitute for the underlying evaluation.
This is the dimension with the largest weight in the Vulnerability Index (30%), because overlap with AI capability is the most direct measure of displacement risk.
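A hedged sketch of what the weighting might look like. Only the 30% Task Exposure weight comes from the text; the other weights are placeholders, and inverting each score (so that a higher index means higher vulnerability, since high dimension scores indicate resilience) is an assumption, not the study's documented formula:

```python
# Weighted Vulnerability Index sketch. Only the 30% Task Exposure
# weight is stated in the text; the remaining weights are invented
# placeholders, and the (100 - score) inversion is an assumption.

WEIGHTS = {
    "task_exposure": 0.30,         # stated: largest weight
    "skill_replaceability": 0.25,  # placeholder
    "role_evolution": 0.25,        # placeholder
    "org_buffer": 0.20,            # placeholder
}

def vulnerability_index(scores):
    """Weighted mean of inverted scores: higher index = more vulnerable."""
    return sum(w * (100 - scores[d]) for d, w in WEIGHTS.items())
```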
How easily your core skills could be automated.
A low score indicates routine, pattern-based skills: executing known processes, applying established methods, retrieving and formatting information. These skills are most accessible to AI replication.
A high score indicates novel, synthesis-based skills: connecting disparate knowledge, adapting to genuinely new situations, exercising contextual judgement. These skills are harder to replicate because they depend on understanding that goes beyond pattern recognition.
How quickly your role is evolving around AI.
A low score indicates individual execution focus: working independently on clearly defined tasks, where the value delivered is personal output.
A high score indicates coordination and enabling focus: orchestrating how others work, bridging between functions, unblocking progress for teams. These roles are reshaping faster because they sit at the intersections where AI-driven change is most visible.
How much your organisation's structure shields you from displacement.
A low score indicates explicit, documentable knowledge: information that exists in records, can be codified, and transferred. This type of knowledge is accessible to AI extraction and replication.
A high score indicates tacit, judgement-based knowledge: expertise that lives in experience and context, relational trust that has accumulated over time, and institutional understanding that cannot be captured in documentation. This knowledge is the hardest for AI to replicate and creates a natural organisational buffer.
No single dimension tells the full story; some combinations are particularly diagnostic when read together.
Three dimensions measure the depth and reach of AI adoption; a fourth captures future orientation. The first three contribute to the composite, while Future Orientation is diagnostic only.
How AI reaches your core workflows: through officially deployed platforms or through tools you select yourself.
A low score indicates embedded adoption: AI is delivered inside the tools and platforms your organisation has officially deployed. The tools were selected by IT or operations, and AI features are part of the existing workflow rather than additions to it.
A high score indicates autonomous adoption: you self-select AI tools independently, often outside officially sanctioned systems. The tools are standalone rather than built into platforms, and they extend beyond what the organisation has deployed.
Neither pole is inherently better. Embedded adoption indicates institutional momentum but may lag the frontier. Autonomous adoption indicates personal initiative but may lack the governance needed for team-level impact.
How widely AI tools are used across your activities.
A low score indicates individual impact only: AI improves your personal productivity, but its effects do not extend to how your team works together.
A high score indicates team-level impact: AI is changing how work is coordinated and shared across people, not just how individuals perform tasks.
This dimension is strongly connected to the Team Player and Power User categories. Power Users tend to score lower (individual impact); Team Players score higher (team-level reach). The gap between individual and team impact is one of the most important findings in the study: many skilled AI users have not extended their gains to their teams.
How AI fits into your existing work processes: within predictable boundaries or through adaptive, open-ended use.
A low score indicates predictable integration: AI handles defined, structured tasks with verification checklists, established use cases, and known inputs and outputs. AI is a tool for acceleration within understood boundaries.
A high score indicates adaptive integration: AI is used for exploration, reasoning, and tasks where the output is not fully specified in advance. The person is comfortable applying AI to novel situations and calibrating when to trust it.
How you expect AI to reshape your work over time.
A low score indicates technology optimism: a belief that better AI tools will resolve current problems automatically, including coordination and consistency challenges.
A high score indicates structural awareness: a recognition that AI tools work best when workflows are deliberately designed around them, and that improvements in AI capability alone will not resolve the coordination problems that arise when teams use AI inconsistently.
Future Orientation is excluded from the composite because it measures expectation rather than current practice. It is used in archetype assignment for profiles that require structural awareness as a defining criterion (for example, The Bridge Builder and The Visionary Ahead).
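The split between composite and diagnostic use could be sketched as follows. The three contributing dimension names and the threshold of 70 are hypothetical, not taken from the study:

```python
# Sketch of Future Orientation's role: excluded from the Study 2
# composite but used as a gating condition for archetypes that
# require structural awareness. Dimension names, the plain-mean
# composite, and the threshold are all assumptions.

CONTRIBUTING = ("adoption_mode", "impact_reach", "integration_style")

def study2_composite(scores):
    """Mean of the contributing dimensions; Future Orientation excluded."""
    return sum(scores[d] for d in CONTRIBUTING) / len(CONTRIBUTING)

def has_structural_awareness(scores, threshold=70):
    """Gate used for profiles such as The Bridge Builder (threshold invented)."""
    return scores["future_orientation"] >= threshold

profile = {"adoption_mode": 60, "impact_reach": 75,
           "integration_style": 65, "future_orientation": 80}
```

Changing `future_orientation` moves the archetype gate without moving the composite, which is the sense in which the dimension is diagnostic rather than scored.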
Three friction dimensions measure different types of organisational barrier. Study 3 uses a different scoring logic from the other two studies: tradeoff pairs pit friction types against each other rather than comparing two poles of the same dimension. As a result, the dimensions in Study 3 are not independent. They share variance: a strong response toward one friction type in a given pair reduces the score of the competing friction type in that pair.
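The forced-tradeoff logic can be illustrated with a toy scorer. The pairs, the point budget, and the responses below are invented for illustration, not the study's actual instrument:

```python
# Toy model of tradeoff-pair scoring: each pair splits a fixed budget
# of points between two competing friction types, so favouring one
# necessarily lowers the other. All values here are illustrative.
from collections import defaultdict

BUDGET = 10  # points split per pair (assumed)
responses = [                        # (friction_a, friction_b, points_to_a)
    ("activation", "knowledge", 7),
    ("knowledge", "decision", 4),
    ("decision", "activation", 6),
]

totals = defaultdict(int)
for a, b, pts_a in responses:
    totals[a] += pts_a
    totals[b] += BUDGET - pts_a      # points denied to one go to the other

# The grand total is fixed by design, so no respondent can push all
# three friction scores to the maximum simultaneously.
assert sum(totals.values()) == BUDGET * len(responses)
```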
Barriers to getting started: the coordination overhead before work can begin.
A low score means starting work is straightforward: approvals are accessible, access to the people and information you need is available without significant overhead, and handoffs between people move quickly.
A high score means work regularly stalls before it begins: waiting for approvals, chasing people for responses, assembling the permissions and inputs needed before a task can start. Time is lost before any actual work happens.
Activation Friction is addressed in Chapter 1 of Proxy.Me under the concepts of coordination tax and activation energy. It is the friction type most directly addressed by AI agents that can handle handoffs and coordination autonomously.
Gaps in accessible knowledge: information and expertise that exist but cannot be reached.
A low score means information is well-organised and findable: documentation is current, institutional knowledge is accessible, and expertise is not locked in individuals.
A high score means critical knowledge is inaccessible: it exists either as undocumented expertise in specific people (the "Ask Maya" problem) or as scattered documentation with no coherent structure that makes information findable when needed.
Knowledge Friction is addressed in Chapter 2 of Proxy.Me. It is the friction type most directly amplified by AI: AI tools that cannot access well-structured knowledge cannot apply it, and an organisation with severe Knowledge Friction will find its AI deployments underperforming expectations.
Organisational constraints on decisions: reasoning that is not recorded and decisions that get revisited.
A low score means decisions are durable: the reasoning behind major choices is documented, stakeholders are included before decisions are made, and settled questions stay settled.
A high score means decisions disappear: reasoning is never recorded, decisions are reopened when new stakeholders appear or when the original decision-makers move on, and conflicting directions are common. Work done on the basis of a previous decision is regularly undone when that decision is relitigated.
Decision Friction is addressed in Chapter 3 of Proxy.Me. It is the friction type that AI is least equipped to resolve on its own: AI can help record decisions, but it cannot substitute for the organisational practices that make decisions durable.
Because tradeoff pairs in Study 3 pit friction types against each other, the dimensions are correlated. A person cannot easily score extremely high on all three simultaneously: each pair forces a relative comparison. However, it is common for two dimensions to be elevated together.
Every archetype is defined by a specific pattern of dimension scores. Archetypes are not assigned from the composite: they are assigned from the combination of individual dimensions, often with specific threshold conditions on two or more dimensions simultaneously.
This is why the composite can be misleading in isolation. A person with a Study 1 Vulnerability Index of 55 could be The Efficient Amplifier (moderate-high vulnerability, task exposure below 50), The Acceleration Navigator (high vulnerability but focused on speed as the response), or The Judgment Concentrator (mid vulnerability with a strong Skill Replaceability score). Reading the composite alone misses this distinction.
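The same point in code: a toy assignment function showing two profiles that share a composite of 55 but land in different archetypes. The archetype names appear in the text; the thresholds, dimension names, and plain-mean composite are invented for the sketch:

```python
# Hypothetical sketch of archetype assignment: rules test individual
# dimensions, not the composite. Thresholds are NOT the study's cutoffs.

def composite(dims):
    return sum(dims.values()) / len(dims)

def assign_archetype(dims):
    if dims["task_exposure"] < 50 and dims["skill_replaceability"] < 70:
        return "The Efficient Amplifier"      # task exposure below 50
    if dims["skill_replaceability"] >= 70:
        return "The Judgment Concentrator"    # strong Skill Replaceability
    return "The Acceleration Navigator"

a = {"task_exposure": 40, "skill_replaceability": 55,
     "role_evolution": 60, "org_buffer": 65}
b = {"task_exposure": 55, "skill_replaceability": 75,
     "role_evolution": 45, "org_buffer": 45}

# Same composite (55.0), different archetypes.
assert composite(a) == composite(b) == 55.0
assert assign_archetype(a) != assign_archetype(b)
```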
For the full set of dimension conditions that define each archetype, see the Archetypes page, or explore individual archetype pages on the research platform.