
Managing through the AI mirror

Dr Guy Bate and Dr Rhiannon Lloyd
University of Auckland Business School

In a recent book review for Postdigital Science and Education, we examined The AI Mirror by Shannon Vallor (Bate & Lloyd, 2025; Vallor, 2024). Vallor, a philosopher of technology, suggests that artificial intelligence functions not merely as a tool, but as a mirror that refracts and remakes our patterns of thought, decision-making, and self-understanding.

For managers, this is more than a philosophical observation. It has practical consequences. Generative AI (GenAI) is not a neutral add-on to existing workflows. It is a technology that intervenes in the space of thinking itself. The question is no longer just what GenAI can do, but what it makes us become.

Not a black box, but a relational device

Too often, GenAI is framed as a black box: a powerful system whose workings we cannot see but whose outputs we are urged either to trust or to distrust. This either/or posture is misleading. More usefully, GenAI can be understood as a relational technology, one that shapes and is shaped by how we interact with it. When we frame it as a thinking partner rather than an oracle, we open space for dialogue, testing, and reframing.

In this sense, GenAI is less a tool to be deployed and more a space to be inhabited. In management coaching, for example, AI can prompt alternative framings of a challenge, nudging reflection. In strategy workshops, it can help surface hidden assumptions or bring peripheral signals into view. In scenario work, it may provoke plausible futures that reorient the present.

These uses are not about removing the human. They are about reconfiguring the space in which human judgment is exercised.

Mirrors don’t just reflect; they direct

Vallor’s metaphor of the mirror is a double-edged one. AI reflects our pre-existing biases and historical patterns, but it also subtly directs how we see ourselves and others. This can be damaging when those reflections entrench narrow, exclusionary worldviews. But it can also be enabling when it creates pause, dissonance, or even discomfort.

Consider a team using GenAI to simulate stakeholder perspectives during policy development. When prompted carefully, the AI can amplify perspectives that may be underrepresented in the room. That amplification is conditioned by the prompt, the model’s training data, and the conversational flow. But it invites the team to sit, even if briefly, with unfamiliar framings.

These encounters act as disturbances. They are small, constructive frictions that interrupt habitual ways of thinking and open space for alternative positioning.

 


From automation to augmentation

There is understandable concern that GenAI will deskill employees or flatten the textures of human work. This risk becomes acute when organisations pursue automation without reflection, treating it purely as a means of cost reduction. But that is not inevitable.

If we treat GenAI as a scaffold rather than a substitute, we can design use cases that heighten, rather than erode, human discernment. For instance, auto-drafted performance reviews can prompt deeper reflection on tone, alignment, and fairness. Draft reports generated from fragmented data can help surface inconsistencies or contradictions worth interrogating. These are prompts, not verdicts. Their value lies not in what they conclude, but in how they provoke reconsideration.

In short, the introduction of GenAI should be designed not to replace reflection but to stage it.

Moral agency cannot be outsourced

One of the most important lessons from The AI Mirror is that GenAI cannot make ethical decisions. It can mimic moral language, simulate deliberation, and output reasoned-sounding responses, but it cannot care, and it cannot be accountable in any human sense.

This puts new demands on managers. Ethical discernment becomes a live organisational capability, not a compliance tick-box. When GenAI is used to inform, for example, hiring, pricing, or customer segmentation, the ethical consequences are enacted in real time, often invisibly.

Managers must resist the temptation to treat AI outputs as objective. Instead, they must ask what logics are being reinforced, who is being centred, and who is being marginalised. They must remain alert to what assumptions are being smuggled in under the guise of prediction or optimisation.

This is not a return to pre-digital ethics. It is a recognition that moral reasoning is now entangled with technical systems. Management teams are accountable for that entanglement.

Autofabrication and the future of work

Vallor introduces the concept of autofabrication: the idea that technologies do not just support what we do; they participate in who we become. If GenAI is used only for efficiency, it will cultivate a culture of speed and superficiality. If used for reflection and creative provocation, it can support depth, imagination, and relational awareness.

In practice, this reframing could mean:

  • Integrating GenAI into management development not as a knowledge source but as a dialogic partner
  • Using GenAI to facilitate team reflection sessions that surface divergent interpretations
  • Embedding GenAI into innovation processes not to generate ideas wholesale but to stretch the perimeter of current thinking

These practices are not about avoiding GenAI’s risks. They are about designing our organisational relationships with it in ways that are intentional, reflexive, and oriented to growth.



Five actions for management

To lead well in this emerging terrain, consider the following:

  1. Reframe GenAI as a space of reflection, not just function. Ask what it reveals about organisational norms and assumptions.
  2. Invest in prompting capability. How your teams ask matters as much as what the model can say.
  3. Reclaim ethical review as strategic. It is not a gatekeeper but a generator of foresight and care.
  4. Keep sensemaking human. GenAI can open up options, but it cannot decide what matters.
  5. Foster discernment. Not every manager needs to understand the model, but all need to know when to pause, question, and contextualise.

Conclusion: shaping the mirror

GenAI is not just a mirror. It is a medium. It reflects, but it also configures. For management, this means that the adoption of GenAI is not simply a technical decision or a productivity play. It is a design act. It shapes the conditions under which thought, care, and imagination unfold in organisations.

The mirror is never neutral, but it is malleable. The real question is not only what GenAI shows us, but how we respond, and who we become, as we look into it.

References

Bate, G. W., & Lloyd, R. (2025). Review of The AI mirror: How to reclaim our humanity in an age of machine thinking, by Shannon Vallor. Postdigital Science and Education. https://doi.org/10.1007/s42438-025-00546-z

Vallor, S. (2024). The AI mirror: How to reclaim our humanity in an age of machine thinking. Oxford University Press. https://doi.org/10.1093/oso/9780197759066.003.0002

About the authors

Dr Guy W Bate is Thematic Lead for Artificial Intelligence and a Professional Teaching Fellow in innovation and strategy at the University of Auckland Business School. A passionate advocate for the transformative power of AI in learning, Guy is also a member of the Board of Editors for Research-Technology Management (RTM) and the Chair of EdTechNZ’s AI in Education Technology Stewardship Group.

Dr Rhiannon Lloyd is the Director of the Kupe Scholarship Programme, a Senior Lecturer in Leadership, and a member of the Aotearoa Centre for Leadership and Governance at the University of Auckland Business School. Rhiannon’s research tends towards a critical perspective, and focuses on organisational theory, environmental and responsible leadership, and fun and play at work.