Three takeaways from DTX – how is Manchester’s tech scene approaching AI?

Last year at DTX, Manchester’s flagship tech event, AI unsurprisingly dominated the conversation: what it could do, how it might transform operations, and the scale of opportunity ahead.

This year, the tone had shifted. AI is no longer being discussed as something to adopt simply to keep up with the hype. Instead, the focus has moved towards intent and accountability: why it’s being used, what problem it solves, and whether it’s being deployed in a responsible and meaningful way.

Across public sector leaders, private sector organisations and academia, three themes consistently came up at the event: trust, context, and human judgement.


1. Trust is the foundation, not an afterthought

Trust is now the defining requirement for AI adoption. It can’t be added after deployment through governance or compliance checks; it needs to be designed into systems from the outset and maintained continuously.

A practical way to think about trust is transparency. Can you explain how an AI system reached a decision? Is there meaningful human oversight in place, even when systems are operating with autonomy? As AI agents take on more responsibility, these questions become central to safe adoption.

Governance and compliance therefore need to sit across the entire lifecycle of AI systems, rather than at the end of the process. Organisations are working within evolving regulatory frameworks, including the EU AI Act and emerging UK sector-specific approaches, but regulation alone isn’t enough. Security and ethics must be embedded into everyday decision-making, not treated as separate considerations.

Trust also depends on organisational culture. Open conversations about AI risk and ethics help surface issues early, and many organisations are developing internal advocates who can support responsible use across teams and encourage consistent critical thinking. There’s also growing recognition that involving wider stakeholders – whether employees or citizens in public sector contexts – helps shape clearer and more accountable guardrails from the start.

2. Context, data discipline and guardrails matter more than the AI models

One of the clearest shifts this year is that competitive advantage no longer comes from AI models themselves. As performance converges and large language models (LLMs) become commoditised, models are increasingly interchangeable. The true differentiator now is context.

The organisations getting the most value from AI are those that can connect data, systems and intent in structured ways. Techniques such as knowledge graphs, combined with large language models, allow natural language to be converted into structured, usable information. This layered context enables systems to move beyond generic responses and operate more effectively within specific business environments.
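The idea can be made concrete with a minimal sketch. Everything below is illustrative: the triples are hand-written stand-ins for what an LLM extraction step might produce from unstructured text, and the class and entity names are invented for the example, not drawn from any particular product.

```python
# Minimal sketch: facts extracted from text form a small knowledge
# graph, which is then rendered as structured context for a prompt.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # subject -> list of (predicate, object) pairs
        self.edges = defaultdict(list)

    def add(self, subject, predicate, obj):
        self.edges[subject].append((predicate, obj))

    def context_for(self, subject):
        """Render everything known about a subject as plain text,
        suitable for prepending to a language model prompt."""
        return "; ".join(f"{subject} {p} {o}"
                         for p, o in self.edges.get(subject, []))

kg = KnowledgeGraph()
kg.add("Invoice-1042", "belongs_to", "Acme Ltd")
kg.add("Invoice-1042", "status", "overdue")
kg.add("Acme Ltd", "account_manager", "J. Smith")

# Structured context grounding the model in this business environment:
print(kg.context_for("Invoice-1042"))
# → Invoice-1042 belongs_to Acme Ltd; Invoice-1042 status overdue
```

The point is not the storage mechanism but the layering: the model answers with organisation-specific facts in front of it rather than from generic training data.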

However, this only works when the foundations are in place. A lot of organisations still underestimate the importance of understanding their existing processes before introducing AI. A phrase that resonated in one of the sessions at DTX was, “if you input chaos, your output is going to be chaos”. Mapping data ownership, access and quality is a critical first step. Without this, AI tends to amplify existing weaknesses instead of solving them.

This is where governance and consistency become essential. Fragmented tooling or inconsistent adoption can lead to duplication, risk and the emergence of shadow AI (AI tools adopted by teams or individuals without the organisation's knowledge or oversight). A clear and unified approach to data and systems helps avoid this.

The shift towards agentic AI compounds this issue. Unlike traditional robotic process automation, agentic systems are designed not only to automate tasks but to make decisions. They operate through layers of perception, reasoning and action. Without strong context and guardrails, these systems risk acting in ways that are misaligned with organisational intent.
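A rough sketch of that perceive–reason–act layering, with a guardrail inserted before anything is executed, might look like the following. All function names, actions and thresholds here are invented for illustration and do not come from any specific agent framework.

```python
# Illustrative perceive -> reason -> act loop. The guardrail blocks
# any action outside an explicit allow-list and defers to a human.
ALLOWED_ACTIONS = {"send_reminder", "flag_for_review"}

def perceive(event):
    # Normalise a raw event into structured state.
    return {"invoice": event["id"], "days_overdue": event["days_overdue"]}

def reason(state):
    # Decide on an action from the perceived state.
    if state["days_overdue"] > 30:
        return "escalate_to_legal"  # outside the agent's mandate
    return "send_reminder"

def guardrail(action):
    # Anything not explicitly permitted is routed to human review.
    return action if action in ALLOWED_ACTIONS else "flag_for_review"

def act(event):
    return guardrail(reason(perceive(event)))

print(act({"id": "INV-7", "days_overdue": 45}))  # → flag_for_review
```

The guardrail is the point: the agent can still reason freely, but its ability to act is bounded by organisational intent rather than by the model alone.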

Ultimately, AI delivers value when it is applied to real problems with strong foundations behind it.


3. Human judgement and critical thinking are the real differentiators

As AI becomes more capable, it’s changing the nature of work rather than replacing it outright. The expectation is that AI will take on repetitive and time-consuming tasks, allowing people to focus on higher-value work. This raises important questions about entry-level roles, where much of this foundational work has traditionally been concentrated. However, similar concerns have accompanied previous technological shifts, such as the introduction of Google Search, which is now fully embedded in everyday work.

The more significant shift is in skills. AI literacy is becoming essential, particularly in how problems are framed and how outputs are interpreted. Poorly structured prompts or unclear objectives consistently lead to weak results. As answers become easier to generate, the ability to ask better questions becomes more important than the answers themselves.

There’s also a growing risk of confirmation bias. If AI is used only to validate existing thinking, it can reinforce assumptions rather than challenge them. This makes critical thinking even more important. AI should always be treated as a tool, not a source of truth.

At the same time, generative AI tends to produce outputs that are “good enough”. While useful in many contexts, this creates a risk that organisations accept average quality and ‘AI slop’ without sufficient scrutiny. Human judgement is essential to recognise when higher standards are required and to refine outputs accordingly.

This is where distinctly human capabilities become more valuable. Intuition, empathy, context and meaning remain difficult for AI to interpret. As a result, creativity, ethical reasoning and imagination are becoming more important in decision-making roles. The balance is shifting away from purely technical execution towards judgement-based work in complex environments.


On to next year

This year’s event reinforced a clear direction of travel: the organisations that will see the most value from AI are those that use it to strengthen thinking, while continuing to rely on human judgement where it matters most. As Manchester’s tech community moves towards more considered and intentional adoption, there’s a real opportunity to embed AI in a way that’s responsible, ethical and genuinely useful in practice. It’ll be interesting to see how these priorities continue to evolve and where the conversation goes at DTX 2027!

