Artificial intelligence or human intelligence: which is the priority?

By Russell Gammon, Chief Innovation Officer at Tax Systems.

  • Monday, 9th March 2026. Posted by Phil Alsop.

Across the finance industry, generative AI is making waves, supporting tax professionals with its automated efficiency capabilities – but what impact does this have on human cognition?

Artificial Intelligence (AI) is not the first technology to revolutionise the tax profession, but perhaps only the introduction of the spreadsheet can rival its impact, particularly if we fast forward a couple of years. AI has the potential to completely redefine the world of work as teams across the accounting sector look to integrate a wider variety of AI-powered resources. Tax and finance teams are no different. Although they are typically more cautious when it comes to technology adoption, AI has already proven its benefits to tax professionals, who are now ready to start driving true value from their investments.

In doing so, they are seeing AI reshape how work gets done by taking on the foundational and repetitive tasks that make up a large proportion of tax work. However, while this technology may facilitate a rise in productivity levels, research suggests unaided work drives higher cognitive engagement, hinting at a growing risk of “mental offshoring”. A study by MIT shows stronger neural connectivity when people work unaided than when they work with LLMs, while research from UPenn indicates that, when misused, generative AI can harm learning outcomes.

These insights raise an inevitable question – as AI increases task efficiency, is it also eroding the critical thinking that has come to define the modern tax professional?

The AI conundrum

The benefits of AI are clear, but it's up to businesses to determine if they are worth the risk. Organisations looking to implement AI systems must ensure key skills are not lost in the process.

Traditionally, tax professionals begin their careers performing tasks such as basic admin, drafting computations and transposing numbers. These repetitive, lower-complexity tasks have served as the training ground on which junior staff cut their teeth. Now that AI is growing more commonplace, much of this foundational workload is being offloaded to systems capable of accelerating mid-level tasks. While this can help new workers progress quickly to higher-value work, they can miss out on the gradual skill-building that comes with granular repetition, a process long seen as fundamental to kickstarting professional development.

As AI takes on more responsibilities, the industry could face a decline in its capabilities to properly address complex issues.

The risk to industry skills

Research shows that when workers take the time to tackle challenging mental tasks, their cognitive capability grows. As such, handing tasks to AI risks workers merely ‘skimming the surface’, rather than engaging deeply enough to genuinely learn. This is backed up by UPenn’s research, which explains that while generative AI can support learning when used correctly, it can also diminish understanding when used as a shortcut. Tax, as such a knowledge-rich industry, faces a very real risk that professionals become overly reliant on AI-generated outputs without developing the judgment needed to validate them.

To ensure this approach does not become the norm, businesses must break workflows down into their component tasks rather than applying AI broadly or indiscriminately in a rush to innovate. With those foundations in place, organisations can identify which activities can be automated, which need human oversight, and which require genuine human judgement. This approach prevents over-automation, ensuring AI is applied where it adds value, not where it undermines learning or critical reasoning.

Not every task should be automated by default – even if it may be suited to automation. If it plays a fundamental role in development, replacing the human experience with AI will only serve to hinder businesses in the long run. Using AI as an assistant rather than a replacement aids with efficiency gains while maintaining cognitive engagement. This will provide junior workers with the opportunities to build their foundational understanding, whilst still reaping the benefits that AI has to offer.

Taking a structured approach to AI implementation can help reduce the risk of ‘blind spots’ where AI outputs are accepted without relevant context or human judgement. With this task-level orchestration, organisations can ensure professionals maintain depth of expertise without losing critical thinking capabilities.

Building the ideal framework

From the outset, experts must provide direction, context and key parameters to inform the AI on how to proceed with any given task. Once AI has carried it out, handling the structured, scalable and repeatable elements at speed, human specialists then review the output, addressing any issues or discrepancies to ensure they meet the required standards.

This keeps critical thinking at the heart of the workflow while still capturing the significant productivity gains AI brings. Providing a balanced and consistent framework is vital to ensuring structured support is available for junior development, enabling workers to engage with high-level reasoning tasks. Not only does this maintain the foundation upon which tax professionals build their cognitive capabilities, but it keeps them involved in key evaluative work that builds professional judgment.

This balanced framework provides organisations with the opportunity to embrace modern technology without the risk of losing the critical thinking that underpins the industry. It enables tax professionals to deliver modernised solutions without sacrificing their chance to continue developing the skills and capabilities of new workers.
