Domain experts should feel empowered to approach AI tools critically even if they do not fully understand the science behind them.
“Sometimes people can be really scared to appear critical about a new AI tool, because there’s always a fear that people will think you are a Luddite or that you don’t understand it or that you are anti-technology,” said Kerry McInerney, senior research fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. “But I think the best way to appreciate new technologies is to engage meaningfully with them.”
Laying out a toolkit for firms looking to adopt AI products, McInerney said that firms should first ask whether AI is the right solution to the particular problem they are facing.
“AI, it can be great, but it can also be very expensive. It can be very unsustainable to run. It can cause a lot of problems,” she explained.
Companies might approach this by asking vendors for case studies showing successful uses of their AI technology, and by asking them to explain clearly, in terms a non-expert can follow, how the technology works. McInerney added that the latter is increasingly a legal requirement in the EU.
McInerney is an AHRC/BBC New Generation Thinker and part of a leading €1.9m (£1.5m) AI ethics project. She co-leads the Global Politics of AI Project, which examines how AI is shaping international relations.