ACCA is urging the UK government to put ethics, transparency, and governance at the heart of its artificial intelligence policy.
As the UK government’s AI Safety Summit meets at Bletchley Park, one of the birthplaces of computer science, to discuss how the benefits of AI can be realised, ACCA says that AI will only be viable for the long term and work for the whole of society if there is transparency around how it is used and what impact it has.
The key ethical concerns identified by ACCA research are:
- A lack of transparency and trust hindering AI adoption.
- The challenge of mitigating bias and discrimination in how AI is used.
- Privacy and security of data.
- The absence of a relevant legal and regulatory framework for issues such as liability and copyright.
- Inaccuracy and misinformation.
- The magnification effect and unintended consequences: a single AI error could be far more serious than a human error.
Jamie Lyon, ACCA’s Head of Skills, Sectors and Technology, said: “To navigate this complex landscape, individuals and organisations must understand and proactively manage these risks. Transparency will only be achieved if the policies and strategies of the organisation are designed to ensure accountability and good governance. Transparency, including disclosure of when AI is used, builds trust, ensuring this technology can be used confidently and relied on.”