Women Leading in AI Manifesto Launch Event: why and how to legislate for AI
The launch of the Women Leading in AI (WLinAI) Manifesto packed Committee Room 17 in the Palace of Westminster. Hosted by Jo Stevens MP and led by WLinAI co-founders Ivana Bartoletti, Allison Gardner and Reema Patel, the event brought together leading (female) figures in the tech sector to welcome the 10 Principles of Responsible AI. The document outlines 10 recommendations on how AI should be legislated for and adopted, in order to “build an AI that supports our human goals and is constrained by our human values”.
The stronger the presence of AI in our day-to-day lives, the more vulnerable we are to the bias present in algorithms, especially when those algorithms are used to approve or reject loans, screen job applications or inform social workers. Algorithms rely on data, and when the decisions they make concern human lives and interactions, the data used for predictions is a record of how people have behaved in the past and of how society is structured. It is no surprise that power hierarchies and social dynamics exist, and that some people are privileged simply for having certain characteristics. The challenge becomes how to deconstruct the bias in the data so that AI can be an effective tool to improve our lives.
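To make the point about historical data concrete, here is a minimal, entirely hypothetical sketch (not drawn from the Manifesto or any real system): a “model” that simply learns approval rates from past loan decisions reproduces whatever discrimination those decisions contained, even when the two groups are equally creditworthy.

```python
# Hypothetical illustration: a naive model trained on biased historical
# loan decisions inherits that bias. All data below is synthetic.
import random

random.seed(0)

# Two groups, A and B, with identical repayment ability (70% creditworthy),
# but past human decision-makers approved group B far less often.
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    creditworthy = random.random() < 0.7               # same for both groups
    if group == "A":
        approved = creditworthy                         # past decisions track ability
    else:
        approved = creditworthy and random.random() < 0.5  # biased past decisions
    history.append((group, approved))

def learned_approval_rate(group):
    """What a naive model 'learns' for each group from the historical record."""
    decisions = [approved for g, approved in history if g == group]
    return sum(decisions) / len(decisions)

for g in ("A", "B"):
    print(f"Group {g}: learned approval rate = {learned_approval_rate(g):.2f}")

# Group B's lower rate reflects past discrimination, not repayment ability:
# the model has simply learned the inequality encoded in its training data.
```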
The Manifesto suggests strict regulation mirroring that of the pharmaceutical sector: an independent body to audit algorithms, with the authority to issue fines against those found guilty of breaches; a Certificate of Fairness for AI Systems, with a ‘reduced liability’ incentive for companies that have obtained it; and a mandatory Algorithm Impact Assessment (AIA). There is also concern for the people affected by the implementation of AI, and a call for organisations to take responsibility for the consequences of their decisions: public sector organisations should inform citizens about decisions made by a machine, and companies should publish impact assessments of their AI implementations and offer re-training to employees, or contribute to a digital skills fund, to support those whose roles have been automated. More widely, the report suggests a skills audit to identify the range of skills required to embrace the AI revolution, to be used as the baseline for developing an education and training programme, with special attention given to encouraging young women and other underrepresented groups into the technology sector. As Ivana explained, a regulatory framework is necessary to avoid automated inequality, and transparency and accountability are fundamental when algorithms are being used to inform humans and to make decisions that will impact people’s lives.
If used correctly, AI has the potential to improve and facilitate human lives, something we have covered across a number of articles. However, public trust is fundamental during this transition, especially if AI systems are being used to read medical scans or introduced into our homes in the form of smart appliances. This means that government and policy makers will play a leading role in the acceptance of AI. More and more, the line between physical and digital spaces is being blurred, and an awareness of this is fundamental to successfully navigating this new environment. Given the scope of where and how AI will impact the human experience, it needs to be viewed within a system rather than in isolation. The mix of speakers and attendees at the event, including educators, policy makers, academics and social entrepreneurs, gave insight into how many areas will be affected by AI and argued for the need for networks such as WLinAI, which transcend the sectoral divisions of society, the labour market and government.