The CAV industry was shaken today by news that companies such as Tesla and Ford that design and manufacture driverless vehicles and other artificial intelligence systems could in future face multi-million-pound fines, and even jail sentences for their directors, if their technology harms workers in any way, according to the Department for Work and Pensions.
Government spokesperson Baroness Buscombe, responding to a written parliamentary question last week, confirmed that existing health and safety law “applies to artificial intelligence and machine learning software”. This offers an answer to a debate about AI liability that has been running for several years in the industry, legal, government and academic circles involved in the technology.
The Health and Safety at Work etc Act 1974 (also referred to as HSWA, the HSW Act, the 1974 Act or HASAWA) is the primary piece of legislation covering occupational health and safety in Great Britain. Under the Act, directors found guilty of “consent or connivance” or neglect can face up to two years in prison, among a range of other penalties.
This provision of the Act is hard to prosecute in its current state, as a director would have to have had direct control of the system at all points to be charged under it. Nonetheless, it has already been considered, and the expectation is that companies, start-ups and technologists who build such software or systems in future will be directly linked to, and hence responsible for, their actions.
Companies can also be prosecuted under the Act, with fines set relative to the firm's turnover, which can be significant for larger companies such as Tesla and General Motors that also operate in many other industries. Notably, if a company's turnover exceeds £50 million, fines can be unlimited, something companies and legislators need to consider carefully.
As of 2018, however, the Act has never been applied to a case involving artificial intelligence or machine learning software, so these provisions have yet to be tested in court.
In our view, however, the true importance of this announcement lies in the responsibilities the ruling gives the Health and Safety Executive (HSE), which now clearly becomes one of the numerous regulators of AI, a group that includes the Information Commissioner’s Office and the recently established Centre for Data Ethics and Innovation, among others. This assumes, for some, that our existing laws and legal frameworks are already well equipped to cope with this new technology, even though they have never been applied to it before.
This was reinforced by comments from Neil Brown, director of the legal technology firm Legal: “There is nothing magical about AI or machine learning, and someone building or deploying it needs to comply with the relevant regulatory framework.”
Many others, however, questioned the Health and Safety Executive’s knowledge of, and ability to comprehend, the complexity of the new technology. Under the current regime, testing is left to the companies themselves, an arrangement almost certain to cause confusion and prolonged court cases if it remains unchanged.
When asked, Michael Veale, a researcher in Responsible Public Sector Machine Learning at University College London, noted: “I’m sceptical both that industry’s own tests will be deep and comprehensive enough to catch important issues, and that the regulator is expert enough to meaningfully scrutinise them for rigour.” “Killer robots might be the first thing that comes to mind here,” Mr Veale added, “but less flashy systems designed to manage workers, such as to track them around the factory or warehouse floor, set and supervise their tasks, and monitor their activities in detail, can have complex mental and physical effects that health and safety regulators need to grapple with.”
We believe this is a quick fix for a bigger problem, and that the UK government has not thought through the implications well enough. By doing this, it is effectively pushing the UK out of the industry: start-ups in this field would not risk deploying or testing their products in the UK. It would also slow the entry of new mobility and autonomous providers into the UK, given the complex legislation and potential legal worries, and it would undermine the plans the government has put in place to make the UK the best test bed for autonomous vehicles. Under this new legislation, only the large OEMs and Tier 1 suppliers could afford the risk of deploying and testing CAVs in the UK, and sadly it would stifle innovation in AI and autonomous driving among start-ups and SMEs.
What are your thoughts on the above? Any comments or suggestions for something we should add to our blog?