An article from Data Science Central on regulating Artificial Intelligence and the US government's new "Federal Automated Vehicle Policy."
Written by William Vorhies.
Just today (9/6/17) the US House of Representatives released its 116-page “Federal Automated Vehicles Policy”. This still has to be reconciled and approved by the Senate, but word is that shouldn’t take long. Equally interesting, just two weeks ago the German federal government published its guidelines for Highly Automated Vehicles (HAV being the new name of choice for these vehicles).
There are very few circumstances in which we would welcome government regulation of an emerging technology like AI, but this is one of those times.
Why We Welcome Regulation
First, of all the uses to which AI will be put, HAVs have the most potential to enhance or harm our lives. Industrial robots, chatbots, and robot vacuums don’t concern us. Perhaps the next most critical application will be drones for delivery or even transport, but we’re not yet close enough for that to be a concern.
HAVs done right will be a boon to productivity and cost reduction. Released too soon, they may be more safety menace than accident reducer, and may alienate potential future customers.
Given the level of regulation our conventional cars and trucks receive at both the state and federal levels, regulation of HAVs is inevitable. The balance to be struck is a hand light enough not to slow innovation and deployment, while giving the newly chauffeured public enough confidence to begin adopting.
HAV adoption is not a slam dunk. Gartner’s ‘Consumer Trends in Automotive’ reports that a recent survey in the US and Germany showed that 55% of the sample would not ride in a fully automated car. However, 70% said they’d be willing if the car were only partially autonomous.
Frankly, my take is that the general public is rightly discounting a lot of the press hype and needs the reassurance some regulation can provide.
The US Approach
The policy published by the NHTSA is actually a breath of fresh air in the world of regulation. At least 20 states already have regulations on their books to control HAVs, and the impediment to innovation caused by this balkanization was on the verge of becoming overwhelming.
So the Fed has reserved for itself the issues of setting overall safety and performance criteria while leaving with the states those functions that were always theirs:
- Licensing (human) drivers and registering motor vehicles;
- Enacting and enforcing traffic laws and regulations;
- Conducting safety inspections, where States choose to do so; and
- Regulating motor vehicle insurance and liability.
If anything, this leaves the states with plenty of room to diverge, especially around who is to be licensed (manufacturer, owner, driver) when no licensed active driver is required to be aboard, and similarly who pays (liability) in the case of an accident.
What states are specifically prohibited from doing is regulating performance, which is reserved for the Fed.
On the six-point automation scale, in which 0 is no automation and 5 means the automated system can perform all driving tasks under all conditions, the new policy applies to levels 3 and higher (though the broad standards also apply to the partial automation of levels 1 and 2). Level 3 is roughly where Tesla currently operates (or is rapidly approaching): performing some of the tasks some of the time, with the human on alert to take over.
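The six-point scale and the policy’s coverage can be sketched as a simple lookup. This is only an illustration: the level descriptions are paraphrased, and the `policy_applies` helper is an invented name, not anything from the policy document itself.

```python
# Illustrative sketch of the six-point automation scale (levels 0-5).
# Descriptions are paraphrased summaries, not official wording.
AUTOMATION_LEVELS = {
    0: "No automation: the human performs all driving tasks",
    1: "Driver assistance: a single task (steering OR speed) is assisted",
    2: "Partial automation: steering and speed combined; human monitors",
    3: "Conditional automation: system drives; human must be ready to take over",
    4: "High automation: system drives itself within a defined domain",
    5: "Full automation: all driving tasks, under all conditions",
}

def policy_applies(level: int) -> bool:
    """True if the new policy applies in full (levels 3 and higher)."""
    if level not in AUTOMATION_LEVELS:
        raise ValueError(f"unknown automation level: {level}")
    return level >= 3

print(policy_applies(2))  # False: levels 1-2 get only the broad standards
print(policy_applies(3))  # True: conditional automation and up
```

The `>= 3` cutoff encodes the article’s reading that levels 1 and 2 fall only under the broad standards, while levels 3 through 5 get the full guidance.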
The new policy document offers guidance in four areas:
- Vehicle Performance Guidance for Automated Vehicles
- Model State Policy
- NHTSA’s Current Regulatory Tools
- New Tools and Authorities
The Good News Is This
The Fed does not propose any specific performance standards, though it reserves the possibility of doing so in the future. The general guidance is that deployed HAVs driven by the public must meet or exceed current vehicle and safety standards.
Manufacturers self-certify compliance across a list of about 15 major areas:
- Data Recording and Sharing
- System Safety
- Vehicle Cybersecurity
- Human Machine Interface
- Consumer Education and Training
- Registration and Certification
- Post-Crash Behavior
- Federal, State and Local Laws
- Ethical Considerations
- Operational Design Domain
- Object and Event Detection and Response
- Fall Back (Minimal Risk Condition)
- Validation Methods
No specific performance requirements are imposed beyond the general one that HAVs be safer than non-HAVs. This allows the greatest flexibility for innovation, as well as a constant stream of updates to deployed HAVs without prior government approval. Those upgrades are most likely to be delivered as software over the net, making them much more akin to a Windows Update than a trip to your local mechanic.
In return, manufacturers may each deploy 100,000 HAVs. Note that ‘deployed’ means driven by actual customers, not employees. The House version includes heavy trucks, which the Senate version does not, and the numerical limits are slightly different, but the balance of the policy is essentially the same in both versions.
There are on the order of 35 companies (including some component suppliers) currently testing HAVs, meaning that in just a few years we could have a test bed of 3 million or more HAVs on which to perfect our AI.
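The back-of-the-envelope arithmetic behind that estimate, assuming every one of the roughly 35 companies deploys its full 100,000-vehicle allotment (an upper bound, since not all will):

```python
# Rough upper bound on the deployed HAV test bed:
# ~35 companies testing, each capped at 100,000 deployed vehicles.
companies = 35
per_company_cap = 100_000
fleet_cap = companies * per_company_cap
print(fleet_cap)  # 3500000, consistent with "3 million or more"
```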
The German Approach
So far the approach proposed by Germany has an even lighter hand (though that may change), but a different focus: the moral and ethical ramifications of HAV operation.
This has taken the form of a report from the Ethics Commission on Automated Driving, presented by Federal Minister Alexander Dobrindt and adopted by the Cabinet. Per Dobrindt:
In the era of digitization and self-learning systems, the interaction between man and machine raises new ethical questions. Automated and networked driving is the latest innovation in which this interaction applies in full. The Ethics Commission at the BMVI has done truly pioneering work and has developed the world’s first guidelines for automated driving. We are now implementing these guidelines, and thus remain an international pioneer in mobility 4.0.
The full report covers 20 points, of which these are the key ones:
- Automated and networked driving is ethically necessary if the systems cause fewer accidents than human drivers (positive risk assessment).
- Property damage takes precedence over personal injury: in the event of danger, the protection of human life always has top priority.
- In the case of unavoidable accidents, any distinction between people based on personal characteristics (age, sex, physical or mental constitution) is impermissible.
- In any driving situation, it must be clearly defined who is responsible for the driving task: the human or the computer.
- Who is driving at any given moment must be documented and recorded (e.g. to clarify possible liability issues).
- Drivers must in principle be able to decide for themselves whether their vehicle data are passed on and used (data sovereignty).
The core of this guidance is that in the event of an unavoidable accident, priority must be given to humans over animals or property. Most importantly, the HAV must not base any judgement on characteristics of the occupants or others involved, including weighing the number of passengers against the number potentially injured. Specifically called out are age, sex, and physical or mental constitution.
Many readers will recognize this as the old Trolley Problem, in which the operator must make a last-second judgement about who and how many people will be injured versus saved. As any first-year philosophy student has discovered, there are no good answers. The Germans have taken the position that the AI must not be designed to make this decision. Unfortunately, they are silent on who will make the decision, or how, when the AI is in charge.
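One way to read the German rule is as a hard constraint on the decision system’s inputs: prohibited personal characteristics simply must never reach the part of the software that compares outcomes. A minimal sketch of that idea follows; the function and field names are hypothetical, not anything from the guidelines themselves.

```python
# Hypothetical sketch: the German guidelines forbid weighing personal
# characteristics (age, sex, physical or mental constitution) in an
# unavoidable-accident decision. One way to enforce that in software is
# to strip those fields before any outcome comparison can see them.
PROHIBITED_ATTRIBUTES = {"age", "sex", "physical_constitution", "mental_constitution"}

def sanitize_for_decision(person: dict) -> dict:
    """Return a copy of a person record with prohibited attributes removed."""
    return {k: v for k, v in person.items() if k not in PROHIBITED_ATTRIBUTES}

# Only non-personal data (here, a position) survives the filter.
pedestrian = {"position": (12.0, 3.5), "age": 8, "sex": "F"}
print(sanitize_for_decision(pedestrian))  # {'position': (12.0, 3.5)}
```

Filtering at the input boundary, rather than trusting downstream logic to ignore the fields, is the usual way such prohibitions are made auditable; it leaves open exactly the question the article raises, namely what the system should do once those inputs are gone.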
The full article is available on the Data Science Central website.