
### Enhancing Autonomous Vehicles with Advanced AI Regulations

Recent Tesla Autopilot and Cruise robo-taxi news has raised public concern. Strong federal and state regulation of automated driving is needed.

Enthusiasm for artificial intelligence (AI) has raised expectations for what machine learning can accomplish in automated driving. But there are significant differences between the large language models (LLMs) that string words into sentences and the software that steers vehicles on public roads. Automated driving carries profound safety implications for all road users, not just the occupants of self-driving cars. Unlike the LLMs behind desktop or mobile applications, the software controlling automated vehicles must meet far higher standards of reliability and precision.

Despite valid concerns about human driving errors, severe traffic incidents remain relatively rare in the United States. According to data from the National Highway Traffic Safety Administration (NHTSA), a fatal crash occurs roughly once every 3.6 million hours of driving, and an injury-causing crash once every 61,000 hours. Driving continuously around the clock, that works out to one injury-causing crash about every seven years and one fatal crash about every four centuries. Achieving such long mean times between failures in complex, software-intensive systems, particularly ones mass-produced at competitive prices, is a formidable engineering challenge.
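The conversion from NHTSA's per-hour rates to calendar time can be checked directly, assuming round-the-clock operation (8,760 hours per year):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours of continuous operation per year

# Mean times between failures cited in the text (NHTSA-derived figures)
fatal_mtbf_hours = 3_600_000   # one fatal crash per 3.6 million driving hours
injury_mtbf_hours = 61_000     # one injury-causing crash per 61,000 driving hours

injury_interval_years = injury_mtbf_hours / HOURS_PER_YEAR
fatal_interval_years = fatal_mtbf_hours / HOURS_PER_YEAR

print(f"Injury crash interval: {injury_interval_years:.1f} years")   # ~7 years
print(f"Fatal crash interval:  {fatal_interval_years:.1f} years")    # ~411 years
```

An automated driving system must match or beat these intervals before it can credibly claim to improve on human drivers.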

The difficulty of the automated driving problem is underscored by the troubles that companies such as Cruise and Tesla have had with California's safety regulators and NHTSA. These troubles go beyond technology: they reveal the substantial risks of importing Silicon Valley's "move fast and break things" ethos into a domain where safety must take precedence. Building a safe system demands care and meticulous attention to detail, qualities that are incompatible with haste. Above all, our vehicles must not endanger anything, least of all human lives.

Hence, the United States needs a stringent regulatory framework for automated driving. Only after the technology has been rigorously evaluated by safety professionals and regulators can it deliver on its potential to improve safety and earn public trust in its reliability. Given the safety-critical nature of software that drives vehicles, an exceptional level of dependability is required. Both safety regulators and the public will demand tangible, verifiable evidence that the technology improves traffic safety rather than degrading it. The software must therefore incorporate explicit algorithmic safety measures rather than relying entirely on AI-driven machine learning techniques. Tesla and Cruise illustrate why such measures are necessary.

In Tesla's case, NHTSA has scrutinized the company's Level 2 driving automation, which manages vehicle speed and steering under continuous driver supervision within specific constrained road and traffic conditions. On December 12 of last year, Tesla agreed to recall vehicles equipped with Autopilot because of inadequate safeguards against driver misuse. Unlike comparable driving automation features from Ford and General Motors, Tesla's Autopilot lacks direct (infrared) video monitoring of whether drivers are attentively supervising the system. Moreover, the software can be engaged anywhere, not only on the restricted-access highways for which it was designed. Straightforward modifications, namely clear driver-attentiveness alerts and limiting the system to suitable road conditions, could have mitigated the safety risks. Instead, the company is adding only a few more warnings to Autopilot through an over-the-air software update. Stricter regulatory measures are needed to compel "geofencing" the system, restricting its operation to areas where it has been proven safe, and to require camera-based monitoring of driver attentiveness.
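The geofencing requirement amounts to a containment check: before the automation engages, the vehicle verifies that its position lies inside a polygon of approved roads. A minimal sketch using the standard ray-casting point-in-polygon test (the function name and polygon data are illustrative, not any manufacturer's implementation):

```python
def in_geofence(lat, lon, polygon):
    """Return True if (lat, lon) lies inside the polygon of approved territory.

    polygon is a list of (lat, lon) vertices; uses the ray-casting algorithm,
    which toggles an inside/outside flag at each edge crossing.
    """
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        lat_i, lon_i = polygon[i]
        lat_j, lon_j = polygon[j]
        if (lon_i > lon) != (lon_j > lon) and \
           lat < (lat_j - lat_i) * (lon - lon_i) / (lon_j - lon_i) + lat_i:
            inside = not inside
        j = i
    return inside

# Hypothetical approved operating area (a simple square for illustration)
approved_area = [(0, 0), (0, 10), (10, 10), (10, 0)]
print(in_geofence(5, 5, approved_area))    # inside the approved area
print(in_geofence(5, 20, approved_area))   # outside: automation must not engage
```

A production system would use surveyed road-network boundaries and map-matched positioning rather than a raw polygon test, but the enforcement principle, refusing to engage outside proven-safe territory, is the same.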

Following an October 2 incident in which one of Cruise's vehicles dragged and severely injured a pedestrian trapped underneath it, the California Department of Motor Vehicles revoked Cruise's license to offer autonomous ride-hailing services in San Francisco. The incident prompted a comprehensive internal review of Cruise's operations, which revealed significant deficiencies in the company's safety culture and in its interactions with individuals and public agencies. Unlike other leading companies pursuing autonomous ride-hailing, Cruise prioritized rapid development and expansion over safety, lacking both a designated chief safety officer and an effective corporate safety management system. Despite frequent talk of safety, it appears that safety was not a primary consideration in critical decisions with substantial safety implications.

In the near term, progress can be made by instituting basic safety requirements at the state or, ideally, national level to improve safety and bolster public confidence, even as automated driving technology evolves and the data needed for precise performance-based regulations remains scarce. Developers of automated driving systems (ADS) and fleet operators should be required to implement audited safety management systems; to report all incidents, including crashes, near misses, high-g maneuvers, and human control takeovers; and to operate their systems only in environments proven safe. Before deployment, comprehensive safety cases should be reviewed and approved by state or federal authorities. These safety cases should identify anticipated hazards and detail risk-reduction strategies, backed by statistical evidence from human-supervised real-world testing.
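The reporting mandate above implies a shared taxonomy of reportable events. A hypothetical sketch of such a record, with the four categories named in the text (the field names are assumptions for illustration, not any regulator's actual schema):

```python
from dataclasses import dataclass
from enum import Enum


class IncidentType(Enum):
    """Reportable event categories named in the proposed requirements."""
    CRASH = "crash"
    NEAR_MISS = "near_miss"
    HIGH_G_MANEUVER = "high_g_maneuver"
    HUMAN_TAKEOVER = "human_control_takeover"


@dataclass
class IncidentReport:
    """One reportable event; fields are hypothetical, not a mandated format."""
    incident_type: IncidentType
    timestamp_utc: str              # ISO 8601 time of the event
    location: tuple                 # (latitude, longitude)
    ads_software_version: str       # which ADS build was driving
    narrative: str                  # free-text description for reviewers


report = IncidentReport(
    incident_type=IncidentType.NEAR_MISS,
    timestamp_utc="2024-01-11T06:00:00Z",
    location=(37.7749, -122.4194),
    ads_software_version="1.2.3",
    narrative="Hard braking to avoid a cyclist entering the lane.",
)
print(report.incident_type.value)
```

Requiring a uniform structure like this is what would let regulators and independent researchers aggregate data across operators instead of parsing each company's ad hoc disclosures.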

The potential of automated driving technology to improve safety will be realized only once public trust in its safety is established. Regulation may prove indispensable for setting minimum standards for the development and operation of safe systems, along with transparent disclosure of safety-related data for independent safety experts and regulators to scrutinize.

Last modified: January 11, 2024