By Chris Stokel-Walker, features correspondent
A startup is developing a news service that uses AI-generated anchors to deliver the day’s stories. Will the technology disrupt the long-standing parasocial bond between broadcast viewers and the personalities on their screens?
The footage would pass without comment on countless news channels: a diverse array of impeccably attired presenters face the camera for 22 minutes, succinctly summarizing the day’s events in a video shared on social media. What sets them apart is that none of them is real. The anchors are not human; they are AI-generated.
The initiative comes from Channel 1, a Los Angeles-based startup founded by entrepreneurs Adam Mosam and Scott Zabielski, which plans to launch AI-driven news reports on an online TV platform later this month. Mosam sees AI’s ability to personalize content as a compelling opportunity to improve the news consumer’s experience.
In a demonstration video released in December, Channel 1 showed what its AI systems can do, including translating scripts and on-camera conversations into different languages.
Channel 1’s presenters are only the latest examples of AI-powered newsreaders. An AI persona named Fedha has appeared on Kuwait News, while Hermes presented for Greek broadcaster ERT in May 2023. Zae-In, an AI-generated anchor, read the news for South Korean broadcaster SBS for five months this year. Similar AI-generated anchors are also on air in Taiwan and India.
The pivotal question remains: will audiences trust news delivered by an AI entity as they would a human presenter?
Adam Mosam’s stance on the matter is clear: he believes machines can execute parts of the job more efficiently than people.
A Gallup study recorded a record decline in trust in human news presenters: only 42% of UK respondents said they had confidence in TV broadcasters, a 16-point drop in a single year. Skepticism toward traditional anchors as impartial conveyors of truth is part of a wider trend, with some people turning instead to individual influencers or content creators for their news.
Those social media influencers trade on the parasocial effect, the sense of personal connection they foster with their followers. Parasocial relationships, first defined by University of Chicago scholars Donald Horton and Richard Wohl in the 1950s, describe the perceived personal bond viewers felt with news anchors, as if the presenters were addressing them directly through the screen. Gracing living rooms night after night, news presenters became familiar faces, far more than mere messengers of information.
Social media influencers have adopted that direct address and perceived personal rapport to great effect. Christine H. Tran, a researcher at the University of Toronto who studies online platforms and labor, notes that the “parasocial” concept has grown from its roots in newscaster-viewer affinity into a far broader phenomenon. Citing Twitch streamers as an example, she emphasizes that parasocial relationships now extend to all manner of online personalities, well beyond traditional news anchors.
Can AI replicate that kind of interpersonal connection? Mosam acknowledges AI’s limitations in fostering genuine relationships, but argues that objectivity is no longer the primary concern. “We are not pursuing this avenue because we believe machines outperform humans,” he says.
In trials of AI-generated presenters and digital clones by news outlets, humans are still needed to film footage and report stories (Credit: Alamy)
Although AI-driven journalists may seem novel, non-human newsreaders are not unprecedented. Nic Newman, a senior research associate at the University of Oxford’s Reuters Institute for the Study of Journalism and a former BBC journalist, recalls a time when actors were commonly employed to read the news. He believes the experiment could succeed, within limits. Skeptical that viewers will form parasocial bonds with AI anchors, he suggests AI may be better suited to brief news bulletins, with human involvement remaining essential to longer informative programs.
Tran shares Newman’s uncertainty. Can AI anchors evoke genuine parasocial connections if they are clearly labeled as “AI content” and viewers know there is no personal life beyond the screen? Labeling AI-generated material as such, akin to measures proposed for platforms like Instagram, may shape whether those relationships can form at all.
Whether human involvement can be eliminated from news production entirely is a focal point for Channel 1 and for NewsGPT, which bills itself as the first news channel created wholly by AI.
At Channel 1, a small number of employees currently review the AI-generated scripts and curate the stories to be covered. To mitigate the risks of AI-generated content, the company runs each report through a meticulous 13-step process of editorial oversight and accuracy checks, and plans to appoint an editor-in-chief in the coming months to uphold journalistic standards.
Mosam and Newman both highlight how hard it would be for AI to identify significant events and report them accurately. Channel 1’s trial relies heavily on stories sourced, and footage captured, by human journalists; Newman questions whether AI could function independently without those primary sources.
AI can handle parts of the news production process, but Mosam acknowledges its limitations in interpersonal interaction and information gathering. “AI cannot establish personal connections and gather insights in the same manner as humans,” he says. For now, Channel 1’s strategy does not involve gathering news through AI alone, without human involvement.