
### Teens Acknowledge Parents’ Superior AI Knowledge for Now


This year has seen extensive debate about AI, focused primarily on the perspectives of companies, governments, and researchers. However, it is equally important to consider the viewpoints of children and parents in this discourse.

A recent study by researchers at Kantar and the Family Online Safety Institute delved into the shifting “habits, expectations, and fears” surrounding AI in the U.S., Germany, and Japan. The findings, released last month, showed that while most parents were comfortable with their teenagers using generative AI, they also harbored concerns about the associated risks.

Overall, Germany exhibited the highest level of positive sentiment at 70%, followed by the United States at 66% and Japan at 59%. Conversely, Japan had the largest share of respondents with a negative outlook, with 38% expressing such sentiments compared to 29% in the United States and 27% in Germany. A common priority identified by parents in all three nations was the need for data transparency to address prevailing concerns, a sentiment echoed by teenagers in the U.S. and Germany.

When questioned about their use of genAI tools, most parents and teenagers said they employ them for research. Teenagers leaned toward efficiency-driven uses like grammar checks, while parents in all three countries leaned more toward creative pursuits. Both groups, however, shared apprehensions about potential job displacement and the spread of misinformation, and teenagers also voiced concerns that AI could be misused to enable new forms of bullying and intimidation.

An unexpected revelation in Kantar’s report was that teenagers perceived their parents to be more knowledgeable about AI than themselves. The finding is particularly noteworthy after two decades in which children, not their parents, have typically been the first to adopt new technologies. Kara Sundby, the lead director of Kantar’s Future Practice, suggested that parents may encounter AI more extensively at work than they ever did social media platforms. Alternatively, the shift could reflect families’ deliberate efforts to practice “responsible learning” amid rapid digital change.

Sundby noted, “Parents are leveraging this technology for their own benefit, unlike the swift adoption of platforms like TikTok and Snapchat.” She described the change as a shift toward practical utility.

Despite the optimism unveiled by Kantar’s findings, a separate survey by Braze in the United States and United Kingdom revealed that approximately half of consumers harbor concerns about brands mishandling their data. Only 16% expressed complete or high confidence in brands’ responsible data usage. Furthermore, a mere 29% of respondents felt comfortable with brands leveraging AI to personalize experiences, with 38% remaining skeptical and 33% undecided.

Ashley Casovan, managing director of the AI Governance Center at the International Association of Privacy Professionals, emphasized the importance of understanding how society uses AI and what its impacts are. During her earlier tenure at the Responsible AI Institute, she surveyed Canadians about their perceptions of AI: 71% of respondents said AI should be developed collaboratively with “everyday people,” and 82% underscored the need to build in ethical considerations to mitigate potential harms. Casovan also served in the Canadian government, where she helped formulate the nation’s first policy on governmental AI deployment, experience that reinforces the value of understanding diverse perspectives on AI applications.

That understanding, Casovan stressed, is especially urgent at the governmental level, where literacy about how AI is deployed and what its societal implications are is essential for informed decision-making.

Reports on AI:

  • OpenAI underwent a dramatic leadership shakeup: the board removed CEO Sam Altman, and CTO Mira Murati assumed the interim CEO role. In the blog post detailing the reshuffle, OpenAI said co-founder and president Greg Brockman would step down as board chairman; Brockman subsequently announced his resignation from the company. The board attributed the changes to a lack of candor on Altman’s part that impeded its oversight duties.

  • Technology giants such as Google, Meta, IBM, and Shutterstock are rolling out new AI policies and tools to bolster trust, mitigate risks, and ensure legal compliance.

  • A new bipartisan bill introduced in the U.S. Senate, the Artificial Intelligence (AI) Research, Innovation, and Accountability Act, aims to strengthen transparency and accountability in AI practices. Concurrently, the FTC launched a Voice Cloning Challenge to address the risks of AI-enabled voice cloning and raise awareness of related harms.

  • Major industry players like News Corp and IAC have expressed discontent with generative AI companies that scrape their content without appropriate compensation.

  • A key figure at Stability AI resigned over copyright concerns about the company’s model training practices. Ed Newton-Rex, who led Stability’s audio efforts, cited ethical qualms about training generative AI models on copyrighted material without permission. Stability AI has faced legal challenges on this front, including lawsuits from Getty Images and a group of artists.

  • Generative AI continues to feature prominently in corporate earnings reports. Companies such as Getty Images, Visa, Chegg, Alibaba, and Tencent have pointed to the surge in demand for video advertising and to new uses of generative AI tools for enhancing ad content.

Products and Updates:

  • Microsoft unveiled several AI updates at its Ignite conference, including expanded support for OpenAI’s GPT models, upgraded data protection features, additional plugins, and Copilot Studio, a tool that helps users build personalized, low-code copilots. The latest iteration of Bing Chat, now rebranded as Copilot, becomes generally available on Dec. 1, accompanied by a report outlining how copilots drive productivity and creativity. Microsoft also announced Baidu as its latest Chat Ads API partner.

  • Google is experimenting with new generative AI music tools, including a feature that lets users create unique soundtracks for YouTube Shorts from text prompts. In a recent demo, the company showcased collaborations with musicians such as Charlie Puth, Demi Lovato, T-Pain, John Legend, and Sia, who contributed their music to the project.

  • IBM introduced watsonx.governance, a governance toolkit for its watsonx AI portfolio, designed to help clients identify potential risks, anticipate future challenges, and monitor factors like bias, accuracy, fairness, and privacy.

  • Ally Financial released preliminary findings from a generative AI experiment leveraging a large language model, citing significant time savings in campaign production and gains in operational efficiency.

  • Companies are also forging new partnerships in response to the burgeoning generative AI landscape. Getty Images announced the integration of its new AI image generator into Omnicom’s Omni data orchestration platform, and Stagwell unveiled a collaboration with Google to bring the tech giant’s AI capabilities into the Stagwell Marketing Cloud.
