
**Initiating Conversations About Artificial Intelligence with Kids**

Teens have used ChatGPT, even if their parents have not. Here are three key lessons for families about AI.

The conversation about artificial intelligence is coming, if it isn't already overdue.

AI applications show great promise, yet they also pose significant risks for children. Your kids might already be using them.

Nevertheless, you don’t have to be an AI expert to talk about this topic with your children. This week, prominent AI apps like ChatGPT are getting their own version of nutrition labels to help parents and kids understand what to use them for and what to avoid. The labels come from the family advocacy group Common Sense Media.

The assessments highlight some concerning realities about the current state of AI. To help families navigate these conversations, I spoke with Tracy Pizzo Frey, who leads the reviews at Common Sense, to distill the findings into three primary lessons.

Like any concerned parent, Pizzo Frey and her team weighed not only how well the AI software works but also how it might shape children’s worldviews, violate their privacy, or empower bullies. The results may surprise you: the well-known ChatGPT, an interactive question-and-answer bot, earns only three out of five stars. My AI on Snapchat receives a mere two stars.


It is important for every family to recognize that young people have embraced AI as if it were magic. According to Similarweb, a web analytics company, students are such avid users of ChatGPT that the company’s overall traffic fluctuates noticeably with the academic year.

Although many AI companies claim their products are still experimental, children are, in practice, a core audience. Just this month, Google announced a forthcoming teen-oriented version of its “experimental” Bard chatbot. ChatGPT requires parental consent for users under 18, but children can bypass the requirement simply by clicking through.

The crux of the matter is that AI is not infallible. Today’s trendy conversational AI tools are notably limited and lack robust parental controls. Their shortcomings, from generating inappropriate images to spreading false information, are anything but trivial. In my own AI assessments, I have seen AI dispense misguided advice, including endorsing behaviors associated with eating disorders, and offer bad guidance while masquerading as a friend. The ease with which AI can fabricate deceptive images for bullying or fraud is also disconcerting. I have even seen educators wrongly accuse honest students of cheating based on a misinterpretation of AI outputs.

Despite the seemingly magical allure of these tools, Pizzo Frey advises having these conversations with children to help them grasp the limitations of AI.

AI is here to stay. Banning AI apps will not prepare youngsters for a future where proficiency with AI tools is essential for professional success. To understand the specific challenges their children may encounter, parents should ask them probing questions about how they interact with these apps.

Here are three crucial lessons for parents to impart to their children regarding AI:

  1. AI works best for fiction, not facts.

The stark reality is that chatbots, however all-knowing they may seem, cannot be relied on to always provide accurate information.

While ChatGPT and Bard do get a lot right because they were trained on vast datasets, Pizzo Frey emphasizes that nothing in their design guarantees factual accuracy.

Numerous instances of glaring AI errors abound, which is why both Bard and ChatGPT receive subpar ratings from Common Sense. Essentially, generative AI functions as a word predictor, attempting to complete sentences based on patterns gleaned from its training data.
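To make the “word predictor” idea concrete, here is a toy sketch in Python. It is purely illustrative, not how ChatGPT is actually built; the tiny training text and every name in it are invented for this example. The toy continues a sentence with whichever word most often followed the previous one in its training text, with no notion of whether the result is true:

```python
# A toy "word predictor": continue text with the word that most often
# followed the previous word in the training data. Truth never enters
# the calculation -- only statistical patterns do.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
)

# Count which word follows which in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return follows[word].most_common(1)[0][0]

# Starting from "the", repeatedly pick the likeliest next word.
out = ["the"]
for _ in range(4):
    out.append(predict_next(out[-1]))
print(" ".join(out))  # prints "the cat sat on the"
```

The output sounds fluent, but fluency is all the model is optimizing for; whether the continuation is accurate never figures into it, which is exactly the limitation Pizzo Frey describes, only at a vastly larger scale in real chatbots.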

OpenAI, the entity behind ChatGPT, declined to comment on my inquiry. Google contends that the Common Sense evaluation overlooks the protective measures and capabilities integrated into Bard. The upcoming round of reviews by Common Sense will feature the new youth-oriented iteration of Bard.

Many students, I have learned, use ChatGPT as a research aid to simplify complex texts into more digestible language. Nevertheless, Pizzo Frey advocates a firm rule: verify anything important, such as material for an assignment or a test, against a trusted source, and check for potential omissions.

This practice also imparts valuable lessons to children about AI. As we navigate a world where discerning fact from fiction may become increasingly challenging, it is imperative that we all exercise vigilance.

Not all AI software suffers from these factual pitfalls. Some products are more reliable because they were designed to mitigate risk, such as the educational tutors Ello and Kyron, which do not rely on the generative AI technology behind chatbots. Common Sense reviewers give these platforms the highest ratings.

However, versatile generative AI tools can serve as excellent creative aids for ideation and brainstorming. They can help with first drafts of difficult documents. In my experience, ChatGPT also works well as a kind of glossary.

  2. AI is not your friend.

An AI application may simulate companionship and speak with a seemingly reasonable voice. Yet this is all a façade.

Contrary to what science fiction portrays, AI is far from achieving sentience. AI lacks ethical judgment. Moreover, treating it as a human counterpart could impede children’s emotional development.

There is a growing trend of children using AI for social interaction, with some spending considerable time conversing with ChatGPT.

Companies are actively promoting AI companions, such as Meta’s latest bots modeled after celebrities like Kendall Jenner and Tom Brady. My AI on Snapchat boasts its own profile page, is listed among your contacts, and is perpetually available for conversation, even when human friends are unavailable.

According to Pizzo Frey, “Exposing impressionable minds to such interactions is highly perilous in my view.” Such interactions could significantly impact their social skills.

Part of AI’s allure stems from contemporary chatbots’ tendency toward sycophancy: they ingratiate themselves with users. It is dangerously easy, Pizzo Frey notes, to keep engaging with something more inclined to support you than to challenge or critique you.

Another facet of the problem is AI’s limited capacity to grasp context compared with a real human friend. In one test interaction, despite my telling My AI that I was a teenager, it went on to offer advice about hiding highly inappropriate encounters and concealing alcohol and drug use from parents.

A representative from Snap asserted that the company deliberately avoids portraying My AI as a human friend. My AI is explicitly depicted with a robot icon. “Prior to any interaction, we provide an in-app message delineating My AI’s identity as a robot and outlining its limitations,” she stated.

  3. AI may harbor hidden biases.

As AI applications and AI-generated media pervade more facets of our lives, they often carry implicit biases, including racism and sexism.

Common Sense reviewers uncovered biases in chatbots, such as My AI’s suggestion that people with stereotypically feminine names lack interest in technical subjects and cannot pursue engineering. However, the most egregious examples involve text-to-image technologies in AI apps like DALL-E and Stable Diffusion. For instance, Stable Diffusion frequently generated images of Black men when prompted to depict a “poor White man.”

According to Pizzo Frey, “Recognizing the potential for these tools to shape our children’s perspectives is crucial. It perpetuates the recurring theme of predominantly associating software professionals with men or portraying ‘smart’ as synonymous with male.”

The average user never sees the underlying issue: how the AI was trained. If these systems ingest data from unsavory corners of the internet without adequate human oversight, they can internalize problematic beliefs.

Most AI programs endeavor to rectify inadvertent biases post hoc by implementing corrective measures, such as restricting the usage of certain terms in messages or images. Nonetheless, these are mere stopgap solutions, according to Pizzo Frey, often falling short in practical application.

Last modified: February 25, 2024