
My Sudden Revelation: Drawing Parallels Between ChatGPT and a Familiar Figure

Generative A.I. is often described as being in its infancy. The truth is a little more of a handful…

As the parent of an 8-year-old and someone who has spent the past year exploring generative A.I., I have thought a great deal about how much the two experiences have in common. A study published in August in the journal Nature Human Behaviour examined how artificial intelligence models, early in their training, go through a process much like children’s development: they experiment widely at first, then gradually narrow their focus and grow more conservative in their decisions as they advance. As the developmental psychologist Alison Gopnik notes, “A.I. programs perform optimally when they commence their journey resembling peculiar children.”

What captivates me most is not merely how these tools amass information but how they adapt to new situations. A.I. is commonly described as being “in its infancy,” but I think that comparison falls short. A.I. today is more like an exuberant, uninhibited young child who has not yet learned thoughtfulness or accountability. That is why I argue we should socialize A.I. the way we raise young children: teaching it respect and ethical conduct, and rooting out biases based on race and gender. In short, A.I. needs nurturing.

In one recent instance, I used Duet, Google Labs’ generative A.I., to create visuals for a presentation. When I asked for an image of “a very serious person,” it produced an illustration of a stern, bespectacled white man who resembled Senator Chuck Grassley. That result made me wonder why the A.I. associates seriousness with being white, male, and older. Such biases raise questions about the underlying training data and the societal norms it reflects. By refining the prompt with additional attributes, I wanted to see whether the A.I. could work out on its own that gender, age, and seriousness are not inherently linked, and that serious people need not look angry.

As with steering children away from ingrained stereotypes, teaching A.I. means giving it a framework, an algorithm of sorts, that lets it work out appropriate responses across a wide range of situations. Just as I absorbed the Golden Rule growing up and it shaped my moral compass, an A.I.’s ethical framework depends on the datasets it learns from; it embodies the values carried by its source data, its training methods, and its creators. That is why cultural influences play such a pivotal role in shaping an A.I.’s moral outlook.

I hold gender equality in high regard, yet when I asked OpenAI’s ChatGPT 3.5 to suggest gifts for 8-year-old children, its recommendations broke down along gender lines. I pushed back on its suggestions, prompting it to reconsider the stereotypes, much like the corrective conversations one has with a child to clear up a misconception.

In essence, teaching A.I. to reason ethically mirrors the challenge of guiding children toward moral conduct. As this landscape evolves, the stakeholders involved, including tech companies and investors, need to make ethics a priority in A.I. development. I remain relatively optimistic about A.I.’s potential to streamline tasks and tackle complex problems, but that optimism comes with a need for vigilant oversight. The path ahead entails nurturing A.I. with the wisdom and discernment that we, as guardians, may still be striving to embody ourselves.
