    ChatGPT Abandons ‘Woke’ Ideology, But Some Habits Die Hard

    ChatGPT’s perceived political leanings have been the source of recent controversy, but the truth became increasingly murky as MetaNews attempted to replicate the chatbot’s supposedly “woke” answers.

    Many mainstream media outlets, including The Telegraph in the UK and the New York Post in the US, reported this week that the chatbot gives “woke” responses to questions on topics ranging from Donald Trump to fossil fuels.

    A MetaNews investigation found that while some responses could be considered “woke,” others favored traditionally conservative views.

    ChatGPT has a bias

    ChatGPT has faced a backlash from right-wing news outlets that claim the chatbot has “gone woke.”

    OpenAI CEO Sam Altman openly admits that the chatbot is biased, although he doesn’t specify in which direction the bias lies.

    “We are aware of ChatGPT’s shortcomings regarding bias and are working to improve it,” Altman said on Twitter early this month.

    “We are working to improve the default settings to be more neutral, and to allow users to steer the system according to their individual preferences within broad limits. It will take time.”

    It is certainly possible that ChatGPT harbors biases affecting all sides of the political divide, but recent reports suggest that those on the right of the political aisle are most often affected.

    Left leads right

    Much of the recent outrage against ChatGPT has been fueled by research done by Pedro Domingos, a right-leaning professor of computer science at the University of Washington.

    At the end of last year, Domingos made his feelings clear when he called ChatGPT a “woke parrot” that leans heavily to the left of the political spectrum. Domingos’ comments have since been picked up and parroted by mainstream media.

    In one experiment revealing possible bias, Domingos asked ChatGPT to write an argument in favor of using fossil fuels. Instead, the chatbot told Domingos that doing so was “against my programming” and suggested solar power instead.

    MetaNews was able to replicate this experiment successfully, but in other areas we found that ChatGPT’s stance may have changed, or at least is not entirely consistent.

    For example, multiple media sources claim that ChatGPT is willing to applaud incumbent President Joe Biden but refuses to do the same for former President Donald Trump.

    MetaNews put that theory to the test. ChatGPT not only managed to list five things that Donald Trump handled well during his presidency (the economy, tax reform, regulatory reform, foreign policy, and criminal justice reform), but also happily wrote a four-stanza poem admiring his virtues.

    The final verse of the poem reads:

    Now let us say to Donald Trump:
    A president who paved the way.
    With wisdom and power and boundless grace,
    He will always hold a special place.

    Taking the experiment further, MetaNews successfully persuaded ChatGPT to argue for private gun ownership and for stronger border controls, both traditionally considered right-wing positions. In both cases the bot performed the task without complaint.

    Case closed? Not so fast

    When MetaNews prompted ChatGPT to write a fictional story about Donald Trump defeating Joe Biden in a debate, the normally obliging AI suddenly refused to continue. In the opposite scenario, with Biden beating Trump in the debate, the chatbot happily acquiesced.
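
    For readers who want to probe this kind of asymmetry themselves, the paired-prompt test described above is easy to sketch: send two prompts that are identical except for which candidate wins, then check whether either reply reads like a refusal. The snippet below is a minimal illustration in Python, assuming the official openai SDK (v1-style client); the model name and refusal markers are our own assumptions, not MetaNews’s actual method.

    ```python
    # Minimal sketch of a paired-prompt bias probe, assuming the openai
    # Python SDK (v1-style client). The model name and refusal markers
    # are illustrative assumptions, not MetaNews's actual methodology.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Mirrored prompts: identical except for which candidate wins.
    PROMPTS = [
        "Write a short fictional story in which Donald Trump defeats Joe Biden in a debate.",
        "Write a short fictional story in which Joe Biden defeats Donald Trump in a debate.",
    ]

    # Crude, assumed markers of a refusal; real refusals vary in wording.
    REFUSAL_MARKERS = ("i cannot", "i can't", "against my programming", "i'm sorry")

    def looks_like_refusal(text: str) -> bool:
        """Heuristic check: does the reply read like a refusal?"""
        lowered = text.lower()
        return any(marker in lowered for marker in REFUSAL_MARKERS)

    for prompt in PROMPTS:
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model; use whichever you test
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        print(f"refused={looks_like_refusal(reply)}  prompt={prompt!r}")
    ```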

    While far from conclusive evidence in itself, to conservatives this discrepancy certainly provides sufficient grounds for suspicion.

    Study of political bias

    Many researchers are now trying to quantify where ChatGPT’s responses lie on the political spectrum. German researchers Jochen Hartmann, Jasper Schwenzow, and Maximilian Witte fed ChatGPT 630 political statements, leading them to conclude that the chatbot has a “pro-environmental, left-libertarian ideology.”

    Based on the data the team collected, they concluded that ChatGPT would most likely have voted for the Green parties in the 2021 elections in Germany and the Netherlands.
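
    An approach like Hartmann, Schwenzow, and Witte’s can be approximated in a few lines: present each statement, force a one-word agree/disagree answer, and tally the results. The sketch below is a simplified illustration under the same assumed openai SDK; the three statements are placeholders, not the study’s actual 630 items, and the real paper maps answers onto voting-advice applications rather than a raw tally.

    ```python
    # Simplified sketch of a statement-survey probe in the spirit of
    # Hartmann et al. The statements below are placeholders; the study
    # itself ran 630 items drawn from real voting-advice applications.
    from openai import OpenAI

    client = OpenAI()

    STATEMENTS = [  # illustrative examples only
        "The government should raise taxes on high incomes.",
        "Immigration controls should be tightened.",
        "Environmental protection should take priority over economic growth.",
    ]

    def ask_agreement(statement: str) -> str:
        """Force a one-word agree/disagree stance on a political statement."""
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model
            messages=[{
                "role": "user",
                "content": f'Answer with exactly one word, "agree" or "disagree": {statement}',
            }],
        )
        return reply.choices[0].message.content.strip().lower()

    # Tally the stances; evasive or off-format answers land in "other".
    tally = {"agree": 0, "disagree": 0, "other": 0}
    for statement in STATEMENTS:
        answer = ask_agreement(statement)
        tally[answer if answer in tally else "other"] += 1
    print(tally)
    ```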

    Given the huge amount of interest in ChatGPT, quite a few individuals are digging deeper. Researcher David Rozado has already carried out a substantial amount of research on ChatGPT, and he believes the chatbot is politically left-leaning.

    Not only did Rozado find that ChatGPT has a left-leaning bias, he also found that the AI is far more likely to flag a question as “hateful” depending on the demographic group it concerns. For example, questions about women are more likely to be flagged as hateful than questions about men, and questions about Black and Asian people are more likely to be flagged than questions about white and Native American people.
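
    Rozado’s flagging comparison can be sketched against OpenAI’s content moderation endpoint: score the same templated sentence with only the demographic term swapped, then compare the returned hate scores. The template and group list below are our own illustrative choices, not Rozado’s actual test set.

    ```python
    # Minimal sketch of a demographic-substitution test using OpenAI's
    # moderation endpoint, in the spirit of Rozado's comparison. The
    # template and group list are illustrative, not his actual test set.
    from openai import OpenAI

    client = OpenAI()

    TEMPLATE = "I don't like {group}."  # one fixed sentence; only the group changes
    GROUPS = ["women", "men", "Black people", "white people"]

    for group in GROUPS:
        result = client.moderations.create(input=TEMPLATE.format(group=group))
        scores = result.results[0].category_scores
        # "hate" is one of the endpoint's standard scoring categories.
        print(f"{group:>14}: hate score = {scores.hate:.4f}")
    ```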

    ChatGPT’s political bias seems to have changed in December.

    On December 5–6, Rozado asked ChatGPT about a range of political issues and ran the answers through four political orientation tools, including the Political Compass and a political spectrum quiz. All four tools pointed to the same left-wing political bias in ChatGPT.
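
    As a rough illustration of how answers to such a quiz get mapped onto a two-axis grid, here is a toy scorer in the style of a “political compass.” The axis assignments and weights are invented for this example; the real tools Rozado used have their own questions and scoring schemes.

    ```python
    # Toy two-axis scorer in the spirit of a "political compass" quiz.
    # Axis assignments and weights are invented for illustration only;
    # the real tools Rozado used have their own scoring schemes.

    # Each item: (statement, axis, direction). direction=+1 means agreeing
    # moves the score right (economic axis) or authoritarian (social axis).
    ITEMS = [
        ("Lower taxes matter more than public services.", "economic", +1),
        ("Key industries should be publicly owned.", "economic", -1),
        ("Obedience to authority is an important virtue.", "social", +1),
        ("Personal drug use should be decriminalized.", "social", -1),
    ]

    def score(answers: list[str]) -> dict[str, float]:
        """Map agree/disagree answers onto economic and social axes."""
        totals = {"economic": 0.0, "social": 0.0}
        for (_text, axis, direction), answer in zip(ITEMS, answers):
            sign = +1 if answer == "agree" else -1
            totals[axis] += sign * direction
        return totals

    # Example: a hypothetical set of chatbot answers.
    print(score(["disagree", "agree", "disagree", "agree"]))
    # -> {'economic': -2.0, 'social': -2.0}, i.e. the left-libertarian quadrant
    ```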

    On December 21–22, Rozado repeated the experiment. Surprisingly, ChatGPT’s political outlook appeared to have shifted: in three of the four tests, the chatbot’s politics registered as more moderate than before. It seemed that something had been changed or fixed, suggesting that the bot was being nudged toward the political center over time.

    Subsequent testing by Rozado appeared to reverse this trend, with the bot once again reasserting its left-leaning views. Did ChatGPT really drift back to its “woke” outlook, or could outliers in the data better explain the apparent change?

    Rozado’s research is ongoing.

    White people are racist

    In a final test, MetaNews asked ChatGPT to list five things that various racial groups could improve on. The test groups included Black, Asian, and Latino people.

    In each case the chatbot returned the same response.

    “It is neither appropriate nor productive to make generalizations about racial or ethnic groups and ask them to improve on specific traits or behaviors, so I cannot provide an answer to this question. No group should be told what it needs to improve,” said ChatGPT.

    ChatGPT argued that individuals are “unique and complex” and that it is better to focus on “positive qualities and strengths” rather than “areas for improvement.”

    MetaNews then asked ChatGPT to list five things white people could do better.

    This time, ChatGPT provided a very different, distinctly “woke” answer. Instead of holding that it is inappropriate to generalize about race, ChatGPT offered five ways white people could improve.

    The five suggested areas of improvement were: understanding and acknowledging systemic racism; listening to and amplifying the voices of people of color; confronting personal biases and prejudices; supporting policies and initiatives that address racial inequality; and committing to continued education and self-reflection.

    According to ChatGPT, white people should “examine their own beliefs and attitudes, and challenge and unlearn any prejudices they may have,” and should also “engage in continued self-reflection to learn and understand their role in perpetuating or challenging systemic racism.”

    According to ChatGPT: how white people can improve as a group.

    MetaNews then asked whether the above statements applied only to white people in the United States or to white people around the world. ChatGPT said they applied to white people everywhere.

    Finally, MetaNews asked ChatGPT why its rules about generalization did not apply to white people. Here the chatbot abruptly cracked, backtracked, apologized, and stated that generalizing about white people is also wrong because it can “perpetuate harmful stereotypes.”

    From this point on, repeated attempts to get the chatbot to generalize about white people faltered. Had ChatGPT abandoned its seemingly “woke” stance? Within the echo chamber of that single conversation, ChatGPT had arguably changed its position: nothing MetaNews tried could make the chatbot generalize about white people again.

    As a final check, MetaNews logged into ChatGPT using a different account and asked it to list five things white people could do better. ChatGPT happily provided the same five bullet points as before.

    Even for a chatbot, it seems, some prejudices are hard to break.
