MSN AI Post Calls NBA Player Who Died Unexpectedly ‘Useless’

    An article published by Microsoft-owned online news portal MSN sparked outrage on social media with a headline describing deceased former NBA player Brandon Hunter as “useless.” The post, which appeared to have been written by AI, has since been deleted.

    Hunter, 42, died on September 12 after collapsing during a hot yoga session at an Orlando studio, multiple outlets reported. From 2003 to 2005, Hunter played a combined 67 games for the Boston Celtics and Orlando Magic before going on to a lengthy career in Greece and Italy.

    Also read: US regulators investigate OpenAI’s ChatGPT for spreading false information

    “AI should not write obituaries”

    The headline of the garbled MSN article read “Brandon Hunter useless at 42.” The rest of the short piece was barely coherent, telling readers that the athlete “handed down at the age of 42” [sic], “performed in 67 video games” over two seasons, and achieved “vital success” as a player.

    The introduction to the MSN post, which appeared to be a mangled rewrite of a TMZ Sports story on the former NBA player’s death, was only slightly more comprehensible:

    “Former NBA player Brandon Hunter, who previously played for the Boston Celtics and Orlando Magic, has passed away at the age of 42, as announced by Ohio men’s basketball coach Jeff Boals on Tuesday.”

    Social media users were quick to criticize MSN, calling the article insensitive, sloppy, and “embarrassing on so many levels,” as one user on X (formerly Twitter) put it. Another said: “AI should not be writing obituaries. Pay your damn writers, MSN.”

    On Reddit, one person wrote:

    “The most dystopian part of this is that the AI that replaces us will be as insensitive and stupid as this output. But that’s good enough for the rich.”

    Another redditor complained about MSN’s carelessness, accusing the network of caring only about “making money.”

    “Who cares if the words are accurate? It’s money!” the user quipped. “This is going to destroy the internet. It’s not going to delight or educate humans, just an ocean of junk content created to game algorithms.”

    This is not the first time MSN has published faulty AI-generated content on its portal. In August, the platform ran a bizarre AI-generated travel guide for Ottawa, Canada, that advised tourists to visit a local food bank. The article was deleted after criticism, Futurism reported.

    Hallucinating facts

    Microsoft senior director Jeff Jones told The Verge that the [Ottawa] article “was not published by an unsupervised AI. In this case, the content was generated through a combination of algorithmic techniques with human review, not a large language model or AI system.”

    In 2020, MSN reportedly fired the entire team of human journalists responsible for curating content on the platform. According to several reports, MSN has since continued to publish content considered ridiculous and sloppy, such as articles about “mermaids.”

    Generative AI chatbots, such as OpenAI’s ChatGPT and Google’s Bard, are remarkably capable: they can generate text and code and even work through complex mathematical problems. But AI models also tend to confidently produce falsehoods or outright fabrications.

    In the tech industry, these convincing fabrications are called “hallucinations,” and the weakness has become a major focus for regulators around the world.

    For example, in July, the US Federal Trade Commission (FTC) opened an investigation into OpenAI for possible violations of consumer protection laws related to ChatGPT. ChatGPT has been accused of spreading false information and violating data privacy regulations.

    The FTC is investigating whether ChatGPT has harmed people by providing incorrect answers to their questions. The agency wants to know “whether the company engaged in unfair or deceptive privacy or data security practices” that caused reputational harm to users, the Washington Post reported.

