How AI Content Led to Reliability Rating Downgrade for CNET


    Wikipedia has downgraded CNET's reliability rating after the 30-year-old technology publication was found to have used AI to generate news articles that were plagiarized and riddled with errors.

    CNET published more than 70 financial advice articles written by artificial intelligence between November 2022 and January 2023. The articles ran under the byline “CNET Money Staff.”

    An audit confirmed that many of the articles contained factual errors, significant omissions, and plagiarized content. CNET stopped publishing AI-written articles after the news broke in early 2023, but Wikipedia editors say the damage had already been done.

    Also read: CNET suspends AI after publishing series of error-ridden articles

    CNET's AI-driven downgrade

    “CNET, usually considered an ordinarily reliable technology RS [reliable source], has started an experimental run of AI-generated articles, which are riddled with errors,” Wikipedia editor David Gerard said, as reported by Futurism.

    “So far, the experiment has not gone well. No citations to these articles have been found yet, but any that do appear in Wikipedia articles should be removed.”

    Gerard joined other Wikipedia editors in January 2023 to discuss CNET's AI-generated content. Editors of the online encyclopedia maintain Wikipedia's Reliable Sources/Perennial Sources forum, where they meet to decide whether news sources are trustworthy enough to be used for citations.

    The forum features a chart that ranks news organizations by their credibility. After hours of discussion, editors agreed that the AI-driven version of CNET was not trustworthy and downgraded the website's content to “generally unreliable.”

    “Let's take a step back and consider what we witnessed here,” said another Wikipedia editor, known as “Bloodofox.”

    “CNET generated tons of content with AI, listed some of it as written by humans (!), claimed it was all edited and vetted by humans, then, after being caught, issued several ‘corrections’ and attacked the journalists who reported on it,” they added.

    Wikipedia's perennial sources page splits CNET's reliability rating into three periods. Before October 2020, CNET was considered “generally reliable.” From October 2020 to October 2022, after the website was acquired by Red Ventures for $500 million, Wikipedia no longer considered its reliability settled.

    The third period runs from November 2022 to the present. During this period, Wikipedia downgraded CNET to a “generally unreliable” source after the website turned to AI to “rapidly generate articles filled with factual inaccuracies and affiliate links.”

    [Image: CNET reliability rating table from Wikipedia.]

    Google finds nothing wrong with AI

    According to Futurism's report, things started going downhill for CNET after Red Ventures acquired it in 2020. Wikipedia editors said the change in ownership led to “a deterioration in editorial standards,” as Red Ventures allegedly prioritized SEO over quality. And CNET wasn't the only site in Red Ventures' stable quietly experimenting with AI.

    Wikipedia editors also pointed to credibility issues at other Red Ventures-owned websites, including Healthline and Bankrate. The company's education-focused sites also reportedly ran AI-written content without public disclosure or human oversight.

    Wikipedia editor Bloodofox said: “Red Ventures has not been transparent about any of this. The company is deceitful at best.”

    In a statement responding to Wikipedia's downgrade over AI-generated content, CNET said it provides “unbiased technology-focused news and advice.”

    “We've been trusted for nearly 30 years because of our rigorous editorial and product review standards,” a spokesperson told Futurism. “It's important to be clear that CNET is not actively using AI to create new content. We have no specific plans to restart it, and any future efforts will follow our public AI policy.”

    Wikipedia's decision highlights deep-seated concerns in the media industry about the use of AI to produce articles. Google, on the other hand, has no problem with AI-generated material as long as it isn't used to manipulate its search algorithms.

    In its guidance on AI-generated content, Google says it has always “believed in the power of AI to transform the ability to deliver helpful information.”

    Google says its ranking systems focus on the quality of content rather than how it is produced, whether by humans or AI, weighing expertise, experience, authoritativeness, and trustworthiness.

    However, the company notes that using automation, including AI, to generate content whose primary purpose is to manipulate search rankings violates its spam policies.

