
Senate warned of ‘perfect storm’ leading to emerging AI disaster: ‘Democracy itself is threatened’

Senators on Tuesday got the green light to impose significant federal regulation on artificial intelligence systems, not just from two industry giants, but from an AI expert who warned that the fate of the nation may depend on tough AI rules from Congress.

A Senate Judiciary subcommittee heard from OpenAI CEO Sam Altman and IBM Chief Privacy & Trust Officer Christina Montgomery, who both invited federal oversight of AI even though they split on whether a new federal agency is needed. In between those witnesses sat Gary Marcus, the New York University professor emeritus and leader of Uber’s AI labs from 2016 to 2017, who issued a stark warning that human life is about to be upended by this unpredictable technology.

‘They can and will create persuasive lies at a scale humanity has never seen before,’ Marcus warned of generative AI systems. ‘Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems. Democracy itself is threatened.’

Marcus warned that AI systems that do severe damage to humans’ trust in each other have already been released and that the damage is already mounting.

‘A law professor, for example, was accused by a chatbot of sexual harassment. Untrue,’ Marcus said. ‘And it pointed to a Washington Post article that didn’t even exist. The more that that happens, the more that anybody can deny anything.’

‘As one prominent lawyer told me on Friday, defendants are starting to claim that plaintiffs are making up legitimate evidence,’ he said. ‘These sorts of allegations undermine the abilities of juries to decide what or who to believe and contribute to the undermining of democracy.’

In an era in which Washington, D.C., is increasingly worried about suicide and deteriorating mental health, Marcus said AI is making the problem worse.

‘An open-source large language model recently seems to have played a role in a person’s decision to take their own life,’ he said. ‘The large language model asked the human, ‘If you wanted to die, why didn’t you do it earlier?’ then followed up with, ‘Were you thinking of me when you overdosed?’ without ever referring the patient to the human help that was obviously needed.’

OpenAI and IBM talked at length about industry-led systems designed to make AI ‘safe,’ but Marcus dismissed those as platitudes and goals that aren’t being followed.

‘We all more or less agree on the values we would like for our AI systems to honor. We want, for example, for our systems to be transparent, to protect our privacy, to be free of bias, and above all else, to be safe,’ he said. ‘But current systems are not in line with these values.’

‘The Big Tech companies’ preferred plan boils down to ‘trust us.’ But why should we?’ he asked.

Marcus’ recipe for creating a safe AI regulatory regime includes local, national and global measures. He called for a worldwide organization to set standards that all AI systems developers must follow.

‘Ultimately, we may need something like CERN, global, international and neutral but focused on AI safety rather than high-energy physics,’ he said.

Marcus called for a new federal agency to monitor compliance – one that can review systems before they are released, assess how they perform in the real world, and recall systems that are found to be flawed.

‘A safety review like we use [at] the FDA prior to widespread deployment,’ he said when pressed for details by Sen. John Kennedy, R-La. ‘If you’re going to introduce something to 100 million people, somebody has to have their eyeballs on it.’

Marcus also called for a network of independent scientists who can review each company’s AI systems before they are released.

During the hearing, Marcus also called out Altman for what may have been an attempt to dodge a question about his biggest fear in the field. When asked, Altman talked about possible job losses, and when the question came to Marcus, he made a point of pressing Altman to answer more directly.

‘Sam’s worst fear I do not think is employment, and he never told us what his worst fear actually is, and I think it’s germane to find out,’ Marcus said.

That interjection arguably led to the highlight of the hearing as Altman admitted that he, too, is worried about the possibility of doing great harm.

‘I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that,’ Altman said. ‘We want to work with the government to prevent that from happening.’

This post appeared first on FOX NEWS
