The Alignment Problem: Machine Learning and Human Values







Even Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified to Congress this week that AI could “cause significant harm to the world.” He noted: “If this technology goes wrong, it can go quite wrong,” manipulating people or even controlling armed drones. Given access to the internet, AI systems can already accomplish complex goals, enlisting humans to help them along the way. Chatbots like GPT-4 now test at the 90th percentile or above on a range of standardized exams and are, according to a Microsoft team (which runs a version of ChatGPT on its search site, Bing), beginning to approach human-level intelligence.


There are innumerable concerns about this rapidly evolving technology going horribly, terribly awry. Can artificial intelligence be trained to seek, and speak, only the truth? The idea seems enticing, seductive even. Earlier this spring, billionaire business magnate Elon Musk announced that he intends to create “TruthGPT,” an AI chatbot designed to rival GPT-4 not just economically, but in the domain of distilling and presenting only “truth.” A few days later, Musk purchased about 10,000 GPUs, likely to begin building what he called a “maximum truth-seeking AI” through his new company X.AI. This ambition introduces yet another vexing facet of trying to foretell, and direct, the future of AI: can, or should, chatbots have a monopoly on truth? AI chatbots present the antithesis of transparency.