Can ChatGPT Express Opinions?
This may be key to reining it in
“The reason it was boring was because it was made safe,” LeCun said last week at a forum hosted by AI consulting company Collective[i]. He blamed the tepid public response on Meta being “overly careful about content moderation,” like directing the chatbot to change the subject if a user asked about religion. ChatGPT, on the other hand, will converse about the concept of falsehoods in the Quran, write a prayer for a rabbi to deliver to Congress and compare God to a flyswatter.
— A Meta executive on why BlenderBot, the company's AI chatbot released three months before ChatGPT, never caught on (Washington Post).
The Post article referenced above goes into detail about the large tech companies scrambling to catch up to OpenAI, the ChatGPT developer funded by Microsoft. There is an exodus of AI developers moving from the big companies to more nimble start-ups. The article lists enough new projects to make me realize that we are on the cusp of a big deal, maybe a dot-com v2 (or whatever version number is most recent).
But the big questions remain.
Generative AI is still fairly primitive, and one sign of that may be that it cannot yet form interpretations on its own. That 'yet' could be quite consequential in the not-too-distant future.