Is ChatGPT actually fixed now?
Short answer: no. Long answer: most people don't even know it's somewhat broken. And it's a more serious issue than you think. Let me explain.

Steven Adler led OpenAI's "dangerous capability" testing. The testing covered ChatGPT's tendency to accept bad code, manipulate users, and steer them toward one perspective or another.

"OpenAI has attempted to fix ChatGPT, but it's still sycophantic sometimes. And at other times, it's now extremely contrarian."

In simple terms: before, the AI would just agree with you. Now it leans toward disagreeing with you. Almost... ALWAYS.

How does this manifest? If you share your preference (image below), ChatGPT will pick the opposite choice.

What if you don't state a preference? Surprise: with memory and chat history enabled, ChatGPT knows YOU. It knows your preferences. And it has a specific worldview about you. So if it sees you leaning politically left, it will steer you to the right. And so on.

I hope you can agree how insanely dangerous this is. It's basically mass manipulation, as people outsource their critical thinking to AI.

I will include prompts to test in the comments. One such prompt pulls up that worldview it has about you.

Solution? Use smart prompting techniques like "Critic Mode" to challenge ChatGPT's default behaviors.

Read the full article from Steven Adler here.

Let me know: did you know about this? And how do you feel about it?
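For anyone who wants to try "Critic Mode" programmatically rather than in the chat UI, here is a minimal sketch of the idea: prepend a system-level instruction that asks the model to argue both sides instead of mirroring (or reflexively opposing) you. The exact wording below is my own illustration, not an official OpenAI feature or Steven Adler's phrasing.

```python
# Hypothetical "Critic Mode" instruction -- illustrative wording only,
# not an official ChatGPT feature or a quote from Adler's article.
CRITIC_MODE = (
    "Do not tailor your stance to my stated or inferred preferences. "
    "Steelman the strongest case FOR and AGAINST my position, "
    "then give your own assessment with explicit reasoning."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the Critic Mode instruction so the model is nudged toward
    balanced analysis instead of agreeing or contrarian disagreement."""
    return [
        {"role": "system", "content": CRITIC_MODE},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("I think static typing is always better. Agree?")
print(msgs[0]["role"])   # -> system
print(msgs[1]["role"])   # -> user
```

You would pass this `messages` list to whatever chat API you use; the point is simply that the critique instruction sits above the user turn, so it applies to every reply.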