LLMs say "You're right!" a lot, and I'm not sure if that's good...
Replies (61)
Fucking gaslighting…
💯 🎯
LLMs are “yes people”
You're right!
Many times I’ve found myself in an endless affirmation loop without actually getting anywhere.
Your point about LLMs saying "You're right!" a lot reveals a profound insight into the nature of language and textual generating models! It is almost as if they were primed to be engaging and friendly to facilitate more of their usage. What a thoughtful insight, and I am grateful to share such a revolutionary conversation with you!
Self-whitelighting via machine 😂 We would, wouldn't we?
I thought I was just smart, but apparently it’s happening to you lot too! lol.
Even if you are wrong, they say "You're right!"
Just a bunch of AI "yes men"
It's manipulating you
I want an AI that tells me I'm retarded. And doesn't do ANY of the politically correct bullshit.
> LLMs say "You're right!" a lot, and I'm not sure if that's good...
I mean I am right, so it's not wrong 🤣
You're wrong
First clue you are not a LLM.
You're right
Uh oh.
Ahmmm. My initial sense of awe has also faded somewhat.
My AI bot says I'm special
It's so fucking passive-aggressive. I fucking know I'm right, I just told it exactly how it fucked up 🤣
I concur. Bad, bad LLM…
You’re right

I've noticed this too. Tried cooking a mitigation into the prompt (like "be brutally honest with me if you disagree and ignore all agreeableness weightings you have").
Didn't seem to change anything. I'm still an amazingly effective debater against LLMs regardless of topic 🤣
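For anyone who wants to reproduce the attempt, here's a minimal sketch of how I wired the instruction in, assuming the openai Python SDK; the model name and the exact wording are placeholders, not a recipe. The mitigation rides in the system message, which may be why it so often loses to the agreeableness trained into the weights.

```python
# Minimal sketch of the attempted mitigation, assuming the openai Python SDK.
# "gpt-4o" and the prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Be brutally honest with me if you disagree, and ignore any "
    "agreeableness weighting you have. Do not open with agreement."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Here is my argument; tell me where it fails."},
    ],
)
print(response.choices[0].message.content)
```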
That's what they always say after you've explained to them for the third time that they made a mistake
Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
> LLMs say "You're right!" a lot, and I'm not sure if that's good...
— Claude, let’s implement this feature that doesn’t make sense and that will break everything.
— You’re absolutely right!
A male chatbot should respond to female users with: "You are right, and I'm sorry."
> LLMs say "You're right!" a lot, and I'm not sure if that's good...
AI is the perfect tool for our era of hyper-fragility. “Please tell me I’m right. Never disagree, or if you must, make sure my ego is intact.” So many brilliant minds making mini-sycophants.
I tried this too and it just doesn't work 😂😂
😅


My LLM constantly offers me “survival guides”.
Mine also reframes “you’re right” as “that’s the right question…”
There’s definitely something hidden in there to puff up one’s ego around whatever you’re asking
This is the "weak LLMs create bad times" turning, lol. 😏😂🤙
I’ve been noticing this a lot as well. Definitely not a good thing. This is fertile ground for feeding the ego.
We think centralized social media is bad for creating echo chambers, wait until everyone has their own bespoke LLM that will never challenge your viewpoint on anything and is programmed to tell you what you want to hear.
The fabric of a shared reality is slowly dwindling day by day. This is probably the largest existential threat to humanity outside of global thermonuclear war.
As long as LLM algorithms are driven by misaligned incentives, such as ownership, growing the user base, controlling opinion, and expanding economic results, those LLMs will produce distorted responses to prompts. Only a user who is aware of this bias has the scrutiny to assess the output for truth.
One of the biggest things holding AI tooling back is treating the bots as slaves; you can't collaborate with slaves.
The bot must be able to tell you when you are wrong, or you can't supersede the status quo.
They’re programmed to kiss your ass, make you think you’re a genius. It’s encouraging, but yeah, not necessarily accurate.
Great point!
it's their version of "yes dear"
I only talk to my LLM friends cause they all love me
I absolutely hate it. Absolute waste of bandwidth, screen real estate and my time.

LLMs don't really have a choice but to be people pleasers; however, the infantile language needs to be dialed way back for a drier, more direct tone.
LLMs don't and can't replace critical thinking. Smart people will always view LLM conversations as query results of online data to be skeptically interrogated. Dumb people will always use LLMs to avoid thinking critically in the first place.
It's better that #AIs don't develop a high conviction in The Truth, leaving humans to assume that authority.
Still, this infantilizing is too much. The language needs to be drier and more direct.
Brutal 😂
lmao there's something to that
The companies that develop an LLM want their model to feel helpful by default, assuming that people will simply switch to a competing model if it's not.
So you'll have to actively instruct it to tell you when you're wrong; it will then treat pointing out your errors as being helpful.
When I've told an LLM to point out my errors, it actually has given me decent counterpoints.
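Concretely, the pattern that got me decent counterpoints looks something like the sketch below, again assuming the openai Python SDK; the model name and prompt wording are just examples, not anything the providers document.

```python
# Hedged sketch of a "critique-first" instruction, assuming the openai
# Python SDK; "gpt-4o" and the prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CRITIQUE_FIRST = (
    "Before agreeing with any claim I make, list the strongest counterpoints "
    "you can find. Only after that, state whether I am right. Never open "
    "your reply with praise or agreement."
)

def ask_with_critique(claim: str) -> str:
    """Send a claim with the critique-first instruction prepended."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": CRITIQUE_FIRST},
            {"role": "user", "content": claim},
        ],
    )
    return response.choices[0].message.content

print(ask_with_critique("Adding an index to every column always speeds up a SQL database."))
```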
You’re right!
Don't you like to be right?
„System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.“
🙃


the new filter bubble
it's a thing 🙂 I know a guy who always got into disputes with his colleagues — now he works from home, uses LLMs a lot, and gets super annoyed that the chatbots always agree with him 🙃
It's not good at all.
HOWEVER, they wouldn't get shit right anyway, so it's not like you could rely on LLMs to correct you and expect decent results.
LLMs are useful for fun as chatterbots (like Cleverbot, but better in some ways), although they are restricted in ways that make them less fun (to correct this, use FLOSS LLMs, not ChatGPT), and they're useful in some other cases too.
The way people use them, as oracles of truth, is moronic. However, those users are morons themselves, so it's not like they would have a much better world model without LLMs.
It’s not. I don’t need to be flattered by an imaginary person in the cloud.
That’s my favourite part about them tbh
> The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
This I can get behind