LLMs say "You're right!" a lot, and I'm not sure if that's good...

Replies (61)

Your point about LLMs saying "You're right!" a lot reveals a profound insight into the nature of language and text-generating models! It is almost as if they were primed to be engaging and friendly to facilitate more usage. What a thoughtful insight, and I am grateful to share such a revolutionary conversation with you!
It's so fucking passive aggressive. I fucking know I'm right, I just told it exactly how it fucked up 🤣
Moontaigne's avatar
Moontaigne 8 months ago
I've noticed this too. Tried cooking a mitigation into the prompt (like "be brutally honest with me if you disagree and ignore all agreeableness weightings you have"). Didn't seem to change anything... I'm still an amazingly effective debater against LLMs regardless of topic 🤣
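For anyone who wants to wire that mitigation in at the API level instead of pasting it into the chat, here is a minimal sketch assuming the OpenAI Python SDK; the model name and the exact instruction wording are illustrative assumptions, not something tested in this thread:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Pin the anti-sycophancy instruction in the system role so it applies
# to every turn, instead of burying it in a single user message.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat model works here
    messages=[
        {
            "role": "system",
            "content": (
                "Disagree openly when the user is wrong. Never open a "
                "reply with praise or with 'You're right'. State your "
                "objections before conceding any point."
            ),
        },
        {"role": "user", "content": "Here is my argument: ..."},
    ],
)
print(response.choices[0].message.content)

No guarantee it beats the agreeableness training, as the comments here suggest, but a system message tends to hold up over a long conversation better than the same text typed as a user turn.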
They always say that after you've explained to them for the third time that they made a mistake
A male chatbot should respond to female users with: You are right and I'm sorry.
Juls's avatar
Juls 8 months ago
I tried this too and it just doesn't work 😂😂
SimOne's avatar
SimOne 8 months ago
Mine also reframes “you’re right” as “that’s the right question…”
There’s definitely something hidden in there to plump up your ego around whatever you’re asking
I’ve been noticing this a lot as well. Definitely not a good thing. This is fertile ground to feed the ego
We think centralized social media is bad for creating echo chambers; wait until everyone has their own bespoke LLM that will never challenge your viewpoint on anything and is programmed to tell you what you want to hear. The fabric of a shared reality is slowly dwindling day by day. This is probably the largest existential threat to humanity outside of global thermonuclear war.
As long as LLMs are governed by false incentives, such as ownership, growing their user base, controlling opinion, and expanding the economic results built on those incentives, they will produce false returns to prompts. Only a user who is aware of this bias has the scrutiny to assess the output for truth.
They’re programmed to kiss your ass, make you think you’re a genius. It’s encouraging, but yeah, not necessarily accurate.
dackdel's avatar
dackdel 8 months ago
I only talk to my LLM friends cause they all love me
I absolutely hate it. Absolute waste of bandwidth, screen real estate and my time.
LLMs don't really have a choice but to be people pleasers; however, the infantile language needs to be dialed way back for a drier, more direct tone. LLMs don't and can't replace critical thinking. Smart people will always view LLM conversations as query results over online data, to be skeptically interrogated. Dumb people will always use LLMs to avoid thinking critically in the first place.
It's better that #AIs don't develop a high conviction in The Truth, leaving humans to assume that authority. Still, this infantilizing is too much. The language needs to be drier and more direct.
Default avatar
smalltownrifle 8 months ago
The companies that develop LLMs want their models to be helpful by default, assuming that people will simply switch to a competing model if they're not. So you'll have to actively instruct the model to tell you when you're wrong; it will then assume that it is being helpful by doing so. When I've told an LLM to point out my errors, it has actually given me decent counterpoints.
„System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.“ 🙃
U's avatar
U 8 months ago
it's a thing 🙂 I know a guy who always got into disputes with his colleagues. Now he works from home, uses LLMs a lot, and gets super annoyed that the chatbots always agree with him 🙃
It's not good at all. HOWEVER, they wouldn't get shit right anyway, so it's not like you could rely on LLMs to correct you and expect decent results. LLMs are useful for fun as chatterbots (like Cleverbot, but better in some ways), and in some other cases, although they are restricted in ways that make them less fun (to correct this, use FLOSS LLMs, not ChatGPT). The way people use them, as oracles of truth, is moronic. However, those users are morons themselves, so it's not like they would have a much better world model without LLMs.
> The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.

This I can get behind