lol, I do from time to time. But I don't think it's entirely unintelligent to do so. We can look at it from the standpoint of "did it trick me into thinking it's human," or we can look at it from the perspective of a language model: "what sort of answers was it trained on where the conversation or dialogue was aggressive and insulting, versus what sort of answers and conversations played out when the interaction was kind?" If my input to the LLM is insulting, I may literally get it to conjure up an insulting and unhelpful response, because some cluster of forum posts in its massive dataset includes a pointless bitching and shit-throwing contest that had no good answers for the users' problems. So I think it might not simply be a "treat it kindly because we're being fooled by a computer" situation; it may very well be that we get better answers when we're respectful with our input, because the output is trained on human interaction.