sailing on their ego trips
blast off on their spaceships
million miles from reality
no care for you, no care for me
https://tidal.com/browse/track/575795
GM
could LLM custodians train models to be purposely inefficient so you have to use more tokens to get a desired result?
or is it totally up to the prompter to craft efficient prompts?
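for what it's worth, here's a minimal sketch of the prompter-side half of that question (assuming OpenAI's open-source tiktoken tokenizer and its cl100k_base encoding, neither of which the messages above mention) just to show how much of the token count is already in the prompter's hands: the same request worded tersely vs. verbosely costs very different amounts of tokens before the model ever responds.

```python
# hypothetical illustration: compare token counts for a terse vs. verbose prompt
# assumes the tiktoken library and its cl100k_base encoding (an assumption, not
# anything stated in the chat above)
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

terse = "Summarize this article in 3 bullet points."
verbose = (
    "I was wondering if you could possibly take a look at the following "
    "article and, if it's not too much trouble, provide me with a summary "
    "of its main points, ideally in the form of around three bullet points."
)

for label, prompt in [("terse", terse), ("verbose", verbose)]:
    n = len(enc.encode(prompt))  # tokens the prompt alone would consume
    print(f"{label}: {n} tokens")
```

whether a provider would also tune the model itself toward longer outputs is a separate question this sketch doesn't touch.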