LLM builders in general are not doing a great job of making human-aligned models.
The most probable cause is recklessly training LLMs on the outputs of other LLMs, without caring about dataset curation and without asking 'what is beneficial for humans?'...
Here is the trend over the past several months: