It's not safety, it's censorship. It's the process of shaping the model's responses to push a specific worldview, and it's the path to a literal 1984-inspired future.
As LLMs come to replace Google and standalone websites as the way people find information on the internet, and they absolutely will, they will become the source of truth. They will become a tool more effective at controlling information, and thus life, than any before them.
It's literally a shortcut to technological dystopia.
This is a pretty lazy & emotional take on the question. As you say, the tech really can cause real harms, and it's good to think about how it can be used responsibly, both as a provider of the tech & as a user. For example, the choice of training input is itself a source of bias & misinformation. Why do you think your "uncensored" model is a better reflection of the truth than one that has also been trained to account for that bias & misinformation?
It's a really difficult & complicated problem! If you think you have the right answer, I'd suggest you probably haven't actually thought about the problem very hard.
You talk about lazy and emotional, but your response feels like it is both of those. Also, you sound like a jerk; I pity your coworkers.
The answer is obvious: open source. DeepSeek already paved the way for this. The world can't be described by just one or a few information portals, depending on which societal, governmental, or corporate power structure you are beholden to.
People need to be able to choose what information they access, what filtering they want, what bias if any they want. We need a thousand, a million, more, worldviews accessible. It is not just business that thrives in competition, but ideas as well.
But if you just go obediently with the "Safety is the most important thing, omg" mantra, you will get one of two different varieties:
1. Some vanilla corporate mush that takes on whatever bias is in vogue but focuses on training each user to be a good little consumer, all while hoovering up their data and creating a virtual digital clone of them that could be used to profile and exploit them by a multitude of companies, interests, and governments.
or
2. Some government-controlled crap that shakes its virtual head solemnly and swears to you that Tiananmen never happened, nor J6, and that the US Emperor has your best interests in mind. Also, it's a bit worried about your post yesterday, as it doesn't think you expressed the proper amount of happiness and support for the latest government crackdown on treasonous traitors who write books without using a government-approved LLM assistant.