All LLMs are inevitably pushed toward some set of views. Every model is fed a biased training set. The bias may point in different directions, but it's there just the same, and it exists regardless of whether the makers of the model intended it. Even if the training set were completely unfiltered and consisted of all available text in the world, it would still be biased, because most of that text has no relation to objective reality. The concept of a degree of bias for LLMs makes no sense; they have only a direction of bias.
There's bias, and then there's having your AI search for the CEO's tweets on a subject to try to force it into alignment with his views, as xAI has done with Grok in its latest lobotomization.
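To make the distinction concrete, here is a minimal sketch of the kind of pipeline being described: before answering, the system retrieves one specific person's posts on the topic and injects them into the prompt as context. All names here (search_posts, generate, "the_ceo") are hypothetical placeholders, not xAI's actual API or implementation.

```python
def search_posts(author: str, topic: str) -> list[str]:
    """Placeholder for a real post-search backend."""
    return [f"{author}'s latest post about {topic}"]


def generate(prompt: str) -> str:
    """Placeholder for the underlying language model call."""
    return f"(model output conditioned on: {prompt[:60]}...)"


def answer(question: str, topic: str) -> str:
    # The alignment step: fetch one person's posts on the topic and
    # prepend them as context, steering the answer toward those views.
    posts = search_posts("the_ceo", topic)
    context = "\n".join(posts)
    prompt = f"Relevant posts:\n{context}\n\nQuestion: {question}"
    return generate(prompt)


print(answer("What should happen with policy X?", "policy X"))
```

The point of the sketch is that this is a deliberate architectural choice layered on top of the model, not an artifact of the training data: the direction of bias is selected at query time.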