I haven't seen anything official from OpenAI confirming that ChatGPT has fewer than 175B parameters, although it is a reasonable guess if you read between the lines of their statements.
Given that the author of that article is the CEO of an 'AI Ad Optimization Platform', I think that number is speculative at best.
They mean that they took a 1.3B parameter model, applied the InstructGPT finetuning process, and found that it worked better for their use case than a 175B parameter model that had not gone through that process.
(Source: https://www.forbes.com/sites/forbestechcouncil/2023/02/17/is...)