This might be technically correct, in the sense that these companies are presumably pushing their own LLMs rather than literally using OpenAI's GPT models. But all LLMs are vulnerable to this, so in practice it doesn't matter whether they're using GPT specifically or something in-house; the threat model is the same.
Given the allure of using AI in the military for unmanned systems, it's not that far off.
Similar adversarial dynamics, with lower stakes, exist in other areas where AI might be useful: e.g. dating, fraud detection, and recruitment.