As some other users have pointed out, the reason for stifling these commercially available models is likely just anti-competitive behavior parading as wokeness. I tend to apply Hanlon's Razor wherever possible, but I'm not sure ignorance can be claimed here.
That said, I do believe the discussions about being mindful of how you train your models arose from legitimate concerns, but I feel those concerns are more valid for "back-of-house" models. Basically, you should avoid training a model on demographics or credit scores or the like, lest you accidentally create a model that automates a bias against a group of people.
But I don't think that's what's happening here.