What about S3 didn't meet your use case? I don't work for AWS, and I don't care if they lose business; I'm interested in how different companies translate their requirements into a manage-vs-rent decision.
One aspect is that we have a lot of data containing PII, and we feel safer anonymising it locally before sending it into the cloud. Once the data is cleaned up, it's actually sent to GCS for consumption in our product. Another aspect is that this data has to be accessible as Windows file shares (i.e. SMB) to our data processing team. The datasets range from several hundred GB to several TB, and each team member works on several of them per day. Pulling them back from the cloud would strain our uplink too, and the bandwidth would probably be costly as well.
If you are writing a ton of small files (we write billions of audit blobs), the per-request PUT costs can quickly creep up on you. We pay much more for those than for the actual storage. If you want to use tags on your objects, they charge you per tag per object per month, which is another huge cost. We missed that when pricing S3 out and had to do a project to pull out all of the tags we had. We're now working on batching multiple blobs into one larger blob, hoping to cut our API costs by an order of magnitude. This is purely a cost decision for us, and it adds complexity to our application and its operation. S3 seems better suited to fewer, larger files; our backups and other use cases like that work perfectly.
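To make the "requests dwarf storage" point concrete, here's a rough sketch of the arithmetic. The prices are assumptions (roughly S3 Standard list prices at time of writing, no tiering, no tags), so check the current pricing page before relying on them:

```python
# Back-of-envelope comparison of PUT-request vs. storage costs for many
# small objects. Prices below are ASSUMED (approx. S3 Standard, us-east-1);
# verify against current AWS pricing before drawing conclusions.
PUT_PRICE_PER_1000 = 0.005    # USD per 1,000 PUT requests (assumed)
STORAGE_PRICE_PER_GB = 0.023  # USD per GB-month (assumed)

def monthly_cost(objects_per_month, avg_size_kb, batch_size=1):
    """Estimate one month's (PUT cost, storage cost), optionally batching
    `batch_size` blobs into a single larger object before upload."""
    puts = objects_per_month / batch_size
    put_cost = puts / 1000 * PUT_PRICE_PER_1000
    storage_gb = objects_per_month * avg_size_kb / (1024 * 1024)
    storage_cost = storage_gb * STORAGE_PRICE_PER_GB
    return put_cost, storage_cost

# 1 billion 4 KB audit blobs per month, unbatched vs. batched 100:1.
put_u, store_u = monthly_cost(1_000_000_000, 4)
put_b, store_b = monthly_cost(1_000_000_000, 4, batch_size=100)
print(f"unbatched: PUTs ${put_u:,.0f}, storage ${store_u:,.0f}")
print(f"batched:   PUTs ${put_b:,.0f}, storage ${store_b:,.0f}")
# unbatched: PUTs $5,000, storage $88
# batched:   PUTs $50,   storage $88
```

With these assumed numbers the unbatched PUT bill is roughly 50x the storage bill, and 100:1 batching shrinks it by exactly the batching factor, which is where the "order of magnitude" estimate comes from.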