
Protobuf payloads are typically about 70-80% smaller than the equivalent JSON. If you care about network I/O costs at a large scale, savings like that add up quickly.
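
To see the gap concretely, marshal the same record both ways and compare byte counts. Here's a minimal Go sketch; the pb package, Event message, and its fields are hypothetical stand-ins for whatever your .proto generates (proto.Marshal and json.Marshal are the real APIs):

    package main

    import (
        "encoding/json"
        "fmt"

        "google.golang.org/protobuf/proto"

        pb "example.com/gen/events" // hypothetical generated package
    )

    func main() {
        // Hypothetical generated message; fields assumed for illustration.
        ev := &pb.Event{Id: 12345, Name: "page_view", TimestampMs: 1700000000000}
        wire, _ := proto.Marshal(ev) // compact binary wire format

        js, _ := json.Marshal(map[string]any{
            "id": 12345, "name": "page_view", "timestamp_ms": 1700000000000,
        })

        fmt.Printf("protobuf: %d bytes, JSON: %d bytes\n", len(wire), len(js))
        // Protobuf omits field names on the wire (tags are varint-encoded
        // field numbers), so the gap widens as schemas grow.
    }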

Additionally, people put a lot of trust in JSON parsers across ecosystems "just working", and that's something more people should look into (it's worse than you think): https://seriot.ch/projects/parsing_json.html
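
Two of the divergences that write-up covers, duplicate keys and number precision, are easy to reproduce with nothing but Go's standard library (other parsers may behave differently, which is exactly the problem):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Duplicate keys: RFC 8259 leaves the behavior undefined.
        // Go keeps the last value; other parsers keep the first, or error.
        var m map[string]int
        json.Unmarshal([]byte(`{"a": 1, "a": 2}`), &m)
        fmt.Println(m["a"]) // 2 here; not guaranteed elsewhere

        // Numbers: decoding into interface{} yields float64, so integers
        // above 2^53 silently lose precision.
        var v map[string]any
        json.Unmarshal([]byte(`{"n": 9007199254740993}`), &v)
        fmt.Printf("%.0f\n", v["n"]) // 9007199254740992, off by one
    }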



Let's say I wanted to transfer a movie in MKV container format. It's binary and large, at about 4 GB. Would I use JSON for that? No. Would I use gRPC/protobuf for that? No.

I would open a dedicated TCP socket and a file system stream, then pipe the file system stream to the network socket. No matter what, you still have to deal with packet assembly: if you are using TLS you have small records (max size varies by TLS revision), and if you are using WebSockets you have control frames, continuation frames, and frame header assembly. Even with that administrative overhead it's still a fast and simple approach.
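
A minimal Go sketch of that approach, with a placeholder address and file name; io.Copy does the piping, and on Linux a *net.TCPConn can satisfy it with sendfile(2) so the bytes never enter user space:

    package main

    import (
        "io"
        "log"
        "net"
        "os"
    )

    func main() {
        conn, err := net.Dial("tcp", "example.com:9000") // placeholder address
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        f, err := os.Open("movie.mkv") // placeholder file
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Pipe the file system stream to the network socket; the kernel
        // handles packetization.
        if _, err := io.Copy(conn, f); err != nil {
            log.Fatal(err)
        }
    }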

When it comes to application instructions, data from some data store, primitive data types of any kind, and so forth, I would continue to use JSON over WebSockets.
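
For that kind of traffic the ergonomics are hard to beat. A sketch using gorilla/websocket (one common Go library; the Msg shape here is made up): ReadJSON and WriteJSON handle framing plus (de)serialization in one call.

    package main

    import (
        "log"
        "net/http"

        "github.com/gorilla/websocket"
    )

    // Hypothetical shape for application instructions.
    type Msg struct {
        Action string         `json:"action"`
        Data   map[string]any `json:"data,omitempty"`
    }

    var upgrader = websocket.Upgrader{}

    func ws(w http.ResponseWriter, r *http.Request) {
        conn, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            return
        }
        defer conn.Close()
        for {
            var m Msg
            if err := conn.ReadJSON(&m); err != nil { // JSON text frame in
                return
            }
            // Echo it back; real code would dispatch on m.Action.
            if err := conn.WriteJSON(m); err != nil {
                return
            }
        }
    }

    func main() {
        http.HandleFunc("/ws", ws)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }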


I agree that there's a lot to gain from getting away from JSON, but gRPC needs HTTP/1 support and better tooling for that to happen.


You probably want to check this out: https://connectrpc.com/


Thanks. I've got a little project that needs to use protobufs, and if my DIY approach of sending either application/octet-stream or application/json turns out to be too sketchy, I'll give Connect a try. The only reason I'm not jumping on it is that it pulls in more dependencies.
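
For what it's worth, the DIY version can stay small: switch on Content-Type and pick the decoder. A sketch, assuming a hypothetical generated pb.Event type (proto.Unmarshal and protojson.Unmarshal are the real APIs):

    package main

    import (
        "io"
        "log"
        "net/http"

        "google.golang.org/protobuf/encoding/protojson"
        "google.golang.org/protobuf/proto"

        pb "example.com/gen/events" // hypothetical generated package
    )

    func handle(w http.ResponseWriter, r *http.Request) {
        body, err := io.ReadAll(r.Body)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        msg := &pb.Event{}
        switch r.Header.Get("Content-Type") {
        case "application/octet-stream": // binary protobuf wire format
            err = proto.Unmarshal(body, msg)
        case "application/json": // canonical proto3 JSON mapping
            err = protojson.Unmarshal(body, msg)
        default:
            http.Error(w, "unsupported media type", http.StatusUnsupportedMediaType)
            return
        }
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        // ... handle msg
    }

    func main() {
        http.HandleFunc("/event", handle)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }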



