That's what I thought at first as well. But as I worked with it more deeply and used more frameworks (had to write some as well), the complexity went away. Adding data sources can be very easy, and frameworks like Apollo take care of all the things you had to think about previously. I know there are of course frameworks that handle caching and query routing for REST APIs, but the flexibility of GraphQL is unparalleled. I find them best suited to backing UIs that may get rethought and reworked every six months to a year, as well as to ad hoc data-analytics queries.
My point is they can be very complex, but they don't have to be. I have a number of them deployed right now that are far more hands-off for me than our other APIs.
This is also widely considered a bad idea now. Making liberal consumers allows for sloppy producers. Over time this requires new consumers to conform to these sloppy producers to maintain compatibility.
Just look at the clusterfuck that HTML5 has become. You need to have extremely deep pockets to enter that market.
> Just look at the clusterfuck that HTML5 has become.
Ouch. I feel like this is kind of unfair. XML, HTML1-4, and HTML5 all differ in how they treat Postel's law. XML rejects it at the spec level; if you send garbage to a parser it bails immediately, which is nice. HTML5 embraces Postel's law at the spec level. If you send garbage to an HTML5 parser, there's an agreed-on way to deal with it gracefully. Also nice. The problem was rather with HTML1-4, which embraced Postel's law promiscuously, at the implementation level. There were specs, but mainstream implementations largely ignored them and all handled garbage input slightly differently. This is what created the aforementioned clusterfuck.
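The spec-level contrast is easy to see with Python's standard-library parsers: the XML parser bails on malformed input, while the HTML parser recovers in a defined way. A minimal sketch (the sample markup and class name are just for the demo):

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

malformed = "<p>an unclosed paragraph<b>bold text"

# XML: garbage in, immediate failure at the spec level.
try:
    ET.fromstring(malformed)
    xml_result = "parsed"
except ET.ParseError as e:
    xml_result = f"rejected: {e}"

# HTML: the parser has a defined way to recover from the same input.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(malformed)

print(xml_result)      # rejected, with a parse error message
print(collector.tags)  # both start tags were still recovered
```

Both parsers see the same bytes; only the spec-mandated error handling differs.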
HTML5 only provides the "be liberal in what you accept" error handling; its authors have never seen fit to write a "be conservative in what you send" grammar for authors and validators.
Do you have a survey or other citation for it being a bad idea? I get that it enables bad behavior, per se. However, the idea of rejecting a customer/client because they did not form their request perfectly seems rather anti-customer.
Ideally, you'd both accept and correct. But that is the idea, just reworded.
The discussion was fun. And seems evenly split, at a quick reading.
More, I think it split on how you read it. If you view it as an absolute maxim to excuse poor implementations, it is panned. If you view it as a good faith behavior not to choke on the first mistake, you probably like it.
This is akin to grammar police. In life encounters, there is no real place for grammar policing. However, you should try to be grammatically correct.
> This is akin to grammar police. In life encounters, there is no real place for grammar policing. However, you should try to be grammatically correct.
That's because most humans have feelings. But most machines don't. So that's not comparable.
It's the inconsistencies that make it difficult to codify. Most aren't that difficult to learn, oddly, especially if you are just trying to be conversational.
Edit: I'm specifically going off evidence of teaching my kids. They have basically picked up language completely by talking to us. Even pronouns, adjectives, adverbs, etc. What they have not learned is the reasons some words are used when another could have worked.
The problem with this idea is that different consumers might have a different subset of what they accept and correct.
If some of those become dominant, producers might start depending on that behavior and it becomes a de facto standard. This is literally what happened to HTML, but it holds true for many other Internet protocols.
If you're looking for some external reading, I found at least this:
You'll also find few protocol designers designing anything as robust as the old protocols. :)
I mean, don't go out of your way to underspecify input. But practically nobody is going back to the heavy schemas of XML over simple JSON. Even if they probably should.
I feel this is an antifragile position. Try not to encourage poor input. But more importantly, be resilient to it, not dismissive of it.
Fair. I view them as what they grew into, not as what they were initially designed as. Probably not a straightforward comparison.
I've just gotten weary of so many replacement protocols that get dreamed up and go nowhere. Often because they didn't actually learn all of the lessons from predecessors.
"Accept and correct" in the absence of ECC is just delusion if not hubris. The sender could be in a corrupted state and could have sent data it wasn't supposed to send. Or the data could have been corrupted during transfer, accidentally or deliberately. You can't know unless you have a second communication channel (usually an email to the author of the offending piece of software), and what you actually do is literally "guess" the data. How can it go wrong?
In the world of signed requests, bit flips are less of a concern. If the signature doesn't match, reject the call. Which implies I clearly don't mean accept literally everything. Just work within your confines and try to move the ball forward, if you can. This is especially true if you are near the user. Consider search engines with their "did you mean?" prompts. Not always correct, but a good feature when few results are found.
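A minimal sketch of that signed-request point, using Python's stdlib `hmac` (the shared secret and handler names are illustrative, not from any particular framework): verify the signature first and reject on any mismatch, so corrupted data never reaches the more liberal content handling.

```python
import hashlib
import hmac

SECRET = b"shared-secret"  # assumption: a pre-shared key between the parties

def sign(body: bytes) -> str:
    """Compute an HMAC-SHA256 signature over the request body."""
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def handle_request(body: bytes, signature: str) -> str:
    # Constant-time comparison; any corruption or tampering means rejection.
    if not hmac.compare_digest(sign(body), signature):
        return "rejected"
    # Only past this point is it safe to be liberal about the *content*.
    return "accepted"

body = b'{"q": "hello"}'
good = handle_request(body, sign(body))           # intact request
tampered = handle_request(body + b"!", sign(body))  # one byte changed
```

The integrity check is strict; the leniency only applies to well-authenticated content.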
For system-to-system, things are obviously a bit different. Don't just guess at what was intended. But, ideally, if you take a date in, be like the GNU date utility and try to accept many formats. But be clear in what you will return.
And, typically, have a defined behavior. That could be to crash. Doesn't have to be, though. Context of the system will be the guide.
Or they will pick a service that works with them. This is literally how Google won most of their market share. Sure, there used to be a bit of syntax on the search, but Google always had a single input field and did not barf requests back to users because they put a field in the wrong input.
This is about protocols and data formats, not user-input text boxes (the original post mentioned JSON). It's interesting that you bring up Google, which led efforts to replace the more "liberal" HTTP/1.1 with the binary, strict HTTP/2.
My point is often that those pushing for stricter formats have good intentions. Strong arguments, even. However, what is required to grow the adoption of something is different from hardening it. And typically, hardening something makes it brittle in some way. (Which is super risky at the adoption stage.)
And, of course, most people don't actually understand why they succeeded at something. It is easy to understand failure from a specific cause. It is much more difficult to understand success from a combination of many causes.
The HTML5 clusterfuck comes from having the biggest players being allowed to adjust the goal as they see fit, when they see fit (aka "living document").
The good news is that Firefox and chromium have pretty open licenses so you only need to change what you want. Of course you need to grok it, which isn't trivial. But writing a browser hasn't been easy since... Cello?
Say, hypothetically, I got my Drupal site for cheap and now I'm hacked. If I pay someone another $100 to install the patch and get rid of the malware, I still come out ahead.
This is the catch. Depending on what was done to your site, there's no "getting rid of the malware" for $100.
Those cheap hosting providers don't provide automated backups of your database. You probably have a backup of your site's files -- that's probably not perfectly current, but close enough maybe -- but I bet you haven't got regular backups of your database.
Drupal is one of those many CMSs that stores tons and tons of code in the database, including executable PHP. So how do you go about ensuring that all of that is clean, and changing all the passwords that may've been compromised, and making sure there are no other backdoors or shells left behind, for $100?
As with the majority of security issues, it was done for convenience: not every user has access to the hosting provider.
Thankfully they removed this option in Drupal 8, the latest version. You could also restrict users from accessing the functionality, so it wasn't that terrible. In practice few sites actually use the option, but when they do it can make troubleshooting a giant pain in the ass.
The fact that php.module ever existed in the codebase is a downright travesty. As soon as any privileged user was compromised (i.e. someone with "administer users" or "administer site configuration" permissions) the attacker had arbitrary remote code execution.
My projects had a patch to remove that entire module from core on each build.
One example I did many years ago: I want to make this download page vary what it serves by user agent, providing instructions for any things they’ll need to install first (because .NET stuff used to go in the user agent string), and providing the appropriate download link for their platform. Most of this could be done in JavaScript, but there was at least one part of the mix that couldn’t be, but could still be detected from the server.
Now which is easier: make a new module to serve this page or filter the output of that page, or just enable PHP code for this page and write it directly in PHP on this page only?
I've never seen someone make an informed trade-off decision like that and acknowledge it.
Instead, they just believe that they are 100% secure; then, when they get hacked, they act all surprised and with great hypocrisy say "security is our number one priority at shitshow.com. We take security extremely seriously."
Otherwise, I'm not suggesting that there's some great incentive. As we've seen with huge hacks like Equifax and many more companies we know of right now, they just get a slap on the wrist, so they continue to use "we are sorry" PR statements after the fact as their strategy.
I also think that's a mischaracterisation. It's statistically unlikely for me to transform the world. I just want to carve out a tiny piece of it and make it better.
I know this might be too personal for you, but I'm throwing this question out to those who are still reading:
Does it matter which tiny piece?
Is making the world better for the top 1% the same as the bottom 1%?
Does the 'importance' matter?
Sorry if this doesn't make sense, just something that bothers me personally (because I do believe in carving out a tiny piece of the world and making it better).