Agreed. Recently I was discussing the same point with a non-technical friend, who explained that his CTO had decided to move from Digital Ocean to AWS after DO experienced an outage. Apparently the CEO is furious with him and has concluded that DO is the worst service provider because their services were down for almost an entire business day. The CTO probably knows that AWS could fail in a similar fashion, but by moving to AWS it becomes more or less an Act of God type of situation and he can wash his hands of it.
If you have a couple of hours to spare, I recommend listening to the episode on the Songhai Empire by the Fall of Civilizations podcast. They go into the fascinating historical background of the Timbuktu scholars and libraries. Link: <https://www.youtube.com/watch?v=GfUT6LhBBYs>
Actually it's pretty weird. This is a sports blog and content engine. And the author is someone who started this publication. I'm not sure what the motivation/context is at all here.
I've generally had a bit more success with mocking when I hide that dependency behind my own interface. For example in Java, instead of trying to mock the AWS-provided class, I write my own class (like a facade or repository pattern) which has a very simple interface: a success case and maybe a couple of relevant failure cases. It calls the AWS library internally, but my mocks are at the level of my facade class, which I find easier. The drawback is that I'm not sure there's a good general strategy for testing the implementation of that facade. Most of the time the implementation is simple enough that I can write a few integration tests for the most relevant cases, but there's always a risk that I'm missing some weird edge cases, and I don't know how to properly deal with that.
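To make that concrete, here's a minimal sketch of the pattern. All the names here (DocumentStore, StorageResult, and so on) are invented for illustration, and the "real" implementation is stubbed out so the sketch stays self-contained; in practice it would call the AWS SDK and map its exceptions onto the small set of outcomes:

```java
// The narrow interface callers depend on: one success case and a
// couple of relevant failure cases, instead of the full AWS surface.
interface DocumentStore {
    StorageResult put(String key, byte[] data);
}

enum StorageResult { OK, NOT_AUTHORIZED, TRANSIENT_FAILURE }

// Production implementation: would call the AWS SDK internally
// (e.g. s3Client.putObject(...)), catching SDK exceptions and
// translating them into StorageResult values. Stubbed here.
class S3DocumentStore implements DocumentStore {
    public StorageResult put(String key, byte[] data) {
        return StorageResult.OK;
    }
}

// In unit tests, caller logic is exercised against a trivial
// in-memory fake instead of mocking the AWS classes themselves.
class FakeDocumentStore implements DocumentStore {
    StorageResult next = StorageResult.OK;
    public StorageResult put(String key, byte[] data) {
        return next;
    }
}

public class Main {
    public static void main(String[] args) {
        FakeDocumentStore store = new FakeDocumentStore();
        store.next = StorageResult.TRANSIENT_FAILURE;
        // Caller code under test would branch on this result.
        System.out.println(store.put("report.pdf", new byte[0]));
    }
}
```

The fake stays a few lines long precisely because the interface is narrow; that narrowness is what the mocking buys you.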
I think the nice thing about this is that the facades usually don’t change: you might add methods, but, once the code is written, it only changes if you’re doing something major like switching databases. Code that is written and then never changed tends to be less buggy than code that changes, so this sort of pattern tends to reduce bugs at integration points.
Unfortunately, I must agree with you that there are definitely some very troubling implications to how WebAssembly might be used to build more effective walled gardens on the web.
I can easily imagine a "platform" WASM module which acts as a runtime for other WASM modules built by "app" developers. This Platform module could easily be cached by FAANG or other big commercial interests, similar to Google's AMP (maybe even pre-bundled into the browser?). The only way to discover, download and run these other apps would be through this curated Platform module. All of it could be rendered through something like the Canvas API instead of the DOM, which again is managed at a low level by the Platform, which in turn exposes higher-level APIs to the Apps. The Platform also has built-in support for ad networks, tracking, etc., which cannot be disabled without disabling the whole ecosystem of apps. And of course, like any good play/app store, it is completely incompatible with anything else, leading to new levels of Balkanisation of the web.
I hope that this isn't the case, and I'm completely wrong about this. But I just can't shake the feeling that as a community, we are championing WebAssembly as purely a performance win, without considering how big commercial interests might seek to exploit this new technology.
This platform wasm will come. And it does not need to be evil, especially in the beginning. The Java or .NET runtime and libraries are a bit heavy for a small web page.
And from there it is just the next step to run a base platform like the Android user space, or a simple runtime like Blazor.
In the end, wasm is a runtime. There were physical Java processors, and I am pretty sure there will be WebAssembly processors.
There were no physical Java processors, because it's too hard to implement the JVM in hardware. But there were processors with instructions optimized for interpreting JVM bytecode. These would be pretty useless nowadays (for either the JVM or wasm), because a baseline compiler that translates VM bytecode to native code is easy to build and more efficient than direct VM support in hardware.
Thanks for dooming us all, hombre. I'm holding you personally responsible when it happens.
In fact, I think it may already have. I took a look at a web-based learning portal for my niece the other day, and after poking around in devtools, found it was pretty similar to what you described.
As a fellow web developer, I want to offer a different viewpoint. Firstly, the pros that you listed for the web are just as valid today as they were 7 years ago. But additionally, I want to push back a bit against the "app-ification" of the web. I see the web as its own platform, and trying to implement "native" features on it seems counterproductive. The web has features which are completely native to it, and which are either very difficult or downright impossible to replicate in Android and iOS apps (proper URLs, standardisation etc., just to give some examples off the top of my head). And yet no one seems to be complaining about that. I understand that this argument comes across as what-about-ism, but I do genuinely think that the web platform is different, and should be approached on its own terms, rather than trying to mimic an Android or iOS app experience.
On a higher level, as web developers, I think it is worth remembering that we don't have to build these half-broken app experiences. We can still build websites. We can still open a simple index.html file, put in some markup, and voila, it renders in the browser. We can choose not to implement all the trackers, invasive ads, broken "personalisation" algorithms etc. The fact that these things are implemented on websites anyway is not down to some technical failure of the web platform. I believe it's a human problem of organisational politics, where the business users want these features and overrule any objections that the developers might have regarding privacy or performance. Additionally, Android and iOS apps also implement these tracking features, with numerous SDKs available to create the marketing funnels that you mentioned. They may not be visible as an ugly badge icon in the top right corner showing how many were blocked, but they are definitely there. At the end of the day, the business/client that we are working for is asking for these things, and we can either try to educate them against these features, cave in and implement them, or move to a different organisation/team/project more in line with our viewpoints. But we cannot solve this with a technical solution like changing platforms.
Finally, regarding the perception that the web is a free and open democratiser: isn't it still one? I can still spin up a web server not hosted on one of the cloud platforms to host and serve my content. Yes, discoverability, scaling etc. are problems, but that's true for practically everything from social media to e-commerce. Again, from a technical perspective, nothing is stopping us from developing websites or even web apps which are not beholden to commercial and centralised interests. We don't do it because of "IRL" reasons which don't really have any good technical solutions. In fact, going into the highly controlled and regulated environments of Apple's iOS, and to a lesser extent Google's Android, sounds like the exact opposite of free and open.
That being said, if you find that you like working on iOS or Android apps, or you want to build apps which leverage the strengths of those platforms, then go ahead, and I hope you enjoy it!
So I work with a company where we provide online nutrition consultations to employees from other organisations as part of a larger corporate medical program. We have a scheduling system, developed in Rails, to deal with these appointments. It was built by a person who quit the company some time back.
A few weeks back, we got complaints from a client that some of their employees weren't getting allotted an appointment slot, despite the slot supposedly being free. I dived into the codebase to try and figure out the problem. There were some minor bugs which I spotted first and could fix without a debugger, but appointments were still getting dropped occasionally. So I started tracing the control flow more carefully.
That’s when I found one of the strangest pieces of code I have ever seen. To figure out the next available appointment slot, there was a function which read the last occupied slot from the database as a DateTime object, converted it to a string, manipulated it using only string operations, and finally parsed it back to a DateTime object and wrote it to the database before returning the response! This included some timezone conversions as well! Rails has very good support for manipulating DateTimes and timezones, and yet the function's author had written the operation in one of the most confounding ways possible.
Now, I could have sat there and understood the function without a debugger, as the article recommends. And then, having understood it, I could have rewritten it using proper DateTime operations. But with a client and my managers desperately waiting for a fix, I used a debugger to step through the code line by line, understood the issue locally, and fixed the bug, which was buried in one of the string operations. That solved the problem temporarily, and everyone was happy.
A week later, when I had more time, I went back and used the debugger again for some exploratory analysis, building a state-machine model of the function by observing all the ways it manipulated that string. I added a bunch of extra tests, and finally rewrote the function in a cleaner way.
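For a sense of what "a cleaner way" looks like, here's a sketch of the same kind of computation with real date-time arithmetic instead of string splicing. It's in Java's java.time rather than the original Rails (which isn't shown), and the slot length and timezone are made up for illustration:

```java
import java.time.Duration;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class Main {
    // Hypothetical version of the scheduler's core step: given the
    // last occupied slot, compute the next one and render it in the
    // client's timezone. No strings are manipulated at any point.
    static ZonedDateTime nextSlot(ZonedDateTime lastOccupied,
                                  Duration slotLength,
                                  ZoneId clientZone) {
        return lastOccupied.plus(slotLength)
                           .withZoneSameInstant(clientZone);
    }

    public static void main(String[] args) {
        ZonedDateTime last = ZonedDateTime.parse("2021-03-04T10:00:00Z");
        // 30-minute slots and the client zone are invented parameters.
        System.out.println(nextSlot(last, Duration.ofMinutes(30),
                                    ZoneId.of("Asia/Kolkata")));
    }
}
```

The timezone conversion that the original function did by hand on strings becomes a single `withZoneSameInstant` call, which is exactly the kind of support Rails (via ActiveSupport) also provides.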
Instead of romanticising the process of developing software by advocating the use or disuse of certain tools, we should be using every tool available to simplify our work, and achieve our tasks more efficiently.
"we should be using every tool available to simplify our work, and achieve our tasks more efficiently"
Yep. I advocate for some "micro-managing" within a team for this type of stuff (spotting things your people do that you know can be done faster). Everybody learns from that process.
An interesting observation in the article is the data-structures DSL, which the author goes into in quite a bit of detail, and which is essentially what most Clojure/Lisp developers also talk about.
Basically, the author points out a lot of benefits of using the native serialisable data structures of the language itself to model the domain, instead of creating a separate representation which has to be parsed, optimised, evaluated, etc. Lisp users just take the same idea one step further by having the "host" language itself also be represented in those same data structures, not just the DSLs.
As Alan Perlis said, "It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures."
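A tiny sketch of the contrast, in Java with invented rule names: the domain (here, some hypothetical validation rules) is expressed directly in the language's own collections, so ordinary collection operations replace the parse/optimise/evaluate pipeline that a custom string DSL would need:

```java
import java.util.List;
import java.util.Map;

public class Main {
    public static void main(String[] args) {
        // Hypothetical validation rules as plain data structures --
        // the approach the article and Lisp users advocate -- rather
        // than, say, a string like "email required; age min 18" that
        // would need its own parser and evaluator.
        List<Map<String, Object>> rules = List.of(
            Map.<String, Object>of("field", "email",
                                   "check", "required"),
            Map.<String, Object>of("field", "age",
                                   "check", "min", "value", 18)
        );

        // Querying, transforming, and serialising the rules is just
        // ordinary collection manipulation -- no evaluation step.
        long requiredCount = rules.stream()
            .filter(r -> "required".equals(r.get("check")))
            .count();
        System.out.println(requiredCount);
    }
}
```

Everything that operates on lists and maps (Perlis's "100 functions on one data structure") works on the rules for free.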