I had an epiphany about the software industry when I stayed at my parents' place and used a microwave with the worst UX of any machine I had ever seen. There was no start button, no way to increment the timer after starting, and no '10 second' or '1 minute' preset like every other brand. The only way I could figure out to make it 'work' would turn on a super loud fan which kept running even after the microwave had stopped; I had to pull the plug on the thing to make it stop.
It was a popular brand and I suspect it probably sold well. The mind-boggling dysfunction may not have been obvious at a glance when the consumer made the purchasing decision. The UX was so bad, I still have nightmares about it.
As I was trying to use the damn thing as a user and kept running into one hurdle after another, it triggered a flashback of my experience of debugging complex software as a software engineer and I thought to myself "F***, I chose the wrong career. I'm cooked. The user doesn't care. The user doesn't care AT ALL." In that moment, I understood that getting replaced by AI was the least of my problems. Far bigger problems had been there since the beginning. I just didn't notice them.
I just thought about the software engineer who had to implement this nonsensical UX... I imagine they would put on their resume "Wrote the firmware for <popular electronics company>" and it would sound really good. The worst part is that it's probably not even their fault that their work sucks.
Anyway, it just made me realize how unmeritocratic this industry is. We can do a great job or a horrible job, and most of the time it has nothing to do with career progression and opportunities.
Maybe it's a good time to start promoting my 5 year old, lightweight, hand-crafted, battle-tested, quantum-resistant blockchain: https://capitalisk.com/
It's about 5000 lines of custom code. Crypto signature library written from scratch.
It's a very simple signature algorithm. Anyone is welcome to try to crack it. If there is an issue with it, it shouldn't be hard to identify within those few hundred lines. Nobody has found any issues in the last 5 years, though.
Isn't it a good thing that there exists at least one blockchain in the world which isn't based on the same crypto library used by every other project? What if those handful of libraries have a backdoor? What if the narrative that "you shouldn't roll out your own crypto" is a psyop to get every project to depend on the same library in order to backdoor them all at once at some future date?
Strange how finance people always talk about hedging but in tech, nobody is hedging tech.
To be (an actual) hedge, something needs to be very solidly understood (by the purchaser), a very solid investment in its own right, and either inversely correlated with, or independent of, the particular asset being hedged.
And not based on analysis of one "hedging" scenario, because both assets are going to be owned over a huge distribution of scenarios.
Probably the worst indicator of an investment's credibility is a promoter who has to stoop to asking "What's wrong with hedging?", as if that manipulative bon mot were ever in question, or were the relevant question.
If a motivated promoter can only make a very bad case, believe them.
And, if an "expert" attempts to get respect for their work from non-experts, instead of from other experts, there is something very wrong. Because the former makes no sense.
--
If you don't know how to get respect from experts, study more, and figure out how to trash what you have. Counterintuitive, but if you have anything original right, that's how to find it. Identify it. Purify it. And be in a better position to build again, with just a little more leverage, and repeat. Or communicate it clearly to someone qualified to judge it.
You won't have to persuade anyone.
If you have to persuade someone, either you don't have something, or you don't understand what you have well enough to properly identify and communicate it.
You have ambition. You have motivation. You have interest. You follow through and build. That is it. Don't stop. Ego derails ambition. Kill your darlings. Keep going.
Why would experts care about my product? There's no big money behind it. The big money has to come in first, then the experts come later to tell the big money whatever they want to hear. Maybe they want to hear the truth maybe not... Either way the paymaster always hears what they want.
Besides, I am an expert. I studied cryptography at university as part of my degree. I have 15 years of experience as a software engineer, including 2 years leading a major part of a $300 million cryptocurrency project which never got hacked... I know why the experts were not interested in my project, and after careful analysis, I believe it has nothing to do with flaws in my work.
If anything, it might be because my project doesn't have enough flaws...
At this stage, I hope you're right. I hope I will find the flaws in my projects that I've been looking for after 5 years.
You are leaving something out then. Which you allude to.
Bravo on five years! I recently solved a problem that took me over 30. I originally thought, 3-5 months maybe, then 3-5 years, ... I am happy it didn't take 50. I have killed a lot of my own darlings.
Well apparently you know what you are doing, I am sure you have something.
I have found that the best language models are great at attacking things. You may have already done that, but if not, it's worth a try. Free brutality.
The crypto dev community has a strange idea that working with binary formats is superior. For many algorithms, it's not. It just obfuscates what's happening, and the performance advantage is negligible... especially in the context of all the other logic in the system, which uses far more resources.
I didn't know that Protobuf wasn't canonical but even without this knowledge, there are many other factors which make it an inferior format to JSON.
Also, on a related topic: it seems unwise that essentially all the cryptographic primitives everyone relies on are so often distributed as compiled binaries. I cannot think of anything more antithetical to security than that.
I implemented my own stateful signature algorithm for my blockchain project from scratch, using UTF-8 as the base format and HMAC-SHA256 for key derivation. It makes it so much easier to understand and implement correctly. It uses Lamport OTS with a Merkle MSS. The whole thing, including all dependencies, is like 4000 lines of easy-to-read JavaScript code: about 300 lines for the MSS and 300 lines for Lamport OTS... The rest are just generic utility functions. You don't need to trust anyone else to "do it right" when the logic is simple and you can read it and verify it yourself! Simplicity of implementation and verification of the code is a critical feature IMO.
If your perfect crypto library is so complex that only 10 people in the world can understand it, that's not very secure! There is massive centralization and supply chain risk. You're hoping that some of these 10 people will regularly review the code and dependencies... Will they? Can you even trust them?
Choosing to use a popular cryptographic library which distributes binaries is basically trading the risk of implementation mistakes for the risk of a supply chain attack... which seems like the greater risk.
Anyway, it's kind of wild to now be reading this and seeing people finally coming round to this approach. I've been saying this for years. You can check out https://www.npmjs.com/package/lite-merkle; feedback welcome.
While protobuf comes with a strict parser built in, it's certainly possible to work with JSON in such a way that it is effectively strictly typed and versioned. These factors aren't really a "key difference" between the two formats so much as an ergonomic one, imo.
I've been talking about 'complexity' for years, but business people just didn't get it. Now software development has become almost entirely about complexity management, so I'm hoping they will finally understand what software engineering actually is. I hope they will finally start valuing engineers who can reduce complexity.
I was always into software architecture and dreamed of becoming a software architect, but by the time I completed university, the position was on the way out.
I've been advocating to minimize the number of dependencies for some time now. Once you've maintained an open source project for several years, you start to understand the true cost of dependencies and you actually start to pay attention to the people behind the libraries. Popularity doesn't necessarily mean reliable or trustworthy or secure.
I'll agree with that, and you would think it's common sense for any competent engineer, but for many people it's just an afterthought, including senior and lead engineers. General matters like security are politically delicate: how can you raise the alarm and advocate for critical security improvements as an IC without making the people around and above you look bad? How can you justify the time invested in these things without raising the alarm? At the same time, you can't just ignore glaring security holes. It's a fine line to walk, and being realistic about the possibilities has earned me nothing but enmity from peers and superiors, because it seems like I'm throwing them under the bus.
In general, management wants to see progress. I've come to find that technical details like these are an afterthought for most engineers, so long as the deadlines are being met.
It's one of those parts of the job that sit below the waterline. Everyone has to be on board; if your peers don't give a fuck, you're just an annoyance and will be swimming against the current.
Can relate. It's like; if you're less rigorous than the CTO, they would think you're incompetent. If you're more rigorous than the CTO, they would think you're overly pedantic; not pragmatic enough.
Agree. This is one of the major takeaways I've had from writing Go over the years -- which is even a Go proverb [0], "a little copying is better than a little dependency." Fortunately, LLMs make writing your own implementations of little dependencies super easy too.
I love the simplicity of Node.js: each process or child process can have its own CPU core with essentially no context switching (assuming you have enough CPU cores).
Most other ways are just hiding the context switching costs and complicating monitoring IMO.
I like Node.js' simple and fully isolated concurrency model. You shouldn't be blocking the main event loop for 30 seconds! The main event loop is not intended to be used for heavy processing.
You can just set up a separate child process for that. The main event loop which handles connections should just coordinate and delegate work to other programs and processes. It can await their completion asynchronously; that way the event loop is not blocked.
I recall people have been able to get up to around a million (idle) WebSocket connections handled by a single process.
I was able to comfortably get 20k concurrent sockets per process each churning out 1 outbound message every 3 to 5 seconds (randomized to spread out the load).
It is a good thing that Node.js forces developers to think about this, because most other engines which try to hide this complexity tend to impose a significant hidden cost on the server in the form of context switching... With Node.js, there is no such cost: your process can basically have a whole CPU core to itself, and it can orchestrate other processes in a maximally efficient way if you write your code correctly... which Node.js makes very easy to do. Spawning child processes and communicating with them in Node.js is a breeze.
> You shouldn't be blocking the main event loop for 30 seconds! The main event loop is not intended to be used for heavy processing.
This article is talking about an SDK that runs in users' apps. Users can run whatever code they want, so the SDK has to find a way to keep sending the outgoing heartbeats.
My current project is the culmination of 15 years of software development.
I started out building a full-stack framework similar to Meteor (though I started before Meteor was created in 2012, and long before Next.js).
Then I ported it to Node.js because I saw an advantage in having the same language on the frontend and backend.
Then I noticed that developers like to mix and match different libraries/modules, and this was a necessity. The whole idea of a cohesive full-stack framework didn't make sense for most software. So I extracted the most essential part of it, the part people liked, and this became SocketCluster. It got a fair amount of traction in the early days.
At the time, some people might have thought SocketCluster was trying to be a more scalable copycat of Socket.io, but actually I had been working on it for several years by that point. I just made the API similar when I extracted it, for better compatibility with Socket.io, though it had some additional features.
A few years ago, I ended up building a serverless low-code/no-code CRUD platform which removes the need for a custom backend and it can be used with LLMs directly (you can give them the API key to access the control panel). It can define the whole data schema for you. I've built some complex apps with it to fully prove the concept with advanced search functionality (including indexing with a million records).
I've made some technical decisions which will look insane to most developers but are crucial, based on 15 years of experience, carefully evaluating tradeoffs, and actual testing with complex applications. For example, my platform only has 3 data types: String, Number and Boolean. The String type supports some additional constraints that allow it to store any kind of data, like lists or binary files (as base64)... Having just 3 types greatly simplifies spam prevention and schema validation, and makes it much easier for the user (or an LLM) to reason about and produce a working, stable, bug-free solution.
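To illustrate why so few types keeps validation simple, here's a toy sketch. The field names and the shape of the constraints are made up; this is not Saasufy's actual API.

```javascript
// Toy sketch: validate a record against a schema limited to three
// field types (string, number, boolean). With only three types,
// validation reduces to a typeof check plus optional string constraints.
function validateRecord(schema, record) {
  for (const [field, rule] of Object.entries(schema)) {
    const value = record[field];
    if (typeof value !== rule.type) return false;
    // String fields can carry extra constraints, e.g. a max length.
    if (rule.type === 'string' && rule.maxLength != null &&
        value.length > rule.maxLength) {
      return false;
    }
  }
  return true;
}

const productSchema = {
  name: { type: 'string', maxLength: 100 },
  price: { type: 'number' },
  inStock: { type: 'boolean' }
};

console.log(validateRecord(productSchema,
  { name: 'Widget', price: 9.99, inStock: true })); // true
console.log(validateRecord(productSchema,
  { name: 'Widget', price: '9.99', inStock: true })); // false: price must be a number
```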
That said, I've been struggling to sell it because there are some popular, well-funded solutions on the market which look superficially similar or better. Of course, they can't handle all the scenarios; they're more complex, less secure, don't scale, require far more LLM tokens, and lead to constant regressions when used with AI. It's just impossible to communicate those benefits to people, because they will value a one-shotted pretty UI over all these other aspects.
> I've been struggling to sell it ... they will value a one-shotted pretty UI over all these other aspects.
I do not think that it's your competitors' pretty one-shotted UIs that are losing you your leads. FWIW your website:
* Does not explain the benefits nor the feature-set of what your platform offers
* Has a Docs page which seems to be a link to 6 sets of reference APIs (or similar) in disparate Github pages - that's worse than most open source products.
* Uses cute animated characters which does not give off the "we are a serious enterprise and you can rely on us" vibe
* Has a novel pricing scheme ("implements a techno-feudalist pricing model") that requires a 4 paragraph explanation and somehow tries to frame introducing additional uncertainty for potential buyers as a feature (it's a good thing that _my_ pricing tiers could be adjusted by other customers' votes!?)
* Additionally prominently features cryptocurrency in the topnav, which is for many people a yellow or red flag (regardless of what seems to be your good intentions behind it)
* Doesn't have any demo apps I can click around in to smoke test the platform's functionality, let alone fiddle with the backend
* Has almost zero information on basic business-y compliance-y things - no info on security, availability, SSO support, etc let alone more hardcore things like compliance standards your platform meets.
Thanks for the feedback. The part about the voting aspect of the pricing model is a good point. I will remove this and try to use a bit more conventional language whilst maintaining a kind of exclusive licence model.
Also, I'll try to explain it more concretely, like: "Estimated to be X times faster to develop, requires 70% fewer tokens than alternative stacks, and is easier to configure securely."
That's actually realistic based on some recent analysis I did. I didn't have this info before.
Saasufy itself isn't open source. I'm planning to sell licenses of the code (a limited number of them to make it scarce). SocketCluster is a core component of Saasufy. The goal did evolve slightly; originally, it was to make it easier to build full stack applications. Now it actually lets you build entire full stack apps without code. That bigger goal has been achieved. I have some videos linked from the Docs page showing how it works.
But yes, I'm a bit paranoid about my situation. I do feel like my work is suppressed by algorithms. Things feel very different for me now than they did before in terms of finding users. It's really hard to find people to try my work. Difficult even to convince them to watch a 10 minute video. Though I guess many people are in the same boat right?
That makes more sense but I don't see what pub/sub has to do with a no-code full-stack framework. Other than that some of them might want a chat widget?
It's real-time by default, done in a cheap (and efficient) way. The views update automatically when relevant data changes, with no possibility of overwrite conflicts when editing concurrently. Changes are only distributed to users/clients who are looking at the affected data.
It's not just for chat. Anytime people update data collaboratively, for any data-driven software, you run into issues of people overwriting each other's changes... Sadly users got really used to poor experiences like having to 'refresh the page', or 'locking down resources' during edits to avoid concurrent editing altogether.
It's been kind of disturbing for me to realize how few people care about data integrity in collaborative scenarios. It's a hidden issue which can cause major problems but which users tend to tolerate. I've had this experience first-hand with colleagues using various other platforms: someone else had the same resource open in the browser for a while, we updated different fields, and my field was overwritten with its old value, but we didn't realise for some time, until it caused a bug in one of the workflows. Then it was like, "I'm sure I updated that value."
SC's pub/sub channels are part of the secret sauce that makes this work. SocketCluster makes some unique tradeoffs with its pub/sub mechanism: the channels are extremely cheap to create and are cleaned up automatically (and cheaply)... and this holds at any scale, with sharding. This was an intentional design decision, different from message queues like Kafka and RabbitMQ, which prioritise throughput and are typically used in the backend rather than end-to-end.
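The design idea behind cheap, auto-cleaned channels can be sketched in a few lines. This is a minimal in-memory toy, not SocketCluster's actual implementation, which also handles the network transport and sharding:

```javascript
// Minimal in-memory sketch: channels are created lazily on first
// subscribe and destroyed automatically when the last subscriber
// leaves, so per-record channels cost nothing when nobody is watching.
class ChannelHub {
  constructor() { this.channels = new Map(); }

  subscribe(name, handler) {
    if (!this.channels.has(name)) this.channels.set(name, new Set());
    const subs = this.channels.get(name);
    subs.add(handler);
    // Return an unsubscribe function; empty channels are cleaned up.
    return () => {
      subs.delete(handler);
      if (subs.size === 0) this.channels.delete(name);
    };
  }

  publish(name, data) {
    const subs = this.channels.get(name);
    if (subs) for (const handler of subs) handler(data);
  }
}

const hub = new ChannelHub();
const seen = [];
const unsubscribe = hub.subscribe('record/42', (data) => seen.push(data));
hub.publish('record/42', { price: 10 }); // delivered only to watchers of this record
unsubscribe();
hub.publish('record/42', { price: 11 }); // no subscribers; channel already cleaned up
console.log(seen.length, hub.channels.size); // 1 0
```

Because a channel is just a map entry, you can afford one per record, which is what makes "only send changes to clients looking at the affected data" cheap.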
I find that skills work very well. The main SKILL file has an overview of all the capabilities of my platform at a high level and each section links to a more specific file which contains the full information with all possible parameters for that particular capability.
Then I have a troubleshooting file (also linked from the main SKILL file) which basically lists out all the 'gotchas' that are unique to my platform and thus the LLM may struggle with in complex scenarios.
After a lot of testing, I identified just 5 gotchas and wrote a short section for each one. The title of each section describes the issue and lists out possible causes with a brief explanation of the underlying mechanism and an example solution.
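The overall layout looks roughly like this (the file names, capability names, and section titles here are invented for illustration; the real files are specific to my platform):

```markdown
# SKILL: <platform name>

One line per capability, each linking to a file with full parameters:

- Data binding: stream a collection into an HTML tag. See capabilities/data-binding.md
- Search: query and index records. See capabilities/search.md

## Troubleshooting

If something doesn't behave as expected, read troubleshooting.md.
It lists the 5 known gotchas; each section title describes the issue,
lists possible causes with the underlying mechanism, and gives an
example solution.
```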
Adding the troubleshooting file was a game changer.
If it runs into a tricky issue, it checks that troubleshooting file. It's highly effective. It made the whole experience seamless and foolproof.
My platform was designed to reduce applications down to HTML tags which stream data to each other, so the goal is low token count and no debugging.
I basically replaced debugging with troubleshooting; the 5 cases I mentioned are literally all that was left. It seems to be able to quickly assemble any app without bugs now.
The 'gotchas' are not exactly bugs but more like "Why doesn't this value update in realtime?" kind of issues. They involve performance/scalability optimizations that the LLM needs to be aware of.