GuB-42's comments | Hacker News

I think JSON has the opposite problem: it is too simple. The lack of comments in particular is a real problem for many common uses of the format today.

I know some implementations of JSON support comments and other extensions, but that is not true JSON, in the same way that most simple XML implementations are not true XML. That's why I say "opposite problem": XML is too complex, and most practical uses of XML rely on incomplete implementations, while many practical uses of JSON rely on extended implementations.

By the way, this is not a problem for what JSON was designed for: a text interchange format, with JS being the language of choice, but it has gone beyond its design: configuration files, data stores, etc...


A lot of people dislike the decision not to include comments in JSON, but I think that, while shocking, it was and is totally correct.

In a programming language it's usually free to have comments because the comment is erased before the program runs; we usually render comments in grey text because they can't change the meaning of the program.

In a data language you have no such luxury. In a data language there's no comment erasure happening between the producer and the consumer, so comments are just dangerous as they would without doubt evolve into a system of annotations -- an additional layer of communication which would then not be standardized at all and which then would grow into a wild west of nonstandard features and compatibility workarounds.


I don't dislike the decision at all, FWIW! For data interchange it's totally reasonable. But it does make JSON ill-suited for a bunch of applications that JSON has been forcefully and unfortunately applied to.

Could you imagine hitting a rest api and like 25% of the bytes are comments? lol

Worse than that - people will start tagging "this value is a Date" via comments, and you'll need to parse ad-hoc tags in the comments to decode the data. People already do tagging in-band, but at least it's in-band and you don't have to write a custom parser.

See also: PostScript. The document structure extensions being comments always bothered me. I mean surely, surely in a Turing-complete language there is somewhere to fit document structure information. Adobe: nah, we will jam it in the comments.

https://dn790008.ca.archive.org/0/items/ps-doc-struc-conv-3/...


Not sure it's a fair comparison. The spec says:

"Use of the document structuring conventions... allows PostScript language programs to communicate their document structure and printing requirements to document managers in a way that does not affect the PostScript language page description"

The idea being that those document managers did not themselves have to be PostScript interpreters in order to do useful things with PostScript documents given to them. Much simpler.

For example, a page imposition program, which extracts pages from a document and places them effectively on a much larger sheet, arranged in the way they need to be for printing 8- or 16- or 32-up on a commercial printing press, can operate strictly on the basis of the DSC comments.

To it, each page of PostScript is essentially an opaque blob that it does not need to interpret or understand in the least. It is just a chunk of text between %%BeginPage and %%EndPage comments.

This is tremendously useful. A smaller scale of two-up printing is explicitly mentioned as an example on p. 9 of the spec.


Reminds me how old versions of .net used to serialize dates as "\/Date(1198908717056)\/".
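For the curious, that legacy format is just Unix milliseconds wrapped in `/Date(...)/` (the `\/` sequences are ordinary JSON string escapes that collapse to plain `/` when parsed). A minimal Python sketch of a decoder, ignoring the optional timezone-offset variant of the format:

```python
import re
from datetime import datetime, timezone

def parse_dotnet_date(value):
    """Decode the legacy .NET '/Date(ms)/' wire format into a datetime.

    `value` is the string as seen *after* JSON parsing, i.e. the
    '\\/' escapes have already collapsed to plain '/'.
    """
    m = re.fullmatch(r"/Date\((-?\d+)\)/", value)
    if not m:
        raise ValueError(f"not a .NET JSON date: {value!r}")
    return datetime.fromtimestamp(int(m.group(1)) / 1000, tz=timezone.utc)

# The timestamp from the comment above decodes to 29 December 2007 (UTC):
print(parse_dotnet_date("/Date(1198908717056)/").isoformat())
```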

HTML and JS both have comments, I don't see the problem

And both are poor interchange formats. When things stay in their lane, there is no "problem." When you try to make an interchange format using a language with too many features, or comments that people abuse to add parsable information (e.g. "type information") then there is a BIG problem.

« HTML is a poor interchange format. » - quote of the century -

It caused all kinds of problems, though those tend to be more directly traceable to the "be liberal in what you accept" ethos than to the format per se.

> Could you imagine hitting a rest api and like 25% of the bytes are comments? lol

That's pretty much what already happens. Getting a numeric value like "120" by serializing it through JSON takes three bytes. Getting the same value through a less flagrantly wasteful format would take one.

I guess that's more than 25%. In the abstract ASCII integers are about 50% waste. ASCII labels for the values you're transferring are 100% waste; those labels literally are comments.

If you're worried about wasting bandwidth on comments, JSON shouldn't be a format you ever consider, for any purpose.
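To put numbers on that overhead, here is a quick comparison of the same value serialized as labeled JSON versus a single raw byte (the field name and the one-byte framing are arbitrary choices for illustration):

```python
import json
import struct

payload = {"temperature": 120}

as_json = json.dumps(payload).encode()  # b'{"temperature": 120}'
as_raw = struct.pack("<B", 120)         # one unsigned byte, no label

print(len(as_json), len(as_raw))        # 20 bytes vs 1 byte
```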

lol


> so comments are just dangerous as they would without doubt evolve into a system of annotations -- an additional layer of communication which would then not be standardized at all and which then would grow into a wild west of nonstandard features and compatibility workarounds

IIRC Douglas Crockford explicitly stated that he saw people initially using comments for a purpose like ad hoc preprocessor directives.


> In a programming language it's usually free to have comments because the comment is erased before the program runs

That's inherent to the language specification, but it isn't inherent to the document. You have to have a system with rules that require that erasure.

Nothing prevents one from mandating a system that strips those comments out of JSON. You could even "compile" JSON to, I don't know, BSON or msgpack or something.

Just as nothing prevents one from creating tooling to, say, extract type annotations from comments in a dynamically typed language.
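Crockford himself suggested exactly this kind of pre-pass: strip the comments, then hand the result to a standard parser. A minimal sketch of such a stripper (handles `//` and `/* */` outside of strings; not a full JSONC implementation):

```python
import json

def strip_json_comments(text):
    """Remove // and /* */ comments from JSON-with-comments,
    leaving string contents untouched."""
    out, i, n = [], 0, len(text)
    in_string = False
    while i < n:
        c = text[i]
        if in_string:
            out.append(c)
            if c == "\\" and i + 1 < n:    # keep escaped char verbatim
                out.append(text[i + 1])
                i += 1
            elif c == '"':
                in_string = False
        elif c == '"':
            in_string = True
            out.append(c)
        elif text.startswith("//", i):     # line comment: skip to newline
            i = text.find("\n", i)
            if i == -1:
                break
            continue
        elif text.startswith("/*", i):     # block comment: skip to */
            end = text.find("*/", i + 2)
            i = n if end == -1 else end + 2
            continue
        else:
            out.append(c)
        i += 1
    return "".join(out)

demo = '{"a": 1, // note\n "b": "http://x" /* why */ }'
print(json.loads(strip_json_comments(demo)))
```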


> while shocking it was and is totally correct

Agreed; consider how comments have been abused in HTML, XML, and RSS.

Any solution or technology that can be abused will be abused if there are no constraints.


> In a data language there's no comment erasure happening between the producer and the consumer, so comments are just dangerous as they would without doubt evolve into a system of annotations -- an additional layer of communication which would then not be standardized at all and which then would grow into a wild west of nonstandard features and compatibility workarounds.

But there's nothing stopping you from commenting your JSON now. There's no obligation to use every field. There can't be, because the transfer format is independent of the use to which the transferred data is put after transfer.

And an unused field is a comment.

    {
      "customerUUID": "3"
      "comment": "it has to be called a 'UUID' for historical reasons"
    }
If this would 'without doubt' evolve into a system of annotations, JSON would already have a system of annotations.

> that decision not to include comments in JSON, but I think while shocking it was and is totally correct.

Yaml is fugly, but it emerged from JSON not supporting comments. Now we're stuck with two languages for configuring infrastructure: a beautiful one without comments, so unusable, and another where I can never format a list correctly on the first try, but where comments are OK.


YAML also expanded to add arbitrary scripting via a pile of bolt-on capabilities so that it's now a serialisation language that's Turing-complete, or that includes Turing-complete capabilities within it, everything from:

  command:
    - /bin/sh    
    - -c
    - rm -rf $HOME
to:

  state: >
    {% set foo = states('...') %}
    {% set bar = states('...') %}
    {% if foo == FOO and bar == BAZ %} 
    ...
This makes it damn annoying to work with because everyone's way of doing it is different and since it's not a first-class element you have to rethink everything you want to do into strange patterns to work with how YAML does things.

This scripting is not a part of YAML. It could be done in JSON as well:

  {"command": [
    "/bin/sh",
    "-c",
    "rm -rf $HOME"
  ]}
In fact, this is completely equivalent to your YAML.

JSON is obviously perfectly usable, given how widely it's used. Even Douglas Crockford suggested just using a JSON interpreter that strips out comments, if you need them.

And if you want something like JSON that allows comments, and you aren't working on the web, Lua tables are fine.


Many years ago I worked for a company that did EDI software. When XML was introduced they had to add support for it, just the primitive XML 1.0 that was around at the time, with none of the modern complexities. With the same backend code, just switching the parsing, they found either a 100x slowdown in parsing and a 10x increase in memory use, or the other way around (so 10x slower, 100x the memory). The functionality was identical; all they did was switch the frontend from EDI to XML.

Since EDI is meant for processing large numbers of transactions as quickly as possible, I hate to think what the move to XML did to that. I moved on years ago so I don't know whether they just threw more hardware at the problem to achieve the same thing that EDI already gave them but now with angle brackets, or whether the industry gave up on XML because of its poor performance.

Come to think of it I'm pretty sure they would have tried blockchain when that got trendy as well.


No, it was obviously and flagrantly incorrect, as evidenced by the success of interchange formats that do allow for comments, including many real world systems that pragmatically allow comments even when JSON says they shouldn't. This is Stockholm Syndrome.

But what can we expect from a spec that somehow deems comments bad but can't define what a number is?


How do you feel numbers are ill defined in json? The syntactical definition is clear and seems to yield a unique and obvious interpretation of json numbers as mathematical rational numbers.

A given programming language may not have a built in representation for rational numbers in general. That isn't the fault of json.


I can't really tell what you're trying to say; JSON also has no representation for rational numbers in general. The only numeric format it allows is the standard floating point "2.01e+25" format. Try representing 1/3 that way.

The usual complaint about numbers not being well-defined in JSON is that you have to provide all numbers as strings; 13682916732413492 is ill-advised JSON, but "13682916732413492" is fine. That isn't technically a problem in JSON; it's a problem in Javascript, but JSON parsers that handle literals the same way Javascript would turn out to be common.

Your "defense", on the other hand, actually is a lack in JSON itself. There is no way to represent rational numbers numerically.


I didn't say that json can represent all rational numbers. I said that all json numbers have an obvious interpretation as a rational number.

So far you haven't really shown an example of a json number which has an ambiguous or ill defined interpretation.

Maybe you mean that json numbers may not fit into 32 bit integers or double floats. That's certainly true but I don't see it as a deficiency in the standard. There is no limit on the size of strings in json, so why have a limit on numbers?


>> A given programming language may not have a built in representation for rational numbers in general.

Why did you say this?


As long as they stay comments there's no harm. As soon as they become struct tags and stripping comments affects the document's meaning you lose the plot.

I've said it before, but I maintain that XML has only two real problems:

1. Attributes should not exist. They make the document suddenly have two dimensions instead of one, which significantly increases complexity. Anything that could be an attribute should actually be a child element.

2. There should be a single close tag, `</>`, which closes the most recent element; named closing tags burn a significant amount of space on useless syntax. Other than that and the self-closing `<tag />` (which itself is less useful without attributes) there isn't much that you need. Maybe a document close tag like `<///>`

You'll notice that, yes, JSON solves both of those things. That's a part of why it's so popular. The other is just that a lot more effort was put into maximizing the performance of JavaScript than shredding XML, and XSLT, the intended solution to this problem, is infamous at this point.

The problem of comments is kind of a non-issue in practice, IMO. You can just add a `"_COMMENT"` element or similar. Sure, yes, it will get parsed. But you shouldn't have that many comments that it will cause a genuine performance issue.

However, JSON still has two problems:

1. Schema support. You can't validate a file before deserializing it in your application. JSON Schema does exist, but its support is still thin, IMX.

2. Many serializers are pretty bad with tabular data, and nearly all of them are bad with tabular data by default. So sometimes it's a data serialization format that's bad at serializing bulk data. Yeah, XML is worse at this. Yeah, you can use the `"colNames": ["id", ...], "rows": [ [1,...],[2,...] ]` method or go columnar with `"id": [1,2,...], "name": [...], "createDate": [...]`, but you had better be sure both ends can support that format.

In both cases, it seems like there is an attempt to resolve both of those issues. OpenAPI 3.1 has JSON schema included in it. The most popular JSON parsers seem to be adding tabular data support. I guess we'll see.
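For a sense of what the header-plus-rows trick mentioned above buys you, here is a sketch converting between the two layouts (field names invented for the example):

```python
import json

rows = [
    {"id": 1, "name": "Ada",  "createDate": "2023-01-05"},
    {"id": 2, "name": "Brin", "createDate": "2023-02-11"},
]

# Default serializer output: every key is repeated in every row.
verbose = json.dumps(rows)

# Header-plus-rows layout: each key appears exactly once.
col_names = list(rows[0])
compact = json.dumps({"colNames": col_names,
                      "rows": [[r[c] for c in col_names] for r in rows]})

# The receiving end rebuilds the row-of-objects view:
decoded = json.loads(compact)
rebuilt = [dict(zip(decoded["colNames"], r)) for r in decoded["rows"]]

# Compact is smaller even at two rows; the gap grows with row count.
print(len(verbose), len(compact))
```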


XML is a Markup Language. The text is what is being marked up, and the attributes are how to mark it up. Try writing the equivalent of <font family="Arial">Hello world</font> without attributes. I'll wait.

Using XML as a structured data interchange format is abuse. Of course the square peg doesn't fit in the round hole. You propose filing off the corners of the square, making it an octagon, so it will fit the round hole better.


While XML/XHTML aren't spec'ed/evolved to support your fun font sans attribute challenge, certainly modern html does ...

  <p>
  <style>
  @scope { font-family: "Arial" ; }
  </style>
  Prospero: Where in the world is my teapot? Hello? I'm waiting! 
  </p>
I know one could argue that that css rule property is essentially an attribute, but it illustrates, like XML plists[1], that one can define the tags arbitrarily to have their content be meta upon sibling/nested content, subsuming attributes' role.

To wit, it seems to me a style issue.

[1] Apple has long used XML plists for data ~ interchange or even archival storage such as .webarchive (ie just a plist flavor). Of course they soon added a simple binary version to compress out some redundancy and encoding waste.

They used an XML nested tag approach, not attributes. Maybe not well rounded pegs and holes but it has worked for them on a large scale over a long time.


I disagree on several points here:

1. I think attributes absolutely should exist. They're great for describing metadata related to the tag: e.g. element ID, language, datatype, source annotation, namespacing. They add little in complexity.

2. The point of a close tag with a name is to make it unambiguous what it's trying to close off.

It sounds to me like what you want is not a better XML, but just s-exprs. Which is fine, but not quite solving the same problem.

3. As far as schema support, it seems to me that JSON Schema is well-established and perfectly cromulent – so much so that YAML authors are trying to use it to validate their stuff (the poor bastards) – and XML schema validation, while robust, is a complex and fragmented landscape around DTD, XSD, RELAX-NG, and Schematron. So although XML might have the edge, it's a more nuanced picture than XML proponents are claiming.

4. As far as tabular data, neither XML nor JSON were built for efficient tabular data representation, so it shouldn't be a surprise that they're clunky at this. Use the right tool for the job.


> 1. I think attributes absolutely should exist. They're great for describing metadata related to the tag: e.g. element ID, language, datatype, source annotation, namespacing. They add little in complexity.

No, they're barely adequate for those purposes. And you could (and, if you have an XSD, probably should) still replace them with elements. If you argue that you can't, then you're arguing that JSON does not function. You can just inline metadata alongside data. That works just fine. That's the thing about metadata: it's data!

You don't need attributes. Having worked in information systems for 25 years now, they are the most heavily, heavily, heavily misused feature of XML and they are essentially always wrong.

Because when someone represents data like this:

  <Person>  
    <ID>90034</ID>  
    <FirstName>Anthony</FirstName>  
    <MiddleName />
    <LastName>Perkins</LastName>  
    <Site>4302</Site>  
  </Person>  
You can write a XSD with the full set of rules for schema validation.

On the other hand, if you do this:

  <Person ID="90034"  
    FirstName="Anthony"  
    MiddleName=""
    LastName="Perkins"  
    Site="4302" />
Well, now you're a bit stuck. You can make the XSD check basic data types, and that's it. You can never use complex types. You can never use multiple values if you need them, or if you do you'll have to make your attribute a delimited string. You can't use order. You're limiting your ability to extend or advance things.

That's the problem with XML. It's so flexible it lets developers be stupid, while also claiming strictness and correctness as goals.

> 2. The point of a close tag with a name is to make it unambiguous what it's trying to close off.

Sure, but since closing tags in the proper order is mandatory, the name isn't actually adding any information at all. The only thing it does is introduce trivial syntax errors.

Because the truth is that this is 100% unambiguous in XML because the rules changed:

  <Person>  
    <ID>90034</>  
    <FirstName>Anthony</>  
    <MiddleName />
    <LastName>Perkins</>  
    <Site>4302</>  
  </>  
The reason SGML had a problem with the generic close tag was that SGML didn't require a closing tag at all. That was the problem. It didn't have `<tag />`. It let you say `<tag1><tag2>...</tag1>` or `<tag1><tag2>...</>`.

Named closing tags had more of a point when we were actually writing XML by hand and didn't have text editors that could find the open and close tags for you, but that is solved. And now we have syntax highlighting and hierarchical code folding on any text editor, nevermind dedicated XML editors.

> 3. As far as schema support, it seems to me that JSON Schema is well-established and perfectly cromulent

Then my guess is that you have worked exclusively in the tech industry for customers that are also exclusively in the tech industry. If you have worked in any other business with any other group of organizations, you would know that the rest of the world is absolute chaos. I think I've seen 3 examples of a published JSON Schema, and hundreds that do not.

> 4. As far as tabular data, neither XML nor JSON were built for efficient tabular data representation, so it shouldn't be a surprise that they're clunky at this. Use the right tool for the job.

No, I think you're looking at what the format was intended to do 25 years ago and trying to claim that that should not be extended or improved ever. You're ignoring what it's actually being used for.

Unless you're going to make data queries return large tabular data sets to the user interface as more or less SQLite or DuckDB databases so the browser can freely manipulate them for the user... you're kind of stuck with XML or JSON or CSV. All of which suck for different reasons.


1. I don't disagree that attributes have been abused – so have elements – but you yourself identified the right way to use them. Yes, you can inline attributes, but that also leads to a document that's harder to use in some cases. So long as you use them judiciously, it's fine. In actual text markup cases, they're indispensable, as HTML illustrates.

2. As far as JSON Schema, you're wrong on all counts – wrong that I haven't seen Some Stuff, wrong that JSON Schema doesn't get used (see Swagger/OpenAPI), and wrong that XML Schema doesn't also get underutilized when a group of developers gets lackadaisical.

3. As far as what historical use has been, I'm less interested in exhuming historical practice than simply observing which of the many use cases over the last 20 years worked well (and still work) and which didn't. The answer isn't that none of them worked, and it certainly isn't that XML users had a better bead on how to use it 20 years ago – it went through a massive hype curve just like a lot of techs do.

4. Regarding tabular data exchange, I stand by my statement. Use XML or JSON if you must, and sometimes you must, but there are better tools for the job.


Attributes exist due to its origin as a markup language. XML is actually (big surprise) a pretty good markup language, where the tags are sort of like function calls and the attributes are args, with little to no information to be gleaned from the text. The big sin was to say "hey, the tooling is getting pretty good for these SGML-like markup languages, let's use it as a structured data interchange format, it's almost the same thing". Now all the data is in the text and the attributes are not just superfluous but actively harmful, as there is a weird extra data axis that people will aggressively use.

Hard disagree about attributes, each tag should be a complete object and attributes describe the object.

    <myobject foo="bar"/>
    // means roughly
    new MyObject(foo="bar")
But objects can also be containers and that's what nesting is for. There shouldn't ever be two dimensions in the way you're describing. The pattern of

    <myobject>
      <foo>bar</foo>
    </myobject>
is the root of most XML evil. Now you have to know if myobject is a container or a franken-object with a strict sub-schema in order to parse it. The biggest win of JSON is that .loads/.dump make it really obvious that it's for serializing complete objects where a lot of tooling surrounding XML makes you poke at the document tree.

The thing is, he is not working in open source.

He only released his software as open source when there was no more money to be made with it. The idea being that even if it is of no use to him, it could be of use to someone else. In a sense, it is crazy to think of such actions as generous when it is what everyone should have done, but since being an asshole is the rule, breaking that rule is indeed generous.

To me, working in open source means that your work goes to open source projects right now, not 10 years later when your software is obsolete and has been amortized. The difference matters because you are actually trying to make money here, and the protection offered by the licence you picked may be important to your business model.

John Carmack is making gifts, which is nice, but he wasn't paid to make gifts, he was paid to write proprietary software, so he worked in proprietary software, not open source. On one occasion, he gave away one of his Ferraris, which is, again, nice, but that doesn't make him a car dealer.


First reaction: How come the source code is not public in the first place, accessible to every Swedish citizen? They paid for it!

But it turns out that more than the source code was leaked.


To me, Instagram is a public platform at its core, where people publish things for the whole world to see. Private messages are just a secondary feature. It is like having a conversation in a restaurant, where the guy at the next table can listen to everything, but usually doesn't. Good enough for planning a surprise party, not for truly sensitive information. Kind of like private messages in Reddit, Discord, etc... a convenient feature, but don't expect real privacy.

Messenger has a higher expectation of privacy, Facebook is more at the "group of friends" level. While Instagram is a public restaurant, Facebook is more like a house party. WhatsApp has the highest expectation of privacy as it is designed for private, often one-to-one conversations first.


Sure, but if you already have e2ee, it takes work to remove it... why invest the time to do that?

It also takes work to keep it working and it may have a lot of bugs already, that are hard to fix because of it. A non-E2EE chat app is very easy to make.

I didn't notice any link with the iPhone, except maybe a vague coincidence in timing. Online banking existed before the iPhone, it worked using websites, on personal computers. And it took some time before smartphones were taken seriously by banks.

What I noticed however is a noticeable decrease in service quality in bank branches while online (desktop browser) options became better. Banks pushed customers out of their branches progressively. In the early 2010s tellers couldn't do anything you couldn't do online by yourself. For services like dealing with large quantities of cash, or coins, they made it so that you couldn't do more than what the ATMs allowed you to do, limiting the amount of cash the branch had access to and increasing how much you could withdraw from ATMs.

They didn't get the idea to fire all their tellers when Steve Jobs announced the iPhone. It was a decision at least a decade in the making. It is just that people tend to resist change so it happens slowly, especially for big, serious business like banking. And I don't think it is a bad thing.


That's a really good point. They forced the adoption of these services by kneecapping the tellers, in terms of what they had access to.

I think it will become interesting when AI will be able to decompile binaries.

Decompiling binaries is easy when they are C# or Java, even before AI. C# is a Microsoft language, and C# games have thriving mod communities with deep hooks into the core game, and detailed documentation reverse-engineered from the binary.

I wonder about the total energy cost of apps like Teams, Slack, Discord, etc... Hundreds of millions of users, an app running constantly in the background. I wouldn't be surprised if the global power consumption on the client side reached a gigawatt. Add the increased wear on the components, the cost of hardware upgrades, etc...

All that to avoid hiring a few developers to make optimized native clients on the most popular platforms. Popular apps and websites should lose or gain carbon credits based on optimization. What is negligible for a small project becomes important when millions of users get involved, especially for background apps.


If we go by Microsoft's 2020 account of 1 billion devices running Windows 10 [0], and assume all of those are running some kind of Electron app (or multiple?), you easily get your gigawatt by just saving 1 watt on each device (on average). I suspect you'd probably go higher than 1 gigawatt, but I'm not sure it gets as far as another order of magnitude. Then again, the noisy fan on my notebook begs to differ, and maybe the 10 GW mark could be doable...

[0] https://news.microsoft.com/apac/2020/03/17/windows-10-poweri...
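The arithmetic behind that estimate, spelled out (both inputs are the assumptions from the comment above, not measurements):

```python
devices = 1_000_000_000       # Microsoft's 2020 Windows 10 install base
watts_saved_per_device = 1    # assumed average saving per device

total_watts = devices * watts_saved_per_device
print(total_watts / 1e9, "GW")   # 1.0 GW
```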


There are 30,000 different x-platform GUI frameworks and they all share two attributes: (1) they look embarrassingly bad compared to Electron or native apps, and (2) they are mostly terrible to program for.

I feel like I am never wasting my time when I learn how to do things with the web platform, because it turns out the app I made for desktop and tablet works on my VR headset. Sure, if you are going to pay me 2x the market rate and it is a sure thing, you might interest me in learning Swift and how to write iOS apps, but I am not going to do it for a personal project, or even a moneymaking project where I am taking some financial risk, no way. The price of learning how to write apps for Android is that I have to also learn how to write apps for iOS, and write apps for Windows, and write apps for macOS, and decide what's the least-bad widget set for Linux and learn to program for it too.

Every time I do a shoot-out of Electron alternatives Electron wins and it is not even close -- the only real competitor is a plain ordinary web application with or without PWA features.


> Every time I do a shoot-out of Electron alternatives Electron wins and it is not even close

Only if you're ok with giving your users a badly performing application. If you actually care about the user experience, then Electron loses and it's not even close.


Name something specific. Note for two x-platform UI toolkits I have some familiarity with:

Python + tkinter == about the same size as electron

Java + JavaFX == about the same size as electron

Sure there are people who write little applets for software developers that are 20k Win32 applications still but that is really out of the mainstream.


Many times this. Native path is the path of infinite churn, ALL the time. With web you might find some framework bro who takes pride in knowing all the intricacies of React hooks who'll grill you for not dreaming in React/Vue/framework of the day, but fundamental web skills (JS/HTML/CSS) are universal. And you can pretty much apply them on any platform:

- iOS? React Native, Ionic, Web app via Safari

- Android? Same thing

- Mac, Windows, Linux – Tauri, Electron, serve it yourself

Native? Oh boy, here we fucking go: you've spent last decade honing your Android skills? Too bad, son, time to learn Android jerkpad. XML, styles, Java? What's that, gramps? You didn't hear that everything is Kotlin now? Dagger? That's so 2025, it's Hilt/Metro/Koin now. Oh wow, you learned Compose on Android? Man, was your brain frozen for 50 years? It's KMM now, oh wait, KMM is rebranded! It's KMP now! Haha, you think you know Compost? We're going to release half baked Compost multiplatform now, which is kinda the same, but not quite. Shitty toolchain and performance worse than Electron? Can't fucking hear you over jet engine sounds of my laptop exhaust, get on my level, boy!


Qt does exist. It's not difficult.

Qt costs serious money if you go commercial. That might not be important for a hobby project, but lowers the enthusiasm for using the stack since the big players won't use it unless other considerations compel them.

Depends on the modules and features you use, or where you're deploying, otherwise it's free if you can adhere to the LGPL. Just make it so users can drop in their own Qt libs.

Qt only costs money if you want access to their custom tooling or insist on static linking. We're comparing to Electron here. Why do you need to static link? And why can't you write QML in your text editor of choice and get on with life?

Some widgets and modules, like Qt Charts (or Graphs, I forget), are dual GPL and commercially licensed, so it's a bit more complicated than that. You also need a commercial license for automotive and embedded deployments.

Right but it's a perfectly functional (even remarkably feature complete) UI toolkit without the copyleft addons.

> You also need a commercial license for automotive and embedded deployments.

How does that work? The LGPL (really any OSI license) isn't compatible with additional usage restrictions.


You generally can't adhere to the LGPL in automotive or embedded deployments: the user can't link their own Qt libs in their auto/embedded device.

Slint has a similar license


> You generally can't adhere to the LGPL in automotive

"Can't" or "won't"? The UI process is not usually the part that need certification.

> Slint has a similar license

Indeed, but Slint's open source license is the GPL and not the LGPL. And its more permissive license is made for desktop apps and explicitly forbid embedded (so automotive)


I'm guessing some parts of code are needed to make it run on those platforms and aren't LGPL.

I'm sure microsoft and slack have sufficient funds for a commercial Qt license.

...which is the same as Flutter. Both don't use native UI toolkits (though Qt doesn't use Skia, I'll give you that (Flutter has Impeller engine in the works)). And Qt has much worse developer experience and costs money.

Qt costs money if you for some reason insist on static linking AND use all the fancy components, the core stuff is all LGPL.

Anyway it does look native and it is way faster than electron, which also doesn't look native so I don't understand why it's a problem for Qt but not for electron.


I actually built this analysis while I worked at Microsoft so I 100% agree. Doing the work at the platform level is the way to go and you can actually make a significant impact with this kind of approach. The other value of this that's not obvious is that doing it client side ends up touching all the grids/generators in the world outside of the market based accounting that tends to drive the datacenter carbon impact analysis.

> if Wikipedia vanished what would it mean …

That someone would need to restore some backups, and in the meantime, use mirrors.

Seriously, not that big of a deal. I don't know how many copies of Wikipedia are lying around but considering that archives are free to download, I guess a lot. And if you count text-only versions of the English Wikipedia without history and talk pages, it is literally everywhere as it is a common dataset for natural language processing tasks. It is likely to be the most resilient piece of data of that scale in existence today.

The only difficulty in the worst case scenario would be rebuilding a new central location and restarting the machinery with trusted admins, editors, etc... Any of the tech giants could probably make a Wikipedia replacement in days, with all data restored, but it won't be Wikipedia.


Did you try charging an e-bike with your contraption?

I don't know what you can take away from this; maybe you can see it as advance pedaling, or a way to get a feel for energy conversion losses. Anyway, it is the kind of harmlessly stupid idea that I would want to try just because I could.


What a ridiculous idea, I love it.

I think the correct answer would be to ask "why are they doing that and not using Google Sheets?".

There are a lot of good reasons for not using Google Sheets. Maybe the spreadsheet is using features that Google Sheet doesn't support, maybe one of the parties is in China, where Google services are blocked, maybe it is against company policy to use Google Docs, maybe they have limited connectivity.

It is good to acknowledge the obvious, off the shelf solutions, but if you are given a job, that's either because the customer did their homework and found out that no, it is indeed not appropriate, or, for some reason, they have money burning their pockets and they want a custom solution, just because. In both cases that's how you are getting paid. So, I don't consider "use Google Sheets, you idiot" to be an appropriate answer. Understand the customer specific needs, that's your job, even more so in the age of AI.

And maybe the interviewer will be honest and say "just assume you can't, this is just an exercise in software architecture".

