Hacker News | AnthonyMouse's comments

> Sure, but an attacker could still overwrite your kernel which your untouched bootloader would then happily run.

Except that it's on the encrypted partition and the attacker doesn't have the key to unlock it since that's on the removable media with the boot loader.

They could write garbage to it, but then it's just going to crash, and if all they want is to destroy the data they could just use a hammer.


The attacker does this when the drive is already unlocked & the OS is running.

Backdooring your kernel is much, much more difficult to recover from than a typical user-mode malware infection.


> The attacker does this when the drive is already unlocked & the OS is running.

But then you're screwed regardless. They could extract the FDE key from memory, re-encrypt the unlocked drive with a new one, disable secureboot and replace the kernel with one that doesn't care about it, copy all the data to another machine of the same model with compromised firmware, etc.


> Full disk encryption protects from somebody yanking a hard drive from running server (actually happens) or stealing a laptop.

Both of these are super easy to solve without secure boot: The device uses FDE and the key is provided over the network during boot, in the laptop case after the user provides a password. Doing it this way is significantly more secure than using a TPM, because the network can stop providing the key as soon as the device is reported stolen, and the key was never in non-volatile storage anywhere on the device, so it can't be extracted from a powered-off device even with physical access and specialized equipment.
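As a rough illustration of the flow being described, here's a minimal sketch. All names are hypothetical, the transport is elided, and HMAC of the password stands in for a real authentication protocol; a real deployment would use an authenticated TLS channel and an HSM-backed key store.

```python
# Sketch of the "FDE key over the network" unlock flow described above.
# Hypothetical names throughout; HMAC stands in for real authentication.
import hashlib, hmac, os

class KeyEscrowServer:
    """Holds FDE keys; stops releasing a key once its device is reported stolen."""
    def __init__(self):
        self._keys = {}        # device_id -> 32-byte FDE key
        self._stolen = set()

    def enroll(self, device_id: str) -> bytes:
        key = os.urandom(32)   # the key never touches the device's NV storage
        self._keys[device_id] = key
        return key

    def report_stolen(self, device_id: str):
        self._stolen.add(device_id)

    def request_key(self, device_id: str, password: str, expected_mac: bytes):
        if device_id in self._stolen:
            return None        # stolen device: key release refused from now on
        # laptop case: user must prove knowledge of the unlock password
        mac = hmac.new(password.encode(), device_id.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(mac, expected_mac):
            return None
        return self._keys.get(device_id)

# Boot-time flow: key released while the device is in good standing, never after.
server = KeyEscrowServer()
fde_key = server.enroll("laptop-1")
good_mac = hmac.new(b"hunter2", b"laptop-1", hashlib.sha256).digest()

assert server.request_key("laptop-1", "hunter2", good_mac) == fde_key
server.report_stolen("laptop-1")
assert server.request_key("laptop-1", "hunter2", good_mac) is None
```

The point the sketch makes is structural: the decision to release the key lives on the server, so "stolen" is enforceable after the fact, which a key sealed in a local TPM can never be.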


> the device uses FDE and the key is provided over the network during boot

An example of such an implementation, since well before TPMs were commonplace: https://www.recompile.se/mandos


> The device uses FDE and the key is provided over the network during boot, in the laptop case after the user provides a password.

Sounds nice on paper, has issues in practice:

1. no internet (e.g. something like Iran)? Your device is effectively bricked.

2. heavily monitored internet (e.g. China, USA)? It's probably easy enough for the government to snoop your connection metadata and seize the physical server.

3. no security at all against hardware implants / base firmware modification. Secure Boot can cryptographically prove to the OS that your BIOS, your ACPI tables and your bootloader didn't get manipulated.


> no internet (e.g. something like Iran)? Your device is effectively bricked.

If your threat model is Iran and you want the device to boot with no internet then you memorize the long passphrase.

> heavily monitored internet (e.g. China, USA)? It's probably easy enough for the government to snoop your connection metadata and seize the physical server.

The server doesn't have to be in their jurisdiction. It can also use FDE itself and then the key for that is stored offline in an undisclosed location.

> no security at all against hardware implants / base firmware modification. Secure Boot can cryptographically prove to the OS that your BIOS, your ACPI tables and your bootloader didn't get manipulated.

If your BIOS or bootloader is compromised then so is your OS.


> If your threat model is Iran

Well... they wouldn't be the first ones to black out the Internet either. And I'm not just talking about threats specific to oneself here, because that is a very different threat model, but about the effects of being collateral damage as well. Say your country's leader says something that makes the US President cry: who's to say he doesn't order SpaceX to disable Starlink for your country? Or that Russia decides to invade yet another country and disables internet satellites [1]?

And it doesn't have to be politically related either, say that a natural disaster in your area takes out everything smarter than a toaster for days if not weeks [2].

> If your BIOS or bootloader is compromised then so is your OS.

Well, that's the point of the TPM design and Secure Boot: that is no longer true. The OS can verify everything executed prior to its startup back to a trusted root. You'd need 0-day exploits; while these exist, including unpatchable hardware issues (iOS checkm8 [3]), they are incredibly rare and expensive.

[1] https://en.wikipedia.org/wiki/Viasat_hack

[2] https://www.telekom.com/de/blog/netz/artikel/lost-place-und-...

[3] https://theapplewiki.com/wiki/Checkm8_Exploit


> Say, your country's leader says something that makes the US President cry - who's to say he doesn't order SpaceX to disable Starlink for your country?

Then you tether to your phone or visit the local library or coffee shop and use the WiFi, or call into the system using an acoustic coupler on an analog phone line or find a radio or build a telegraph or stand on a tall hill and use flag semaphore in your country that has zero cell towers or libraries, because you only have to transfer a few hundred bytes of protocol overhead and 32 bytes of actual data.

At which point you could unlock your laptop, assuming it wasn't already on when you lost internet, but it still wouldn't have internet.

> The OS can verify everything being executed prior to its startup back to a trusted root.

Code that asks for the hashes and verifies them can do that, but that part of your OS was replaced with "return true;" by the attacker's compromised firmware.


The boot verification code wasn't replaced, because it sits in the encrypted partition.

That's premised on the attacker never having write access to the encrypted partition, which is the thing that storing the FDE key on a remote system or removable media does better than a TPM. If the key is in a TPM, they can extract it using a TPM vulnerability or specialized equipment. Or they can boot up the system and let it unlock the partition by running the original signed boot chain, giving the attacker the opportunity to compromise the now-running OS using DMA attacks, cold-boot attacks, etc. Or they can stick it in a drawer, without network access to receive updates, until someone publishes a relevant vulnerability in the version of the OS that was on it when it was stolen.

Notice that if they can modify/replace the device without you noticing then they can leave you one that displays the same unlock screen as the original but sends any credentials you enter to the attacker. Once they've had physical access to the device you can't trust it. The main advantage of FDE is that they can't read what was on a powered off device they blatantly steal, and then the last thing you want is for the FDE key to be somewhere on the device that they could potentially extract instead of on a remote system or removable media that they don't have access to.


they said network, not internet :)

There is no real advantage of a central signing authority. If you use Debian the packages are signed by Debian, if you use Arch they're signed by Arch, etc. And then if one of them gets compromised, the scope of compromise is correspondingly limited.

You also have the verification happening in the right place. The person who maintains the Arch curl package knows where they got it and what changes they made to it. What does some central signing authority know? That the Arch guy sent them some code they don't have the resources to audit? But then you have two different ways to get pwned, because you get signed malicious code if a compromised maintainer sends it to the central authority to be signed, or if the central authority gets compromised and signs whatever they want.


All PKI topologies have tradeoffs. The main benefit to a centralized certification/signing authority is that you don't have to delegate the complexity of trust to peers in the system: a peer knows that a signature is valid because it can chain it back to a pre-established root of trust, rather than having to establish a new degree of trust in a previously unknown party.

The downside to a centralized authority is that they're a single point of failure. PKIs like the Web PKI mediate this by having multiple central authorities (each issuing CA) and forcing them to engage in cryptographically verifiable auditability schemes that keep them honest (certificate transparency).

It's worth noting that the kind of "small trusted keyring" topology used by Debian, Arch, etc. is a form of centralized signing. It's just an ad-hoc one.


> a peer knows that a signature is valid because it can chain it back to a pre-established root of trust, rather than having to establish a new degree of trust in a previously unknown party.

So the apt binary on your system comes with the public keys of the Debian packagers and then verifies that packages are signed by them, or by someone else whose keys you've chosen to add for a third party repository. They are the pre-established root of trust. What is obtained by further centralization? It's just useless indirection; all they can do is certify the packages the Debian maintainers submit, which is the same thing that happens when they sign them directly and include their own keys with the package management system instead of the central authority's, except that now there isn't a central authority to compromise everyone at once or otherwise introduce additional complexity and attack surface.
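As a toy sketch of that chain-of-trust idea: a key shipped with the system vouches for an index, which vouches for packages by hash, with no third party in the loop. This is not apt's actual code; HMAC stands in for the real public-key signature (the Python stdlib has no Ed25519/RSA), so it only illustrates the chaining.

```python
# Toy sketch of an apt-style trust chain: pinned key -> signed index -> package hash.
# HMAC is a stand-in for a real detached signature.
import hashlib, hmac

root_key = b"pinned-at-install-time"          # ships with the OS image

def sha256(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

# Repository side: build the index listing per-package hashes, then "sign" it.
pkg_blob = b"\x7fELF...curl-8.0"              # stand-in for the package contents
index = f"curl {sha256(pkg_blob)}".encode()
index_sig = hmac.new(root_key, index, hashlib.sha256).digest()

# Client side: verify the index against the pinned key, then the package
# against the index -- no central authority involved.
def verify(index: bytes, sig: bytes, name: str, blob: bytes) -> bool:
    if not hmac.compare_digest(hmac.new(root_key, index, hashlib.sha256).digest(), sig):
        return False                          # index not from the distro
    listed = dict(line.split() for line in index.decode().splitlines())
    return listed.get(name) == sha256(blob)   # package matches the index

assert verify(index, index_sig, "curl", pkg_blob)
assert not verify(index, index_sig, "curl", b"tampered")
```

Adding a third-party repo is just pinning another root key next to the first; nothing requires the roots to share an ancestor.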

> PKIs like the Web PKI mediate this by having multiple central authorities (each issuing CA) and forcing them to engage in cryptographically verifiable auditability schemes that keep them honest (certificate transparency).

Web PKI is the worst-of-both-worlds omnishambles. You have multiple independent single points of failure: compromising any one of them allows you to sign anything. Its only redeeming quality is that the CAs have to compete with each other, and CAA records nominally allow you to exclude CAs you don't use from issuing certificates for your own domain. But end users can't exclude CAs they don't trust themselves, most domain owners don't even use CAA records, and a compromised CA could ignore the CAA record and issue a certificate for any domain regardless.
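For reference, CAA records (RFC 8659) look like this in a zone file; the domain and CA names here are placeholders:

```
example.com.  IN  CAA  0 issue "letsencrypt.org"              ; only this CA may issue
example.com.  IN  CAA  0 issuewild ";"                        ; no wildcard certs at all
example.com.  IN  CAA  0 iodef "mailto:security@example.com"  ; report violations here
```

Note that this is policy the CAs themselves are supposed to check at issuance time, which is exactly the weakness: a compromised CA can simply ignore it.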

> It's worth noting that the kind of "small trusted keyring" topology used by Debian, Arch, etc. is a form of centralized signing. It's just an ad-hoc one.

Only it isn't really centralized at all. Each package manager uses its own independent root of trust. The user can not only choose a distribution (apt signed by Debian vs. apt signed by Ubuntu), they can use different package management systems on the same distribution (apt, flatpak, snap, etc.) and can add third party repositories with their own signing keys. One user can use the amdgpu driver which is signed by their distribution and not trust the ones distributed directly by AMD, another can add the vendor's third party repository to get the bleeding edge ones.

This works extremely well. There are plenty of large trustworthy repositories like the official ones of the major distributions for grandma to feel safe in using, but no one is required to trust any specific one nor are people who know what they're doing or have a higher risk tolerance inhibited from using alternate sources or experimental software.


> What is obtained by further centralization?

Nothing, I can’t think of a reason why you would want to centralize further. But that doesn’t mean it isn’t already centralized; the fact that every Debian ISO comes with the keyring baked into it demonstrates the value of centralization.

> Each package manager uses its own independent root of trust.

Yes, each is an independent PKI, each of which is independently centralized. Centralization doesn’t mean one authority; it’s just the way you distribute trust, and it’s the natural (and arguably only meaningful) way to distribute trust in a single-source packaging ecosystem like most Linux distros have.


> Centralization doesn’t mean one authority

That literally is what centralization means:

> cen·tral·i·zation: the concentration of control of an activity or organization under a single authority.

I mean people try to motte and bailey this all the time. You have someone proposing or defending a monopoly by putting it up against the false dichotomy alternative where no party trusts any other party whatsoever and then everyone is required to do everything on their own because no delegation is possible.

There is an alternative which is neither of those things, and it's a competitive market. You have neither a single authority nor the total absence of trust. Instead there are numerous alternatives that each try to maintain a good reputation for themselves, because people can choose freely among them without their choice being coerced by tying it to numerous otherwise-unrelated factors.

Notice how this is importantly different. If you have a PC, you can install Debian or Arch or Windows; if you install Debian, you can install software with apt or flatpak or snap; if you use apt, you can use the official repositories or numerous third party ones. If you have an iPhone, you get iOS and you get Apple's store and everything else is anti-competitively excluded.


> We are truly in the Information Age now, and I suspect a similar thing will play out for the digital realm.

The analogy seems to be backwards though. It would be as if we previously had a scarcity of land and because of that divided it up into private property so markets could maximize crop yield etc. and then someone came up with a way to grow food on asteroids using robots, and that food is only at the 20th percentile of quality but it's far cheaper. Suddenly food becomes much more abundant and the people who had been selling the 20th percentile food for $5 are completely out of the market because the new thing can do that for $0.05, and the people providing the 50th percentile food for $10 are also taking a hit because the price difference between what they're providing and the 20th percentile stuff just doubled.

The existing plantation owners then want to put a stop to this somehow, or find a way to tax it, but arguments like this have a problem:

> Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article?

This was already the status quo as a result of the internet. Newspapers were slowly dying for 20 years before there was ever a ChatGPT, because they had been predicated on the scarcity of printing presses. If you published a story in 1975 it would take 24 hours for relevant competitors to have it in their printed publication and in the meantime it was your exclusive. The customer who wants it today gets it from you. On top of that, there weren't that many competitors covering local news, because how many local outlets are there with a printing press?

Then blogs, Facebook, Reddit and Twitter come and anyone who can set up WordPress can report the news five minutes after you do -- or five hours before, because now everyone has an internet-connected camera in their pocket so the first news of something happening now comes in seconds from whoever happened to be there at the time instead of the next morning after a media company sent a reporter there to cover it.

The biggest problem we have yet to solve from this is how to trust reports from randos. The local paper had a reputation to uphold that you now can't rely on when the first reports are expected to come from people with no previous history of reporting because it's just whoever was there. But that's the same thing AI can't do either -- it's a notorious confabulist.

And it's the media outlets shooting themselves in the foot with this one, because too many of them have gotten so sloppy in the race to be first or to pander to partisans that they're eroding the one advantage they would have been able to keep. Damn fools, to erode the public's trust in their ability to get the facts right when it's the one thing people would otherwise still have to get from them in particular.


This assumes the limiting factor is content generation, not ability to read and verify.

You make the point later in your comment ("randos"), but consider it a minor issue.

The actual limits are verification, and then attention. Verification is always more expensive than generation.

However, people are happy to consume unverified content which suits their needs. This is why you always needed to subsidize newspapers with ads or classifieds.


> This assumes the limiting factor is content generation, not ability to read and verify.

Content generation is the thing copyright applies to. If you want to create a reward system for verification, it's not going to look anything like that.

It mostly looks like things we already have, like laws against pretending you're someone else to trade on their reputation, so that people can build a reputation as trustworthy and make money from subscriptions or ads by being the one people turn to when they want trustworthy information.

> However, people are happy to consume unverified content which suits their needs. This is why you always needed to subsidize newspapers with ads or classifieds.

I suspect the real problem here is the voting thing. When people derive significant value from information they're quite willing to pay for it. Wall St. pays a lot of money for Bloomberg terminals, companies pay to do R&D or market research, individuals often pay for financial software or games and entertainment content etc.

But voting is a collective action problem. Your vote isn't very likely to change the outcome so are you personally going to spend a lot of money to make sure it's informed? For most people the answer is going to be no, so we need something that gives them access to high quality information at minimal cost if we want them to be informed.

Annoyingly one of the common methods of mitigating collective action problems (government funding) has a huge perverse incentive here because the primary thing we want people to be informed about is political issues and official misconduct, so you can't give the incumbent politicians the purse strings for the same reason the First Amendment proscribes them from governing speech.

So you need a way to fund quality reporting the public can access for free. Advertising kind of fit but it never really aligned the incentives. You can often get more views by being entertaining or inflammatory than factual.

The question is basically, who can you get to supply money to fund factual reporting for everyone, whose interest is for it to be accurate rather than biased in favor of the funder's interests? Or, if that's not a thing, whose interests are fairly aligned with those of the general public? Because with that you can use a patronage model, i.e. the content is free to everyone but patrons choose to pay money because they want the work to be done more than they want to not pay.

The obvious answer for "who" is then "the middle class" because they're not so poor they can't pay a few bucks while still consisting of a large diverse group that won't collectively refuse to fund many classes of important reporting. But then we need two things. The first is for the middle class to not get hollowed out, which we're not doing a great job with right now.

And the second is to have a cultural norm where doing this is a thing, i.e. stop teaching people illiterate false dichotomy nonsense where the only two economic camps are "Soviet Communism" in which the government is required to solve everything through central planning and "greed is good" where being altruistic makes you a doofus for not spending all your money on blackjack and cocaine. People rather need to be encouraged to notice that once their basic needs are met, wanting to live in a better world is just as valid a use for free time and disposable income as golfing or designer shoes.


> what if the US would use actual physical gold coins instead of dollars?

The problem here is, what if the demand for dollars increases?

In principle the US would get more gold and mint more currency, but gold is a finite resource. "All the gold ever mined" is around 200,000 metric tons; at ~32k troy ounces per metric ton, that's ~6.4B troy ounces.

In 2022 (just before the recent gold rally) the price was ~$2000 per troy ounce, i.e. "all the gold" was worth ~$13T. Meanwhile the M3 money supply in the same year was ~$20T. What happens if you try to buy $20T worth of gold to mint currency when only $13T worth has ever been mined, and not all of that is even on the market? The answer is that you can't, so instead the result is deflation, which is bad.
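Checking the back-of-envelope arithmetic (2022-ish values, rounded):

```python
# Back-of-envelope check of the gold-vs-money-supply figures above.
TROY_OZ_PER_TONNE = 32_150          # ~32.15k troy ounces per metric ton
gold_mined_tonnes = 200_000         # rough estimate of all gold ever mined
price_per_oz = 2_000                # USD, pre-rally (2022)

total_oz = gold_mined_tonnes * TROY_OZ_PER_TONNE
total_value = total_oz * price_per_oz

print(f"{total_oz / 1e9:.1f}B troy oz")     # ~6.4B troy oz
print(f"${total_value / 1e12:.1f}T")        # ~$12.9T, i.e. roughly $13T
# vs. ~$20T of broad money: you can't back the money supply 1:1 with gold.
```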

Or to put it a different way, what do you think the economic effect of the recent gold rally would be for a country whose currency was still pegged to gold? It just got way cheaper to import foreign products than buy domestic ones, and way more expensive for foreign countries to buy your exports, so how's the unemployment rate looking? The amount everyone owes on their mortgage hasn't changed but the nominal value of their houses just got cut in half so now they've lost their jobs and are underwater. What happens when they start to default and foreclosures don't allow the banks to recover the principal?


> The problem here is, what if the demand for dollars increases?

The price for gold and dollar would rise, until the worth of dollars is as high as the demand is?


> The price for gold and dollar would rise, until the worth of dollars is as high as the demand is?

The name for that is deflation.


To explain why de/inflation is bad:

The primary function of money is its trade value: to "lubricate" the real economy and let goods and services flow. When its value is unstable, people are inclined not to spend or not to accept that currency, which impedes that free flow and in severe cases harms the economy.

Crypto'currencies' have the same problem. By nature, they are not currencies but investments, for which instability is required. No crypto bro would hype a stable 'currency', because there would be no pumping. Arbitrage trades are considered to be for fools or insiders.


> Crypto'currencies' have the same problem.

Bitcoin has the same problem. There is no inherent reason you can't have a cryptocurrency where there is no maximum number of coins to ever be mined and instead the limit is that mining them requires a fixed amount of computation.

That would give you the characteristics you want from a medium of exchange, because there is a rate limit on how much can be created (doing so requires e.g. electricity). Then the value is relatively stable: if you accept it as payment on Monday it will still be worth around the same amount on Friday. The long-term result is a slow reduction in value on multi-year timescales as compute gets cheaper, so you don't get the speculation that results in high volatility, and it doesn't strongly compete with real economic activity for investment resources.

The argument you'll get from goldbugs and whatever is that nobody would want a currency which is inherently inflationary like that, but that's clearly contrary to evidence. Most government currencies are inflationary, even on purpose, and it doesn't matter as long as the rate of inflation isn't so high that people holding it transiently for use as a medium of exchange are losing a significant amount in that short period of time. Especially when the rate of inflation is predictable (the rate at which computers get faster is reasonably consistent) so that anyone entering into a long-term contract denominated in that currency can reasonably predict its future value on the delivery date. Or people could just use it as a medium of exchange and denominate their contracts in something else.
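The difference between the two issuance schedules can be sketched in a few lines. The numbers here are illustrative, not Bitcoin's real parameters:

```python
# Contrast between a capped, halving issuance schedule (Bitcoin-style)
# and the fixed-work issuance proposed above. Illustrative numbers only.
def capped_supply(years, initial_per_year=100.0, halving_every=4):
    """Bitcoin-style: issuance halves periodically, total supply asymptotes."""
    total = 0.0
    for y in range(years):
        total += initial_per_year / (2 ** (y // halving_every))
    return total

def fixed_work_supply(years, per_year=100.0):
    """Proposed: a fixed amount of computation mints a fixed amount, forever."""
    return per_year * years

# The capped supply is bounded (geometric series), which invites hoarding
# and speculation; the fixed-work supply grows linearly, so the currency
# inflates slowly and predictably instead of appreciating.
assert capped_supply(100) < 2 * 100.0 * 4      # bounded above by the series sum
assert fixed_work_supply(100) == 100.0 * 100   # unbounded, linear
```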


Programs adding entries to the hosts file is still pretty normal, e.g. something that uses a local webserver as its UI and wants you to be able to access it by name even if you don't have an internet connection, or may be stuck behind a DNS server that mangles entries in the public DNS that resolve to localhost.

Programs like that should just be shipped with good documentation. And applications built to be used by normies should almost certainly never be built that way in the first place.

That seems like a pretty reasonable way to write a small cross-platform application which is intended to be a background service to begin with. You only have to create the UI once without needing to link in a heavy cross-platform UI framework and can then just put an HTTP shortcut to the local name in the start menu or equivalent. Normies can easily figure that out specifically because you're not telling them to read documentation to manually edit their hosts file.

There are also other reasons to do it, like if you want a device on the local network to be accessible via HTTPS. Getting the certificate these days is pretty easy (have the device generate a random hostname for itself and get a real certificate by having the developer's servers do a DNS challenge for that hostname under one of the developer's public domain names), but now you need the local client device to resolve the name to the LAN IP address even if the local DNS refuses to resolve to those addresses or you don't want the LAN IP leaked into the public DNS.
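A minimal sketch of what that hosts-file entry management looks like in practice. The hostname, the `devices.example.com` domain, and the marker comment are hypothetical; the demo operates on a string rather than the real hosts file:

```python
# Sketch of the hosts-file approach described above: map a per-device
# hostname (under a hypothetical developer-controlled domain) to its LAN
# address so the browser can reach it over HTTPS even when the local
# resolver refuses to return RFC 1918 answers.
import sys

HOSTS_PATH = (r"C:\Windows\System32\drivers\etc\hosts"
              if sys.platform == "win32" else "/etc/hosts")
MARKER = "# added by exampled"   # lets the uninstaller find its own lines

def with_device_entry(hosts_text: str, ip: str, name: str) -> str:
    """Return hosts_text with exactly one managed entry, replacing any old one."""
    kept = [line for line in hosts_text.splitlines() if MARKER not in line]
    kept.append(f"{ip}\t{name}\t{MARKER}")
    return "\n".join(kept) + "\n"

# Demo against an in-memory copy rather than the real hosts file.
original = "127.0.0.1\tlocalhost\n"
updated = with_device_entry(original, "192.168.1.50", "a1b2c3.devices.example.com")
print(updated)
```

The marker comment is the important design choice: it lets the app update or remove only its own entries on reconfiguration or uninstall without disturbing the rest of the file.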

Or the app being installed is some kind of VPN that only provides access to a fixed set of devices and then you want some names to resolve to those VPN addresses, which the app can update if you change the VPN configuration.


> If people truly want something and it can be done profitably, just start a company and do it yourselves.

There is a specific problem with last mile services: It costs approximately the same amount to install fiber down every street whether you have 5% of the customers or 95%.

So you have an incumbent with no competitors and therefore no incentive to invest in infrastructure instead of just charging the monopoly price for the existing bad service forever. If no one new enters the market, that never changes.

However, if there is a new entrant that installs fiber, the incumbent has to do the same thing or they're going to lose all their customers. So then they do it.

Recall that it costs the same to do that regardless of what percent of the customers you have, but they currently have 100% of the customers. Now no matter what price you charge, if it's enough to recover your costs then it's enough to recover their costs, so they just match your price. Then you're offering the same service at the same price, so there is no benefit for anyone to switch to your service now that they're offering the same thing, and inertia then allows them to keep the majority of the customers. Which means you're now in a price war where you'll be the one to go out of business first, because customers will stay with them by default when you both charge the same price. And since this result is predictable, it's hard to get anyone to invest in a company destined to be bankrupted by the incumbent.

Which means that if the customers want someone to compete with the incumbent, they have to invest in it themselves. At which point going bankrupt by forcing the incumbent to install fiber is actually a decent ROI, because you pay the money and then you get fiber. Furthermore, you can even choose to not go bankrupt, by making the basic fiber service "free" (i.e. paid for through local taxes), which then bankrupts the incumbent and prevents the local residents from having to pay the cost of building two fiber networks instead of just one.


There is an elegant way to solve this. Mandate that whoever installs the fiber lets other companies run their ISP on top of it (with a small but reasonable cut of the profit, presumably). I believe this already happens (mandated or not) for mobile phone networks in the form of MVNOs.

And here in Sweden we have the same for fiber. I don't think it is mandated here, since not every place has multiple options like that, but many do. If you have municipality-owned fiber (stadsnät) it always works like that I believe; often you have a choice among 15 or so different ISPs.


why would we do that? not everything has to skim profits to a certain group of people just because they exist. they can use magical competition and build it if they want a piece.

if an area has been waiting for… (what would it be now? around 30 years since the internet took off?) so these companies had 3 decades to build out and have refused, if we the tax payers step in and we pay for it, why should we let them in? they have refused to do anything for literal decades… even worse, many of these companies took billions in subsidies and still did nothing. they’ve refused to be good boot strappin capitalists, for decades.

(i want to reiterate what i said above, i believe competition can often work really really well. but if we dont understand by now that it fails sometimes too, we're not seeing clearly.)

think about how long that is, like some people become grandparents at around 35. someone born in the windows 95 days might have a grandkid and the poor sap still wont be able to get fiber. even in tons of urban and suburban areas.

some of these same ceos have gone on about how perfect the marketplace is, how awful taxes are, how magical the marketplace is… decades later if we have to build it, why should they get a piece?


The physical cable that goes to every house is a natural monopoly. Really it's even more like the conduit the cable is installed in. Doing that part more than once is both fairly inefficient and tends to market failure.

The rest of the service isn't. Transit is a fairly competitive market. You may also have providers willing to use more expensive terminating equipment and then offer higher-than-gigabit speeds on the same piece of fiber. You want the competitive market for every aspect of the system where it can work and to keep the monopoly as narrow as possible.

Notice that the point isn't to let just Comcast use the municipal fiber and then get ~100% of the customers again, it's to let this happen with fiber to the home:

https://en.wikipedia.org/wiki/List_of_mobile_virtual_network...


Having the municipality run the whole thing would be even better sure. I'm not sure why we do that mix here in Sweden, but it worked out OK for us I think.

Also, wouldn't those subsidies come with a legally enforceable requirement to actually build out infrastructure? If not, I think that is where you went wrong.


im saying we shouldnt give them subsidies at all. if they cant make it work in the marketplace, if they arent up to the task, then the competitive marketplace is a failure in that instance. and thats ok.

no subsidies. if they cant do it, fine, we'll do it and we'll provide cheaper than they ever would have. and in the case of fiber, we know this is the case. there are plenty of municipally owned fiber areas that are solid and cheap af.

its ok to admit that the market doesnt always work. often, absofuckinlutely. always? not at all.

a lack of subsidies would make it obvious where those failures exist so we can just do it ourselves (the spooky government) for cheaper. tell them "you had your chance" and move on with our day.


There are two options here:

1) You can have an encrypted connection between two jurisdictions that have different laws, but then anyone can route around censorship because you don't know if they're discussing geopolitics or distributing DeCSS.

2) You can't have an encrypted connection between two jurisdictions that have different laws, which is >99% of all connections because even different cities have different laws, which is an Orwellian panopticon and the destruction of all privacy.

I'm going to have to insist we stick with the first one.


I am fine with encryption, but there should be a legal process that can stop the violation of laws such as by disconnecting nodes that violate laws or preventing linking to nodes that violate laws.

I think you're missing the concept here that laws change as a packet travels from one switch to another, not to mention what happens after it goes under the ocean.

Are you prepared to be held accountable for breaking the laws of repressive countries that sentence people to death for leaving a religion or insulting authority?

I assume not, but then it's an arbitrary game of whose laws and when. The only logical continuation would be if we had a standard of law worldwide, but that's a separate problem in itself and not anywhere near reality today.


Suppose there is a shared server outside your jurisdiction which is hosting a wide variety of content, none of which is a violation of the law in its jurisdiction, but 2% of which is a violation of the law in yours. Or one which isn't hosting any content at all but also isn't in a jurisdiction that does the same censorship as yours, so people can use the connection as a VPN.

If people in your jurisdiction can make a secure connection to it, e.g. to get the 98% of the content they have which is lawful in your jurisdiction, then they can also get the content you were trying to ban because you can't tell which one they're doing. Preventing this is all or nothing: Either they can connect to the server that isn't subject to your laws, or they can't. And the latter is heinous and tyrannical.


> It's lawful if you have a good faith belief that it's a circumvention tool.

Is it? Isn't Section 512 the takedown section that applies to infringing works (e.g. notices require "Identification of the copyrighted work claimed to have been infringed", 512(c)(3)(A)(ii)) and Section 1201 the separate anti-circumvention section which has government-imposed criminal penalties but no private takedown provision?


> The crucially important subtlety here is that Apple requiring developers to use the App Store doesn't leverage an existing monopoly (like what Microsoft had with Windows).

Copyright (e.g. over iOS) and patents (e.g. over iPhone hardware) are explicitly government-granted monopolies. Having that monopoly is allowed on purpose, but that isn't the same as it not existing, and having a government-granted monopoly and leveraging it into another market are two quite distinct things.

> Compare the games console market.

Okay, all of the consoles that require you to sell through their stores shouldn't be able to do that either.

> but they're approaching the sort of market dominance where it might soon be illegal for them (and them alone) to do that in some markets.

Wait, your theory is that a console with ~50% market share has market dominance but Apple with ~60% of US phones doesn't?


There’s no such thing as “having a monopoly on iPhone” in law. You have to have a monopoly in a market, of which iPhone is part of the “smartphone” market. It is not a monopoly in the smartphone market, to the best of my knowledge.

> You have to have a monopoly in a market, of which iPhone is part of the “smartphone” market.

Products and markets are not a one to one mapping. For example, if you sell low-background steel, that's part of the broader "steel" market because anyone who needs ordinary steel could buy it from you and use it for the same purposes as ordinary steel. But low-background steel is also its own market, because the people who need that can't use ordinary steel. Likewise for sellers of products with higher purity levels, products that satisfy particular standards or regulatory requirements, etc. It's only the same market if it's the same thing. Clorox bleach is the same as other bleach; Microsoft Windows is not the same as MacOS.

And iOS is not the same as Android. I mean this really isn't that hard: Are they substitutes for each other? If you have a GE washing machine, can you use any brand of bleach? You can, so they're in the same market. If you have an app that exists for iOS and not Android, can you use an Android device? No, so they're not in the same market. Likewise, if you've written a mobile app and need to distribute it to your customers who have iOS devices, can you use Google Play? Again no, which is what makes them different markets. They're not substitutes, any more than a retailer in Texas is a substitute for a retailer in California when you have customers in both states -- or only have customers in California.

