People bring this up regularly, but I don't think it's that relevant. Studies regularly show that campaign contributions actually have very low influence on elections.
Trump notably had much smaller campaign budgets than his opponents in both of his winning elections, and that's not even counting the brazen fraud he used to pay himself out of the money.
Fundamentally, it's presidential democracy that is flawed. We have a very powerful high office, and if enough people willingly vote in a corrupt president, there really aren't many checks against the damage they can do.
Yes, it's possible to win with less money than your opponent, but why would anyone want to take that risk?
The problem with money in politics is not that money guarantees a win, but that the presence of large donations distorts the entire incentive structure of campaigning and governing. Courting big donations means spending time with big donors (who expect access in exchange for their money), and when it comes time to govern, studies have shown that campaign contributions and lobbying are dramatically more influential on what gets proposed and passed than the preferences of the general public.
Focusing on the problems with presidential campaigns re: money in politics is missing the forest for the trees: All politicians have limited time to spend between campaigning and governing, and if they're constantly raising money the governing gets delegated to lobbyists.
(This is why people are always so shocked when politicians who don't accept corporate PAC contributions have drastically different priorities than those who do. Of course they do! They don't have to spend all their time hanging out with corporate lobbyists!)
This doesn't really speak to Citizens United though. The nature of Dark Money is that no one knows where it comes from, so politicians cozying up to their donors is not actually the particular concern here.
(Also, there has been the opposite trend: more money than ever comes from private donations from billionaires and other wealthy individuals.)
> if enough people willingly vote in a corrupt president
Why do people do this though? Maybe it's inevitable, but I think there was a lot of pent up frustration with the government that led a lot of people to just say "fuck it". Not really excusing it (especially for his second term), but I feel like we're reaping years and years of a dysfunctional and ineffectual congress. Not that that's an especially easy problem to solve either.
I think this also explains a lot of the frustration with SCOTUS. In theory, SCOTUS is supposed to just interpret and flesh out the policies decided on by congress. In practice, congress doesn't really do anything, and people started depending on SCOTUS's ability and willingness to make far-reaching and impactful decisions. Now a more conservative SCOTUS isn't doing that.
It's worth noting that an ineffective and gridlocked congress is specifically a problem of presidential-style democracies. Parliamentary systems with a prime minister have some of their own shortcomings (notably a weak executive), but the government is actually controlled by the legislature.
Countries that follow the presidential model regularly succumb to strong man type leaders. Ironically, in the modern era when the US had a hand in helping other countries establish their governments, we specifically helped them establish parliaments.
Citizens United affected far more than campaign contributions. Non-campaign political spending (aka "outside spending") has increased nearly eightfold and shows no signs of slowing down.
The top distro is Arch - implying that the Steam Deck userbase is moving the needle.
Linus has said on a few occasions that the main thing holding back user adoption for desktop is a single distro with a clear focus. What Android did for mobile.
It's clear that SteamOS could be "that guy" if Valve wants it to be.
No, the growth in Linux in the Steam Hardware Survey over the last two years has little to do with the Steam Deck. When the Deck was first released it had a big impact, topping out at 45% of all Linux installs in May 2024, but since then the growth has been due to other Linux distros, bringing SteamOS down to 25% of Linux installs today.
The top distro is SteamOS, which is based on Arch but does not appear as such in the stats. The Arch appearing in the stats has to be CachyOS and other gaming distros, as well as actual Arch users.
But yes, SteamOS makes up ~25% of the users. Though, thinking about it, do they collect per account or per device? I do have a Steam Deck, but mainly play on the big desktop running Debian, so I'm curious whether I'm appearing as one or two entries in that stat.
It's not just the single distro, but single Desktop Environment upon which app and ecosystem developers will standardise. I'm glad that the latest generation of gaming distros are converging on Plasma.
Can Linus bless a particular desktop Linux distro where he can at least veto unreasonable decisions? So when someone says "I'm switching to Linux," it means that one.
Steam Deck is currently ~25% of those 5% Linux users. Good chunk but not a majority. You can estimate it in two different ways which produce similar results: filtering to Linux only looking at OS list "SteamOS Holo 64 bit" is 24.48% and in the GPU list "AMD Custom GPU 0405"+"RADV VANGOGH" add up to 23.72%.
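The two estimation methods above can be cross-checked with a few lines of arithmetic. A quick sketch, using the survey figures quoted in this thread (not live data) and treating the ~5% overall Linux share as a rough approximation:

```python
# Two independent ways to estimate the Steam Deck's share of Linux users
# in the Steam Hardware Survey, using the figures quoted in this thread.

# Method 1: Linux-filtered OS list
steamos_share = 24.48        # "SteamOS Holo 64 bit", % of Linux users

# Method 2: Linux-filtered GPU list (the Deck's APU shows up under two names)
deck_gpu_share = 23.72       # "AMD Custom GPU 0405" + "RADV VANGOGH"

linux_share_of_steam = 5.0   # rough % of all Steam users on Linux

for label, pct in [("OS list", steamos_share), ("GPU list", deck_gpu_share)]:
    overall = pct * linux_share_of_steam / 100
    print(f"{label}: ~{pct:.1f}% of Linux users, ~{overall:.2f}% of all users")
```

Both filters land within a point of each other (roughly 1.2% of all surveyed users), which is why either one gives a reasonable estimate of the Deck's footprint.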
I had always assumed there was a methodological failure that kept getting replicated. There were enough articles like "scientists find microplastics at bottom of peat bog" that really made me dubious of the claims.
"Strong claims require strong evidence". Somehow it happens pretty regularly in academia that only one method becomes acceptable and any conflicting results get herded out on technical grounds.
I think it's partly deliberate and partly ignorance of chemistry. We are simply finding that biological matter is nearly indistinguishable from man-made microplastics.
- You check their work and they made some mistakes, but it's good enough to use
- You ultimately don't know if they're doing the best at their job but you have regular performance check-ins to be safe
As ICs we can complain all we want about the quality of AI, but as far as your manager is concerned, you using AI is not that much different from them having an employee.
This is them performing a deep scouring across all of their marketing databases for your information. Junk mail, phone, etc. (Letting you include multiple phone numbers is a nice touch). This is actually a pretty big deal since these systems probably don't even talk to each other.
You even have the option to "retrieve" the data on your email address.
Never in a million years would our org let us put this much effort into this without the threat of lawsuits.
It's interesting hearing it from that perspective. I hadn't considered that this could be used across all of the systems they have in place.
To add to this, I did a quick search on archive.org and it seems like they’ve had this site in place since 2006, albeit in different variations. Perhaps that is why they have a “global” opt-out across all of their marketing databases? Something that has just always been in place.
I mean, how many companies offer to look up your mailing address and remove you from their junk mail?
I suspect in practice this search is not going to be perfect. There are so many variations that could exist on an address.
That doesn't even begin to deal with potential "ghost" sources. A database backup. An integration with a product database, etc.
I would honestly not be surprised if there were a human reviewer somewhere looking over these requests. (At our company, all GDPR requests are STILL handled manually.)
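To make the "so many variations on an address" point concrete, here's a toy normalizer. This is purely illustrative code, not how any real mailing database works (real systems use USPS/CASS-style address standardization); it just shows how several surface forms collapse to one canonical string, and how easily even that approach still misses typos:

```python
import re

# Toy address normalizer -- illustrates why exact string matching on
# addresses fails, and why naive normalization only goes so far.
ABBREV = {
    "street": "st", "avenue": "ave", "boulevard": "blvd",
    "apartment": "apt", "suite": "ste",
    "north": "n", "south": "s", "east": "e", "west": "w",
}

def normalize(addr: str) -> str:
    addr = addr.lower().replace("#", " apt ")   # "#4" -> "apt 4"
    addr = re.sub(r"[.,]", " ", addr)           # drop punctuation
    return " ".join(ABBREV.get(w, w) for w in addr.split())

variants = [
    "123 North Main Street, Apt. 4",
    "123 N Main St #4",
    "123 n. main street apartment 4",
]
# All three collapse to the same canonical form...
assert len({normalize(v) for v in variants}) == 1

# ...but a simple typo still slips through unmatched:
assert normalize("123 N Mian St #4") != normalize(variants[0])
```

Every extra rule (directionals, unit designators, ZIP+4, transposed digits) multiplies the edge cases, which is exactly why these opt-out searches are unlikely to be perfect.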
> I don’t want to depend on something doing the work I earn money with.
> I don’t want to give up my brain and become lazy and not think for myself anymore.
There are a lot of good reasons we should be skeptical of AI and not give up on essential skills. But sometimes I want to shake these people by the shoulders. Do you drive an automatic car? Do you use a microwave? Do you buy food from a grocery store? Do you own power tools?
The entire point of civilization and society is that we are all "addicted" to technology and progress. But the invention of the plow did not, in fact, make us lazier or make us stop using our brains. We just moved on to the next problems. Maybe the Amish have it right and we should just be happy with a certain level of technology. But none of us have "lost" the ability to go backwards if we really wanted to.
You can finally ask a computer to think and solve problems, and it will! People act like this is a brave new world, but this is literally what computers were supposed to be doing for us 50 years ago! If somebody finally came out with a fusion reactor tomorrow I would half expect people to suddenly come out and say "Oh, I don't think I can support this. What about the soul of solar panels? I think cheap electricity is going to make things too easy."
> “I’ve come up with a set of rules that describe our reactions to technologies,” writes Douglas Adams in The Salmon of Doubt.
> 1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
> 2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
> 3. Anything invented after you’re thirty-five is against the natural order of things.
That's probably true to some extent, but I'm not completely on board.
> 1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
Television and calculators were in the world when I was born, but I never viewed them as "natural". TV always seemed to be a way to distract yourself from the world.
> 2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
I was happy to get on board with the WWW, the web browser, and widespread email usage. Those were revolutionary technologies with immense value. On the other hand, I'm still not on board with text messaging, phone scrolling, or social media. If I could, I'd eliminate social media from society.
> 3. Anything invented after you’re thirty-five is against the natural order of things.
I'm over 50 and a strong believer in the value of the LLM. It's a work tool that I can use at work and put away when I'm at home (or not, depending on my mood). It's new and exciting and revolutionary and a move in the right direction for humanity.
I chuckled when I read this. Being 55, I tend to think this is true. But looking back at the things I accepted growing up, even though they seemed normal at the time, I now notice that some of them have had a detrimental effect on society.
So, although age tends to have this effect on how we see the world, and some of it probably isn't worth worrying about, I think part of this awareness carries some wisdom and is trying to protect our species.
You need not stick to any level. Some things that have always been are still bad (slavery is an obvious example, now dated enough to be uncontroversial). Some new things are bad and others good, at any age.
Don't grow up too set in your ways to learn the new, but do grow up fast/young enough to get some cynicism about everything. Now that I'm in my 50s the former is important, but when I was younger the latter was.
The problem is that you're likening fundamentally unlike things. AI isn't like a microwave or an automatic car or a power tool. It does not augment you. As I said elsewhere: AI is not a bicycle for the mind, it's an easy chair. You will lose more than you ever gain.
This is purely a matter of perception. Cooking a meal is a deeply intellectual process. If I buy a meal from a restaurant, yes I am losing a skill. But if making a hollandaise is not a skill I ever need in my life, it's not really a practical loss.
AI is taking problems and putting them in a drawer so we never have to think about it again. Matches de-intellectualized making a fire. A washing machine de-intellectualized doing laundry. These are now solved problems.
Our brainpower spent on them is effectively worth nothing. The only reason we need to learn to make a fire from scratch is for the intellectual satisfaction or for emergency situations. The same reason we would choose to work on the problems that AI can now solve.
It's only a loss if you think the skill and ability you are losing is intrinsically valuable, and the only thing you are going to replace it with is leisure.
>It's only a loss if you think the skill and ability you are losing is intrinsically valuable
What about the skill of learning itself? I would suggest that's one of the most important skills humans have evolved. The more integrated AI becomes in our societies, the more it will automate away potential opportunities for learning. I can foresee a world tightly integrated with AI where people are not only physically sedentary, but mentally as well.
As we progress further into the future, we need more educated people than ever to tackle the exponentially increasing complexities of our society. But AI presents an obstacle that many will never cross due to how convenient it is to skip the messy work of understanding.
Also, this problem is not unique to AI. It existed before the GPTs and Claudes of the world. But it's a problem of scale, and every company on Earth right now is trying to scale AI up as fast as possible.
Here's a practical example: I am using AI to help me with my garden. It's been amazing - it helps me identify plants, identify soil issues, what fertilizer to use and what days to apply it, etc.
What exactly did AI take from me? Spending hours of research on Google and Youtube to glean little incomplete bits and pieces? Calling a yard service?
It's also clearly obvious when AI gives bad or incorrect advice - I am still trying different things and watching for the results.
Coding is an outlier example where AI can just do the work semi-competently without anyone checking it. But I think that speaks more to the nature of coding itself: coding is a means to an end, and for most people not an actual pursuit in itself.
>What exactly did AI take from me? Spending hours of research on Google and Youtube to glean little incomplete bits and pieces? Calling a yard service?
An opportunity for a deeper understanding of gardening? If you spend hours researching on gardening and come away with an incomplete understanding of what you were attempting to do, I'm not sure that's immediately the fault of the research available. It could be that you just didn't do a good job searching for the necessary information.
In this way, AI can be a boon. It helps figure out what you actually want to know in the moment. But I think it would be a step too far to say that a smattering of specific questions can replace the sturdy foundation provided by a typical education--e.g. through apprenticeship, books, etc.
>It's also clearly obvious when AI gives bad or incorrect advice
Is it? Isn't this a __core__ problem that researchers around the world are trying to solve? Also, __how__ could you make such a statement unless you already possessed the knowledge ahead of time to make such a judgment? I think it's hard to know if something is bad advice by looking at just cause and effect. It could be that you just lack the understanding to put the advice into practice.
> It could be that you just didn't do a good job searching for the necessary information.
How can you? The existing resources are terrible.
> But I think it would be a step too far to say that a smattering of specific questions can replace the sturdy foundation provided by a typical education--e.g. through apprenticeship, books, etc.
I am not going to go through a college program for my own garden. And I have books! But unless you can read a ton and perform a small research project, you are not going to know how all of the plants in your specific garden, in your specific region, in your specific weather are going to behave.
The best I could do is hire an expert - but again I am learning less by hiring it out.
> Also, __how__ could you make such a statement unless you already possessed the knowledge ahead of time to make such a judgment?
"Use X to kill the moss". It didn't kill the moss. I will now use AI to find a list of alternative things to try to kill the moss, and learn what works in my garden.
The idea that AI is going to make people stop learning I don't think is borne out in practice. It might make some people stop researching as an activity, though.
> "It's only a loss if you think the skill and ability you are losing is intrinsically valuable..."
I'm fascinated by the AI bros putting hollandaise sauce and making fires on the same level as creating production software. One hopes that it is because they create only very simple software, making the analogy less invalid than it would be for more complex software. If not, the implication is that loss of the reasoning and cognitive ability needed to build foundational software like libraries and frameworks is not important to them.
The only thing that separates homo sapiens from other species is the sapience. Diminishing or atrophying one's own cognitive abilities is the same as climbing down the evolutionary ladder.
I mean, doesn't the fact that people rely on these libraries and frameworks without thinking itself prove the value of intentionally compartmentalizing skills?
No one is arguing that everyone needs to build programs ground up from assembly. So what's the magic difference between using a framework and asking a computer to write out the for-loops for me?
> making a hollandaise is not a skill I ever need in my life
I know you just wanted to poke at the analogy, but if you like hollandaise, it's one of the easiest and most rewarding sauces to make at home! Restaurant hollandaise is usually terrible.
(Though it's not as easy as a béchamel, and yet I still see people buy jarred alfredo sauces. You can literally make an amazing alfredo sauce with pantry ingredients in less time than it takes to boil the noodles! Why would anyone buy an alfredo sauce!?)
Although this is more or less my point. If people are willing to give up these incredibly high-reward, low-effort skills, how much more uphill is the battle to get people to code and process data?
Now you're getting it! The modern way of life which prioritizes convenience and production destroys human connection. Making sauce is pointless; let's go one step further and make every other thing you might do equally pointless. Welcome to the hellscape! It's surprisingly comfortable.
The other extreme is also a hellscape. Work and suffering is the only thing of value. Let's make pyramids to bring people together and show off our collective wealth.
Again, writing replacing memorization is not a good 1:1 comparison to AI replacing technical understanding. Someone still needs to understand what is written and act upon that knowledge. That requires skill and experience in the domain they're working within.
However, a person using an AI does not need to understand the underlying problem to get results. A person can ask Claude Code to write them a web app dashboard without having ever learned JS/CSS/HTML. It does not require them to have skills within a domain.
Also, we need to be honest with ourselves. Human brains did not evolve for the instant gratification of modern technology. We've already seen what technology has done to our attention spans. I am concerned over what further reliance on technology, particularly AI, will do to our brains.
> However, a person using an AI does not need to understand the underlying problem to get results. A person can ask Claude Code to write them a web app dashboard without having ever learned JS/CSS/HTML. It does not require them to have skills within a domain.
This perspective is funny to me because of how much the modern web is already built around web developers refusing to use CSS and PHP. The giving up of the skills happened before the automation.
Dubious. AI psychosis is the opposite: it's about being empowered to explore ideas much further, but with a maladaptive tool designed to be an appeaser by reinforcement learning.
> Do you drive an automatic car? Do you use a microwave? Do you buy food from a grocery store? Do you own power tools?
None of these things allow you to turn your brain off while the machine does the work.
I still have to DRIVE the car and all the thinking that goes with that. It's not a robotaxi.
I still have to acquire and prep the food I am microwaving. It's not a replicator.
I still have to know what I want to eat before grocery shopping and prepare the food. It's not a take out restaurant.
I still have to know how to use the power tools to carefully shape something into a fine piece of furniture and not a pile of splintered firewood. Power tools can't operate on their own unless aliens are involved (see Maximum Overdrive).
These are better analogies:
Do you take a taxi or public transport? Those let you turn your brain off while someone or something does the driving work.
Do you go to a restaurant where you can pick what you want, turn your brain off and wait for a delicious (or not) meal?
Do you order takeout where you can order what you want from the comfort of your home, turn your brain off and enjoy the meal when it arrives? Then reheat the leftovers in the microwave.
Do you use a fabrication service where you send them a drawing, turn your brain off, and they ship you an assembled thing?
All of your examples involve you sitting and waiting. That doesn't seem like an apt analogy for what AI can do. You don't have to sit there and come up with other things to do while the AI does the work.
When AI works (and technology in general) that's kind of what it's like. You'll never perceive that you are not doing the work anymore because you won't perceive the work.
> All of your examples involve you sitting and waiting. That doesn't seem like an apt analogy for what AI can do.
I just read a blog post or comment (honestly can't keep track of all this AI hype) about someone who literally did just that. They told an AI to build an app, then went out and painted their fence or something, came back, and had an app.
This is what people want from AI. They want take-out software.
> I want to shake these people by the shoulders... Do you use a microwave?
Microwaves aren't doing active problem solving though. It seems what the author is trying to say is that they enjoy problem solving, and they find coding a rewarding and creative experience. Sure, at-home cooks might enjoy zapping a frozen dinner, but the author is a chef who enjoys writing their own recipes and cooking from scratch. AI isn't just the microwave; it's also the chef.
> None of us have "lost" the ability to go backwards if we really wanted
This absolutely isn't true. Using Google Maps quickly makes people poorer at navigation; skills need to be practiced. The author thinks letting AI into their kitchen to cook for them will change them cognitively and make them lazy and lose their skills. And that would be true.
What it sounds like you're getting at but never said is there might be newer skills on the other side that are even more rewarding, which may be true. But if history is any indication, there will be no shortage of folks who like things the old way and want to use their meat brains to provide bespoke goods and services that AI can't.
> The entire point of civilization and society is that we are all "addicted" to technology and progress.
Technology is like much of material reality, in that we can think whatever the hell we like about its various forms, especially so if we’re surrounded by it.
It's not insane. They are correct that that is the point of civilization, which carries information from generation to generation outside the oral tradition in a systematic, organized, reliable way.
The point of civilisation, however loose that idea may be, if it’s anything at all, is determined by people.
Technology exists today in a way that feels like it could be defining its own path in a sense, but much like oral tradition, neither are large enough concepts to describe civilisation.
I love driving a manual transmission. But I also understood why it was so hard for me to find a new Jeep Wrangler with a manual transmission a few years ago.
The automatic transmission gives us more dexterity for... what exactly? Fiddling with the dash, reaching for something in the back seat, texting? The best case human has much more control but the average case seems worse off.
I'd say most automatics give you less direct control over the engine. I always feel like I'm having to tease a gear shift out of the car when I'm driving an automatic. Until very recently, the typical car couldn't see the traffic light changing or the hills ahead so it couldn't possibly change gears as effectively as a competent driver.
I think of myself as very practical: I drive a manual, I fix my own cars, I do my own house projects, I cook my own meals.
Which is part of the reason these anti-AI screeds fall on deaf ears for me. My generation has willingly abandoned all of these legitimately useful hard skills. But there's also nothing preventing you from picking and choosing what you care about.
I'm not actually against manual coding. I just think people need to be honest about why it's valuable.
I don't work on my own car because I believe that everyone should fix their own cars. But I think enough people should be knowledgeable and have these skills in society - if for no other reason than to keep mechanics and automakers and dealerships honest. I am not personally upset if you work on your own used car or take it to your dealership.
I am against the idea that everyone should somehow be against AI coding.
Agreed, this is the aspect of the AI criticism I find strange too. We should want to be targeted in how we use it, just as how a practical fusion reactor wouldn't replace solar in every situation. Not reject it outright.
We should be using these capabilities to allow ourselves to work on harder problems. In science, there are a lot of tasks that require a low, but non-zero amount of intelligence and aren't really the most interesting part of science. Many of these tasks limit how much work can actually be done. Automate them, and you can dramatically increase your capabilities and focus on the actual science work.
> "We should want to be targeted in how we use it ... We should be using these capabilities to allow ourselves to work on harder problems."
Yes, people should do those things but we also know that's not what's going to happen to the average developer or the public in general. We are already seeing AI generated nonsense PRs, AI cheating on homework and interviews, AI generated documents and emails containing hallucinations, etc. that points toward a future where people abdicate reasoning and critical thinking instead.
> I don’t want to give up my brain and become lazy and not think for myself anymore.
My brain gets sharper with working with AI, but then again, it often seems covertly foolish to me, so I argue with it a lot and plan more before working with it.
> Do you drive an automatic car? Do you use a microwave? Do you buy food from a grocery store? Do you own power tools?
Which of these is behind a subscription paywall and owned by another party that would cut off your access immediately?
These comparisons make little sense, which is the problem with comparisons. They are soundbites from enthusiasts who don't know or understand how this technology will actually affect or shape us, but feel entitled enough to misinform the rest of us.
I don't want any machine doing my thinking for me. This is why I am in favor of banning traffic lights. Why should I trust a machine to tell me when it's safe to stop and when it's safe to go? Plus, we could employ police officers to stand in the middle of the intersection and direct traffic, thus contributing further to employment.
> The entire point of civilization and society is that we are all "addicted" to technology and progress.
I'm not addicted in any way to an automatic car. I prefer an automatic car, because it's easier to drive than a manual car. There have been numerous studies already into the problematic nature of AI addiction, and calling it simply "progress" is denuding the experiences of tons of people who have been harmed, up to and including dying, as a result of too much AI use.
> But the invention of the plow did not, in fact, make us lazier or stop using our brain.
No but industrial farming practices are not an unalloyed good either.
> But none of us have "lost" the ability to go backwards if we really wanted.
I mean, we kind of have in a few ways, at least insofar as the AI boom is concerned. I can't have a version of Windows that doesn't have Copilot in it. I can't have Microsoft Office without Copilot. I can't have Photoshop without generative AI features. Like, say what you will about the AI doomsayers, and yes, even this one I think is overstating it a bit? But the AI push is relentless. It's everywhere, in every product, all the time. Last time I was at Home Depot I saw an AI-powered microwave, for fuck's sake.
And, that's not to say there are no problems at which LLMs are good solutions, but it isn't this many. I use Claude to generate code, usually boiler-plate type stuff or to help me solve problems, and it's legitimately quite good. Conversely, generated images and video have always, always looked like absolute shit to me. Generated music is... okay? But as a consumer I barely have a way to choose a non-AI future if that's what I want.
> You can finally ask a computer to think and solve problems, and it will!
Sometimes. Other times it tries for a while and gives up. Other times it makes some shit up that would solve your problem, and Omnissiah be with you if you follow those instructions. Other times you argue with it for 10 goddamn minutes because it doesn't comprehend your instructions.
> If somebody finally came out with a fusion reactor tomorrow I would half expect people to suddenly come out and say "Oh, I don't think I can support this. What about the soul of solar panels? I think cheap electricity is going to make things too easy."
That is flatly ridiculous. LLMs do a lot of interesting things, that I will grant, but they are not the problem solver you're pitching them as, and certainly nothing like a fusion reactor.
This announcement is pretty sad. If you're wondering why Apple is an IT department nightmare, this announcement is more of a confession. Today your corporate MacBook can have ... preinstalled software! And user groups (for the Apple store and iCloud).
Wait, there's more!
> In addition, customers can now set up business email, calendar, and directory services with their own domain name for seamless and elevated communication and collaboration.
Wow, a custom domain name!
> Apple Business enables automated Managed Apple Account creation for new employees through integration with an identity service provider, including Google Workspace, Microsoft Entra ID, and more.
In the year 2026, I can finally start logging into my corporate laptop with my corporate ID. Wow!
Them stapling on the announcement of advertisements for Apple Maps is especially hilarious. I don't think the people managing fleet devices at a corporation are the same people who are interested in setting their location ad strategy. But Apple saw they had two vaguely business-y things at the same time and thought they would really hit it off together.
I have to imagine that the Apple Neo is heavily aimed at volume sales - low level white collar workers and education. These features seem to be hastily assembled to meet the needs of these potential buyers.
Why do people bother with all of this to lock the environment into some kind of corporate nightmare? Why not allow some freedom for the worker? I don't see the appeal; it feels like a claustrophobic cage.
* Preprovisioning - devices have the right certificates and know about your corporate networks. They have the necessary apps and just work.
* Tracking - if a device is lost or stolen, monitor where it is and remotely lock or wipe it
* Monitoring - have a log to audit if someone does something malicious
* Security - reduce the chance of your employees installing malware, spyware, etc. whether by accident or intention
* Locking things down - put gates in the way of bad actions like copying sensitive data into public apps or clouds. Even if you're unable to block everything, attempts to block remind honest employees and provide strong evidence that anyone who proceeds was intentionally violating policy and should be fired.
* Predictability - reducing the number of unknown factors that could cause a person to have issues using their computer. Reminds me of how a secretary I supported was somehow able to install Google Desktop back in the day, and how that caused a massive argument between my boss and theirs when their computer needed to be re-imaged. Most IT-approved programs are known to store user data in known locations on a computer, which makes backups and restorations very easy. Stuff like Google Desktop did not do that, which meant likely breaking someone's workflow in the re-image process.
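To make the list above concrete, here is a toy sketch of the kind of per-device policy audit an MDM service automates. Everything in it — the `Device` fields, the app lists, the rules — is invented for illustration; real platforms (Jamf, Intune, Apple Business Manager) have their own inventory APIs and policy engines.

```python
# Toy per-device policy audit. All fields and rules are hypothetical,
# not any real MDM vendor's API.
from dataclasses import dataclass, field

@dataclass
class Device:
    hostname: str
    disk_encrypted: bool
    installed_apps: set = field(default_factory=set)

REQUIRED_APPS = {"VPN Client"}    # preprovisioning: must be present
FORBIDDEN_APPS = {"LimeWire"}     # locking things down: must be absent

def compliance_issues(device: Device) -> list:
    """Return human-readable policy violations for one device."""
    issues = []
    if not device.disk_encrypted:
        issues.append("disk not encrypted")
    for app in sorted(REQUIRED_APPS - device.installed_apps):
        issues.append(f"missing required app: {app}")
    for app in sorted(FORBIDDEN_APPS & device.installed_apps):
        issues.append(f"forbidden app installed: {app}")
    return issues
```

Run over a whole fleet, a report like this is what backs the "monitoring" and "compliance" bullets: `compliance_issues(Device("mbp-042", False, {"LimeWire"}))` flags three violations, while a clean machine returns an empty list.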
A lot of it is compliance. To get some types of customers you need to pass some security compliance certification or checks, which often have requirements like only giving access to crucial infrastructure when devices are up-to-date, the possibility to remote-disable/erase a device when it is stolen, some kind of anti-virus installed (yeah, I know), etc.
I can understand the underlying reasons; you would be surprised how many employees have bad security hygiene, which becomes an issue when they have access to high-value information, tokens, etc. But since these rules are often somewhat draconian, they tend to have bad side effects (similar to password-expiry rules). E.g. Linux users will often set up ClamAV to fulfill the anti-virus requirement. However, ClamAV parses untrusted data in C code without any sandboxing, so it probably opens a new attack vector (as opposed to Windows Defender, which, as far as I recall, uses sandboxing or a micro-VM to parse untrusted data).
The most clear and obvious use case is for school computers. You do NOT want to provision student devices en masse without protections (for both the students and the district). I imagine this has been a deal breaker that Apple is now addressing.
Even if your corp doesn't want to do full user surveillance, there are still a lot of advantages to group policy. Roll out new software instantly, SSO enforcement, remote troubleshooting, etc.
SOC 2 requires you to ensure all computers have software updates installed. While compliance apps can check every desktop with a monitoring agent, ABM could just do it and enforce it.
ever worked in IT support? letting people customize their environment both increases the amount of support that users require, and increases the difficulty of providing that support.
a laptop in a stock configuration can be swapped out for a new one when it breaks. a laptop that has three years of accumulated customizations installed on it means that the employee wants their laptop back when it breaks, and they want it fixed ASAP.
when you're supporting a user who doesn't know how to type a URL into their web browser, it's a whole lot easier if you don't have to start that call with asking what web browser they're using.
Regulatory compliance. If you want to sell your product to the UK, for example, you have to comply (see: UK Cyber Essentials). The more you try to expand your market, the more regulations you will run into that are solved with spyware and locked-down computers.
Because corporations like to control their peons. I'm sure your work laptop is laden with the same kind of corporate bullshit, it's just that MS Exchange stopped being a hot topic like 25 years ago.
It's an announcement that they're providing integrated first-party services for something that until now has largely relied on third-party solutions.
Not knowing about the existing solutions to provision/manage Macs is one thing. Not knowing about them and claiming they're inferior because of what you didn't know is just bizarre.
I don't know what it is about the type of people who end up doing PC support, but an irrational dislike of Macs seems to be systemic. I worked in an IT department when Novell was still a thing, and a senior guy with years of Unix experience would make jokes about "toy operating system" while also alternating between screaming at and practically fellating Windows XP.
Apple will probably deliver the best unified AI experience for productivity, digging into Microsoft's domain (whose stock has been seriously selling off). Your workers will want iOS, and right now it's perfect timing to sell LLM subs. This is a very aggressive and opportunistic move.
> Iran's parliament speaker denies talks took place with US officials
> Over the last hour or so, several news outlets have reported that Donald Trump's special envoy Steve Witkoff and Jared Kushner have been negotiating with Iran's parliament speaker, Mohammad-Bagher Ghalibaf, citing sources.
> Now, an X account attributed to Mohammad-Bagher Ghalibaf posts that no negotiations have taken place with the US.
> It adds that "fake news" has been used to "manipulate" the oil markets, and that the Iranian people "demand complete and remorseful punishment of the aggressors".
People bring this up regularly, but I don't think it's that relevant. Studies regularly show that campaign contributions actually have very low influence on elections.
Trump notably had much smaller campaign budgets than his opponents in both winning elections, not even including the massive amounts of brazen fraud he used to pay himself with the money.
Fundamentally, it's presidential democracy that is flawed. We have a very powerful high office, and if enough people want to willingly vote in a corrupt president, there really aren't many checks on the damage that they can do.