Given the framing of the article, I can understand where the opposite direction comment is coming from. The author also sends mixed signals by simultaneously suggesting that the "laziness" of the programmer and of the code are virtues. Yet I don't think they are ignoring value generation. Rather, I think they are suggesting that the value lies in the quality of the code rather than in the problem being solved. That seems to be an attitude held by many developers who are interested in the pursuit of programming rather than the end product.
An MMU is pretty much necessary for robust multitasking. Without one, you are at the whim of how well software behaves, and it is harder for developers to create well-behaved software in the first place. That also assumes good intentions from programmers, since an MMU is what makes memory protection (and thus security) possible.
While emulating an FPU results in a huge performance penalty, it is only required in certain domains. In the world of IBM PCs, it was also possible to upgrade your system with an FPU after the fact. I don't recall a comparable upgrade path for the MMU on IBM compatibles. While I have seen socketed MMUs on other systems, I don't know whether they were intended as upgrade options.
And that is just a fraction of what WinUtil does ...
It has been a while since I booted Windows, but I am fairly certain you can still circumvent the OneDrive nonsense (which is what the article is about) by setting up a local account. There are likely simpler ways, since Windows still has the concept of local file storage. That doesn't excuse the dark patterns, but it does highlight that we sometimes overcomplicate solutions.
My first LG ended up being my last LG because it was defective. Screen issues started popping up about a year after purchase. Sending it in for repairs didn't accomplish anything. They probably returned the same phone to me untouched, since the screen issues would go away if the phone wasn't used for a few weeks (but they would always come back). Other people had similar issues, yet LG never acknowledged the problem. It was not confidence inspiring.
In terms of what the phone delivered, both in software and hardware, it was a wonderful phone; I just lacked the confidence in the brand to buy another.
In contrast, I have never had a defective phone from another company. Heck, I've only had two phones that ended up with cracked screens (and those were clearly my fault).
I had a similar experience. I had an LG V20 and I loved so much about it, especially the extra display at the top for notifications and such, and the really incredible DAC. But the glass, both on the screen and the camera on the back, broke 4 times over the two years I owned it. It's still the only phone I've ever broken glass on.
I will forever remember the V20.
I was at the mall shooting the shit with some friends in late 2016 waiting for the bus to bring us back to campus. We went to the Verizon store to look at the hottest new phones none of us could afford.
There was a V20, and someone had changed the little top screen to display the static text "dicks out for harambe"
I still have a photo of it kicking around here somewhere.
Which model was it? Curious, since I know the older LG models were not the best, but it felt to me that their last few models were good enough. I am a power user for phones too; I use Discord, Slack, etc.
If I recall correctly, the G4. The issue was definitely more memorable than the model name: the image on the right half of the screen would gradually compress vertically.
Calculators are deterministic, but they are not necessarily correct. Consider 32-bit integer arithmetic:
30000000 * 1000 / 1000
30000000 / 1000 * 1000
Mathematically, they are identical, and computationally, each result is deterministic. Yet the computer will produce different results for the two expressions. There are many other cases where the expected result is different from what a computer calculates.
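To make that concrete, here is a minimal C sketch of the mismatch (uint32_t is used so the wraparound is well-defined; with signed 32-bit ints the overflow is technically undefined behaviour, but the mismatch is the same in spirit):

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void) {
        uint32_t x = 30000000u, k = 1000u;

        /* x * k = 30,000,000,000 does not fit in 32 bits,
           so the product wraps around 2^32 before the division. */
        uint32_t a = x * k / k;

        /* Dividing first keeps every intermediate value in range. */
        uint32_t b = x / k * k;

        printf("a = %" PRIu32 "\n", a);  /* 4230196  -- deterministic, but wrong */
        printf("b = %" PRIu32 "\n", b);  /* 30000000 -- the expected value       */
        return 0;
    }

Both runs are perfectly repeatable; only one of them matches the mathematics.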
A good calculator will, however, do this correctly (as in: the way anyone would expect). Small cheap calculators resort to confusing syntax, but if you pay $30 for a decent handheld calculator, or use something decent like WolframAlpha on your phone/laptop/desktop, you won't run into precision issues for reasonable numbers.
He’s not talking about order of operations, he’s talking about floating point error, which will accumulate in different ways in each case, because floating point is an imperfect representation of real numbers.
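For instance, a tiny C sketch of order-dependent floating point results (single precision is used only to make the effect easy to see; doubles behave the same way at larger magnitudes):

    #include <stdio.h>

    int main(void) {
        float big = 1e8f;   /* exactly representable as a float            */
        float small = 1.0f; /* far smaller than the float spacing near 1e8 */

        /* The same three values, summed in a different order. */
        float a = (big + small) - big;  /* small is absorbed: result is 0 */
        float b = (big - big) + small;  /* result is 1                    */

        printf("a = %g, b = %g\n", a, b);
        return 0;
    }

Mathematically both expressions equal 1; the rounding just lands in different places depending on the order of evaluation.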
Yep, the specific example wasn't important. I chose an example involving order of operations and an integer overflow simply because it would be easy to discuss. (I have been out of the field for nearly 20 years now.) Your example of floating point errors is another. I also encountered artifacts from approximations of transcendental functions.
Choosing a "better" language was not always an option, at least at the time. I was working with grad students who were managing huge datasets, sometimes for large simulations and sometimes from large surveys. They were using C. Some of the faculty may have used Fortran. C exposes you to the vulgarities of the hardware, and I'm fairly certain Fortran does as well. They weren't going to use a calculator for those tasks, nor an interpreted language. Even if they had wanted to choose another language, the choice was limited by the machines they used. I've long since forgotten what the high performance cluster was running, but it wasn't Linux and it wasn't on Intel. They may have been able to license something like Mathematica for it, but that wasn't the type of computation they were doing.
I didn't consider it an order of operations issue. Order of operations doesn't matter in the above example unless you have bad precision. What I was trying to say is that good calculators have plenty of precision.
But floating point errors manifest in different ways. Most people only care about 2 to 4 decimals, which even the cheapest calculators can handle well across a good number of consecutive, ordinary computations. Anyone who cares about better precision will choose a better calculator. So floating point error is remediable.
> The determining factor is always "did I come up with this tool". Somehow, subsequent generations always manage to find their own competencies (which, to be fair, may be different).
In a sense, I think you are right. We are currently going through a period of transition that values some skills and devalues others. The people who see huge productivity gains because they don't have to do the meaningless grunt work are enthusiastic about that. The people who did not come up with the tool are quick to point out pitfalls.
The thing is, the naysayers aren't wrong since the path we choose to follow will determine the outcome of using the technology. Using it to sift through papers to figure out what is worth reading in depth is useful. Using it to help us understand difficult points in a paper is useful. On the other hand, using it as a replacement for reading the papers is counterproductive. It is replacing what the author said with what a machine "thinks" an author said. That may get rid of unnecessary verbosity, but it is almost certainly stripping away necessary details as well.
My university days were spent studying astrophysics. It was long ago, but the struggles with technology handling data were similar. There were debates between older faculty, who were fine with computers as long as researchers were there to supervise the analysis every step of the way, and newer faculty, who needed computers to take raw data to reduced results without human intervention. The reason was, as always, productivity. People could not handle the massive amounts of data generated by the new generation of sensors or by systematic large scale surveys if they had to intervene at every step of the way. At a basic level, you couldn't figure out whether it was a garbage-in, garbage-out type of scenario because no one had the time to look at the inputs. (I mean no time in an absolute sense. There was too much data.) At a deeper level, you couldn't even tell whether the data processing steps were valid unless there was something obviously wrong with the data. Sure, the code looked fine. If the code did what we expected of it, mathematically, it would be fine. But there were occasions where I had to point out that the computer wasn't working the way they thought it was.
It was a debate in which both sides were right. You couldn't make scientific progress at a useful pace without sticking computers in the middle and letting them take over the grunt work. On the other hand, the machine cannot be used as a replacement for the grunt work of understanding, whether that involves reading papers or analyzing the code from the perspective of a computer scientist (rather than a mathematician).
Take a look at the technology sitting in front of you. How many ideas does it incorporate that were tried and failed, or were tried but languished in niche markets for decades before they became an everyday thing?
A lot of ideas fail because they're not ready: they are expensive, they are not reliable (yet), the world is not ready for them. None of those reasons mean an idea is bad. They simply mean it will take more time and effort for them to work.
I agree. I do not think we are anywhere near the original conversation anymore, however. I certainly never said anything that contradicts your comment.
He had a comment that said, essentially, “all good ideas eventually win out.” He then heavily edited his comment after I responded. That could be the source of confusion for you here.
> what exactly am I supposed to do with the above?
Exactly what you did there: ask a question.
You tried it, you know where the potential failure points are. Don't assume the super optimistic person hasn't considered them. Ask them how they would address those issues. If they have addressed them, maybe there is something workable. If they haven't addressed those failure points, you have given them a choice: to tackle those issues in the background or to set the idea aside.
> We (as a society/culture) are absolutely giving our children passes and teaching them to act this way.
That depends upon where you teach. I've worked in schools where families who would put up with that type of behaviour were an anomaly. The school sends the same message.
Of course, one can argue that society is sending conflicting messages. Yet then my question would be: are those messages coming from people who are truly reflective of society? Those messages are certainly coming from the loudest voices, voices that are (more often than not) controlled by a few organizations that seem to have a moral compass that points towards the profit of the organization rather than social welfare. Even then I have to wonder whether the views of the organization reflect the views of the people it is composed of.
Yes, it does. I was speaking generally. I think if you selected teachers at random from the entire set of K-12 teachers in America, you'd find more who do have to deal with that behavior than don't.
That's the impression I have as well, but I am also cautious about accepting it. People tend to discuss the bad schools and ignore the good ones. They tend to focus upon the families who don't care for their kids (whether they are poor or rich) and ignore the families who do. It's easy to understand why: the kids who act out need a disproportionate amount of attention to keep the system on track.
I wonder if it isn't so much the absolute number of kids who act out (at least initially) as it is the change in the way we've handled consequences. My understanding is that in a lot of school systems it's nearly impossible to hold a child back or to fail them, and that it's much harder to mete out discipline. Even if the number is holding steady, the rest of the class and their families still see that there are no consequences for not meeting standards or for exhibiting problematic behavior, which is sort of the start of a slow-moving poison.
There are the info boxes it could be added to; that way it is always available at a mouse click.
That said, I'm not sure how useful expanding most of the acronyms would be. Names like Negative/Positive Metal Oxide Semiconductor aren't exactly self-explanatory, Vdd isn't really an acronym, etc.