> People cook with teflon-coated pans for the tiny convenience over a nitrided, ceramic, or seasoned cast iron pan.
...which has absolutely nothing to do with the PFOA that you might reasonably be concerned about. Teflon is chemically inert. It's literally used for human body implants. Teflon-coated pans are not your enemy. Fire-fighting foam, on the other hand -- you probably shouldn't bathe in it.
Any test that "detects" teflon in the generic category of "PFAS" is a hopelessly flawed test [1]. Unfortunately, a great many of these papers don't make the distinction, whether intentionally or due to incompetence, or simply because it's far easier to do that, and it gets better headlines.
[1] Important aside: historically, several of the major manufacturers of teflon had problems with PFOA contamination around the factories due to manufacturing processes. This is unrelated to your personal use of a Teflon pan, and also, the process has been changed. If you want to argue that the new process is also polluting, fine, make that argument -- but don't assert that the use of the final product is itself unsafe.
Overheat them, which means the stuff gets into the air. Many, many pet birds have died from this, simply because they're more susceptible.
Use the wrong utensils in them, which means you start to scratch the Teflon layer.
I'm not saying you cannot use them right, but too many people don't, and the product isn't safe when improperly used. This is true for many products, but in this case plenty of people aren't aware they're holding it wrong.
> Overheat them, which means the stuff gets into the air. Many, many pet birds have died from this, simply because they're more susceptible.
And again, this has nothing to do with PFAS or PFOA. The principal cause is a complete breakdown of teflon into fluorinated small-molecule gases, such as hydrogen fluoride and tetrafluoroethylene. You're literally burning the coating off. It has as much relationship to PFOA as wood smoke has to wood.
This has nothing to do with PFAS. When you heat teflon to 500C+, the molecules break down into small molecule fluorinated gases. These molecules are not PFAS, in any way.
> ...which has absolutely nothing to do with the PFOA that you might reasonably be concerned about. Teflon is chemically inert. It's literally used for human body implants. Teflon-coated pans are not your enemy. Fire-fighting foam, on the other hand -- you probably shouldn't bathe in it.
Unfortunately, that is not the case. Yes, Teflon is inert but only when it's not exposed to high temperatures (>350F). When heated, such as in a non-stick pan, Teflon gives off fumes which contain byproducts including breakdowns back into PFAS compounds. So /YES/ the use of the final product (as cookware) /is/ unsafe. NOBODY SHOULD BE USING TEFLON NONSTICK COOKWARE.
> Teflon gives off fumes which contain byproducts including breakdowns back into PFAS compounds.
Completely incorrect. Overheating (aka "burning") completely destroys the molecule, and releases small molecule gases, like hydrogen fluoride. These have no relation to PFAS, they can't turn back into PFAS, and they look nothing like PFAS.
It's like saying that the smoke from burning wood is, in fact, wood.
the concern is not about immediate effects of using products, but the fact that they are now everywhere in the environment, including water supplies and our own blood streams.
it's a Japanese word for "weird". I'm guessing that OP is a bit of an Otaku (aka "obsessed with Japan") -- which is either ironic or completely appropriate.
> He hasn't kept ahead of the destruction of the dollar very well.
The dollar is trading pretty much at 30-year historic highs relative to all other currencies. You have to go back to ~2000 to find a stronger era, and then the 1980s before that.
I don't know how you know that, but even that argument is a straw man, unless you're asserting that all of the other currencies declined in value equally against whatever theoretical good(s) you're holding out as the objective standard for value.
You can argue that current market multiples are higher than 1929 [1] - and they're certainly high - but this also ignores the mechanism that drove that crash, focusing only on the symptoms. We simply aren't doing the kind of consumer margin buying that drove the '29 crash. It isn't even close. Average schlubs were leveraged to the stratosphere to buy shares of boring industrial stocks.
> The US stock market has nearly tripled since then. Literally the best period of stock growth in history.
The only thing I meant to point out was that a very high stock price by itself is no guarantee that there isn't a crisis around the corner. We plugged a lot of holes after 2008 and then reversed a lot of those fixes, and I hear retail investors talking about their stocks at birthday parties again. Deja vu... of course this time it will be different. Or not. Let's just say that, with the proverbial bull in the earthenware goods store on the loose, if we only end up with another financial crisis, that might actually not be so bad.
I actually calculated wrong. It went up 7.5x, not 3x.
In the roaring twenties stockbrokers allowed clients 10:1 margin. Investors were not as well-informed as they are today. There was no deposit insurance.
The SEC wasn't nearly as powerful then as it is in 2024, and there was way more shady shit going on. In that respect, and with the repeal of Glass-Steagall, we're reverting to the pre-Depression era.
Because it’s an inverted claim of falsification it works for literally anything (I cannot prove that X will absolutely not hurt you), but you get pilloried if you put something in the blank that the herd happens to support.
We’ve reached the absurd point where all sides of the political spectrum have sacred cows, and an exceedingly poor understanding of scientific reasoning, and all sides also try to dunk on the others by claiming scientific authority.
You found a paper saying that contamination is possible. That doesn’t mean that most of these plastic studies are doing the necessary controls, let alone the (almost impossible) task of preventing the contamination in a laboratory setting where nanomolar detection levels are used to make broad claims.
Are more “controls” what is necessary here? The problem wasn’t plastic contamination, it was the presence of stearates. Distinguishing between stearates and microplastics sounds like a classification problem, not a control problem.
There is practically universal recognition among microplastics researchers that contamination is possible and that strong quality controls are needed, and to be transparent and reproducible, they have a habit of documenting their methodology. Many papers and discussions suggest avoiding all plastics as part of the methodology, e.g. “Do’s and don’ts of microplastic research: a comprehensive guide” https://www.oaepublish.com/articles/wecn.2023.61
Another thing to consider is that papers generally compare against baseline/control samples, and overestimating microplastics in baseline samples may lead to a lower ratio of reported microplastics in the test samples, not higher.
Many papers in this field are missing obvious controls, but you’re correct that controls alone are insufficient to solve this problem.
When you are taking measurements at the detection limit of any molecule that is widespread in the environment, you are going to have a difficult time of distinguishing signal from background. This requires sampling and replication and rigorous application of statistical inference.
> Another thing to consider is that papers generally compare against baseline/control samples,
Right, that’s what a control is.
> and overestimating microplastics in baseline samples may lead to a lower ratio of reported microplastics in the test samples, not higher.
There’s no such thing as “overestimating in baseline samples”, unless you’re just doing a different measurement entirely.
What you’re trying to say is that if there’s a chemical everywhere, the prevalence makes it harder to claim that small measurement differences in the “treatment” arm are significant. This is a feature, not a bug.
You’re still bringing up different issues than this article we are commenting on.
> There’s no such thing as “overestimating in baseline samples”
What do you mean? Contamination and mis-measurement of control samples is a thing that actually happens all the time, and invalidates experiments when discovered.
> What you’re trying to say is that if there’s a chemical everywhere, the prevalence makes it harder to claim that small measurement differences in the “treatment” arm are significant.
No. What I was trying to say is that if the control is either mis-measured, for example by accidentally counting stearates as microplastics, or contaminated, then the summary outcome may underestimate or understate the prevalence of microplastics in the test sample, even though the measurement over-estimated it.
> What do you mean? Contamination and mis-measurement of control samples is a thing that actually happens all the time, and invalidates experiments when discovered.
The entire point of a control is to test for that sort of contamination (or more generally, for malfunctions in the experimental workflow). In the case of a negative control, specifically, you're looking for a "positive" where one should not exist. If an experiment is set up such that you can obtain differential contamination in the controls but not the experimental arms, as you've described, then the entire experiment is invalid.
> What I was trying to say is that if the control is either mis-measured, for example by accidentally counting stearates as microplastics, or contaminated, then the summary outcome may underestimate or understate the prevalence of microplastics in the test sample, even though the measurement over-estimated it.
The control cannot be "mis-measured", any more or less than the other arms can be "mis-measured". You treat them identically, otherwise the control is not a control. Neither example you've given is an exception: if the assay mistakes chemical B for chemical A, then it will also do so for the non-controls. If the experimental process contaminates the controls, it will also contaminate the non-controls.
What you're missing is that there's no absolute "correct" measurement -- yes, the control may itself be contaminated with something you don't even know about, thus "understating" the absolute measurement of whatever thing you're looking for, but the absolute measurement was never the goal. You're looking for between-group differences, nothing more.
Just to make it clearer, if I were going to run an extremely naïve experiment of this sort (i.e. detection of trace chemical contamination C via super-sensitive assay A) with any hope of validity, I'd want to do multiple replications of a dilution series, each with independent negative and positive controls. I'd then use something like ANOVA to look for significant deviations across the group means. This is like the "science 101" version of the experimental design. Any failure of any control means the experiment goes in the trash. Any "significant" result that doesn't follow the expected dilution series patterns, again, goes in the trash.
(This is, of course, after doing everything you can to mitigate for baseline levels of the contaminant in the lab environment, which is a process that itself probably requires multiple failed iterations of the experiment I just described.)
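If it helps, here's a toy sketch of that design in Python -- made-up numbers, scipy for the ANOVA, and not any real lab workflow, just the shape of the analysis:

```python
# Toy sketch of the "science 101" design above: a replicated dilution
# series with a negative control, compared by one-way ANOVA.
# All readings are simulated; nothing here comes from a real assay.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Assay readings (arbitrary units), n = 6 independent replicates per group.
blank     = rng.normal(1.0, 0.3, 6)   # negative control: clean lab water
diluted   = rng.normal(1.2, 0.3, 6)   # sample diluted 1:10
undiluted = rng.normal(3.0, 0.3, 6)   # undiluted sample

f_stat, p_value = f_oneway(blank, diluted, undiluted)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A "significant" p-value only counts if every control behaved and the
# group means follow the expected dilution pattern (undiluted > 1:10 > blank).
# Any control failure or out-of-pattern result: the run goes in the trash.
```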
Most of the plastic contamination papers I have read are far, far from even that naïve baseline.
> The entire point of a control is to test for that sort of contamination
No, the point of a control is to give you a reference point that shares all the systemic biases and unknown unknowns, not to detect those biases. If you follow the same procedure on a known null and on your experiment and observe an effect, assuming you really did exactly the same thing except the studied intervention, you can subtract out the bias.
This is one example of technical jargon diverging from colloquial or intuitive use, and it's the type of thing that people who haven't had statistics or scientific-process training often struggle with, because they keep applying their colloquial intuitions.
You talk like you understand this in the rest of the comment, so I'm confused by this framing. The person you are replying to points out (in my reading) that contamination of the control 1) does happen in practice (in the sense that there was an accidental intervention) and 2) if the gloves contaminated both the measurements and the control in the same way, then the control is serving exactly its purpose.
You’re repeating several of my points in your own words, supporting them and not arguing with them, even though your language and emphasis suggests you think you are arguing.
> then the entire experiment is invalid
Isn’t that what I said? You even quoted me saying it. But I didn’t say anything about only control being contaminated or mis-measured, I think you’re assuming something I didn’t say. Validity is, of course, compromised if the control is compromised, regardless of what happens to the test samples.
> The control cannot be “mis-measured” […] yes, the control may itself be contaminated […]
So which is it? Isn’t the article we’re commenting on talking about the possibility of mis-measuring? Are you suggesting this article cannot possibly be an issue when measuring control samples? Why not?
Controls absolutely can be mis-measured or contaminated or both. It has been known to happen. It’s bad when this happens because it means the experiment has to be re-done.
> If the experimental process contaminates the controls, it will also contaminate the non-controls
Yes! This is exactly what I was implying, and is exactly how you might end up underestimating the relative presence of whatever you’re looking for in the test, if your classification procedure overestimates it.
> You’re looking for between-group differences
Yes! And this is why, if for example you didn't notice your control had stearates and accidentally counted them as microplastics, and then reported that your test sample had 2x more microplastics than your control, you might have missed the fact that your test actually had 10x more microplastics, or that your control actually had none when you incorrectly thought it had some.
This, of course, is not the only possible outcome, nor the only way that the results might be distorted. But it is one possible outcome that the Michigan paper at hand is warning against, no?
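To put rough numbers on that 2x-vs-10x scenario (purely invented counts, just to illustrate the arithmetic):

```python
# Hypothetical particle counts -- not from any paper -- showing how
# miscounting stearates as microplastics in both arms shrinks the ratio.
true_control  = 1    # microplastic particles actually in the control
true_sample   = 10   # microplastic particles actually in the test sample
stearate_hits = 4    # stearate particles misclassified as plastic in each arm

reported_control = true_control + stearate_hits    # 5
reported_sample  = true_sample + stearate_hits     # 14

print(true_sample / true_control)                  # 10.0  (true enrichment)
print(reported_sample / reported_control)          # 2.8   (reported enrichment)
```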
> Most of the papers I have read are far, far from even that naïve baseline.
Short of it, or exceeding it? Based on earlier comments, I assume you mean they’re not meeting your standards. I don’t know what you’ve read, and my brief googling did not seem to support your claims here so far. Can you provide some references? It would be especially helpful if you showed recent/modern SOTA papers, work that is considered accurate, and is highly referenced.
I agree completely. My point is that documenting methodology is standard practice, as is strict quality control, in the microplastics literature. I don’t know what controls are missing according to GP, and we don’t yet have references here to back up that claim. By and large I think researchers are aware of the difficulties measuring this stuff, and doing everything they can to ensure valid science.
Spiritual equivalent of a life sciences forum discovering memory safety, one person who wrote code for a bit saying they wrote a memory bug in C once, then someone clutching pearls about why all programmers irresponsibly write memory unsafe code given it has a global impact.
Been here 16 years, it's always an adventure seeing whether stuff like this falls into:
A) Polite interest that doesn't turn into self-keyword-association
B) Science journalism bad
C) Can you believe no one else knows what they're doing.
(A) almost never happens, has to avoid being top 10 on front page and/or be early morning/late night for North America and Europe. (i.e. most of the audience)
(B) is reserved for physics and math.
(C) is default leftover.
Weekends are horrible because you'll get a "harshin' the vibe" penalty if you push back at all. People will pick at your link but not the main one and treat you like you're argumentative. (i.e. 'you're taking things too seriously' but a thoughtful person's version)
> Spiritual equivalent of a life sciences forum discovering memory safety, one person who wrote code for a bit saying they wrote a memory bug in C once, then someone clutching pearls about why programmers irresponsibly write memory unsafe code given it has a global impact.
I used to be a code monkey, I wrote systems software at megacorps, and still can't understand why so many programmers irresponsibly write memory unsafe code given it has a global impact.
That's the analogy working as intended: the answer to "why do programmers still write memory-unsafe code" is the same shape as "why do microplastics researchers still wear gloves." The real answer is boring and full of tradeoffs. The HN thread version skips to indignation: "they never thought of contamination so ipso facto all the research is suspect"
(to go a bit further, in case it's confusing: both you and I agree on "why do people opt-in to memunsafe code in 2026? There’s no reason to" - yet, we also understand why Linux/Android/Windows/macOS/ffmpeg/ls aren't 100% $INSERT_MEM_SAFE_LANGUAGE yet, and in fact, most new code written for them is memunsafe)
You joke, but given that SWE/AI researchers literally invented AI that does everything else for them and is often super-human at intelligence across most things, I would unironically prefer the opinion of the creator of such a system over most others for most things.
I cooked a steak yesterday therefore I am an expert in biology.
Creating a user interface for the world’s knowledge doesn’t make the developer an expert on the knowledge that the interface holds in its database. Regardless of how sophisticated that interface might be.
You don’t need to be qualified to be unsure about something. Being unsure is a healthy position because it’s an acknowledgment that you don’t know something entirely. Which can also mean you have an open mind to learn more about that subject.
Being certain, on the other hand, requires an assumption that you are a subject expert.
But this is all moot anyway because you’re constructing an elaborate strawman here. The original point was that the GP (possibly you?) trusts SWE more than others because they built AI. And I said building databases doesn’t make you smart at the subject loaded into the database.
Really, this whole premise of SWEs assuming expertise on subjects they’ve trained AI on says more about the Dunning-Kruger effect than anything of value in our little tangent.
You can be skeptical in wrong ways. See solipsism for example.
Typically when I get genuine responses to the question, "What would change your mind?" it's an incredibly high bar that is practically impossible to achieve. That's not necessarily a bad thing, but when skepticism is applied without deliberation, it supports biases rather than truth.
So yes, you do need to be qualified to be skeptical, SWEs doubly so.
johnbarron didn't find it. The authors cited it as foundational to their own work. It's ref. 38 in the paper under discussion. From the paper: "this finding had not been reported in the MP literature until 2020, when Witzig et al. reported that laboratory gloves submerged in water leached residues that were misidentified as polyethylene."[1]
> "most of these plastic studies are [not] doing the necessary controls"
which studies? The paper they linked surveys 26 QA/QC review articles[1]. Seems well understood.
> "a laboratory setting where nanomolar detection levels are used to make broad claims"
This is like saying "miles per gallon" when discussing weight. "nanomolar detection levels"...microplastics are individual particles identified by spectroscopy, reported as particles per mm^2. "Nanomolar" is a dissolved-species concentration unit. It has nothing to do with particle counting. (I, and other laymen, understand what you mean but you go on later in the thread to justify your unsourced and unjustified claims here via your subject-matter expertise.)
> "(almost impossible) task of preventing the contamination"
The paper provides open-access spectral libraries and conformal prediction workflows to identify and subtract stearate false positives from existing datasets[1]. Prevention isn't the strategy. Correction is. That's the entire point of the paper they linked and the follow-up in [2]
> This is like saying "miles per gallon" when discussing weight. "nanomolar detection levels"...microplastics are individual particles identified by spectroscopy, reported as particles per mm^2. "Nanomolar" is a dissolved-species concentration unit. It has nothing to do with particle counting. (I, and other laymen, understand what you mean but you go on later in the thread to justify your unsourced and unjustified claims here via your subject-matter expertise.)
This paper used “light-based spectroscopy” [1]. Many others use methods that depend on gas chromatography or NMR. A relatively infamous recent example used pyrolysis GCMS to make low-concentration measurements (hence: nanomolar), which they credulously scaled up by some huge factor, and then made idiotic claims about plastic spoons in brains.
Relatively little quantitative science in this area depends on counting plastic particles in microscopic images, but it’s what gets headlines, because laypeople understand pictures.
[1] as an aside, the choice of terminology here is noteworthy. A simple visual light absorption spectra is also “light based spectroscopy”, but is measuring the aggregate response of a sample of a heterogeneous mixture, and is conventionally converted to molar equivalents via some sort of calibration curve (otherwise you can’t conclude anything). But there could be other approaches that are closer to microscopy, which they also discuss. “Particles per square millimeter” is also a unit of concentration (albeit a shitty one, unless your particles are of uniform mass).
Anyway, the point is that these kinds of quantitative analyses are all trying to do measurements that are fundamentally about concentration, which is why I chose the words that I did.
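For what it's worth, the calibration-curve step I'm referring to is the boring linear-fit kind -- something like this sketch (invented absorbance values, numpy only, not tied to any particular instrument):

```python
# Toy calibration curve: fit absorbance vs. known concentration, then
# invert it to convert an unknown sample's reading into molar units.
# All values are invented for illustration.
import numpy as np

known_conc_nM = np.array([0.0, 10.0, 20.0, 40.0, 80.0])      # standards
absorbance    = np.array([0.01, 0.12, 0.23, 0.45, 0.91])     # measured A

slope, intercept = np.polyfit(known_conc_nM, absorbance, 1)  # linear fit

unknown_absorbance = 0.30
estimated_conc_nM = (unknown_absorbance - intercept) / slope
print(f"~{estimated_conc_nM:.1f} nM equivalent")
```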
"1 nanomole of polyethylene" requires you to pick an arbitrary average molecular weight.
This changes the answer by orders of magnitude depending on what you pick.
Which is why nobody does it.
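Quick arithmetic to show the spread (my numbers, nothing from the paper):

```python
# Converting a mass concentration of a polymer to "nanomolar" depends
# entirely on the average molecular weight you choose to assume.
mass_g_per_L = 1e-6   # hypothetical 1 microgram of polyethylene per litre

for assumed_mw in (1_000, 100_000, 1_000_000):      # g/mol: oligomer -> long PE chains
    nanomolar = mass_g_per_L / assumed_mw * 1e9     # mol/L -> nmol/L
    print(f"MW {assumed_mw:>9,} g/mol  ->  {nanomolar:.3f} nM")
```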
> Relatively little quantitative science in this area depends on counting plastic particles in microscopic images...Many others use methods that depend on gas chromatography or NMR.
So we're dismissive of some subset of papers, because they get false positives using toy methods.
Real science would use gas chromatography.
But...the paper we're dismissing tested gas chromatography. And found the same false positive. [1, in abstract]
> A relatively infamous recent example used pyrolysis GCMS to make low-concentration measurements (hence: nanomolar)
The brain study I'm guessing you are referring to, [2], measured low concentrations, yes.
But it reported them in ug/g.
Because polymers don't have a defined molecular weight.
> made idiotic claims about plastic spoons in brains
The brain study I'm guessing you are referring to, [2], does not mention spoons, or, come close.
Are we sure there's a paper that did that?
[1] Witzig et al, https://pubs.acs.org/doi/10.1021/acs.est.0c03742, "Therefore, u-Raman, u-FTIR, and pyr-GC/MS were further tested for their capability to distinguish among PE, sodium dodecyl sulfate, and stearates. It became clear that stearates and sodium dodecyl sulfates can cause substantial overestimation of PE."
Not sure what you mean or how it’s related. If the idea is microplastics aren’t actually a problem, I’m totally open to that. But “it’s possible everyone involved is overrating it due to scientists seeing fatty acids or hydrocarbons and calling it plastic” needs a little more than anon assertion :)
PE consists of very long hydrocarbon chains. It can degrade into shorter hydrocarbon chains. Fatty acids also have long hydrocarbon chains. The detection method for microplastics commonly involves pyrolysis, which breaks down polymers into smaller molecules. It's not hard to see that they'll end up looking nearly the same.
> Which leaves as observation, you can only do truly creative work - in a high trust society, where people trust you with the resources and leave you alone, after an initial proof of ability.
I don’t know about “high trust”, but I can say with confidence that the “make more mistakes” thesis misses a critical point: evolutionary winnowing isn’t so great if you’re one of the thousands of “adjacent” organisms that didn’t survive. Which, statistically, you will be. And the people who are trusted with resources and squander them without results will be less trusted in the future [1].
Point being, mistakes always have a cost, and while it can be smart to try to minimize that cost in certain scenarios (amateur painting), it can be a terrible idea in other contexts (open-heart surgery). Pick your optimization algorithm wisely.
What you’re characterizing as “low trust” is, in most cases, a system that isn’t trying to optimize for creativity, and that’s fine. You don’t want your bank to be “creative” with accounting, for example.
[1] Sort of. Unfortunately, humans gonna monkey, and the high-status monkeys get a lot of unfair credit for past successes, to the point of completely disregarding the true quality of their current work. So you see people who have lost literally billions of dollars in comically incompetent entrepreneurial disasters, only to be able to run out a year later and raise hundreds of millions more for a random idea.
> A lot of what one previously needed a SWE to do can now be brute forced well enough with AI. (Granted, everything SWEs complained about being tedious.)
Only if you ignore everything they generate. Look at all the comments saying that the agent hallucinates a result, generates always-passing tests, etc. Those are absolutely true observations -- and don't touch on the fact that tests can pass, the red/green approach can give thumbs up and rocket emojis all day long, and the code can still be shitty, brittle and riddled with security and performance flaws. And so now we have people building elaborate castles in the sky to try to catch those problems. Except that the things doing the catching are themselves prone to hallucination. And around we go.
So because a portion of (IMO always bad, but previously unrecognized as bad) coders think that these random text generators are trustworthy enough to run unsupervised, we've moved all of this chaotic energy up a level. There's more output, certainly, but it all feels like we've replaced actual intelligent thought with an army of monkeys making Rube Goldberg machines at scale. It's going to backfire.
What I want to know is, what has this increase in code generation led to? What is the impact?
I don't mean 'Oh I finally have the energy to do that side project that I never could'.
After all, the trade-offs have to be worth something... right? Where are the 1-person billion-dollar firms that Mr Altman spoke about?
The way I think of it is code has always been an intermediary step between a vision and an object of value. So is there an increase in this activity that yields the trade-offs to be a net benefit?
> what has this increase in code generation led to?
Every restaurant in my small town has their menu on the website in a normal way. Apparently someone figured out you can take a picture of a paper menu and have AI code it into HTML.
> we’re finally admitting that all of that leetcode screening and engineer quality gating was a farce, or it wasn’t, and you’re wrong
We’re admitting a bit of both. Offshoring just became more instantaneous, secure and efficient. There will still be folks who overplay their hand.
Macroeconomically speaking, I don’t see why we need more software engineers in the future than we have today, and that’s probably a conservative estimate.
> Macroeconomically speaking, I don’t see why we need more software engineers in the future than we have today, and that’s probably a conservative estimate.
Why? Is the argument that there’s a finite amount of software that the world needs, and therefore we will more quickly reach that finite amount?
Seems more likely to me that if LLMs are a force multiplier for software then more software engineers will exist. Or, instead of “software engineers”, call them “people who create software” (even with the assistance of LLMs).
Or maybe the argument is that you need to be a super genius 100x engineer in order to manipulate 17 collaborative and competitive agents in order to reach your maximum potential, and then you’ll take everyone’s jobs?
Idk just seems like wild speculation that isn’t even worth me arguing against. Too late now that I’ve already written it out I guess.
> instead of “software engineers”, call them “people who create software” (even with the assistance of LLMs)
I think this is my hypothesis. A lot more people with a lot less training will create vastly more software. As a consequence, the trade sort of dissolves at the edges as something that pays a premium. Instead, other competencies become the differentiators.
It's not comparing him to anyone. He has an endowed professorship. This is standard in academia, and you give the name because a) it's prestigious for the recipient and b) it strokes the ego of the donor.