
There's no way they used machine learning here. My guess is it was some simple rules like older doctors go first. I wouldn't even call it an algorithm. I think "algorithm" is a weaselly way for the administrator to make it sound more complicated than it was.


My guess is it was some simple rules like older doctors go first

No need to guess.

"It used an algorithm that assigned each person a crude risk score, taking into account factors such as age, job description and the number of coronavirus cases that had been detected in their hospital department. That resulted in personnel like environmental services workers, food service workers and older employees being shuttled to the front of the line.

Residents, who are early in their careers and tend to be young, rotate throughout the hospital to train with various teams of physicians, making them difficult to place in a designated unit."

https://www.nytimes.com/2020/12/18/world/covid-stanford-heal...


I guess they do deserve some credit for prioritizing food service workers and such who are in high-contact situations, not just medical workers!

In addition to having a bug for residents without designated units (which again, how did they not test it on a sample set including residents?), it sounds like they may have weighted age (or department?) too heavily compared to job description. An older physician who is working from home shouldn't rank higher than nearly anyone working every day in the hospital, regardless of age or whether other people in their department got sick already.

It seems like amount of contact with patients should have been weighted far higher than anything else. Patients for elective procedures are required to get a virus test first, so contact with untested patients (as in the ER) should count even higher, and caring for covid+ patients higher still. But I'm not a doctor or medical ethicist. If Stanford has a reason for not doing that, they can, well, explain it, if they want to look better. It makes me figure it was just not done very carefully.
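To make the failure mode concrete, here's a toy sketch of that kind of weighted risk score. The weights, fields, and numbers are all invented for illustration; the only grounded part is the reported inputs (age, job/contact, department case count) and the reported outcome (residents with no fixed department scoring low):

```python
# Hypothetical linear risk score; all weights are made up for illustration.
def risk_score(age, patient_contact_days_per_week, dept_case_count):
    # If the age term dominates, a young on-site resident loses to an
    # older physician who is working from home.
    return 2.0 * age + 5.0 * patient_contact_days_per_week + 3.0 * dept_case_count

# A resident rotates between units, so no "department" accrues cases to them.
resident = risk_score(age=28, patient_contact_days_per_week=6, dept_case_count=0)
remote_md = risk_score(age=65, patient_contact_days_per_week=0, dept_case_count=4)

print(resident, remote_md)  # 86.0 vs 142.0: the remote physician outranks the resident
```

Nothing here is mysterious or learned from data; it's just a formula whose weights encode a (bad) human judgment.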

It seems to me that the formula was just not very good. While it's technically an "algorithm", sure, I think most of us in the field would just call it a "formula", not an algorithm. I think they used the word "algorithm" precisely because to the layperson it seems more mysterious, more sophisticated, more complicated, harder to question, and less obvious that it was just something humans did, possibly not very carefully.


I really don't think you can call that an algorithm; it's a set of human-defined rules. You could call it a formula, but an algorithm implies a series of steps, which are not present here.


Since algorithm is defined as "a process or set of rules to be followed in calculations or other problem-solving operations", how is it not an algorithm?


Going by that definition, the rules here are not defining the steps to be followed to do the calculation. The rules here are defining the answer that they want, not the steps necessary to figure it out.


As computer scientists and adjacent, we have a whole taxonomy for algorithms. It should be noted that "sort by salary" is an algorithm.


In popular media "algorithm" has taken on a new meaning, probably because the word itself sounds mysterious.

I can't fully define the new meaning, but it's like the YouTube video recommendation algorithm, which is more like a whole system than a single algorithm in the original sense, or the Facebook algorithm for ranking items on your feed: these big-data, machine-learning, opaque models.


If the set of rules is defined by humans and then those rules are coded into a SQL statement, is it really an algorithm? If the humans define the rules and later realize the rules are bad, it's not the sorting algorithm that caused the problem.


I've got to be honest, I answered yes to both your rhetorical "is this really an algorithm?" questions, which I'm guessing was not your intention. It would help if you gave some examples of what, in your eyes, is missing in each case that disqualifies them.

With the SQL example:

I don't think a set of rules is disqualified from being called an algorithm just because it's implemented using some other tool or process, because we do this all the time: when programming, we split implementations into functions, and we could have used the standard library's sort function too. I would still consider it an algorithm no matter how the "sort the results" step ended up being implemented.

If the result is wrong, it is not necessarily because the implementation of SQL's ORDER BY is incorrect (it could be, but that's unlikely for a popular SQL implementation), and if you know the rules are incorrect then I agree, I definitely wouldn't blame the sorting algorithm (at least initially, although it's possible it's also wrong).
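To illustrate the point, here's a minimal sketch (using sqlite3, with an entirely made-up table and scores): the ORDER BY step faithfully sorts whatever scores the human-defined rule produced, so a bad outcome implicates the rule, not the sort.

```python
import sqlite3

# Made-up staff table with scores from some flawed, human-defined rule.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (name TEXT, risk_score REAL)")
conn.executemany(
    "INSERT INTO staff VALUES (?, ?)",
    [("frontline resident", 86.0), ("remote physician", 142.0)],
)

# The sorting step itself is correct: highest score first.
ranked = conn.execute(
    "SELECT name FROM staff ORDER BY risk_score DESC"
).fetchall()

print(ranked)  # the sort reflects the (bad) scores exactly as given
```

Whether you call the whole pipeline "an algorithm" or reserve that word for the ORDER BY step is exactly the terminology dispute in this thread.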


Of course it wasn't using machine learning; there wasn't anything to learn here.

But the point that machine learning has become an ideal package for existing biases is still worth mentioning: it presents mistakes involving values, such as racism, as mistakes that are simply technical and can't be helped, and that framing lets regular algorithms get the same kind of pass.



