
Would you mind speaking to how black-box interpretability is becoming well known? I've seen Shapley values used for feature attribution, but I'm not sure what else is being done.
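For anyone unfamiliar with the Shapley values mentioned above: the idea is to attribute a model's prediction to each feature by averaging that feature's marginal contribution over all coalitions of the other features. A minimal sketch in pure Python, using a toy linear model and a baseline-substitution value function (both are illustrative assumptions here, not the SHAP library's implementation):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions of f at point x.

    The coalition value v(S) is f evaluated with features in S
    taken from x and the rest taken from the baseline.
    """
    n = len(x)

    def v(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n + 1):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (v(set(S) | {i}) - v(set(S)))
    return phi

# Toy linear model: attributions recover the weighted feature deltas.
f = lambda z: 2 * z[0] + 3 * z[1]
print(shapley_values(f, [1.0, 1.0], [0.0, 0.0]))  # [2.0, 3.0]
```

Exact enumeration is exponential in the number of features, which is why practical tools (e.g. SHAP) rely on sampling or model-specific approximations.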


For an accessible recent overview, see Christoph Molnar's *Interpretable Machine Learning*: https://christophm.github.io/interpretable-ml-book/



