XKCD nicely illustrates the difference between Frequentist and Bayesian statistics:
Bayesian statistics and Bayesian reasoning are an integral part of Machine Learning, and I’ve run into them quite a few times. But I simply never understood them. Oh, I could write down the formulas, quote Bayes’ rule, and say “prior belief”. But I simply didn’t understand what it meant, until Science-Based Medicine discussed it in terms of the prior (biological, chemical, physical) plausibility of treatments. Suddenly the concept became clear, and having this clarity helps me understand how Bayesian assumptions are used in learners. It wasn’t the courses I took or Bishop’s “Bayes über alles” book but a practical application.
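To make the “prior plausibility” intuition concrete, here is a minimal sketch of Bayes’ rule. All the numbers are invented for illustration: the same positive evidence moves an implausible hypothesis far less than a plausible one.

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# A treatment with very low prior (biological) plausibility, given a
# positive trial result (hypothetical likelihoods):
implausible = posterior(prior=0.01, p_evidence_if_true=0.8, p_evidence_if_false=0.05)

# The same positive result for a treatment with a plausible mechanism:
plausible = posterior(prior=0.5, p_evidence_if_true=0.8, p_evidence_if_false=0.05)

print(round(implausible, 3))  # still quite unlikely
print(round(plausible, 3))    # now very likely
```

Identical evidence, very different posteriors: that is the role the prior plays, and why prior plausibility matters when evaluating treatments.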
I’ve had a similar experience with formal logic. I first ran into it in philosophy courses, but understanding how one can reformulate natural language as logical formulas, and then manipulate sets of such formulas to derive conclusions about the truth values (and conditions) of the whole set, e.g. a body of beliefs, only came to me when we discussed formal logic in mathematics. That specific algorithms can automatically reduce a set of Horn clauses to their minimal form, and therefore automatically check whether such a set is consistent, was mind-blowing for me…and opened the door to understanding how philosophy uses formal logic.
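The consistency check for Horn clauses can be sketched as forward chaining (unit propagation). This is my own toy illustration, not from any course or paper: each clause is a pair `(body, head)`, where `head=None` marks a goal clause whose body must not all become true.

```python
def horn_consistent(clauses):
    """Check a set of propositional Horn clauses by forward chaining.

    clauses: list of (body, head) with body a set of atoms and head an
    atom, or None for a goal clause (a constraint that must not fire).
    Returns (consistent, derived_facts).
    """
    facts = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= facts:              # all premises already derived
                if head is None:           # a constraint fired: inconsistent
                    return False, facts
                if head not in facts:
                    facts.add(head)        # derive a new fact
                    changed = True
    return True, facts

# A tiny invented knowledge base:
kb = [
    (set(), "rain"),           # fact: it rains
    ({"rain"}, "wet"),         # rain -> wet
    ({"wet"}, "slippery"),     # wet -> slippery
]
ok, facts = horn_consistent(kb)                 # consistent, derives all three atoms
ok2, _ = horn_consistent(kb + [({"slippery"}, None)])  # adding "not slippery" breaks it
```

Because each pass either derives a new fact or stops, this terminates in polynomial time, which is exactly what makes Horn clauses tractable compared to general propositional satisfiability.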
Update: just what the doctor ordered – Science-Based Medicine discusses Moneyball, the 2012 election, and science- and evidence-based medicine.