Naive Bayesian classification is like a good magic trick, perplexing until explained and
continuing to provide entertainment. The idea is to use observations from the past to classify a new observation. The classic example, made famous by Paul Graham, is the Bayesian spam classifier: it is first trained on a set of messages and explicitly told which are "spam" and which are "clean". After training, it is presented with each new email and "guesses" whether it is spam or clean, based on the "features" of the message.