

Bayesian Learning: Introduction

 

Bayesian machine learning is a subset of probabilistic machine learning approaches (for other probabilistic models, see Supervised Learning). This blog gives a brief introduction to Bayesian learning.

 

In Bayesian learning, model parameters are treated as random variables, and parameter estimation amounts to constructing posterior distributions over those parameters from the observed data.
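
As a minimal sketch of this idea (not from the original article; the prior parameters and data are made-up values), the Python snippet below treats the bias of a coin as a random variable: a Beta prior over the bias combined with Bernoulli observations yields a Beta posterior, thanks to conjugacy.

    # Sketch: a model parameter (coin bias theta) treated as a random variable.
    # A Beta prior plus Bernoulli observations gives a Beta posterior (conjugacy).
    from scipy import stats

    alpha_prior, beta_prior = 2, 2       # mild prior centered on theta = 0.5
    data = [1, 1, 0, 1, 1, 1, 0, 1]      # observed flips: 1 = heads, 0 = tails
    heads = sum(data)
    tails = len(data) - heads

    # Posterior over theta: Beta(alpha + heads, beta + tails).
    posterior = stats.beta(alpha_prior + heads, beta_prior + tails)
    print("posterior mean:", posterior.mean())             # point summary of theta
    print("95% credible interval:", posterior.interval(0.95))

Note that the result is an entire distribution over the parameter, not a single point estimate; that is exactly the shift described above.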

 

Why Bayesian Learning Algorithms?

Bayesian learning approaches are relevant to machine learning for two reasons: they provide practical learning algorithms in their own right, and they offer a useful framework for understanding and analyzing other learning methods.

 

Features of Bayesian learning methods include:

Each observed training example can incrementally decrease or increase the estimated probability that a hypothesis is correct (see the sketch after this list).

 

This is more flexible than methods that discard a hypothesis outright as soon as it is found to be inconsistent with a single example. Prior knowledge can be combined with the observed data to determine the final probability of a hypothesis.

 

Hypotheses that make probabilistic predictions can be accommodated by Bayesian approaches (e.g., hypotheses such as “this pneumonia patient has a 93 percent chance of complete recovery”).
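
The following sketch (illustrative only; the hypotheses and data are made up) shows the first feature above: three candidate coin biases start with a uniform prior, and each observed flip incrementally raises or lowers each hypothesis's probability rather than ruling any of them out.

    # Sketch: incremental Bayesian updating over a small hypothesis space.
    hypotheses = {"fair": 0.5, "heads-biased": 0.8, "tails-biased": 0.2}
    posterior = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform prior

    flips = [1, 1, 0, 1, 1]  # observed data: 1 = heads, 0 = tails
    for flip in flips:
        # Likelihood of this single observation under each hypothesis.
        likelihood = {h: (p if flip == 1 else 1.0 - p) for h, p in hypotheses.items()}
        # Multiply prior by likelihood, then renormalize to sum to 1.
        unnorm = {h: likelihood[h] * posterior[h] for h in hypotheses}
        total = sum(unnorm.values())
        posterior = {h: w / total for h, w in unnorm.items()}
        print(flip, {h: round(p, 3) for h, p in posterior.items()})

No hypothesis is ever assigned probability zero by a single inconsistent flip; its probability merely shrinks.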

 

The validity of a proposition is computed via Bayesian estimation.

 

The proposition's validity is determined by two factors:

i). a prior (preliminary) estimate of its probability, and

ii). new evidence that is relevant to it.

 

Practical Issues: 

Bayesian methods typically require initial knowledge of many probabilities; when these probabilities are not known in advance, they must be estimated from background knowledge or previously available data. They can also involve significant computational cost, since determining the Bayes optimal hypothesis is expensive in the general case.

BAYES THEOREM:

Consider a typical machine learning task: you have a set of training data, with inputs and outputs, and you would like to figure out how to map one to the other.

 

As a result, you piece together a model and soon have a deterministic way of making predictions for a target variable y given a new input x.

 

There’s only one problem: you have no way of explaining what’s going on inside your model! You just know it was trained to minimize some loss function on your training data, but that’s not much information. In an ideal world, you’d have an objective summary of your model’s parameters, complete with confidence intervals and other statistical morsels, and you’d be able to reason about them in probabilistic terms.

 

This is where Bayesian Machine Learning enters the picture.

 

Bayes’ theorem is a method for calculating the probability of a hypothesis based on its prior probability, the probability of observing specific data given the hypothesis, and the observed data itself.

 

Bayes’ theorem is defined as:

P(h|D) = P(D|h) * P(h) / P(D)

where:

P(h) is the prior probability of hypothesis h, before any data is observed;

P(D) is the prior probability of observing the data D, independent of any hypothesis;

P(D|h) is the probability of observing the data D given that hypothesis h holds;

P(h|D) is the posterior probability of hypothesis h given the observed data D.

P(h|D) increases with P(h) and with P(D|h); conversely, as P(D) grows, P(h|D) drops, since the more probable it is that D would be observed regardless of h, the less evidence D provides in support of h.
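
A direct sketch of the formula in code (the numbers are arbitrary illustration values, chosen only so the probabilities stay valid):

    # Sketch: P(h|D) = P(D|h) * P(h) / P(D), with made-up numbers.
    def posterior(p_d_given_h, p_h, p_d):
        """Posterior probability of hypothesis h given data D."""
        return p_d_given_h * p_h / p_d

    # Fixed prior P(h) and likelihood P(D|h); only the evidence P(D) varies.
    for p_d in (0.2, 0.4, 0.8):
        print(f"P(D) = {p_d:.1f}  ->  P(h|D) = {posterior(0.15, 0.4, p_d):.3f}")
    # The posterior shrinks as P(D) grows, matching the remark above.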


We have already seen one use of Bayes’ Theorem in the analysis of Knowledge Cascades: there we discovered that, based on the conditional probabilities computed using Bayes’ Theorem, reasonable decisions may be made even when one’s own private information is set aside.

 

Applications of Bayes’ Theorem:

The theorem has a wide range of applications that aren’t confined to finance. 

 

Bayes’ theorem, for example, can be used to determine the accuracy of medical test results by taking into account how likely any particular person is to have the condition as well as the test’s overall accuracy.
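
A worked version of that example is sketched below; the prevalence, sensitivity, and false-positive rate are assumed illustration values, not figures from the article.

    # Sketch: Bayes' theorem applied to a medical test (made-up numbers).
    prevalence = 0.01       # P(condition): 1% of the population is affected
    sensitivity = 0.95      # P(positive | condition)
    false_positive = 0.05   # P(positive | no condition)

    # Evidence P(D): total probability of a positive test result.
    p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

    # Posterior: P(condition | positive) = P(positive | condition) * P(condition) / P(positive)
    p_cond_given_pos = sensitivity * prevalence / p_positive
    print(f"P(condition | positive) = {p_cond_given_pos:.3f}")  # about 0.161

Even with a fairly accurate test, a positive result here implies only about a 16 percent chance of actually having the condition, because the condition itself is rare; this is exactly the prior-times-likelihood trade-off the theorem captures.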

 

