AdaBoost Model
- Python Automation and Machine Learning for ICs -
- An Online Book -

=================================================================================

AdaBoost, short for Adaptive Boosting, is an ensemble learning method in machine learning that improves the performance of weak classifiers (learners) by combining them into a single strong classifier. The basic idea behind AdaBoost is to assign weights to the training examples and, in each iteration, increase the weights of the examples the weak learners misclassified, forcing subsequent learners to concentrate on the difficult-to-classify instances:

  1. Initialize weights: Assign equal weights to all training examples.

  2. For T iterations (T is the number of weak learners):

     a. Train a weak learner: Train a weak learner (classifier) on the training data with the current weights.

     b. Compute error: Calculate the error of the weak learner on the training data, taking the weights of the data points into account.

     c. Compute learner weight: Compute the weight of the weak learner from its error; a lower error yields a higher weight.

     d. Update weights: Update the weights of the training examples. Increase the weights of the misclassified examples, making them more influential in subsequent iterations, and decrease the weights of the correctly classified examples.

  3. Combine weak learners: Combine the weak learners into a strong classifier by assigning a weight to each weak learner based on its performance.

  4. Make predictions: Use the combined strong classifier to make predictions on new, unseen data.

The final strong classifier is a weighted sum of the weak classifiers, where each weak classifier's weight is determined by its performance during training; a minimal sketch of this loop is shown below.
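To make the recipe concrete, the following is a minimal from-scratch sketch (not the book's own code): it assumes binary labels in {-1, +1} and uses scikit-learn depth-1 decision trees (stumps) as the weak learners on a synthetic dataset.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)
y = np.where(y == 0, -1, 1)             # relabel to {-1, +1}

T = 20                                  # number of weak learners
w = np.full(len(y), 1 / len(y))         # step 1: equal initial weights
stumps, alphas = [], []

for t in range(T):                      # step 2: T boosting rounds
    stump = DecisionTreeClassifier(max_depth=1)
    stump.fit(X, y, sample_weight=w)    # 2a: train on the weighted data
    pred = stump.predict(X)
    # 2b: weighted error (clipped to avoid division by zero below)
    eps = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - eps) / eps)   # 2c: learner weight
    w = w * np.exp(-alpha * y * pred)   # 2d: re-weight the examples ...
    w = w / w.sum()                     # ... and normalize
    stumps.append(stump)
    alphas.append(alpha)

# Steps 3-4: the strong classifier is a weighted vote of the weak learners
H = np.sign(sum(a * s.predict(X) for a, s in zip(alphas, stumps)))
print("training accuracy:", (H == y).mean())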

Mathematically, the weights of the weak learners and the final combined classifier can be represented as follows:

  • Compute the error (ε_t) of the weak learner h_t:

              ε_t = Σ_i w_i · 1[h_t(x_i) ≠ y_i] ------------------------------- [3734a]

  • Compute the weight (α_t) of the weak learner:

              α_t = (1/2) · ln((1 − ε_t)/ε_t) ------------------------------- [3734b]

  • Update the weights which are assigned to the individual training examples during the training process:

              w_i ← w_i · exp(−α_t · y_i · h_t(x_i)) -------------------- [3734c]

    where,

              y_i is the true label of example i,

              h_t(x_i) is the prediction of the weak learner for example i,

              w_i is the weight of example i.

    Here, the sum of the weights is normalized after each update so that Σ_i w_i = 1. The final strong classifier prediction is given by,

              H(x) = sign( Σ_{t=1}^{T} α_t · h_t(x) ) -------------------- [3734d]

    In the equation above, T is the total number of weak learners.
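As a quick numeric check of equations [3734a] to [3734c], the toy round below uses five made-up labels and predictions (hypothetical values, purely for illustration); after the update, the single misclassified example carries half of the total weight.

import numpy as np

w = np.full(5, 0.2)                      # current example weights
y = np.array([1, 1, -1, -1, 1])          # true labels y_i
h = np.array([1, 1, -1, -1, -1])         # stump predictions h_t(x_i)

eps = w[h != y].sum()                    # [3734a]: eps = 0.2
alpha = 0.5 * np.log((1 - eps) / eps)    # [3734b]: alpha = 0.5*ln(4) ≈ 0.693
w = w * np.exp(-alpha * y * h)           # [3734c]: up-weight the mistake
w = w / w.sum()                          # normalize
print(eps, alpha, w)                     # misclassified weight rises to 0.5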

Figure 3734a shows an example of boosting with the AdaBoost model in ML.

Figure 3734a. Boosting with the AdaBoost model in ML (code): (a) base model (weak learner), and (b) AdaBoost model.
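The code behind Figure 3734a is linked from the figure rather than reproduced on this page, so the sketch below is only a hedged re-creation under assumed settings: a depth-1 decision tree as the base model for panel (a), and scikit-learn's AdaBoostClassifier combining 50 such stumps for panel (b). On a synthetic dataset, the boosted model typically scores noticeably higher than the single stump.

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# (a) base model: a single weak learner (decision stump)
base = DecisionTreeClassifier(max_depth=1).fit(X_tr, y_tr)
print("base model accuracy:    ", base.score(X_te, y_te))

# (b) AdaBoost: 50 such stumps combined by boosting
# (use base_estimator= instead of estimator= on scikit-learn < 1.2)
boost = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),
    n_estimators=50,
    random_state=0,
).fit(X_tr, y_tr)
print("AdaBoost model accuracy:", boost.score(X_te, y_te))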

 

=================================================================================