Introduction
Classification is the process of assigning objects to one of several predefined categories, and it is applied in many areas. This paper will look at one model of categorizing objects. Generally, there are two models of categorizing objects: descriptive modeling and predictive modeling. In descriptive modeling, the classification model is used to summarize the differences between objects that belong to different classes. Predictive modeling, on the other hand, is used to predict the class label of records whose class is unknown. A classification model can be viewed as a black box that takes an object's attributes as input and outputs the object's class category. Classification techniques are best suited to data whose attributes are nominal or binary in nature; they work poorly for ordinal data.
Many classification models have been built from scratch. A classification technique is a method of building classification models from an input data set. Each technique employs a learning algorithm to identify the relationship between the attributes of the input data and its class label. One requirement for the learning algorithm is that it should fit the input data well while still predicting class labels correctly. In this respect, a learning algorithm should build models with good generalization ability, that is, the ability of a classification model to predict the class labels of previously unseen records. The rest of the paper will focus on the decision tree and Hunt's algorithm, which is used in this model.
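The black-box view described above can be made concrete with a small sketch. The training set, attribute names, and the classification rule (a simple "most matching attributes" rule) are all invented for illustration; this is a minimal sketch, not any particular technique from the literature.

```python
# A classification model as a black box: it takes a record's attribute
# values as input and outputs a class label. The "model" here just stores
# the labelled training records; prediction picks the stored record whose
# attribute values match the query best. All names/data are hypothetical.

def train(records):
    """'Learning' in this toy sketch is just memorizing the training set."""
    return list(records)

def predict(model, attributes):
    """Return the label of the stored record with the most matching attributes."""
    def matches(rec):
        return sum(rec["attrs"][k] == v for k, v in attributes.items())
    best = max(model, key=matches)
    return best["label"]

training_set = [
    {"attrs": {"home_owner": "yes", "marital": "single"},  "label": "no"},
    {"attrs": {"home_owner": "no",  "marital": "married"}, "label": "no"},
    {"attrs": {"home_owner": "no",  "marital": "single"},  "label": "yes"},
]

model = train(training_set)
# Generalization: the model assigns a label to a previously unseen record.
print(predict(model, {"home_owner": "yes", "marital": "married"}))
```

A real learning algorithm would, of course, build a more compact model than a memorized list, which is what the decision tree discussed below provides.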
Classification techniques are applicable in many areas. Neural networks have been widely used to improve the speed and performance of programs based on heuristics and to check against the corpus of programs already in existence. Young and Smith (1994) elaborated an algorithm that allowed branch correlation to be used in profile-based predictors; this is achieved by encoding the previous results in the value of the program counter. Moreover, Young and Smith discussed history-based branch prediction: they maintained a collection of taken versus not-taken statistics for each branch, where the taken versus not-taken tendencies are tabulated for each execution path that reaches the branch (Calder et al., 1997). Young and Smith (1994) also extended the work of Ball and Larus by applying the Dempster-Shafer theory of evidence, which combines several predictions of a branch's outcome into an estimate of the frequency with which the branch is taken. Nevertheless, Bandyopadhyay et al. (2007) showed that this method is vulnerable to differences in compilers and architectures because it is based on a prior prediction of object code. From these researchers, I learnt that the previous branch is important in predicting the next branch, and that the computation and the method of arriving at the next branch matter as well.
Decision tree
A decision tree is a tool that makes use of nodes and branches and is built from the input data supplied to the system. Decision makers use decision trees when faced with a decision problem, such as choosing one course of action out of several alternatives available to the company. In addition, the tool can help in assessing the constraints associated with that decision. Many organizations use decision trees when making decisions, because they are powerful and widely used tools for classifying alternatives and predicting the outcomes of such decisions, and they have become common in decision-making processes.
Hunt’s algorithm
Hunt's algorithm grows a decision tree in a recursive manner so that the training records are arranged into successively purer subsets. In this algorithm, Dt denotes the set of training records associated with node t, and y = {y1, y2, …, yc} denotes the set of class labels. The recursive definition of Hunt's algorithm is shown below:
- If all the records in Dt belong to the same class yt, then t is a leaf node labeled yt.
- If, on the other hand, Dt contains records that belong to more than one class, an attribute test condition is selected to partition the records into smaller subsets. A child node is created for each outcome of the test condition, and the records in Dt are distributed to the children based on those outcomes. The algorithm is then applied recursively to each child node.
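The two recursive cases above can be sketched in code. This is a minimal illustration, not a production implementation: the record format and attribute names are invented, and the split selection simply takes the first unused attribute rather than choosing the best test condition.

```python
# Minimal sketch of Hunt's algorithm. Case 1: a pure node becomes a leaf.
# Case 2: an attribute test condition partitions the records, and the
# algorithm recurses on each child. Data and attribute names are invented.

def majority(records):
    """Most common class label among a set of records."""
    labels = [r["label"] for r in records]
    return max(set(labels), key=labels.count)

def hunt(records, attributes):
    labels = {r["label"] for r in records}
    # Case 1: all records in Dt belong to the same class -> leaf node.
    if len(labels) == 1:
        return {"leaf": labels.pop()}
    # No attributes left to test: fall back to the majority label
    # (this situation is discussed later in the text).
    if not attributes:
        return {"leaf": majority(records)}
    # Case 2: select an attribute test condition and partition the records.
    attr, rest = attributes[0], attributes[1:]
    partitions = {}
    for r in records:
        partitions.setdefault(r[attr], []).append(r)
    # Apply the algorithm recursively to each child node.
    node = {"split_on": attr, "children": {}}
    for value, subset in partitions.items():
        node["children"][value] = hunt(subset, rest)
    return node

records = [
    {"home_owner": "yes", "label": "no"},
    {"home_owner": "no",  "label": "yes"},
    {"home_owner": "no",  "label": "no"},
    {"home_owner": "no",  "label": "yes"},
]
tree = hunt(records, ["home_owner"])
print(tree)
```

Each recursive call receives a smaller, purer subset of the records, which is exactly the refinement process the definition describes.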
Loan example
For this case, we will use the example of a loan applicant who is assessed to find out whether they are creditworthy. The table clearly shows how this algorithm works.
The table is used to illustrate whether to grant a loan to someone or whether they are likely to default on it. A training set for this problem can be developed from the list of previous borrowers, together with an evaluation of how they behaved and whether they repaid. In the table, each borrower's record indicates whether they have defaulted before.
In the tree, the initial node has the class label No, meaning that most borrowers repaid their loans successfully. The tree needs to be refined because the node contains records belonging to both classes. In the figure that follows (Figure b), the records are subdivided into smaller subsets based on the outcome of the Home Owner test condition that has been applied.
This initial test condition is assumed to be the best criterion available at this point. Hunt's algorithm is then applied to each child of the root node. It is clear from the training set given in the table that all borrowers who are home owners successfully repaid their loans, so that child of the root node is a leaf labeled Defaulted = No. For the right child, Hunt's algorithm must be applied repeatedly until all the records in a node belong to the same class. The trees that result from the recursive method are shown in the diagram.
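Since the borrower table itself is not reproduced here, the small data set below is a hypothetical reconstruction of the kind of data the text describes. It shows why the Home Owner test condition turns one child into a leaf immediately: that partition contains only one class.

```python
# Hypothetical borrower records illustrating the Home Owner split.
# Every home owner repaid, so that partition is pure and becomes a leaf
# labelled Defaulted = No; the other partition is still mixed.

borrowers = [
    {"home_owner": "yes", "defaulted": "no"},
    {"home_owner": "yes", "defaulted": "no"},
    {"home_owner": "no",  "defaulted": "no"},
    {"home_owner": "no",  "defaulted": "yes"},
    {"home_owner": "no",  "defaulted": "no"},
]

owners     = [b for b in borrowers if b["home_owner"] == "yes"]
non_owners = [b for b in borrowers if b["home_owner"] == "no"]

# The owners partition contains a single class -> leaf node.
print({b["defaulted"] for b in owners})      # expect only "no"
# The non-owners partition is still mixed -> recurse on another attribute.
print({b["defaulted"] for b in non_owners})
```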
The algorithm will work if every value of each attribute that is applied appears in the training data, and if each combination of attribute values has a unique class label. These assumptions are too stringent to hold in most real-life situations, so additional conditions are required to handle the situations laid out below:
- It is possible for some nodes to be empty, that is, to have no records associated with them. This can happen when none of the training records have the combination of attribute values associated with such a node. When this happens, the node is declared a leaf with the same class label as the majority of the training records associated with its parent node.
- Second, all the records associated with Dt may have identical attribute values except for the class label. In this case it is not possible to split the records any further, and the node is declared a leaf with the same class label as the majority of the training records associated with it.
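The two fallback rules above can be sketched as a small helper. This is an illustrative sketch with invented names: an empty child inherits the majority label of its parent's records, and a node whose records cannot be split further takes the majority label of its own records.

```python
# Fallback rules for Hunt's algorithm when the stringent assumptions fail.
# All record/field names are hypothetical.
from collections import Counter

def majority_label(records):
    """Most common class label among a set of records."""
    return Counter(r["label"] for r in records).most_common(1)[0][0]

def make_leaf(child_records, parent_records):
    # Rule 1: empty node -> majority label of the parent's records.
    if not child_records:
        return {"leaf": majority_label(parent_records)}
    labels = {r["label"] for r in child_records}
    # Rule 2: identical attributes but mixed labels -> majority of this node.
    if len(labels) > 1:
        return {"leaf": majority_label(child_records)}
    # Otherwise the node is pure and becomes an ordinary leaf.
    return {"leaf": labels.pop()}

parent = [{"label": "no"}, {"label": "no"}, {"label": "yes"}]
print(make_leaf([], parent))   # empty child inherits the parent majority
```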
There are issues that should be addressed when designing decision tree induction. The first is how the records should be split: each recursive step of tree growing must select an attribute test condition that divides the node's records into smaller subsets.
The second design issue is how the splitting procedure should be stopped: a stopping condition is needed so that the tree-growing procedure terminates.
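One standard way to address the first design issue is to choose the test condition that most reduces an impurity measure; the Gini index used below is one common choice (it is the measure used by CART), and the data set is illustrative only.

```python
# Evaluating a candidate split with the Gini index. A pure node has
# impurity 0; the best split is the one with the lowest weighted impurity
# across its children. Records and attribute names are invented.

def gini(records):
    """Gini impurity of a set of labelled records: 1 - sum(p_i ** 2)."""
    n = len(records)
    if n == 0:
        return 0.0
    counts = {}
    for r in records:
        counts[r["label"]] = counts.get(r["label"], 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def weighted_gini_after_split(records, attr):
    """Weighted impurity of the children after partitioning on attr."""
    n = len(records)
    parts = {}
    for r in records:
        parts.setdefault(r[attr], []).append(r)
    return sum(len(p) / n * gini(p) for p in parts.values())

records = [
    {"home_owner": "yes", "label": "no"},
    {"home_owner": "yes", "label": "no"},
    {"home_owner": "no",  "label": "yes"},
    {"home_owner": "no",  "label": "no"},
]
print(gini(records))                                      # impurity before splitting
print(weighted_gini_after_split(records, "home_owner"))   # lower is better
```

A simple stopping condition follows naturally from the same measure: stop expanding a node when its impurity is zero, that is, when all of its records belong to the same class.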