The following is a guide to using tree-based methods in R, based on the corresponding chapter in ‘An Introduction to Statistical Learning’ but using data from the Sloan Digital Sky Survey (SDSS). The aim is to use the five colour bands provided by the SDSS extract, u (ultraviolet), g (green), r (red), and i & z (very-near-infrared), to predict whether the sources in the survey are quasars, stars or white dwarfs. I use a variety of techniques, from simple decision trees to ensemble methods such as random forests.

## Data

Rather than getting the data directly from SDSS and doing the cleaning myself, I’m going to cheat and use a pre-filtered data set used in the book ‘Modern Statistical Methods for Astronomy’, available here. As part of their extract they perform a few cleaning operations, such as removing spatially resolved galaxies, sources with large measurement errors, and sources that are very bright (which could cause saturation) or very faint (with uncertain measurements). They also provide three labelled data sets for training, one each for quasars, stars and white dwarfs.

The colour bands as they stand aren’t particularly useful, since objects of the same class can lie at different distances and therefore have lower flux across all bands. This can be avoided by looking at the ratios of brightness between bands, and since magnitudes are logarithmic units of brightness, we simply take differences of the provided values to get four colour indices: (u-g), (g-r), (r-i) & (i-z).

The following three code chunks extract and clean the training data for all three source types and combine them into a single data frame. Quasars, stars and white dwarfs are given the labels 1, 2 and 3 respectively. There are 5000 stellar objects available for training, but there are over 77,000 quasars and over 10,000 white dwarfs, so I’ve down-sampled each of the latter two to 5000 so that there are equal numbers of each class.

Quasar training set (Class 1):
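A sketch of what this chunk might look like; the file name and magnitude column names (`u_mag` etc.) are assumptions, so adjust them to match your copy of the MSMA extract.

```r
# Load the quasar sample; file name and column names are assumptions.
qso_raw <- read.table("SDSS_QSO.dat", header = TRUE)

# Convert magnitudes to the four colour indices and label the class.
qso <- data.frame(
  u_g   = qso_raw$u_mag - qso_raw$g_mag,
  g_r   = qso_raw$g_mag - qso_raw$r_mag,
  r_i   = qso_raw$r_mag - qso_raw$i_mag,
  i_z   = qso_raw$i_mag - qso_raw$z_mag,
  class = 1
)

# Down-sample the ~77,000 quasars to 5000 for class balance.
set.seed(1)
qso <- qso[sample(nrow(qso), 5000), ]
```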

Star training set (Class 2):
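The stellar sample already contains 5000 objects, so no down-sampling is needed; again, the file and column names here are assumptions.

```r
# Load and transform the star sample (already 5000 objects).
star_raw <- read.table("SDSS_stars.dat", header = TRUE)
star <- data.frame(
  u_g   = star_raw$u_mag - star_raw$g_mag,
  g_r   = star_raw$g_mag - star_raw$r_mag,
  r_i   = star_raw$r_mag - star_raw$i_mag,
  i_z   = star_raw$i_mag - star_raw$z_mag,
  class = 2
)
```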

White dwarf training set (Class 3):
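The white dwarf chunk follows the same pattern, with a down-sample from the ~10,000 available objects; file and column names are assumptions.

```r
# Load and transform the white dwarf sample, then down-sample to 5000.
wd_raw <- read.table("SDSS_wd.dat", header = TRUE)
wd <- data.frame(
  u_g   = wd_raw$u_mag - wd_raw$g_mag,
  g_r   = wd_raw$g_mag - wd_raw$r_mag,
  r_i   = wd_raw$r_mag - wd_raw$i_mag,
  i_z   = wd_raw$i_mag - wd_raw$z_mag,
  class = 3
)
set.seed(1)
wd <- wd[sample(nrow(wd), 5000), ]
```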

Combine the training sets
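Assuming the three per-class data frames sketched above (`qso`, `star`, `wd`), the combined training set can be built like this; making the label a factor tells R’s tree functions to treat the problem as classification rather than regression.

```r
# Stack the three labelled samples into one data frame.
sdss <- rbind(qso, star, wd)
sdss$class <- factor(sdss$class)   # 1 = quasar, 2 = star, 3 = white dwarf
```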

The plot below shows each training class on a bivariate colour-colour scatter plot. There’s plenty of structure to each class, something that tree-based methods should be more than capable of picking up on.
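The colour-colour panels could be drawn as below, assuming the combined `sdss` data frame with the hypothetical column names used in the data chunks.

```r
# Two colour-colour panels, points coloured by class
# (black = quasar, red = star, green = white dwarf).
par(mfrow = c(1, 2))
plot(sdss$u_g, sdss$g_r, col = sdss$class, pch = 20, cex = 0.3,
     xlab = "u-g", ylab = "g-r")
plot(sdss$r_i, sdss$i_z, col = sdss$class, pch = 20, cex = 0.3,
     xlab = "r-i", ylab = "i-z")
```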

## Decision Trees

Decision trees are the most basic tree-based method, and the one on which the majority of other methods are built. They work by splitting the predictor space into regions; each split can be thought of as a branch, and each of the final regions as a leaf.

The default tree library has a simple binary recursive partitioning method for growing regression or classification trees.

Below we split the data into a training and a test set, and train the classifier on the training set.
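A sketch of the split and fit, assuming the `sdss` data frame with the colour-index column names used earlier:

```r
library(tree)

# Half/half train-test split, then grow the classification tree.
set.seed(2)
train <- sample(nrow(sdss), nrow(sdss) / 2)
tree.sdss <- tree(class ~ u_g + g_r + r_i + i_z,
                  data = sdss, subset = train)
summary(tree.sdss)
```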

tree.sdss is our trained classifier. We can plot it to see the major branches and leaves of the tree. The default is pretty cluttered, so I’ve coloured and rotated the text to make it easier to read… not sure if that really helps. The rpart package for building trees has some nicer plotting capabilities but, in the spirit of every undergraduate lab report, ‘that is beyond the purposes of this investigation’.
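The plotting tweaks described above amount to something like the following (the colour and rotation values are illustrative):

```r
plot(tree.sdss)
# pretty = 0 keeps the raw split labels; col/srt are the cosmetic tweaks.
text(tree.sdss, pretty = 0, cex = 0.7, col = "steelblue", srt = 15)
```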

To evaluate our tree, we use it to predict the class of our test data.
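Assuming the `train` index and `sdss` data frame from the earlier chunks:

```r
# type = "class" returns predicted class labels rather than probabilities.
tree.pred <- predict(tree.sdss, newdata = sdss[-train, ], type = "class")
```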

Below is a confusion matrix of the predicted classes against the actual.
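With the hypothetical `tree.pred` predictions in hand, the confusion matrix and overall error rate can be computed as:

```r
conf <- table(predicted = tree.pred, actual = sdss$class[-train])
conf
1 - sum(diag(conf)) / sum(conf)   # overall test error rate
```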

The overall test error rate is 7.3%.

Often the algorithm that builds the tree creates more branches than necessary, which ends up reducing the predictive accuracy of our classifier. To test this I perform cross-validation on the built tree. Specifying FUN = prune.misclass tells cv.tree that we want the cross-validation to be guided by the classification error rate, rather than the default, which is the deviance.
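The cross-validation and the two diagnostic plots could look like this (here `dev` holds the CV misclassification count for each candidate tree size):

```r
set.seed(3)
cv.sdss <- cv.tree(tree.sdss, FUN = prune.misclass)

par(mfrow = c(1, 2))
plot(cv.sdss$size, cv.sdss$dev, type = "b",
     xlab = "Tree size", ylab = "CV error")
plot(cv.sdss$k, cv.sdss$dev, type = "b",
     xlab = "k (cost complexity)", ylab = "CV error")
```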

The left hand plot shows the error rate against tree size, the right against the cost complexity parameter k.

The tree with 10 terminal nodes has the same error rate as the one with 11, so pruning to this size will not reduce the predictive power of the model, but will reduce its complexity.
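Pruning and re-evaluating on the test set, assuming the objects from the earlier chunks:

```r
# Prune back to 10 terminal nodes and re-compute the test error rate.
prune.sdss <- prune.misclass(tree.sdss, best = 10)
prune.pred <- predict(prune.sdss, newdata = sdss[-train, ], type = "class")
mean(prune.pred != sdss$class[-train])
```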

The error rate, 7.3%, is the same, as expected, but the tree is easier to interpret.

## Bagging and Random Forests

Bagging and random forests are both examples of ensemble methods, where many decision trees are combined to improve the prediction accuracy. Both can be implemented using the randomForest package.

Bagging (derived from the full name Bootstrap Aggregation) takes multiple bootstrapped samples from the same training set, builds a tree on each, and averages the resulting ensemble. Bagging uses all of the predictors: setting mtry = 4 states that all 4 predictors should be considered for each split of the tree.
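A sketch of the bagged fit, assuming the `sdss` data frame and `train` index from the earlier chunks:

```r
library(randomForest)

# mtry = 4: consider all four colour indices at every split, i.e. bagging.
set.seed(4)
bag.sdss <- randomForest(class ~ ., data = sdss, subset = train,
                         mtry = 4, importance = TRUE)
bag.pred <- predict(bag.sdss, newdata = sdss[-train, ])
mean(bag.pred != sdss$class[-train])   # test error rate
```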

The test error rate associated with the bagged tree is 2.5%, a significant improvement over the single decision tree.

Random forests are similar to bagged trees, but with a small tweak to the algorithm: when a split is considered, only a random subset of the predictors is made available. This prevents strong features from dominating the root branches of every tree, which would otherwise lead to correlated predictions, as the trees would all look relatively similar. The trees in a random forest ensemble can therefore be thought of as decorrelated.

Growing a random forest proceeds in the same way as bagging, but with a smaller value for mtry. By default, for classification problems randomForest uses $\sqrt{p}$ predictors, so 2 in our case.
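The random forest fit, under the same assumptions about object names as the bagging chunk:

```r
library(randomForest)

# mtry = 2 is the sqrt(p) default for classification, stated explicitly.
set.seed(5)
rf.sdss <- randomForest(class ~ ., data = sdss, subset = train,
                        mtry = 2, importance = TRUE)
rf.pred <- predict(rf.sdss, newdata = sdss[-train, ])
mean(rf.pred != sdss$class[-train])   # test error rate
```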

The test error rate associated with the random forest is 2.37%, a further improvement over the bagged tree.

We can use the importance function to view the importance of each of the variables used as our features. The first measure, MeanDecreaseAccuracy, is the mean decrease in accuracy of predictions on out-of-bag samples when that feature is permuted (the regression analogue is %IncMSE). The second, MeanDecreaseGini, measures the total decrease in node impurity due to splits over that variable, averaged over all trees (IncNodePurity for regression); node impurity is measured by the Gini index for classification trees and by the training RSS for regression trees. varImpPlot plots these importance measures.
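Assuming the fitted `rf.sdss` model from the random forest chunk:

```r
importance(rf.sdss)   # per-variable importance table
varImpPlot(rf.sdss)   # dot plots of the two importance measures
```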

## Boosting

Boosting algorithms for regression and classification problems are different, and I will not provide a full description here (for details, see here). In basic terms, boosting algorithms apply many weak learners sequentially, each fitted to the residuals (i.e. the remaining unexplained data) of the previous trees. The algorithm learns slowly and incrementally, which can lead to a better resulting model, at the cost of extra computation compared to more direct learners.

The gbm function, from the identically named package, is used here to perform Boosting.
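A sketch of the boosted fit; the `n.trees` and `interaction.depth` values here are illustrative, and the data frame name follows the earlier chunks.

```r
library(gbm)

# Multinomial deviance for a 3-class problem; slow, shallow learners.
set.seed(6)
boost.sdss <- gbm(class ~ ., data = sdss[train, ],
                  distribution = "multinomial",
                  n.trees = 5000, interaction.depth = 4)
summary(boost.sdss)   # relative influence of each feature
```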

The plot above shows the relative importance of each feature in the training data. The interaction.depth argument, in the call to gbm, limits the depth of each tree. Here we use a multinomial distribution, as this is a multi-class classification problem; for a binary problem use a Bernoulli distribution, and for a regression use a Gaussian distribution.

Below are some partial dependence plots, which integrate out the other variables to show the marginal effect of selected variables. The black line shows class 1, the red line class 2, and the green line class 3 (quasars, stars & white dwarfs respectively). The peaks of each line show where, in that colour index, the class can be identified most clearly.
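Assuming the fitted `boost.sdss` model and the colour-index column names used earlier, the partial dependence plots can be produced with plot.gbm:

```r
# Marginal effect of single features, other variables integrated out.
par(mfrow = c(1, 2))
plot(boost.sdss, i.var = "u_g")
plot(boost.sdss, i.var = "g_r")
```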

Test error rate associated with boosting is 3.23%. This is actually worse than the bagging and random forest approaches above for this particular data set, and fitting the model also takes roughly an order of magnitude, $\sim\mathcal{O}(10)$, longer.

## Extremely randomized trees

Extremely Randomised Trees (ERTs) are a relatively modern incarnation of random forests. The difference is that, after choosing a random subset of features, the threshold for the split on each feature is also chosen randomly, and the best of these random splits is then selected. The ensemble of trees is again combined to provide the final estimate. This extra randomness reduces the variance of the ensemble at the cost of a small increase in bias.

The extraTrees package in R can execute ERTs. Some of the documentation looks a little rough around the edges, so I’d certainly take a closer look at the source code if you’re doing anything important with it. For our purposes, though, it will suffice.
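A sketch of the ERT fit; note that extraTrees takes a numeric matrix and a factor rather than a formula, and requires a working rJava installation. Object names follow the earlier chunks.

```r
library(extraTrees)

# Build the feature matrix expected by extraTrees.
x <- as.matrix(sdss[, c("u_g", "g_r", "r_i", "i_z")])

set.seed(7)
et.sdss <- extraTrees(x[train, ], sdss$class[train])
et.pred <- predict(et.sdss, x[-train, ])
mean(et.pred != sdss$class[-train])   # test error rate
```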

Test error rate associated with ERTs is 2.2%, the best of all the approaches demonstrated here.

## SDSS ‘test’ data

The source of the data used in this post, the textbook ‘Modern Statistical Methods for Astronomy’, makes another set of SDSS data available, named ‘test data’, that consists of 17,000 sources. Unfortunately it doesn’t have any associated source classes, making it a pretty useless test set! However, it is still useful to apply our models to it and inspect the results. Here I use the random forest model, since it has one of the best error rate to complexity ratios. Below is a colour-colour plot similar to the one made at the start of the workbook for the training data (repeated below for easier comparison).
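Scoring and plotting the unlabelled extract might look like this; the file and column names are assumptions, as with the training chunks, and `rf.sdss` is the random forest fitted earlier.

```r
# Load the unlabelled 'test' extract and build the colour indices.
test_raw <- read.table("SDSS_test.dat", header = TRUE)
test <- data.frame(
  u_g = test_raw$u_mag - test_raw$g_mag,
  g_r = test_raw$g_mag - test_raw$r_mag,
  r_i = test_raw$r_mag - test_raw$i_mag,
  i_z = test_raw$i_mag - test_raw$z_mag
)

# Predict a class for every source, then plot as before.
test$class <- predict(rf.sdss, newdata = test)
par(mfrow = c(1, 2))
plot(test$u_g, test$g_r, col = test$class, pch = 20, cex = 0.3,
     xlab = "u-g", ylab = "g-r")
plot(test$r_i, test$i_z, col = test$class, pch = 20, cex = 0.3,
     xlab = "r-i", ylab = "i-z")
```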

SDSS point sources test dataset, N=17,000 (mag<21, point sources, hi-qual)

There are a few interesting features here. Firstly, there are hardly any white dwarfs identified. This could be because there aren’t very many in this data set, or because the algorithm is failing to pick them out. In the u-g/g-r plot on the left-hand side there is also a clear vertical boundary in the red stellar classification. This lines up exactly with where the stars with the lowest u-g colour index lie in the training set, suggesting that our model isn’t able to classify stars beyond this range.

Given that we don’t know what is actually in the ‘test’ data set, it’s hard to draw any firm conclusions from it, but it does highlight some of the limitations of such learning algorithms, namely that they are very bad at predicting beyond the range of data they’ve been trained on; tree-based models in particular cannot extrapolate outside the domain of the training set, so its coverage matters as much as any overfitting to it.

All of the code used to produce this post is available here. Thanks for reading.