User 3147 | 3/7/2016, 2:28:57 PM
Hi all! What about extending the GLC classifiers with a class that implements a meta-estimator capable of fitting a number of decision trees on various sub-samples of the data set, resembling (or generalizing) the "ExtraTreesClassifier" of the sklearn library? I guess that would be really useful for data-mining problems where fitting decision trees is required but the provided data sets are heterogeneous in the dtypes of their variables (attributes), i.e. they mix a plethora of float, str and int columns. Of course, one has to apply all the necessary transformations to bring the data set into a friendly and useful form before training a GLC "randomforest" or "boostedtrees_classifier", but is that enough?
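For reference, here is a minimal sketch of the two steps the question touches on, written with scikit-learn (the library the post compares against; GraphLab Create's own API is not assumed, and the toy column names are made up): first bring a mixed-dtype table into numeric form, then fit a bagged ensemble of decision trees where each tree sees only a random sub-sample of the rows.

```python
import pandas as pd
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy data set mixing float, str and int columns, as in the question.
df = pd.DataFrame({
    "height": [1.7, 1.6, 1.8, 1.5, 1.75, 1.65] * 20,                # float
    "color":  ["red", "blue", "red", "green", "blue", "red"] * 20,  # str
    "count":  [3, 1, 4, 1, 5, 9] * 20,                              # int
    "label":  [0, 1, 0, 1, 0, 1] * 20,
})

# One-hot encode the string column so every feature is numeric.
X = pd.get_dummies(df.drop(columns="label"))
y = df["label"]

# 30 decision trees, each fit on a random 80% sub-sample of the rows;
# their votes are aggregated at predict time.
model = BaggingClassifier(
    DecisionTreeClassifier(),  # base estimator, passed positionally
    n_estimators=30,
    max_samples=0.8,
    random_state=0,
)
model.fit(X, y)
acc = model.score(X, y)
```

`ExtraTreesClassifier` goes one step further by also randomizing the split thresholds inside each tree, but the bagging-over-sub-samples idea above is the common core.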