We have been investigating the algorithms you mentioned, and we're at the stage of gathering data on specific use cases and domains. The challenge is that many of these algorithms perform poorly in most ML contexts (though a few domains suit them well), so we've put more emphasis on making sure our algorithms hold up when not all of the features are informative.
One method that is both effective and widely used is to take the feature importance scores from a boosted tree model and use them to trim the feature space. These scores tend to be a reliable signal for feature selection and carry over to a wide range of use cases.
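As a rough sketch of that approach, here's one way to do it with scikit-learn's `GradientBoostingClassifier` and `SelectFromModel` (the synthetic dataset, the `"median"` threshold, and the hyperparameters are all illustrative choices, not a prescription):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectFromModel

# Synthetic data where only 5 of 20 features are informative,
# mimicking the "not all features are informative" setting.
X, y = make_classification(
    n_samples=500, n_features=20, n_informative=5,
    n_redundant=0, random_state=0,
)

# Fit a boosted tree model; its feature_importances_ attribute
# gives a per-feature score we can threshold on.
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Keep only features whose importance is at or above the median,
# dropping the less informative half of the feature space.
selector = SelectFromModel(model, threshold="median", prefit=True)
X_trimmed = selector.transform(X)

print(X.shape, "->", X_trimmed.shape)
```

From there you'd refit your downstream model on `X_trimmed`; in practice you'd also want to validate the threshold (e.g. via cross-validation) rather than trusting the median cut blindly.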
Hope that helps!