User 5236 | 5/29/2016, 12:30:47 PM
I'm trying to use Dato's pretrained ImageNet model (http://s3.amazonaws.com/GraphLab-Datasets/deeplearning/imagenetmodeliter45) to evaluate whether its internal layers can be used as feature vectors for classification.
I have some questions: 1. What architecture is it based on? 2. Is there a way to programmatically (using the API) query its internal structure (i.e., get the type and parameters of each layer)? 3. I can see in the example on the site (http://blog.dato.com/deep-learning-blog-post) that the images need to be resized to 256x256x3, yet the input layer (node?) appears to have shape 200,3,227,227. I'm guessing (please correct me if I'm wrong) that this means a mini-batch of 200 individual RGB images, each 227x227 pixels? If so, is a central crop used for classification, or is the image further resized internally? Are random crops/mirroring used during training?
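For reference, if a central crop is what maps the resized 256x256 image to the 227x227 input, it would look roughly like this. This is a generic numpy sketch of the idea, not GraphLab's actual preprocessing code, and the 256/227 sizes are taken from the question above:

```python
import numpy as np

def center_crop(image, size):
    """Return the central size x size patch of an H x W x C image."""
    h, w = image.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return image[top:top + size, left:left + size]

# A resized 256x256 RGB image cropped to the presumed 227x227 network input.
img = np.zeros((256, 256, 3), dtype=np.uint8)
crop = center_crop(img, 227)
print(crop.shape)  # (227, 227, 3)
```

A mini-batch would then stack 200 such crops and transpose to channels-first, giving the (200, 3, 227, 227) shape seen in the input layer.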
Thanks for any clarification, Ran