User 2488 | 11/3/2015, 11:52:06 PM
At a high level, I'm wondering if anyone can explain the difference between what happens under the hood when I pickle a model versus save one that I've trained and built with GraphLab's algorithms. My specific use case is to get a sense of how big the models I'm building will be, so I can plan for future deployment: the number of servers needed, how large those servers should be, etc. A rough sketch of how I've been thinking about measuring this is below.
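(This is only an illustrative sketch, not something I'm claiming is the right way to do it. The toy SFrame, the `logreg_model` directory name, and the `dir_size_bytes` helper are all made up for the example; `model.save()` and `gl.logistic_classifier.create()` are the GraphLab Create calls I've been using, and the pickle line assumes the model object can be pickled at all, which is part of my question.)

```python
import os
import pickle

import graphlab as gl

# Toy training data just so the snippet is self-contained; in practice this
# would be whatever SFrame the real model was trained on.
sf = gl.SFrame({'x1': [1.0, 2.0, 3.0, 4.0], 'label': [0, 0, 1, 1]})
model = gl.logistic_classifier.create(sf, target='label')

def dir_size_bytes(path):
    """Sum the sizes of every file under a saved-model directory."""
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

# GraphLab's native save() writes the model out as a directory on disk,
# so its footprint is the total size of the files in that directory.
model.save('logreg_model')
print('model.save size: %d bytes' % dir_size_bytes('logreg_model'))

# Pickling (assuming the model object supports it) produces a single byte
# string, whose length is a quick proxy for serialized size.
print('pickle size:     %d bytes' % len(pickle.dumps(model)))
```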
While we're on the topic, I barely know anything about how pickling works. As best I can tell, pickling a model takes the entire object and serializes it into a byte stream, which means it would store more than just the values of the coefficients (for a logistic regression classifier, say). If I'm wrong here, I'd definitely love some feedback.
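To make concrete what I mean by "more than just the coefficients", here's a toy, non-GraphLab example of what I understand pickle to do: it captures the object's entire state, so any extra attributes hanging off the model come along for the ride (the class and attribute names here are invented purely for illustration).

```python
import pickle

class ToyClassifier(object):
    """Stand-in for a trained model: coefficients plus other attached state."""
    def __init__(self):
        self.coefficients = [0.3, -1.2, 0.7]
        self.feature_names = ['age', 'income', 'clicks']
        self.training_log = ['iter 1: loss=0.69', 'iter 2: loss=0.52']

model = ToyClassifier()
blob = pickle.dumps(model)      # the whole object graph, as a byte string
restored = pickle.loads(blob)

# Everything attached to the object round-trips, not just the coefficients.
print(restored.coefficients)
print(restored.feature_names)
print(restored.training_log)
print('serialized size: %d bytes' % len(blob))
```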
Also, is it true that GraphLab now supports pickling, per the following post? http://forum.dato.com/discussion/812/sframe-sgraph-and-sarray-cannot-be-pickled