ValueError when running Dato Distributed

User 3531 | 3/14/2016, 6:16:02 AM

After deploying Dato Distributed, I ran the example in the README, but the following error arose.


ValueError                                Traceback (most recent call last)
<ipython-input-5-e0a6be37290b> in <module>()
      3 c = gl.deploy.hadoop_cluster.create(name='test-cluster',
      4                                     dato_dist_path='hdfs://219.245.186.219:8020/graphlab',
----> 5                                     hadoop_conf_dir='/mirror/hadoop-2.6.4/etc/hadoop')
      6
      7 def echo(input):

/mirror/anaconda2/envs/dato-env/lib/python2.7/site-packages/graphlab/deploy/hadoop_cluster.pyc in create(name, dato_dist_path, hadoop_conf_dir, num_containers, container_size, num_vcores, start_port, end_port, additional_packages)
    124     cluster = HadoopCluster(name, dato_dist_path, hadoop_conf_dir,
    125                             num_containers, container_size, num_vcores,
--> 126                             additional_packages)
    127
    128     # Save to local session and overwrite if exists

/mirror/anaconda2/envs/dato-env/lib/python2.7/site-packages/graphlab/deploy/hadoop_cluster.pyc in __init__(self, name, dato_dist_path, hadoop_conf_dir, num_containers, container_size, num_vcores, additional_packages)
    341         self.hadoop_conf_dir = hadoop_conf_dir
    342
--> 343         config = self._read_cluster_state()
    344         self.num_containers = num_containers if num_containers is not None else \
    345             config.getint('runtime', 'num_containers')

/mirror/anaconda2/envs/dato-env/lib/python2.7/site-packages/graphlab/deploy/hadoop_cluster.pyc in _read_cluster_state(self)
    435                                  hadoop_conf_dir=self.hadoop_conf_dir):
    436             raise ValueError('Path "%s" does not seem like a valid Dato Distributed '
--> 437                              'installation.' % self.dato_dist_path)
    438
    439         file_util.download_from_hdfs(

ValueError: Path "hdfs://219.245.186.219:8020/graphlab" does not seem like a valid Dato Distributed installation.

What's wrong with it? Thanks!
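
For reference, the step that fails is the cluster-creation call from the README example. A minimal sketch of that call is below, using the paths from the traceback; the comments describe what the validity check appears to require and are assumptions based on the error message, not confirmed behaviour of the library.

# Minimal sketch of the failing step (paths are from the traceback above;
# the comments are assumptions, not documented behaviour).
import graphlab as gl

# dato_dist_path should point at the HDFS directory where the Dato Distributed
# setup script deployed its files. If that directory is missing, empty, or was
# deployed against a different namenode than the one hadoop_conf_dir describes,
# create() apparently raises the ValueError shown above.
# A quick sanity check from a shell:
#     hadoop fs -ls hdfs://219.245.186.219:8020/graphlab
c = gl.deploy.hadoop_cluster.create(
    name='test-cluster',
    dato_dist_path='hdfs://219.245.186.219:8020/graphlab',
    hadoop_conf_dir='/mirror/hadoop-2.6.4/etc/hadoop')  # should match core-site.xml on every node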

Comments

User 3531 | 3/14/2016, 6:46:24 AM

I have fixed it. The problem happened because "core-site.xml" differed among the nodes. Now a new problem has come up: "ImportError: No module named _bsddb". I'm trying to fix it!
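
For anyone who hits the same follow-up error: _bsddb is the C extension behind Python 2's standard-library bsddb module, so the ImportError usually means the Python inside the dato-env conda environment was built without it. Below is a small diagnostic sketch; the remedy mentioned in its comments is an assumption, not something confirmed in this thread.

# Quick diagnostic for the _bsddb ImportError (standard library only).
from __future__ import print_function
import sys

print('Python executable:', sys.executable)   # confirm this is dato-env's python
try:
    import bsddb                               # Python 2 stdlib wrapper around the _bsddb C extension
    print('bsddb OK:', bsddb.__file__)
except ImportError as err:
    # Usually means this Python build ships without the _bsddb extension;
    # installing a BerkeleyDB-backed bsddb package into the environment is a
    # common workaround (the exact package name depends on the conda/pip setup).
    print('bsddb missing:', err)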


User 1207 | 3/14/2016, 6:30:05 PM

Hello @qyy0180,

Thanks for looking into this! I've passed this on to the relevant engineers here so we can hopefully make things more robust, or at least give a better error message, in this case. Thanks!

-- Hoyt