lambda workers did not start

User 425 | 7/4/2014, 2:40:12 AM

Hi all! Wanted to let you guys know that when running the recommendation example I got to:

ix = sf['user_id'].apply(lambda x: x in userset, int)

and when trying to apply it to the dataset: user_data = sf[ix]

I got a RuntimeError:

PROGRESS: First Use of Python Lambda: Starting Lambda Workers. This might take a few seconds.
<repr(graphlab.data_structures.sarray.SArray at 0x10eebcf10) failed>
RuntimeError: Runtime Exception: 0. Unable to evaluate lambdas. Python lambda workers did not start
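For reference, here is a minimal sketch of the pattern that triggers this (the first .apply() with a Python function is what starts the lambda workers), plus a lambda-free equivalent that might sidestep the problem, assuming SFrame.filter_by is available in this GraphLab Create version; userset is just a plain Python set of user ids:

    # The lambda route: the first .apply() call launches the pylambda workers.
    ix = sf['user_id'].apply(lambda x: x in userset, int)
    user_data = sf[ix]   # keep rows where the lambda returned 1

    # Possible lambda-free equivalent (assumption: filter_by exists in this release):
    user_data = sf.filter_by(list(userset), 'user_id')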

Any suggestions you may have would be greatly appreciated. Also wanted to say I'm really excited about your project. Looking forward to trying it out in the world!

Comments

User 14 | 7/4/2014, 3:14:40 AM

Hi,

Can you please provide more information? What version of GraphLab Create are you using? What is the platform that GraphLab Create is running on?

Thanks!


User 190 | 7/18/2014, 3:05:15 PM

Hello, I'm having the same error on my end.

GraphLab Create version: 0.3, running on Ubuntu 14.04.

The same analysis runs just fine on another computer, with similar specs.

I'm about to install inside a virtualenv.


User 190 | 7/18/2014, 4:28:59 PM

Just installed in a virtual env, had the same issue.

When I ran 'top', it appeared that I was running out of RAM when I tried to perform a pre-processing function.


User 14 | 7/18/2014, 5:19:57 PM

How much RAM do you have, and what is the memory usage before invoking the first lambda function?


User 190 | 7/18/2014, 6:06:13 PM

Now, I'm not so sure about the above... I just made some tweaks and ran it again. I didn't even come close to running out of memory, but the lambda workers still failed to start.

here's the code that's failing to start the workers:

import numpy as np

def remove_bad_values(sframe):
    # Replace infinities with 0 in every int or float column.
    names = list(enumerate(sframe.column_names()))
    types = list(enumerate(sframe.column_types()))

    for column in names:
        col_type = types[column[0]][1]
        if col_type == int or col_type == float:
            sframe[column[1]] = sframe[column[1]].apply(lambda x: 0 if x == np.inf else x)

    return sframe
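For example, calling it on a tiny SFrame (hypothetical data, just to illustrate what the function does):

    import graphlab
    import numpy as np

    small = graphlab.SFrame({'a': [1.0, np.inf, 3.0], 'b': ['x', 'y', 'z']})
    small = remove_bad_values(small)   # the inf in float column 'a' should become 0; 'b' is untouched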

and here's the contents of the log:

PROGRESS: (start:38): First Use of Python Lambda: Starting Lambda Workers. This might take a few seconds.
INFO: (start:43): Start pylambda worker at ipc:///var/tmp/graphlab-ashleybt/23721/000561 using binary: /home/ashleybt/graphlab/lib/python2.7/site-packages/graphlab/pylambda_worker
INFO: (start:53): pid = -1
ERROR: (start:56): Fail forking pylambda_worker at address: ipc:///var/tmp/graphlab-ashleybt/23721/000561. Error: Cannot allocate memory

The thing is, there are about 2 GB of memory available, so I'm confused as to why it won't allocate.


User 190 | 7/18/2014, 6:19:46 PM

Fixed it. I just needed to add a bunch of swap memory to deal with it. Apparently forking the lambda worker requires quite a bit of memory, and the extra swap gave it the headroom it needed.


User 951 | 11/16/2014, 10:48:38 PM

I am also experiencing trouble applying Lambda functions to a SFrame. A minimal example of my error and system info is linked in the IPython notebook below:

http://nbviewer.ipython.org/github/phillip-pope/graphlab/blob/master/graphlab-lambda-worker-error.ipynb

My environment does not appear to be properly utilizing swap memory. Allocating more swap memory (+64 GB), as ashbt did, did not help.


User 14 | 11/17/2014, 6:34:48 PM

Hi,

We are aware of the issue that the lambda worker does not work properly with conda Python, but we believe we are close to getting it addressed. Meanwhile, can you try using virtualenv with the regular system Python?

Thanks, Jay


User 951 | 11/18/2014, 2:23:53 PM

Installing graphlab with virtualenv and the system python works. Thanks!


User 2054 | 6/15/2015, 3:12:39 PM

Trying to work through the "Bringing Deep Learning to the Grocery Store" example, but unfortunately getting the same error at the dedup step: Unable to evaluate lambdas. lambda workers did not start

Using a virtualenv with python 2.7.10

Memory usage is quite minimal; I have around 12 GB unused.


User 1207 | 6/15/2015, 8:43:07 PM

Hi gavanderlinden,

It could be a number of things. What version of GraphLab Create are you using, and what platform / OS are you using?

-- Hoyt

User 32 | 6/22/2015, 5:50:10 AM

I am also having an issue with lambda workers. For the command below, I am getting this error.

GraphLab Create version is 1.4 and the platform is OS X 10.9.3.

sf2['tokens'] = sf2['X1'].apply(lambda x: x.split())

RuntimeError                              Traceback (most recent call last)
<ipython-input-25-679b7f9cf0bc> in <module>()
      1 sf2
      2 # Split the sf2['X1'] by space
----> 3 sf2['tokens'] = sf2['X1'].apply(lambda x: x.split())

/Users/devendra/anaconda/lib/python2.7/site-packages/graphlab/data_structures/sarray.pyc in apply(self, fn, dtype, skip_undefined, seed, _lua_translate)
   1423
   1424         with cython_context():
-> 1425             return SArray(_proxy=self.__proxy__.transform(fn, dtype, skip_undefined, seed))
   1426
   1427

/Users/devendra/anaconda/lib/python2.7/site-packages/graphlab/cython/context.pyc in __exit__(self, exc_type, exc_value, traceback)
     29     def __exit__(self, exc_type, exc_value, traceback):
     30         if not self.show_cython_trace and exc_type:
---> 31             raise exc_type(exc_value)

RuntimeError: Runtime Exception. Unable to evaluate lambdas. lambda workers did not start

Thanks !


User 2002 | 6/23/2015, 10:49:54 PM

Hi @Devendra - Sorry you're hitting this issue. We believe it may be related to your GraphLab Create installation. Did you use 'sudo' to install GLC into your system Python installation? If so, can you try installing GLC in a virtualenv environment or using the Conda Python distribution?

Let us know if this addresses the issue.

Thanks, Punit


User 954 | 7/10/2015, 6:29:03 AM

This issue is very similar to the following: https://github.com/conda/conda/issues/1367. If you are using conda and you have Python 2.7.10, downgrade with "conda install python=2.7.9" and the lambda workers start working again.


User 2298 | 9/23/2015, 4:06:30 PM

Has the lambda issue been resolved?


User 2298 | 9/23/2015, 4:07:58 PM

I'm running benchmark code from the Kaggle competition and I keep getting this error.

It is on the line - sf['text_clean'] = sf['text'].apply(lambda x: re.sub(r'[,]+', ';', ' '.join(x)))


User 940 | 9/23/2015, 6:05:09 PM

Hi @Jai,

Which OS and version of graphlab are you using?

Cheers! -Piotr


User 2298 | 10/3/2015, 2:31:46 PM

Hi Piotr! Thanks for picking up the question. I use Windows 7 Pro. I might have solved the problem: I was running low on disk space, though I'm not sure if that is related. I cleaned up my disk, tried the whole thing again, and I haven't seen the issue so far.

I do have another question, though. When my classifier code tries to load data into an SFrame, it gets stuck, and memory usage at that point is ~4.5 GB. I have left it alone for a couple of days and it just doesn't do anything. I checked the logs, and there's nothing there either. I killed the process and restarted, and the same thing happens. Up until the stuck point, the code loads about 1000 rows per second, but right around that memory usage it stops. Can you please help?

Thanks, Jai


User 2298 | 10/3/2015, 2:33:05 PM

BTW, the 4.5 GB is for the unity_server process.


User 940 | 10/6/2015, 6:21:00 PM

Hi @Jai,

Thanks for the report. We're currently investigating, and will keep you posted.

Cheers! -Piotr


User 940 | 10/7/2015, 6:20:47 PM

Hi @Jai ,

At what point does the parsing fail? Is it in sampleSubmission.csv?

Cheers! -Piotr


User 940 | 10/7/2015, 6:47:57 PM

Hi @Jai,

Looking through the classifier.py code in the benchmark, it looks like we're trying to perform TF-IDF on a very large dataset. I suspect you could wait and it will (eventually) finish, or you could prune infrequent words so TF-IDF takes less time.
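A rough sketch of that pruning idea, assuming graphlab.text_analytics.count_words, SArray.dict_trim_by_values, and graphlab.text_analytics.tf_idf are available in your GLC version (the column name and the threshold are illustrative):

    import graphlab

    docs = graphlab.text_analytics.count_words(sf['text'])   # bag-of-words dict per document
    docs = docs.dict_trim_by_values(lower=3)                  # drop words seen fewer than 3 times in a document
    sf['tfidf'] = graphlab.text_analytics.tf_idf(docs)        # note: in some releases tf_idf returns an SFrame instead of an SArray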

I hope this helps!

Cheers! -Piotr


User 2409 | 10/14/2015, 12:05:04 PM

Is there a fix yet for 2.7.10?


User 3251 | 5/18/2016, 12:32:45 PM

Seeing this issue with GraphLab 1.6.1 and Python 2.7.11. It was working fine with Python 2.7.10.


User 1207 | 5/18/2016, 5:48:59 PM

@shardool -- We fixed a bunch of issues with GLC in recent versions. Do you still have problems after updating to GLC 1.9?

-- Hoyt