User 1036 | 12/8/2014, 6:53:44 AM

I am writing a program using GraphLab in which I create some edges over n vertices. When I dump the final result, in addition to the edges I created I want to dump all the other edges with weight 0. For example, suppose I have 3 vertices and have created the edges 1→2 and 2→3. In the final dump I also need to show the edges 1→3, 2→1, 3→1, and 3→2, each with value 0. How can I do this?

User 6 | 12/8/2014, 9:54:20 AM

This sounds like something that is really inefficient to do. Imagine you have 1,000,000 nodes in your graph: listing all possible ordered pairs would produce around 1,000,000,000,000 edges!

You can always write Python code to enumerate the edges, but this is not efficient. Why do you need this output?
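For what it's worth, a minimal sketch of such an enumeration might look like the following. This assumes the known edge weights are kept in a plain Python dict keyed by ordered vertex pairs (a hypothetical representation, not GraphLab's own data structure); every missing pair is emitted with weight 0. Note the O(n²) cost that makes this impractical for large graphs:

```python
from itertools import permutations

def dump_all_edges(n, weights):
    """Yield every ordered pair (i, j), i != j, over vertices 1..n,
    using the known weight if present and 0 otherwise.

    `weights` maps (source, target) tuples to edge weights.
    This touches all n*(n-1) ordered pairs, so it is O(n^2).
    """
    for i, j in permutations(range(1, n + 1), 2):
        yield i, j, weights.get((i, j), 0)

# Example: 3 vertices, known edges 1->2 and 2->3
known = {(1, 2): 1.0, (2, 3): 1.0}
for i, j, w in dump_all_edges(3, known):
    print(i, j, w)
```

This prints all six ordered pairs, with the four pairs absent from `known` getting weight 0.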

Thanks,

User 1036 | 12/8/2014, 6:15:31 PM

Actually, my problem is that I am writing code for semi-supervised learning using the graph Laplacian (http://papers.nips.cc/paper/2506-learning-with-local-and-global-consistency.pdf). I looked at the code in https://github.com/graphlab-code/graphlab/blob/master/toolkits/clustering/graph_laplacian_for_sc.cpp, and we have used many features of this file. After calculating the similarity between the vertices, as the algorithm specifies, I have to solve a linear equation. As edges I have taken only the neighbouring edges, but for the linear equation I need all the remaining edges, whose values are 0. That is why I am looking for a solution to the problem I asked above. I have also asked in my other thread whether I need to include these zero values, or whether I can just ignore them and solve the linear equation with the similarity matrix using the Jacobi formula.

I hope my question is clear :)

User 6 | 12/8/2014, 7:47:01 PM

This is not true: to solve a linear system you can use the Jacobi method, which is a sparse solver, so there is no need to express the zero values. See <a href="http://docs.graphlab.org/linear_solvers.html">here</a>.
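To illustrate why the zero entries never need to be stored, here is a minimal standalone sketch of the Jacobi iteration over a sparse row representation (this is an assumption for illustration only, not GraphLab's toolkit API): each row keeps only its nonzero off-diagonal entries, so absent edges simply never contribute to the sum.

```python
def jacobi(rows, diag, b, iters=100):
    """Solve A x = b by Jacobi iteration:
        x_{k+1}[i] = (b[i] - sum_{j != i} A[i][j] * x_k[j]) / A[i][i]

    `rows[i]` is a dict holding only the nonzero off-diagonal entries
    of row i, and `diag[i]` is A[i][i]. Zero entries are never stored
    or touched, which is exactly why a dense edge dump is unnecessary.
    Convergence is guaranteed when A is strictly diagonally dominant.
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(a * x[j] for j, a in rows[i].items())) / diag[i]
             for i in range(n)]
    return x

# Diagonally dominant 3x3 system with solution [1, 1, 1]:
#   4x + y     = 5
#   x + 4y + z = 6
#       y + 4z = 5
rows = [{1: 1.0}, {0: 1.0, 2: 1.0}, {1: 1.0}]
diag = [4.0, 4.0, 4.0]
b = [5.0, 6.0, 5.0]
print(jacobi(rows, diag, b))  # approximately [1.0, 1.0, 1.0]
```

Only the three nonzero off-diagonal entries are stored here; the zero entries of the matrix play no role in the iteration at all.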