
node2vec

node2vec is a semi-supervised algorithmic framework for learning continuous feature representations of nodes in networks. It maps nodes to a low-dimensional feature space in a way that maximizes the likelihood of preserving the network neighborhoods of nodes. By using a biased random walk procedure, it can explore diverse neighborhoods. node2vec achieves strong results on tasks such as multi-label classification and link prediction.

The node2vec algorithm was inspired by a similar NLP technique: just as a document is an ordered sequence of words, sampling sequences of nodes turns the underlying network into an ordered sequence of nodes. Although the idea of sampling is simple, choosing the actual strategy can be challenging and depends on the techniques that will be applied afterward.

Capturing information in networks often shuttles between two kinds of similarities: homophily and structural equivalence. Under the homophily hypothesis, nodes that are highly interconnected and belong to similar network clusters or communities should be embedded closely together. In contrast, under the structural equivalence hypothesis, nodes that have similar structural roles in networks should be embedded closely together (e.g., nodes that act as hubs of their corresponding communities).

The current implementation can capture either homophily or structural equivalence simply by changing its hyperparameters.

BFS and DFS sampling strategies play a key role in producing representations that reflect either of the above equivalences. The neighborhoods sampled by BFS lead to embeddings that correspond closely to structural equivalence. DFS, in contrast, can explore larger parts of the network because it can move further away from the source node. DFS-sampled walks therefore reflect a macro-view of the neighborhood, which is essential for inferring communities based on homophily.

Two parameters:

  • the return parameter p
  • and the in-out parameter q

decide whether sampling prioritizes the BFS or the DFS strategy. If p is smaller than 1, the walks become more BFS-like and capture more structural equivalence. Conversely, if q is smaller than 1, the walks become more DFS-like and capture homophily.
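Concretely, one step of the second-order walk weights each candidate neighbor by 1/p if it would return to the previous node, by 1 if it is also a neighbor of the previous node, and by 1/q otherwise, so small p biases the walk toward staying local (BFS-like) and small q biases it toward moving outward (DFS-like). A minimal sketch, assuming an unweighted adjacency-dict graph (the helper names here are illustrative, not the module's actual code):

```python
import random

def biased_step(graph, prev, curr, p, q):
    """Pick the next node of a node2vec walk using the p/q bias."""
    neighbours = graph[curr]
    weights = []
    for nxt in neighbours:
        if nxt == prev:                # returning to the previous node
            weights.append(1.0 / p)
        elif nxt in graph[prev]:       # still adjacent to the previous node
            weights.append(1.0)
        else:                          # moving further away from prev
            weights.append(1.0 / q)
    return random.choices(neighbours, weights=weights)[0]

def node2vec_walk(graph, start, walk_length, p, q):
    """Sample one biased walk of the given length starting at `start`."""
    walk = [start]
    # The first step is uniform: there is no previous node yet.
    walk.append(random.choice(graph[start]))
    while len(walk) < walk_length:
        walk.append(biased_step(graph, walk[-2], walk[-1], p, q))
    return walk
```

With p = 2.0 and q = 0.5 (as in the usage example below), the 1/q weight dominates and the walk tends to wander away from its source, favoring homophily.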

[1] node2vec: Scalable Feature Learning for Networks, A. Grover, J. Leskovec


| Trait           | Value               |
| --------------- | ------------------- |
| Module type     | module              |
| Implementation  | Python              |
| Graph direction | directed/undirected |
| Edge weights    | weighted/unweighted |
| Parallelism     | sequential          |
Too slow?

If this algorithm implementation is too slow for your use case, contact us and request a rewrite to C++!

Procedures

info

If you want to execute this algorithm on graph projections, subgraphs or portions of the graph, be sure to check out the guide on How to run a MAGE module on subgraphs.

get_embeddings(is_directed, p, q, num_walks, walk_length, vector_size, alpha, window, min_count, seed, workers, min_alpha, sg, hs, negative, epochs)

Input:

  • is_directed : boolean ➡ If True, the graph is treated as directed; otherwise as undirected.
  • p : float ➡ Return hyperparameter for calculating transition probabilities.
  • q : float ➡ In-out hyperparameter for calculating transition probabilities.
  • num_walks : integer ➡ Number of walks per node in walk sampling.
  • walk_length : integer ➡ Length of one walk in walk sampling.
  • vector_size : integer ➡ Dimensionality of the word vectors.
  • window : integer ➡ Maximum distance between the current and predicted word within a sentence.
  • min_count : integer ➡ Ignores all words with total frequency lower than this.
  • workers : integer ➡ Use this many worker threads to train the model (faster training on multicore machines).
  • sg : {0, 1} ➡ Training algorithm: 1 for skip-gram; otherwise CBOW.
  • hs : {0, 1} ➡ If 1, hierarchical softmax will be used for model training. If 0, and negative is non-zero, negative sampling will be used.
  • negative : integer ➡ If > 0, negative sampling will be used; the value specifies how many "noise words" should be drawn (usually between 5 and 20). If set to 0, no negative sampling is used.
  • cbow_mean : {0, 1} ➡ If 0, use the sum of the context word vectors. If 1, use the mean. Only applies when CBOW is used.
  • alpha : float ➡ The initial learning rate.
  • min_alpha : float ➡ The learning rate will drop linearly to min_alpha as training progresses.
  • seed : integer ➡ Seed for the random number generator. Initial vectors for each word are seeded with a hash of the concatenation of word + str(seed).

Output:

  • nodes: mgp.List[mgp.Vertex] ➡ List of nodes for which embeddings were calculated
  • embeddings: mgp.List[mgp.List[mgp.Number]] ➡ Corresponding list of embeddings

Usage:

CALL node2vec.get_embeddings(False, 2.0, 0.5, 4, 5, 100, 0.025, 5, 1, 1, 1, 0.0001, 1, 0, 5, 5);

set_embeddings(is_directed, p, q, num_walks, walk_length, vector_size, alpha, window, min_count, seed, workers, min_alpha, sg, hs, negative, epochs)

Input:

  • is_directed : boolean ➡ If True, the graph is treated as directed; otherwise as undirected.
  • p : float ➡ Return hyperparameter for calculating transition probabilities.
  • q : float ➡ In-out hyperparameter for calculating transition probabilities.
  • num_walks : integer ➡ Number of walks per node in walk sampling.
  • walk_length : integer ➡ Length of one walk in walk sampling.
  • vector_size : integer ➡ Dimensionality of the word vectors.
  • window : integer ➡ Maximum distance between the current and predicted word within a sentence.
  • min_count : integer ➡ Ignores all words with total frequency lower than this.
  • workers : integer ➡ Use this many worker threads to train the model (faster training on multicore machines).
  • sg : {0, 1} ➡ Training algorithm: 1 for skip-gram; otherwise CBOW.
  • hs : {0, 1} ➡ If 1, hierarchical softmax will be used for model training. If 0, and negative is non-zero, negative sampling will be used.
  • negative : integer ➡ If > 0, negative sampling will be used; the value specifies how many "noise words" should be drawn (usually between 5 and 20). If set to 0, no negative sampling is used.
  • cbow_mean : {0, 1} ➡ If 0, use the sum of the context word vectors. If 1, use the mean. Only applies when CBOW is used.
  • alpha : float ➡ The initial learning rate.
  • min_alpha : float ➡ The learning rate will drop linearly to min_alpha as training progresses.
  • seed : integer ➡ Seed for the random number generator. Initial vectors for each word are seeded with a hash of the concatenation of word + str(seed).

Output:

  • nodes: mgp.List[mgp.Vertex] ➡ List of nodes for which embeddings were calculated
  • embeddings: mgp.List[mgp.List[mgp.Number]] ➡ Corresponding list of embeddings

Usage:

CALL node2vec.set_embeddings(False, 2.0, 0.5, 4, 5, 100, 0.025, 5, 1, 1, 1, 0.0001, 1, 0, 5, 5);

help()

Output:

  • name: string ➡ Name of available functions
  • value: string ➡ Documentation for every function

Usage:

CALL node2vec.help();

Example