
30.3: The Power of a Matrix


    • For a diagonalizable matrix \(A\), we can write \(C^{-1}AC = D\), and therefore \(A = CDC^{-1}\).
    • It follows that \(A^2 = CDC^{-1}CDC^{-1} = CD^{2}C^{-1}\), and in general \(A^{n} = CDC^{-1} \cdots CDC^{-1} = CD^{n}C^{-1}\).
    • Because the columns of \(C\) are eigenvectors, the eigenvectors of \(A\) and \(A^n\) are the same when \(A\) is diagonalizable.
    • If \(x\) is an eigenvector of \(A\) with corresponding eigenvalue \(\lambda\), then \(x\) is also an eigenvector of \(A^n\) with corresponding eigenvalue \(\lambda^n\).
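The last two bullets are easy to check numerically. The sketch below (using a hypothetical 2×2 symmetric matrix, chosen only because it is guaranteed to be diagonalizable) verifies that \(A^n = CD^nC^{-1}\) and that the eigenvalues of \(A^n\) are \(\lambda^n\):

```python
import numpy as np

# Hypothetical example: a small symmetric (hence diagonalizable) matrix
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Diagonalize: columns of C are eigenvectors, D holds the eigenvalues
vals, C = np.linalg.eig(A)

n = 5
# Power A directly, and via C D^n C^{-1}
A_n_direct = np.linalg.matrix_power(A, n)
A_n_diag = C @ np.diag(vals**n) @ np.linalg.inv(C)
print(np.allclose(A_n_direct, A_n_diag))  # True

# The eigenvalues of A^n are the n-th powers of the eigenvalues of A
vals_n = np.linalg.eigvals(A_n_direct)
print(np.allclose(np.sort(vals_n.real), np.sort(vals**n)))  # True
```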
    # Here are some libraries you may need to use
    %matplotlib inline
    import numpy as np
    import sympy as sym
    import networkx as nx
    import matplotlib.pyplot as plt
    from urllib.request import urlretrieve
    
    sym.init_printing(use_unicode=True)
    urlretrieve('https://raw.githubusercontent.com/colbrydi/jupytercheck/master/answercheck.py', 
                'answercheck.py');

    Graph Random Walk

    • Define the following matrices:
      • \(I\) is the identity matrix
      • \(A\) is the adjacency matrix
      • \(D\) is the diagonal matrix of degrees (the number of edges connected to each node)

    \[W=\frac{1}{2}(I + AD^{-1}) \nonumber \]

    • The lazy random walk matrix, \(W\), takes a distribution vector of stuff, \(p_t\), keeps half of the stuff at each node, and diffuses the other half to neighboring nodes:

    \[p_{t+1}=Wp_{t} \nonumber \]

    • For some initial distribution of stuff, \(p_0\), we can compute how much of it would be at each node at time \(t\) by raising \(W\) to the \(t\)-th power:

    \[p_{t}=W^{t}p_{0} \nonumber \]

    • Substituting the definition of \(W\) yields:

    \[p_{t}=\left( \frac{1}{2}(I+AD^{-1}) \right)^t p_{0} \nonumber \]
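As a quick sanity check of these formulas, the following sketch builds \(W\) on a hypothetical small path graph (separate from the barbell graph used below), confirms that each column of \(W\) sums to 1 (so diffusion conserves the total amount of stuff), and diffuses a point distribution until it reaches the degree-proportional steady state:

```python
import numpy as np
import networkx as nx

# Hypothetical small example: a path graph on 4 nodes
G = nx.path_graph(4)
A = nx.to_numpy_array(G)           # adjacency matrix
deg = A.sum(axis=0)                # node degrees
I = np.eye(len(deg))

W = 0.5 * (I + A @ np.diag(1.0 / deg))   # lazy random walk matrix

# Each column of W sums to 1, so W conserves the total amount of stuff
print(np.allclose(W.sum(axis=0), 1.0))   # True

# Diffuse an initial distribution concentrated at node 0 for 100 steps
p0 = np.zeros(len(deg))
p0[0] = 1.0
pt = np.linalg.matrix_power(W, 100) @ p0

# The limiting distribution is proportional to the node degrees
print(np.allclose(pt, deg / deg.sum(), atol=1e-6))  # True
```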

    Do This

    Using matrix algebra, show that \(\frac{1}{2}(I + AD^{-1})\) is similar to \(I-\frac{1}{2}N\), where \(N=D^{-\frac{1}{2}}(D-A)D^{-\frac{1}{2}}\) is the normalized graph Laplacian.
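The claim can also be checked numerically (a sanity check only, not the matrix-algebra argument the exercise asks for): similar matrices share the same eigenvalues, so the spectra of \(W\) and \(I-\frac{1}{2}N\) should agree. A sketch on a hypothetical small cycle graph:

```python
import numpy as np
import networkx as nx

# Hypothetical small example: a cycle graph on 5 nodes
G = nx.cycle_graph(5)
A = nx.to_numpy_array(G)
deg = A.sum(axis=0)
I = np.eye(len(deg))

# Lazy random walk matrix W = (1/2)(I + A D^{-1})
W = 0.5 * (I + A @ np.diag(1.0 / deg))

# Normalized graph Laplacian N = D^{-1/2} (D - A) D^{-1/2}
D_half_inv = np.diag(deg ** -0.5)
N = D_half_inv @ (np.diag(deg) - A) @ D_half_inv
M = I - 0.5 * N

# Similar matrices have identical spectra
print(np.allclose(np.sort(np.linalg.eigvals(W).real),
                  np.sort(np.linalg.eigvals(M).real)))  # True
```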

    Random Walk on Barbell Graph

    To generate the barbell graph, run the following cell.

    n = 60 # number of nodes
    B = nx.Graph() # initialize graph
    
    ## initialize empty edge lists
    edge_list_complete_1 = [] 
    edge_list_complete_2 = []
    edge_list_path = []
    
    ## generate node lists
    node_list_complete_1 = np.arange(int(n/3))
    node_list_complete_2 = np.arange(int(2*n/3),n)
    node_list_path = np.arange(int(n/3)-1,int(2*n/3))
    
    ## generate edge sets for barbell graph
    for u in node_list_complete_1:
        for v in np.arange(u+1,int(n/3)):
            edge_list_complete_1.append((u,v))
            
    for u in node_list_complete_2:
        for v in np.arange(u+1,n):
            edge_list_complete_2.append((u,v))
    
    for u in node_list_path:
        edge_list_path.append((u,u+1))
    
    
    ## add edges
    B.add_edges_from(edge_list_complete_1)
    B.add_edges_from(edge_list_complete_2)
    B.add_edges_from(edge_list_path)
    
    
    ## draw graph
    pos=nx.spring_layout(B) # positions for all nodes
    
    ### nodes
    nx.draw_networkx_nodes(B,pos,
                           nodelist=list(node_list_complete_1),
                           node_color='c',
                           node_size=400,
                           alpha=0.8)
    nx.draw_networkx_nodes(B,pos,
                           nodelist=list(node_list_path),
                           node_color='g',
                           node_size=200,
                           alpha=0.8)
    nx.draw_networkx_nodes(B,pos,
                           nodelist=list(node_list_complete_2),
                           node_color='b',
                           node_size=400,
                           alpha=0.8)
    
    
    ### edges
    nx.draw_networkx_edges(B,pos,
                           edgelist=edge_list_complete_1,
                           width=2,alpha=0.5,edge_color='c')
    nx.draw_networkx_edges(B,pos,
                           edgelist=edge_list_path,
                           width=3,alpha=0.5,edge_color='g')
    nx.draw_networkx_edges(B,pos,
                           edgelist=edge_list_complete_2,
                           width=2,alpha=0.5,edge_color='b')
    
    
    plt.axis('off')
    plt.show() # display
    Do This

    Generate the lazy random walk matrix, \(W\), for the above graph.

    A = nx.adjacency_matrix(B)
    A = A.todense()
    
    d = np.sum(A,0) # vector of node degrees (column sums of the adjacency matrix)
    D = np.diag(np.asarray(d)[0]) # diagonal degree matrix
    #Put your answer to the above question here.
    from answercheck import checkanswer
    checkanswer.matrix(W, "7af4a5b11892da6e1a605c8239b62093")
    Do This

    Compute the eigenvalues and eigenvectors of \(W\). Make a diagonal matrix \(J\) with the eigenvalues on the diagonal. Name the matrix of eigenvectors \(V\) (each column is an eigenvector).

    #Put your answer to the above question here. 

    Now we make sure we constructed \(V\) and \(J\) correctly by double-checking that \(W = VJV^{-1}\):

    np.allclose(W, V*J*np.linalg.inv(V))
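One caveat about the check above: `V*J*np.linalg.inv(V)` performs matrix multiplication only because `nx.adjacency_matrix(...).todense()` produces `np.matrix` objects, for which `*` means matrix product. For plain NumPy arrays, `*` is elementwise, so the `@` operator is the safer choice; a sketch on a hypothetical random matrix:

```python
import numpy as np

# Hypothetical example: eigendecomposition of a random 3x3 matrix.
# `@` is matrix multiplication for both np.matrix and plain ndarrays,
# whereas `*` is elementwise for ndarrays.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))

vals, V = np.linalg.eig(M)
J = np.diag(vals)

print(np.allclose(M, V @ J @ np.linalg.inv(V)))  # True
```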
    Do This

    Let your \(p_0 = [1,0,0, \ldots, 0]\). Compute \(p_t\) for \(t = 1,2,\ldots,100\), and plot \(||v_1 - p_t||_1\) versus \(t\), where \(v_1\) is the eigenvector associated with the largest eigenvalue \(\lambda_1 = 1\) and whose sum equals 1. (Note: \(||\cdot||_1\) may be computed using np.linalg.norm(v_1-p_t, 1).)

    #Put your answer to the above question here. 

    Compare to Complete Graph

    If you complete the above, do the same for a complete graph on the same number of nodes.

    Question

    What do you notice about the convergence plot for the complete graph that is different from the barbell graph above?


    This page titled 30.3: The Power of a Matrix is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Dirk Colbry via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.