Published August 28, 2018 | Version v0.99.0
rsfcNet: An R Package for Resting State Functional Connectivity Network Analysis


Description

Version 1.0 is nearly complete. All functions should be working (though please alert me to any issues), but the package documentation needs some refining (spell checks, etc.).


Introduction

Graph theory is an important and versatile approach to studying the connectivity of different brain regions and can be applied to any number of neuroimaging methods, including structural MRI, functional MRI, diffusion tensor or kurtosis imaging, EEG, and MEG. A graph is simply a matrix that quantifies the connections between individual members of some group of interest. Each member is called a node (or vertex), and the connection between each pair of nodes is an edge. Such graphs can be binary, where each edge is either a 1 or a 0, or weighted, where the strength of connection can vary. Furthermore, weighted networks can be signed, as in the case of a correlation matrix. Correlations (or partial correlations) quantify a primary type of connectivity between a given pair of nodes, but when considering many variables, higher-order relationships can illuminate aspects of the network that bivariate relationships fail to capture. However, correlations are not the only means by which one can define edges in a network.

Most graph theoretic metrics are centrality measures, which aim to identify, through various methods, the most important members of a network. Which measure is most appropriate depends on the type of network being characterized, the sense in which a node can be important (for example, the CEO of a company may have fewer direct connections than others within the company but nevertheless occupy a privileged and important place in the company network), and the means by which edges are represented.


Network Construction

Data Preparation

The first step is importing the data and preparing it for analysis. Assuming one has both time series files and confound matrices from a preprocessing pipeline in FSL or some other software, both need to be imported.
First, create variables pointing to the files in the directory. 

ts_list = get_file_list("C:/Users/YourName/Documents/fMRIstudy/timeseries/")

confound_list = get_file_list("C:/Users/YourName/Documents/fMRIstudy/confounds/")

Next import the files from the lists.

ts = import_from_list(ts_list, header=TRUE) 
confounds = import_from_list(confound_list, header=FALSE)
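
For illustration, roughly the same import can be done with base R alone (a sketch only, not the package's internal implementation; the paths and the whitespace-delimited file format are assumptions about your own data layout):

ts_list = list.files("C:/Users/YourName/Documents/fMRIstudy/timeseries/", full.names=TRUE)
confound_list = list.files("C:/Users/YourName/Documents/fMRIstudy/confounds/", full.names=TRUE)
ts = lapply(ts_list, function(f) read.table(f, header=TRUE))            # one data frame per subject
confounds = lapply(confound_list, function(f) read.table(f, header=FALSE))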

Next, scrub the bad time points (those with excessive head motion) from the time series files. If you have dealt with head motion in some other way, you can skip this step.

new_ts = scrub_time_series(ts, confounds, method="censor", n=100, n.nodes=116)


Connectivity Matrix

Network construction in this package is done by creating signed, weighted networks. Allowing connections between brain regions to be positive or negative reflects the nature of functional connectivity. Several estimation methods are offered, and which is best may depend on one's particular needs. For the methods "pearson", "partial", "covar", and "parvar", estimation uses the James-Stein based approach described in Schaefer and Strimmer (2005), with the optimal shrinkage level calculated for each subject using the method of Opgen-Rhein and Strimmer (2007). These give, respectively, the regularized Pearson correlation matrix, the shrinkage-estimated partial correlation matrix, the regularized covariance matrix, and the shrinkage-estimated partial variance matrix. These options depend on the corpcor package. The covariance and partial variance options shouldn't be used directly for network analysis, but can be used for whatever else you need them for. The option "ridge" uses ridge regression (an L2 penalty) with a cross-validated penalty parameter to estimate the partial correlation matrix, following Krämer, Schäfer, & Boulesteix (2009). It is much slower than the James-Stein type penalty used in Schaefer and Strimmer's algorithm, but should yield sparser results. For "adaglasso", the adaptive GLASSO (Zou, 2006) recommended by Zhu & Cribben (2018) is used, with an initial GLASSO fit serving as the starting point for the adaptive GLASSO procedure. For this method a list of lists is returned, where each list contains both the best GLASSO solution and the best adaptive GLASSO solution; inspect both to see which is best suited to your needs. The ridge and adaptive GLASSO methods depend on the pcor package. A final option is the elastic net method, which relies on the CorShrink package and allows the user to enter an elastic net penalty to yield a compromise between a graphical ridge and a graphical lasso estimate of the partial correlation matrix.

The partial correlation option using the Schaefer and Strimmer (2005) method will give fully connected networks very quickly and will most likely work well enough for most purposes. Thresholding can be applied if needed, and the get_cor_matrices function includes an optional "pre-threshold" option to automatically apply light thresholding. Failing that, the ridge or elastic net options will likely serve well, although they take much longer to run and, depending on the data set, may give minimal gains. The adaptive GLASSO and GLASSO methods will give extremely sparse networks and probably shouldn't be applied to data sets where scans have fewer than roughly 400-600 time points.

After selecting your method, simply call the get_cor_matrices function.
 

cormats = get_cor_matrices(new_ts, method="partial", n=100, n.nodes=116, pre.thresh=TRUE)
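
The "partial" option relies on corpcor. If you want to see roughly what this estimate looks like for a single subject outside of get_cor_matrices, corpcor can be called directly (a sketch only; it omits the per-subject loop and the optional pre-thresholding):

library(corpcor)
# One subject's scrubbed time series: rows are time points, columns are the 116 nodes assumed above.
one_subject = as.matrix(new_ts[[1]])
# James-Stein shrinkage partial correlations (Schaefer & Strimmer, 2005).
pcor_1 = pcor.shrink(one_subject)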


Several functions in rsfcNet depend upon, or are modifications of, functions from the igraph package (Csardi & Nepusz, 2006). To make life easier, the mats_as_graphs function will convert the correlation matrices into the igraph format.
 

graphs =  mats_as_graphs(cormats)
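
For a single matrix, the conversion is roughly equivalent to calling igraph directly (a sketch, assuming an undirected, weighted graph with no self-loops):

library(igraph)
g = graph_from_adjacency_matrix(cormats[[1]], mode="undirected", weighted=TRUE, diag=FALSE)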


Graph Theory Measures 

The next step is selecting which graph theory measures you'd like to use.  Below is a dictionary of graph theory measures and concepts utilized in the rsfcNet package. 

Betweenness Centrality

Nodes with a high betweenness centrality are considered important because they lie on pathways that control information flow. Betweenness centrality shows which nodes act as "relays" or "hubs" between nodes in a network by measuring the number of times a node lies on the shortest path between other nodes. Betweenness centrality is considered a more global measure of centrality because it attempts to estimate a node's importance to network integrity. Although a popular measure in resting state fMRI research (see for example Wang, Zuo, & He, 2010), its reliance on shortest paths makes it an inappropriate measure for systems like the brain, which use a parallel, distributed means of information flow in which information does not deterministically follow a shortest path (Borgatti, 2005; Telesford et al., 2011; Power et al., 2013).
 

For a single graph:
btwn_centr(graph) 

For multiple graphs: 
btwn_centr_mult(graphs)

Closeness Centrality

Closeness Centrality is a measure where each node’s importance is determined by closeness to all other nodes. Closeness is defined as the reciprocal of the sum of shortest path lengths (the distance) between \( node_i \) and all other nodes.


\(C_i = \frac{1}{\sum_{j\neq i}^N d(n_i,n_j)}\)
 

Closeness centrality estimates how quickly information could flow from a given node to all other nodes. Nodes with high closeness centrality may have better access to information at other nodes or more direct influence over them. Although it relies on the calculation of shortest paths, it is more applicable to parallel diffusion networks than betweenness centrality, because it measures closeness to *any* other node rather than treating the node as a junction on paths between pairs of other nodes (Borgatti, 2005).

However, see Current Centrality for an analogous spectral measure that is even more appropriate to parallel diffusion networks.

For a single graph:
closeness_centr(graph)

For multiple graphs: 
closeness_centr_mult(graphs)
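
As a sanity check on the formula above, closeness can also be computed by hand from igraph's shortest-path distances (a sketch, assuming non-negative edge weights; closeness_centr is the supported call):

library(igraph)
d = distances(graph)      # matrix of shortest-path distances between all node pairs
close = 1 / rowSums(d)    # the zero self-distances on the diagonal add nothing to the sums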

Clustering Coefficient (Local) 

The local transitivity (or clustering coefficient) of a node is the fraction of a node's neighbor pairs that are themselves connected, that is, the fraction of possible triangles through the node that are actually complete. The basic method in this package is available in the local_trans function and works for weighted or unweighted networks (Barrat et al., 2004). Barrat's method is given in the formula below. For unweighted networks it simply reduces to the unweighted clustering coefficient.

\(C_i^w=\frac{1}{s_i(k_i-1)}\sum_{j,h}\frac{w_{ij}+w_{ih}}{2}a_{ij}a_{ih}a_{jh}\)


An alternative to Barrat's method is the Zhang & Horvath (2005) method, available in the clustcoef_signed function.

\(C_i^w=\frac{\sum_{j,q} w_{ij}\,w_{iq}\,w_{jq}}{\left(\sum_{q} w_{iq}\right)^2 - \sum_{q} w_{iq}^2}\)

 

For signed weighted graphs one should use the Costantini & Perugini (2014) method, available in the clustcoef_signed function.

\(C_i^w=\frac{\sum_{l\neq i}\sum_{m\neq i,l}a_{il}\,a_{lm}\,a_{mi}}{\left(\sum_{l\neq i}a_{il}\right)^2-\sum_{l\neq i}a_{il}^2}\)

 

For a single graph (Zhang or Costantini):

clustcoef_signed(graph)

For multiple graphs (Barrat's method):

local_trans(graphs)

For multiple graphs (Zhang or Costantini):

clustcoef_signed_mult(graphs)
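
As an illustration of the Zhang & Horvath formula, the coefficient can be computed directly from a weighted adjacency matrix with a few matrix operations (a sketch only, using absolute weights and ignoring the signed correction; clustcoef_signed is the supported call):

W = abs(cormats[[1]]); diag(W) = 0
num = diag(W %*% W %*% W)          # summed products of weights around each closed triangle through node i
den = rowSums(W)^2 - rowSums(W^2)  # (sum of weights)^2 minus sum of squared weights
C_zhang = num / den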


Clustering Coefficient (Global)

Simply the average of all local clustering coefficients, this tells you how strongly, on average, the neighbors of a node tend to be connected to one another (Watts & Strogatz, 1998).
 

For a single graph:
global_clust(graph)

For multiple graphs:
global_clust_mult(graphs)

For multiple graphs (Barrat's method):
global_trans(graphs) 	


Communities; Community Structure

See Modules.

Current Centrality (Circuit-Flow Closeness Centrality)

Circuit flow closeness centrality, or current centrality, is a method of calculating closeness that is more appropriate to networks where information does not necessarily travel in a serial fashion (Brandes & Fleischer, 2005). In an analogy to electricity flowing through a circuit, the closeness between two nodes is defined as the conductance between them, which is the inverse of their resistance distance. A node's current centrality is then its average conductance to the other nodes. Current centrality is a spectral measure calculated using the Moore-Penrose inversion of the graph Laplacian. Current centrality is also known as information centrality.

For a single graph:
current_centr(graph)

For multiple graphs:
current_centr_mult(graphs)
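
The idea can be sketched directly from the definition: pseudoinvert the graph Laplacian, form the resistance distances, and average each node's conductances (illustrative only, using absolute weights; current_centr is the supported call):

library(MASS)
A = abs(cormats[[1]]); diag(A) = 0
L = diag(rowSums(A)) - A                       # weighted graph Laplacian
Lp = ginv(L)                                   # Moore-Penrose pseudoinverse
R = outer(diag(Lp), diag(Lp), "+") - 2 * Lp    # resistance distance between each pair of nodes
current = sapply(1:nrow(R), function(i) mean(1 / R[i, -i]))  # average conductance per node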

 

Diversity Coefficient

A measure that characterizes the degree to which a node is connected to the entire network (high diversity coefficient) or only within a module (low diversity coefficient). Low-diversity nodes with high within-module z-scores are considered provincial hubs (important for within-module communication), and high-diversity nodes with high within-module z-scores are considered connector hubs (important for inter-module communication).

The diversity coefficient is given by the following formula:

\(h_i = -\frac{1}{\log N_M}  \sum_{u=1}^{N_M} \left ( \frac{s_{iu}}{s_i} \right ) \log  \left ( \frac{s_{iu}}{s_i} \right )\)

Rubinov & Sporns (2011) give a generalization of this to signed networks, where the diversity is calculated separately for the positive and negative edge weights. These are then combined by the following formula, where \(s_i^{-}\) and \(s_i^{+}\) are, respectively, the strengths of the negative and positive connections of the node. This is offered in the rsfcNet package.


\(h_{i}^{*} = h_{i}^{+} - \Bigg( \frac{s_i^{-}}{s_i^{+}+s_i^{-}}\Bigg) h_{i}^{-} \)

Available as part of the module_connectivity function:

module_connectivity(graph, module)


Degree Centrality

Degree centrality, also known simply as degree, is a count of the number of non-zero connections a node has (Fornito et al., 2016b). If the nodes in a network have weighted edges, the weighted degree, also called strength, is simply the sum of a node's edge weights. Degree and strength are considered local measures of centrality because only a given node's immediate connections are considered.

For signed weighted graphs the weighted strength* measure is available: 

 

For a single graph:
degree_centr(graph)

For multiple graphs:
degree_mult(graphs)

For a weighted graph:
strength_signed(graph)

For multiple weighted graphs:
strength_multiple(graphs)
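
From a single adjacency matrix, the unsigned versions of these measures are just row counts and row sums (a sketch; the package functions above handle signed weights and lists of graphs for you):

A = cormats[[1]]; diag(A) = 0
degree = rowSums(A != 0)       # number of non-zero connections per node
strength = rowSums(abs(A))     # unsigned strength: sum of absolute edge weights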


Delta Centrality (Energy)

Delta centrality measures the change in a global property of the graph that occurs due to the deletion of a node or edge (Fornito et al., 2016a). Implemented in this package is delta energy, which tracks the change in graph energy as each node is deleted in turn. See Graph Energy for more information. Also see Laplacian Centrality and Vitality, two other delta centrality measures.

For a single graph:
delta_energy(graph)

For multiple graphs:
delta_energy_mult(graphs)


Neighbor centrality

Neighbor centrality is a measure of the average degree or strength of a node's neighbors. It shows which nodes are connected to well-connected nodes (Barrat et al., 2004; Fornito et al., 2016). This offers an improvement over degree, since a low-degree node with connections to high-degree nodes may still have a central role in the network. Like leverage centrality and Laplacian centrality, it considers not only the immediate environment of a node but also an intermediate scale between the local neighborhood and the global structure. However, it is conceptually distinct from leverage centrality, which defines importance as being connected to nodes with fewer connections of their own.

For a single graph:
neighbor_centr(graph)

For multiple graphs:
neighbor_centr_mult(graphs)


Eigenvector Centrality

The eigenvector centrality is the ith entry (for the ith node) in the principal eigenvector, that is, the eigenvector belonging to the largest eigenvalue of a network (Fornito et al., 2016a). Eigenvector centrality differs conceptually from degree or strength. A node with many connections does not necessarily have a high eigenvector centrality. For example, a node may have many very weak connections that yield a large value for strength/degree. Likewise, a node with high eigenvector centrality may have a low degree but be well connected to a small number of important nodes. Eigenvector centrality is a spectral measure appropriate for networks where parallel diffusion occurs.

For a single graph:
eigen_centr(graph)

For multiple graphs:
eigen_centr_mult(graphs)
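
Conceptually, this is just the leading eigenvector of the adjacency matrix (a sketch on absolute weights; eigen_centr is the supported call):

A = abs(cormats[[1]]); diag(A) = 0
ev = eigen(A, symmetric=TRUE)
eig_centrality = abs(ev$vectors[, which.max(ev$values)])   # leading eigenvector; entries share a sign for a non-negative matrix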


Fiedler Value

The Fiedler value is the second smallest eigenvalue of the Laplacian representation of a graph. The closer the Fiedler value is to zero the more easily the graph can be split into separate components unconnected to each other. The Fiedler value is also known as the algebraic connectivity of a graph (Mohar, 1991). Hence the Fiedler value can be used as a measure of a network's robustness to becoming disconnected.

For a single graph:
fiedler_value(graph)

For multiple graphs:
fiedler_value_mult(graphs)
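
The Fiedler value can be checked by hand from the Laplacian's eigenvalues (a sketch on absolute weights; fiedler_value is the supported call):

A = abs(cormats[[1]]); diag(A) = 0
L = diag(rowSums(A)) - A                                             # weighted graph Laplacian
fiedler = sort(eigen(L, symmetric=TRUE, only.values=TRUE)$values)[2] # second-smallest eigenvalue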


Graph Energy

Graph energy was originally applied in organic chemistry to quantify the stability of molecular orbitals associated with pi-electrons (Li, Shi, & Gutman, 2012). The graph energy reflects the connectivity of the graph as a whole and can indicate how resilient the network might be to attack (Shatto & Cetinkaya, 2017). A graph energy of zero means the nodes are not connected at all. The graph energy is calculated simply by summing the absolute values of the eigenvalues of the adjacency matrix:


\(E(G) = \sum{|\lambda_i|}\)
 

For a single graph:
graph_energy(graph)

For multiple graphs:
graph_energy_mult(graphs)
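
The formula above translates directly into a one-liner on the adjacency matrix (a sketch; graph_energy is the supported call):

A = cormats[[1]]; diag(A) = 0
energy = sum(abs(eigen(A, symmetric=TRUE, only.values=TRUE)$values))   # sum of absolute eigenvalues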

Laplacian Centrality

Laplacian centrality is a spectral graph theory measure and a member of the delta centrality family (in which centrality is defined as the change in some graph-level measure due to the deletion of a node). Here the graph-level measure of interest is the Laplacian energy of a graph, defined as the sum of squared eigenvalues of the graph Laplacian. This measure takes into account not only the local environment immediately around a node but also the larger environment around its neighbors. It is thus intermediate between metrics that assess a node's position in the whole network (such as eigenvector centrality) and those restricted to the local neighborhood (such as strength).

The Laplacian energy can be calculated as the sum of the squared degrees (weighted or binary) over all nodes plus twice the sum of squared edge weights over all edges in a graph.


\(E_{L}(G)=\sum_{i=1}^n d_i^2+2\sum_{i<j}w_{ij}^2\)
 

The Laplacian centrality for \( node_i \) is then the difference in Laplacian graph energy between the full graph and the graph with \( node_i \) deleted.


\(\Delta E_{L} (G) = E_L (G) - E_L(G_{-node_i})\)
 

For a single graph:
laplace_centr(graph)

For multiple graphs:
laplace_centr_mult(graphs)
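
The Laplacian energy in the formula above can be computed directly from the degrees and edge weights (a sketch on absolute weights; laplace_centr handles the node-deletion differences for you):

A = abs(cormats[[1]]); diag(A) = 0
E_L = sum(rowSums(A)^2) + 2 * sum(A[upper.tri(A)]^2)   # sum of squared degrees plus twice the sum of squared edge weights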

 

Leverage Centrality

Leverage centrality defines importance as being connected to other nodes that themselves have fewer connections.

\(l_i = \frac{1}{k_i} \sum_{j \in N_i} \frac{k_i - k_j}{k_i + k_j}\)

It was proposed by Joyce et al. (2010) and inspired by the fact that neural units integrate information from their connections. If a single cell (or region) synapses onto many areas that receive relatively fewer inputs, the source region will have a greater influence on the targets. Leverage centrality does not assume information flows strictly along shortest paths in a serial manner, but rather that information diffuses along parallel routes. Like neighbor centrality (its conceptual opposite) and Laplacian centrality, it can be considered an intermediate (as opposed to global or strictly local) measure of centrality.

For a single graph:
leverage_centr(graph)

For multiple graphs:
leverage_centr_mult(graphs)
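
Following the formula above, leverage centrality can be sketched from a binary adjacency matrix (illustrative only; the binarization threshold is an arbitrary assumption, and leverage_centr is the supported call):

A = (abs(cormats[[1]]) > 0.2) * 1; diag(A) = 0   # hypothetical threshold, for illustration only
k = rowSums(A)                                   # node degrees
leverage = sapply(1:nrow(A), function(i) {
  nbrs = which(A[i, ] == 1)
  if (length(nbrs) == 0) return(NA)
  mean((k[i] - k[nbrs]) / (k[i] + k[nbrs]))      # average relative degree difference over the node's neighbors
})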

Modules 

Modules, also known as communities, are groups of nodes that connect more (or more strongly, in weighted networks) to each other than to other nodes. Modules can be found by a large number of algorithms, each with its own strengths and weaknesses. Module algorithms typically use clustering procedures to find a community structure that maximizes the modularity statistic. Note that the concept of modules in the network-theory sense differs from the concept in cognitive psychology of modules as functionally encapsulated regions of brain tissue. The key difference is that modules in the Fodorian sense are typically thought to be associated with a single task, while modules in the network-theory sense are descriptions of how different nodes cluster together and make no assumptions regarding *functional encapsulation*. A graph theoretic module may participate in a large number of tasks, and a single task may involve multiple modules (Colombo, 2013; Sporns & Betzel, 2016).

A number of methods are included in this package, including the fast-greedy method (Clauset, Newman, & Moore, 2004), the Louvain method (Blondel et al., 2008), the eigenvector method (Newman, 2006), the walktrap method (Gates et al., 2016; Yang et al., 2016), and the re-iterative spinglass method (Traag & Bruggeman, 2009; Wang et al., 2013; Zhang & Moore, 2014). The Louvain and walktrap methods are typically the better options and run quickly as well (Gates et al., 2016; Yang et al., 2016). The fast-greedy method can work well when modules are not very small (Yang, Algesheimer, & Tessone, 2016).

For a single graph: 
get_modules(graph, method="louvain")

Applied to a list of graphs:
lapply(graphs, function(i) get_modules(i, method="louvain"))

Participation Coefficient

A measure that characterizes the degree to which a node is connected to the entire network (high participation coefficient) or only within a module (low participation coefficient) (Guimera & Nunes Amaral, 2005). Low participation nodes with high within-module z scores are considered provincial hubs (important for within-module communication) and high participation nodes with high within-module z-scores are considered connector hubs (important for inter-module communication).

The participation coefficient is defined by the following formula:

\(p_i = 1 - \sum_{u=1}^{N_M} \left ( \frac{s_{iu}}{s_i} \right )^2\)

One can define a weighted participation coefficient for signed networks, analogous to the signed diversity coefficient, which is offered in the rsfcNet package:

\(p_{i}^{*} = p_{i}^{+} - \Bigg( \frac{s_i^{-}}{s_i^{+}+s_i^{-}} \Bigg) p_{i}^{-} \)

module_connectivity(graph, module)
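
The unsigned version of the formula can be sketched from a weighted adjacency matrix and a module assignment vector (for example, the labels returned by get_modules). This is illustrative only; module_connectivity handles the signed version for you:

W = abs(cormats[[1]]); diag(W) = 0
s = rowSums(W)                                   # total strength of each node
part = 1 - sapply(1:nrow(W), function(i) {
  sum(tapply(W[i, ], module, sum)^2) / s[i]^2    # squared within-module strengths over squared total strength
})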


Strength 

See Degree Centrality.

Transitivity (Local)

See clustering coefficient (local).

Transitivity (Global)

See clustering coefficient (global) 

Vitality 

Vitality, also known as closeness vitality, is a delta measure of centrality. The closeness vitality of a node is the change in the total distance between all other nodes in a graph when that node is deleted (Brandes & Erlebach, 2005). A node with high closeness vitality creates greater distance between nodes in the network when it is deleted, implying it occupies a privileged place in the network and is vital to global communication.

For a single graph:
vitality(graph)

For multiple graphs:
vitality(graphs)

Bibliography

Barrat, A., Barthélemy, M., Pastor-Satorras, R., & Vespignani, A. (2004). The architecture of complex weighted networks. Proceedings of the National Academy of Sciences of the United States of America, 101(11), 3747–3752. https://doi.org/10.1073/pnas.0400087101

Blondel, V. D., Guillaume, J.-L., Lambiotte, R., & Lefebvre, E. (2008). Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10), P10008. https://doi.org/10.1088/1742-5468/2008/10/P10008

Borgatti, S. P. (2005). Centrality and network flow. Social Networks, 27(1), 55–71. https://doi.org/10.1016/j.socnet.2004.11.008

Brandes, U., & Erlebach, T. (Eds.). (2005). Network Analysis (Vol. 3418). Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/b106453

Brandes, U., & Fleischer, D. (2005). Centrality Measures Based on Current Flow. In V. Diekert & B. Durand (Eds.), STACS 2005 (pp. 533–544). Springer Berlin Heidelberg.

Clauset, A., Newman, M. E. J., & Moore, C. (2004). Finding community structure in very large networks. Physical Review E, 70(6). https://doi.org/10.1103/PhysRevE.70.066111

Colombo, M. (2013). Moving Forward (and Beyond) the Modularity Debate: A Network Perspective. Philosophy of Science, 80(3), 356–377. https://doi.org/10.1086/670331

Costantini, G., & Perugini, M. (2014). Generalization of Clustering Coefficients to Signed Correlation Networks. PLoS ONE, 9(2), e88669. https://doi.org/10.1371/journal.pone.0088669

Csardi, G., & Nepusz, T. (2006). The igraph software package for complex network research. InterJournal, Complex Systems, 1695.

Fornito, A., Zalesky, A., & Bullmore, E. (2016a). Centrality and Hubs. In Fundamentals of Brain Network Analysis (pp. 137–161). Elsevier. https://doi.org/10.1016/B978-0-12-407908-3.00005-4

Fornito, A., Zalesky, A., & Bullmore, E. (2016b). Node Degree and Strength. In Fundamentals of Brain Network Analysis (pp. 115–136). Elsevier. https://doi.org/10.1016/B978-0-12-407908-3.00004-2

Gates, K. M., Henry, T., Steinley, D., & Fair, D. A. (2016). A Monte Carlo Evaluation of Weighted Community Detection Algorithms. Frontiers in Neuroinformatics, 10. https://doi.org/10.3389/fninf.2016.00045

Guimerà, R., & Nunes Amaral, L. A. (2005). Functional cartography of complex metabolic networks. Nature, 433(7028), 895–900. https://doi.org/10.1038/nature03288

Joyce, K. E., Laurienti, P. J., Burdette, J. H., & Hayasaka, S. (2010). A New Measure of Centrality for Brain Networks. PLoS ONE, 5(8), e12200. https://doi.org/10.1371/journal.pone.0012200

Krämer, N., Schäfer, J., & Boulesteix, A.-L. (2009). Regularized estimation of large-scale gene association networks using graphical Gaussian models. BMC Bioinformatics, 10(1), 384. https://doi.org/10.1186/1471-2105-10-384

Li, X., Shi, Y., & Gutman, I. (2012). Graph Energy. New York, NY: Springer New York. https://doi.org/10.1007/978-1-4614-4220-2

Lohmann, G., Margulies, D. S., Horstmann, A., Pleger, B., Lepsien, J., Goldhahn, D., … Turner, R. (2010). Eigenvector Centrality Mapping for Analyzing Connectivity Patterns in fMRI Data of the Human Brain. PLoS ONE, 5(4), e10232. https://doi.org/10.1371/journal.pone.0010232

Mohar, B. (1991). Eigenvalues, diameter, and mean distance in graphs. Graphs and Combinatorics, 7(1), 53–64. https://doi.org/10.1007/BF01789463

Newman, M. E. J. (2006). Finding community structure in networks using the eigenvectors of matrices. Physical Review E, 74(3). https://doi.org/10.1103/PhysRevE.74.036104

Opgen-Rhein, R., & Strimmer, K. (2007). Accurate Ranking of Differentially Expressed Genes by a Distribution-Free Shrinkage Approach. Statistical Applications in Genetics and Molecular Biology, 6(1). https://doi.org/10.2202/1544-6115.1252

Power, J. D., Barnes, K. A., Snyder, A. Z., Schlaggar, B. L., & Petersen, S. E. (2012). Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion. NeuroImage, 59(3), 2142–2154. https://doi.org/10.1016/j.neuroimage.2011.10.018

Power, J. D., Schlaggar, B. L., Lessov-Schlaggar, C. N., & Petersen, S. E. (2013). Evidence for Hubs in Human Functional Brain Networks. Neuron, 79(4), 798–813. https://doi.org/10.1016/j.neuron.2013.07.035

Qi, X., Fuller, E., Wu, Q., Wu, Y., & Zhang, C.-Q. (2012). Laplacian centrality: A new centrality measure for weighted networks. Intelligent Knowledge-Based Models and Methodologies for Complex Information Systems, 194, 240–253. https://doi.org/10.1016/j.ins.2011.12.027

Rubinov, M., & Sporns, O. (2011). Weight-conserving characterization of complex functional brain networks. Neuroimage, 56(4), 2068–2079. https://doi.org/10.1016/j.neuroimage.2011.03.069

Schäfer, J., & Strimmer, K. (2005). A Shrinkage Approach to Large-Scale Covariance Matrix Estimation and Implications for Functional Genomics. Statistical Applications in Genetics and Molecular Biology, 4(1). https://doi.org/10.2202/1544-6115.1175

Shatto, T. A., & Cetinkaya, E. K. (2017). Variations in graph energy: A measure for network resilience. In 2017 9th International Workshop on Resilient Networks Design and Modeling (RNDM) (pp. 1–7). Alghero, Italy: IEEE. https://doi.org/10.1109/RNDM.2017.8093019

Sporns, O., & Betzel, R. F. (2016). Modular Brain Networks. Annual Review of Psychology, 67, 613–640. https://doi.org/10.1146/annurev-psych-122414-033634

Telesford, Q. K., Simpson, S. L., Burdette, J. H., Hayasaka, S., & Laurienti, P. J. (2011). The Brain as a Complex System: Using Network Science as a Tool for Understanding the Brain. Brain Connectivity, 1(4), 295–308. https://doi.org/10.1089/brain.2011.0055

Traag, V. A., & Bruggeman, J. (2009). Community detection in networks with positive and negative links. Physical Review E, 80(3). https://doi.org/10.1103/PhysRevE.80.036115

Wang, J., Zuo, X., & He, Y. (2010). Graph-based network analysis of resting-state functional MRI. Frontiers in Systems Neuroscience. https://doi.org/10.3389/fnsys.2010.00016

Wang, Z., Hu, Y., Xiao, W., & Ge, B. (2013). Overlapping community detection using a generative model for networks. Physica A: Statistical Mechanics and Its Applications, 392(20), 5218–5230. https://doi.org/10.1016/j.physa.2013.06.038

Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of ‘small-world’ networks. Nature, 393(6684), 440–442. https://doi.org/10.1038/30918

Yang, Z., Algesheimer, R., & Tessone, C. J. (2016). A Comparative Analysis of Community Detection Algorithms on Artificial Networks. Scientific Reports, 6(1). https://doi.org/10.1038/srep30750

Zhang, B., & Horvath, S. (2005). A General Framework for Weighted Gene Co-Expression Network Analysis. Statistical Applications in Genetics and Molecular Biology, 4(1). https://doi.org/10.2202/1544-6115.1128

Zhang, P., & Moore, C. (2014). Scalable detection of statistically significant communities and hierarchies, using message passing for modularity. Proceedings of the National Academy of Sciences, 111(51), 18144–18149. https://doi.org/10.1073/pnas.1409770111

Zhu, Y., & Cribben, I. (2018). Sparse Graphical Models for Functional Connectivity Networks: Best Methods and the Autocorrelation Issue. Brain Connectivity, 8(3), 139–165. https://doi.org/10.1089/brain.2017.0511

Zou, H. (2006). The Adaptive Lasso and Its Oracle Properties. Journal of the American Statistical Association, 101(476), 1418–1429. https://doi.org/10.1198/016214506000000735

