Welcome to the ENIGMA TOOLBOX 👋🏼¶

100+ ENIGMA-derived statistical maps 💯¶
The ENIGMA TOOLBOX provides a centralized and continuously updated repository of meta-analytical case-control comparisons across a wide range of disorders. As many ENIGMA groups have moved beyond meta-analysis towards ‘mega’-analysis of subject-level data, the ENIGMA TOOLBOX also includes subject-level example data from an individual site, processed according to ENIGMA protocols. It should be noted, however, that subject-level or raw imaging data are not openly available for dissemination to the scientific community. Interested scientists are encouraged to visit the ENIGMA website to learn more about current projects, join and contribute to an active Working Group, or propose a new group.
Compatible with any neuroimaging datasets and software 💌¶
To increase generalizability and usability, every function within the ENIGMA TOOLBOX is also compatible with typical neuroimaging datasets parcellated according to the Desikan-Killiany, Glasser, and Schaefer parcellations.
To simplify things, we provide tutorials on how to (i) import vertexwise and/or parcellated data, (ii) parcellate vertexwise cortical data, (iii) map parcellated data to the surface, and (iv) export data results. Import/export file formats include: .txt/.csv, FreeSurfer/“curv”, .mgh/.mgz, GIfTI/.gii, and CIfTI/.dscalar.nii/.dtseries.nii. Cortical and subcortical surfaces are also available in FreeSurfer/surface, GIfTI/.gii, .vtk, and .obj formats, allowing cross-software compatibility.
Cortical and subcortical visualization tools 🎨¶
Tired of displaying your surface findings in tables? Look no further! The ENIGMA TOOLBOX has got you covered! Check out our visualization tools to project cortical and subcortical data results to the surface and generate publication-ready figures.
Preprocessed micro- and macroscale data 🤹🏼♀️¶
The emergence of open databases yields new opportunities to link human gene expression, cytoarchitecture, and connectome data. As part of the ENIGMA TOOLBOX, we have generated and made available a range of preprocessed and parcellated data, including: (i) transcriptomic data from the Allen Human Brain Atlas, (ii) cytoarchitectural data from the BigBrain Project, (iii) a digitized parcellation of the von Economo and Koskinas cytoarchitectonic map, and (iv) functional and structural connectivity data from the Human Connectome Project.
Multiscale analytical workflows 🦾¶
The ENIGMA Toolbox comprises two neural scales for the contextualization of findings: (i) using microscale properties, namely gene expression and cytoarchitecture, and (ii) using macroscale network models, such as regional hub susceptibility analysis and disease epicenter mapping. Moreover, our Toolbox includes non-parametric spatial permutation models to assess statistical significance while preserving the spatial autocorrelation of parcellated brain maps. From these comprehensive workflows, users can gain a deeper understanding of the molecular, cellular, and macroscale network organization of the healthy and diseased brains.
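The toolbox's spin tests build a null distribution by rotating spherical projections of the brain maps, so that spatial autocorrelation is preserved; the p-value itself is then simply the fraction of null correlations at least as extreme as the observed one. The sketch below illustrates only that last step, with hypothetical maps and a naive shuffling null (which, unlike a spin test, does NOT preserve spatial structure):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two hypothetical parcellated brain maps (e.g., atrophy vs. gene expression)
map1 = rng.normal(size=68)
map2 = map1 + 0.5 * rng.normal(size=68)

observed = np.corrcoef(map1, map2)[0, 1]

# Null distribution from shuffled maps; a spin test would instead use
# rotations of the spherical surface projection to keep spatial structure
null = np.array([np.corrcoef(rng.permutation(map1), map2)[0, 1]
                 for _ in range(1000)])

# Two-sided permutation p-value
p_value = (np.abs(null) >= np.abs(observed)).mean()
print(round(observed, 2), p_value)
```

The only part shared with the toolbox's actual spin test is the final p-value computation; everything else here is a stand-in for illustration.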
Step-by-step tutorials 👣¶
The ENIGMA TOOLBOX includes ready-to-use and easy-to-follow code snippets for every functionality and analytical workflow! Owing to its comprehensive tutorials, detailed functionality descriptions, and visual reports, the ENIGMA TOOLBOX is easy to use for researchers and clinicians without extensive programming expertise.
Development and getting involved ⚙️¶
Should you have any problems, questions, or suggestions about the ENIGMA TOOLBOX, please do not hesitate to post them to our mailing list! Are you interested in collaborating or sharing your ENIGMA-related data/codes/tools? Noice! Make sure you familiarize yourself with our contributing guidelines first and then discuss your ideas via GitHub issues and pull requests.
Installation¶
ENIGMA TOOLBOX is available in Python and Matlab!
ENIGMA TOOLBOX has a number of dependencies; these are listed in the GitHub repository.
The ENIGMA TOOLBOX can be directly downloaded from Github as follows:
git clone https://github.com/MICA-MNI/ENIGMA.git
cd ENIGMA
python setup.py install
ENIGMA TOOLBOX was tested with Matlab R2017b.
To install the Matlab toolbox simply download and unzip the GitHub toolbox (slow 🐢) or run the following command in your terminal (fast 🐅):
git clone https://github.com/MICA-MNI/ENIGMA.git
Once you have the toolbox on your computer, simply run the following command in Matlab:
addpath(genpath('/path/to/ENIGMA/matlab/'))
Usage notes¶
Some tutorials may have specific prerequisite steps; if that is the case, we provide links to the steps you must complete first, as follows:
Prerequisites ↪ Install me!
What’s new?¶
v2.0.0 (July 26, 2022)¶
One year of bug fixing and brand spanking new BigBrain moments
↪ [FIX 🐛] Bug fixes all throughout | @saratheriver
↪ [DOC 📄] Updated documentations | @saratheriver
↪ [NEW 💾] New BigBrain moments | @caseypaquola
v1.1.3 (July 28, 2021)¶
Typo in the economo_koskinas_spider Matlab function.
↪ [FIX 🐛] Fixed typo in economo_koskinas_spider | @saratheriver
↪ [DOC 📄] Updated citation | @saratheriver
v1.1.2 (June 30, 2021)¶
Replaced dashes with underscores in summary statistics.
↪ [FIX 🐛] Removed dashes in sumstats | @saratheriver
↪ [DOC 📄] Update documentations | @saratheriver
v1.1.1 (April 12, 2021)¶
New import/export features for CIfTI files. Improved documentation.
↪ [NEW 💾] New import/export module (CIfTI) | @NicoleEic, @saratheriver
↪ [DOC 📄] Update documentations | @saratheriver
v1.1.0 (March 15, 2021)¶
New import/export features allowing compatibility with other dataset formats and neuroimaging software. Improved documentation.
↪ [NEW 💾] New import/export module | @saratheriver
↪ [FIX 🐛] Fix spin permutation index error | @saratheriver
↪ [DOC 📄] Update documentations | @saratheriver
v1.0.3 (February 25, 2021)¶
OCD subcortical volume measures were not loading when using load_summary_stats('ocd'); this issue is now resolved.
↪ [ENH 🔧] Fix OCD sctx summary stats loading issue | @saratheriver
↪ [DOC 📄] Update documentations | @saratheriver
v1.0.2 (February 23, 2021)¶
Bug fixes in the fetch_ahba function. Users who wish to use this function (or have used it in a previous version) must update their ENIGMA Toolbox to this release; fetch_ahba from previous releases will now error.
↪ [ENH 🔧] Fix python version mislabeling | @saratheriver
↪ [ENH 🔧] Fix bug and replace stable gene lists | @saratheriver
↪ [DOC 📄] Update documentations | @saratheriver
v1.0.1 (January 11, 2021)¶
Bug fixes and improvement
↪ [ENH 🔧] Enhance cross-disorder module | @saratheriver
↪ [ENH 🔧] Fix bug in surface visualization | @saratheriver
v1.0.0 (December 22, 2020)¶
Initial release 😱
↪ [DOC 📄] Citation for ENIGMA Toolbox preprint | @saratheriver
↪ [NEW 🗽] Pip package available | @saratheriver
v0.1.2 (December 20, 2020)¶
Bug fixes and improvement - still in pre-release
↪ [DOC 📄] Revise documentations | @saratheriver
↪ [NEW 🗽] Generalizability to several parcellations | @saratheriver
↪ [NEW 🗽] Additional parcellations for HCP connectivity | @saratheriver
↪ [NEW 🗽] Additional parcellations for AHBA | @saratheriver
↪ [NEW 🗽] Additional parcellations for histology module | @saratheriver
↪ [NEW 🗽] Cross-disorder module | @boyongpark, @saratheriver
↪ [ENH 🔧] Fix bug in fetch_ahba | @saratheriver
↪ [ENH 🔧] Reference updates | @saratheriver
v0.1.1 (November 14, 2020)¶
Bug fixes and improvement following beta testing - still in pre-release
↪ [TES 🧪] Toolbox testing | @boyongpark, @royj23, @caseypaquola, @YezhouWang, @sofievalk
↪ [DOC 📄] Revise documentations | @saratheriver
↪ [ENH 🔧] Fix minor code bugs here and there | @saratheriver
↪ [ENH 🔧] Remove unstable genes across donors | @saratheriver
↪ [NEW 🗽] BigBrain contextualization module | @caseypaquola, @saratheriver
↪ [NEW 🗽] Cytoarchitectonics contextualization module | @caseypaquola, @saratheriver
↪ [NEW 🗽] HCP subcortico-subcortical connectivity | @saratheriver
↪ [ENH 🔧] References for new modules | @saratheriver
↪ [ENH 🔧] Update API for new functions | @saratheriver
v0.1.0 (October 16, 2020)¶
Pre-release of the ENIGMA TOOLBOX: it’s the start of a beautiful thing!
↪ [DOC 📄] Write and revise various documentations | @saratheriver
↪ [STY 🎨] Stylistic updates to ReadTheDocs | @saratheriver
↪ [NEW 🗽] Load example data | @saratheriver
↪ [NEW 🗽] Load summary statistics | @saratheriver
↪ [NEW 🗽] Load connectivity data | @saratheriver
↪ [NEW 🗽] Hub susceptibility model | @saratheriver
↪ [NEW 🗽] Spin permutation tests | @saratheriver
↪ [NEW 🗽] Epicenter mapping | @saratheriver
↪ [NEW 🗽] Fetch gene expression data | @saratheriver
↪ [NEW 🗽] Extract disease-related gene expression maps | @saratheriver
↪ [NEW 🗽] Surface data visualization tools | @saratheriver
Summary statistics¶
This page contains descriptions and examples to load case-control datasets from several ENIGMA Working Groups. These ENIGMA summary statistics contain the following data: effect sizes for case-control differences (d_icv), standard errors (se_icv), lower and upper bounds of the confidence interval (low_ci_icv, up_ci_icv), numbers of controls and patients (n_controls, n_patients), observed p-values (pobs), and false discovery rate (FDR)-corrected p-values (fdr_p).
Data were processed at each site using ENIGMA’s standardized protocols for image processing, quality assurance, and meta-analysis of individual subject data. For site-level meta-analysis, all research centres within a given specialized Working Group tested for case vs. control differences using multiple linear regressions, in which diagnosis (e.g., healthy controls vs. individuals with epilepsy) was the predictor of interest, and subcortical volume, cortical thickness, or surface area of a given brain region was the outcome measure. Case-control differences were computed across all regions using either Cohen’s d effect sizes or t-values, after adjusting for different combinations of age, sex, site/scan/dataset, intracranial volume, and IQ (see below for disease-specific models).
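As a minimal illustration of the effect-size measure reported in these tables, Cohen’s d for a case-control comparison of a single region is the mean difference divided by the pooled standard deviation. The sketch below uses made-up thickness values and omits the covariate adjustment (age, sex, ICV, etc.) performed in the actual ENIGMA models:

```python
import numpy as np

def cohens_d(cases, controls):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    cases, controls = np.asarray(cases, float), np.asarray(controls, float)
    n1, n2 = cases.size, controls.size
    pooled_sd = np.sqrt(((n1 - 1) * cases.var(ddof=1) +
                         (n2 - 1) * controls.var(ddof=1)) / (n1 + n2 - 2))
    return (cases.mean() - controls.mean()) / pooled_sd

# Hypothetical cortical thickness (mm) in one region, patients vs. controls
patients = [2.31, 2.42, 2.28, 2.35, 2.30]
controls = [2.52, 2.48, 2.55, 2.47, 2.50]
print(round(cohens_d(patients, controls), 2))  # ≈ -3.8 (thinner cortex in patients)
```

Negative values indicate lower measurements in cases than controls, which is why atrophy effects in the summary statistics tables appear as negative d_icv values.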
Can’t find the data you’re searching for? 🙈
Let us know what’s missing and we’ll try and fetch that data for you and implement it in our toolbox. Get in touch with us here. If you have locally stored summary statistics on your computer, check out our tutorials on how to import data and accordingly take advantage of all the ENIGMA TOOLBOX functions.
* 📸 indicates case-control tables used in the code snippets.
22q11.2 deletion syndrome¶
Available summary statistics tables¶
>>> from enigmatoolbox.datasets import load_summary_stats
>>> # Load summary statistics for ENIGMA-22q
>>> sum_stats = load_summary_stats('22q')
>>> # Get case-control cortical thickness and surface area tables
>>> CT = sum_stats['CortThick_case_vs_controls']
>>> SA = sum_stats['CortSurf_case_vs_controls']
>>> # Extract Cohen's d values
>>> CT_d = CT['d_icv']
>>> SA_d = SA['d_icv']
% Load summary statistics for ENIGMA-22q
sum_stats = load_summary_stats('22q');
% Get case-control cortical thickness and surface area tables
CT = sum_stats.CortThick_case_vs_controls;
SA = sum_stats.CortSurf_case_vs_controls;
% Extract Cohen's d values
CT_d = CT.d_icv;
SA_d = SA.d_icv;
Attention deficit hyperactivity disorder¶
Available summary statistics tables¶
>>> from enigmatoolbox.datasets import load_summary_stats
>>> # Load summary statistics for ENIGMA-ADHD
>>> sum_stats = load_summary_stats('adhd')
>>> # Get case-control cortical thickness and surface area tables
>>> CT = sum_stats['CortThick_case_vs_controls_adult']
>>> SA = sum_stats['CortSurf_case_vs_controls_adult']
>>> # Extract Cohen's d values
>>> CT_d = CT['d_icv']
>>> SA_d = SA['d_icv']
% Load summary statistics for ENIGMA-ADHD
sum_stats = load_summary_stats('adhd');
% Get case-control cortical thickness and surface area tables
CT = sum_stats.CortThick_case_vs_controls_adult;
SA = sum_stats.CortSurf_case_vs_controls_adult;
% Extract Cohen's d values
CT_d = CT.d_icv;
SA_d = SA.d_icv;
Autism spectrum disorder¶
Available summary statistics tables¶
>>> from enigmatoolbox.datasets import load_summary_stats
>>> # Load summary statistics for ENIGMA-Autism
>>> sum_stats = load_summary_stats('asd')
>>> # Get case-control cortical thickness table
>>> CT = sum_stats['CortThick_case_vs_controls_meta_analysis']
>>> # Extract Cohen's d values
>>> CT_d = CT['d_icv']
% Load summary statistics for ENIGMA-Autism
sum_stats = load_summary_stats('asd');
% Get case-control cortical thickness table
CT = sum_stats.CortThick_case_vs_controls_meta_analysis;
% Extract Cohen's d values
CT_d = CT.d_icv;
Bipolar disorder¶
Available summary statistics tables¶
>>> from enigmatoolbox.datasets import load_summary_stats
>>> # Load summary statistics for ENIGMA-BD
>>> sum_stats = load_summary_stats('bipolar')
>>> # Get case-control cortical thickness and surface area tables
>>> CT = sum_stats['CortThick_case_vs_controls_adult']
>>> SA = sum_stats['CortSurf_case_vs_controls_adult']
>>> # Extract Cohen's d values
>>> CT_d = CT['d_icv']
>>> SA_d = SA['d_icv']
% Load summary statistics for ENIGMA-BD
sum_stats = load_summary_stats('bipolar');
% Get case-control cortical thickness and surface area tables
CT = sum_stats.CortThick_case_vs_controls_adult;
SA = sum_stats.CortSurf_case_vs_controls_adult;
% Extract Cohen's d values
CT_d = CT.d_icv;
SA_d = SA.d_icv;
Epilepsy¶
Available summary statistics tables¶
>>> from enigmatoolbox.datasets import load_summary_stats
>>> # Load summary statistics for ENIGMA-Epilepsy
>>> sum_stats = load_summary_stats('epilepsy')
>>> # Get case-control subcortical volume and cortical thickness tables
>>> SV = sum_stats['SubVol_case_vs_controls_ltle']
>>> CT = sum_stats['CortThick_case_vs_controls_ltle']
>>> # Extract Cohen's d values
>>> SV_d = SV['d_icv']
>>> CT_d = CT['d_icv']
% Load summary statistics for ENIGMA-Epilepsy
sum_stats = load_summary_stats('epilepsy');
% Get case-control subcortical volume and cortical thickness tables
SV = sum_stats.SubVol_case_vs_controls_ltle;
CT = sum_stats.CortThick_case_vs_controls_ltle;
% Extract Cohen's d values
SV_d = SV.d_icv;
CT_d = CT.d_icv;
Major depressive disorder¶
Available summary statistics tables¶
>>> from enigmatoolbox.datasets import load_summary_stats
>>> # Load summary statistics for ENIGMA-MDD
>>> sum_stats = load_summary_stats('depression')
>>> # Get case-control cortical thickness and surface area tables
>>> CT = sum_stats['CortThick_case_vs_controls_adult']
>>> SA = sum_stats['CortSurf_case_vs_controls_adult']
>>> # Extract Cohen's d values
>>> CT_d = CT['d_icv']
>>> SA_d = SA['d_icv']
% Load summary statistics for ENIGMA-MDD
sum_stats = load_summary_stats('depression');
% Get case-control subcortical volume, cortical thickness, and surface area tables
SV = sum_stats.SubVol_case_vs_controls_adult;
CT = sum_stats.CortThick_case_vs_controls_adult;
SA = sum_stats.CortSurf_case_vs_controls_adult;
% Extract Cohen's d values
SV_d = SV.d_icv;
CT_d = CT.d_icv;
SA_d = SA.d_icv;
Obsessive-compulsive disorder¶
Available summary statistics tables¶
>>> from enigmatoolbox.datasets import load_summary_stats
>>> # Load summary statistics for ENIGMA-OCD
>>> sum_stats = load_summary_stats('ocd')
>>> # Get case-control cortical thickness and surface area tables
>>> CT = sum_stats['CortThick_case_vs_controls_adult']
>>> SA = sum_stats['CortSurf_case_vs_controls_adult']
>>> # Extract Cohen's d values
>>> CT_d = CT['d_icv']
>>> SA_d = SA['d_icv']
% Load summary statistics for ENIGMA-OCD
sum_stats = load_summary_stats('ocd');
% Get case-control cortical thickness and surface area tables
CT = sum_stats.CortThick_case_vs_controls_adult;
SA = sum_stats.CortSurf_case_vs_controls_adult;
% Extract Cohen's d values
CT_d = CT.d_icv;
SA_d = SA.d_icv;
Schizophrenia¶
Available summary statistics tables¶
>>> from enigmatoolbox.datasets import load_summary_stats
>>> # Load summary statistics for ENIGMA-Schizophrenia
>>> sum_stats = load_summary_stats('schizophrenia')
>>> # Get case-control cortical thickness and surface area tables
>>> CT = sum_stats['CortThick_case_vs_controls']
>>> SA = sum_stats['CortSurf_case_vs_controls']
>>> # Extract Cohen's d values
>>> CT_d = CT['d_icv']
>>> SA_d = SA['d_icv']
% Load summary statistics for ENIGMA-Schizophrenia
sum_stats = load_summary_stats('schizophrenia');
% Get case-control cortical thickness and surface area tables
CT = sum_stats.CortThick_case_vs_controls;
SA = sum_stats.CortSurf_case_vs_controls;
% Extract Cohen's d values
CT_d = CT.d_icv;
SA_d = SA.d_icv;
Individual site data¶
This page contains descriptions and examples to load our example data, re-order subcortical labels, and z-score values!
Load example data¶
This is an example dataset that includes 10 healthy controls (7 females, age±SD=33.3±8.8 years) and 10 individuals with epilepsy (7 females, age±SD=39.8±14.8 years).
Covariates | As per ENIGMA-Epilepsy protocol, covariate information includes SubjID (subjectID), Dx (diagnosis, 0=controls, 1=patients), SDx (sub-diagnosis, 0=controls, 1=non-lesional patients, 2=genetic generalized epilepsy (IGE/GGE) patients, 3=left TLE, 4=right TLE), Age (in years), Sex (1=males, 2=females), Handedness (1=right, 2=left), AO (age at onset in years, patients only), DURILL (duration of illness in years, patients only), and ICV (intracranial volume).
Subcortical volume | Subcortical grey matter volumes comprise data from 12 subcortical regions, the bilateral hippocampi, and the bilateral lateral ventricles.
Cortical thickness | Cortical thickness was measured at each vertex as the Euclidean distance between white and pial surfaces, and subsequently averaged within each of the Desikan-Killiany parcels.
Cortical surface area | The cortical surface area of every Desikan-Killiany parcel is also provided as part of ENIGMA imaging protocols; this morphological measure is defined by the sum of the area of each of the triangles within the parcel.
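The triangle-area definition above can be sketched directly: each triangle’s area is half the norm of the cross product of two of its edge vectors, and a parcel’s surface area is the sum over its triangles. A schematic example with made-up coordinates, not actual surface data:

```python
import numpy as np

# Hypothetical vertex coordinates (mm) and triangles of a tiny "parcel"
vertices = np.array([[0., 0., 0.],
                     [1., 0., 0.],
                     [0., 1., 0.],
                     [1., 1., 0.]])
triangles = np.array([[0, 1, 2],
                      [1, 3, 2]])

def parcel_area(vertices, triangles):
    """Sum of triangle areas: 0.5 * ||(B - A) x (C - A)|| per triangle."""
    a, b, c = (vertices[triangles[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

print(parcel_area(vertices, triangles))  # two half-unit triangles -> 1.0
```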
>>> from enigmatoolbox.datasets import load_example_data
>>> # Load all example data from an individual site
>>> cov, metr1_SubVol, metr2_CortThick, metr3_CortSurf = load_example_data()
% Load all example data from an individual site
[cov, metr1_SubVol, metr2_CortThick, metr3_CortSurf] = load_example_data();
Re-order subcortical regions¶
The column order of ENIGMA-derived subcortical volume matrices matches neither the order in the connectivity matrices nor the order required by our subcortical visualization tools. But, hey, don’t you worry! To re-order ENIGMA-derived subcortical volume data, you may use our reorder_sctx() function, which re-orders the columns of the subcortical volume dataset accordingly (i.e., alphabetically, with all left hemisphere structures first, followed by all right hemisphere structures).
>>> from enigmatoolbox.utils.useful import reorder_sctx
>>> # Re-order the subcortical data alphabetically and by hemisphere
>>> metr1_SubVol_r = reorder_sctx(metr1_SubVol)
% Re-order the subcortical data alphabetically and by hemisphere
metr1_SubVol_r = reorder_sctx(metr1_SubVol);
Z-score data¶
With our example data loaded and re-ordered, we can then z-score data in patients, relative to controls, so that lower values correspond to greater atrophy. For simplicity, we also compute the mean z-score across all left TLE patients. These mean, z-scored vectors will be used in subsequent tutorials as measures of subcortical and cortical atrophy!
Prerequisites ↪ Re-order subcortical data
>>> from enigmatoolbox.utils.useful import zscore_matrix
>>> # Z-score patients' data relative to controls (lower z-score = more atrophy)
>>> group = cov['Dx'].to_list()
>>> controlCode = 0
>>> SV_z = zscore_matrix(metr1_SubVol_r.iloc[:, 1:-1], group, controlCode)
>>> CT_z = zscore_matrix(metr2_CortThick.iloc[:, 1:-5], group, controlCode)
>>> SA_z = zscore_matrix(metr3_CortSurf.iloc[:, 1:-5], group, controlCode)
>>> # Mean z-score values across individuals from a specific group (e.g., left TLE, i.e., SDx == 3)
>>> SV_z_mean = SV_z.iloc[cov[cov['SDx'] == 3].index, :].mean(axis=0)
>>> CT_z_mean = CT_z.iloc[cov[cov['SDx'] == 3].index, :].mean(axis=0)
>>> SA_z_mean = SA_z.iloc[cov[cov['SDx'] == 3].index, :].mean(axis=0)
% Z-score patients' data relative to controls (lower z-score = more atrophy)
group = cov.Dx;
controlCode = 0;
SV_z = zscore_matrix(metr1_SubVol_r(:, 2:end-1), group, controlCode);
CT_z = zscore_matrix(metr2_CortThick(:, 2:end-5), group, controlCode);
SA_z = zscore_matrix(metr3_CortSurf(:, 2:end-5), group, controlCode);
% Mean z-score values across individuals from a specific group (e.g., left TLE, i.e., SDx == 3)
SV_z_mean = array2table(mean(SV_z{find(cov.SDx == 3), :}, 1), ...
'VariableNames', SV_z.Properties.VariableNames);
CT_z_mean = array2table(mean(CT_z{find(cov.SDx == 3), :}, 1), ...
'VariableNames', CT_z.Properties.VariableNames);
SA_z_mean = array2table(mean(SA_z{find(cov.SDx == 3), :}, 1), ...
'VariableNames', SA_z.Properties.VariableNames);
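In plain NumPy, the z-scoring above amounts to standardizing every region (column) by the control group’s mean and standard deviation, so that negative values in patients reflect lower (atrophied) measurements. A minimal sketch, under the assumption that this is what zscore_matrix computes internally:

```python
import numpy as np

def zscore_vs_controls(data, group, control_code=0):
    """Z-score each column of `data` using the controls' mean and SD."""
    data = np.asarray(data, float)
    ctrl = data[np.asarray(group) == control_code]
    return (data - ctrl.mean(axis=0)) / ctrl.std(axis=0, ddof=1)

# Toy example: 4 subjects (2 controls, 2 patients) x 3 regions
data = np.array([[2.5, 2.4, 2.6],
                 [2.7, 2.6, 2.8],
                 [2.2, 2.1, 2.3],
                 [2.1, 2.0, 2.2]])
group = [0, 0, 1, 1]
z = zscore_vs_controls(data, group)
print(z[2:].mean(axis=0))  # mean patient z-scores are negative (atrophy)
```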
Cross-disorder effect¶
This page contains descriptions and examples to perform cross-disorder analyses to explore brain structural abnormalities that are common or different across disorders.
Principal component analysis¶
To yield novel insights into brain structural abnormalities that are common or different across disorders, we can explore shared and disease-specific morphometric signatures by applying a principal component analysis (PCA) to any combination of disease-specific summary statistics (or other imported data), resulting in shared latent components that can be used for further analysis.
>>> from enigmatoolbox.cross_disorder import cross_disorder_effect
>>> from enigmatoolbox.plotting import plot_cortical, plot_subcortical
>>> from enigmatoolbox.utils import parcel_to_surface
>>> import matplotlib.pyplot as plt
>>> # Extract shared disorder effects
>>> components, variance, names = cross_disorder_effect()
>>> # Visualize cortical and subcortical eigenvalues in scree plots
>>> fig, ax = plt.subplots(1, 2, figsize=(14, 6))
>>> for ii, jj in enumerate(components):
>>> ax[ii].plot(variance[jj], lw=2, color='#A8221C', zorder=1)
>>> ax[ii].scatter(range(variance[jj].size), variance[jj], s=78, color='#A8221C',
>>> linewidth=1.5, edgecolor='w', zorder=3)
>>> ax[ii].set_xlabel('Components')
>>> if ii == 0:
>>> ax[ii].set_ylabel('Cortical eigenvalues')
>>> else:
>>> ax[ii].set_ylabel('Subcortical eigenvalues')
>>> ax[ii].spines['top'].set_visible(False)
>>> ax[ii].spines['right'].set_visible(False)
>>> fig.tight_layout()
>>> plt.show()
>>> # Visualize the first cortical and subcortical components on the surface brains
>>> plot_cortical(parcel_to_surface(components['cortex'][:, 0], 'aparc_fsa5'), color_range=(-0.5, 0.5),
... cmap='RdBu_r', color_bar=True, size=(800, 400))
>>> plot_subcortical(components['subcortex'][:, 0], color_range=(-0.5, 0.5),
... cmap='RdBu_r', color_bar=True, size=(800, 400))
% Extract shared disorder effects
[components, variance, ~, names] = cross_disorder_effect();
% Visualize cortical and subcortical eigenvalues in scree plots
fns = fieldnames(components);
f = figure,
set(gcf,'color','w');
set(gcf,'units','normalized','position',[0 0 .75 0.3])
for ii = 1:numel(fieldnames(components))
axs = subplot(1, 2, ii); hold on
s = scatter(1:size(components.(fns{ii}), 2), variance.(fns{ii}), 128, [0.66 0.13 0.11], 'filled');
s.LineWidth = 1.5; s.MarkerEdgeColor = 'w';
plot(1:size(components.(fns{ii}), 2), variance.(fns{ii}), 'linewidth', 2, 'color', [0.66 0.13 0.11])
xlabel('Components')
if ii == 1
ylabel('Cortical eigenvalues')
else
ylabel('Subcortical eigenvalues')
end
end
% Visualize the first cortical and subcortical components on the surface brains
f = figure,
plot_cortical(parcel_to_surface(components.cortex(:, 1), 'aparc_fsa5'), 'color_range', [-0.5 0.5], 'cmap', 'RdBu_r')
f = figure,
plot_subcortical(components.subcortex(:, 1), 'color_range', [-0.5 0.5], 'cmap', 'RdBu_r')
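Conceptually, the PCA step can be sketched with plain NumPy: stack the disorder maps as columns, center them, and take the SVD. The left singular vectors give the shared spatial components and the squared singular values give each component’s variance. This is a conceptual sketch with random stand-in data, not the toolbox’s actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical matrix: 68 cortical regions x 5 disorders (e.g., Cohen's d maps)
maps = rng.normal(size=(68, 5))

# Center each disorder map, then decompose
X = maps - maps.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

components = U * s               # region x component scores
variance = s**2 / (s**2).sum()   # fraction of variance per component

print(variance.round(3))         # sorted in decreasing order
```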

Cross-correlation¶
We can also explore shared and disease-specific morphometric signatures by systematically cross-correlating patterns of brain structural abnormalities with any combination of summary statistics (or other pre-loaded ENIGMA-type data), resulting in a correlation matrix.
>>> from enigmatoolbox.cross_disorder import cross_disorder_effect
>>> from nilearn import plotting
>>> # Extract shared disorder effects
>>> correlation_matrix, names = cross_disorder_effect(method='correlation')
>>> # Plot correlation matrices
>>> plotting.plot_matrix(correlation_matrix['cortex'], figure=(12, 8), labels=names['cortex'], vmax=1,
... vmin=-1, cmap='RdBu_r', auto_fit=False)
>>> plotting.plot_matrix(correlation_matrix['subcortex'], figure=(12, 8), labels=names['subcortex'], vmax=1,
... vmin=-1, cmap='RdBu_r', auto_fit=False)
% Extract shared disorder effects
[~, ~, correlation_matrix, names] = cross_disorder_effect('method', 'correlation');
% Plot correlation matrices
f = figure('units','normalized','outerposition',[0 0 .65 1]),
imagesc(correlation_matrix.cortex, [-1 1])
axis square;
colormap(RdBu_r);
colorbar;
set(gca, 'YTick', 1:1:size(correlation_matrix.cortex, 1), ...
'YTickLabel', strrep(names.cortex, '_', ' '), 'XTick', 1:1:size(correlation_matrix.cortex, 1), ...
'XTickLabel', strrep(names.cortex, '_', ' '), 'XTickLabelRotation', 45)
f = figure('units','normalized','outerposition',[0 0 .65 1]),
imagesc(correlation_matrix.subcortex, [-1 1])
axis square;
colormap(RdBu_r);
colorbar;
set(gca, 'YTick', 1:1:size(correlation_matrix.subcortex, 1), ...
'YTickLabel', strrep(names.subcortex, '_', ' '), 'XTick', 1:1:size(correlation_matrix.subcortex, 1), ...
'XTickLabel', strrep(names.subcortex, '_', ' '), 'XTickLabelRotation', 45)

Import vertexwise or parcellated data¶
This page contains descriptions and examples to import your own data, stored as different file format types. Compatible import file formats are: .txt/.csv, FreeSurfer/”curv”, .mgh/.mgz, GIfTI/.gii, CIfTI/.dscalar.nii/.dtseries.nii
As an example, we will import vertexwise cortical thickness data sampled on the Conte69 surface template (available within the ENIGMA Toolbox). The examples below can be easily modified to import any vertexwise or parcellated data by simply changing the path to the data and the filenames. Here, we imported data from ENIGMA’s import_export directory.
.txt / .csv¶
If your data are stored as text (.txt) or comma-separated values (.csv) files, then you may use the dlmread() (Matlab) or np.loadtxt() (Python) functions to load your data.
Table please 🍴
Should you prefer to load your data as a table (e.g., when importing ENIGMA-type summary statistics stored on your local computer), then you may use the readtable() (Matlab) or pd.read_csv() (Python) functions.
>>> import enigmatoolbox
>>> import numpy as np
>>> import os
>>> # Define path to your data
>>> dpath = os.path.join(os.path.dirname(enigmatoolbox.__file__) + "/datasets/import_export/")
>>> # Import data stored as .txt / .csv
>>> CT = []
>>> for _, h in enumerate(['lh', 'rh']):
>>> CT = np.append(CT, np.loadtxt(dpath + '{}.conte69_32k_thickness.txt'.format(h)))
% Define path to your data
dpath = what('import_export'); dpath = dpath.path;
% Import data stored as .txt / .csv
data_lh = dlmread([dpath '/lh.conte69_32k_thickness.txt']);
data_rh = dlmread([dpath '/rh.conte69_32k_thickness.txt']);
CT = [data_lh data_rh];
FreeSurfer / “curv”¶
If your data are stored in FreeSurfer “curv” format (e.g., ?h.thickness), then you may use the read_curv() (Matlab) or nib.freesurfer.io.read_morph_data() (Python) functions to load your data. You can get the Matlab function from here.
>>> import enigmatoolbox
>>> import nibabel as nib
>>> import numpy as np
>>> import os
>>> # Define path to your data
>>> dpath = os.path.join(os.path.dirname(enigmatoolbox.__file__) + "/datasets/import_export/")
>>> # Import data stored as FreeSurfer "curv"
>>> CT = []
>>> for _, h in enumerate(['lh', 'rh']):
>>> CT = np.append(CT, nib.freesurfer.io.read_morph_data(dpath + '{}.conte69_32k_thickness'.format(h)))
% Define path to your data
dpath = what('import_export'); dpath = dpath.path;
% Import data stored as FreeSurfer "curv"
data_lh = read_curv([dpath '/lh.conte69_32k_thickness']);
data_rh = read_curv([dpath '/rh.conte69_32k_thickness']);
CT = [data_lh; data_rh].';
.mgh / .mgz¶
If your data are stored as .mgh or .mgz formats, then you may use the load_mgh() (Matlab) or nib.load (Python) functions to load your data. You can get the Matlab function from here.
>>> import enigmatoolbox
>>> import nibabel as nib
>>> import numpy as np
>>> import os
>>> # Define path to your data
>>> dpath = os.path.join(os.path.dirname(enigmatoolbox.__file__) + "/datasets/import_export/")
>>> # Import data stored as .mgh / .mgz
>>> CT = []
>>> for _, h in enumerate(['lh', 'rh']):
>>> CT = np.append(CT, nib.load(dpath + '{}.conte69_32k_thickness.mgh'.format(h)).get_fdata().squeeze())
% Define path to your data
dpath = what('import_export'); dpath = dpath.path;
% Import data stored as .mgh / .mgz
data_lh = load_mgh([dpath '/lh.conte69_32k_thickness.mgh']);
data_rh = load_mgh([dpath '/rh.conte69_32k_thickness.mgh']);
CT = [data_lh; data_rh].';
GIfTI / .gii¶
If your data are stored as GIfTI/.gii format, then you may use the gifti() (Matlab) or nib.load (Python) functions to load your data. You can get the Matlab function from here.
>>> import enigmatoolbox
>>> import nibabel as nib
>>> import numpy as np
>>> import os
>>> # Define path to your data
>>> dpath = os.path.join(os.path.dirname(enigmatoolbox.__file__) + "/datasets/import_export/")
>>> # Import data stored as GIfTI / .gii
>>> CT = []
>>> for _, h in enumerate(['lh', 'rh']):
>>> CT = np.append(CT, nib.load(dpath + '{}.conte69_32k_thickness.gii'.format(h)).darrays[0].data)
% Define path to your data
dpath = what('import_export'); dpath = dpath.path;
% Import data stored as GIfTI / .gii
data_lh = gifti([dpath '/lh.conte69_32k_thickness.gii']);
data_rh = gifti([dpath '/rh.conte69_32k_thickness.gii']);
CT = [data_lh.cdata; data_rh.cdata].';
CIfTI / .dscalar.nii / .dtseries.nii¶
If your data are stored as CIfTI/.dscalar.nii/.dtseries.nii format, then you may use the cifti_read() (Matlab) or nib.load (Python) functions to load your data. You can get the Matlab function from here.
>>> import enigmatoolbox
>>> import nibabel as nib
>>> import numpy as np
>>> import os
>>> # Define path to your data
>>> dpath = os.path.join(os.path.dirname(enigmatoolbox.__file__) + "/datasets/import_export/")
>>> # Import data stored as CIfTI / .dscalar.nii / .dtseries.nii
>>> CT = []
>>> for _, h in enumerate(['lh', 'rh']):
>>> CT = np.append(CT, np.asarray(nib.load(dpath + '{}.conte69_32k_thickness.dscalar.nii'.format(h)).get_fdata()))
% Define path to your data
dpath = what('import_export'); dpath = dpath.path;
% Import data stored as CIfTI / .dscalar.nii / .dtseries.nii
data_lh = cifti_read([dpath '/lh.conte69_32k_thickness.dscalar.nii']);
data_rh = cifti_read([dpath '/rh.conte69_32k_thickness.dscalar.nii']);
CT = [data_lh.cdata; data_rh.cdata].';
Vertexwise ↔ parcellated data¶
This page contains descriptions and examples to (i) parcellate vertexwise data (sampled on fsaverage5 or conte69), allowing users to take advantage of every function within the ENIGMA TOOLBOX, and (ii) map parcellated data back to vertexwise space, allowing surface visualization of data and cross-software compatibility.
Vertexwise → parcellated data¶
As an example, we parcellate vertexwise cortical thickness data imported in the previous tutorial according to the Schaefer-200 atlas. Users can use this function to parcellate (fsaverage5 or conte69) vertexwise data using different parcellations, including: aparc_fsa5, glasser_fsa5, schaefer_100_fsa5, schaefer_200_fsa5, schaefer_300_fsa5, schaefer_400_fsa5, aparc_conte69, glasser_conte69, schaefer_100_conte69, schaefer_200_conte69, schaefer_300_conte69, schaefer_400_conte69.
>>> from enigmatoolbox.utils.parcellation import surface_to_parcel
>>> # Parcellate vertexwise data
>>> CT_schaefer_200 = surface_to_parcel(CT, 'schaefer_200_conte69')
% Parcellate vertexwise data
CT_schaefer_200 = surface_to_parcel(CT, 'schaefer_200_conte69');
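Conceptually, surface_to_parcel reduces vertexwise values to one value per parcel of a label array; a minimal NumPy sketch of that reduction, assuming a simple mean within labels:

```python
import numpy as np

def mean_within_parcels(vertex_data, labels):
    """Average vertexwise values within each parcel label (0..n_parcels-1)."""
    sums = np.bincount(labels, weights=vertex_data)
    counts = np.bincount(labels)
    return sums / counts

# Toy example: 6 vertices assigned to 2 parcels
vertex_data = np.array([2.0, 2.2, 2.4, 3.0, 3.2, 3.4])
labels = np.array([0, 0, 0, 1, 1, 1])
print(mean_within_parcels(vertex_data, labels))  # [2.2 3.2]
```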
Parcellated → vertexwise data¶
Mapping parcellated data to the surface has never been easier! Our parcel_to_surface()
function works with ENIGMA- and non-ENIGMA datasets.
This function is especially useful for visualizing parcellated data on cortical surface templates using our visualization tools
or other popular neuroimaging software (check out our tutorials on how to export data).
As an example, we map the parcellated data from the above tutorial back to the Conte69 surface.
Don’t like conte69? Relax, we got you covered! 🛀🏾
The same approach can be used to map parcellated data to the fsaverage5 surface template; simply replace every ‘conte69’ with ‘fsa5’!
>>> from enigmatoolbox.utils.parcellation import parcel_to_surface
>>> # Map parcellated data to the surface
>>> CT_schaefer_200_c69 = parcel_to_surface(CT_schaefer_200, 'schaefer_200_conte69')
% Map parcellated data to the surface
CT_schaefer_200_c69 = parcel_to_surface(CT_schaefer_200, 'schaefer_200_conte69');
Export data results¶
This page contains descriptions and examples to export data results across a range of file formats compatible with several neuroimaging software packages. These include: .txt/.csv, FreeSurfer/”curv”, .mgh/.mgz, GIfTI/.gii, and CIfTI/.dscalar.nii/.dtseries.nii.
As an example, we will export the vertexwise data from the previous tutorial. The examples below can be easily modified to export any vertexwise (all formats) or parcellated (.txt/.csv format) data results by simply changing the data array to be saved, the data export path, and the filenames. Here, we export cortical thickness data to the ENIGMA TOOLBOX’s import_export datasets directory.
What about exporting brain surfaces? 🏄🏼♀️
Exported data results can be mapped to the surface templates using several other neuroimaging software. Cortical and subcortical surfaces are available here in different file formats (FreeSurfer/surface, GIfTI/.gii, .vtk, .obj).
.txt / .csv¶
If you want to export your data as text (.txt) or comma-separated values (.csv) files, then you may use the writetable()
(Matlab) or np.savetxt()
(Python)
functions. If your input data are already in a table or dataframe, then you may remove the array2table()
function (Matlab) or use the .to_csv()
method (Python).
>>> import enigmatoolbox
>>> import numpy as np
>>> import os
>>> # Specify data to be exported
>>> data = CT_schaefer_200_c69
>>> # Define output path and filenames
>>> dpath = os.path.join(os.path.dirname(enigmatoolbox.__file__) + "/datasets/import_export/")
>>> fname_lh = 'lh.schaefer_200_c69_thickness.txt'
>>> fname_rh = 'rh.schaefer_200_c69_thickness.txt'
>>> # Export data as .txt / .csv
>>> np.savetxt(dpath + fname_lh, data[:len(data)//2])
>>> np.savetxt(dpath + fname_rh, data[len(data)//2:])
% Specify data to be exported
data = CT_schaefer_200_c69;
% Define output path and filenames
dpath = what('import_export'); dpath = dpath.path;
fname_lh = 'lh.schaefer_200_c69_thickness.txt';
fname_rh = 'rh.schaefer_200_c69_thickness.txt';
% Export data as .txt / .csv
writetable(array2table(data(1:end/2)), [dpath '/' fname_lh], 'WriteVariableNames', 0)
writetable(array2table(data(end/2+1:end)), [dpath '/' fname_rh], 'WriteVariableNames', 0)
FreeSurfer / “curv”¶
If you want to export your data in FreeSurfer “curv” format (e.g., ?h.thickness), then you may use the write_curv()
(Matlab) or
nib.freesurfer.io.write_morph_data()
(Python) functions to save your data.
You can get the Matlab function from here.
What is this nfaces business? 🤖
In the write_curv()
and nib.freesurfer.io.write_morph_data()
functions, users are required
to specify the number of faces (or triangles) in the associated surface file. To simplify things,
we built a simple function, nfaces()
, in which users can specify the surface name (‘conte69’, ‘fsa5’)
and the hemisphere (‘lh’, ‘rh’, ‘both’) to obtain the appropriate number of faces.
>>> import enigmatoolbox
>>> from enigmatoolbox.datasets import nfaces
>>> import nibabel as nib
>>> import os
>>> # Specify data to be exported
>>> data = CT_schaefer_200_c69
>>> # Define output path and filenames
>>> dpath = os.path.join(os.path.dirname(enigmatoolbox.__file__) + "/datasets/import_export/")
>>> fname_lh = 'lh.schaefer_200_c69_thickness'
>>> fname_rh = 'rh.schaefer_200_c69_thickness'
>>> # Export data as FreeSurfer "curv"
>>> nib.freesurfer.io.write_morph_data(dpath + fname_lh, data[:len(data)//2], nfaces('conte69', 'lh'))
>>> nib.freesurfer.io.write_morph_data(dpath + fname_rh, data[len(data)//2:], nfaces('conte69', 'rh'))
% Specify data to be exported
data = CT_schaefer_200_c69;
% Define output path and filenames
dpath = what('import_export'); dpath = dpath.path;
fname_lh = 'lh.schaefer_200_c69_thickness';
fname_rh = 'rh.schaefer_200_c69_thickness';
% Export data as FreeSurfer "curv"
write_curv([dpath '/' fname_lh], data(1:end/2), nfaces('conte69', 'lh'));
write_curv([dpath '/' fname_rh], data(end/2+1:end), nfaces('conte69', 'rh'));
.mgh / .mgz¶
If you want to export your data as .mgh or .mgz formats, then you may use the save_mgh()
(Matlab) or nib.freesurfer.mghformat.MGHImage()
(Python)
functions to save your data. You can get the Matlab function from here.
What is this getaffine business? 🧜🏼♀️
In the save_mgh()
and nib.freesurfer.mghformat.MGHImage()
functions, users are required
to specify a vox2ras transform matrix. To simplify things,
we built a simple function, getaffine()
, in which users can specify the surface name (‘conte69’, ‘fsa5’)
and the hemisphere (‘lh’, ‘rh’, ‘both’) to obtain the appropriate transform.
>>> import enigmatoolbox
>>> from enigmatoolbox.datasets import getaffine
>>> import nibabel as nib
>>> import numpy as np
>>> import os
>>> # Specify data to be exported
>>> data = CT_schaefer_200_c69
>>> # Define output path and filenames
>>> dpath = os.path.join(os.path.dirname(enigmatoolbox.__file__) + "/datasets/import_export/")
>>> fname_lh = 'lh.schaefer_200_c69_thickness.mgh'
>>> fname_rh = 'rh.schaefer_200_c69_thickness.mgh'
>>> # Export data as .mgh / .mgz
>>> nib.freesurfer.mghformat.MGHImage(np.float32(data[:len(data)//2]),
... getaffine('conte69', 'lh')).to_filename(dpath + fname_lh)
>>> nib.freesurfer.mghformat.MGHImage(np.float32(data[len(data)//2:]),
... getaffine('conte69', 'rh')).to_filename(dpath + fname_rh)
% Specify data to be exported
data = CT_schaefer_200_c69;
% Define output path and filenames
dpath = what('import_export'); dpath = dpath.path;
fname_lh = 'lh.schaefer_200_c69_thickness.mgh';
fname_rh = 'rh.schaefer_200_c69_thickness.mgh';
% Export data as .mgh / .mgz
save_mgh(data(1:end/2), [dpath '/' fname_lh], getaffine('conte69', 'lh'));
save_mgh(data(end/2+1:end), [dpath '/' fname_rh], getaffine('conte69', 'rh'));
GIfTI / .gii¶
If you want to export your data as GIfTI/.gii format, then you may use the savegifti()
(Matlab) or nib.save()
(Python)
functions to save your data. You can get the Matlab function from here.
Script name change 📛
To avoid confusion with Matlab’s function save()
, we renamed the GIfTI Toolbox’s save function as
savegifti()
.
>>> import enigmatoolbox
>>> import nibabel as nib
>>> import os
>>> # Specify data to be exported
>>> data = CT_schaefer_200_c69
>>> # Convert left and right hemisphere data to GIfTI image
>>> data_lh = nib.gifti.gifti.GiftiImage()
>>> data_lh.add_gifti_data_array(nib.gifti.gifti.GiftiDataArray(data=data[:len(data)//2]))
>>> data_rh = nib.gifti.gifti.GiftiImage()
>>> data_rh.add_gifti_data_array(nib.gifti.gifti.GiftiDataArray(data=data[len(data)//2:]))
>>> # Define output path and filenames
>>> dpath = os.path.join(os.path.dirname(enigmatoolbox.__file__) + "/datasets/import_export/")
>>> fname_lh = 'lh.schaefer_200_c69_thickness.gii'
>>> fname_rh = 'rh.schaefer_200_c69_thickness.gii'
>>> # Export data as GIfTI / .gii
>>> nib.save(data_lh, dpath + fname_lh)
>>> nib.save(data_rh, dpath + fname_rh)
% Specify data to be exported
data = gifti(CT_schaefer_200_c69);
data_lh = data; data_lh.cdata = data_lh.cdata(1:end/2);
data_rh = data; data_rh.cdata = data_rh.cdata(end/2+1:end);
% Define output path and filenames
dpath = what('import_export'); dpath = dpath.path;
fname_lh = 'lh.schaefer_200_c69_thickness.gii';
fname_rh = 'rh.schaefer_200_c69_thickness.gii';
% Export data as GIfTI / .gii
savegifti(data_lh, [dpath '/' fname_lh], 'Base64Binary');
savegifti(data_rh, [dpath '/' fname_rh], 'Base64Binary');
CIfTI / .dscalar.nii / .dtseries.nii¶
If you want to export your data as CIfTI/.dscalar.nii/.dtseries.nii format, then you may use the write_cifti()
(Matlab and Python)
functions to save your data. The Matlab function, however, relies on this toolbox.
>>> from enigmatoolbox.datasets import write_cifti
>>> import os
>>> # Specify data to be exported
>>> data = CT_schaefer_200_c69
>>> # Define output path and filenames
>>> dpath = os.path.join(os.path.dirname(enigmatoolbox.__file__) + "/datasets/import_export/")
>>> fname_lh = 'lh.schaefer_200_c69_thickness.dscalar.nii'
>>> fname_rh = 'rh.schaefer_200_c69_thickness.dscalar.nii'
>>> # Export data as CIfTI / .dscalar.nii / .dtseries.nii
>>> write_cifti(data[:len(data)//2], dpath=dpath, fname=fname_lh, labels=None, surface_name='conte69', hemi='lh')
>>> write_cifti(data[len(data)//2:], dpath=dpath, fname=fname_rh, labels=None, surface_name='conte69', hemi='rh')
% Specify data to be exported
data = CT_schaefer_200_c69;
% Define output path and filenames
dpath = what('import_export'); dpath = dpath.path;
fname_lh = 'lh.schaefer_200_c69_thickness.dscalar.nii';
fname_rh = 'rh.schaefer_200_c69_thickness.dscalar.nii';
% Export data as CIfTI / .dscalar.nii / .dtseries.nii
write_cifti(data(1:end/2), 'dpath', dpath, 'fname', fname_lh, 'surface_name', 'conte69', 'hemi', 'lh')
write_cifti(data(end/2+1:end), 'dpath', dpath, 'fname', fname_rh, 'surface_name', 'conte69', 'hemi', 'rh')
Surface data visualization¶
This page contains descriptions and examples to visualize and manipulate surface data.
How to zoom and rotate surfaces 🔍
Users can manipulate cortical and subcortical surfaces within the Python and Matlab viewers. To zoom in and out: Scroll up and down on trackpad or mouse wheel (Python) or use the magnifying glass buttons in the figure toolbar and click on the surface (Matlab). To rotate surfaces in 3D: Click and hold down the left mouse button and move it around (Python) or press the 3D rotate figure toolbar button then click and hold down the left mouse button and move it around (Matlab).
Cortical surface visualization¶
The ENIGMA TOOLBOX comes equipped with fsaverage5 and Conte69 cortical midsurfaces and numerous parcellations. Following the examples below, we can easily map parcellated data (e.g., Desikan-Killiany) to fsaverage5 surface space (i.e., vertices). In the following example, we will display cortical atrophy in individuals with left TLE.
Prerequisites ↪ Load summary statistics, example data, or your own data ↪ Z-score data (mega only)
>>> from enigmatoolbox.utils.parcellation import parcel_to_surface
>>> from enigmatoolbox.plotting import plot_cortical
>>> # Map parcellated data to the surface
>>> CT_d_fsa5 = parcel_to_surface(CT_d, 'aparc_fsa5')
>>> # Project the results on the surface brain
>>> plot_cortical(array_name=CT_d_fsa5, surface_name="fsa5", size=(800, 400),
... cmap='RdBu_r', color_bar=True, color_range=(-0.5, 0.5))
% Map parcellated data to the surface
CT_d_fsa5 = parcel_to_surface(CT_d, 'aparc_fsa5');
% Project the results on the surface brain
f = figure,
plot_cortical(CT_d_fsa5, 'surface_name', 'fsa5', 'color_range', ...
[-0.5 0.5], 'cmap', 'RdBu_r')
>>> from enigmatoolbox.utils.parcellation import parcel_to_surface
>>> from enigmatoolbox.plotting import plot_cortical
>>> # Before visualizing the data, we need to map the parcellated data to the surface
>>> CT_z_mean_fsa5 = parcel_to_surface(CT_z_mean, 'aparc_fsa5')
>>> # Project the results on the surface brain
>>> plot_cortical(array_name=CT_z_mean_fsa5, surface_name="fsa5", size=(800, 400),
... cmap='Blues_r', color_bar=True, color_range=(-2, 0))
% Map parcellated data to the surface
CT_z_mean_fsa5 = parcel_to_surface(CT_z_mean{:, :}, 'aparc_fsa5');
% Project the results on the surface brain
f = figure,
plot_cortical(CT_z_mean_fsa5, 'surface_name', 'fsa5', 'color_range', ...
[-2 0], 'cmap', 'Blues_r')

Subcortical surface visualization¶
The ENIGMA TOOLBOX’s subcortical viewer includes 16 segmented subcortical structures obtained from the Desikan-Killiany atlas (aparc+aseg.mgz). Subcortical regions include bilateral accumbens, amygdala, caudate, hippocampus (technically not subcortical but considered as such by FreeSurfer), pallidum, putamen, thalamus, and ventricles. In the following example, we will display subcortical atrophy in individuals with left TLE.
We’ve mentioned this already, but don’t forget that…
Subcortical input values are ordered as follows: left-accumbens, left-amygdala, left-caudate, left-hippocampus,
left-pallidum, left-putamen, left-thalamus, left-ventricles, right-accumbens, right-amygdala, right-caudate, right-hippocampus,
right-pallidum, right-putamen, right-thalamus, right-ventricles! You can re-order your subcortical dataset using our reorder_sctx()
function.
*Ventricles are optional.
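As a quick sanity check, the expected ordering can be generated programmatically. This is a minimal sketch in which the label strings are illustrative only (the toolbox's internal region names may differ):

```python
# Build the 16-element subcortical ordering described above:
# all left-hemisphere structures first, then all right-hemisphere ones.
structures = ["accumbens", "amygdala", "caudate", "hippocampus",
              "pallidum", "putamen", "thalamus", "ventricles"]
sctx_order = ["{}-{}".format(hemi, s) for hemi in ("left", "right") for s in structures]

# A vector passed to plot_subcortical() should follow this order
print(len(sctx_order))                   # 16 values expected
print(sctx_order[0], sctx_order[8])      # left-accumbens right-accumbens
```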
Prerequisites ↪ Load summary statistics or example data ↪ Re-order subcortical data (mega only) ↪ Z-score data (mega only)
>>> from enigmatoolbox.plotting import plot_subcortical
>>> # Project the results on the surface brain
>>> plot_subcortical(array_name=SV_d, size=(800, 400),
... cmap='RdBu_r', color_bar=True, color_range=(-0.5, 0.5))
% Project the results on the surface brain
f = figure,
plot_subcortical(SV_d, 'color_range', [-0.5 0.5], 'cmap', 'RdBu_r')
>>> from enigmatoolbox.plotting import plot_subcortical
>>> # Project the results on the surface brain
>>> plot_subcortical(array_name=SV_z_mean, size=(800, 400),
... cmap='Blues_r', color_bar=True, color_range=(-3, 0))
% Project the results on the surface brain
f = figure,
plot_subcortical(SV_z_mean{:, :}, 'color_range', [-2 1], 'cmap', 'Blues_r')

Connectivity data¶
This page contains descriptions and examples to use Human Connectome Project (HCP) connectivity data.
Neuroimaging, particularly with functional and diffusion MRI, has become a leading tool to characterize
human brain network organization in vivo and to identify network alterations in brain disorders.
The ENIGMA TOOLBOX includes high-resolution structural (derived from diffusion-weighted tractography)
and functional (derived from resting-state functional MRI) connectivity data from a cohort of unrelated
healthy adults from the Human Connectome Project (HCP). Preprocessed cortico-cortical, subcortico-cortical,
and subcortico-subcortical functional and structural connectivity matrices can be easily retrieved using the
load_fc()
and load_sc()
functions.
High-resolution functional and structural data were parcellated according to the Desikan-Killiany, Glasser, as well as Schaefer 100, 200, 300, and 400 parcellations.
Normative functional connectivity matrices were generated by computing pairwise correlations between the time series of all cortical regions and subcortical (nucleus accumbens, amygdala, caudate, hippocampus, pallidum, putamen, thalamus) regions; negative connections were set to zero. Subject-specific connectivity matrices were then z-transformed and aggregated across participants to construct a group-average functional connectome. Available cortico-cortical, subcortico-cortical, and subcortico-subcortical matrices are unthresholded.
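The construction steps above can be sketched with synthetic time series (all dimensions and data here are stand-ins, not HCP values):

```python
import numpy as np

# Toy sketch of the group-average functional connectome construction
rng = np.random.default_rng(0)
n_subjects, n_timepoints, n_regions = 5, 200, 10

group = np.zeros((n_regions, n_regions))
for _ in range(n_subjects):
    ts = rng.standard_normal((n_timepoints, n_regions))  # stand-in regional time series
    r = np.corrcoef(ts, rowvar=False)   # pairwise correlations between regions
    np.fill_diagonal(r, 0)              # ignore self-connections
    r = np.clip(r, 0, None)             # negative connections set to zero
    group += np.arctanh(r)              # z-transform, then aggregate across subjects
group /= n_subjects                     # group-average functional connectome
```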
Normative structural connectivity matrices were generated from preprocessed diffusion MRI data using MRtrix3. Anatomically constrained tractography was performed using different tissue types derived from the T1-weighted image, including cortical and subcortical gray matter, white matter, and cerebrospinal fluid. Multi-shell and multi-tissue response functions were estimated, and constrained spherical deconvolution and intensity normalization were performed. The initial tractogram was generated with 40 million streamlines, with a maximum tract length of 250 mm and a fractional anisotropy cutoff of 0.06. Spherical-deconvolution informed filtering of tractograms (SIFT2) was applied to reconstruct whole-brain streamlines weighted by the cross-section multipliers. Reconstructed streamlines were mapped onto the 68 cortical and 14 subcortical (including hippocampus) regions to produce subject-specific structural connectivity matrices. The group-average normative structural connectome was defined using a distance-dependent thresholding procedure, which preserved the edge length distribution in individual participants, and was log-transformed to reduce connectivity strength variance. As such, structural connectivity was defined by the number of streamlines between two regions (i.e., fiber density).
Additional details on subject inclusion and data preprocessing are provided in the ENIGMA TOOLBOX preprint.
One matrix 🎤
Are you looking for subcortico-subcortical connectivity? Or do you want cortical and subcortical connectivity
in one matrix? If so, check out our functions: load_fc_as_one()
here and load_sc_as_one()
here!
Load cortical connectivity matrices¶
The ENIGMA TOOLBOX provides structural (diffusion MRI) and functional (resting-state functional MRI) connectivity matrices obtained from the HCP.
>>> from enigmatoolbox.datasets import load_sc, load_fc
>>> from nilearn import plotting
>>> # Load cortico-cortical functional connectivity data
>>> fc_ctx, fc_ctx_labels, _, _ = load_fc()
>>> # Load cortico-cortical structural connectivity data
>>> sc_ctx, sc_ctx_labels, _, _ = load_sc()
>>> # Plot cortico-cortical connectivity matrices
>>> fc_plot = plotting.plot_matrix(fc_ctx, figure=(9, 9), labels=fc_ctx_labels, vmax=0.8, vmin=0, cmap='Reds')
>>> sc_plot = plotting.plot_matrix(sc_ctx, figure=(9, 9), labels=sc_ctx_labels, vmax=10, vmin=0, cmap='Blues')
% Load cortico-cortical functional connectivity data
[fc_ctx, fc_ctx_labels, ~, ~] = load_fc();
% Load cortico-cortical structural connectivity data
[sc_ctx, sc_ctx_labels, ~, ~] = load_sc();
% Plot cortico-cortical connectivity matrices
f = figure,
imagesc(fc_ctx, [0 0.8]);
axis square;
colormap(Reds);
colorbar;
set(gca, 'YTick', 1:1:length(fc_ctx_labels), ...
'YTickLabel', fc_ctx_labels)
f = figure,
imagesc(sc_ctx, [0 10]);
axis square;
colormap(Blues);
colorbar;
set(gca, 'YTick', 1:1:length(sc_ctx_labels), ...
'YTickLabel', sc_ctx_labels)

Following the example below, we can also extract seed-based cortico-cortical connectivity, using the left middle temporal gyrus as example seed.
>>> from enigmatoolbox.utils.parcellation import parcel_to_surface
>>> from enigmatoolbox.plotting import plot_cortical
>>> # Extract seed-based connectivity
>>> seed = "L_middletemporal"
>>> seed_conn_fc = fc_ctx[[i for i, item in enumerate(fc_ctx_labels) if seed in item], ]
>>> seed_conn_sc = sc_ctx[[i for i, item in enumerate(sc_ctx_labels) if seed in item], ]
>>> # Map parcellated data to the surface
>>> seed_conn_fc_fsa5 = parcel_to_surface(seed_conn_fc, 'aparc_fsa5')
>>> seed_conn_sc_fsa5 = parcel_to_surface(seed_conn_sc, 'aparc_fsa5')
>>> # Project the results on the surface brain
>>> plot_cortical(array_name=seed_conn_fc_fsa5, surface_name="fsa5", size=(800, 400),
... cmap='Reds', color_bar=True, color_range=(0.2, 0.7))
>>> plot_cortical(array_name=seed_conn_sc_fsa5, surface_name="fsa5", size=(800, 400),
... cmap='Blues', color_bar=True, color_range=(2, 10))
% Extract seed-based connectivity
seed = 'L_middletemporal';
seed_conn_fc = fc_ctx(find(strcmp(fc_ctx_labels, seed)), :);
seed_conn_sc = sc_ctx(find(strcmp(sc_ctx_labels, seed)), :);
% Map parcellated data to the surface
seed_conn_fc_fsa5 = parcel_to_surface(seed_conn_fc, 'aparc_fsa5');
seed_conn_sc_fsa5 = parcel_to_surface(seed_conn_sc, 'aparc_fsa5');
% Project the results on the surface brain
f = figure,
plot_cortical(seed_conn_fc_fsa5, 'cmap', 'Reds', 'color_range', [0.2 0.7])
f = figure,
plot_cortical(seed_conn_sc_fsa5, 'cmap', 'Blues', 'color_range', [2 10])

Load subcortical connectivity matrices¶
>>> from enigmatoolbox.datasets import load_sc, load_fc
>>> from nilearn import plotting
>>> # Load subcortico-cortical functional connectivity data
>>> _, _, fc_sctx, fc_sctx_labels = load_fc()
>>> # Load subcortico-cortical structural connectivity data
>>> _, _, sc_sctx, sc_sctx_labels = load_sc()
>>> # Plot subcortico-cortical connectivity matrices
>>> fc_plot = plotting.plot_matrix(fc_sctx, figure=(9, 9), labels=fc_sctx_labels, vmax=0.5, vmin=0, cmap='Reds')
>>> sc_plot = plotting.plot_matrix(sc_sctx, figure=(9, 9), labels=sc_sctx_labels, vmax=10, vmin=0, cmap='Blues')
% Load subcortico-cortical functional connectivity data
[~, ~, fc_sctx, fc_sctx_labels] = load_fc();
% Load subcortico-cortical structural connectivity data
[~, ~, sc_sctx, sc_sctx_labels] = load_sc();
% Plot subcortico-cortical connectivity matrices
f = figure,
imagesc(fc_sctx, [0 0.5]);
axis square;
colormap(Reds);
colorbar;
set(gca, 'YTick', 1:1:length(fc_sctx_labels), ...
'YTickLabel', fc_sctx_labels)
f = figure,
imagesc(sc_sctx, [0 10]);
axis square;
colormap(Blues);
colorbar;
set(gca, 'YTick', 1:1:length(sc_sctx_labels), ...
'YTickLabel', sc_sctx_labels)

As described above, we can also extract seed-based subcortico-cortical connectivity, using the left hippocampus as example seed.
>>> from enigmatoolbox.plotting import plot_cortical
>>> # Extract seed-based connectivity
>>> seed = "Lhippo"
>>> seed_conn_fc = fc_sctx[[i for i, item in enumerate(fc_sctx_labels) if seed in item],]
>>> seed_conn_sc = sc_sctx[[i for i, item in enumerate(sc_sctx_labels) if seed in item],]
>>> # Map parcellated data to the surface
>>> seed_conn_fc_fsa5 = parcel_to_surface(seed_conn_fc, 'aparc_fsa5')
>>> seed_conn_sc_fsa5 = parcel_to_surface(seed_conn_sc, 'aparc_fsa5')
>>> # Project the results on the surface brain
>>> plot_cortical(array_name=seed_conn_fc_fsa5, surface_name="fsa5", size=(800, 400),
... cmap='Reds', color_bar=True, color_range=(0.1, 0.3))
>>> plot_cortical(array_name=seed_conn_sc_fsa5, surface_name="fsa5", size=(800, 400),
... cmap='Blues', color_bar=True, color_range=(1, 10))
% Extract seed-based connectivity
seed = 'Lhippo';
seed_conn_fc = fc_sctx(find(strcmp(fc_sctx_labels, seed)), :);
seed_conn_sc = sc_sctx(find(strcmp(sc_sctx_labels, seed)), :);
% Map parcellated data to the surface
seed_conn_fc_fsa5 = parcel_to_surface(seed_conn_fc, 'aparc_fsa5');
seed_conn_sc_fsa5 = parcel_to_surface(seed_conn_sc, 'aparc_fsa5');
% Project the results on the surface brain
f = figure,
plot_cortical(seed_conn_fc_fsa5, 'cmap', 'Reds', 'color_range', [0.1 0.3])
f = figure,
plot_cortical(seed_conn_sc_fsa5, 'cmap', 'Blues', 'color_range', [1 10])

Gene expression data¶
This page contains descriptions and examples to fetch microarray expression data.
Fetch gene expression data¶
The ENIGMA TOOLBOX provides microarray expression data collected from six human donor brains and released by the Allen Human Brain Atlas. Microarray expression data were first generated using abagen, a toolbox that provides reproducible workflows for processing and preparing gene co-expression data according to previously established recommendations (Arnatkevic̆iūtė et al., 2019, NeuroImage); preprocessing steps included intensity-based filtering of microarray probes, selection of a representative probe for each gene across both hemispheres, matching of microarray samples to brain parcels from the Desikan-Killiany, Glasser, and Schaefer parcellations, normalization, and aggregation within parcels and across donors. Moreover, genes whose similarity across donors fell below a threshold (r < 0.2) were removed, leaving a total of 12,668 genes for analysis (using the Desikan-Killiany atlas). To accommodate users, we also provide unthresholded gene datasets as well as datasets with varying stability thresholds (r ≥ 0.2, r ≥ 0.4, r ≥ 0.6, r ≥ 0.8) for every parcellation (https://github.com/saratheriver/enigma-extra).
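To illustrate the donor-stability criterion, here is a toy sketch with random stand-in data; the actual abagen workflow differs in detail, and all array sizes and names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n_donors, n_regions, n_genes = 6, 68, 40
# Stand-in donor-level expression arrays (one regions-by-genes matrix per donor)
donors = rng.standard_normal((n_donors, n_regions, n_genes))

# Stability of a gene = mean correlation of its regional expression
# profile across all pairs of donors
stability = np.zeros(n_genes)
n_pairs = 0
for i in range(n_donors):
    for j in range(i + 1, n_donors):
        for g in range(n_genes):
            stability[g] += np.corrcoef(donors[i, :, g], donors[j, :, g])[0, 1]
        n_pairs += 1
stability /= n_pairs

keep = stability >= 0.2   # e.g., the r >= 0.2 threshold used for the default dataset
```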
Wanna know where we got those genes? 👖
The Allen Human Brain Atlas microarray expression data loaded as part of the ENIGMA TOOLBOX was originally
fetched from the abagen toolbox using the abagen.get_expression_data()
command. For more flexibility, check out their toolbox!
Got NaNs? 🥛
Please note that two regions (right frontal pole and right temporal pole) in the Desikan-Killiany atlas were not matched to any tissue sample and thus are filled with NaN values in the data matrix.
Slow internet connection? 🐌
The command fetch_ahba()
fetches a large (~24 MB) microarray dataset from the internet and may thus be
incredibly slow to load if you lack a good connection. But don’t you worry: you can download the
relevant file by typing this command in your terminal wget https://github.com/saratheriver/enigma-extra/raw/master/ahba/allgenes_stable_r0.2.csv
and specifying its path in the fetch_ahba()
function as follows: fetch_ahba('/path/to/allgenes_stable_r0.2.csv')
>>> from enigmatoolbox.datasets import fetch_ahba
>>> # Fetch gene expression data
>>> genes = fetch_ahba()
>>> # Obtain region labels
>>> reglabels = genes['label']
>>> # Obtain gene labels
>>> genelabels = list(genes.columns)[1:]
% Fetch gene expression data
genes = fetch_ahba();
% Obtain region labels
reglabels = genes.label;
% Obtain gene labels
genelabels = genes.Properties.VariableNames(2:end);

BigBrain moments & gradient¶
This page contains descriptions and examples to stratify and visualize surface-based findings according to BigBrain statistical moments and gradient.
BigBrain is an ultra-high-resolution, 3D volumetric reconstruction of a postmortem Merker-stained and sliced human brain from a 65-year-old male, with specialized pial and white matter surface reconstructions (obtained via the open-access BigBrain repository). The postmortem brain was paraffin-embedded, coronally sliced into 7,400 20μm sections, silver-stained for cell bodies, and digitized. A 3D reconstruction was implemented with a successive coarse-to-fine hierarchical procedure, resulting in a full brain volume. For the ENIGMA TOOLBOX, we used the highest-resolution full brain volume (100μm isotropic voxels), then generated 50 equivolumetric surfaces between the pial and white matter surfaces. The equivolumetric model compensates for cortical folding by varying the Euclidean distance between pairs of intracortical surfaces throughout the cortex, thus preserving the fractional volume between surfaces. Next, staining intensity profiles, representing neuronal density and soma size by cortical depth, were sampled along 327,684 surface points in the direction of cortical columns.
BigBrain statistical moments¶
As part of the ENIGMA TOOLBOX, we parametrized cytoarchitectural properties of the BigBrain by using two central moments (i.e., mean and skewness) calculated across several cortical depths. In essence, studying the mean of BigBrain intensity profiles across the cortical mantle probes cellular/neuronal density, whereas analysis of skewness contrasts deep and superficial cortical layers, indexing the unevenness of cellular distribution, a critical dimension of laminar differentiation. Finally, the Desikan-Killiany atlas was nonlinearly transformed to the BigBrain histological surfaces (Lewis et al., 2019, OHBM) and central moments were averaged within each parcel, excluding outlier vertices with values more than three scaled median absolute deviations away from the parcel median.
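The moment computation can be sketched in a few lines with random stand-in profiles (numpy only; the outlier-exclusion step is omitted for brevity, and all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_depths, n_vertices, n_parcels = 50, 1000, 68
profiles = rng.gamma(2.0, 1.0, size=(n_depths, n_vertices))  # stand-in intensity profiles
labels = rng.integers(0, n_parcels, n_vertices)              # stand-in parcel labels

# Central moments per vertex across cortical depths
v_mean = profiles.mean(axis=0)                               # proxies cellular density
sd = profiles.std(axis=0)
v_skew = (((profiles - v_mean) / sd) ** 3).mean(axis=0)      # proxies unevenness of
                                                             # cellular distribution

# Average moments within each parcel (outlier exclusion omitted here)
parcel_mean = np.array([v_mean[labels == p].mean() for p in range(n_parcels)])
parcel_skew = np.array([v_skew[labels == p].mean() for p in range(n_parcels)])
```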

In the following example, we first threshold and display a cortical thickness map to highlight areas of marked atrophy in patients vs. controls (using left TLE vs. controls for the example below).
Prerequisites ↪ Load summary statistics or example data ↪ Z-score data (mega only)
>>> import numpy as np
>>> from enigmatoolbox.plotting import plot_cortical
>>> from enigmatoolbox.utils.parcellation import parcel_to_surface
>>> # Extract FDR-corrected p-values and find regions with p < 0.01
>>> region_idx = np.where(CT['fdr_p'].to_numpy() <= 0.01)
>>> # Visualize thresholded Cohen's d map
>>> CT_d_thr = np.zeros((68,))
>>> CT_d_thr[region_idx] = CT_d.iloc[region_idx]
>>> plot_cortical(array_name=parcel_to_surface(CT_d_thr, 'aparc_fsa5'), surface_name="fsa5", size=(800, 400),
... cmap='RdBu_r', color_bar=True, color_range=(-0.5, 0.5))
% Extract FDR-corrected p-values and find regions with p < 0.01
region_idx = find(CT.fdr_p <= 0.01);
% Visualize thresholded Cohen's d map
CT_d_thr = zeros(length(CT_d), 1);
CT_d_thr(region_idx) = CT_d(region_idx);
f = figure,
plot_cortical(parcel_to_surface(CT_d_thr), 'color_range', [-0.5 0.5])
>>> import numpy as np
>>> from enigmatoolbox.plotting import plot_cortical
>>> from enigmatoolbox.utils.parcellation import parcel_to_surface
>>> # Extract regions with z-score < -1
>>> region_idx = np.where(CT_z_mean.to_numpy() <= -1)
>>> # Visualize z-score thresholded map
>>> CT_z_mean_thr = np.zeros((68,))
>>> CT_z_mean_thr[region_idx] = CT_z_mean.iloc[region_idx]
>>> plot_cortical(array_name=parcel_to_surface(CT_z_mean_thr, 'aparc_fsa5'), surface_name="fsa5",
... size=(800, 400), cmap='RdBu_r', color_bar=True, color_range=(-1, 1))
% Extract regions with z-score < -1
region_idx = find(CT_z_mean{:, :} <= -1);
% Visualize z-score thresholded map
CT_z_mean_thr = zeros(length(CT_z_mean{:, :}), 1);
CT_z_mean_thr(region_idx) = CT_z_mean{:, :}(region_idx);
f = figure,
plot_cortical(parcel_to_surface(CT_z_mean_thr), 'color_range', [-1 1])
From the following code snippet, we can then contextualize and visualize these marked patterns of atrophy with respect to intensity profiles reflecting microstructural composition (e.g., cellular density, cellular distribution asymmetry) along cortical columns.
Prerequisites ↪ Load summary statistics or example data ↪ Z-score data (mega only) ↪ Threshold surface maps
>>> from enigmatoolbox.histology import bb_moments_raincloud
>>> # Stratify and plot results according to BigBrain statistical moments
>>> bb_moments_raincloud(region_idx=region_idx)
% Stratify and plot results according to BigBrain statistical moments
f = figure,
bb_moments_raincloud(region_idx)

BigBrain gradient¶
In addition to statistical moments, we also incorporated the BigBrain microstructural profile covariance (MPC) gradient from the original publication (Paquola et al., 2019, PLoS Biol). In brief, the authors derived an MPC matrix by correlating BigBrain intensity profiles between every pair of regions in a 1,012 cortical node parcellation, controlling for the average whole-cortex intensity profile. The MPC matrix was thresholded row-wise to retain the top 10% of correlations and converted into a normalized angle matrix. Diffusion map embedding, a nonlinear manifold learning technique, identified the principal axis of variation across cortical areas, i.e., the BigBrain gradient. In this space, cortical nodes that are strongly similar are closer together, whereas nodes with little to no intercovariance are farther apart.
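The row-wise thresholding and normalized-angle steps can be sketched as follows (a toy stand-in matrix; the diffusion map embedding itself is left out, and all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
mpc = np.corrcoef(rng.standard_normal((n, 200)))   # stand-in MPC matrix

# Row-wise threshold: keep each region's top 10% of correlations
row_thr = np.percentile(mpc, 90, axis=1, keepdims=True)
mpc_thr = np.where(mpc >= row_thr, mpc, 0.0)

# Normalized-angle affinity between thresholded rows:
# 1 = identical profiles, 0 = maximally dissimilar
norms = np.linalg.norm(mpc_thr, axis=1, keepdims=True)
cosine = np.clip(mpc_thr @ mpc_thr.T / (norms * norms.T), -1.0, 1.0)
affinity = 1.0 - np.arccos(cosine) / np.pi
# Diffusion map embedding of `affinity` would then yield the principal gradient
```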
To allow contextualization of ENIGMA-derived surface-based findings, we mapped the BigBrain gradient, which describes a sensory-fugal transition in intracortical microstructure, to the Desikan-Killiany atlas and partitioned it into five equally sized discrete bins. Stratifying cortical findings relative to this gradient can, for example, test whether patterns of changes are conspicuous in cortices with marked laminar differentiation (e.g., 1st bin; sensory and motor cortices) or in those with subtle laminar differentiation (e.g., 5th bin; limbic cortices).
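A minimal sketch of the binning logic, using random stand-in values (the toolbox's own binning may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(4)
gradient = rng.standard_normal(68)   # stand-in BigBrain gradient value per DK region
ct_d = rng.standard_normal(68)       # stand-in cortical map (e.g., Cohen's d)

# Sort regions along the gradient and split into five (near-)equal bins:
# bin 1 ~ marked laminar differentiation, bin 5 ~ subtle laminar differentiation
order = np.argsort(gradient)
bins = np.array_split(order, 5)
bin_means = np.array([ct_d[idx].mean() for idx in bins])
```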

In the following example, we can use our thresholded (or unthresholded) cortical map (e.g., cortical thickness effect sizes) to contextualize and visualize patterns of marked atrophy with respect to each gradient bin.
Prerequisites ↪ Load summary statistics or example data ↪ Z-score data (mega only) ↪ Threshold surface maps
>>> import numpy as np
>>> from enigmatoolbox.histology import bb_gradient_plot
>>> # Stratify and plot results according to the BigBrain gradient
>>> bb_gradient_plot(data=np.where(CT_d_thr == 0, np.nan, CT_d_thr),
... axis_range=(-0.6, 0.25), yaxis_label='Cohen\'s $d$')
% Stratify and plot results according to the BigBrain gradient
CT_d_thr(CT_d_thr == 0) = nan;
f = figure,
bb_gradient_plot(CT_d_thr, 'axis_range', [-0.6 0.25], ...
'yaxis_label', 'Cohen''s {\it d}')
>>> import numpy as np
>>> from enigmatoolbox.histology import bb_gradient_plot
>>> # Stratify and plot results according to the BigBrain gradient
>>> bb_gradient_plot(data=np.where(CT_z_mean_thr == 0, np.nan, CT_z_mean_thr),
... axis_range=(-2, -0.75), yaxis_label='$z$-score')
% Stratify and plot results according to the BigBrain gradient
CT_z_mean_thr(CT_z_mean_thr == 0) = nan;
f = figure,
bb_gradient_plot(CT_z_mean_thr, 'axis_range', [-2 -0.75], ...
'yaxis_label', '{\it z}-score')

Economo-Koskinas cytoarchitectonics¶
This page contains descriptions and examples to stratify and visualize surface-based findings according to cytoarchitectural variation.
Economo-Koskinas stratification¶
As part of the ENIGMA TOOLBOX, we included a digitized parcellation of von Economo and Koskinas' cytoarchitectonic mapping of the human cerebral cortex, from which five structural types of cerebral cortex can be described: i) agranular (purple; thick with large cells but sparse layers II and IV), ii) frontal (blue; thick but not rich in cellular substance, visible layers II and IV), iii) parietal (green; thick and rich in cells with dense layers II and IV but small and slender pyramidal cells), iv) polar (orange; thin but rich in cells, particularly in granular layers), and v) granular or koniocortex (yellow; thin but rich in small cells, even in layer IV, and a rarefied layer V).
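Under the hood, stratifying a map by these classes amounts to averaging its values within each of the five structural types. A minimal sketch, with randomly generated stand-ins for the cortical map and the class labels (the actual atlas labels ship with the toolbox; names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
ct_map = rng.standard_normal(68)  # stand-in cortical map (68 DK regions)
# Stand-in Economo-Koskinas labels (1-5), guaranteed to cover all five classes
ek_class = rng.permutation(np.tile(np.arange(1, 6), 14)[:68])

classes = ['agranular', 'frontal', 'parietal', 'polar', 'granular']
# Average the map within each cytoarchitectonic class
class_mean = {name: float(np.nanmean(ct_map[ek_class == c + 1]))
              for c, name in enumerate(classes)}
print(class_mean)
```

The five resulting class means are the quantities plotted along the spokes of the spider plot shown below.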

In the following example, we contextualized disease-related cortical atrophy patterns with respect to the well-established von Economo and Koskinas cytoarchitectonic atlas by summarizing cortex-wide effects across each of the five structural types of isocortex. To ease interpretation, stratification of findings based on the von Economo and Koskinas atlas is also visualized in a spider plot. Here, negative values (towards the center) represent greater atrophy in disease cases relative to healthy controls.
Prerequisites ↪ Load summary statistics or example data ↪ Z-score data (mega only)
>>> from enigmatoolbox.histology import economo_koskinas_spider
>>> # Stratify cortical atrophy based on Economo-Koskinas classes
>>> class_mean = economo_koskinas_spider(CT_d, axis_range=(-0.4, 0))
% Stratify cortical atrophy based on Economo-Koskinas classes
class_mean = economo_koskinas_spider(CT_d, 'axis_range', [-0.4 0])
>>> from enigmatoolbox.histology import economo_koskinas_spider
>>> # Stratify cortical atrophy based on Economo-Koskinas classes
>>> class_mean = economo_koskinas_spider(CT_z_mean, axis_range=(-1, 0))
% Stratify cortical atrophy based on Economo-Koskinas classes
class_mean = economo_koskinas_spider(CT_z_mean{:, :}, 'axis_range', [-1 0])

Hub susceptibility¶
This page contains descriptions and examples to build hub susceptibility models. For additional details on hub susceptibility models, please see our manuscript entitled Network-based atrophy modeling in the common epilepsies: a worldwide ENIGMA study.
Cortical hubs¶
Normative structural and functional connectomes hold valuable information for relating macroscopic brain network organization to patterns of disease-related atrophy. Using the HCP connectivity data, we can first compute weighted degree centrality (optimal for unthresholded connectivity matrices) to identify structural and functional hub regions (i.e., brain regions with many connections). This is done by simply computing the sum of all weighted cortico-cortical connections for every region. Higher degree centrality denotes increased hubness.
Prerequisites ↪ Load cortico-cortical connectivity matrices
>>> import numpy as np
>>> from enigmatoolbox.plotting import plot_cortical
>>> from enigmatoolbox.utils.parcellation import parcel_to_surface
>>> # Compute weighted degree centrality measures from the connectivity data
>>> fc_ctx_dc = np.sum(fc_ctx, axis=0)
>>> sc_ctx_dc = np.sum(sc_ctx, axis=0)
>>> # Map parcellated data to the surface
>>> fc_ctx_dc_fsa5 = parcel_to_surface(fc_ctx_dc, 'aparc_fsa5')
>>> sc_ctx_dc_fsa5 = parcel_to_surface(sc_ctx_dc, 'aparc_fsa5')
>>> # Project the results on the surface brain
>>> plot_cortical(array_name=fc_ctx_dc_fsa5, surface_name="fsa5", size=(800, 400),
... cmap='Reds', color_bar=True, color_range=(20, 30))
>>> plot_cortical(array_name=sc_ctx_dc_fsa5, surface_name="fsa5", size=(800, 400),
... cmap='Blues', color_bar=True, color_range=(100, 300))
% Compute weighted degree centrality measures from the connectivity data
fc_ctx_dc = sum(fc_ctx);
sc_ctx_dc = sum(sc_ctx);
% Map parcellated data to the surface
fc_ctx_dc_fsa5 = parcel_to_surface(fc_ctx_dc, 'aparc_fsa5');
sc_ctx_dc_fsa5 = parcel_to_surface(sc_ctx_dc, 'aparc_fsa5');
% Project the results on the surface brain
f = figure,
plot_cortical(fc_ctx_dc_fsa5, 'surface_name', 'fsa5', 'color_range', [20 30], ...
'cmap', 'Reds', 'label_text', 'Functional degree centrality')
f = figure,
plot_cortical(sc_ctx_dc_fsa5, 'surface_name', 'fsa5', 'color_range', [100 300], ...
'cmap', 'Blues', 'label_text', 'Structural degree centrality')

Subcortical hubs¶
The HCP connectivity data can also be used to identify structural and functional subcortico-cortical hub regions. As above, we simply compute the sum of all weighted subcortico-cortical connections for every subcortical area. Once again, higher degree centrality denotes increased hubness.
No ventricles, no problem 👌🏼
Because we do not have connectivity values for the ventricles, make sure to set the "ventricles" flag to False when displaying findings on the subcortical surfaces.
Prerequisites ↪ Load subcortico-cortical connectivity matrices
>>> import numpy as np
>>> from enigmatoolbox.plotting import plot_subcortical
>>> # Compute weighted degree centrality measures from the connectivity data
>>> fc_sctx_dc = np.sum(fc_sctx, axis=1)
>>> sc_sctx_dc = np.sum(sc_sctx, axis=1)
>>> # Project the results on the surface brain
>>> plot_subcortical(array_name=fc_sctx_dc, ventricles=False, size=(800, 400),
... cmap='Reds', color_bar=True, color_range=(5, 10))
>>> plot_subcortical(array_name=sc_sctx_dc, ventricles=False, size=(800, 400),
... cmap='Blues', color_bar=True, color_range=(100, 300))
% Compute weighted degree centrality measures from the connectivity data
fc_sctx_dc = sum(fc_sctx, 2);
sc_sctx_dc = sum(sc_sctx, 2);
% Project the results on the surface brain
f = figure,
plot_subcortical(fc_sctx_dc, 'ventricles', 'False', 'color_range', [5 10], ...
'cmap', 'Reds', 'label_text', 'Functional degree centrality')
f = figure,
plot_subcortical(sc_sctx_dc, 'ventricles', 'False', 'color_range', [100 300], ...
'cmap', 'Blues', 'label_text', 'Structural degree centrality')

Hub-atrophy correlations¶
Now that we have established the spatial distribution of hubs in the brain, we can then assess whether these hub regions are selectively vulnerable to syndrome-specific atrophy patterns. For simplicity, in the following example, we will spatially correlate degree centrality measures to measures of cortical and subcortical atrophy (where lower values indicate greater atrophy relative to controls).
Prerequisites ↪ Load summary statistics or example data ↪ Re-order subcortical data (mega only) ↪ Z-score data (mega only) ↪ Load cortico-cortical and subcortico-cortical connectivity matrices ↪ Compute cortico-cortical and subcortico-cortical degree centrality
>>> import numpy as np
>>> # Remove subcortical values corresponding to the ventricles
>>> # (as we don't have connectivity values for them!)
>>> SV_d_noVent = SV_d.drop([np.where(SV['Structure'] == 'LLatVent')[0][0],
... np.where(SV['Structure'] == 'RLatVent')[0][0]]).reset_index(drop=True)
>>> # Perform spatial correlations between functional hubs and Cohen's d
>>> fc_ctx_r = np.corrcoef(fc_ctx_dc, CT_d)[0, 1]
>>> fc_sctx_r = np.corrcoef(fc_sctx_dc, SV_d_noVent)[0, 1]
>>> # Perform spatial correlations between structural hubs and Cohen's d
>>> sc_ctx_r = np.corrcoef(sc_ctx_dc, CT_d)[0, 1]
>>> sc_sctx_r = np.corrcoef(sc_sctx_dc, SV_d_noVent)[0, 1]
>>> # Store correlation coefficients
>>> rvals = {'functional cortical hubs': fc_ctx_r, 'functional subcortical hubs': fc_sctx_r,
... 'structural cortical hubs': sc_ctx_r, 'structural subcortical hubs': sc_sctx_r}
% Remove subcortical values corresponding to the ventricles
% (as we don't have connectivity values for them!)
SV_d_noVent = SV_d;
SV_d_noVent([find(strcmp(SV.Structure, 'LLatVent')); ...
find(strcmp(SV.Structure, 'RLatVent'))], :) = [];
% Perform spatial correlations between cortical hubs and Cohen's d
fc_ctx_r = corrcoef(fc_ctx_dc, CT_d);
sc_ctx_r = corrcoef(sc_ctx_dc, CT_d);
% Perform spatial correlations between subcortical hubs and Cohen's d
fc_sctx_r = corrcoef(fc_sctx_dc, SV_d_noVent);
sc_sctx_r = corrcoef(sc_sctx_dc, SV_d_noVent);
% Store correlation coefficients
rvals = cell2struct({fc_ctx_r(1, 2), fc_sctx_r(1, 2), sc_ctx_r(1, 2), sc_sctx_r(1, 2)}, ...
{'functional_cortical_hubs', 'functional_subcortical_hubs', ...
'structural_cortical_hubs', 'structural_subcortical_hubs'}, 2);
>>> import numpy as np
>>> # Remove subcortical values corresponding to the ventricles
>>> # (as we don't have connectivity values for them!)
>>> SV_z_mean_noVent = SV_z_mean.drop(['LLatVent', 'RLatVent']).reset_index(drop=True)
>>> # Perform spatial correlations between functional hubs and z-scores
>>> fc_ctx_r = np.corrcoef(fc_ctx_dc, CT_z_mean)[0, 1]
>>> fc_sctx_r = np.corrcoef(fc_sctx_dc, SV_z_mean_noVent)[0, 1]
>>> # Perform spatial correlations between structural hubs and z-scores
>>> sc_ctx_r = np.corrcoef(sc_ctx_dc, CT_z_mean)[0, 1]
>>> sc_sctx_r = np.corrcoef(sc_sctx_dc, SV_z_mean_noVent)[0, 1]
>>> # Store correlation coefficients
>>> rvals = {'functional cortical hubs': fc_ctx_r, 'functional subcortical hubs': fc_sctx_r,
... 'structural cortical hubs': sc_ctx_r, 'structural subcortical hubs': sc_sctx_r}
% Remove subcortical values corresponding to the ventricles
% (as we don't have connectivity values for them!)
SV_z_mean_noVent = SV_z_mean;
SV_z_mean_noVent.LLatVent = [];
SV_z_mean_noVent.RLatVent = [];
% Perform spatial correlations between cortical hubs and z-scores
fc_ctx_r = corrcoef(fc_ctx_dc, CT_z_mean{:, :});
sc_ctx_r = corrcoef(sc_ctx_dc, CT_z_mean{:, :});
% Perform spatial correlations between subcortical hubs and z-scores
fc_sctx_r = corrcoef(fc_sctx_dc, SV_z_mean_noVent{:, :});
sc_sctx_r = corrcoef(sc_sctx_dc, SV_z_mean_noVent{:, :});
% Store correlation coefficients
rvals = cell2struct({fc_ctx_r(1, 2), fc_sctx_r(1, 2), sc_ctx_r(1, 2), sc_sctx_r(1, 2)}, ...
{'functional_cortical_hubs', 'functional_subcortical_hubs', ...
'structural_cortical_hubs', 'structural_subcortical_hubs'}, 2);
Plot hub-atrophy correlations¶
Now that we have done all the necessary analyses, we can finally display our correlations. Here, a negative correlation indicates that greater atrophy correlates with the spatial distribution of hub regions (greater degree centrality).
Prerequisites The script below can be used to show relationships between any two variables, for example: degree centrality vs. atrophy ↪ Load summary statistics or example data ↪ Re-order subcortical data (mega only) ↪ Z-score data (mega only) ↪ Load cortico-cortical and subcortico-cortical connectivity matrices ↪ Compute cortico-cortical and subcortico-cortical degree centrality ↪ Assess statistical significance via spin permutation tests
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> # Store degree centrality and atrophy measures
>>> meas = {('functional cortical hubs', 'cortical thickness'): [fc_ctx_dc, CT_d],
... ('functional subcortical hubs', 'subcortical volume'): [fc_sctx_dc, SV_d_noVent],
... ('structural cortical hubs', 'cortical thickness'): [sc_ctx_dc, CT_d],
... ('structural subcortical hubs', 'subcortical volume'): [sc_sctx_dc, SV_d_noVent]}
>>> fig, axs = plt.subplots(1, 4, figsize=(15, 3))
>>> for k, (fn, dd) in enumerate(meas.items()):
...     # Define scatter colors
...     if k <= 1:
...         col = '#A8221C'
...     else:
...         col = '#324F7D'
...     # Plot relationships between hubs and atrophy
...     axs[k].scatter(dd[0], dd[1], color=col,
...                    label='$r$={:.2f}'.format(rvals[fn[0]]) + '\n$p$={:.3f}'.format(p_and_d[fn[0]][0]))
...     m, b = np.polyfit(dd[0], dd[1], 1)
...     axs[k].plot(dd[0], m * dd[0] + b, color=col)
...     axs[k].set_ylim((-1, 0.5))
...     axs[k].set_xlabel(fn[0].capitalize())
...     axs[k].set_ylabel(fn[1].capitalize())
...     axs[k].spines['top'].set_visible(False)
...     axs[k].spines['right'].set_visible(False)
...     axs[k].legend(loc=1, frameon=False, markerscale=0)
>>> fig.tight_layout()
>>> plt.show()
% Store degree centrality measures
meas = cell2struct({fc_ctx_dc.', fc_sctx_dc, sc_ctx_dc.', sc_sctx_dc}, ...
{'Functional_cortical_hubs', 'Functional_subcortical_hubs', ...
'Structural_cortical_hubs', 'Structural_subcortical_hubs'}, 2);
fns = fieldnames(meas);
% Store atrophy measures
meas2 = cell2struct({CT_d, SV_d_noVent}, {'Cortical_thickness', 'Subcortical_volume'}, 2);
fns2 = fieldnames(meas2);
f = figure,
set(gcf,'color','w');
set(gcf,'units','normalized','position',[0 0 1 0.3])
k2 = [1 2 1 2];
for k = 1:numel(fieldnames(meas))
j = k2(k);
% Define plot colors
if k <= 2; col = [0.66 0.13 0.11]; else; col = [0.2 0.33 0.49]; end
% Plot relationships between hubs and atrophy
axs = subplot(1, 4, k); hold on
s = scatter(meas.(fns{k}), meas2.(fns2{j}), 88, col, 'filled');
P1 = polyfit(meas.(fns{k}), meas2.(fns2{j}), 1);
yfit_1 = P1(1) * meas.(fns{k}) + P1(2);
plot(meas.(fns{k}), yfit_1, 'color',col, 'LineWidth', 3)
ylim([-1 0.5])
xlabel(strrep(fns{k}, '_', ' '))
ylabel(strrep(fns2{j}, '_', ' '))
legend(s, ['{\it r}=' num2str(round(rvals.(lower(fns{k})), 2)) newline ...
'{\it p}=' num2str(round(p_and_d.(lower(fns{k}))(1), 3))])
legend boxoff
end
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> # Store degree centrality and atrophy measures
>>> meas = {('functional cortical hubs', 'cortical thickness'): [fc_ctx_dc, CT_z_mean],
... ('functional subcortical hubs', 'subcortical volume'): [fc_sctx_dc, SV_z_mean_noVent],
... ('structural cortical hubs', 'cortical thickness'): [sc_ctx_dc, CT_z_mean],
... ('structural subcortical hubs', 'subcortical volume'): [sc_sctx_dc, SV_z_mean_noVent]}
>>> fig, axs = plt.subplots(1, 4, figsize=(15, 3))
>>> for k, (fn, dd) in enumerate(meas.items()):
...     # Define scatter colors
...     if k <= 1:
...         col = '#A8221C'
...     else:
...         col = '#324F7D'
...     # Plot relationships between hubs and atrophy
...     axs[k].scatter(dd[0], dd[1], color=col,
...                    label='$r$={:.2f}'.format(rvals[fn[0]]) + '\n$p$={:.3f}'.format(p_and_d[fn[0]][0]))
...     m, b = np.polyfit(dd[0], dd[1], 1)
...     axs[k].plot(dd[0], m * dd[0] + b, color=col)
...     axs[k].set_ylim((-3.5, 1.5))
...     axs[k].set_xlabel(fn[0].capitalize())
...     axs[k].set_ylabel(fn[1].capitalize())
...     axs[k].spines['top'].set_visible(False)
...     axs[k].spines['right'].set_visible(False)
...     axs[k].legend(loc=1, frameon=False, markerscale=0)
>>> fig.tight_layout()
>>> plt.show()
% Store degree centrality measures
meas = cell2struct({fc_ctx_dc, fc_sctx_dc.', sc_ctx_dc, sc_sctx_dc.'}, ...
{'Functional_cortical_hubs', 'Functional_subcortical_hubs', ...
'Structural_cortical_hubs', 'Structural_subcortical_hubs'}, 2);
fns = fieldnames(meas);
% Store atrophy measures
meas2 = cell2struct({CT_z_mean{:, :}, SV_z_mean_noVent{:, :}}, ...
{'Cortical_thickness', 'Subcortical_volume'}, 2);
fns2 = fieldnames(meas2);
f = figure,
set(gcf,'color','w');
set(gcf,'units','normalized','position',[0 0 1 0.3])
k2 = [1 2 1 2];
for k = 1:numel(fieldnames(meas))
j = k2(k);
% Define plot colors
if k <= 2; col = [0.66 0.13 0.11]; else; col = [0.2 0.33 0.49]; end
% Plot relationships between hubs and atrophy
axs = subplot(1, 4, k); hold on
s = scatter(meas.(fns{k}), meas2.(fns2{j}), 88, col, 'filled');
P1 = polyfit(meas.(fns{k}), meas2.(fns2{j}), 1);
yfit_1 = P1(1) * meas.(fns{k}) + P1(2);
plot(meas.(fns{k}), yfit_1, 'color',col, 'LineWidth', 3)
ylim([-3 1.5])
xlabel(strrep(fns{k}, '_', ' '))
ylabel(strrep(fns2{j}, '_', ' '))
legend(s, ['{\it r}=' num2str(round(rvals.(lower(fns{k})), 2)) newline ...
'{\it p}=' num2str(round(p_and_d.(lower(fns{k}))(1), 3))])
legend boxoff
end

Epicenter mapping¶
This page contains descriptions and examples to identify disease epicenters. For additional details on disease epicenter mapping, please see our manuscript entitled Network-based atrophy modeling in the common epilepsies: a worldwide ENIGMA study.
Cortical epicenters¶
Using the HCP connectivity data, we can also identify epicenters of cortical atrophy. Disease epicenters are regions whose functional and/or structural connectivity profile spatially resembles a given disease-related atrophy map. Hence, disease epicenters can be identified by spatially correlating every region’s healthy functional and/or structural connectivity profiles (i.e., cortico-cortical connectivity) to whole-brain atrophy patterns in a given disease. This approach must be repeated systematically across the whole brain, assessing the statistical significance of the spatial similarity of every region’s functional and/or structural connectivity profiles to disease-specific abnormality maps with spatial permutation tests. Cortical and subcortical (see tutorial below) epicenter regions can then be identified if their connectivity profiles are significantly correlated with the disease-specific abnormality map. Regardless of its atrophy level, a cortical (or subcortical) region could potentially be an epicenter if it is (i) strongly connected to other high-atrophy regions and (ii) weakly connected to low-atrophy regions.
In this tutorial, our atrophy map was derived from cortical thickness decreases in individuals with left TLE.
Epicenters? 🤔
Cortical and subcortical epicenter regions are identified if their connectivity profiles correlate with a disease-specific cortical atrophy map. In the following examples, regions with strong negative correlations represent disease epicenters. Moreover, and regardless of its atrophy level, a cortical or subcortical region can be an epicenter if it is (i) strongly connected to other high-atrophy cortical regions and (ii) weakly connected to low-atrophy cortical regions.
Prerequisites ↪ Load summary statistics or example data ↪ Z-score data (mega only) ↪ Load cortico-cortical connectivity matrices
>>> import numpy as np
>>> from enigmatoolbox.permutation_testing import spin_test
>>> # Identify cortical epicenters (from functional connectivity)
>>> fc_ctx_epi = []
>>> fc_ctx_epi_p = []
>>> for seed in range(fc_ctx.shape[0]):
...     seed_con = fc_ctx[:, seed]
...     fc_ctx_epi = np.append(fc_ctx_epi, np.corrcoef(seed_con, CT_d)[0, 1])
...     fc_ctx_epi_p = np.append(fc_ctx_epi_p,
...                              spin_test(seed_con, CT_d, surface_name='fsa5', parcellation_name='aparc',
...                                        type='pearson', n_rot=1000, null_dist=False))
>>> # Identify cortical epicenters (from structural connectivity)
>>> sc_ctx_epi = []
>>> sc_ctx_epi_p = []
>>> for seed in range(sc_ctx.shape[0]):
...     seed_con = sc_ctx[:, seed]
...     sc_ctx_epi = np.append(sc_ctx_epi, np.corrcoef(seed_con, CT_d)[0, 1])
...     sc_ctx_epi_p = np.append(sc_ctx_epi_p,
...                              spin_test(seed_con, CT_d, surface_name='fsa5', parcellation_name='aparc',
...                                        type='pearson', n_rot=1000, null_dist=False))
% Identify cortical epicenter values (from functional connectivity)
fc_ctx_epi = zeros(size(fc_ctx, 1), 1);
fc_ctx_epi_p = zeros(size(fc_ctx, 1), 1);
for seed = 1:size(fc_ctx, 1)
seed_conn = fc_ctx(:, seed);
r_tmp = corrcoef(seed_conn, CT_d);
fc_ctx_epi(seed) = r_tmp(1, 2);
fc_ctx_epi_p(seed) = spin_test(seed_conn, CT_d, 'surface_name', 'fsa5', 'parcellation_name', ...
'aparc', 'n_rot', 1000, 'type', 'pearson');
end
% Identify cortical epicenter values (from structural connectivity)
sc_ctx_epi = zeros(size(sc_ctx, 1), 1);
sc_ctx_epi_p = zeros(size(sc_ctx, 1), 1);
for seed = 1:size(sc_ctx, 1)
seed_conn = sc_ctx(:, seed);
r_tmp = corrcoef(seed_conn, CT_d);
sc_ctx_epi(seed) = r_tmp(1, 2);
sc_ctx_epi_p(seed) = spin_test(seed_conn, CT_d, 'surface_name', 'fsa5', 'parcellation_name', ...
'aparc', 'n_rot', 1000, 'type', 'pearson');
end
>>> import numpy as np
>>> from enigmatoolbox.permutation_testing import spin_test
>>> # Identify cortical epicenters (from functional connectivity)
>>> fc_ctx_epi = []
>>> fc_ctx_epi_p = []
>>> for seed in range(fc_ctx.shape[0]):
...     seed_con = fc_ctx[:, seed]
...     fc_ctx_epi = np.append(fc_ctx_epi, np.corrcoef(seed_con, CT_z_mean)[0, 1])
...     fc_ctx_epi_p = np.append(fc_ctx_epi_p,
...                              spin_test(seed_con, CT_z_mean, surface_name='fsa5', parcellation_name='aparc',
...                                        type='pearson', n_rot=1000, null_dist=False))
>>> # Identify cortical epicenters (from structural connectivity)
>>> sc_ctx_epi = []
>>> sc_ctx_epi_p = []
>>> for seed in range(sc_ctx.shape[0]):
...     seed_con = sc_ctx[:, seed]
...     sc_ctx_epi = np.append(sc_ctx_epi, np.corrcoef(seed_con, CT_z_mean)[0, 1])
...     sc_ctx_epi_p = np.append(sc_ctx_epi_p,
...                              spin_test(seed_con, CT_z_mean, surface_name='fsa5', parcellation_name='aparc',
...                                        type='pearson', n_rot=1000, null_dist=False))
% Identify cortical epicenter values (from functional connectivity)
fc_ctx_epi = zeros(size(fc_ctx, 1), 1);
fc_ctx_epi_p = zeros(size(fc_ctx, 1), 1);
for seed = 1:size(fc_ctx, 1)
seed_conn = fc_ctx(:, seed);
r_tmp = corrcoef(seed_conn, CT_z_mean{:, :});
fc_ctx_epi(seed) = r_tmp(1, 2);
fc_ctx_epi_p(seed) = spin_test(seed_conn, CT_z_mean{:, :}, 'surface_name', 'fsa5', ...
'parcellation_name', 'aparc', 'n_rot', 1000, 'type', 'pearson');
end
% Identify cortical epicenter values (from structural connectivity)
sc_ctx_epi = zeros(size(sc_ctx, 1), 1);
sc_ctx_epi_p = zeros(size(sc_ctx, 1), 1);
for seed = 1:size(sc_ctx, 1)
seed_conn = sc_ctx(:, seed);
r_tmp = corrcoef(seed_conn, CT_z_mean{:, :});
sc_ctx_epi(seed) = r_tmp(1, 2);
sc_ctx_epi_p(seed) = spin_test(seed_conn, CT_z_mean{:, :}, 'surface_name', 'fsa5', ...
'parcellation_name', 'aparc', 'n_rot', 1000, 'type', 'pearson');
end
As we have assessed the significance of every spatial correlation between seed-based cortico-cortical connectivity and cortical atrophy measures using spin permutation tests, we can set a significance threshold to identify disease epicenters. In the following example, we set a lenient threshold of p < 0.1 (i.e., correlation coefficients were set to zero for regions whose p-values exceeded 0.1). We thus display only correlation coefficients that pass at least this lenient threshold.
>>> import numpy as np
>>> from enigmatoolbox.utils.parcellation import parcel_to_surface
>>> from enigmatoolbox.plotting import plot_cortical
>>> # Project the results on the surface brain
>>> # Selecting only regions with p < 0.1 (functional epicenters)
>>> fc_ctx_epi_p_sig = np.zeros_like(fc_ctx_epi_p)
>>> fc_ctx_epi_p_sig[np.argwhere(fc_ctx_epi_p < 0.1)] = fc_ctx_epi[np.argwhere(fc_ctx_epi_p < 0.1)]
>>> plot_cortical(array_name=parcel_to_surface(fc_ctx_epi_p_sig, 'aparc_fsa5'), surface_name="fsa5", size=(800, 400),
... cmap='GyRd_r', color_bar=True, color_range=(-0.5, 0.5))
>>> # Selecting only regions with p < 0.1 (structural epicenters)
>>> sc_ctx_epi_p_sig = np.zeros_like(sc_ctx_epi_p)
>>> sc_ctx_epi_p_sig[np.argwhere(sc_ctx_epi_p < 0.1)] = sc_ctx_epi[np.argwhere(sc_ctx_epi_p < 0.1)]
>>> plot_cortical(array_name=parcel_to_surface(sc_ctx_epi_p_sig, 'aparc_fsa5'), surface_name="fsa5", size=(800, 400),
... cmap='GyBu_r', color_bar=True, color_range=(-0.5, 0.5))
% Project the results on the surface brain
% Selecting only regions with p < 0.1 (functional epicenters)
fc_ctx_epi_p_sig = zeros(length(fc_ctx_epi_p), 1);
fc_ctx_epi_p_sig(find(fc_ctx_epi_p < 0.1)) = fc_ctx_epi(fc_ctx_epi_p<0.1);
f = figure,
plot_cortical(parcel_to_surface(fc_ctx_epi_p_sig, 'aparc_fsa5'), ...
'color_range', [-0.5 0.5], 'cmap', 'GyRd_r')
% Selecting only regions with p < 0.1 (structural epicenters)
sc_ctx_epi_p_sig = zeros(length(sc_ctx_epi_p), 1);
sc_ctx_epi_p_sig(find(sc_ctx_epi_p < 0.1)) = sc_ctx_epi(sc_ctx_epi_p<0.1);
f = figure,
plot_cortical(parcel_to_surface(sc_ctx_epi_p_sig, 'aparc_fsa5'), ...
'color_range', [-0.5 0.5], 'cmap', 'GyBu_r')

Subcortical epicenters¶
Similar to the cortical epicenter approach, we can identify subcortical epicenters of cortical atrophy by correlating every subcortical region’s seed-based connectivity profile (e.g., subcortico-cortical connectivity) with a whole-brain cortical atrophy map. As above, our atrophy map was derived from cortical thickness decreases in individuals with left TLE.
Prerequisites ↪ Load summary statistics or example data ↪ Z-score data (mega only) ↪ Load subcortico-cortical connectivity matrices
>>> import numpy as np
>>> from enigmatoolbox.permutation_testing import spin_test
>>> # Identify subcortical epicenters (from functional connectivity)
>>> fc_sctx_epi = []
>>> fc_sctx_epi_p = []
>>> for seed in range(fc_sctx.shape[0]):
...     seed_con = fc_sctx[seed, :]
...     fc_sctx_epi = np.append(fc_sctx_epi, np.corrcoef(seed_con, CT_d)[0, 1])
...     fc_sctx_epi_p = np.append(fc_sctx_epi_p,
...                               spin_test(seed_con, CT_d, surface_name='fsa5', n_rot=1000))
>>> # Identify subcortical epicenters (from structural connectivity)
>>> sc_sctx_epi = []
>>> sc_sctx_epi_p = []
>>> for seed in range(sc_sctx.shape[0]):
...     seed_con = sc_sctx[seed, :]
...     sc_sctx_epi = np.append(sc_sctx_epi, np.corrcoef(seed_con, CT_d)[0, 1])
...     sc_sctx_epi_p = np.append(sc_sctx_epi_p,
...                               spin_test(seed_con, CT_d, surface_name='fsa5', n_rot=1000))
% Identify subcortical epicenter values (from functional connectivity)
fc_sctx_epi = zeros(size(fc_sctx, 1), 1);
fc_sctx_epi_p = zeros(size(fc_sctx, 1), 1);
for seed = 1:size(fc_sctx, 1)
seed_conn = fc_sctx(seed, :);
r_tmp = corrcoef(seed_conn, CT_d);
fc_sctx_epi(seed) = r_tmp(1, 2);
fc_sctx_epi_p(seed) = spin_test(seed_conn, CT_d, 'surface_name', 'fsa5', 'parcellation_name', ...
'aparc', 'n_rot', 1000, 'type', 'pearson');
end
% Identify subcortical epicenter values (from structural connectivity)
sc_sctx_epi = zeros(size(sc_sctx, 1), 1);
sc_sctx_epi_p = zeros(size(sc_sctx, 1), 1);
for seed = 1:size(sc_sctx, 1)
seed_conn = sc_sctx(seed, :);
r_tmp = corrcoef(seed_conn, CT_d);
sc_sctx_epi(seed) = r_tmp(1, 2);
sc_sctx_epi_p(seed) = spin_test(seed_conn, CT_d, 'surface_name', 'fsa5', 'parcellation_name', ...
'aparc', 'n_rot', 1000, 'type', 'pearson');
end
>>> import numpy as np
>>> from enigmatoolbox.permutation_testing import spin_test
>>> # Identify subcortical epicenters (from functional connectivity)
>>> fc_sctx_epi = []
>>> fc_sctx_epi_p = []
>>> for seed in range(fc_sctx.shape[0]):
...     seed_con = fc_sctx[seed, :]
...     fc_sctx_epi = np.append(fc_sctx_epi, np.corrcoef(seed_con, CT_z_mean)[0, 1])
...     fc_sctx_epi_p = np.append(fc_sctx_epi_p,
...                               spin_test(seed_con, CT_z_mean, surface_name='fsa5', n_rot=1000))
>>> # Identify subcortical epicenters (from structural connectivity)
>>> sc_sctx_epi = []
>>> sc_sctx_epi_p = []
>>> for seed in range(sc_sctx.shape[0]):
...     seed_con = sc_sctx[seed, :]
...     sc_sctx_epi = np.append(sc_sctx_epi, np.corrcoef(seed_con, CT_z_mean)[0, 1])
...     sc_sctx_epi_p = np.append(sc_sctx_epi_p,
...                               spin_test(seed_con, CT_z_mean, surface_name='fsa5', n_rot=1000))
% Identify subcortical epicenter values (from functional connectivity)
fc_sctx_epi = zeros(size(fc_sctx, 1), 1);
fc_sctx_epi_p = zeros(size(fc_sctx, 1), 1);
for seed = 1:size(fc_sctx, 1)
seed_conn = fc_sctx(seed, :);
r_tmp = corrcoef(seed_conn, CT_z_mean{:, :});
fc_sctx_epi(seed) = r_tmp(1, 2);
fc_sctx_epi_p(seed) = spin_test(seed_conn, CT_z_mean{:, :}, 'surface_name', 'fsa5', ...
'parcellation_name', 'aparc', 'n_rot', 1000, 'type', 'pearson');
end
% Identify subcortical epicenter values (from structural connectivity)
sc_sctx_epi = zeros(size(sc_sctx, 1), 1);
sc_sctx_epi_p = zeros(size(sc_sctx, 1), 1);
for seed = 1:size(sc_sctx, 1)
seed_conn = sc_sctx(seed, :);
r_tmp = corrcoef(seed_conn, CT_z_mean{:, :});
sc_sctx_epi(seed) = r_tmp(1, 2);
sc_sctx_epi_p(seed) = spin_test(seed_conn, CT_z_mean{:, :}, 'surface_name', 'fsa5', ...
'parcellation_name', 'aparc', 'n_rot', 1000, 'type', 'pearson');
end
As in the cortical epicenters example above, we assessed the significance of every spatial correlation between seed-based subcortico-cortical connectivity and cortical atrophy measures using spin permutation tests, and set a lenient threshold of p < 0.1 (i.e., correlation coefficients were set to zero for regions whose p-values exceeded 0.1). We thus display only correlation coefficients that pass at least this lenient threshold.
>>> import numpy as np
>>> from enigmatoolbox.plotting import plot_subcortical
>>> # Project the results on the surface brain
>>> # Selecting only regions with p < 0.1 (functional epicenters)
>>> fc_sctx_epi_p_sig = np.zeros_like(fc_sctx_epi_p)
>>> fc_sctx_epi_p_sig[np.argwhere(fc_sctx_epi_p < 0.1)] = fc_sctx_epi[np.argwhere(fc_sctx_epi_p < 0.1)]
>>> plot_subcortical(fc_sctx_epi_p_sig, ventricles=False, size=(800, 400),
... cmap='GyRd_r', color_bar=True, color_range=(-0.5, 0.5))
>>> # Selecting only regions with p < 0.1 (structural epicenters)
>>> sc_sctx_epi_p_sig = np.zeros_like(sc_sctx_epi_p)
>>> sc_sctx_epi_p_sig[np.argwhere(sc_sctx_epi_p < 0.1)] = sc_sctx_epi[np.argwhere(sc_sctx_epi_p < 0.1)]
>>> plot_subcortical(sc_sctx_epi_p_sig, ventricles=False, size=(800, 400),
... cmap='GyBu_r', color_bar=True, color_range=(-0.5, 0.5))
% Project the results on the surface brain
% Selecting only regions with p < 0.1 (functional epicenters)
fc_sctx_epi_p_sig = zeros(length(fc_sctx_epi_p), 1);
fc_sctx_epi_p_sig(find(fc_sctx_epi_p < 0.1)) = fc_sctx_epi(fc_sctx_epi_p<0.1);
f = figure,
plot_subcortical(fc_sctx_epi_p_sig, 'ventricles', 'False', ...
'color_range', [-0.5 0.5], 'cmap', 'GyRd_r')
% Selecting only regions with p < 0.1 (structural epicenters)
sc_sctx_epi_p_sig = zeros(length(sc_sctx_epi_p), 1);
sc_sctx_epi_p_sig(find(sc_sctx_epi_p < 0.1)) = sc_sctx_epi(sc_sctx_epi_p<0.1);
f = figure,
plot_subcortical(sc_sctx_epi_p_sig, 'ventricles', 'False', ...
'color_range', [-0.5 0.5], 'cmap', 'GyBu_r')

Spin permutation tests¶
This page contains descriptions and examples to assess statistical significance of two spatial maps.
Assess statistical significance¶
The intrinsic spatial smoothness of two given cortical maps may inflate the significance of their spatial correlation. To overcome this challenge, we assess statistical significance using spin permutation tests, a framework proposed by Alexander-Bloch and colleagues. To do so, we generate null models of overlap between cortical maps by projecting the spatial coordinates of cortical data onto the surface spheres, applying randomly sampled rotations, and reassigning cortical values. We then compare the original correlation coefficient against the empirical distribution of spatially permuted correlation coefficients.
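The final comparison step can be sketched in a few lines: once the spun (null) correlations are in hand, the two-tailed p-value is the fraction of null correlations at least as extreme as the empirical one. This minimal sketch uses a randomly generated null distribution purely for illustration; spin_test performs the actual rotations and this comparison internally:

```python
import numpy as np

rng = np.random.default_rng(0)
r_obs = -0.45                          # illustrative empirical spatial correlation
null_r = rng.uniform(-0.6, 0.6, 1000)  # stand-in for 1,000 spun correlation coefficients

# Two-tailed p-value: fraction of null correlations at least as extreme as the observed one
p_spin = np.mean(np.abs(null_r) >= np.abs(r_obs))
print(float(p_spin))
```

Because the null maps preserve the spatial autocorrelation of the original data, this p-value is more conservative than one from a naive parametric correlation test.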
Prerequisites Two brain maps whose correlation you want to assess for significance, for example: degree centrality vs. atrophy ↪ Load summary statistics or example data ↪ Re-order subcortical data (mega only) ↪ Z-score data (mega only) ↪ Load cortico-cortical and subcortico-cortical connectivity matrices ↪ Compute cortico-cortical and subcortico-cortical degree centrality
>>> import numpy as np
>>> from enigmatoolbox.permutation_testing import spin_test, shuf_test
>>> # Remove subcortical values corresponding to the ventricles
>>> # (as we don't have connectivity values for them!)
>>> SV_d_noVent = SV_d.drop([np.where(SV['Structure'] == 'LLatVent')[0][0],
... np.where(SV['Structure'] == 'RLatVent')[0][0]]).reset_index(drop=True)
>>> # Spin permutation testing for two cortical maps
>>> fc_ctx_p, fc_ctx_d = spin_test(fc_ctx_dc, CT_d, surface_name='fsa5', parcellation_name='aparc',
... type='pearson', n_rot=1000, null_dist=True)
>>> sc_ctx_p, sc_ctx_d = spin_test(sc_ctx_dc, CT_d, surface_name='fsa5', parcellation_name='aparc',
... type='pearson', n_rot=1000, null_dist=True)
>>> # Shuf permutation testing for two subcortical maps
>>> fc_sctx_p, fc_sctx_d = shuf_test(fc_sctx_dc, SV_d_noVent, n_rot=1000,
... type='pearson', null_dist=True)
>>> sc_sctx_p, sc_sctx_d = shuf_test(sc_sctx_dc, SV_d_noVent, n_rot=1000,
... type='pearson', null_dist=True)
>>> # Store p-values and null distributions
>>> p_and_d = {'functional cortical hubs': [fc_ctx_p, fc_ctx_d], 'functional subcortical hubs': [fc_sctx_p, fc_sctx_d],
... 'structural cortical hubs': [sc_ctx_p, sc_ctx_d], 'structural subcortical hubs': [sc_sctx_p, sc_sctx_d]}
% Remove subcortical values corresponding to the ventricles
% (as we don't have connectivity values for them!)
SV_d_noVent = SV_d;
SV_d_noVent([find(strcmp(SV.Structure, 'LLatVent')); ...
find(strcmp(SV.Structure, 'RLatVent'))], :) = [];
% Spin permutation testing for two cortical maps
[fc_ctx_p, fc_ctx_d] = spin_test(fc_ctx_dc, CT_d, 'surface_name', 'fsa5', ...
'parcellation_name', 'aparc', 'n_rot', 1000, ...
'type', 'pearson');
[sc_ctx_p, sc_ctx_d] = spin_test(sc_ctx_dc, CT_d, 'surface_name', 'fsa5', ...
'parcellation_name', 'aparc', 'n_rot', 1000, ...
'type', 'pearson');
% Shuf permutation testing for two subcortical maps
[fc_sctx_p, fc_sctx_d] = shuf_test(fc_sctx_dc, SV_d_noVent, ...
'n_rot', 1000, 'type', 'pearson');
[sc_sctx_p, sc_sctx_d] = shuf_test(sc_sctx_dc, SV_d_noVent, ...
'n_rot', 1000, 'type', 'pearson');
% Store p-values and null distributions
p_and_d = cell2struct({[fc_ctx_p; fc_ctx_d], [fc_sctx_p; fc_sctx_d], [sc_ctx_p; sc_ctx_d], [sc_sctx_p; sc_sctx_d]}, ...
{'functional_cortical_hubs', 'functional_subcortical_hubs', ...
'structural_cortical_hubs', 'structural_subcortical_hubs'}, 2);
>>> from enigmatoolbox.permutation_testing import spin_test, shuf_test
>>> # Remove subcortical values corresponding to the ventricles
>>> # (as we don't have connectivity values for them!)
>>> SV_z_mean_noVent = SV_z_mean.drop(['LLatVent', 'RLatVent']).reset_index(drop=True)
>>> # Spin permutation testing for two cortical maps
>>> fc_ctx_p, fc_ctx_d = spin_test(fc_ctx_dc, CT_z_mean, surface_name='fsa5', parcellation_name='aparc',
... type='pearson', n_rot=1000, null_dist=True)
>>> sc_ctx_p, sc_ctx_d = spin_test(sc_ctx_dc, CT_z_mean, surface_name='fsa5', parcellation_name='aparc',
... type='pearson', n_rot=1000, null_dist=True)
>>> # Shuf permutation testing for two subcortical maps
>>> fc_sctx_p, fc_sctx_d = shuf_test(fc_sctx_dc, SV_z_mean_noVent, n_rot=1000,
... type='pearson', null_dist=True)
>>> sc_sctx_p, sc_sctx_d = shuf_test(sc_sctx_dc, SV_z_mean_noVent, n_rot=1000,
... type='pearson', null_dist=True)
>>> # Store p-values and null distributions
>>> p_and_d = {'functional cortical hubs': [fc_ctx_p, fc_ctx_d], 'functional subcortical hubs': [fc_sctx_p, fc_sctx_d],
... 'structural cortical hubs': [sc_ctx_p, sc_ctx_d], 'structural subcortical hubs': [sc_sctx_p, sc_sctx_d]}
% Remove subcortical values corresponding to the ventricles
% (as we don't have connectivity values for them!)
SV_z_mean_noVent = SV_z_mean;
SV_z_mean_noVent.LLatVent = [];
SV_z_mean_noVent.RLatVent = [];
% Spin permutation testing for two cortical maps
[fc_ctx_p, fc_ctx_d] = spin_test(fc_ctx_dc, CT_z_mean{:, :}, 'surface_name', ...
'fsa5', 'parcellation_name', 'aparc', 'n_rot', ...
1000, 'type', 'pearson');
[sc_ctx_p, sc_ctx_d] = spin_test(sc_ctx_dc, CT_z_mean{:, :}, 'surface_name', ...
'fsa5', 'parcellation_name', 'aparc', 'n_rot', ...
1000, 'type', 'pearson');
% Shuf permutation testing for two subcortical maps
[fc_sctx_p, fc_sctx_d] = shuf_test(fc_sctx_dc, SV_z_mean_noVent{:, :}, ...
'n_rot', 1000, 'type', 'pearson');
[sc_sctx_p, sc_sctx_d] = shuf_test(sc_sctx_dc, SV_z_mean_noVent{:, :}, ...
'n_rot', 1000, 'type', 'pearson');
% Store p-values and null distributions
p_and_d = cell2struct({[fc_ctx_p; fc_ctx_d], [fc_sctx_p; fc_sctx_d], [sc_ctx_p; sc_ctx_d], [sc_sctx_p; sc_sctx_d]}, ...
{'functional_cortical_hubs', 'functional_subcortical_hubs', ...
'structural_cortical_hubs', 'structural_subcortical_hubs'}, 2);
Plot null distributions¶
To better interpret statistical significance, we can plot the null distribution of generated correlations (i.e., “spun” or “shuffled” correlations) and overlay the correlation coefficient obtained from the empirical (i.e., real) brain maps.
>>> import matplotlib.pyplot as plt
>>> fig, axs = plt.subplots(1, 4, figsize=(15, 3))
>>> for k, (fn, dd) in enumerate(p_and_d.items()):
...     # Define plot colors
...     if k <= 1:
...         col = '#A8221C'  # red for functional hubs
...     else:
...         col = '#324F7D'  # blue for structural hubs
...     # Plot null distributions
...     axs[k].hist(dd[1], bins=50, density=True, color=col, edgecolor='white', lw=0.5)
...     axs[k].axvline(rvals[fn], lw=1.5, ls='--', color='k', dashes=(2, 3),
...                    label='$r$={:.2f}'.format(rvals[fn]) + '\n$p$={:.3f}'.format(dd[0]))
...     axs[k].set_xlabel('Null correlations \n ({})'.format(fn))
...     axs[k].set_ylabel('Density')
...     axs[k].spines['top'].set_visible(False)
...     axs[k].spines['right'].set_visible(False)
...     axs[k].legend(loc=1, frameon=False)
>>> fig.tight_layout()
>>> plt.show()
f = figure,
set(gcf,'color','w');
set(gcf,'units','normalized','position',[0 0 1 0.3])
fns = fieldnames(p_and_d);
for k = 1:numel(fieldnames(rvals))
% Define plot colors
if k <= 2; col = [0.66 0.13 0.11]; else; col = [0.2 0.33 0.49]; end
% Plot null distributions
axs = subplot(1, 4, k); hold on
h = histogram(p_and_d.(fns{k})(2:end), 50, 'Normalization', 'pdf', 'edgecolor', 'w', ...
'facecolor', col, 'facealpha', 1, 'linewidth', 0.5);
l = line([rvals.(fns{k}) rvals.(fns{k})], get(gca, 'ylim'), 'linestyle', '--', ...
'color', 'k', 'linewidth', 1.5);
xlabel(['Null correlations' newline '(' strrep(fns{k}, '_', ' ') ')'])
ylabel('Density')
legend(l,['{\it r}=' num2str(round(rvals.(fns{k}), 2)) newline ...
'{\it p}=' num2str(round(p_and_d.(fns{k})(1), 3))])
legend boxoff
end

Citing the ENIGMA TOOLBOX¶
- Please cite the following when referring to your use of our Toolbox:
Larivière, S., Paquola, C., Park, B. et al. The ENIGMA Toolbox: multiscale neural contextualization of multisite neuroimaging datasets. Nat Methods 18, 698–700 (2021). https://doi.org/10.1038/s41592-021-01186-4
Python API¶
This is the function reference of the ENIGMA TOOLBOX.
enigmatoolbox.datasets¶
ENIGMA datasets¶
- Loads the ENIGMA example dataset (from one site - MICA-MNI Montreal; author: @saratheriver)
- Outputs summary statistics for a given disorder (author: @saratheriver)
Export functions¶
- Returns number of faces/triangles for a surface (author: @saratheriver)
- Returns vox2ras transform for a surface (author: @saratheriver)
- Writes cifti file (authors: @NicoleEic, @saratheriver)
Connectivity matrices¶
- Load functional connectivity data (author: @saratheriver)
- Load functional connectivity data (cortical + subcortical in one matrix; author: @saratheriver)
- Load structural connectivity data (author: @saratheriver)
- Load structural connectivity data (cortical + subcortical in one matrix; author: @saratheriver)
Gene co-expression data¶
- Fetch Allen Human Brain Atlas microarray expression data from all donors and all genes (author: @saratheriver)
- Outputs names of GWAS-derived risk genes for a given disorder (author: @saratheriver)
Surface templates¶
- Load fsaverage5 surfaces (author: @saratheriver)
- Load conte69 surfaces (author: @OualidBenkarim)
- Load subcortical surfaces (author: @saratheriver)
enigmatoolbox.cross_disorder¶
enigmatoolbox.mesh¶
Read/write functionality¶
- Read surface data (author: @OualidBenkarim)
- Write surface data (author: @OualidBenkarim)
enigmatoolbox.permutation_testing¶
Spin permutations¶
- Spin permutation (author: @saratheriver)
- Extract centroids of a cortical parcellation on a surface sphere (author: @saratheriver)
- Rotate parcellation (author: @saratheriver)
- Generate a p-value for the spatial correlation between two parcellated cortical surface maps (author: @saratheriver)
enigmatoolbox.plotting.surface_plotting¶
Surface plotting¶
- Plot cortical surface with lateral and medial views (authors: @OualidBenkarim, @saratheriver)
- Plot subcortical surface with lateral and medial views (author: @saratheriver)
- Build plotter arranged according to the layout (author: @OualidBenkarim)
- Plot surfaces arranged according to the layout (author: @OualidBenkarim)
enigmatoolbox.utils¶
Re-order subcortical data matrix¶
- Re-order subcortical volume data alphabetically and by hemisphere (left then right; author: @saratheriver)
Z-score data matrix¶
- Z-score data relative to a given group (author: @saratheriver)
Parcellation¶
- Map data in source to target according to their labels (authors: @OualidBenkarim, @saratheriver)
- Summarize data in values according to labels (author: @OualidBenkarim)
- Map one value per subcortical area to surface vertices (author: @saratheriver)
Matlab API¶
enigma datasets¶
load_example_data()¶
- Usage [source]:
-
[cov, metr1_SubVol, metr2_CortThick, metr3_CortSurf] = load_example_data()
-
- Description:
Loads the ENIGMA example dataset (from one site - MICA-MNI Montreal; author: @saratheriver)
- Outputs:
cov (table) – Contains information on covariates
metr1 (table) – Contains information on subcortical volume
metr2 (table) – Contains information on cortical thickness
metr3 (table) – Contains information on surface area
load_summary_stats(disorder)¶
- Usage [source]:
-
summary_stats = load_summary_stats(disorder)
-
- Description:
Outputs summary statistics for a given disorder (author: @saratheriver)
- Inputs:
disorder ({‘22q’, ‘adhd’, ‘asd’, ‘bipolar’, ‘depression’, ‘epilepsy’, ‘ocd’, ‘schizophrenia’}) – Disorder name, must pick one.
- Outputs:
summary_stats (table) – Available summary statistics
import / export¶
nfaces()¶
- Usage [source]:
-
M = nfaces(surface_name, hemisphere)
-
- Description:
Returns number of faces/triangles for a surface (author: @saratheriver)
- Inputs:
surface_name (string) - Name of surface {‘fsa5’, ‘conte69’}
hemisphere (string) - Name of hemisphere {‘lh’, ‘rh’, ‘both’}
- Outputs:
numfaces (double) – Number of faces/triangles
getaffine()¶
- Usage [source]:
-
M = getaffine(surface_name, hemisphere)
-
- Description:
Returns vox2ras transform for a surface (author: @saratheriver)
- Inputs:
surface_name (string) - Name of surface {‘fsa5’, ‘conte69’}
hemisphere (string) - Name of hemisphere {‘lh’, ‘rh’, ‘both’}
- Outputs:
M (double) – Vox2ras transform, size = [4 x 4]
write_cifti()¶
- Usage [source]:
-
write_cifti(data, varargin)
-
- Description:
Writes cifti file (authors: @NicoleEic, @saratheriver)
- Inputs:
data (double or single) – Data to be saved
- Name/value pairs:
dpath (string, optional) – Path to location for saving file (e.g., ‘/Users/bonjour/’). Default is ‘’.
fname (string, optional) – Name of file (e.g., ‘ello.dscalar.nii’). Default is ‘’.
surface_name (string, optional) – Surface name {‘fsa5’, ‘conte69’}. Default is ‘conte69’.
hemi (string, optional) – Name of hemisphere {‘lh’, ‘rh’}. Default is ‘lh’.
connectivity matrices¶
load_fc()¶
- Usage [source]:
-
[funcMatrix_ctx, funcLabels_ctx, funcMatrix_sctx, funcLabels_sctx] = load_fc(parcellation)
-
- Description:
Load functional connectivity data (author: @saratheriver)
- Inputs:
parcellation (string, optional) - Name of parcellation (with n cortical parcels). Default is
‘aparc’. Other options are ‘schaefer_100’, ‘schaefer_200’, ‘schaefer_300’, ‘schaefer_400’.
- Outputs:
funcMatrix_ctx (double array) – Cortico-cortical connectivity, size = [n x n]
funcLabels_ctx (cell array) – Cortical labels, size = [1 x n]
funcMatrix_sctx (double array) – Subcortico-cortical connectivity, size = [14 x n]
funcLabels_sctx (cell array) – Subcortical labels, size = [1 x 14]
load_fc_as_one()¶
- Usage [source]:
-
[funcMatrix, funcLabels] = load_fc_as_one(parcellation)
-
- Description:
Load functional connectivity data (cortical + subcortical in one matrix; author: @saratheriver)
- Inputs:
parcellation (string, optional) - Name of parcellation (with n cortical parcels). Default is
‘aparc’. Other options are ‘schaefer_100’, ‘schaefer_200’, ‘schaefer_300’, ‘schaefer_400’.
- Outputs:
funcMatrix (double array) – Functional connectivity, size = [n+14 x n+14]
funcLabels (cell array) – Region labels, size = [1 x n+14]
load_sc()¶
- Usage [source]:
-
[strucMatrix_ctx, strucLabels_ctx, strucMatrix_sctx, strucLabels_sctx] = load_sc(parcellation)
-
- Description:
Load structural connectivity data (author: @saratheriver)
- Inputs:
parcellation (string, optional) - Name of parcellation (with n cortical parcels). Default is
‘aparc’. Other options are ‘schaefer_100’, ‘schaefer_200’, ‘schaefer_300’, ‘schaefer_400’.
- Outputs:
strucMatrix_ctx (double array) – Cortico-cortical connectivity, size = [n x n]
strucLabels_ctx (cell array) – Cortical labels, size = [1 x n]
strucMatrix_sctx (double array) – Subcortico-cortical connectivity, size = [14 x n]
strucLabels_sctx (cell array) – Subcortical labels, size = [1 x 14]
load_sc_as_one()¶
- Usage [source]:
-
[strucMatrix, strucLabels] = load_sc_as_one(parcellation)
-
- Description:
Load structural connectivity data (cortical + subcortical in one matrix; author: @saratheriver)
- Inputs:
parcellation (string, optional) - Name of parcellation (with n cortical parcels). Default is
‘aparc’. Other options are ‘schaefer_100’, ‘schaefer_200’, ‘schaefer_300’, ‘schaefer_400’.
- Outputs:
strucMatrix (double array) – Structural connectivity, size = [n+14 x n+14]
strucLabels (cell array) – Region labels, size = [1 x n+14]
gene co-expression data¶
fetch_ahba()¶
- Usage [source]:
-
genes = fetch_ahba();
-
- Description:
Fetch Allen Human Brain Atlas microarray expression data from all donors and all genes (author: @saratheriver)
- Inputs:
csvfile (empty or string) – Path to downloaded csvfile. If empty (default), fetches microarray expression data from the internet (aparc parcellation and stable genes, r > 0.2). For more threshold and parcellation options, download a csvfile from: https://github.com/saratheriver/enigma-extra/tree/master/ahba.
- Outputs:
genes (table) - Gene co-expression data, size = [82 x 15633]
risk_genes(disorder)¶
- Usage [source]:
-
risk_genes = risk_genes(disorder)
-
- Description:
Outputs names of GWAS-derived risk genes for a given disorder (author: @saratheriver)
- Inputs:
disorder ({‘adhd’, ‘asd’, ‘bipolar’, ‘depression’, ‘epilepsy’, ‘hippocampal_volume’, ‘ocd’, ‘schizophrenia’, ‘tourette’}) –
Disorder name, must pick one.
- Outputs:
risk_genes (cell array) – Names of genes for a given disorder
cross-disorder effect¶
cross_disorder_effect()¶
- Usage [source]:
-
[components, variance, ~, names] = cross_disorder_effect(varargin)
-
[~, ~, correlation_matrix, names] = cross_disorder_effect(varargin)
-
- Description:
Cross-disorder effect (authors: @boyongpark, @saratheriver)
- Name/value pairs:
disorder (cell array, optional) - Any combination of disorder names. Default is all available disorders, except ‘adhd’ due to NaNs. Options are any combination of {‘22q’, ‘adhd’, ‘asd’, ‘bipolar’, ‘depression’, ‘epilepsy’, ‘ocd’, ‘schizophrenia’}.
measure (cell array, optional) - Any combination of measure names. Default is {‘CortThick’, ‘CortSurf’}.
additional_data_cortex (double array, optional) - Additional cortical ENIGMA-type data, size = [n x 68]. Must also provide ‘additional_name_cortex’.
additional_name_cortex (cell array, optional) - Name(s) for additional cortical ENIGMA-type data. Must also provide ‘additional_data_cortex’.
additional_data_subcortex (double array, optional) - Additional subcortical ENIGMA-type data, size = [n x 16]. Must also provide ‘additional_name_subcortex’.
additional_name_subcortex (cell array, optional) - Name(s) for additional subcortical ENIGMA-type data. Must also provide ‘additional_data_subcortex’.
ignore (cell array, optional) - Ignore summary statistics with these expressions. Default is {‘mega’} as it contains NaNs.
include (cell array, optional) - Include only summary statistics with these expressions. Default is empty.
method (string, optional) - Analysis method {‘pca’, ‘correlation’}. Default is ‘pca’.
- Outputs:
components (structure) - Principal components of shared effects, in descending order of component variance. Only if method is ‘pca’.
variance (structure) - Variance explained by each component. Only if method is ‘pca’.
correlation_matrix (structure) - Correlation matrix for every pair of shared effect maps. Only if method is ‘correlation’.
names (structure) - Names of disorders and case-control effect maps included in the analysis.
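For intuition, the ‘pca’ method amounts to stacking the case-control effect maps into a matrix and extracting their shared spatial components; a minimal sketch with synthetic maps (variable names here are illustrative, not the toolbox's internals):

```python
import numpy as np

rng = np.random.default_rng(1)

# Rows = effect maps (disorder x measure), columns = 68 cortical regions
shared = rng.standard_normal(68)  # a spatial pattern common to all maps
maps = np.array([w * shared + 0.3 * rng.standard_normal(68)
                 for w in [1.0, 0.8, -0.6, 0.5]])

# PCA via SVD on the mean-centered map matrix
X = maps - maps.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
components = Vt                    # spatial components (one per row)
variance = S**2 / np.sum(S**2)     # variance explained, in descending order
```

Because the toy maps share one spatial pattern, the first component captures most of the variance and closely resembles that pattern.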
spin permutations¶
spin_test()¶
- Usage [source]:
-
[p_spin, r_dist] = spin_test(map1, map2, varargin)
-
- Description:
Spin permutation (author: @saratheriver)
- Inputs:
map1 (double array) – One of two maps to be correlated
map2 (double array) – The other map to be correlated
- Name/value pairs:
surface_name (string, optional) – Surface name {‘fsa5’, ‘fsa5_with_sctx’}. Use ‘fsa5’ for Conte69. Default is ‘fsa5’.
parcellation_name (string, optional) – Parcellation name {‘aparc’, ‘aparc_aseg’}. Default is ‘aparc’.
n_rot (int, optional) – Number of spin rotations. Default is 100.
type (string, optional) – Correlation type {‘pearson’, ‘spearman’}. Default is ‘pearson’.
ventricles (string, optional) – Whether ventricles are present in map1 and map2. Only used when parcellation_name is ‘aparc_aseg’. Default is ‘False’ (other option is ‘True’).
- Outputs:
p_spin (double) – Permutation p-value
r_dist (double array) - Null correlations, size = [n_rot*2 x 1].
- References:
Alexander-Bloch A, Shou H, Liu S, Satterthwaite TD, Glahn DC, Shinohara RT, Vandekar SN and Raznahan A (2018). On testing for spatial correspondence between maps of human brain structure and function. NeuroImage, 178:540-51.
Váša F, Seidlitz J, Romero-Garcia R, Whitaker KJ, Rosenthal G, Vértes PE, Shinn M, Alexander-Bloch A, Fonagy P, Dolan RJ, Goodyer IM, the NSPN consortium, Sporns O, Bullmore ET (2017). Adolescent tuning of association cortex in human structural brain networks. Cerebral Cortex, 28(1):281–294.
centroid_extraction_sphere()¶
- Usage [source]:
-
centroid = centroid_extraction_sphere(sphere_coords, annotfile)
-
- Description:
Extract centroids of a cortical parcellation on a surface sphere (authors: @frantisekvasa, @saratheriver)
- Inputs:
sphere_coords (double array) – Sphere coordinates, size = [n x 3]
annotfile (string) – Name of annotation file {‘fsa5_lh_aparc.annot’, ‘fsa5_rh_aparc.annot’, ‘fsa5_with_sctx_lh_aparc_aseg.csv’, etc.}
ventricles (string, optional) – Whether ventricle data are present. Only used when ‘annotfile’ is fsa5_with_sctx_lh_aparc_aseg or fsa5_with_sctx_rh_aparc_aseg. Default is ‘False’.
- Outputs:
coord (double array) – Coordinates of the centroid of each region on the sphere, size = [m x 3].
- References:
Alexander-Bloch A, Shou H, Liu S, Satterthwaite TD, Glahn DC, Shinohara RT, Vandekar SN and Raznahan A (2018). On testing for spatial correspondence between maps of human brain structure and function. NeuroImage, 178:540-51.
Váša F, Seidlitz J, Romero-Garcia R, Whitaker KJ, Rosenthal G, Vértes PE, Shinn M, Alexander-Bloch A, Fonagy P, Dolan RJ, Goodyer IM, the NSPN consortium, Sporns O, Bullmore ET (2017). Adolescent tuning of association cortex in human structural brain networks. Cerebral Cortex, 28(1):281–294.
rotate_parcellation()¶
- Usage [source]:
-
perm_id = rotate_parcellation(coord_l, coord_r, nrot)
-
- Description:
Rotate parcellation (authors: @frantisekvasa, @saratheriver)
- Inputs:
coord_l (double array) – Coordinates of left hemisphere regions on the sphere, size = [m x 3]
coord_r (double array) – Coordinates of right hemisphere regions on the sphere, size = [m x 3]
nrot (int, optional) – Number of rotations. Default is 100.
- Outputs:
perm_id (double array) – Array of permutations, size = [m x nrot]
- References:
Alexander-Bloch A, Shou H, Liu S, Satterthwaite TD, Glahn DC, Shinohara RT, Vandekar SN and Raznahan A (2018). On testing for spatial correspondence between maps of human brain structure and function. NeuroImage, 178:540-51.
Váša F, Seidlitz J, Romero-Garcia R, Whitaker KJ, Rosenthal G, Vértes PE, Shinn M, Alexander-Bloch A, Fonagy P, Dolan RJ, Goodyer IM, the NSPN consortium, Sporns O, Bullmore ET (2017). Adolescent tuning of association cortex in human structural brain networks. Cerebral Cortex, 28(1):281–294.
perm_sphere_p()¶
- Usage [source]:
-
[p_perm, null_dist] = perm_sphere_p(x, y, perm_id, corr_type)
-
- Description:
Generate a p-value for the spatial correlation between two parcellated cortical surface maps (authors: @frantisekvasa, @saratheriver)
- Inputs:
x (double array) – One of two maps to be correlated
y (double array) – The other map to be correlated
perm_id (double array) – Array of permutations, size = [m x nrot]
corr_type (string, optional) - Correlation type {‘pearson’, ‘spearman’}. Default is ‘pearson’.
- Outputs:
p_perm (double) – Permutation p-value
null_dist (double array) - Null correlations, size = [n_rot*2 x 1].
- References:
Alexander-Bloch A, Shou H, Liu S, Satterthwaite TD, Glahn DC, Shinohara RT, Vandekar SN and Raznahan A (2018). On testing for spatial correspondence between maps of human brain structure and function. NeuroImage, 178:540-51.
Váša F, Seidlitz J, Romero-Garcia R, Whitaker KJ, Rosenthal G, Vértes PE, Shinn M, Alexander-Bloch A, Fonagy P, Dolan RJ, Goodyer IM, the NSPN consortium, Sporns O, Bullmore ET (2017). Adolescent tuning of association cortex in human structural brain networks. Cerebral Cortex, 28(1):281–294.
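The p-value computation behind this function is itself simple: the null distribution concatenates correlations obtained by permuting each map in turn, which is why null_dist holds n_rot*2 values. A toy sketch of that logic (random permutations stand in for the rotation-derived perm_id; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n_rot = 68, 100

x = rng.standard_normal(m)
y = x + rng.standard_normal(m)

# Toy stand-in for rotate_parcellation output, size = [m x nrot]
perm_id = np.column_stack([rng.permutation(m) for _ in range(n_rot)])

r_emp = np.corrcoef(x, y)[0, 1]

# Null correlations from permuting x, then y (hence n_rot * 2 values)
null_x = [np.corrcoef(x[perm_id[:, i]], y)[0, 1] for i in range(n_rot)]
null_y = [np.corrcoef(x, y[perm_id[:, i]])[0, 1] for i in range(n_rot)]
null_dist = np.concatenate([null_x, null_y])

# Two-tailed permutation p-value
p_perm = np.mean(np.abs(null_dist) >= np.abs(r_emp))
```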
shuf permutations¶
shuf_test()¶
- Usage [source]:
-
[p_shuf, r_dist] = shuf_test(map1, map2, varargin)
-
- Description:
Shuf permutation (author: @saratheriver)
- Inputs:
map1 (double array) – One of two maps to be correlated
map2 (double array) – The other map to be correlated
- Name/value pairs:
n_rot (int, optional) – Number of shuffles. Default is 100.
type (string, optional) – Correlation type {‘pearson’, ‘spearman’}. Default is ‘pearson’.
- Outputs:
p_shuf (double) – Permutation p-value
r_dist (double array) - Null correlations, size = [n_rot*2 x 1].
surface plotting¶
plot_cortical()¶
- Usage [source]:
-
[a, cb] = plot_cortical(data, varargin);
-
- Description:
Plot cortical surface with lateral and medial views (authors: @MICA-MNI, @saratheriver)
- Inputs:
data (double array) – vector of data, size = [1 x v]
- Name/value pairs:
surface_name (string, optional) – Name of surface {‘fsa5’, ‘conte69’}. Default is ‘fsa5’.
label_text (string, optional) – Label text for colorbar. Default is empty.
background (string, double array, optional) – Background color. Default is ‘white’.
color_range (double array, optional) – Range of colorbar. Default is [min(data) max(data)].
cmap (string, double array, optional) – Colormap name. Default is ‘RdBu_r’.
- Outputs:
a (axes) – vector of handles to the axes, left to right, top to bottom
cb (colorbar) - colorbar handle
plot_subcortical()¶
- Usage [source]:
-
[a, cb] = plot_subcortical(data, varargin);
-
- Description:
Plot subcortical surface with lateral and medial views (author: @saratheriver)
- Inputs:
data (double array) – vector of data, size = [1 x v]. One value per subcortical structure, in this order: L-accumbens, L-amygdala, L-caudate, L-hippocampus, L-pallidum, L-putamen, L-thalamus, L-ventricle, R-accumbens, R-amygdala, R-caudate, R-hippocampus, R-pallidum, R-putamen, R-thalamus, R-ventricle
- Name/value pairs:
ventricles (string, optional) – If ‘True’ (default), shows the ventricles (data must be size = [1 x 16]). If ‘False’, ventricles are not shown (data must be size = [1 x 14]).
label_text (string, optional) – Label text for colorbar. Default is empty.
background (string, double array, optional) – Background color. Default is ‘white’.
color_range (double array, optional) – Range of colorbar. Default is [min(data) max(data)].
cmap (string, double array, optional) – Colormap name. Default is ‘RdBu_r’.
- Outputs:
a (axes) – vector of handles to the axes, left to right, top to bottom
cb (colorbar) - colorbar handle
histology¶
bb_moments_raincloud()¶
- Usage [source]:
-
bb_moments_raincloud(region_idx, title)
-
- Description:
Stratify regional data according to BigBrain statistical moments (authors: @caseypaquola, @saratheriver)
- Inputs:
region_idx (double array) - Vector of data. Indices of regions to be included in analysis
parcellation (string, optional) - Name of parcellation. Options are ‘aparc’, ‘schaefer_100’, ‘schaefer_200’, ‘schaefer_300’, ‘schaefer_400’, ‘glasser_360’. Default is ‘aparc’.
title (string, optional) - Title of raincloud plot. Default is empty.
- Outputs:
figure (figure) – Raincloud plot.
bb_gradient_plot()¶
- Usage [source]:
-
bb_gradient_plot(data, varargin)
-
- Description:
Stratify parcellated data according to the BigBrain gradient (authors: @caseypaquola, @saratheriver)
- Inputs:
data (double array) – vector of data. Parcellated data.
- Name/value pairs:
parcellation (string, optional) - Name of parcellation. Options are: ‘aparc’, ‘schaefer_100’, ‘schaefer_200’, ‘schaefer_300’, ‘schaefer_400’, ‘glasser_360’. Default is ‘aparc’.
title (string, optional) – Title of spider plot. Default is empty.
axis_range (double array, optional) - Range of spider plot axes. Default is (min, max).
yaxis_label (string, optional) - Label for y-axis. Default is empty.
xaxis_label (string, optional) - Label for x-axis. Default is empty.
- Outputs:
figure (figure) – Gradient plot.
economo_koskinas_spider()¶
- Usage [source]:
-
class_mean = economo_koskinas_spider(parcel_data, varargin)
-
- Description:
Stratify parcellated data according to von Economo-Koskinas cytoarchitectonic classes (authors: @caseypaquola, @saratheriver)
- Inputs:
parcel_data (double array) – vector of data. Parcellated data.
- Name/value pairs:
parcellation (string, optional) - Parcellation to go from parcel_data to surface. Default is ‘aparc_fsa5’.
fill (double, optional) - Value for mask. Default is 0.
title (string, optional) – Title of spider plot. Default is empty.
axis_range (double array, optional) - Range of spider plot axes. Default is (min, max).
label (cell array, optional) - List of axis labels. Length = 5. Default is names of von Economo-Koskinas cytoarchitectonic classes.
color (double array, optional) - Color of line. Default is [0 0 0].
- Outputs:
figure (figure) – Spider plot.
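Conceptually, the spider plot shows the mean of the input map within each of the five cytoarchitectonic classes; a toy sketch of that stratification (random class labels stand in for the real von Economo-Koskinas assignment):

```python
import numpy as np

rng = np.random.default_rng(3)

# 68 cortical parcels, each assigned to one of 5 hypothetical classes
classes = rng.integers(1, 6, size=68)
parcel_data = rng.standard_normal(68)

# Mean of the map within each class (the values plotted on the spider axes)
class_mean = np.array([parcel_data[classes == c].mean()
                       for c in range(1, 6)])
```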
re-order subcortical data matrix¶
Re-order subcortical volume data alphabetically and by hemisphere (left then right; author: @saratheriver)
z-score data matrix¶
zscore_matrix()¶
- Usage [source]:
-
Z = zscore_matrix(data, group, controlCode)
-
- Description:
Z-score data relative to a given group (author: @saratheriver)
- Inputs:
data (double array) - Data matrix (e.g., thickness data), size = [n_subject x n_region]
group (double array) - Vector of values for group assignment (e.g., [0 0 0 1 1 1]), same length as n_subject.
controlCode (int) - Value that corresponds to the “baseline” group.
- Outputs:
Z (double array) – Z-scored data relative to control code
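The computation follows the usual recipe: each subject's regional value is expressed in standard-deviation units of the control group's distribution. A minimal sketch with toy data (hypothetical values; group 0 plays the role of controlCode):

```python
import numpy as np

# Toy thickness matrix: 6 subjects x 3 regions; first three rows are controls
data = np.array([[2.0, 2.5, 3.0],
                 [2.2, 2.4, 3.1],
                 [1.8, 2.6, 2.9],
                 [1.5, 2.0, 2.5],
                 [1.6, 2.1, 2.6],
                 [1.4, 1.9, 2.4]])
group = np.array([0, 0, 0, 1, 1, 1])
control_code = 0

# Z-score everyone relative to the control group's mean and SD, per region
controls = data[group == control_code]
Z = (data - controls.mean(axis=0)) / controls.std(axis=0, ddof=1)
```

Controls then average zero in every region, and the patient rows here come out negative, i.e., thinner than controls.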
parcellation¶
parcel_to_surface()¶
- Usage [source]:
-
parcel2surf = parcel_to_surface(parcel_data, parcellation, fill)
-
- Description:
Map parcellated data to the surface (authors: @MICA-MNI, @saratheriver)
- Inputs:
parcel_data (double array) - Parcel vector, size = [p x 1]. For example, if Desikan Killiany from ENIGMA data, then parcel_data is size = [68 x 1].
parcellation (string, optional) - Default is ‘aparc_fsa5’
fill (double, optional) - Value for mask. Default is 0.
- Outputs:
parcel2surf (double array) – Vector of values mapped from a parcellation to the surface
surface_to_parcel()¶
- Usage [source]:
-
surf2parcel = surface_to_parcel(surf_data, parcellation)
-
- Description:
Map surface data to a parcellation (authors: @MICA-MNI, @saratheriver)
- Inputs:
surf_data (double array) - Surface vector, size = [v x 1].
parcellation (string, optional) - Default is ‘aparc_fsa5’
- Outputs:
surf2parcel (double array) – Vector of values mapped from a surface to a parcellation
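The two mappings are inverses in spirit: parcel-to-surface is a label lookup, surface-to-parcel a per-label average. A toy sketch with a made-up labeling (not the actual fsa5 annotation):

```python
import numpy as np

# 10 surface vertices assigned to 3 parcels (labels 0, 1, 2)
labels = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2])
parcel_data = np.array([10.0, 20.0, 30.0])  # one value per parcel

# parcel -> surface: broadcast each parcel's value to its vertices
surf_data = parcel_data[labels]

# surface -> parcel: average the vertex values within each parcel
surf2parcel = (np.bincount(labels, weights=surf_data)
               / np.bincount(labels))
```

Round-tripping a parcel vector through both mappings recovers the original values exactly, since every vertex in a parcel carries that parcel's value.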
References¶
ENIGMA datasets¶
Sun, D., Ching, C. R., Lin, A., Forsyth, J. K., Kushan, L., Vajdi, A., … & Jonas, R. K. (2018). Large-scale mapping of cortical alterations in 22q11.2 deletion syndrome: convergence with idiopathic psychosis and effects of deletion size. Molecular psychiatry, 1-13.
Boedhoe, P. S., Van Rooij, D., Hoogman, M., Twisk, J. W., Schmaal, L., Abe, Y., … & Arango, C. (2020). Subcortical brain volume, regional cortical thickness, and cortical surface area across disorders: findings from the ENIGMA ADHD, ASD, and OCD working groups. American Journal of Psychiatry, 177(9), 834-843.
Van Rooij, D., Anagnostou, E., Arango, C., Auzias, G., Behrmann, M., Busatto, G. F., … & Dinstein, I. (2018). Cortical and subcortical brain morphometry differences between patients with autism spectrum disorder and healthy individuals across the lifespan: results from the ENIGMA ASD Working Group. American Journal of Psychiatry, 175(4), 359-369.
Hibar, D. P., Westlye, L. T., Doan, N. T., Jahanshad, N., Cheung, J. W., Ching, C. R., … & Krämer, B. (2018). Cortical abnormalities in bipolar disorder: an MRI analysis of 6503 individuals from the ENIGMA Bipolar Disorder Working Group. Molecular psychiatry, 23(4), 932-942.
Whelan, C. D., Altmann, A., Botía, J. A., Jahanshad, N., Hibar, D. P., Absil, J., … & Bergo, F. P. (2018). Structural brain abnormalities in the common epilepsies assessed in a worldwide ENIGMA study. Brain, 141(2), 391-408.
Schmaal, L., Hibar, D. P., Sämann, P. G., Hall, G. B., Baune, B. T., Jahanshad, N., … & Vernooij, M. W. (2017). Cortical abnormalities in adults and adolescents with major depression based on brain scans from 20 cohorts worldwide in the ENIGMA Major Depressive Disorder Working Group. Molecular psychiatry, 22(6), 900-909.
Schmaal, L., Veltman, D. J., van Erp, T. G., Sämann, P. G., Frodl, T., Jahanshad, N., … & Vernooij, M. W. (2016). Subcortical brain alterations in major depressive disorder: findings from the ENIGMA Major Depressive Disorder working group. Molecular psychiatry, 21(6), 806-812.
Boedhoe, P. S., Schmaal, L., Abe, Y., Alonso, P., Ameis, S. H., Anticevic, A., … & Bollettini, I. (2018). Cortical abnormalities associated with pediatric and adult obsessive-compulsive disorder: findings from the ENIGMA Obsessive-Compulsive Disorder Working Group. American Journal of Psychiatry, 175(5), 453-462.
Van Erp, T. G., Walton, E., Hibar, D. P., Schmaal, L., Jiang, W., Glahn, D. C., … & Okada, N. (2018). Cortical brain abnormalities in 4474 individuals with schizophrenia and 5098 control subjects via the Enhancing Neuro Imaging Genetics Through Meta Analysis (ENIGMA) Consortium. Biological psychiatry, 84(9), 644-654.
van Erp, T. G., Hibar, D. P., Rasmussen, J. M., Glahn, D. C., Pearlson, G. D., Andreassen, O. A., … & Melle, I. (2016). Subcortical brain volume abnormalities in 2028 individuals with schizophrenia and 2540 healthy controls via the ENIGMA consortium. Molecular psychiatry, 21(4), 547-553.
Connectivity data¶
Van Essen, D. C., Smith, S. M., Barch, D. M., Behrens, T. E., Yacoub, E., Ugurbil, K., & Wu-Minn HCP Consortium. (2013). The WU-Minn human connectome project: an overview. Neuroimage, 80, 62-79.
Glasser, M. F., Sotiropoulos, S. N., Wilson, J. A., Coalson, T. S., Fischl, B., Andersson, J. L., … & Van Essen, D. C. (2013). The minimal preprocessing pipelines for the Human Connectome Project. Neuroimage, 80, 105-124.
Gene co-expression data¶
Arnatkevic̆iūtė, A., Fulcher, B. D., & Fornito, A. (2019). A practical guide to linking brain-wide gene expression and neuroimaging data. Neuroimage, 189, 353-367.
Hawrylycz, M. J., Lein, E. S., Guillozet-Bongaarts, A. L., Shen, E. H., Ng, L., Miller, J. A., … & Abajian, C. (2012). An anatomically comprehensive atlas of the adult human brain transcriptome. Nature, 489(7416), 391-399.
Markello, R., Shafiei, G., Zheng, Y. Q., & Mišić, B. (2020). abagen: A toolbox for the Allen Brain Atlas genetics data. Zenodo. https://doi.org/10.5281/zenodo.3726257.
GWAS¶
Demontis, D., Walters, R. K., Martin, J., Mattheisen, M., Als, T. D., Agerbo, E., … & Cerrato, F. (2019). Discovery of the first genome-wide significant risk loci for attention deficit/hyperactivity disorder. Nature genetics, 51(1), 63-75.
Grove, J., Ripke, S., Als, T. D., Mattheisen, M., Walters, R. K., Won, H., … & Awashti, S. (2019). Identification of common genetic risk variants for autism spectrum disorder. Nature genetics, 51(3), 431-444.
Stahl, E. A., Breen, G., Forstner, A. J., McQuillin, A., Ripke, S., Trubetskoy, V., … & de Leeuw, C. A. (2019). Genome-wide association study identifies 30 loci associated with bipolar disorder. Nature genetics, 51(5), 793-803.
Howard, D. M., Adams, M. J., Clarke, T. K., Hafferty, J. D., Gibson, J., Shirali, M., … & Alloza, C. (2019). Genome-wide meta-analysis of depression identifies 102 independent variants and highlights the importance of the prefrontal brain regions. Nature neuroscience, 22(3), 343-352.
The International League Against Epilepsy Consortium on Complex Epilepsies. (2018). Genome-wide mega-analysis identifies 16 loci and highlights diverse biological mechanisms in the common epilepsies. Nature communications, 9.
Pardiñas, A. F., Holmans, P., Pocklington, A. J., Escott-Price, V., Ripke, S., Carrera, N., … & Han, J. (2018). Common schizophrenia alleles are enriched in mutation-intolerant genes and in regions under strong background selection. Nature genetics, 50(3), 381-389.
Yu, D., Sul, J. H., Tsetsos, F., Nawaz, M. S., Huang, A. Y., Zelaya, I., … & Greenberg, E. (2019). Interrogating the genetic determinants of Tourette’s syndrome and other tic disorders through genome-wide association studies. American Journal of Psychiatry, 176(3), 217-227.
BigBrain¶
Amunts, K., Lepage, C., Borgeat, L., Mohlberg, H., Dickscheid, T., Rousseau, M. É., … & Shah, N. J. (2013). BigBrain: an ultrahigh-resolution 3D human brain model. Science, 340(6139), 1472-1475.
Papoulis, A., & Pillai, S. U. (2002). Probability, random variables, and stochastic processes. Tata McGraw-Hill Education.
Paquola, C., De Wael, R. V., Wagstyl, K., Bethlehem, R. A., Hong, S. J., Seidlitz, J., … & Smallwood, J. (2019). Microstructural and functional gradients are increasingly dissociated in transmodal cortices. PLoS biology, 17(5), e3000284.
von Economo and Koskinas atlas¶
von Economo, C. F., & Koskinas, G. N. (1925). Die Cytoarchitektonik der Hirnrinde des erwachsenen Menschen. J. Springer.
Scholtens, L. H., de Reus, M. A., de Lange, S. C., Schmidt, R., & van den Heuvel, M. P. (2018). An MRI von Economo–Koskinas atlas. NeuroImage, 170, 249-256.
Triarhou, L. C. (2007). The Economo-Koskinas atlas revisited: cytoarchitectonics and functional context. Stereotactic and functional neurosurgery, 85(5), 195-203.
Network-based atrophy models (hubs)¶
Fornito, A., Zalesky, A., & Breakspear, M. (2015). The connectomics of brain disorders. Nature Reviews Neuroscience, 16(3), 159-172.
van den Heuvel, M. P., & Sporns, O. (2013). Network hubs in the human brain. Trends in cognitive sciences, 17(12), 683-696.
Crossley, N. A., Mechelli, A., Scott, J., Carletti, F., Fox, P. T., McGuire, P., & Bullmore, E. T. (2014). The hubs of the human connectome are generally implicated in the anatomy of brain disorders. Brain, 137(8), 2382-2395.
Network-based atrophy models (disease epicenters)¶
Larivière, S., Rodríguez-Cruces, R., Royer, J., Caligiuri, M. E., Gambardella, A., Concha, L., … & Gleichgerrcht, E. (2020). Network-based atrophy modeling in the common epilepsies: a worldwide ENIGMA study. Science Advances, 6(47), eabc6457.
Shafiei, G., Markello, R. D., Makowski, C., Talpalaru, A., Kirschner, M., Devenyi, G. A., … & Chakravarty, M. M. (2020). Spatial patterning of tissue volume loss in schizophrenia reflects brain network architecture. Biological psychiatry, 87(8), 727-735.
Zeighami, Y., Ulla, M., Iturria-Medina, Y., Dadar, M., Zhang, Y., Larcher, K. M. H., … & Dagher, A. (2015). Network structure of brain atrophy in de novo Parkinson’s disease. Elife, 4, e08440.
Brown, J. A., Deng, J., Neuhaus, J., Sible, I. J., Sias, A. C., Lee, S. E., … & Grinberg, L. T. (2019). Patient-tailored, connectivity-based forecasts of spreading brain atrophy. Neuron, 104(5), 856-868.
Spin permutations¶
Alexander-Bloch, A. F., Shou, H., Liu, S., Satterthwaite, T. D., Glahn, D. C., Shinohara, R. T., … & Raznahan, A. (2018). On testing for spatial correspondence between maps of human brain structure and function. Neuroimage, 178, 540-551.
Váša, F., Seidlitz, J., Romero-Garcia, R., Whitaker, K. J., Rosenthal, G., Vértes, P. E., … & Jones, P. B. (2018). Adolescent tuning of association cortex in human structural brain networks. Cerebral Cortex, 28(1), 281-294.
Acknowledgements¶
The authors express their gratitude to the open science initiatives that made this work possible:
The ENIGMA Consortium

The Human Connectome Project

Allen Human Brain Atlas

We would also like to recognise funding support from the Canadian Institutes of Health Research (CIHR) and Fonds de la recherche en santé du Québec (FRQS). Core funding for ENIGMA was provided by the NIH Big Data to Knowledge (BD2K) program under consortium grant U54 EB020403, by the ENIGMA World Aging Center (R56 AG058854), and by the ENIGMA Sex Differences Initiative (R01 MH116147).
Core developers 👩🏻💻¶
Sara Larivière, MICA Lab - Montreal Neurological Institute
Boris Bernhardt, MICA Lab - Montreal Neurological Institute