Functional reducibility of higher-order networks (2024)

thanks: These authors contributed equally

Maxime Lucas (maxime.lucas.work@gmail.com), CENTAI Institute, Turin, Italy
Luca Gallo, Department of Network and Data Science, Central European University, 1100 Vienna, Austria
Arsham Ghavasieh, Department of Physics and Astronomy “Galileo Galilei”, University of Padua, Via F. Marzolo 8, 315126 Padova, Italy
Federico Battiston (battistonf@ceu.edu), Department of Network and Data Science, Central European University, 1100 Vienna, Austria
Manlio De Domenico (manlio.dedomenico@unipd.it), Department of Physics and Astronomy “Galileo Galilei”, University of Padua, Via F. Marzolo 8, 315126 Padova, Italy; Padua Center for Network Medicine, University of Padua, Via F. Marzolo 8, 315126 Padova, Italy; Istituto Nazionale di Fisica Nucleare, Sez. Padova, Italy

(May 3, 2024)

Abstract

Empirical complex systems are widely assumed to be characterized not only by pairwise interactions, but also by higher-order (group) interactions that affect collective phenomena, from metabolic reactions to epidemics. Nevertheless, higher-order networks’ superior descriptive power—compared to classical pairwise networks—comes with a much increased model complexity and computational cost. Consequently, it is of paramount importance to establish a quantitative method to determine when such a modeling framework is advantageous with respect to pairwise models, and to which extent it provides a parsimonious description of empirical systems. Here, we propose a principled method, based on information compression, to analyze the reducibility of higher-order networks to lower-order interactions, by identifying redundancies in diffusion processes while preserving the relevant functional information. The analysis of a broad spectrum of empirical systems shows that, although some networks contain non-compressible group interactions, others can be effectively approximated by lower-order interactions—some technological and biological systems even just by pairwise interactions. More generally, our findings mark a significant step towards minimizing the dimensionality of models for complex systems.

preprint: APS/123-QED

Introduction

Many complex systems exhibit an interconnected structure that can be encoded by pairwise interactions between their constituents. Such pairwise interactions have been used to model biological, social and technological systems, providing a powerful descriptive and predictive framework [1, 2, 3, 4, 5, 6]. Recently, the analysis of higher-order structural patterns and dynamical behaviors has attracted the attention of the research community as a powerful framework to model group interactions [7, 8, 9, 10], with applications ranging from neuroscience [11, 12] to ecology [13] and social sciences [14, 15], highlighting the emergence of novel phenomena and non-trivial collective behavior [16, 17, 18, 19, 20, 21].

Higher-order networks encode more information than pairwise interactions: for example, metabolic reactions are more realistically described by group interactions between any number of reagents and reactants, capturing information that would be lost by considering the union of pairwise interactions instead. However, this modeling flexibility comes at a cost: new data needs to be adequately recorded and stored as group interactions instead of pairwise ones, and new analytical [22, 23, 24, 25] and computational [26, 27, 28] tools need to be developed. Moreover, the complexity and computational cost of these tools increase exponentially as larger group interactions are considered.

It is therefore crucial to understand under which conditions higher-order representations need to be favored over classical pairwise ones, and whether it is possible to devise a grounded procedure to determine which representation provides the most parsimonious description of an empirical system.

A similar challenge was faced nearly a decade ago, when the advent of temporal and categorical data [29, 30, 31] allowed multilayer representations of complex networks [32]. For these representations, a principled approach was used to show that not all layers, or types of interaction, are equally informative: some information can be discarded or aggregated to reduce the overall complexity of the model without sacrificing its descriptive power [33, 34]. Although multilayer networks are different from higher-order networks, this approach, formally based on the density matrix formalism [35], provides a good candidate for the present case. Indeed, the idea is similar to the widely used information compression algorithms adopted in computer science: by exploiting the regularities in the data, one can build a compressed representation that optimizes the number of bits needed to describe the data with a model and those to encode the model itself. Similar approaches have also been used to coarse-grain complex and multiplex networks [36, 37, 38].

Here, we build on this long-standing research line and propose a principled approach to optimally compress systems with higher-order interactions—accounting for the complexity of the data and the complexity of the model. Specifically, we determine an optimal order up to which interactions need to be considered to obtain a functionally optimal representation of the system—larger orders can be safely discarded. Formally, we do so by generalizing the concept of network density matrix [35, 39] to account for higher-order diffusive dynamics with the multiorder Laplacian [40, 17], and calculate a (modified) message length corresponding to each order. By minimizing this message length, in the spirit of the original minimum message length principle [41], we find the optimal compression of the data which, in turn, corresponds to the most parsimonious functional description of the system. In the following, we refer to this procedure as functional reduction. A higher-order network is fully reducible—to a pairwise network—if the optimal order is 1, while its reducibility decreases for increasing optimal orders. We demonstrate the validity of our method by performing an extensive analysis of synthetic networks and investigate the functional reducibility of a broad spectrum of real-world higher-order systems.

The advantage of this framework is that it provides a bridge between network analysis and information theory by means of a formalism that is largely inspired by quantum statistical physics, which has found a variety of applications from systems biology[42] to neuroscience[43], shedding light on fundamental mechanisms such as the emergence of network sparsity[44] and the renormalization group[45].

Results

.1 Density matrix for higher-order networks

The flow of information between nodes in a (pairwise) network can be modeled by different dynamical processes. Arguably, the simplest and most successful of these processes is diffusion, which can be described by means of the propagator $e^{-\tau\mathbf{L}}$, where $\mathbf{L}$ is the combinatorial Laplacian and $\tau$ is the diffusion time. In particular, the information flow from node $i$ to node $j$ is described by the component $\left(e^{-\tau\mathbf{L}}\right)_{ij}$. Network states can then be encoded by a density matrix [35, 39] defined as

$$\bm{\rho}_{\tau}=\frac{e^{-\tau\mathbf{L}}}{Z},\qquad(1)$$

where the partition function $Z=\operatorname{Tr}\left(e^{-\tau\mathbf{L}}\right)$ ensures a unit trace for this operator.
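As a concrete illustration, the density matrix of Eq. (1) can be computed for any symmetric Laplacian by spectral decomposition. This is a minimal sketch with our own function names, not the authors’ code:

```python
import numpy as np

def density_matrix(L, tau):
    """rho_tau = exp(-tau L) / Tr[exp(-tau L)]  (Eq. 1)."""
    w, V = np.linalg.eigh(L)            # L is symmetric positive semidefinite
    P = (V * np.exp(-tau * w)) @ V.T    # propagator e^{-tau L}
    return P / np.trace(P)              # divide by the partition function Z

# Toy example: combinatorial Laplacian L = D - A of the path graph 0-1-2.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A
rho = density_matrix(L, tau=1.0)
```

By construction the result has unit trace and strictly positive eigenvalues, as expected of a density matrix.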


Here, we generalize density matrices to higher-order networks. The most general formalism to encode higher-order networks is that of hypergraphs [8]. A hypergraph is defined by a set of nodes and a set of hyperedges that represent the interactions between any number of those nodes. A hyperedge is said to be of order $d$ if it involves $d+1$ nodes: a 0-hyperedge is a node, a 1-hyperedge is a 2-node interaction, a 2-hyperedge is a 3-node interaction, and so on. Simplicial complexes are a commonly used special case of hypergraphs: they additionally require that each subset of each hyperedge is included, too. In a hypergraph $H$ with maximum order $d_{\rm max}$, diffusion between nodes through hyperedges of order up to $D\leq d_{\rm max}$ can be described by the multiorder Laplacian [40, 46]

$$\mathbf{L}^{[D]}\equiv\mathbf{L}^{(D,\mathrm{mul})}=\sum_{d=1}^{D}\frac{\gamma_{d}}{\langle K^{(d)}\rangle}\,\mathbf{L}^{(d)},\qquad(2)$$

which is a weighted sum of the Laplacians $\mathbf{L}^{(d)}$ at each order $d$ up to order $D$. (In general, a hypergraph does not need to have hyperedges at every order below $D$, unless it is a simplicial complex. If there is no hyperedge of order $d$, both the Laplacian and the average degree in Eq. (2) vanish and the result is undefined. In those cases, the sum thus needs to be taken over all orders below $D$ that exist: $\mathcal{D}=\{d\leq D:\langle K^{(d)}\rangle>0\}$.) At each order, the weight is defined by a real coefficient $\gamma_{d}$ (which we set to 1 for simplicity) and the average generalized degree $\langle K^{(d)}\rangle$.
Each $d$-order Laplacian is defined by $L^{(d)}_{ij}=K^{(d)}_{i}\delta_{ij}-\frac{1}{d}A^{(d)}_{ij}$, in terms of the generalized degrees $\bm{K}^{(d)}$ and adjacency matrix $\mathbf{A}^{(d)}$ of order $d$. The matrix $\mathbf{L}^{[D]}$ satisfies all the properties expected from a Laplacian: it is positive semidefinite and its rows (columns) sum to zero (see Methods for details).
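The construction above can be sketched as follows, with hyperedges given as tuples of 0-indexed nodes (the helper names are ours): each order-$d$ Laplacian is built from the generalized degrees and adjacency matrix, and Eq. (2) sums the orders that actually contain hyperedges, as in the footnote.

```python
import numpy as np
from itertools import combinations

def order_laplacian(hyperedges, n, d):
    """L^(d)_ij = K^(d)_i delta_ij - A^(d)_ij / d, where K^(d)_i counts the
    d-hyperedges containing node i and A^(d)_ij those containing both i and j."""
    A = np.zeros((n, n))
    K = np.zeros(n)
    for e in hyperedges:
        if len(e) == d + 1:                 # a d-hyperedge involves d+1 nodes
            K[list(e)] += 1
            for i, j in combinations(e, 2):
                A[i, j] += 1
                A[j, i] += 1
    return np.diag(K) - A / d, K

def multiorder_laplacian(hyperedges, n, D):
    """L^[D] = sum_d L^(d) / <K^(d)>  (Eq. 2, with gamma_d = 1);
    orders with no hyperedges are skipped."""
    Lmul = np.zeros((n, n))
    for d in range(1, D + 1):
        Ld, K = order_laplacian(hyperedges, n, d)
        if K.mean() > 0:
            Lmul += Ld / K.mean()
    return Lmul

# Small hypergraph: three pairwise edges and one 3-node interaction.
H = [(0, 1), (1, 2), (2, 3), (0, 1, 2)]
Lmul = multiorder_laplacian(H, n=4, D=2)
```

One can check that `Lmul` inherits the Laplacian properties stated in the text: it is symmetric, its rows sum to zero, and it is positive semidefinite.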

Accordingly, the multiorder density matrix of hypergraph $H$, up to order $d$, is defined as

$$\bm{\rho}^{[d]}_{\tau}=\frac{e^{-\tau\mathbf{L}^{[d]}}}{Z},\qquad(3)$$

with the partition function $Z=\operatorname{Tr}\left(e^{-\tau\mathbf{L}^{[d]}}\right)$. Just like its pairwise analog in Eq. (1), this operator satisfies all the expected properties of a density matrix: it is positive definite and its eigenvalues sum up to one. Importantly, the diffusion time $\tau$ plays the role of a topological scale parameter: small values of $\tau$ allow information to diffuse only to neighboring nodes, probing only short-scale structures. Larger values of $\tau$, instead, allow the diffusion to reach more remote parts of the hypergraph and describe large-scale structures. In this context, the meaning of “small” and “large” depends on the network structure, and can be estimated with respect to the inverses of the largest ($1/\lambda_{\max}$) and smallest ($1/\lambda_{\min}$) eigenvalues of the Laplacians, respectively.

.2 Quantifying the reducibility of a hypergraph

We approach the reducibility of a hypergraph as a problem of model selection, which we formulate as follows: given a hypergraph $H$ with maximum order $d_{\rm max}$, is $H$ an optimal representation of itself, or is considering only its hyperedges up to a given order $d<d_{\rm max}$ sufficient? Formally, we treat the density matrix $\bm{\rho}^{[d_{\rm max}]}$ as data and $\bm{\rho}^{[d]}$ as a model of the data. Formulated in this way, the problem is to find the “optimal” model of the data, that is, to evaluate the optimal order $d_{\rm opt}$. To define “optimal”, we use an approach inspired by the minimum message length formalism: the model needs to represent the data as accurately as possible while remaining as simple as possible, akin to Occam’s razor. The steps of the method are illustrated in Fig. 1: (a) start from an original hypergraph, (b) calculate the optimal largest order $d_{\rm opt}$ by minimizing the modified message length, and (c) reduce the original hypergraph to an optimal hypergraph, that is, one with orders only up to $d_{\rm opt}$, without losing functional information.

As illustrated in Fig. 2, we define the (modified) message length $\mathcal{L}$ as the sum of the information loss of the model—the opposite of the model accuracy—measured by a suitably generalized Kullback–Leibler divergence $D_{\rm KL}$, and the model complexity $C$ (see Methods for details):

$$\mathcal{L}\!\left(\bm{\rho}_{\tau}^{[d_{\rm max}]}\,\big|\,\bm{\rho}_{\tau}^{[d]}\right)=D_{\rm KL}\!\left(\bm{\rho}_{\tau}^{[d_{\rm max}]}\,\big|\,\bm{\rho}_{\tau}^{[d]}\right)+C\!\left(\bm{\rho}_{\tau}^{[d]}\right).\qquad(4)$$

Note that, by definition, there is no information loss when considering all possible orders, i.e., $D_{\rm KL}\left(\bm{\rho}_{\tau}^{[d_{\rm max}]}|\bm{\rho}_{\tau}^{[d_{\rm max}]}\right)=0$. Additionally, we call this message length “modified” because its definition deviates from the standard Bayesian message length. This is because we work with operators instead of probability distributions, so there is no direct generalization of Bayes’ theorem. To partially overcome this issue, we use again a Kullback–Leibler divergence to approximate the model complexity (see Methods).

The optimal order $d_{\rm opt}$ is then the one that minimizes the modified message length:

$$d_{\rm opt}=\operatorname*{arg\,min}_{d}\,\mathcal{L}\!\left(\bm{\rho}_{\tau}^{[d_{\rm max}]}\,\big|\,\bm{\rho}_{\tau}^{[d]}\right).\qquad(5)$$
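The selection rule of Eqs. (4)–(5) can be sketched as follows. As one concrete realization of the generalized Kullback–Leibler divergence we use the standard quantum relative entropy; the exact form of the complexity term $C$ is specified in the paper’s Methods, so here it is an assumption left as a user-supplied callable (set to zero in the toy usage):

```python
import numpy as np

def logm_sym(M, eps=1e-300):
    """Matrix logarithm of a symmetric positive (semi)definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(np.clip(w, eps, None))) @ V.T

def quantum_kl(rho, sigma):
    """D_KL(rho | sigma) = Tr[rho (log rho - log sigma)], the quantum
    relative entropy (one concrete choice of generalized KL divergence)."""
    return float(np.trace(rho @ (logm_sym(rho) - logm_sym(sigma))))

def optimal_order(rhos, complexity):
    """Eqs. (4)-(5): rhos maps each order d to its density matrix rho^[d];
    rho^[d_max] plays the role of the data, `complexity` implements C."""
    d_max = max(rhos)
    msg_len = {d: quantum_kl(rhos[d_max], rho) + complexity(rho)
               for d, rho in rhos.items()}
    return min(msg_len, key=msg_len.get)

# Toy check: with zero complexity the data itself is always the best model.
rho_a, rho_b = np.diag([0.7, 0.3]), np.diag([0.6, 0.4])
d_opt = optimal_order({1: rho_a, 2: rho_b}, complexity=lambda r: 0.0)  # -> 2
```

With $C\equiv 0$ the message length of $\bm{\rho}^{[d_{\rm max}]}$ is exactly zero, so $d_{\rm opt}=d_{\rm max}$; it is the complexity term that can make a lower order optimal.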

Finally, we define the reducibility of the hypergraph as

$$\chi(H)=\frac{d_{\rm max}-d_{\rm opt}}{d_{\rm max}-1},\qquad(6)$$

which measures the ratio between the number of orders to reduce and the maximum number of orders to reduce, $d_{\rm max}-1$. By construction, $\chi(H)=0$ for a hypergraph that is not reducible at all, i.e., $d_{\rm opt}=d_{\rm max}$, while $\chi(H)=1$ for a hypergraph that is maximally reducible, that is, one that can be optimally reduced to its pairwise interactions, $d_{\rm opt}=1$.

.3 Rescaling the diffusion time $\tau$ at each order

As mentioned above, a topological scale $\tau$ may be large for some networks but small for others, which is a challenge when the aim is to compare networks. To overcome this problem, we rescale $\tau$ to ensure an appropriate diffusion time for each structure, and we exploit examples of hypergraphs with certain regularities to define a baseline and characterize the scaling relation.

Specifically, there is a class of hypergraphs for which Laplacians of different orders are proportional, i.e., $\mathbf{L}^{(d)}\propto\mathbf{L}^{(d^{\prime})}$. This occurs in very regular structures, including complete hypergraphs and some simplicial complex lattices (see Methods). In this case, since the Laplacian matrices govern the flow of information, all orders and all their combinations encode the same functional information. Consequently, we expect these structures to be functionally invariant under reduction—i.e., the modified message length should not change as one reduces the hypergraph.

However, without rescaling, the summation of Laplacian matrices according to Eq. (2) simply strengthens the flow pathways in the original hypergraph, making it different from its reduced versions. To correct for this effect, we rescale $\tau$ to allow meaningful comparisons between hypergraphs of different orders.

Since the density matrix depends only on the product $\tau\mathbf{L}^{(d)}$, this can be achieved by selecting a value of the diffusion time $\tau$, and then rescaling it to obtain a new $\tau^{\prime}(d)$ at each order:

$$\tau^{\prime}(d)=\frac{d_{\rm max}}{d}\,\tau.\qquad(7)$$

This ensures that the multiorder density matrices are all equal

$$\bm{\rho}_{\tau^{\prime}(d)}^{[d]}=\bm{\rho}_{\tau}^{[d_{\rm max}]}\quad\forall d,\qquad(8)$$

in the special case of hypergraphs with proportional Laplacian matrices, and gives a flat modified message length in those extreme structures (Fig.S1).
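The invariance of Eq. (8) can be checked numerically on the smallest complete hypergraph, where $\mathbf{L}^{(2)}=\mathbf{L}^{(1)}/2$. This is a sketch under the paper’s conventions (with $\gamma_d=1$ and the degree-normalized sum of Eq. (2)):

```python
import numpy as np

def density_matrix(L, tau):
    w, V = np.linalg.eigh(L)
    P = (V * np.exp(-tau * w)) @ V.T
    return P / np.trace(P)

# Complete hypergraph on 3 nodes: all three edges plus the triangle {0,1,2}.
L1 = np.array([[ 2., -1., -1.],
               [-1.,  2., -1.],
               [-1., -1.,  2.]])   # pairwise Laplacian, <K^(1)> = 2
L2 = L1 / 2                        # order-2 Laplacian: here L^(2) = L^(1)/2
Lmul_1 = L1 / 2                    # L^[1] = L^(1) / <K^(1)>
Lmul_2 = L1 / 2 + L2 / 1           # L^[2] = L^[1] + L^(2) / <K^(2)>, <K^(2)> = 1

tau, d_max = 0.7, 2
rho_reduced = density_matrix(Lmul_1, (d_max / 1) * tau)  # tau'(1) = 2 tau
rho_full    = density_matrix(Lmul_2, (d_max / 2) * tau)  # tau'(2) = tau
print(np.allclose(rho_reduced, rho_full))  # True: Eq. (8) holds exactly here
```

Without the rescaling of Eq. (7), the two density matrices would differ even in this maximally regular case, because the multiorder sum strengthens the flow pathways.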

For illustration purposes, we set $\tau=1/\lambda_{N}$ in numerical experiments, unless otherwise stated, where $\lambda_{N}$ is the largest eigenvalue of the multiorder Laplacian of the original hypergraph, $\mathbf{L}^{[d_{\rm max}]}$.

Physically, this rescaling simply means that we adapt the topological scale at which we probe the hypergraph as we consider more orders, to highlight the distribution of flow pathways rather than their accumulated strengths. We use this rescaling of $\tau$ at each order, in all hypergraphs (see Methods for details). In other words, we compute the modified message length $\mathcal{L}\left(\bm{\rho}_{\tau}^{[d_{\rm max}]}\,\big|\,\bm{\rho}_{\tau^{\prime}(d)}^{[d]}\right)$, where the $\tau$ of the reduced hypergraph is rescaled, contrary to Eq. (4).

.4 Random structures

We now investigate the reducibility of two types of heterogeneous random structures: random hypergraphs and random simplicial complexes. To do so, we compute the optimal order as described above.

A random hypergraph is defined by a number of nodes $N$ and a set of wiring probabilities $p_{d}$, one for each order required. At each order $d$, a hyperedge is created for any combination of $d+1$ nodes with probability $p_{d}$, similarly to Erdős–Rényi networks. Random simplicial complexes are built in the same manner, before adding the missing subfaces of all simplices to respect the condition of inclusion. In both cases, we set $N=100$, $p_{d}=50/N^{d}$, and $d_{\rm max}=4$.
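The two ensembles can be sampled as follows. This is a sketch with our own helper names; for speed it uses a smaller $N$ and hand-picked probabilities rather than the paper’s $N=100$, $p_d=50/N^d$, $d_{\rm max}=4$:

```python
import random
from itertools import combinations

def random_hypergraph(n, probs, rng):
    """Each set of d+1 nodes becomes a d-hyperedge with probability probs[d],
    independently, in analogy with Erdos-Renyi graphs."""
    edges = set()
    for d, p in probs.items():
        for combo in combinations(range(n), d + 1):
            if rng.random() < p:
                edges.add(combo)
    return edges

def simplicial_closure(edges):
    """Add every subface (down to pairwise edges) of every hyperedge,
    turning the hypergraph into a simplicial complex."""
    closed = set(edges)
    for e in edges:
        for k in range(2, len(e)):
            closed.update(combinations(e, k))
    return closed

rng = random.Random(42)
n = 30
probs = {1: 0.10, 2: 0.01, 3: 0.001}  # the paper uses p_d = 50/N^d, N = 100
H = random_hypergraph(n, probs, rng)  # random hypergraph
S = simplicial_closure(H)             # random simplicial complex
```

The only difference between the two ensembles is the closure step, which is precisely the cross-order correlation (nestedness) probed in the shuffling experiment below.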


Figure 3 shows the modified message length considering orders from 1 to 4, for 100 realizations of each type of random structure. In random hypergraphs (Fig. 3a), the optimal order is the maximum, $d_{\rm opt}=4=d_{\rm max}$. This means that those random hypergraphs, at this diffusion scale $\tau=1/\lambda_{N}$, are not reducible, i.e., $\chi=0$. Instead, in the random simplicial complex case, $d_{\rm opt}=3$, reflecting a higher reducibility, $\chi=1/3$.

The only difference between these two cases is that the hyperedges of different orders are correlated (nested) in random simplicial complexes but not in random hypergraphs. To test the effect of this feature, we start from random simplicial complexes and gradually change them into random hypergraphs by shuffling their hyperedges. Specifically, we change each hyperedge into another, nonexistent hyperedge with probability $p_{\rm shuffle}$. For $p_{\rm shuffle}=1$, the result is a random hypergraph, and Fig. S2 shows results that confirm the above pattern.


.4.1 Effect of the diffusion time $\tau$ and density

As mentioned above, $\tau$ acts as a topological scale. Diffusion processes with different values of $\tau$ are thus expected to “see” different structures, which may result in a different optimal order and reduced hypergraph. To illustrate this, we compute the modified message length curves for the random simplicial complex case, for five values of $\tau$ evenly spaced on a logarithmic scale, where the second and fourth are $1/\lambda_{N}$ and $1/\lambda_{2}$, respectively. Figure S3 shows that, as the topological scale increases (larger $\tau$), (i) the modified message length increases overall and (ii) the reducibility decreases to $\chi=0$. Figure S5 shows the same results for three values of the hyperedge density: for higher densities, the modified message length curve becomes much flatter, with no clear minimum, suggesting a functional similarity between the large-scale structures at all orders.

We similarly tested the effect of density alone on the reducibility (Figs. S4 and S5). In general, the overall density does not seem to significantly affect the reducibility.

.5 Real-world hypergraphs

We now investigate how reducible real-world hypergraphs are by considering 22 empirical hypergraph datasets from 4 categories: coauthorships, face-to-face contacts, biological systems, and online communications (“technological”).

First, we observe a great variety in the reducibility values (Fig. 4) and the associated modified message length curves (Fig. S6). Figs. 4a-c show three examples with curves of very different shapes and with different reducibility. Second, we note that datasets from different categories seem to be distinctively reducible: all coauthorship datasets are not reducible ($\chi=0$); half of the technological datasets are fully reducible (tag datasets, $\chi=1$) and the other half is not (email datasets, $\chi=0$); whereas most contact datasets have intermediate values of reducibility. The number of nodes, the maximum and optimal orders, and the hypergraph reducibility are reported for all datasets in Table 1, and a description of the datasets is provided in Methods.

In Fig.S7, we also show the reducibility values against structural parameters of the hypergraphs: number of nodes, number of edges, density, and maximum order. We did not observe clear correlations with any of those parameters. In particular, the reducibility takes many values between 0 and 1 for any value of the structural parameters, which confirms the variety of reducibility values in the empirical datasets.

Table 1.

Dataset | Category | N | d_max | d_opt | χ
coauth-mag-geology_1980 | coauthorship | 1350 | 17 | 17 | 0.00
coauth-mag-geology_1981 | coauthorship | 464 | 17 | 17 | 0.00
coauth-mag-geology_1982 | coauthorship | 1331 | 17 | 17 | 0.00
coauth-mag-geology_1983 | coauthorship | 535 | 14 | 14 | 0.00
congress-bills | other | 1718 | 399 | 353 | 0.12
kaggle-whats-cooking | other | 6714 | 64 | 64 | 0.00
contact-high-school | contact | 327 | 4 | 3 | 0.33
contact-primary-school | contact | 242 | 4 | 2 | 0.67
hospital-lyon | contact | 75 | 4 | 2 | 0.67
hypertext-conference | contact | 113 | 5 | 3 | 0.50
invs13 | contact | 92 | 3 | 2 | 0.50
invs15 | contact | 217 | 3 | 2 | 0.50
science-gallery | contact | 410 | 4 | 4 | 0.00
sfhh-conference | contact | 403 | 8 | 5 | 0.43
malawi-village | contact | 84 | 3 | 2 | 0.50
dawn | bio | 2290 | 15 | 1 | 1.00
ndc-classes | bio | 628 | 38 | 11 | 0.73
ndc-substances | bio | 3065 | 24 | 23 | 0.04
email-enron | technology | 143 | 36 | 36 | 0.00
email-eu | technology | 986 | 39 | 39 | 0.00
tags-ask-ubuntu | technology | 3021 | 4 | 1 | 1.00
tags-math-sx | technology | 1627 | 4 | 1 | 1.00

Discussion

All areas of natural and social science, as well as engineering ones, are undergoing a deluge of publicly available data with complex structure. This data allows for building more detailed and powerful models of systems from areas such as physics, biology, sociology, and technology. One class of such models is networks that encode group interactions rather than just pairwise ones: higher-order networks. Higher-order networks provide a natural framework for modeling systems where interactions between more than two units occur, such as chemical reactions of metabolic interest or social interactions. However, they are usually projected—by design—to pairwise interactions when data is gathered, and higher-order information is inevitably lost. By preserving that information, higher-order networks have the potential to yield a more reliable model of some empirical systems. Nevertheless, higher-order networks come with novel challenges: new theoretical and computational methods have to be developed, while algorithms become exponentially more complex for increasing order of the interactions. It is thus of paramount importance to understand under which conditions one should opt for higher-order modeling—or keep using traditional pairwise models.

Here, we have provided a principled method, at the interface between statistical physics and information theory, to guide researchers in identifying the most suitable representation for their data. Our method finds the optimal order of group interactions by minimizing a suitably modified message length, which encodes both the number of bits required to describe the data given a model and the number of bits required to describe the model, estimated by its entropy. Orders above the optimal one can be safely discarded: they provide only redundant information about the system while increasing model complexity and the computational cost of its analysis.

The message length we defined is “modified” compared to the standard Bayesian definition. Indeed, the density matrices are operators, not probability distributions, for which there is no direct generalization of the Bayes theorem. To partially overcome this, we used a Kullback-Leibler divergence to approximate the model complexity and define an effective regularization term.

Remarkably, we show that not all systems require a higher-order model to be described, and that even within the same class of systems there is some level of variability. We started by testing the method on extremely regular cases, where the modified message length was flat for all orders, as computed analytically. We then applied the method to random structures, where results indicate that random hypergraphs are non-reducible, i.e., they are best represented by considering all possible orders of interaction. This result is expected because random hypergraphs are the most uncorrelated model possible: hyperedges are uncorrelated within each order, just like in Erdős-Rényi random graphs, but hyperedges of different orders are also uncorrelated, yielding incompressibility of the representation. In other words, all orders are relevant to describe the system’s structure and dynamics, since they encode genuinely uncorrelated, incompressible noise. Conversely, random simplicial complexes were reducible because hyperedges at different orders are correlated by design: this correlation introduces some level of redundancy that is captured by our method.

Finally, we applied our method to empirical datasets that contain group interactions. The large variety of reducibility values obtained indicates that, while the extra information encoded in higher-order networks may be optimal for some systems, others can be optimally represented with lower orders, and in some cases even with only pairwise interactions. It is also important to note that the category of systems to which the datasets belong appears to be correlated with their reducibility, although more datasets are needed to perform a systematic and quantitative analysis of this phenomenon. Our results challenge the widespread assumption that complex network data must necessarily be investigated through the lens of higher-order dynamics. In fact, there are completely reducible and completely non-reducible systems, with a full spectrum of cases between these two extremes, demonstrating that some orders might be irrelevant or uninformative for describing an empirical system. Moving forward, it will be crucial to collect new datasets from the biological sciences to determine whether, and in what scenarios, complex networks such as metabolomes and connectomes can be considered reducible or irreducible. This will help clarify under which conditions higher-order mechanisms and behaviors are essential for the function of these systems, as is often assumed nowadays.

Our work contributes to the increasing interest in reducing the dimensionality of higher-order network models [48, 49], and more generally of complex systems models [50, 51, 52, 45, 53]. In particular, phase reduction techniques for coupled oscillators have shown that higher-order interactions appear naturally when reducing the dynamics of (initially pairwise) coupled higher-dimensional oscillators to simple 1-dimensional phase oscillators [50]. This is consistent with recent results reducing complex networks to low-rank matrices, in which case higher-order interactions also appear naturally [53]. A renormalization group approach also identified important orders of interaction in empirical datasets, which were not always pairwise [49]. Parallel but complementary to these approaches that look at “mechanisms”, other approaches such as maximum entropy models and higher-order information-theoretic measures look at “behaviors”, that is, time series of observables from those systems. Interestingly, several such studies have instead shown that pairwise models can be sufficient, under certain conditions, to describe the behaviors of systems that contain higher-order interactions, or mechanisms [54, 55, 56, 57]. Except for initial work [10, 58], little is known about the relationship between these results and approaches, suggesting a promising direction for future work.

Detecting redundancies and exploiting them with a principled approach to identify a compressed representation of the data has the potential to enhance our understanding of empirical higher-order systems, and contributes to the increasing interest in dimensionality reduction of such networks [48, 49, 53]. The main advantage of our framework is that it builds on a consolidated formalism firmly grounded in the statistical physics of strongly correlated systems. As in the case of multilayer systems [33], it is remarkable that such challenges can be tackled by capitalizing on a formal analogy between quantum and higher-order systems, which can be further exploited to gain novel insights into the structural and functional organization of complex systems.

Methods

.6 Multiorder Laplacian for hypergraphs

The Laplacian matrix $\mathbf{L}$ of a graph is defined as $L_{ij}=K_{i}\delta_{ij}-A_{ij}$, where $K_{i}$ is the degree of node $i$, $\delta_{ij}$ is the Kronecker delta, and $\mathbf{A}$ is the adjacency matrix. For hypergraphs, we can define $d_{\mathrm{max}}$ Laplacian matrices, one for each order of interaction. The Laplacian of order $d$ can be defined as [40, 17]

L^{(d)}_{ij} = K^{(d)}_{i}\,\delta_{ij} - \frac{1}{d}\,A^{(d)}_{ij},   (9)

where $K^{(d)}_{i}$ is the degree of order $d$ of node $i$, i.e., the number of $d$-hyperedges connected to node $i$, while $\mathbf{A}^{(d)}$ is the adjacency matrix of order $d$, whose elements $A^{(d)}_{ij}$ count the number of $d$-hyperedges connecting nodes $i$ and $j$.

We can hence define the multiorder Laplacian up to an order $D$ as [40, 17]

\mathbf{L}^{[D]} = \mathbf{L}^{(D,\mathrm{mul})} = \sum_{d=1}^{D} \frac{\gamma_{d}}{\langle K^{(d)}\rangle}\,\mathbf{L}^{(d)},   (10)

where $\gamma_{d}$ is a tuning parameter weighting interactions of order $d$, while $\langle K^{(d)}\rangle$ is the average degree of order $d$. For simplicity, in this study, we always set $\gamma_{d}=1$.

Note that, in general, a hypergraph need not have hyperedges at every order below $D$, unless it is a simplicial complex. If there is no hyperedge of order $d$, both the Laplacian and the average degree in Eq. (2) vanish and the corresponding term is undefined. In those cases, the sum is taken over all orders below $D$ that exist: $\mathcal{D}=\{d\leq D:\langle K^{(d)}\rangle>0\}$.
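As a concrete illustration, the order-$d$ and multiorder Laplacians of Eqs. (9)-(10) can be assembled from a hyperedge list in a few lines. This is a minimal numpy sketch under the paper's convention $\gamma_d = 1$; the function names are ours:

```python
import numpy as np

def order_laplacian(hyperedges, n, d):
    """Laplacian of order d (Eq. 9): L^(d)_ij = K^(d)_i delta_ij - A^(d)_ij / d.

    A^(d)_ij counts the d-hyperedges (size d+1) containing both i and j;
    K^(d)_i counts the d-hyperedges containing i.
    """
    A = np.zeros((n, n))
    K = np.zeros(n)
    for e in hyperedges:
        if len(e) != d + 1:
            continue
        for i in e:
            K[i] += 1
            for j in e:
                if i != j:
                    A[i, j] += 1
    return np.diag(K) - A / d

def multiorder_laplacian(hyperedges, n, D):
    """Multiorder Laplacian up to order D (Eq. 10), with gamma_d = 1.

    Orders with no hyperedges (zero average degree) are skipped, as in the text.
    """
    L = np.zeros((n, n))
    for d in range(1, D + 1):
        Ld = order_laplacian(hyperedges, n, d)
        k_avg = np.trace(Ld) / n  # <K^(d)> is the mean diagonal of L^(d)
        if k_avg > 0:
            L += Ld / k_avg
    return L
```

For a complete hypergraph on four nodes, this construction reproduces the proportionality $\mathbf{L}^{[d]} = d\,\mathbf{L}^{[1]}$ derived later in the Methods.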

.7 Information loss as Kullback-Leibler divergence between two hypergraphs

The state of the original hypergraph $H$ is stored in the multiorder density matrix

\bm{\rho}_{\tau}^{[d_{\rm max}]} = e^{-\tau\mathbf{L}^{[d_{\rm max}]}} / Z^{[d_{\rm max}]},   (11)

and the state of a reduced hypergraph in which we consider orders up to $d$ is given by

\bm{\rho}_{\tau}^{[d]} = e^{-\tau\mathbf{L}^{[d]}} / Z^{[d]}.   (12)

To perform model selection and determine the optimal order $d_{\rm opt}$ to represent the hypergraph, we treat $\bm{\rho}_{\tau}^{[d_{\rm max}]}$ as data and $\bm{\rho}_{\tau}^{[d]}$ as a model of it. The first key aspect of a good model is that it must describe the data as accurately as possible. We can quantify the modeling error, or information loss, with the Kullback-Leibler (KL) divergence between the data and the model, defined as

D_{\rm KL}\left(\bm{\rho}_{\tau}^{[d_{\rm max}]} \,\middle|\, \bm{\rho}_{\tau}^{[d]}\right) = -S^{[d_{\rm max}]} + S\left([d_{\rm max}] \,|\, [d]\right) \geq 0,   (13)

where $S^{[d_{\rm max}]} = -\mathrm{Tr}\left(\bm{\rho}_{\tau}^{[d_{\rm max}]}\log\bm{\rho}_{\tau}^{[d_{\rm max}]}\right)$ is the von Neumann entropy of the hypergraph and $S([d_{\rm max}]\,|\,[d]) = -\mathrm{Tr}\left(\bm{\rho}_{\tau}^{[d_{\rm max}]}\log\bm{\rho}_{\tau}^{[d]}\right)$ is the cross-entropy between the hypergraph and its reduced form.

The von Neumann entropy of a hypergraph can also be written as

S^{[d]} = -\mathrm{Tr}\left(\bm{\rho}_{\tau}^{[d]}\log\bm{\rho}_{\tau}^{[d]}\right) = -\sum_{i}\lambda_{i}\log\lambda_{i},   (14)

where $\{\lambda_{i}\}$ are the eigenvalues of the density matrix. The von Neumann entropy $S^{[d]}$ is zero if only one eigenvalue is non-zero (a “pure state”, in the language of quantum mechanics), and maximal, equal to $\log N$, if all eigenvalues are equal (a “maximally mixed state”). The latter occurs when all the eigenvalues of the Laplacian are zero, i.e., for $N$ isolated nodes.

The information loss $D_{\rm KL}$ takes positive values proportionally to the inaccuracy of the model, and reaches zero at $d=d_{\rm max}$, as the data is the most accurate model of itself.
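The quantities in Eqs. (11)-(14) are straightforward to compute from Laplacian spectra. A minimal numpy sketch (our function names; the symmetric Laplacian is exponentiated through its eigendecomposition):

```python
import numpy as np

def density_matrix(L, tau):
    """rho_tau = exp(-tau L) / Z, with Z = Tr exp(-tau L) (Eqs. 11-12)."""
    lam, U = np.linalg.eigh(L)          # L is symmetric
    w = np.exp(-tau * lam)
    return (U * (w / w.sum())) @ U.T    # U diag(w/Z) U^T

def von_neumann_entropy(rho):
    """S = -Tr(rho log rho) = -sum_i lambda_i log lambda_i (Eq. 14)."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]              # 0 log 0 = 0 by convention
    return -np.sum(lam * np.log(lam))

def _logm(rho):
    """Matrix logarithm of a symmetric positive-definite density matrix."""
    lam, U = np.linalg.eigh(rho)
    return (U * np.log(np.clip(lam, 1e-300, None))) @ U.T

def kl_divergence(rho_data, rho_model):
    """Information loss D_KL(rho_data | rho_model) (Eq. 13):
    minus entropy of the data plus cross-entropy with the model."""
    cross = -np.trace(rho_data @ _logm(rho_model))
    return -von_neumann_entropy(rho_data) + cross
```

As the text notes, for a vanishing Laplacian ($N$ isolated nodes) the density matrix is $\mathbf{I}/N$ and the entropy reaches its maximum $\log N$.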

.8 Model complexity

Accuracy, or conversely information loss, is insufficient to determine the optimal model and thus the order $d$. In fact, a model can always be made more accurate by overfitting. High model accuracy must therefore be balanced against low model complexity.

We measure the complexity of the model in terms of its entropic deviation from the simplest possible model: a network of isolated nodes. The entropy of $N$ isolated nodes is $S_{\rm iso}=\log N$. This gives an upper bound on the von Neumann entropy of a hypergraph, guaranteeing that $S^{[d]}\leq S_{\rm iso}$. Therefore, we define the model complexity $C$ as

C\left(\bm{\rho}_{\tau}^{[d]}\right) = S_{\rm iso} - S^{[d]}.   (15)

By definition, the model complexity is non-negative, $C\geq 0$; it is lower when the eigenvalues of $\bm{\rho}_{\tau}^{[d]}$ are all similar, and larger when they are more diverse.

.9 Minimizing the modified message length

We can now define the modified message length $\mathcal{L}$ by combining the information loss in Eq. (13) and the model complexity in Eq. (15):

\mathcal{L}\left(\bm{\rho}_{\tau}^{[d_{\rm max}]} \,\middle|\, \bm{\rho}_{\tau}^{[d]}\right) = D_{\rm KL}\left(\bm{\rho}_{\tau}^{[d_{\rm max}]} \,\middle|\, \bm{\rho}_{\tau}^{[d]}\right) + C\left(\bm{\rho}_{\tau}^{[d]}\right).   (16)

By definition, minimizing the modified message length corresponds to maximizing the accuracy of the model while, at the same time, minimizing its complexity (Occam’s razor). To obtain the best compression, we find the smallest order $d$ that minimizes $\mathcal{L}$:

d_{\rm opt} = \operatorname*{arg\,min}_{d}\, \mathcal{L}\left(\bm{\rho}_{\tau}^{[d_{\rm max}]} \,\middle|\, \bm{\rho}_{\tau}^{[d]}\right).   (17)
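The selection procedure in Eqs. (13)-(17) can be sketched as a short loop over candidate orders. This is a self-contained numpy sketch with our own function names; the multiorder Laplacians $\mathbf{L}^{[d]}$ are assumed precomputed and passed in as a dict:

```python
import numpy as np

def _density(L, tau):
    """rho_tau = exp(-tau L) / Tr exp(-tau L) for a symmetric Laplacian."""
    lam, U = np.linalg.eigh(L)
    w = np.exp(-tau * lam)
    return (U * (w / w.sum())) @ U.T

def _entropy(rho):
    """Von Neumann entropy, with the 0 log 0 = 0 convention."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log(lam))

def optimal_order(multiorder_laplacians, tau):
    """Return d_opt minimizing the modified message length (Eqs. 16-17).

    `multiorder_laplacians[d]` is L^[d]; the largest key plays the role
    of d_max. Ties are broken in favor of the smallest order, as in the text.
    """
    d_max = max(multiorder_laplacians)
    n = multiorder_laplacians[d_max].shape[0]
    rho_data = _density(multiorder_laplacians[d_max], tau)
    s_data = _entropy(rho_data)

    best_d, best_len = None, np.inf
    for d in sorted(multiorder_laplacians):
        rho = _density(multiorder_laplacians[d], tau)
        lam, U = np.linalg.eigh(rho)
        log_rho = (U * np.log(np.clip(lam, 1e-300, None))) @ U.T
        info_loss = -s_data - np.trace(rho_data @ log_rho)  # Eq. (13)
        complexity = np.log(n) - _entropy(rho)              # Eq. (15)
        msg_len = info_loss + complexity                    # Eq. (16)
        if msg_len < best_len - 1e-12:  # strict improvement keeps smallest d
            best_d, best_len = d, msg_len
    return best_d
```

If a higher order adds no information (e.g. $\mathbf{L}^{[2]}=\mathbf{L}^{[1]}$), both orders give the same message length and the smallest one is selected.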

It is worth mentioning that the propagation scale $\tau$ acts as a resolution parameter. When $\tau$ is very large, the process approaches its steady state, the network topology becomes irrelevant and, consequently, any model is a good model. At very small $\tau$, instead, the field evolves linearly and through paths of length $\approx 1$, exhibiting maximum resolution. Although we explore a variety of values of $\tau$, we mainly focus on a characteristic propagation scale $\tau_{c}$, the largest $\tau$ for which a linearization of the time-evolution operator is still valid. Assume the eigenvalues of the Laplacian are $\{\lambda_{\ell}\}$, with $\ell=1,2,\dots,N$, and let $\lambda_{N}$ be the largest of them. The eigenvalues of the time-evolution operator $e^{-\tau\mathbf{L}}$ are then $\{e^{-\tau\lambda_{\ell}}\}$. We set $\tau_{c}=1/\lambda_{N}$, ensuring that the last eigenvalue of the time-evolution operator is reasonably linearizable, $e^{-\tau_{c}\lambda_{N}}\approx 1-\tau_{c}\lambda_{N}$. Since $\lambda_{N}$ is the largest eigenvalue of the Laplacian, this guarantees an acceptable linearization for the remaining eigenvalues as well.

It is important to remark that, in the original message-length formulation, the hypothesis is that observational data can be described by a generative model characterized by a probability distribution. In turn, such a distribution depends on some unknown parameters that can be fitted by maximizing the Bayesian posterior or, equivalently, minimizing the message length. Consequently, the term accounting for model complexity, $C$, depends on the number of parameters used to describe that probability distribution. Here, we deviate from the standard Bayesian approach because we work with operators rather than probability distributions, so a direct generalization of the Bayes theorem in this context is difficult. In fact, our model is a density matrix that characterizes the probability distribution of activating pathways for information flows [59].

Consequently, within our formalism there is no direct way to reconcile our definition of message length with the standard Bayesian complexity term. The latter can be understood as Shannon’s surprise about the prior probability, which has no direct counterpart in our formalism. To partially overcome this issue, we again use a Kullback-Leibler divergence to approximate the model complexity and define an effective regularization term. This reduces to the deviation of the von Neumann entropy of the $d$-th-order model from that of a “gas network” (i.e., $N$ disconnected nodes), which can be understood as the maximally mixed state of a network of size $N$. In information-theoretic terms, if we define $\bm{\sigma}_{\rm iso}=\frac{\mathbf{I}}{N}$ as the density matrix of the maximally mixed state, where $\mathbf{I}$ is the identity matrix of order $N$, then $D_{\rm KL}(\bm{\rho}\,\|\,\bm{\sigma}_{\rm iso})\equiv C$ encodes the expected “excess surprise” that we have about the state $\bm{\rho}$ if we use $\bm{\sigma}_{\rm iso}$ as a prior model for it:

D_{\rm KL}(\bm{\rho}\,\|\,\bm{\sigma}_{\rm iso}) = \mathrm{Tr}\left(\bm{\rho}\,(\log\bm{\rho}-\log\bm{\sigma}_{\rm iso})\right)
 = \mathrm{Tr}\left(\bm{\rho}\log\bm{\rho}\right) - \mathrm{Tr}\left(\bm{\rho}\log\bm{\sigma}_{\rm iso}\right)
 = -S - \mathrm{Tr}\left(\bm{\rho}\,(\log\mathbf{I}-\log(N)\,\mathbf{I})\right)
 = \log N - S \equiv C.   (18)

Remarkably, the term $\log N$ is constant and does not alter the optimization procedure, but it is useful to indirectly reconnect our complexity term $C$ to the surprise about the prior appearing in the classical Bayesian approach. For this reason, throughout this paper we use the term modified message length instead of message length to acknowledge the existing gap.
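The identity in Eq. (18) is easy to verify numerically. A small numpy sketch (the test graph, a 4-node path, is an arbitrary choice of ours):

```python
import numpy as np

# Density matrix of a small arbitrary graph (a 4-node path), with tau = 1.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)
w = np.exp(-lam)
rho = (U * (w / w.sum())) @ U.T

n = len(A)
# Von Neumann entropy S = -Tr(rho log rho); rho here is positive definite.
log_rho = (U * np.log(w / w.sum())) @ U.T
S = -np.trace(rho @ log_rho)

# D_KL(rho || I/N), with log(I/N) = log(1/N) * I.
d_kl = np.trace(rho @ (log_rho - np.log(1.0 / n) * np.eye(n)))

assert np.isclose(d_kl, np.log(n) - S)  # Eq. (18): D_KL = log N - S = C
```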

.10 Rescaling τ𝜏\tauitalic_τ

.10.1 Complete hypergraph

In some extremely regular structures, such as complete hypergraphs or some simplicial-complex lattices, the Laplacians at all orders are proportional. For example, for complete hypergraphs,

\mathbf{L}^{(d)} = \frac{K^{(d)}}{N-1}\,\mathbf{L}^{(1)},   (19)

and consequently

\mathbf{L}^{[d]} = \frac{d}{N-1}\,\mathbf{L}^{(1)} = d\,\mathbf{L}^{[1]}.   (20)

Another direct but useful consequence is the relationship between the multiorder Laplacians at any two orders:

\mathbf{L}^{[d]} = \frac{d}{d'}\,\mathbf{L}^{[d']}.   (21)

Since, by the definition in Eq. (3), the density matrix $\bm{\rho}_{\tau}^{[d]}$ depends only on the product $\tau\mathbf{L}^{[d]}$, we can write

\bm{\rho}^{[d]}_{\tau} = \frac{e^{-\tau d\,\mathbf{L}^{[1]}}}{\mathrm{Tr}\left(e^{-\tau d\,\mathbf{L}^{[1]}}\right)} = \bm{\rho}^{[1]}_{\tilde{\tau}},   (22)

where $\tilde{\tau}=d\tau$, or equivalently, between two orders,

\bm{\rho}_{\tau}^{[d]} = \bm{\rho}_{\frac{d}{d'}\tau}^{[d']}.   (23)

Hence, instead of selecting a single diffusion time $\tau$ for all orders, we can select a different, more appropriate diffusion time at each order. Specifically, we can select a main $\tau$ and rescale it at each order so that all the density matrices coincide. One could choose the rescaling so that the density matrices equal any non-rescaled one, e.g. $\bm{\rho}_{\tau}^{[1]}$. However, to ensure that the information loss vanishes when all possible orders are considered, $D_{\rm KL}\left(\bm{\rho}_{\tau}^{[d_{\rm max}]}\,\middle|\,\bm{\rho}_{\tau}^{[d_{\rm max}]}\right)=0$, we need to set all of them equal to $\bm{\rho}_{\tau}^{[d_{\rm max}]}$. This is achieved by rescaling the diffusion time as

\tau'(d) = \frac{d_{\rm max}}{d}\,\tau,   (24)

so that

$$\bm{\rho}_{\tau'(d)}^{[d]} = \bm{\rho}_{\frac{d_{\rm max}}{d}\tau}^{[d]} = \bm{\rho}_{\tau}^{[d_{\rm max}]}. \tag{25}$$
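As a consistency check, Eq. (25) can be verified numerically. In the proportional case the multiorder Laplacian satisfies $\mathbf{L}^{[d]} = d\,\mathbf{L}^{[1]}$, so propagating $\mathbf{L}^{[d]}$ for the rescaled time $\tau'(d)$ gives the same propagator as the full model $\mathbf{L}^{[d_{\rm max}]}$ at time $\tau$. A minimal sketch (the toy pairwise Laplacian and parameter values are illustrative, not from the paper):

```python
import numpy as np

def expm_sym(M):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.exp(w)) @ V.T

# Toy order-1 (pairwise) Laplacian: a 3-node path graph
L1 = np.array([[ 1., -1.,  0.],
               [-1.,  2., -1.],
               [ 0., -1.,  1.]])

tau, d, d_max = 0.7, 2, 3

# Proportional case: L[d] = d * L[1] and L[d_max] = d_max * L[1]
rho_d = expm_sym(-(d_max / d) * tau * (d * L1))  # rescaled time tau'(d)
rho_max = expm_sym(-tau * (d_max * L1))          # full model at time tau

assert np.allclose(rho_d, rho_max)               # Eq. (25)
```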

10.2 General case of proportional Laplacians

In general, the Laplacian of order $d$ is proportional to that of order 1, $\mathbf{L}^{(d)} \propto \mathbf{L}^{(1)}$, when their respective adjacency matrices are proportional, that is,

$$\mathbf{A}^{(d)} = d\,B(d)\,\mathbf{A}^{(1)}, \tag{26}$$

where $B(d)$ is a coefficient that may depend on the order $d$. Indeed, by definition, the generalized degree and adjacency matrix are related by $K_i^{(d)} = \frac{1}{d}\sum_j A^{(d)}_{ij}$, and thus we also have

$$\bm{K}^{(d)} = B(d)\,\bm{K}^{(1)}, \tag{27}$$

ensuring that the Laplacian matrices are proportional, $\mathbf{L}^{(d)} = B(d)\,\mathbf{L}^{(1)}$.
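These relations can be checked on a small complete hypergraph (every pair is an edge, every triple a 2-hyperedge), using the convention that $A^{(d)}_{ij}$ counts the $d$-hyperedges containing both $i$ and $j$. The sketch below (illustrative, $N = 5$) recovers $B(2) = \langle K^{(2)}\rangle/\langle K^{(1)}\rangle = (N-2)/2$ and the adjacency proportionality of Eq. (26):

```python
import numpy as np
from itertools import combinations

N = 5

# Complete hypergraph: A^(1) from all pairs, A^(2) from all triples
A1 = np.zeros((N, N))
A2 = np.zeros((N, N))
for i, j in combinations(range(N), 2):
    A1[i, j] = A1[j, i] = 1
for tri in combinations(range(N), 3):
    for i, j in combinations(tri, 2):
        A2[i, j] += 1
        A2[j, i] += 1

# Generalized degrees K_i^{(d)} = (1/d) sum_j A^{(d)}_{ij}
K1 = A1.sum(axis=1)
K2 = A2.sum(axis=1) / 2

B2 = K2.mean() / K1.mean()
assert np.isclose(B2, (N - 2) / 2)   # complete-hypergraph value of B(2)
assert np.allclose(A2, 2 * B2 * A1)  # Eq. (26) with d = 2
```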

Equation (27) implies that

$$B(d) = \langle K^{(d)} \rangle / \langle K^{(1)} \rangle, \tag{28}$$

and hence the multiorder Laplacian up to order $d$ is given by

$$\mathbf{L}^{[d]} = \frac{d}{\langle K^{(1)} \rangle}\,\mathbf{L}^{(1)} = d\,\mathbf{L}^{[1]}, \tag{29}$$

which is consistent with the complete hypergraph case in Eq. (19), where $\langle K^{(1)} \rangle = N-1$. The rest of the derivation is thus the same as in the complete hypergraph case: rescaling $\tau$ per order as

$$\tau'(d) = \frac{d_{\rm max}}{d}\,\tau \tag{30}$$

ensures

$$\bm{\rho}_{\tau'(d)}^{[d]} = \bm{\rho}_{\frac{d_{\rm max}}{d}\tau}^{[d]} = \bm{\rho}_{\tau}^{[d_{\rm max}]}. \tag{31}$$

Note again that, in general, a hypergraph need not have hyperedges at every order below $D$, unless it is a simplicial complex. If there are no hyperedges at some orders $d$, Eq. (29) must be adjusted, as the factor $d$ comes from the number of orders present. The set of orders present is in general $\mathcal{D} = \{ i \leq d : \langle K^{(i)} \rangle > 0 \}$, so that Eq. (29) becomes $\mathbf{L}^{[d]} = |\mathcal{D}|\,\mathbf{L}^{[1]}$ and Eq. (30) becomes $\tau'(d) = \frac{d_{\rm max}}{|\mathcal{D}|}\,\tau$.
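A short sketch of this adjustment (the mean generalized degrees below are hypothetical, chosen so that order 2 is absent):

```python
# Hypothetical mean generalized degrees <K^(i)>: no hyperedges at order 2
mean_degrees = {1: 4.0, 2: 0.0, 3: 2.5}
d, d_max, tau = 3, 3, 0.5

# Orders actually present up to order d
orders_present = [i for i, k in mean_degrees.items() if i <= d and k > 0]
n_present = len(orders_present)

# Adjusted relations: L[d] = |D| L[1] and tau'(d) = (d_max / |D|) tau
tau_rescaled = d_max / n_present * tau

assert orders_present == [1, 3]
assert tau_rescaled == 0.75
```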

10.3 Higher-order lattices

For a triangular lattice in which every triangle is promoted to a 2-simplex, each node is part of six 1-simplices and six 2-simplices. Furthermore, each 1-simplex of the lattice is part of two different 2-simplices. This means that each pair of nodes shares two 2-simplices if they share one 1-simplex, and zero otherwise. Formally:

$$k^{(2)}_i = 6 = k^{(1)}_i \quad \mathrm{and} \quad A^{(2)}_{ij} = 2A^{(1)}_{ij}, \tag{32}$$

so that $B(2) = 1$.
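The degree and adjacency counts above can be verified on a small periodic patch. The sketch below builds a 3×3 triangular lattice on a torus (an illustrative construction, not from the paper), promotes every triangle to a 2-simplex, and checks Eq. (32):

```python
import numpy as np

Lsz = 3                                   # 3x3 periodic triangular lattice
N = Lsz * Lsz
idx = lambda x, y: (x % Lsz) * Lsz + (y % Lsz)

A1 = np.zeros((N, N))
A2 = np.zeros((N, N))
for x in range(Lsz):
    for y in range(Lsz):
        # six neighbors of each node in the triangular lattice
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]:
            A1[idx(x, y), idx(x + dx, y + dy)] = 1
        # the "up" and "down" triangles based at cell (x, y)
        for tri in ([idx(x, y), idx(x + 1, y), idx(x, y + 1)],
                    [idx(x, y), idx(x + 1, y), idx(x + 1, y - 1)]):
            for i in tri:
                for j in tri:
                    if i != j:
                        A2[i, j] += 1

K1 = A1.sum(axis=1)        # k_i^{(1)}
K2 = A2.sum(axis=1) / 2    # k_i^{(2)} = (1/2) sum_j A^{(2)}_{ij}

assert np.all(K1 == 6) and np.all(K2 == 6)  # six 1- and 2-simplices per node
assert np.allclose(A2, 2 * A1)              # each edge lies in two triangles
assert K2.mean() / K1.mean() == 1           # B(2) = 1
```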

11 Description of the datasets

We assigned a category to each of the 22 empirical datasets. All datasets are accessible via XGI [26] and stored at https://zenodo.org/communities/xgi/.

Each of the coauthorship datasets corresponds to papers published in a single year (1980, 1981, 1982, 1983). A node represents an author, and a hyperedge represents a publication marked with the “Geology” tag in the Microsoft Academic Graph [60].

In the contact datasets, a node represents a person and a hyperedge represents a group of people in close proximity at a given time. Most of the original datasets are from the SocioPatterns collaboration [61, 62].

The biological datasets include two constructed in [24] with data from the National Drug Code Directory (NDC). In ndc-classes, a node represents a label (a short text description of a drug's function) and a hyperedge represents the set of those labels applied to a given drug. In ndc-substances, a node represents a substance and a hyperedge is the set of substances in a given drug. The Drug Abuse Warning Network (DAWN) is a national health surveillance system that records drug use contributing to hospital emergency department visits throughout the United States. A node represents a drug, and a hyperedge is the set of drugs used by a given patient (as reported by the patient) in an emergency department visit. For a period of time, the recording system only logged the first 16 drugs reported by a patient, so the dataset only uses the first 16 drugs (at most).

The technological datasets include two email datasets and two tag datasets. In the email datasets, a node represents an email address and a hyperedge is the set of all recipient addresses included in an email, including the sender's. In the tag datasets, a node represents a tag, and a hyperedge is the set of tags associated with a question on a Stack Exchange forum (Mathematics Stack Exchange and Ask Ubuntu). The tag datasets were constructed in [24] with data from the Stack Exchange data dump.

The final category contains two datasets. In congress-bills, constructed in [24], a node represents a member of the US Congress and a hyperedge is the set of members co-sponsoring a bill between 1973 and 2016. In kaggle-whats-cooking [63], a node represents a food ingredient and a hyperedge is the set of ingredients used in a given recipe.

References

  • Boccaletti et al. [2006] S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and D.-U. Hwang, Complex networks: Structure and dynamics, Phys. Rep. 424, 175 (2006).
  • Barabási et al. [2011] A.-L. Barabási, N. Gulbahce, and J. Loscalzo, Network medicine: a network-based approach to human disease, Nat. Rev. Genet. 12, 56 (2011).
  • Bullmore and Sporns [2012] E. Bullmore and O. Sporns, The economy of brain network organization, Nat. Rev. Neurosci. 13, 336 (2012).
  • Pastor-Satorras et al. [2015] R. Pastor-Satorras, C. Castellano, P. Van Mieghem, and A. Vespignani, Epidemic processes in complex networks, Rev. Mod. Phys. 87, 925 (2015).
  • Cimini et al. [2019] G. Cimini, T. Squartini, F. Saracco, D. Garlaschelli, A. Gabrielli, and G. Caldarelli, The statistical physics of real-world networks, Nat. Rev. Phys. 1, 58 (2019).
  • De Domenico [2023] M. De Domenico, More is different in real-world multilayer networks, Nat. Phys. 19, 1247 (2023).
  • Lambiotte et al. [2019] R. Lambiotte, M. Rosvall, and I. Scholtes, From networks to optimal higher-order models of complex systems, Nat. Phys. 15, 313 (2019).
  • Battiston et al. [2020] F. Battiston, G. Cencetti, I. Iacopini, V. Latora, M. Lucas, A. Patania, J.-G. Young, and G. Petri, Networks beyond pairwise interactions: Structure and dynamics, Phys. Rep. 874, 1 (2020).
  • Battiston et al. [2021] F. Battiston, E. Amico, A. Barrat, G. Bianconi, G. Ferraz de Arruda, B. Franceschiello, I. Iacopini, S. Kéfi, V. Latora, Y. Moreno, et al., The physics of higher-order interactions in complex systems, Nat. Phys. 17, 1093 (2021).
  • Rosas et al. [2022] F. E. Rosas, P. A. Mediano, A. I. Luppi, T. F. Varley, J. T. Lizier, S. Stramaglia, H. J. Jensen, and D. Marinazzo, Disentangling high-order mechanisms and high-order behaviours in complex systems, Nat. Phys. 18, 476 (2022).
  • Petri et al. [2014] G. Petri, P. Expert, F. Turkheimer, R. Carhart-Harris, D. Nutt, P. J. Hellyer, and F. Vaccarino, Homological scaffolds of brain functional networks, J. R. Soc. Interface 11, 20140873 (2014).
  • Santoro et al. [2023] A. Santoro, F. Battiston, M. Lucas, G. Petri, and E. Amico, Higher-order connectomics of human brain function reveals local topological signatures of task decoding, individual identification, and behavior, bioRxiv, 2023.12.04.569913 (2023).
  • Levine et al. [2017] J. M. Levine, J. Bascompte, P. B. Adler, and S. Allesina, Beyond pairwise mechanisms of species coexistence in complex communities, Nature 546, 56 (2017).
  • Patania et al. [2017] A. Patania, G. Petri, and F. Vaccarino, The shape of collaborations, EPJ Data Science 6, 1 (2017).
  • Cencetti et al. [2021] G. Cencetti, F. Battiston, B. Lepri, and M. Karsai, Temporal properties of higher-order interactions in social networks, Scientific Reports 11, 1 (2021).
  • Iacopini et al. [2019] I. Iacopini, G. Petri, A. Barrat, and V. Latora, Simplicial models of social contagion, Nat. Commun. 10, 1 (2019).
  • Zhang et al. [2023] Y. Zhang, M. Lucas, and F. Battiston, Higher-order interactions shape collective dynamics differently in hypergraphs and simplicial complexes, Nat. Commun. 14, 1605 (2023).
  • Skardal and Arenas [2020] P. S. Skardal and A. Arenas, Higher order interactions in complex networks of phase oscillators promote abrupt synchronization switching, Commun. Phys. 3, 1 (2020).
  • Millán et al. [2020] A. P. Millán, J. J. Torres, and G. Bianconi, Explosive higher-order Kuramoto dynamics on simplicial complexes, Phys. Rev. Lett. 124, 218301 (2020).
  • Ferraz de Arruda et al. [2023] G. Ferraz de Arruda, G. Petri, P. M. Rodriguez, and Y. Moreno, Multistability, intermittency, and hybrid transitions in social contagion models on hypergraphs, Nat. Commun. 14, 1 (2023).
  • Alvarez-Rodriguez et al. [2021] U. Alvarez-Rodriguez, F. Battiston, G. F. de Arruda, Y. Moreno, M. Perc, and V. Latora, Evolutionary dynamics of higher-order interactions in social networks, Nat. Hum. Behav. 5, 586 (2021).
  • Contisciani et al. [2022] M. Contisciani, F. Battiston, and C. De Bacco, Inference of hyperedges and overlapping communities in hypergraphs, Nat. Commun. 13, 7229 (2022).
  • Lotito et al. [2022] Q. F. Lotito, F. Musciotto, A. Montresor, and F. Battiston, Higher-order motif analysis in hypergraphs, Commun. Phys. 5, 79 (2022).
  • Benson et al. [2018] A. R. Benson, R. Abebe, M. T. Schaub, A. Jadbabaie, and J. Kleinberg, Simplicial closure and higher-order link prediction, Proc. Natl. Acad. Sci. U.S.A. 115, E11221 (2018).
  • Chodrow [2020] P. S. Chodrow, Configuration models of random hypergraphs, J. Complex Netw. 8, cnaa018 (2020).
  • Landry et al. [2023] N. W. Landry, M. Lucas, I. Iacopini, G. Petri, A. Schwarze, A. Patania, and L. Torres, XGI: A Python package for higher-order interaction networks, J. Open Source Softw. 8, 5162 (2023).
  • Lotito et al. [2023] Q. F. Lotito, M. Contisciani, C. De Bacco, L. Di Gaetano, L. Gallo, A. Montresor, F. Musciotto, N. Ruggeri, and F. Battiston, Hypergraphx: a library for higher-order network analysis, J. Complex Netw. 11, cnad019 (2023).
  • Praggastis et al. [2023] B. Praggastis, S. Aksoy, D. Arendt, M. Bonicillo, C. Joslyn, E. Purvine, M. Shapiro, and J. Y. Yun, HyperNetX: A Python package for modeling complex network data as hypergraphs, arXiv:2310.11626 (2023).
  • Mucha et al. [2010] P. J. Mucha, T. Richardson, K. Macon, M. A. Porter, and J.-P. Onnela, Community structure in time-dependent, multiscale, and multiplex networks, Science 328, 876 (2010).
  • Holme and Saramäki [2012] P. Holme and J. Saramäki, Temporal networks, Phys. Rep. 519, 97 (2012).
  • Battiston et al. [2014] F. Battiston, V. Nicosia, and V. Latora, Structural measures for multiplex networks, Phys. Rev. E 89, 032804 (2014).
  • De Domenico et al. [2013] M. De Domenico, A. Solé-Ribalta, E. Cozzo, M. Kivelä, Y. Moreno, M. A. Porter, S. Gómez, and A. Arenas, Mathematical formulation of multilayer networks, Phys. Rev. X 3, 041022 (2013).
  • De Domenico et al. [2015a] M. De Domenico, V. Nicosia, A. Arenas, and V. Latora, Structural reducibility of multilayer networks, Nat. Commun. 6, 6864 (2015a).
  • Ghavasieh and De Domenico [2020] A. Ghavasieh and M. De Domenico, Enhancing transport properties in interconnected systems without altering their structure, Phys. Rev. Res. 2, 013155 (2020).
  • De Domenico and Biamonte [2016] M. De Domenico and J. Biamonte, Spectral entropies as information-theoretic tools for complex network comparison, Phys. Rev. X 6, 041062 (2016).
  • Rosvall and Bergstrom [2007] M. Rosvall and C. T. Bergstrom, An information-theoretic framework for resolving community structure in complex networks, Proc. Natl. Acad. Sci. U.S.A. 104, 7327 (2007).
  • De Domenico et al. [2015b] M. De Domenico, A. Lancichinetti, A. Arenas, and M. Rosvall, Identifying modular flows on multilayer networks reveals highly overlapping organization in interconnected systems, Phys. Rev. X 5, 011027 (2015b).
  • Santoro and Nicosia [2020] A. Santoro and V. Nicosia, Algorithmic complexity of multiplex networks, Phys. Rev. X 10, 021069 (2020).
  • Ghavasieh et al. [2020] A. Ghavasieh, C. Nicolini, and M. De Domenico, Statistical physics of complex information dynamics, Phys. Rev. E 102, 052304 (2020).
  • Lucas et al. [2020] M. Lucas, G. Cencetti, and F. Battiston, Multiorder Laplacian for synchronization in higher-order networks, Phys. Rev. Res. 2, 033410 (2020).
  • Wallace and Boulton [1968] C. S. Wallace and D. M. Boulton, An information measure for classification, The Computer Journal 11, 185 (1968).
  • Ghavasieh et al. [2021] A. Ghavasieh, S. Bontorin, O. Artime, N. Verstraete, and M. De Domenico, Multiscale statistical physics of the pan-viral interactome unravels the systemic nature of SARS-CoV-2 infections, Commun. Phys. 4, 1 (2021).
  • Nicolini et al. [2020] C. Nicolini, G. Forcellini, L. Minati, and A. Bifone, Scale-resolved analysis of brain functional connectivity networks with spectral entropy, NeuroImage 211, 116603 (2020).
  • Ghavasieh and De Domenico [2024] A. Ghavasieh and M. De Domenico, Diversity of information pathways drives sparsity in real-world networks, Nat. Phys. 20, 1 (2024).
  • Villegas et al. [2023] P. Villegas, T. Gili, G. Caldarelli, and A. Gabrielli, Laplacian renormalization group for heterogeneous networks, Nat. Phys. 19, 445 (2023).
  • Gambuzza et al. [2021] L. V. Gambuzza, F. Di Patti, L. Gallo, S. Lepri, M. Romance, R. Criado, M. Frasca, V. Latora, and S. Boccaletti, Stability of synchronization in simplicial complexes, Nat. Commun. 12, 1255 (2021).
  • Note [1] In general, a hypergraph does not need to have hyperedges at every order below $D$, unless it is a simplicial complex. If there is no hyperedge of order $d$, both the Laplacian and the average degree in Eq. (2) vanish and the result is undefined. In those cases, the sum thus needs to be taken over all orders below $D$ that exist: $\mathcal{D} = \{ d \leq D : \langle K^{(d)} \rangle > 0 \}$.
  • Neuhäuser et al. [2023] L. Neuhäuser, M. Scholkemper, F. Tudisco, and M. T. Schaub, Learning the effective order of a hypergraph dynamical system, arXiv:2306.01813 (2023).
  • Nurisso et al. [2024] M. Nurisso, M. Morandini, M. Lucas, F. Vaccarino, T. Gili, and G. Petri, Higher-order Laplacian renormalization, arXiv:2401.11298 (2024).
  • Bick et al. [2016] C. Bick, P. Ashwin, and A. Rodrigues, Chaos in generically coupled phase oscillator networks with nonpairwise interactions, Chaos 26, 094814 (2016).
  • Jiang et al. [2018] J. Jiang, Z.-G. Huang, T. P. Seager, W. Lin, C. Grebogi, A. Hastings, and Y.-C. Lai, Predicting tipping points in mutualistic networks through dimension reduction, Proc. Natl. Acad. Sci. U.S.A. 115, E639 (2018).
  • Laurence et al. [2019] E. Laurence, N. Doyon, L. J. Dubé, and P. Desrosiers, Spectral dimension reduction of complex dynamical networks, Phys. Rev. X 9, 011042 (2019).
  • Thibeault et al. [2024] V. Thibeault, A. Allard, and P. Desrosiers, The low-rank hypothesis of complex systems, Nat. Phys., 1 (2024).
  • Merchan and Nemenman [2016] L. Merchan and I. Nemenman, On the sufficiency of pairwise interactions in maximum entropy models of networks, J. Stat. Phys. 162, 1294 (2016).
  • Shimazaki et al. [2015] H. Shimazaki, K. Sadeghi, T. Ishikawa, Y. Ikegaya, and T. Toyoizumi, Simultaneous silence organizes structured higher-order interactions in neural populations, Sci. Rep. 5, 9821 (2015).
  • Schneidman et al. [2003] E. Schneidman, S. Still, M. J. Berry, and W. Bialek, Network information and connected correlations, Phys. Rev. Lett. 91, 238701 (2003).
  • Amari [2001] S.-I. Amari, Information geometry on hierarchy of probability distributions, IEEE Trans. Inf. Theory 47, 1701 (2001).
  • Robiglio et al. [2024] T. Robiglio, M. Neri, D. Coppes, C. Agostinelli, F. Battiston, M. Lucas, and G. Petri, Synergistic signatures of group mechanisms in higher-order systems, arXiv:2401.11588 (2024).
  • Ghavasieh and De Domenico [2022] A. Ghavasieh and M. De Domenico, Generalized network density matrices for analysis of multiscale functional diversity, arXiv:2210.16701; Phys. Rev. E 107, 044304 (2022).
  • Sinha et al. [2015] A. Sinha, Z. Shen, Y. Song, H. Ma, D. Eide, B.-J. P. Hsu, and K. Wang, An overview of Microsoft Academic Service (MAS) and applications, in Proceedings of the 24th International Conference on World Wide Web (ACM Press, 2015).
  • Barrat et al. [2013] A. Barrat, C. Cattuto, V. Colizza, F. Gesualdo, L. Isella, E. Pandolfi, J. F. Pinton, L. Ravà, C. Rizzo, M. Romano, et al., Empirical temporal networks of face-to-face human interactions, Eur. Phys. J. Spec. Top. 222, 1295 (2013).
  • soc [2008] SocioPatterns: a collection of contacts datasets (2008), accessed: 2023-08-19.
  • Kan [2015] W. Kan, What's cooking? (2015).
Acknowledgements: M.L. thanks Marco Nurisso for useful feedback on the manuscript. F.B. and L.G. acknowledge support from the Air Force Office of Scientific Research under award number FA8655-22-1-7025.

Data availability: All data is publicly available from XGI-data: https://zenodo.org/communities/xgi/about.

Code availability:Code for reproducing our results is available online from the repository https://github.com/maximelucas/hypergraph_reducibility. It uses the XGI package [26].

Author contributions:All authors designed the study. ML, LG, and AG performed the theoretical analysis. ML and LG performed the numerical simulations. All authors discussed the results. ML wrote the original draft and all authors reviewed and edited it. FB and MDD supervised the project.

Competing interests:The authors declare no competing interests.

Supplementary Material:
Functional reducibility of higher-order networks

Maxime Lucas, Luca Gallo, Arsham Ghavasieh, Federico Battiston, and Manlio De Domenico
