Abstract
In this talk, I will present the multivariate normal approximation for the centered subgraph counts in dense random graphs. The main motivation to investigate these statistics is the fact that they are key to understanding fluctuations of regular subgraph counts since they act as an orthogonal basis of a corresponding $L^2$ space. We also identify the resulting limiting Gaussian stochastic measures by means of the theory of generalised U-statistics and Gaussian Hilbert spaces, which we think is a suitable framework to describe and understand higher-order fluctuations in dense random graph models.
This is joint work with Adrian Roellin.
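As a toy illustration (not part of the talk's results): in $G(n,p)$ the simplest centered subgraph count, the centered triangle statistic $\sum_{i<j<k}(A_{ij}-p)(A_{jk}-p)(A_{ik}-p)$, has mean zero and variance exactly $\binom{n}{3}(p(1-p))^3$, since products over distinct triangles are uncorrelated. A minimal Python sketch checking this (parameters chosen arbitrarily):

```python
import itertools
import numpy as np

def centered_triangle_count(n, p, rng):
    """Centered triangle statistic: sum over i<j<k of the product of
    centered edge indicators (A_e - p)."""
    a = (rng.random((n, n)) < p).astype(float)
    a = np.triu(a, 1)
    a = a + a.T                      # symmetric Bernoulli(p) adjacency matrix
    b = a - p                        # centered edge indicators
    total = 0.0
    for i, j, k in itertools.combinations(range(n), 3):
        total += b[i, j] * b[j, k] * b[i, k]
    return total

rng = np.random.default_rng(0)
n, p, trials = 25, 0.5, 200
samples = np.array([centered_triangle_count(n, p, rng) for _ in range(trials)])
# Distinct triangles share at most one edge, and the unshared centered
# factors are independent with mean zero, so cross-covariances vanish:
var_theory = (n * (n - 1) * (n - 2) / 6) * (p * (1 - p)) ** 3
print(samples.mean(), samples.var(), var_theory)
```

The empirical mean is near zero and the empirical variance tracks the formula; a histogram of `samples` looks approximately Gaussian, as the multivariate normal approximation predicts.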
Abstract
We initiate a study of large deviations for block model random graphs in the dense regime. Following Chatterjee and Varadhan (2011), we establish an LDP for dense block models, viewed as random graphons. As an application of our result, we study upper tail large deviations for homomorphism densities of regular graphs. We identify the existence of a "symmetric" phase, where the graph, conditioned on the rare event, looks like a block model with the same block sizes as the generating graphon. In specific examples, we also identify the existence of a "symmetry breaking" regime, where the conditional structure is not a block model with compatible dimensions. This identifies a "reentrant phase transition" phenomenon for this problem, analogous to the one established for Erdős–Rényi random graphs (Chatterjee and Dey (2010), Chatterjee and Varadhan (2011)). Finally, extending the analysis of Lubetzky and Zhao (2015), we identify the precise boundary between the symmetry and symmetry breaking regimes for homomorphism densities of regular graphs and the operator norm on Erdős–Rényi bipartite graphs.
Joint work with Christian Borgs, Jennifer Chayes, Subhabrata Sen, and Samantha Petti.
Abstract
Ramsey's Theorem states that for every graph H, there is an integer R(H) such that every 2-edge-coloring of the complete graph on R(H) vertices contains a monochromatic copy of H. In this talk, we focus on a natural quantitative extension: how many monochromatic copies of H can we find in every 2-edge-coloring of K_N, and which graphs H are so-called common, i.e., such that the number of monochromatic copies of H is asymptotically minimized by a random 2-edge-coloring. A classical result of Goodman from 1959 states that the triangle is a common graph. On the other hand, Thomason proved in 1989 that no clique of order at least four is common, and the existence of a common graph with chromatic number larger than three was open until 2012, when Hatami, Hladky, Kral, Norin and Razborov proved that the 5-wheel is common. In this talk, we show that for every k>4 there exists a common graph with chromatic number k.
This is joint work with D. Kral and F. Wei.
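As a toy illustration of the benchmark in this abstract (not from the talk itself): under a uniformly random red/blue coloring, each triangle of K_n is monochromatic with probability 2·(1/2)³ = 1/4, and commonness of the triangle means no coloring does asymptotically better. A quick Monte Carlo sketch:

```python
import itertools
import numpy as np

def mono_triangle_fraction(n, rng):
    """Fraction of triangles of K_n that are monochromatic under a
    uniformly random red/blue edge coloring."""
    colour = rng.integers(0, 2, size=(n, n))   # only entries with i < j are used
    mono = total = 0
    for i, j, k in itertools.combinations(range(n), 3):
        total += 1
        if colour[i, j] == colour[j, k] == colour[i, k]:
            mono += 1
    return mono / total

rng = np.random.default_rng(1)
fracs = [mono_triangle_fraction(30, rng) for _ in range(40)]
print(np.mean(fracs))   # concentrates near 2 * (1/2)**3 = 1/4
```

Goodman's theorem says this 1/4 is asymptotically optimal: every 2-edge-coloring of K_N leaves at least (1/4 − o(1)) of the triangles monochromatic.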
Abstract
A combinatorial structure is said to be quasirandom if it resembles a random structure in a certain robust sense. The notion of quasirandom graphs, developed in the work of Rödl, Thomason, Chung, Graham and Wilson in the 1980s, is particularly robust as several different properties of truly random graphs, e.g., subgraph density, edge distribution and spectral properties, are satisfied by a large graph if and only if one of them is. We will discuss recent results on quasirandomness of various kinds of combinatorial structures (in particular, directed graphs, permutations and Latin squares) obtained using analytic tools provided by the theory of combinatorial limits.
Abstract
The goal of this talk is to describe probabilistic approaches to three major problems for dynamic networks, all of which are intricately connected to long range dependence in the evolution of such models:
1. Nonparametric change point detection: Consider models of growing networks which evolve via new vertices attaching to the pre-existing network according to one attachment function $f$ until the system grows to size $\tau(n) < n$, when new vertices switch their behavior to a different function $g$ until the system reaches size $n$. The goal is to estimate the change point given observations of the network over time, with no knowledge of the functions driving the dynamics. We will describe non-parametric estimators for such problems.
2. Detecting the initial seed which resulted in the current state of the network: Imagine observing a static time slice of the network after it has grown to some large size $n$ from an initial seed. Suppose all one gets to see is the current topology of the network (without any label or age information). Developing provably efficient algorithms for estimating the initial seed has inspired intense activity over the last few years in the probability community. We will describe recent developments in addressing such questions, including robustness results such as the fixation of so-called hub vertices as time evolves.
3. Co-evolving networks: models of networks where dynamics on the network (directed random walks towards the root) modifies the structure of the network, which then impacts the behavior of subsequent dynamics. We will describe non-trivial phase transitions of condensation around the root and their connection to quasi-stationary distributions of one-dimensional random walks.
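A toy simulation (not the estimators from the talk) of why the attachment function in item 1 leaves a detectable statistical signature: with proportional attachment, e.g. $f(d)=d$, the maximum degree grows polynomially in $n$, while with uniform attachment, $g(d)=1$, it grows only logarithmically. The functions and parameters below are illustrative choices:

```python
import numpy as np

def grow_tree(n, proportional, rng):
    """Grow a random tree on n vertices: each new vertex attaches to an
    existing vertex with probability proportional to degree (f(d)=d) or
    uniformly at random (g(d)=1)."""
    deg = np.zeros(n, dtype=float)
    deg[0] = deg[1] = 1.0                   # start from a single edge
    for v in range(2, n):
        w = deg[:v] if proportional else np.ones(v)
        target = rng.choice(v, p=w / w.sum())
        deg[target] += 1.0
        deg[v] = 1.0
    return deg

rng = np.random.default_rng(3)
n, trials = 2000, 5
max_pref = np.mean([grow_tree(n, True, rng).max() for _ in range(trials)])
max_unif = np.mean([grow_tree(n, False, rng).max() for _ in range(trials)])
print(max_pref, max_unif)   # roughly sqrt(n) versus roughly log(n)
```

Tracking statistics like this over the growth of the network is the kind of signal a nonparametric change point estimator can exploit when the dynamics switch from $f$ to $g$.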
Abstract
In many domains, we are interested in estimating the total causal treatment effect in the presence of network interference, where the outcome of one individual or unit is affected by the treatment assignment of those in its local network. Additional challenges arise when complex cluster randomized designs are not feasible to implement, or the network is unknown and costly to estimate. We propose a new measure of model complexity that characterizes the difficulty of estimating the total treatment effect under the standard A/B testing setup. We provide a class of unbiased estimators whose variance is optimal with respect to the population size and the treatment budget. Furthermore, we show that when the network is completely unknown, we can still estimate the total treatment effect under a richer yet simple staggered rollout experimental design. The proposed design principles, and related estimator, work with a broad class of outcome models. Our solution and statistical guarantees do not rely on restrictive network properties, allowing for highly connected networks that may not be easily clustered. This is joint work with Edoardo Airoldi, Christian Borgs, Jennifer Chayes, Mayleen Cortez, and Matthew Eichhorn.
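A toy sketch of the extrapolation idea behind staggered rollouts, under a hypothetical linear interference outcome model (this is an illustration under stated assumptions, not the paper's estimator or design): if each unit's outcome is linear in the treatments of its neighbors, then the population mean outcome under a Bernoulli(p) rollout is linear in p, so observing it at budgets 0 and p lets us extrapolate to p = 1 and recover the total treatment effect without knowing the network.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
# Hypothetical linear outcome model with interference on a random graph:
# y_i(z) = alpha_i + sum_j beta[i, j] * z_j, with beta supported on edges.
adj = (rng.random((n, n)) < 0.05).astype(float)
np.fill_diagonal(adj, 1.0)                       # own treatment also matters
beta = adj * rng.uniform(0.0, 1.0, size=(n, n))
alpha = rng.uniform(0.0, 2.0, size=n)
tte_true = beta.sum() / n                        # (1/n) sum_i (y_i(1) - y_i(0))

def estimate_tte(p, rng):
    """Two-stage rollout: observe mean outcomes at budgets 0 and p, then
    extrapolate the (linear-in-p) mean outcome to full treatment p = 1."""
    z = (rng.random(n) < p).astype(float)        # Bernoulli(p) rollout
    y0 = alpha                                   # outcomes with nobody treated
    y1 = alpha + beta @ z                        # outcomes at budget p
    return (y1.mean() - y0.mean()) / p

estimates = [estimate_tte(0.2, rng) for _ in range(500)]
print(np.mean(estimates), tte_true)              # unbiased under the linear model
```

The estimator never touches `adj` or `beta` directly, which is the point: under the model assumptions, the rollout design alone suffices even when the network is completely unknown.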
Abstract
We consider an inhomogeneous Erd\H{o}s-R\'enyi random graph $G_N$ with vertex set $[N] = \{1,\dots,N\}$ for which the pair of vertices $i,j \in [N]$, $i\neq j$, is connected by an edge with probability $r(\tfrac{i}{N},\tfrac{j}{N})$, independently of other pairs of vertices. Here, $r\colon\,[0,1]^2 \to (0,1)$ is a symmetric function that plays the role of a reference graphon.
(1) Let $\lambda_N$ be the maximal eigenvalue of the \emph{adjacency matrix} of $G_N$. It is known that $\lambda_N/N$ satisfies an LDP with rate $N$ as $N \to \infty$. The associated rate function $\psi_r$ is given by a variational formula that involves the rate function $I_r$ of a large deviation principle on graphon space. We analyse this variational formula in order to identify the basic properties of $\psi_r$, especially when the reference graphon $r$ is of rank 1.
(2) Let $\hat\lambda_N$ be the maximal eigenvalue of the \emph{Laplacian matrix} of $G_N$. We show that $\hat\lambda_N/N$ satisfies a downward LDP with rate $\binom{N}{2}$ and an upward LDP with rate $N$. The associated rate functions $\hat\psi_r$ and $\hat\psi^*_r$ are those of the downward LDP and upward LDP for the maximal degree in the graph. We identify the basic properties of both $\hat\psi_r$ and $\hat\psi^*_r$.
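A numerical sketch of the setting in (1), with a hypothetical rank-1 reference graphon $r(x,y) = f(x)f(y)$ (the choice of $f$ below is arbitrary): the top eigenvalue of the kernel operator is $\int_0^1 f(x)^2\,dx$, and $\lambda_N/N$ concentrates near it, with the LDP governing the rare deviations.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 600
f = lambda x: 0.3 + 0.4 * x                  # rank-1 graphon r(x, y) = f(x) f(y)
x = np.arange(1, N + 1) / N
r = np.outer(f(x), f(x))                     # connection probabilities r(i/N, j/N)
a = (rng.random((N, N)) < r).astype(float)
a = np.triu(a, 1)
a = a + a.T                                  # adjacency matrix of G_N
lam = np.linalg.eigvalsh(a)[-1] / N          # maximal eigenvalue, rescaled by N
# Top eigenvalue of the integral operator with kernel f(x) f(y) is int f^2:
lam_limit = 0.3 ** 2 + 0.3 * 0.4 + 0.4 ** 2 / 3
print(lam, lam_limit)
```

The typical value sits at the zero of the rate function $\psi_r$; the variational formula in the abstract quantifies the exponential cost, at speed $N$, of seeing $\lambda_N/N$ elsewhere.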
Abstract
The theory of graph limits is an important tool in understanding properties of large networks. We begin the talk with a survey of this theory, concentrating in particular on the sparse setting. We then investigate a power-law random graph model and cast it in the sparse graph limit theory framework. The distinctively different structures of the limit graph are explored in detail in the sub-critical and super-critical regimes. In the sub-critical regime, the graph is empty with high probability, and in the rare event that it is non-empty, it consists of a single edge. In contrast, in the super-critical regime, a non-trivial random graph exists in the limit, and it serves as a previously uncovered boundary case between different types of graph convergence.
Abstract
A random structure exhibits symmetry if its law remains invariant under a group of transformations. Exchangeability (of graphs, sequences, etc.) and stationarity are examples. Under suitable conditions, the transformation group can be used to define an estimator that averages over an instance of the structure, and such estimators turn out to satisfy a law of large numbers, a central limit theorem, and further convergence results. Loosely speaking: the large-sample theory of i.i.d. averages still holds if the i.i.d. assumption is substituted by a suitable symmetry assumption.
Joint work with Morgane Austern.
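A minimal illustrative example (hypothetical, not from the talk): an exchangeable but non-i.i.d. sequence $X_i = Z + \varepsilon_i$, with a shared latent $Z$ and i.i.d. noise. The empirical average still obeys a law of large numbers, but its limit is the random variable $Z$ rather than a deterministic constant, which is exactly the symmetry-based analogue of the i.i.d. theory.

```python
import numpy as np

rng = np.random.default_rng(6)
n, trials = 10_000, 50
means, errors = [], []
for _ in range(trials):
    z = rng.normal()                 # shared latent variable
    x = z + rng.normal(size=n)       # exchangeable (conditionally i.i.d.) sequence
    means.append(x.mean())
    errors.append(abs(x.mean() - z))
print(np.mean(errors), np.std(means))
```

Within each trial the average recovers $Z$ to within $O(n^{-1/2})$, while across trials the averages stay spread out with standard deviation close to that of $Z$: convergence holds, but to a symmetry-determined random limit.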
Abstract
We consider interacting particle systems on suitable convergent sequences of sparse (or heterogeneous) graphs and show that the limiting dynamics of the associated neighborhood empirical measure process
(the so-called hydrodynamic limit) can be autonomously characterized in terms of a non-Markovian process. We then describe Markovian approximations to the latter and provide examples where they are exact. This includes joint work with G. Cocomello and A. Ganguly.
Abstract
Dense graph limit theory is mainly concerned with law-of-large-numbers-type results. We propose a corresponding central limit theorem, or rather a fluctuation theory, based on Janson's theory of Gaussian Hilbert spaces and generalised U-statistics from the 1990s. Our approach provides rates and allows for proper statistical inference based on subgraph counts.
Abstract
In this talk we study the random cluster model on graphs of essentially large girth and on random regular graphs. We give an explicit formula for the limiting free entropy of the random cluster model. Our result extends the work of Dembo, Montanari, Sly and Sun for the Potts model, and we prove a conjecture of Helmuth, Jenssen and Perkins about the phase transition of the random cluster model. This is joint work with Ferenc Bencs and Márton Borbényi.