
Colouring graphs with forbidden bipartite subgraphs

Published online by Cambridge University Press:  08 June 2022

James Anderson
Affiliation:
School of Mathematics, Georgia Institute of Technology, Atlanta, GA, USA
Anton Bernshteyn*
Affiliation:
School of Mathematics, Georgia Institute of Technology, Atlanta, GA, USA
Abhishek Dhawan
Affiliation:
School of Mathematics, Georgia Institute of Technology, Atlanta, GA, USA
*Corresponding author. Email: bahtoh@gatech.edu

Abstract

A conjecture of Alon, Krivelevich and Sudakov states that, for any graph $F$ , there is a constant $c_F \gt 0$ such that if $G$ is an $F$ -free graph of maximum degree $\Delta$ , then $\chi\!(G) \leqslant c_F \Delta/ \log\!\Delta$ . Alon, Krivelevich and Sudakov verified this conjecture for a class of graphs $F$ that includes all bipartite graphs. Moreover, it follows from recent work by Davies, Kang, Pirot and Sereni that if $G$ is $K_{t,t}$ -free, then $\chi\!(G) \leqslant (t + o(1)) \Delta/ \log\!\Delta$ as $\Delta \to \infty$ . We improve this bound to $(1+o(1)) \Delta/\log\!\Delta$ , making the constant factor independent of $t$ . We further extend our result to the DP-colouring setting (also known as correspondence colouring), introduced by Dvořák and Postle.

Type
Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press

1. Introduction

All graphs in this paper are finite, undirected and simple. The starting point of our investigation is the following celebrated conjecture of Alon, Krivelevich and Sudakov:

Conjecture 1.1 (Alon–Krivelevich–Sudakov [Reference Alon, Krivelevich and Sudakov3, Conjecture 3.1]). For every graph $F$ , there is a constant $c_F \gt 0$ such that if $G$ is an $F$ -free graph of maximum degree $\Delta \geqslant 2$ , then $\chi\!(G) \leqslant c_F\Delta/\log\!\Delta$ .

Here we say that $G$ is $F$ -free if $G$ has no subgraph (not necessarily induced) isomorphic to $F$ . As long as $F$ contains a cycle, the bound in Conjecture 1.1 is best possible up to the value of $c_F$ , since there exist $\Delta$ -regular graphs $G$ of arbitrarily high girth with $\chi\!(G) \geqslant (1/2)\Delta/\log\!\Delta$ [Reference Bollobás7]. On the other hand, the best known general upper bound is $\chi\!(G) \leqslant c_F \Delta \log \log\!\Delta/\log\!\Delta$ due to Johansson [Reference Johansson16] (see also [Reference Molloy21]), which exceeds the conjectured value by a $\log \log\!\Delta$ factor.

Nevertheless, there are some graphs $F$ for which Conjecture 1.1 has been verified. Among the earliest results along these lines is the theorem of Kim [Reference Kim18] that if $G$ has girth at least $5$ (that is, $G$ is $\{K_3, C_4\}$ -free), then $\chi\!(G) \leqslant (1 + o(1))\Delta/\log\!\Delta$ . (Here and in what follows $o(1)$ indicates a function of $\Delta$ that approaches $0$ as $\Delta \to \infty$ .) Johansson [Reference Johansson15] proved Conjecture 1.1 for $F = K_3$ ; that is, Johansson showed that if $G$ is triangle-free, then $\chi\!(G) \leqslant c\Delta/\log\!\Delta$ for some constant $c \gt 0$ . Johansson’s proof gave the value $c = 9$ [Reference Molloy and Reed22, p. 125], which was later improved to $4 + o(1)$ by Pettie and Su [Reference Pettie and Su23] and, finally, to $1+o(1)$ by Molloy [Reference Molloy21], matching Kim’s bound for graphs of girth at least $5$ .

In the same paper where they stated Conjecture 1.1, Alon, Krivelevich and Sudakov verified it for the complete tripartite graph $F = K_{1,t,t}$ [Reference Alon, Krivelevich and Sudakov3, Corollary 2.4]. (Note that the case $t = 1$ yields Johansson’s theorem.) Their results give the bound $c_F = O(t)$ for such $F$ , which was recently improved to $t + o(1)$ by Davies, Kang, Pirot and Sereni [Reference Davies, Kang, Pirot and Sereni11, §5.6]. Numerous other results related to Conjecture 1.1 can be found in the same paper.

Here we are interested in the case when the forbidden graph $F$ is bipartite. It follows from the result of Davies, Kang, Pirot and Sereni mentioned above that if $F = K_{t,t}$ , then Conjecture 1.1 holds with $c_F = t + o(1)$ . Prior to this work, this was the best known bound for all $t \geqslant 3$ (the graph $F = K_{2,2}$ satisfies Conjecture 1.1 with $c_F = 1 + o(1)$ by [Reference Davies, Kang, Pirot and Sereni11, Theorem 4]). We improve this bound to $1 + o(1)$ for all $t$ (so only the lower order term actually depends on the graph $F$ ):

Theorem 1.2. For every bipartite graph $F$ and every $\varepsilon \gt 0$ , there is $\Delta _0 \in{\mathbb{N}}$ such that every $F$ -free graph $G$ of maximum degree $\Delta \geqslant \Delta _0$ satisfies $\chi\!(G) \leqslant (1+\varepsilon )\Delta/\log\!\Delta$ .

As witnessed by random $\Delta$ -regular graphs, the upper bound on $\chi\!(G)$ in Theorem 1.2 is asymptotically optimal up to a factor of $2$ [Reference Bollobás7]. Furthermore, this bound coincides with the so-called shattering threshold for colourings of random graphs of average degree $\Delta$ [Reference Zdeborová and Krząkała25, Reference Achlioptas and Coja-Oghlan1], as well as the density threshold for factor of i.i.d. independent sets in $\Delta$ -regular trees [Reference Rahman and Virág24], which suggests that reducing the number of colours further would be a challenging problem, even for graphs $G$ of large girth. Indeed, it is not even known if such graphs admit independent sets of size greater than $(1+o(1))|V(G)| \log\!\Delta/\Delta$ .

In view of the results in [Reference Alon, Krivelevich and Sudakov3] and [Reference Davies, Kang, Pirot and Sereni11], it is natural to ask if a version of Theorem 1.2 also holds for $F = K_{1,t,t}$ . We give the affirmative answer in the paper [Reference Anderson, Bernshteyn and Dhawan5], where we use some of the techniques developed here to prove that every $K_{1,t,t}$ -free graph $G$ satisfies $\chi\!(G) \leqslant (4+o(1))\Delta/\log\!\Delta$ . In other words, we eliminate the dependence on $t$ in the constant factor, although we are unable to reduce it all the way to $1 + o(1)$ .

Returning to the case of bipartite $F$ , we establish an extension of Theorem 1.2 in the context of DP-colouring (also known as correspondence colouring), which was introduced a few years ago by Dvořák and Postle [Reference Dvǒrák and Postle12]. DP-colouring is a generalisation of list colouring. Just as in ordinary list colouring, we assume that every vertex $u \in V(G)$ of a graph $G$ is given a list $L(u)$ of colours to choose from. In contrast to list colouring though, the identifications between the colours in the lists are allowed to vary from edge to edge. That is, each edge $uv \in E(G)$ is assigned a matching $M_{uv}$ (not necessarily perfect and possibly empty) from $L(u)$ to $L(v)$ . A proper DP-colouring then is a mapping $\varphi$ that assigns a colour $\varphi (u) \in L(u)$ to each vertex $u \in V(G)$ so that whenever $uv \in E(G)$ , we have $\varphi (u)\varphi (v) \notin M_{uv}$ . Note that list colouring is indeed a special case of DP-colouring which occurs when the colours “correspond to themselves,” i.e., for each $c \in L(u)$ and $c' \in L(v)$ , we have $cc' \in M_{uv}$ if and only if $c = c'$ .

Formally, we describe DP-colouring using an auxiliary graph $H$ called a DP-cover of $G$ . Here we treat the lists of colours assigned to distinct vertices as pairwise disjoint (this is a convenient assumption that does not restrict the generality of the model). The definition below is a modified version of the one given in [Reference Bernshteyn6]:

Definition 1.3. A DP-cover (or a correspondence cover ) of a graph $G$ is a pair $\mathcal{H} = (L, H)$ , where $H$ is a graph and $L\;\colon V(G) \to 2^{V(H)}$ is a function such that:

  • The set $\{L(v) \,:\, v \in V(G)\}$ forms a partition of $V(H)$ .

  • For each $v \in V(G)$ , $L(v)$ is an independent set in $H$ .

  • For $u$ , $v \in V(G)$ , the induced subgraph $H[L(u) \cup L(v)]$ is a matching; this matching is empty whenever $uv \notin E(G).$

We refer to the vertices of $H$ as colours . For $c \in V(H)$ , we let $L^{-1}\!(c)$ denote the underlying vertex of $c$ in $G$ , i.e., the unique vertex $v \in V(G)$ such that $c \in L(v)$ . If two colours $c$ , $c' \in V(H)$ are adjacent in $H$ , we say that they correspond to each other and write $c \sim c'$ . An $\mathcal{H}$ -colouring is a mapping $\varphi\;\colon V(G) \to V(H)$ such that $\varphi (u) \in L(u)$ for all $u \in V(G)$ . Similarly, a partial $\mathcal{H}$ -colouring is a partial map $\varphi \colon V(G) \dashrightarrow V(H)$ such that $\varphi (u) \in L(u)$ whenever $\varphi (u)$ is defined. A (partial) $\mathcal{H}$ -colouring $\varphi$ is proper if the image of $\varphi$ is an independent set in $H$ , i.e., if $\varphi (u) \not \sim \varphi (v)$ for all $u$ , $v \in V(G)$ such that $\varphi (u)$ and $\varphi (v)$ are both defined. A DP-cover $\mathcal{H}$ is $k$ -fold for some $k \in{\mathbb{N}}$ if $|L(u)| \geqslant k$ for all $u \in V(G)$ . The DP-chromatic number of $G$ , denoted by $\chi _{DP}(G)$ , is the smallest $k$ such that $G$ admits a proper $\mathcal{H}$ -colouring with respect to every $k$ -fold DP-cover $\mathcal{H}$ .
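To make Definition 1.3 concrete, here is a toy Python encoding (all identifiers are made up for illustration): $G$ is a single edge $uv$ , each list has two colours, and the matching between $L(u)$ and $L(v)$ is the "twisted" one, so this instance is not plain list colouring.

```python
# Toy DP-cover (Definition 1.3) for G consisting of the single edge uv.
# Colours are strings; the cover graph H is given by its edge set.
L = {"u": {"u0", "u1"}, "v": {"v0", "v1"}}
H_edges = {frozenset({"u0", "v1"}), frozenset({"u1", "v0"})}  # a matching

def is_cover(L, H_edges):
    """Check the three conditions of Definition 1.3 for this toy instance."""
    colours = set().union(*L.values())
    # the lists partition V(H), and each list is independent in H
    partition_ok = sum(len(l) for l in L.values()) == len(colours)
    independent_ok = all(not e <= l for e in H_edges for l in L.values())
    # between any two lists, H induces a matching: each colour has at most
    # one neighbour in any other list
    matching_ok = all(
        sum(1 for e in H_edges if c in e and e - {c} <= L[w]) <= 1
        for v in L for c in L[v] for w in L if w != v)
    return partition_ok and independent_ok and matching_ok

def is_proper(phi, H_edges):
    """phi maps vertices of G to colours; proper iff its image is H-independent."""
    chosen = list(phi.values())
    return all(frozenset({a, b}) not in H_edges
               for i, a in enumerate(chosen) for b in chosen[i + 1:])
```

Here `{"u": "u0", "v": "v0"}` is a proper $\mathcal{H}$ -colouring (since `u0` corresponds to `v1` , not `v0` ), whereas `{"u": "u0", "v": "v1"}` is not.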

An interesting feature of DP-colouring is that it allows one to put structural constraints not on the base graph, but on the cover graph instead. For instance, Cambie and Kang [Reference Cambie and Kang10] made the following conjecture:

Conjecture 1.4 (Cambie–Kang [Reference Cambie and Kang10, Conjecture 4]). For every $\varepsilon \gt 0$ , there is $d_0 \in{\mathbb{N}}$ such that the following holds. Let $G$ be a triangle-free graph and let $\mathcal{H} = (L,H)$ be a DP-cover of $G$ . If $H$ has maximum degree $d \geqslant d_0$ and $|L(u)| \geqslant (1+\varepsilon )d/\log d$ for all $u \in V(G)$ , then $G$ admits a proper $\mathcal{H}$ -colouring.

The conclusion of Conjecture 1.4 is known to hold if $d$ is taken to be the maximum degree of $G$ rather than of $H$ [Reference Bernshteyn6] (notice that $\Delta (G) \geqslant \Delta (H)$ , so a bound on $\Delta (G)$ is a stronger assumption than a bound on $\Delta (H)$ ). Cambie and Kang [Reference Cambie and Kang10, Corollary 3] verified Conjecture 1.4 when $G$ is not just triangle-free but bipartite. Amini and Reed [Reference Amini and Reed4] and, independently, Alon and Assadi [Reference Alon and Assadi2, Proposition 3.2] proved a version of Conjecture 1.4 for list colouring, but with $1 + o(1)$ replaced by a larger constant ( $8$ in [Reference Alon and Assadi2]). To the best of our knowledge, it is an open problem to reduce the constant factor to $1 + o(1)$ even in the list colouring framework.

Notice that in Cambie and Kang’s conjecture, the base graph $G$ is assumed to be triangle-free (which, of course, implies that $H$ is triangle-free as well). In principle, it is possible that $H$ is triangle-free while $G$ is not, and it seems that the conclusion of Conjecture 1.4 could hold even then. We suspect that Conjecture 1.1 should also hold in the following stronger form:

Conjecture 1.5. For every graph $F$ , there is a constant $c_F \gt 0$ such that the following holds. Let $G$ be a graph and let $\mathcal{H} = (L,H)$ be a DP-cover of $G$ . If $H$ is $F$ -free and has maximum degree $d \geqslant 2$ and if $|L(u)| \geqslant c_F d/\log d$ for all $u \in V(G)$ , then $G$ admits a proper $\mathcal{H}$ -colouring.

After this discussion, we are now ready to state our main result:

Theorem 1.6. There is a constant $\alpha \gt 0$ such that for every $\varepsilon \gt 0$ , there is $d_0 \in{\mathbb{N}}$ such that the following holds. Suppose that $d$ , $s$ , $t \in{\mathbb{N}}$ satisfy

\begin{equation*} d \geqslant d_0, \quad s \leqslant d^{\alpha \varepsilon }, \quad \text {and} \quad t \leqslant \frac {\alpha \varepsilon \log d}{\log \log d}. \end{equation*}

If $G$ is a graph and $\mathcal{H} = (L,H)$ is a DP-cover of $G$ such that:

  1. (i) $H$ is $K_{s,t}$ -free,

  2. (ii) $\Delta (H) \leqslant d$ , and

  3. (iii) $|L(u)| \geqslant (1+\varepsilon )d/\log d$ for all $u \in V(G)$ ,

then $G$ has a proper $\mathcal{H}$ -colouring.

If $F$ is an arbitrary bipartite graph with parts of size $s$ and $t$ , then an $F$ -free graph is also $K_{s,t}$ -free. Thus, Theorem 1.6 yields the following result for large enough $d_0$ as a function of $s$ , $t$ and $\varepsilon$ :

Corollary 1.7. For every bipartite graph $F$ and $\varepsilon \gt 0$ , there is $d_0 \in{\mathbb{N}}$ such that the following holds. Let $d \geqslant d_0$ . Suppose $\mathcal{H} = (L,H)$ is a DP-cover of $G$ such that:

  1. (i) $H$ is $F$ -free,

  2. (ii) $\Delta (H) \leqslant d$ , and

  3. (iii) $|L(u)| \geqslant (1+\varepsilon )d/\log d$ for all $u \in V(G)$ .

Then $G$ has a proper $\mathcal{H}$ -colouring.

Setting $d = \Delta (G)$ in Corollary 1.7 gives an extension of Theorem 1.2 to DP-colouring:

Corollary 1.8. For every bipartite graph $F$ and $\varepsilon \gt 0$ , there is $\Delta _0 \in{\mathbb{N}}$ such that every $F$ -free graph $G$ with maximum degree $\Delta \geqslant \Delta _0$ satisfies $\chi _{DP}(G) \leqslant (1 + \varepsilon )\Delta/\log\!\Delta$ .

We close this introduction with a few words about the proof of Theorem 1.6. To find a proper $\mathcal{H}$ -colouring of $G$ we employ a variant of the so-called “Rödl Nibble” method, in which we randomly colour a small portion of $V(G)$ and then iteratively repeat the same procedure with the vertices that remain uncoloured. (See [Reference Kang, Kelly, Kühn, Methuku and Osthus17] for a recent survey on this method.) Throughout the iterations, both the maximum degree of the cover graph and the minimum list size are decreasing, but we show that the former is decreasing at a faster rate than the latter. Thus, we eventually arrive at a situation where $\Delta (H) \ll |L(v)|$ for all $v \in V(G)$ , and then it is easy to complete the colouring. The specific procedure in our proof is essentially the same as the one used by Kim [Reference Kim18] (see also [Reference Molloy and Reed22, Chapter 12]) to bound the chromatic number of graphs of girth at least $5$ , suitably modified for the DP-colouring framework. We describe it in detail in §3. The main novelty in our analysis is in the proof of Lemma 4.5, which allows us to control the maximum degree of the cover graph after each iteration. This is the only part of the proof that relies on the assumption that $H$ is $K_{s,t}$ -free. The proof of Lemma 4.5 involves several technical ingredients, which we explain in §5. In §6, we put the iterative process together and verify that the colouring can be completed.

2. Preliminaries

In this section, we outline the main probabilistic tools that will be used in our arguments. We start with the symmetric version of the Lovász Local Lemma.

Theorem 2.1 (Lovász Local Lemma; [Reference Molloy and Reed22, §4]). Let $A_1$ , $A_2$ , …, $A_n$ be events in a probability space. Suppose there exists $p \in [0, 1)$ such that for all $1 \leqslant i \leqslant n$ we have $\mathbb{P}[A_i] \leqslant p$ . Further suppose that each $A_i$ is mutually independent from all but at most $d_{LLL}$ other events $A_j$ , $j\neq i$ for some $d_{LLL} \in{\mathbb{N}}$ . If $4pd_{LLL} \leqslant 1$ , then with positive probability none of the events $A_1$ , …, $A_n$ occur.

Aside from the Local Lemma, we will require several concentration of measure bounds. The first of these is the Chernoff Bound for binomial random variables. We state the two-tailed version below:

Theorem 2.2 (Chernoff; [Reference Molloy and Reed22, §5]). Let $X$ be a binomial random variable on $n$ trials with each trial having probability $p$ of success. Then for any $0 \leqslant \xi \leqslant \mathbb{E}[X]$ , we have

\begin{align*} \mathbb{P}\Big [\big |X - \mathbb{E}[X]\big | \geqslant \xi \Big ] \lt 2\exp{\!\left (-\frac{\xi ^2}{3\mathbb{E}[X]}\right )}. \end{align*}
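As a quick empirical illustration of Theorem 2.2 (with arbitrary parameters, not ones used later in the paper), the observed two-tailed frequency of a seeded binomial sample stays below the stated bound:

```python
import math
import random

# Empirical illustration of the Chernoff bound (Theorem 2.2) for
# X ~ Bin(n, p); all parameters below are arbitrary illustrative choices.
rng = random.Random(0)
n, p, xi, trials = 200, 0.5, 30, 500
mean = n * p                                  # E[X] = 100
bound = 2 * math.exp(-xi ** 2 / (3 * mean))   # two-tailed Chernoff bound

hits = sum(abs(sum(rng.random() < p for _ in range(n)) - mean) >= xi
           for _ in range(trials))
freq = hits / trials                          # observed tail frequency
```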

We will also take advantage of two versions of Talagrand’s inequality. The first version is the standard one:

Theorem 2.3 (Talagrand’s Inequality; [Reference Molloy and Reed22, §10.1]). Let $X$ be a non-negative random variable, not identically 0, which is a function of $n$ independent trials $T_1$ , …, $T_n$ . Suppose that $X$ satisfies the following for some $\gamma$ , $r \gt 0$ :

  1. (T1) Changing the outcome of any one trial $T_i$ can change $X$ by at most $\gamma$ .

  2. (T2) For any $s\gt 0$ , if $X \geqslant s$ then there is a set of at most $rs$ trials that certify $X$ is at least $s$ .

Then for any $0 \leqslant \xi \leqslant \mathbb{E}[X]$ , we have

\begin{align*} \mathbb{P}\Big [\big |X-\mathbb{E}[X]\big | \geqslant \xi + 60\gamma \sqrt{r\mathbb{E}[X]}\Big ]\leqslant 4\exp{\!\left (-\frac{\xi ^2}{8\gamma ^2r\mathbb{E}[X]}\right )}. \end{align*}

The second version of Talagrand’s inequality we will use was developed by Bruhn and Joos [Reference Bruhn and Joos9]. We refer to it as Exceptional Talagrand’s Inequality. In this version, we are allowed to discard a small “exceptional” set of outcomes before constructing certificates.

Theorem 2.4 (Exceptional Talagrand’s Inequality [Reference Bruhn and Joos9, Theorem 12]). Let $X$ be a non-negative random variable, not identically 0, which is a function of $n$ independent trials $T_1$ , …, $T_n$ , and let $\Omega$ be the set of outcomes for these trials. Let $\Omega ^* \subseteq \Omega$ be a measurable subset, which we shall refer to as the exceptional set . Suppose that $X$ satisfies the following for some $\gamma \gt 1$ , $s\gt 0$ :

  1. (ET1) For all $q\gt 0$ and every outcome $\omega \notin \Omega ^*$ , there is a set $I$ of at most $s$ trials such that $X(\omega ') \gt X(\omega ) - q$ whenever $\omega ' \not \in \Omega ^*$ differs from $\omega$ on fewer than $q/\gamma$ of the trials in $I$ .

  2. (ET2) $\mathbb{P}[\Omega ^*] \leqslant M^{-2}$ , where $M = \max \{\sup\!X, 1\}$ .

Then for every $\xi \gt 50\gamma \sqrt{s}$ , we have:

\begin{align*} \mathbb{P}\Big [\big |X - \mathbb{E}[X]\big | \geqslant \xi \Big ] \leqslant 4\exp{\!\left (-\frac{\xi ^2}{16\gamma ^2s}\right )} + 4\mathbb{P}(\Omega ^*). \end{align*}

Finally, we shall use the Kővári–Sós–Turán theorem for $K_{s,t}$ -free graphs:

Theorem 2.5 (Kővári–Sós–Turán [Reference Kővàri, Sós and Turán19]; see also [Reference Hyltén-Cavallius13]). Let $G$ be a bipartite graph with a bipartition $V(G) = X \sqcup Y$ , where $|X| = m$ , $|Y| = n$ , and $m \geqslant n$ . Suppose that $G$ does not contain a complete bipartite subgraph with $s$ vertices in $X$ and $t$ vertices in $Y$ . Then $|E(G)| \leqslant s^{1/t} m^{1-1/t} n + tm$ .
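As a sanity check of Theorem 2.5, the following brute-force sketch verifies the bound on a small $K_{2,2}$ -free example, the point–line incidence graph of the Fano plane (the choice of example is ours, purely for illustration):

```python
from itertools import combinations

def has_kst(nbr, X, Y, s, t):
    """Brute force: does some s-subset of X have >= t common neighbours in Y?"""
    return any(len(set(Y).intersection(*(nbr[x] for x in S))) >= t
               for S in combinations(X, s))

def kst_bound(m, n, s, t):
    """Edge bound from Theorem 2.5 (with |X| = m >= n = |Y|)."""
    return s ** (1 / t) * m ** (1 - 1 / t) * n + t * m

# Incidence graph of the Fano plane: 7 lines (X), 7 points (Y), 21 edges;
# two lines share exactly one point, so the graph is K_{2,2}-free.
lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
         {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
nbr = {i: pts for i, pts in enumerate(lines)}
X, Y = list(nbr), list(range(1, 8))
edges = sum(len(pts) for pts in lines)  # 21 incidences
```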

3. The wasteful colouring procedure

To prove Theorem 1.6, we will start by showing we can produce a partial $\mathcal{H}$ -colouring of our graph with desirable properties. Before we do so, we introduce some notation used in the next lemma. When $\varphi$ is a partial $\mathcal{H}$ -colouring of $G$ , we define $L_\varphi (v) \;:\!=\; \{c \in L(v) \,:\, N_H(c) \cap \mathrm{im}(\varphi ) = \emptyset \}$ . Given parameters $d$ , $\ell$ , $\eta$ , $\beta \gt 0$ , we define the following functions:

\begin{align*} \textsf{keep}(d, \ell, \eta ) &\;:\!=\; \left (1 - \frac{\eta }{\ell }\right )^{d}, \\ \textsf{uncolor}(d, \ell, \eta ) &\;:\!=\; 1 - \eta \, \textsf{keep}(d, \ell, \eta ),\\ \ell '(d, \ell, \eta, \beta ) &\;:\!=\; \textsf{keep}(d, \ell, \eta )\, \ell - \ell ^{1 - \beta }, \\ d'(d, \ell, \eta, \beta ) &\;:\!=\; \textsf{keep}(d, \ell, \eta )\, \textsf{uncolor}(d, \ell, \eta )\, d + d^{1 - \beta }. \end{align*}

The meaning of this notation will become clear when we describe the randomised colouring procedure we use to prove the following lemma.

Lemma 3.1. There are $\tilde{d} \in{\mathbb{N}}$ , $\tilde{\alpha } \gt 0$ with the following property. Let $\eta \gt 0$ , $d$ , $\ell$ , $s$ , $t \in{\mathbb{N}}$ satisfy:

  1. (1) $d \geqslant \tilde{d}$ ,

  2. (2) $\eta d \lt \ell \lt 8d$ ,

  3. (3) $s \leqslant d^{1/4}$ ,

  4. (4) $t \leqslant \dfrac{\tilde{\alpha }\log d}{\log \log d}$ ,

  5. (5) $\dfrac{1}{\log ^5d} \lt \eta \lt \dfrac{1}{\log d}.$

Then whenever $G$ is a graph and $\mathcal{H} = (L, H)$ is a DP-cover of $G$ such that

  1. (6) $H$ is $K_{s,t}$ -free,

  2. (7) $\Delta (H) \leqslant d$ ,

  3. (8) $|L(v)| \geqslant \ell$ for all $v \in V(G)$ ,

there exist a partial proper $\mathcal{H}$ -colouring $\varphi$ and an assignment of subsets $L'(v) \subseteq L_\varphi (v)$ to each $v \in V(G) \setminus \mathrm{dom}(\varphi )$ such that, setting

\begin{equation*} G' \;:\!=\; G\!\left [V(G)\setminus \mathrm {dom}(\varphi )\right ] \quad \text {and} \quad H' \;:\!=\; H\left [\bigcup _{v \in V(G')} L'(v)\right ], \end{equation*}

we get that for all $v \in V(G')$ , $c \in L'(v)$ and $\beta = 1/(25t)$ :

\begin{align*} |L'(v)| \,\geqslant \, \ell '(d, \ell, \eta, \beta ) \quad \text{and} \quad \deg _{H'}\!(c) \,\leqslant \, d'\kern-.5pt(d, \ell, \eta, \beta ). \end{align*}

To prove Lemma 3.1, we will carry out a variant of the “Wasteful Coloring Procedure,” as described in [Reference Molloy and Reed22, Chapter 12]. As mentioned in the introduction, essentially the same procedure was used by Kim [Reference Kim18] to bound the chromatic number of graphs of girth at least $5$ . We describe this procedure in terms of DP-colouring below:
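In outline, as the procedure can be reconstructed from the calculations in §4: every vertex is activated independently with probability $\eta$ ; each activated vertex tentatively picks a colour from its list uniformly at random; every colour with an $H$ -neighbour among the picked colours is removed from its list; an activated vertex retains its tentative colour only if that colour survived the removal, and is uncoloured ( $\textsf{blank}$ ) otherwise. A minimal Python sketch of one round on a toy list-colouring instance (the instance and parameters are illustrative, not the paper's formal statement):

```python
import random

def wasteful_round(G_nbr, L, H_nbr, eta, rng):
    """One round of the (reconstructed) Wasteful Coloring Procedure.
    G_nbr: adjacency of G; L: colour lists; H_nbr: adjacency of the cover
    graph H on the colours; eta: activation probability."""
    A = {v for v in G_nbr if rng.random() < eta}          # activation
    col = {v: rng.choice(sorted(L[v])) for v in A}        # tentative colours
    picked = set(col.values())
    # K(v): colours of L(v) with no H-neighbour among the picked colours
    K = {v: {c for c in L[v] if not (H_nbr[c] & picked)} for v in G_nbr}
    phi = {v: col[v] for v in A if col[v] in K[v]}        # others stay blank
    return phi, K

# Toy instance: G is a 4-cycle, lists of size 3, identity matchings
# (ordinary list colouring viewed as DP-colouring).
G_nbr = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
L = {v: {(v, i) for i in range(3)} for v in G_nbr}
H_nbr = {(v, i): {(u, i) for u in G_nbr[v]} for v in G_nbr for i in range(3)}

phi, K = wasteful_round(G_nbr, L, H_nbr, eta=0.5, rng=random.Random(1))
```

Note that the partial colouring produced is automatically proper: if two adjacent tentative colours are picked, each sees the other among the picked colours, so both vertices are uncoloured.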

In §§4 and 5, we will show that, with positive probability, the output of the Wasteful Coloring Procedure satisfies the conclusion of Lemma 3.1. With this procedure in mind, we can now provide an intuitive understanding of the functions defined at the beginning of this section. Suppose $G$ , $\mathcal{H} = (L,H)$ satisfy $|L(v)| = \ell$ and $\Delta (H) = d$ . If we run the Wasteful Coloring Procedure with these $G$ and $\mathcal{H}$ , then $\textsf{keep}(d,\ell, \eta )$ is the probability that a colour $c \in L(v)$ is kept by $v$ (i.e., $c \in K(v)$ ), while $\textsf{uncolor}(d, \ell, \eta )$ is approximately the probability that a vertex $v \in V(G)$ is uncoloured (i.e., $\varphi (v) = \textsf{blank}$ ). The details of these calculations are given in §4. Note that, assuming the terms $\ell ^{1-\beta }$ and $d^{1-\beta }$ in the definitions of $\ell '(d, \ell, \eta, \beta )$ and $d'(d, \ell, \eta, \beta )$ are small, we can write

\begin{equation*} \frac {d'(d, \ell, \eta, \beta )}{\ell '(d, \ell, \eta, \beta )} \,\approx \, \textsf {uncolor}(d, \ell, \eta ) \frac {d}{\ell }. \end{equation*}

In other words, an application of Lemma 3.1 reduces the ratio $d/\ell$ roughly by a factor of $\textsf{uncolor}(d, \ell, \eta )$ . In §6 we will show that Lemma 3.1 can be applied iteratively to eventually make the ratio $d/\ell$ less than, say, $1/8$ , after which the colouring can be completed using the following proposition:

Proposition 3.2. Let $G$ be a graph with a $DP$ -cover $\mathcal{H} = (L,H)$ such that $|L(v)| \geqslant 8d$ for every $v \in V(G)$ , where $d$ is the maximum degree of $H$ . Then, there exists a proper $\mathcal{H}$ -colouring of $G$ .

This proposition is standard and proved using the Lovász Local Lemma. Its proof in the DP-colouring framework can be found, e.g., in [Reference Bernshteyn6, Appendix].
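The iterative reduction of the ratio $d/\ell$ described above can be illustrated numerically. The sketch below iterates the idealised ratio update suggested by the approximation $d'/\ell ' \approx \textsf{uncolor}\, \cdot\, d/\ell$ , dropping the lower-order terms $\ell ^{1-\beta }$ , $d^{1-\beta }$ and freezing $\eta$ (both simplifications; the actual quantitative bookkeeping is carried out in §6):

```python
import math

def next_ratio(r, eta):
    """Idealised one-round update of r = d/ell: keep ≈ exp(-eta * r), and
    the ratio contracts by the factor uncolor = 1 - eta * keep
    (lower-order error terms dropped)."""
    keep = math.exp(-eta * r)
    uncolor = 1 - eta * keep
    return uncolor * r

# Illustrative parameters (not those of the proof): eta fixed, initial
# ratio comparable to log(d) / (1 + eps).
eta, r, rounds = 0.05, 12.0, 0
while r >= 1 / 8 and rounds < 10_000:
    r = next_ratio(r, eta)
    rounds += 1
```

Once the ratio falls below $1/8$ , Proposition 3.2 finishes the colouring.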

For the reader already familiar with some of the applications of the “Rödl Nibble” method to graph colouring problems, let us make a comment about one technical feature of our Wasteful Coloring Procedure. For the analysis of constructions of this sort, it is often beneficial to assume that every colour has the same probability of being kept. It is clear, however, that in our procedure the probability that a colour $c \in V(H)$ is kept depends on the degree of $c$ in $H$ : the larger the degree, the higher the chance that $c$ gets removed. The usual way of addressing this issue is by introducing additional randomness in the form of “equalising coin flips” that artificially increase the probability of removing the colours of low degree. (See, for example, the procedure in [Reference Molloy and Reed22, Chapter 12].) However, it turns out that we can avoid the added technicality of dealing with equalising coin flips by leveraging the generality of the DP-colouring framework. Namely, by replacing $H$ with a supergraph, we may always arrange $H$ to be $d$ -regular (see Proposition 4.1). This allows us to assume that every colour has the same probability of being kept, even without extra coin flips. This way of simplifying the analysis of probabilistic colouring constructions was introduced by Bonamy, Perrett and Postle in [Reference Bonamy, Perrett and Postle8] and nicely exemplifies the benefits of working with DP-colourings compared to the classical list-colouring setting.

4. Proof of Lemma 3.1

In this section, we present the proof of Lemma 3.1, apart from one technical lemma that will be established in §5. We start with the following proposition which allows us to assume that the given DP-cover of $G$ is $d$ -regular.

Proposition 4.1. Let $G$ be a graph and $(L,H)$ be a DP-cover of $G$ such that $\Delta (H) \leqslant d$ and $H$ is $K_{s,t}$ -free for some $d$ , $s$ , $t \in{\mathbb{N}}$ . Then there exist a graph $G^*$ and a DP-cover $(L^*, H^*)$ of $G^\ast$ such that the following statements hold:

  • $G$ is a subgraph of $G^*$ ,

  • $H$ is a subgraph of $H^*$ ,

  • for all $v \in V(G)$ , $L^\ast (v) = L(v)$ ,

  • $H^*$ is $K_{s,t}$ -free,

  • $H^*$ is $d$ -regular.

Proof. Set $N = \sum _{c \in V(H)}\!(d-\deg _H\!(c))$ and let $\Gamma$ be an $N$ -regular graph with girth at least $5$ . (Such $\Gamma$ exists by [Reference Imrich14, Reference Margulis20].) Without loss of generality, we may assume that $V(\Gamma ) = \{1,\ldots,k\}$ , where $k \;:\!=\; |V(\Gamma )|$ . Take $k$ vertex-disjoint copies of $G$ , say $G_1$ , …, $G_k$ , and let $(L_i, H_i)$ be a DP-cover of $G_i$ isomorphic to $(L,H)$ . Define $X_i \;:\!=\; \{c\in V(H_i)\,:\, \deg _{H_i}\!(c) \lt d\}$ for every $1\leqslant i \leqslant k$ . The graphs $G^\ast$ and $H^\ast$ are obtained from the disjoint unions of $G_1$ , …, $G_k$ and $H_1$ , …, $H_k$ respectively by performing the following sequence of operations once for each edge $ij \in E(\Gamma )$ , one edge at a time:

  1. (1) Pick arbitrary vertices $c \in X_i$ and $c' \in X_j$ .

  2. (2) Add the edge $cc'$ to $E(H^*)$ and the edge $L_i^{-1}\!(c)L_j^{-1}\!(c')$ to $E(G^*)$ .

  3. (3) If $\deg _{H^*}\!(c) = d$ , remove $c$ from $X_i$ .

  4. (4) If $\deg _{H^*}\!(c') = d$ , remove $c'$ from $X_j$ .

Since $\Gamma$ is $N$ -regular, throughout this process the sum $\sum _{c \in V(H_i)}\! (d - \deg _{H^\ast }\!(c))$ decreases exactly $N$ times, which implies that the resulting graph $H^\ast$ is $d$ -regular. Furthermore, since $\Gamma$ has girth at least $5$ and $H$ is $K_{s,t}$ -free, $H^*$ is also $K_{s,t}$ -free. Hence, if we define $L^*\colon V(G^*) \to 2^{V(H^*)}$ so that $L^*(v) = L_i(v)$ for all $v \in V(G_i)$ , then $(L^*,H^*)$ is a DP-cover of $G^*$ satisfying all the requirements.

Suppose $d$ , $\ell$ , $s$ , $t$ , $\eta$ and a graph $G$ with a DP-cover $\mathcal{H} = (L, H)$ satisfy the conditions of Lemma 3.1. By removing some vertices from $H$ if necessary, we may assume that $|L(v)| = \ell$ for all $v \in V(G)$ . Furthermore, by Proposition 4.1, we may assume that $H$ is $d$ -regular. Since we may delete all the edges of $G$ whose corresponding matchings in $H$ are empty, we may also assume that $\Delta (G) \leqslant \ell d$ . Suppose we have carried out the Wasteful Coloring Procedure with these $G$ and $\mathcal{H}$ . As in the statement of Lemma 3.1, we let

\begin{equation*} G' \;:\!=\; G[V(G)\setminus \mathrm {dom}(\varphi )] \quad \text {and} \quad H' \;:\!=\; H\left [\bigcup _{v \in V(G')} L'(v)\right ]. \end{equation*}

For each $v \in V(G)$ and $c \in V(H)$ , we define the random variables

\begin{equation*}\ell '(v) \,\;:\!=\;\, |K(v)| \quad \text {and} \quad d'(c) \,\;:\!=\;\, |N_H(c)\cap V(H')|.\end{equation*}

Note that if $v \in V(G')$ , then $\ell '(v) = |L'(v)|$ ; similarly, if $c \in V(H')$ , then $d'(c)= \deg _{H'}\!(c)$ . As in Lemma 3.1, we let $\beta \;:\!=\; 1/(25t)$ . For the ease of notation, we will write $\textsf{keep}$ to mean $\textsf{keep}(d,\ell,\eta )$ , $\textsf{uncolor}$ to mean $\textsf{uncolor}(d, \ell, \eta )$ , etc. We will show, for $d$ large enough, that:

Lemma 4.2. For all $v \in V(G)$ , $\mathbb{E}[\ell '(v)] = {\textsf{keep}}\, \ell$ .

Lemma 4.3. For all $v \in V(G)$ , $\mathbb{P}\Big [\big |\ell '(v) - \mathbb{E}[\ell '(v)]\big | \gt \ell ^{1-\beta }\Big ] \leqslant d^{-100}$ .

Lemma 4.4. For all $c \in V(H)$ , $\mathbb{E}[d'(c)] \leqslant {\textsf{keep}}\, {\textsf{uncolor}}\, d + d/\ell$ .

Lemma 4.5. For all $c \in V(H)$ , $\mathbb{P}\big [d'(c) \gt \mathbb{E}[d'(c)] - d/\ell + d^{1-\beta }\big ] \leqslant d^{-100}.$

Together, these lemmas will allow us to complete the proof of Lemma 3.1, as follows.

Proof of Lemma 3.1. Take $\tilde d$ so large that Lemmas 4.2–4.5 hold. Define the following random events for every vertex $v \in V(G)$ and every colour $c\in V(H)$ :

\begin{equation*}A_v \;:\!=\; \left [\ell '(v) \leqslant \ell '\right ] \quad \text {and} \quad B_{c} \;:\!=\; \left [d'(c) \geqslant d'\right ].\end{equation*}

We will use the Lovász Local Lemma, Theorem 2.1. By Lemma 4.2 and Lemma 4.3, we have:

\begin{align*} \mathbb{P}[A_v] &= \mathbb{P}\big [\ell '(v) \leqslant \textsf{keep} \, \ell - \ell ^{1-\beta }\big ]\\ &= \mathbb{P}\big [\ell '(v) \leqslant \mathbb{E}[\ell '(v)] - \ell ^{1-\beta }\big ] \\ & \leqslant d^{-100}. \end{align*}

By Lemma 4.4 and Lemma 4.5, we have:

\begin{align*} \mathbb{P}[B_{c}] &= \mathbb{P}\big [d'(c) \geqslant \textsf{keep} \, \textsf{uncolor} \, d + d^{1-\beta }\big ]\\ & \leqslant \mathbb{P}\big [d'(c) \geqslant \mathbb{E}[d'(c)] - (d/\ell )+ d^{1-\beta }\big ] \\ & \leqslant d^{-100}. \end{align*}

Let $p \;:\!=\; d^{-100}$ . Note that events $A_v$ and $B_{c}$ are mutually independent from events of the form $A_u$ and $B_{c'}$ where $c' \in L(u)$ and $u \in V(G)$ is at distance at least $5$ from $v$ . Since we are assuming that $\Delta (G) \leqslant \ell d$ , there are at most $1 + (\ell d)^4$ vertices in $G$ at distance at most $4$ from $v$ . For each such vertex $u$ , there are $\ell + 1$ events corresponding to $u$ and the colours in $L(u)$ , so we can let $d_{LLL} \;:\!=\; (\ell +1)(1 + (\ell d)^4) = O(d^9)$ . Assuming $d$ is large enough, we have

\begin{equation*} 4pd_{LLL} \leqslant 1, \end{equation*}

so, by Theorem 2.1, with positive probability none of the events $A_v$ , $B_{c}$ occur, as desired.
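The final inequality holds with an enormous margin; a quick numerical sketch, with illustrative values of $d$ and $\ell \lt 8d$ chosen by us:

```python
# Sanity check of 4 * p * d_LLL <= 1 with p = d**(-100) and
# d_LLL = (ell + 1) * (1 + (ell * d)**4); the values of d are illustrative.
checks = []
for d in (50, 100, 1000):
    ell = 8 * d - 1                      # largest list size allowed here
    p = float(d) ** -100
    d_lll = (ell + 1) * (1 + (float(ell) * d) ** 4)
    checks.append(4 * p * d_lll <= 1)
```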

The proofs of Lemmas 4.2–4.4 are fairly straightforward and similar to the corresponding parts of the argument in the girth-$5$ case (see [Reference Molloy and Reed22, Chapter 12]). We present them here.

Proof of Lemma 4.2. Consider any $c \in L(v)$ . We have $c \in K(v)$ exactly when $N_H(c) \cap \textsf{col}(A) = \emptyset$ , i.e., when no neighbour of $c$ is assigned to its underlying vertex. The probability of this event is $\left (1 -\eta/\ell \right )^{d} = \textsf{keep}$ . By the linearity of expectation, it follows that $\mathbb{E}[\ell '(v)] = \textsf{keep}\, \ell$ .
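The computation above can be checked by direct simulation: each of the $d$ $H$ -neighbours of $c$ is assigned to its underlying vertex independently with probability $\eta/\ell$ , and $c$ is kept exactly when none of them is. A Monte Carlo sketch (the parameter values are ours, purely illustrative):

```python
import random

# Monte Carlo check that P[c in K(v)] = (1 - eta/ell)^d: each of the d
# H-neighbours of c is hit (assigned to its underlying vertex)
# independently with probability eta/ell.
rng = random.Random(0)
d, ell, eta, trials = 50, 10, 0.5, 20_000
p_hit = eta / ell

kept = sum(all(rng.random() >= p_hit for _ in range(d)) for _ in range(trials))
empirical = kept / trials
exact = (1 - p_hit) ** d      # keep(d, ell, eta)
```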

Proof of Lemma 4.3. It is easier to consider the random variable $r(v) \;:\!=\; \ell - \ell '(v)$ , the number of colours removed from $L(v)$ . We will use Theorem 2.3, Talagrand’s Inequality. Order the colours in $L(u)$ for each $u \in N_G(v)$ arbitrarily. Let $T_u$ be the random variable that is equal to $0$ if $u \not \in A$ and $i$ if $u \in A$ and $\textsf{col}(u)$ is the $i$ -th colour in $L(u)$ . Then $T_u$ , $u \in N_G(v)$ is a list of independent trials whose outcomes determine $r(v)$ . Changing the outcome of any one of these trials can affect $r(v)$ at most by $1$ . Furthermore, if $r(v) \geqslant s$ for some $s$ , then this fact can be certified by the outcomes of $s$ of these trials. Namely, for each removed colour $c \in L(v) \setminus K(v)$ , we take the trial $T_u$ corresponding to any vertex $u \in N_G(v)$ such that $u \in A$ and $\textsf{col}(u)$ is adjacent to $c$ in $H$ . Thus, we can now apply Theorem 2.3 with $\gamma =1$ , $r=1$ to get:

\begin{align*} \mathbb{P}\Big [\big |\ell '(v) - \mathbb{E}[\ell '(v)]\big | \gt \ell ^{1-\beta }\Big ] &=\mathbb{P}\Big [\big |r(v) - \mathbb{E}[r(v)]\big | \gt \ell ^{1-\beta }\Big ] \\ &= \mathbb{P}\Big [\big |r(v) - \mathbb{E}[r(v)]\big | \gt \frac{\ell ^{1-\beta }}{2} + \frac{\ell ^{1-\beta }}{2}\Big ] \\ &\leqslant \mathbb{P}\Big [\big |r(v) - \mathbb{E}[r(v)]\big | \gt \frac{\ell ^{1-\beta }}{2} + 60\sqrt{\mathbb{E}[r(v)]}\Big ] \\ &\leqslant 4\exp \left (-\frac{\ell ^{2(1-\beta )}}{32\, (1-\textsf{keep})\, \ell }\right ) \\ &\leqslant 4\exp{\left (-\frac{\ell ^{1-2\beta }}{32}\right )} \\ &\leqslant 4\exp{\left (-\frac{(d/\log ^5d)^{1-2\beta }}{32}\right )} \\ &\leqslant d^{-100}, \end{align*}

where the first and last inequalities hold for $d$ large enough.
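The last two inequalities are genuinely asymptotic, and the threshold on $d$ is large. Reading $\log$ as the natural logarithm and taking $t = 2$ (so $\beta = 1/50$), a log-space computation locates the transition; this is an illustration under our own parameter choices, not part of the proof:

```python
import math

# Check 4*exp(-(d/log(d)**5)**(1 - 2*beta)/32) <= d**(-100), working in log
# space to avoid underflow.  (Illustrative; log = natural logarithm, beta = 1/50.)
def final_bound_holds(d, beta=1/50):
    lhs_log = math.log(4) - (d / math.log(d) ** 5) ** (1 - 2 * beta) / 32
    return lhs_log <= -100 * math.log(d)

assert not final_bound_holds(10.0**6)   # fails for moderate d ...
assert final_bound_holds(10.0**15)      # ... but holds once d is large
```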

Proof of Lemma 4.4. Let $u \in N_G(v)$ and $c'\in L(u) \cap N_H(c)$ . We need to bound the probability that $\varphi (u) = \textsf{blank}$ and $c' \in K(u)$ . We split into the following cases.

Case 1: $u \notin A$ and $c' \in K(u)$ . This occurs with probability $(1-\eta ) \textsf{keep}$ .

Case 2: $u \in A$ , $\textsf{col}(u)=c'' \neq c'$ , $\varphi (u) = \textsf{blank}$ and $c' \in K(u)$ . In this case, there must be some $w \in N_G(u)$ such that $\textsf{col}(w) \sim c''$ . Since $c' \in K(u)$ , we must have $\textsf{col}(w) \not \sim c'$ . For each $w \in N_G(u)$ ,

\begin{align*} \mathbb{P}\big [\textsf{col}(w) \sim c'' \,|\, \textsf{col}(w) \not \sim c'\big ] \,=\, \left (\frac{\eta }{\ell }\right )/\left (1 - \frac{\eta }{\ell }\right ) \,=\, \frac{\eta }{\ell - \eta }. \end{align*}

Therefore, we can write

\begin{align*} &\mathbb{P}\big [\varphi (u) = \textsf{blank}\, |\, \textsf{col}(u) = c'',\, c' \in K(u)\big ]\\ &=\, 1 - \left (1 - \frac{\eta }{\ell - \eta }\right )^d\\ &=\, 1 - \textsf{keep} \, \left (1 - \frac{\eta ^2}{(\ell - \eta )^2}\right )^d\\ &\leqslant \, 1 - \textsf{keep} + \textsf{keep} \frac{d\eta ^2}{(\ell -\eta )^2}\\ &\leqslant \, 1 - \textsf{keep} + \frac{1}{\ell }, \end{align*}

where the last inequality follows since $\textsf{keep} \leqslant 1$ , $\eta d \lt \ell$ , $\eta \lt 1/\log d$ , and $d$ is large enough.
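The second equality in the display above is an exact algebraic identity, which follows from $(1-\eta/\ell)\big(1-\eta^2/(\ell-\eta)^2\big) = 1-\eta/(\ell-\eta)$. A numerical spot-check (with arbitrary illustrative values) confirms it:

```python
# Spot-check: (1 - eta/(l - eta))**d == keep * (1 - eta**2/(l - eta)**2)**d,
# where keep = (1 - eta/l)**d.  This holds because
# (1 - eta/l) * (1 - eta**2/(l - eta)**2) = 1 - eta/(l - eta).
for d, l, eta in [(100, 300.0, 0.2), (500, 1200.0, 0.05), (50, 80.0, 0.9)]:
    keep = (1 - eta / l) ** d
    lhs = (1 - eta / (l - eta)) ** d
    rhs = keep * (1 - eta ** 2 / (l - eta) ** 2) ** d
    assert abs(lhs - rhs) < 1e-9
```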

Putting the two cases together, we have:

\begin{align*} &\mathbb{P}\big [\varphi (u) = \textsf{blank}, \, c' \in K(u)\big ] \,\\&\leqslant \, (1-\eta )\, \textsf{keep} + \eta \, \left (1 - \frac{1}{\ell }\right )\, \textsf{keep} \, \left (1 - \textsf{keep} + \frac{1}{\ell }\right ) \\ &\leqslant \, \textsf{keep}\, \textsf{uncolor} + \frac{1}{\ell }. \end{align*}

Finally, by linearity of expectation, we conclude that

\begin{align*} \mathbb{E}[d'(c)] \,\leqslant \, d\left (\textsf{keep} \, \textsf{uncolor} + \frac{1}{\ell }\right ) \,=\, \textsf{keep}\, \textsf{uncolor}\, d + \frac{d}{\ell }, \end{align*}

proving the lemma.
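Unwinding the final estimate, with $\textsf{uncolor} = 1 - \eta\,\textsf{keep}$ (as in the recursion of §6, where $\eta = \kappa/\log d$), the case analysis reduces to the elementary inequality $(1-\eta)\,\textsf{keep} + \eta(1-1/\ell)\,\textsf{keep}\,(1-\textsf{keep}+1/\ell) \leqslant \textsf{keep}\,\textsf{uncolor} + 1/\ell$. The brute-force grid check below (an illustration, not a proof) confirms it:

```python
# Grid check of the combined-cases inequality.  The difference between the two
# sides works out to 1/l - eta*keep*(keep/l - 1/l**2) >= 0 whenever
# 0 <= eta, keep <= 1, so the assertion should hold across the whole grid.
for eta in (0.01, 0.1, 0.5, 0.9):
    for l in (10.0, 100.0, 10000.0):
        for keep in (0.1, 0.5, 0.9, 0.99):
            uncolor = 1 - eta * keep
            case1 = (1 - eta) * keep
            case2 = eta * (1 - 1 / l) * keep * (1 - keep + 1 / l)
            assert case1 + case2 <= keep * uncolor + 1 / l + 1e-12
```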

The proof of Lemma 4.5 is quite technical and will be given in §5. It is the only part of our argument that relies on the fact that $H$ is $K_{s,t}$ -free. To explain why proving Lemma 4.5 is difficult, consider an arbitrary colour $c \in V(H)$ . The value $d'(c)$ depends on which of the neighbours of $c$ in $H$ are kept. This, in turn, is determined by what happens to the neighbours of the neighbours of $c$ . Since we are only assuming that $H$ is $K_{s,t}$ -free, the neighbourhoods of the neighbours of $c$ can overlap with each other. Roughly speaking, we will need to carefully analyse the structure of these overlaps to make sure that Talagrand’s inequality can be applied.

5. Proof of Lemma 4.5

Throughout this section, we shall use the following parameters, where $t$ is given in the statement of Lemma 3.1:

\begin{equation*} \beta \;:\!=\; \frac {1}{25t}, \quad \beta _1 \;:\!=\; \frac {1}{20t}, \quad \beta _2 \;:\!=\; \frac {1}{15t}, \quad \delta \;:\!=\; \frac {1}{3t}, \quad \delta _2 \;:\!=\; \frac {1}{10t}, \quad \tau \;:\!=\; \frac {4}{9t}. \end{equation*}
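The estimates below use only elementary comparisons between these exponents. The following exact-arithmetic check (ours, for illustration) confirms the orderings that are invoked later, for every $t$ up to $1000$:

```python
from fractions import Fraction as F

# Exact rational check of the exponent orderings used throughout Section 5.
for t in range(1, 1001):
    beta, beta1, beta2 = F(1, 25 * t), F(1, 20 * t), F(1, 15 * t)
    delta, delta2, tau = F(1, 3 * t), F(1, 10 * t), F(4, 9 * t)
    assert beta < beta1 < beta2 < delta2 < delta < tau
    # Used in Subclaim 5.4.b: 2*delta - 2*delta_2 - tau > 0 (it equals 1/(45t)).
    assert 2 * delta - 2 * delta2 - tau == F(1, 45 * t) > 0
```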

Fix a vertex $v \in V(G)$ and a colour $c \in L(v)$. We need to show that, with high probability, the random variable $d'(c)$ does not significantly exceed its expectation. To this end, we make the following definitions:

\begin{align*} \mathcal{K} & \;:\!=\; \{c' \in N_H(c)\; :\; N_H(c')\cap \textsf{col}(A) = \emptyset \}, \\ \mathcal{U} & \;:\!=\; \{c' \in N_H(c)\; :\; \varphi (L^{-1}\!(c')) = \textsf{blank}\}. \end{align*}

Then $d'(c) = |\mathcal{U} \cap \mathcal{K}|$ . We will show that $|\mathcal{U}|$ is highly concentrated and prove that, with high probability, $|\mathcal{U} \setminus \mathcal{K}|$ is not much lower than its expected value. Using the identity $|\mathcal{U} \cap \mathcal{K}| = |\mathcal{U}| - |\mathcal{U}\setminus \mathcal{K}|$ will then give us the desired upper bound on $d'(c)$ .

Lemma 5.1. $\mathbb{P}\bigg [\Big ||\mathcal{U}|- \mathbb{E}\big [|\mathcal{U}|\big ]\Big | \geqslant d^{1-\beta _1}\bigg ] \leqslant d^{-110}$ .

Proof. We use Theorem 2.4, Exceptional Talagrand’s Inequality. Let $V_c \;:\!=\; L^{-1}(N_H(c))$ . In other words, $V_c$ is the set of neighbours of $v$ whose lists include a colour corresponding to $c$ . Then the set $\mathcal{U}$ is determined by the colouring outcomes of the vertices in $S \;:\!=\; V_c\cup N_G(V_c)$ . More precisely, as in the proof of Lemma 4.3, we arbitrarily order the colours in $L(u)$ for each $u \in S$ and let $T_u$ be the random variable that is equal to $0$ if $u \not \in A$ and $i$ if $u \in A$ and $\textsf{col}(u)$ is the $i$ -th colour in $L(u)$ . Then $T_u$ , $u \in S$ is a list of independent trials whose outcomes determine $|\mathcal{U}|$ . Let $\Omega$ be the set of outcomes of these trials. Let $C \;:\!=\; 25$ and define $\Omega ^* \subseteq \Omega$ to be the set of all outcomes in which there is a colour $c' \in L(S)$ such that $|N_H(c') \cap L(V_c) \cap \textsf{col}(A) | \geqslant C{\log d}$ . We claim that $|\mathcal{U}|$ satisfies conditions (ET1) and (ET2) of Theorem 2.4 with $s = 2d$ and $\gamma = 1 + C \log d$ .

To verify (ET1), take $q \gt 0$ and an outcome $\omega \notin \Omega ^*$. Each vertex $u \in L^{-1}(\mathcal{U})$ satisfies $u \notin A$ or there exists $w\in N_G(u)$ such that $u$, $w \in A$ and $\textsf{col}(w) \sim \textsf{col}(u)$. We call such $w$ a conflicting neighbour of $u$. Form a subset $I$ of trials by including, for each $u \in L^{-1}(\mathcal{U})$, the trial $T_u$ itself and, if applicable, the trial $T_w$ corresponding to any one conflicting neighbour $w$ of $u$. Since $|\mathcal{U}| \leqslant d$, we have $|I| \leqslant 2d = s$. Now suppose that $\omega ' \not \in \Omega ^\ast$ satisfies $|\mathcal{U}(\omega ')| \leqslant |\mathcal{U}(\omega )| - q$. For each colour $c' \in \mathcal{U}(\omega )\setminus \mathcal{U}(\omega ')$ with underlying vertex $u \;:\!=\; L^{-1}\!(c')$, either the trial $T_u$ or the trial $T_w \in I$ corresponding to a conflicting neighbour $w$ of $u$ must have different outcomes in $\omega$ and in $\omega '$. Since $\omega \notin \Omega ^\ast$, every $w \in S$ can be a conflicting neighbour of at most $C{\log d}$ vertices $u$. Therefore, $\omega '$ and $\omega$ must differ on at least $q/(1 + C{\log d})$ trials, as desired.

It remains to show $\mathbb{P}\left [\Omega ^*\right ] \leqslant M^{-2}$, where $M = \max \{\sup |\mathcal{U}|, 1\}$. For any $c' \in L(S)$, the number of colours in $N_H(c') \cap L(V_c) \cap \textsf{col}(A)$ is a binomial random variable with at most $d$ trials, each having probability $\eta/\ell$. Let $X_{c'}$ denote this random variable. Note that $\mathbb{E}[X_{c'}] \leqslant \eta d/\ell \lt 1$. By the union bound, we have

\begin{align*} \mathbb{P}\!\left [X_{c'} \geqslant C{\log d}\right ] &\leqslant \binom{d}{\lceil C \log d \rceil } \left (\frac{\eta }{\ell }\right )^{\lceil C \log d \rceil } \leqslant \left (\frac{ed}{\lceil C \log d \rceil }\right )^{\lceil C \log d \rceil } \left (\frac{\eta }{\ell }\right )^{\lceil C \log d \rceil } \\ &\leqslant \left (\frac{e}{\lceil C \log d \rceil }\right )^{\lceil C \log d \rceil } \leqslant d^{-150}, \end{align*}

where the last inequality holds for $d$ large enough. By the union bound and the fact that $\ell \leqslant 8d$ (by the assumptions of Lemma 3.1), we get

\begin{align*} \mathbb{P}\!\left [\exists\; c' \in L(S) \text{ such that } X_{c'} \geqslant C \log d\right ] \leqslant \ell |S| d^{-150} \leqslant d^{-125}, \end{align*}

where we use that $|S| \leqslant d + \ell d^2$ and $d$ is large enough. Thus $\mathbb{P}\!\left [\Omega ^*\right ] \leqslant d^{-125}$. Since $M = \max \{\sup\!|\mathcal{U}|, 1 \} \leqslant d$, it follows that $\mathbb{P}\!\left [\Omega ^*\right ] \leqslant 1/M^2$ for $d$ large enough.

We can now use Exceptional Talagrand’s Inequality. Let $\xi \;:\!=\; d^{1-\beta _1}$ . Note that $\xi \gt 50\gamma \sqrt{s}$ for $d$ large enough. We can therefore write

\begin{align*} \mathbb{P}\bigg [\Big ||\mathcal{U}|- \mathbb{E}\big [|\mathcal{U}|\big ]\Big | \geqslant d^{1-\beta _1}\bigg ] &\leqslant 4\exp{\!\left (-\frac{d^{2-2\beta _1}}{32(1+C\log d)^2d}\right )} + 4\mathbb{P}[\Omega ^*]\\ &\leqslant 4\exp \!\left ( -O\!\left (\frac{d^{1 - 2\beta _1}}{\log ^2 d}\right )\right ) + 4d^{-125} \\ &\leqslant d^{-110}, \end{align*}

for $d$ large enough.

Lemma 5.2. $\mathbb{P}\Big [|\mathcal{U}\setminus \mathcal{K}| \geqslant \mathbb{E}\big [|\mathcal{U}\setminus \mathcal{K}|\big ] - d^{1-\beta _1}\Big ] \geqslant 1-d^{-110}.$

It is here that we take advantage of the fact that $H$ is $K_{s,t}$ -free. Before we proceed, we make a few definitions. Recall that $\delta = 1/(3t)$ , where $t$ is given in the statement of Lemma 3.1. Let $N^2_H(c)$ denote the set of all colours $c'' \in V(H)$ that are joined to $c$ by at least one path of length exactly $2$ (there may also be other paths joining $c$ and $c''$ ). We say:

\begin{align*} \text{$c'' \in N_H^2(c)$ is}\, & \boldsymbol{{bad}} \text{ if $c''$ has at least $d^{1-\delta }$ neighbours in $N_H(c)$},\\ & \boldsymbol{{good}} \text{ otherwise};\\ \text{$c' \in N_H(c)$ is } & \boldsymbol{{sad}} \text{ if $c'$ has at least $d^{1-\delta }$ bad neighbours},\\ & \boldsymbol{{happy}} \text{ otherwise.}\end{align*}

Let $\textsf{Bad}$ , $\textsf{Good}$ , $\textsf{Sad}$ and $\textsf{Happy}$ be the sets of bad, good, sad, and happy colours respectively. Note that, as we are not assuming that $H$ is triangle-free, it is possible that $N_H(c) \cap N_H^2(c) \neq \emptyset$ ; in particular, a colour can be both bad and sad. Bad colours are problematic from the point of view of Talagrand’s inequality, as each of them can be responsible for the removal of a large number of colours from $\mathcal{K}$ . Thankfully, we can use the Kővári–Sós–Turán theorem to argue that there are few sad colours, i.e., most colours in $N_H(c)$ have only a few bad neighbours.

Claim 5.3. The number of sad colours is at most $d^{1-\beta _2}$ , where $\beta _2 = 1/(15t)$ .

Proof. Since every bad colour has at least $d^{1-\delta }$ neighbours in $N_H(c)$ , we have

\begin{align*} |\textsf{Bad}| \leqslant \frac{2|E\big (N_H(c),N_H^2(c)\big )|}{d^{1-\delta }} \leqslant \frac{2d^2}{d^{1-\delta }} = 2d^{1+\delta }. \end{align*}

Let $B$ be the bipartite graph with parts $X$ and $Y$ , where $X = \textsf{Bad}$ and $Y$ is a copy of $N_H(c)$ disjoint from $X$ , with the edges in $B$ corresponding to those in $H$ . Note that if a colour $c' \in N_H(c) \cap N_H^2(c)$ is bad, then $B$ contains two copies of $c'$ , one in $X$ and the other in $Y$ . However, these two copies cannot be adjacent to each other, and hence every subgraph of $B$ isomorphic to $K_{s,t}$ must use only one copy of each colour. Since $H$ is $K_{s,t}$ -free, we conclude that $B$ is $K_{s,t}$ -free as well. If we set $\hat{\varepsilon } = 1/t$ , $m = |\textsf{Bad}|$ , $n = d$ , then, by the Kővári–Sós–Turán theorem,

\begin{align*} |E(B)| \leqslant s^{\hat{\varepsilon }}(2d^{1+\delta })^{1-\hat{\varepsilon }}\, d + 2\,t\,d^{1+\delta } \leqslant 4d^{2 -3\hat{\varepsilon }/4 + \delta - \delta \hat{\varepsilon }}. \end{align*}

On the other hand, since every sad colour has at least $d^{1-\delta }$ bad neighbours, we see that

\begin{equation*}|\textsf {Sad}|\, d^{1-\delta } \leqslant |E(B)| \leqslant 4 d^{2 - 3\hat {\varepsilon }/4 + \delta - \delta \hat {\varepsilon }}.\end{equation*}

This implies that for $d$ large enough,

\begin{align*} |\textsf{Sad}| & \leqslant 4d^{1 - 3\hat{\varepsilon }/4 + 2\delta - \delta \hat{\varepsilon }} \leqslant d^{1-\beta _2}, \end{align*}

as $3\hat \varepsilon/4 - 2\delta + \delta \hat \varepsilon \gt 1/(12t) \gt \beta _2 \gt 0$ .
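The exponent comparison at the end of this proof is exact rational arithmetic: with $\hat\varepsilon = 1/t$ and $\delta = 1/(3t)$, the margin equals $1/(12t) + 1/(3t^2)$. A quick check (illustrative) over a range of $t$:

```python
from fractions import Fraction

# Exact check of the final exponent comparison in Claim 5.3:
#   3*eps_hat/4 - 2*delta + delta*eps_hat = 1/(12t) + 1/(3t^2) > 1/(12t) > beta_2.
for t in range(1, 1001):
    eps_hat = Fraction(1, t)
    delta = Fraction(1, 3 * t)
    beta2 = Fraction(1, 15 * t)
    margin = 3 * eps_hat / 4 - 2 * delta + delta * eps_hat
    assert margin == Fraction(1, 12 * t) + Fraction(1, 3 * t * t)
    assert margin > Fraction(1, 12 * t) > beta2
```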

Instead of proving the desired one-sided concentration inequality for $|\mathcal{U}\setminus \mathcal{K}|$ directly, we will focus on a slightly different parameter. Let

\begin{align*} \widetilde{\mathcal{K}} \;:\!=\; \big \{c'\in \textsf{Happy} \;:\; N_H(c') \cap \textsf{Good} \cap \textsf{col}(A) = \emptyset \big \}. \end{align*}

In other words, $\widetilde{\mathcal{K}}$ is the set of all happy colours that do not have a good neighbour in $\textsf{col}(A)$ . Then

\begin{align*} \mathcal{U} \setminus \widetilde{\mathcal{K}} =\{c' \in N_H(c) :\; & \varphi (L^{-1}\!(c')) = \textsf{blank}, \text{ and either} \\ & \text{(1) $c' \in \textsf{Sad}$, or }\\ & \text{(2) $N_H(c') \cap \textsf{Good} \cap \textsf{col}(A) \neq \emptyset $}\}. \end{align*}

Claim 5.4. Let $\widetilde Z \;:\!=\; |(\mathcal{U} \setminus \widetilde{\mathcal{K}}) \cap \textsf{Happy}|$ . Then $\mathbb{P}\big [\widetilde Z \geqslant \mathbb{E}[\widetilde Z] - d^{1-\delta _2}\big ] \geqslant 1-d^{-110}$ , where $\delta _2 = 1/(10t)$ .

Proof. Recall that $\tau = 4/(9t)$ . By definition, a good colour can be responsible for the removal of at most $d^{1-\delta }$ colours from $\widetilde{\mathcal{K}}$ . Unfortunately, this bound is still too large to apply Talagrand’s inequality directly. Instead, we will first partition $\textsf{Happy}$ into $k \;:\!=\; \lceil d^{1-\tau } \rceil$ sets $\textsf{Happy}_1$ , …, $\textsf{Happy}_k$ satisfying certain properties and then argue that the random variable $|(\mathcal{U} \setminus \widetilde{\mathcal{K}}) \cap \textsf{Happy}_i|$ is highly concentrated for each $i$ . The following subclaim states these properties and proves the existence of the partition.

Subclaim 5.4.a. There exists a partition of ${\textsf{Happy}}$ into sets ${\textsf{Happy}}_1$ , …, ${\textsf{Happy}}_k$ such that the following hold for all $1\leqslant i \leqslant k$ and every $c''\in {\textsf{Good}}$ :

  • $\dfrac{d^\tau }{4} \leqslant |{\textsf{Happy}}_i| \leqslant \dfrac{3d^{\tau }}{2}$ ,

  • $|N_H(c'') \cap {\textsf{Happy}}_i| \leqslant \dfrac{3d^{\tau - \delta }}{2}$ .

Proof of Subclaim 5.4.a. Independently for each $c' \in \textsf{Happy}$, assign $c'$ to one of the sets $\textsf{Happy}_1$, …, $\textsf{Happy}_k$ uniformly at random. Let $s_i \;:\!=\; |\textsf{Happy}_i|$. Then $s_i$ is a binomial random variable with at most $d$ and at least $d - d^{1-\beta _2}$ trials, each succeeding with probability $1/k$. It follows that $\mathbb{E}[s_i] \in \left[\frac{3d^\tau }{4}, d^\tau \right]$, as $d/k \leqslant d^\tau$ and $(d - d^{1-\beta _2})/k \approx d^\tau - d^{\tau - \beta _2} \gt 3d^\tau/4$ for $d$ large enough, since $\tau \gt \beta _2$. By the Chernoff bound (Theorem 2.2), we have:

\begin{equation*}\mathbb {P}\Big [\big |s_i - \mathbb {E}[s_i]\big | \geqslant \frac {d^\tau }{2}\Big ] \leqslant 2\exp {\left (-\frac {d^{\tau }}{12}\right )}.\end{equation*}

By the union bound and since $t \leqslant \tilde \alpha \dfrac{\log d}{\log \log d}$ , we have the following for $\tilde \alpha$ small enough:

(5.5) \begin{align} \mathbb{P}\left [\exists \, i:\, \big |s_i - \mathbb{E}[s_i]\big | \geqslant \frac{d^\tau }{2}\right ] \leqslant 2\,k\,\exp{\left (-\frac{d^{\tau }}{12}\right )} \leqslant d^{-1}. \end{align}

Now, for $c'' \in \textsf{Good}$ , let $r_i(c'')$ be the number of neighbours $c''$ has in $\textsf{Happy}_i$ . Then $r_i(c'')$ is a binomial random variable with at most $d^{1-\delta }$ trials (since $c''$ is good), each succeeding with probability $1/k$ . Let $\Theta$ be a binomial random variable with exactly $\lfloor d^{1-\delta } \rfloor$ trials, each succeeding with probability $1/k$ . Note that $\mathbb{E}[r_i(c'')] \leqslant \mathbb{E}[\Theta ] \leqslant d^{\tau -\delta }$ and $\mathbb{E}[\Theta ] \gt d^{\tau - \delta }/2$ . Then, by Theorem 2.2,

\begin{align*} \mathbb{P}\left [r_i(c'') \geqslant \frac{3d^{\tau - \delta }}{2}\right ] \leqslant \mathbb{P}\left [\Theta \geqslant \mathbb{E}[\Theta ] + \frac{d^{\tau - \delta }}{2}\right ] \leqslant 2\exp{\left (-\frac{(d^{\tau - \delta }/2)^2}{3d^{\tau -\delta }}\right )} = 2\exp{\left (-\frac{d^{\tau -\delta }}{12}\right )}. \end{align*}

By the union bound and since $t\leqslant \tilde \alpha \dfrac{\log d}{\log \log d}$ , we have the following for $\tilde \alpha$ small enough:

(5.6) \begin{align} \mathbb{P}\left [\exists \, i,\, c''\in \textsf{Good} \;:\, r_i(c'') \geqslant \frac{3d^{\tau - \delta }}{2}\right ] \leqslant k\,d^2\,\exp{\left (-\frac{d^{\tau -\delta }}{12}\right )} \leqslant d^{-1}. \end{align}

Putting together (5.5) and (5.6), we obtain

\begin{equation*}\mathbb {P}\left [\textsf {Happy}_1, \, \ldots, \,\textsf {Happy}_k \text { satisfy the conditions stated}\right ] \geqslant 1 - 2d^{-1} \gt 0.\end{equation*}

So, such a partition must exist.
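Both union bounds in this proof kick in only for very large $d$. The following log-space computation (our illustration, with $t = 2$ and $\log$ read as the natural logarithm) verifies (5.5) and (5.6) at a concrete large value of $d$:

```python
import math

# Log-space verification of the union bounds (5.5) and (5.6) for t = 2, so
# tau = 2/9 and delta = 1/6.  (Illustrative parameters; k = ceil(d^(1-tau)) is
# approximated by d^(1-tau), which only affects lower-order terms.)
t = 2
tau, delta = 4 / (9 * t), 1 / (3 * t)
d = 1e100
log_d = math.log(d)
log_k = (1 - tau) * log_d
# (5.5):  2 * k * exp(-d^tau / 12) <= 1/d
assert log_k + math.log(2) - d ** tau / 12 <= -log_d
# (5.6):  k * d^2 * exp(-d^(tau - delta) / 12) <= 1/d
assert log_k + 2 * log_d - d ** (tau - delta) / 12 <= -log_d
```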

From here on out, we fix a partition $\textsf{Happy}_1$ , …, $\textsf{Happy}_k$ of $\textsf{Happy}$ that satisfies the conclusions of Subclaim 5.4.a. For each $1 \leqslant i \leqslant k$ , let $\widetilde{Z}_i\;:\!=\; |(\mathcal{U} \setminus \widetilde{\mathcal{K}})\cap \textsf{Happy}_i|$ . We will now use Exceptional Talagrand’s Inequality (Theorem 2.4) to show that each random variable $\widetilde{Z}_i$ is highly concentrated.

Subclaim 5.4.b. For each $1 \leqslant i \leqslant k$ , we have $\mathbb{P}\bigg [\left |\widetilde{Z}_i - \mathbb{E}[\widetilde{Z}_i]\right | \geqslant d^{\tau -\delta _2}\bigg ] \leqslant d^{-120}$ .

Proof of Subclaim 5.4.b. For brevity, set $X \;:\!=\; \widetilde{Z}_i$. Let $D \;:\!=\; L^{-1}(\textsf{Happy}_i)$ be the set of the underlying vertices of the colours in $\textsf{Happy}_i$. The random variable $X$ is determined by the colouring outcomes of the vertices in $S \;:\!=\; D \cup N_G(D)$. More precisely, as in the proofs of Lemmas 4.3 and 5.1, we arbitrarily order the colours in $L(u)$ for each $u \in S$ and let $T_u$ be the random variable that is equal to $0$ if $u \not \in A$ and to $j$ if $u \in A$ and $\textsf{col}(u)$ is the $j$-th colour in $L(u)$. Then $T_u$, $u \in S$, is a list of independent trials that determines $X$. Let $\Omega$ be the set of outcomes of these trials. Let $C \;:\!=\; 25$ and define $\Omega ^* \subseteq \Omega$ to be the set of all outcomes in which there is a colour $c'' \in L(S)$ such that $|N_H(c'')\cap \textsf{Happy}_i \cap \textsf{col}(A)| \geqslant C{\log d}$. We claim that $X$ satisfies conditions (ET1) and (ET2) in Theorem 2.4 with $s = 9d^\tau/2$ and $\gamma = 1 + 3d^{\tau -\delta }/2 + C{\log d}$.

To verify (ET1), take $q \gt 0$ and $\omega \notin \Omega ^*$. We form a set $I$ of at most $s$ trials as follows. Consider any colour $c' \in \textsf{Happy}_i$ that contributes towards $X$ and let $u \;:\!=\; L^{-1}\!(c')$. By definition, $\varphi (u) = \textsf{blank}$ and there is a good neighbour $c''$ of $c'$ with $c'' \in \textsf{col}(A)$. Pick any such $c''$ and let $w \;:\!=\; L^{-1}\!(c'')$. We say that $w$ is the conflicting neighbour of $u$ of Type I. Next, since $\varphi (u) = \textsf{blank}$, we either have $u \notin A$, or there is $y \in N_G(u)$ such that $u$, $y \in A$ and $\textsf{col}(y) \sim \textsf{col}(u)$. Pick any such $y$ (if it exists) and call it the conflicting neighbour of $u$ of Type II. Add the following trials to $I$:

\begin{equation*} T_u, \quad T_w, \quad T_y \text{ (if applicable)}. \end{equation*}

Since $|\textsf{Happy}_i| \leqslant 3d^\tau/2$ , we have $|I| \leqslant 3|\textsf{Happy}_i| \leqslant 9d^\tau/2 = s$ .

Note that, for every vertex $w \in S$ , there can be at most $3d^{\tau -\delta }/2$ vertices $u \in D$ such that $w$ is the conflicting neighbour of $u$ of Type I. Indeed, for $w$ to be the Type I conflicting neighbour of any vertex, it must be true that $w \in A$ and $\textsf{col}(w) \in \textsf{Good}$ . Then, by Subclaim 5.4.a, $\textsf{col}(w)$ has at most $3d^{\tau -\delta }/2$ neighbours in $\textsf{Happy}_i$ , as desired. Similarly, since $\omega \not \in \Omega ^\ast$ , for each vertex $y \in S$ , there are at most $C{\log d}$ vertices $u \in D$ such that $y$ is the conflicting neighbour of $u$ of Type II.

Now suppose that $\omega ' \not \in \Omega ^\ast$ satisfies $X(\omega ') \leqslant X(\omega ) - q$ . Consider any colour $c' \in \textsf{Happy}_i$ that contributes towards $X(\omega )$ but not $X(\omega ')$ and let $u \;:\!=\; L^{-1}\!(c')$ . Then either $T_u$ or at least one of the trials corresponding to the conflicting neighbours of $u$ must have different outcomes in $\omega$ and $\omega '$ . The observations in the previous paragraph imply that the total number of trials on which $\omega$ and $\omega '$ differ must be at least $q/(1 + 3d^{\tau -\delta }/2 + C{\log d})$ .

It remains to show that $\mathbb{P}\left [\Omega ^*\right ] \leqslant M^{-2}$, where $M = \max \{\sup X, 1\}$. As in the proof of Lemma 5.1, we get $\mathbb{P}\left [\Omega ^*\right ] \leqslant d^{-125}$ for $d$ large enough. Since $M = \max \{\sup X, 1 \} \leqslant \max \{ 3d^\tau/2, 1\} \leqslant d$, we conclude that $\mathbb{P}\left [\Omega ^*\right ] \leqslant 1/M^2$, as desired.

We can now use Exceptional Talagrand’s Inequality. Let $\xi = d^{\tau - \delta _2}$ . Note that $2\delta - 2\delta _2 -\tau \gt 0$ and $\xi \gt 50\gamma \sqrt{s}$ for $d$ large enough. We can therefore write

\begin{align*} \mathbb{P}\Big [\big |X - \mathbb{E}[X]\big | \geqslant d^{\tau - \delta _2}\Big ] &\leqslant 4\exp{\left (-\frac{d^{2\tau -2\delta _2}}{16 \left (1 + 3d^{\tau -\delta }/2 + C{\log d}\right )^2 \left (\frac{9d^\tau }{2}\right )}\right )} + 4\mathbb{P}[\Omega ^*]\\ &\leqslant 4\exp \left ( - O\left (\frac{d^{2\tau -2\delta _2}}{d^{3\tau -2\delta }}\right )\right ) + 4d^{-125} \\ &\leqslant 4\exp \left (-O\left (d^{2\delta - 2\delta _2 - \tau }\right )\right ) + 4d^{-125}\\ &\leqslant d^{-120}, \end{align*}

for $d$ large enough and $\tilde \alpha$ small enough.

Using Subclaim 5.4.b and the union bound, we obtain

\begin{align*} \mathbb{P}\Big [\exists \,i:\, \widetilde{Z}_i \leqslant \mathbb{E}[\widetilde{Z}_i] - d^{\tau - \delta _2} \Big ] \leqslant d^{1-\tau }d^{-120}\leqslant d^{-115}. \end{align*}

Since the sets $\textsf{Happy}_1$, …, $\textsf{Happy}_k$ partition $\textsf{Happy}$, we have $\widetilde{Z} = \sum _{i = 1}^k \widetilde{Z}_i$, so we conclude that

\begin{align*} \mathbb{P}\left [\widetilde{Z} \geqslant \mathbb{E}[\widetilde{Z}] - d^{1 - \delta _2}\right ] \geqslant \mathbb{P}\left [\forall \,i:\, \widetilde{Z}_i \geqslant \mathbb{E}[\widetilde{Z}_i] - d^{\tau - \delta _2}\right ] \geqslant 1-d^{-110}, \end{align*}

for $d$ large enough, as desired.

Since $|(\mathcal{U} \setminus \widetilde{\mathcal{K}}) \cap \textsf{Happy}| = \widetilde Z$, the value $\widetilde Z$ can differ from $|\mathcal{U}\setminus \widetilde{\mathcal{K}}|$ by at most the number of sad colours. Thus, by Claim 5.3, we have $0 \leqslant |\mathcal{U} \setminus \widetilde{\mathcal{K}}|-\widetilde Z \leqslant d^{1-\beta _2}$, from which it follows that

(5.7) \begin{align} {}\mathbb{E}[\widetilde{Z}] &\geqslant \mathbb{E}\big [|\mathcal{U} \setminus \widetilde{\mathcal{K}}|\big ] - d^{1-\beta _2}. \end{align}

We now show that $\mathbb{E}\big [|\mathcal{U}\setminus \mathcal{K}|\big ]$ is not much larger than $\mathbb{E}\big [|\mathcal{U} \setminus \widetilde{\mathcal{K}}|\big ]$ .

Claim 5.8. $\mathbb{E}\big [|\mathcal{U}\setminus \mathcal{K}|\big ] - \mathbb{E}\big [|\mathcal{U}\setminus \widetilde{\mathcal{K}}|\big ] \leqslant d^{1-\delta }$ .

Proof. First note that

\begin{align*} |\mathcal{U}\setminus \mathcal{K}| - |\mathcal{U}\setminus \widetilde{\mathcal{K}}| \leqslant \big |(\mathcal{U}\setminus \mathcal{K})\cap \mathcal{\widetilde K}\big |, \end{align*}

so it suffices to show $\mathbb{E}\Big [\big |(\mathcal{U}\setminus \mathcal{K})\cap \mathcal{\widetilde K}\big |\Big ] \leqslant d^{1-\delta }$ . We have

\begin{align*} (\mathcal{U}\setminus \mathcal{K})\cap \mathcal{\widetilde K} \subseteq \{&c' \in N_H(c) \,:\, \varphi (L^{-1}(c')) = \textsf{blank},\, c' \in \textsf{Happy},\, N_H(c') \cap \textsf{Bad} \cap \textsf{col}(A) \neq \emptyset \}. \end{align*}

Note that if $c' \in \textsf{Happy}$ , we have

\begin{equation*}\mathbb {P}\big [N_H(c') \cap \textsf {Bad} \cap \textsf {col}(A) \neq \emptyset \big ] \leqslant \frac {\eta }{\ell } \, d^{1-\delta } \lt d^{-\delta },\end{equation*}

from which it follows that

\begin{equation*} \mathbb {E}\Big [\big |(\mathcal {U}\setminus \mathcal {K})\cap \mathcal {\widetilde K}\big |\Big ] \leqslant \sum _{\substack {c' \in N_H(c) \\ c' \in \textsf {Happy}}} \mathbb {P}\big [N_H(c') \cap \textsf {Bad} \cap \textsf {col}(A) \neq \emptyset \big ] \leqslant d \, d^{-\delta } = d^{1-\delta }. \end{equation*}

We are now ready to finish the proof of Lemma 5.2.

Proof of Lemma 5.2. Observe that $(\mathcal{U}\setminus \widetilde{\mathcal{K}}) \setminus (\mathcal{U}\setminus \mathcal{K})$ is the set of colours $c'\in N_H(c)$ which satisfy that $\varphi (L^{-1}\!(c')) = \textsf{blank}$ , $c'\in \textsf{Sad}$ and $c' \in \mathcal{K}$ . By Claim 5.3, this implies

\begin{equation*} |\mathcal {U} \setminus \widetilde {\mathcal {K}}| - |\mathcal {U} \setminus \mathcal {K}| \leqslant d^{1-\beta _2}. \end{equation*}

Therefore, with probability at least $1-d^{-110}$ , we have the following chain of inequalities:

\begin{align*} |\mathcal{U}\setminus \mathcal{K}| &\geqslant |\mathcal{U}\setminus \widetilde{\mathcal{K}}| - d^{1-\beta _2} \\ &\geqslant \widetilde Z - d^{1-\beta _2} &\text{ (since $\widetilde{Z} = |(\mathcal{U}\setminus \widetilde{\mathcal{K}}) \cap \textsf{Happy}| \leqslant |\mathcal{U}\setminus \widetilde{\mathcal{K}}|$)}\\ &\geqslant \mathbb{E}[\widetilde Z] - d^{1-\delta _2}-d^{1-\beta _2} &\text{ (by Claim 5.4)}\\ &\geqslant \mathbb{E}\big [|\mathcal{U}\setminus \widetilde{\mathcal{K}}|\big ] - d^{1-\delta _2} - 2d^{1-\beta _2} &\text{ (by (5.7))} \\ &\geqslant \mathbb{E}\big [|\mathcal{U}\setminus \mathcal{K}|\big ] - d^{1-\delta } - d^{1-\delta _2} - 2d^{1-\beta _2} &\text{ (by Claim 5.8)}. \end{align*}

Since $\beta _1 = 1/(20t) \lt \min \{\delta, \delta _2, \beta _2\}$, we have $d^{1-\beta _1} \geqslant d^{1-\delta } + d^{1-\delta _2} + 2d^{1-\beta _2}$ for $d$ large enough. Therefore,

\begin{align*} \mathbb{P}\Big [|\mathcal{U}\setminus \mathcal{K}| \geqslant \mathbb{E}\big [|\mathcal{U}\setminus \mathcal{K}|\big ] - d^{1-\beta _1}\Big ] \geqslant 1-d^{-110}, \end{align*}

as desired.
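The dominance of $d^{1-\beta_1}$ over the three error terms used in this proof sets in only at an enormous threshold. A numerical illustration (our own choices $t = 2$, $d = 10^{60}$):

```python
# Since beta_1 = 1/(20t) is strictly smaller than each of delta, delta_2 and
# beta_2, the term d**(1 - beta_1) eventually dominates the sum of the three
# error terms.  (Illustrative check; the crossover point is astronomically large.)
t = 2
beta1, delta, delta2, beta2 = 1 / (20 * t), 1 / (3 * t), 1 / (10 * t), 1 / (15 * t)
assert beta1 < min(delta, delta2, beta2)
d = 1e60
assert d ** (1 - beta1) >= d ** (1 - delta) + d ** (1 - delta2) + 2 * d ** (1 - beta2)
```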

We can now complete the proof of Lemma 4.5:

\begin{align*} & \mathbb{P}\big [d'(c) \geqslant \mathbb{E}[d'(c)] - \frac{d}{\ell } + d^{1-\beta }\big ]\\ \leqslant \;& \mathbb{P}\big [d'(c) \geqslant \mathbb{E}[d'(c)] + 2d^{1-\beta _1}\big ] &\text{(for $d$ large enough)}\\ \leqslant \; & \mathbb{P}\Big [|\mathcal{U}| \gt \mathbb{E}\big [|\mathcal{U}|\big ]+ d^{1-\beta _1}\Big ] + \mathbb{P}\Big [|\mathcal{U}\setminus \mathcal{K}| \lt \mathbb{E}\big [|\mathcal{U}\setminus \mathcal{K}|\big ] - d^{1-\beta _1}\Big ] &\text{ (union bound) }\\ \leqslant \;& d^{-110} + d^{-110} &\text{ (by Lemmas 5.1 and 5.2)}\\ \leqslant \; & d^{-100}.\end{align*}

6. Proof of Theorem 1.6

In this section we prove Theorem 1.6 by iteratively applying Lemma 3.1 until we reach a stage where we can apply Proposition 3.2. To do so, we first define the parameters for the graph and cover at each iteration and then define $d_0$ so that the graphs at each iteration satisfy the conditions of Lemma 3.1. This section closely follows [Reference Molloy and Reed22, Chapter 12].

We use the notation of Theorem 1.6. Let

\begin{equation*}G_1 \;:\!=\; G,\quad \mathcal {H}_1 = (L_1, H_1) \;:\!=\; \mathcal {H},\quad \ell _1 \;:\!=\; (1+\varepsilon )d/\log d,\quad d_1 \;:\!=\; d,\end{equation*}

where we may assume that $\varepsilon$ is sufficiently small, say $\varepsilon \lt 1/100$ . Since $d$ is large, we may also assume that $\ell _1$ is an integer by slightly modifying $\varepsilon$ if necessary. Define

\begin{equation*}\kappa \;:\!=\; (1+\varepsilon/2)\log (1+\varepsilon/100)\approx \varepsilon/100,\end{equation*}

and fix $\eta \;:\!=\; \kappa/\log d$ , so that $\eta$ is the same each time we apply Lemma 3.1. Set $\beta \;:\!=\; 1/(25t)$ and recursively define the following parameters for each $i \geqslant 1$ :

\begin{align*} \textsf{keep}_i &\;:\!=\; \left (1 - \frac{\kappa }{\ell _i \log d}\right )^{d_i}, & \textsf{uncolor}_i &\;:\!=\; 1 - \frac{\kappa }{\log d}\, \textsf{keep}_i,\\ \ell _{i+1} &\;:\!=\; \left \lceil \textsf{keep}_i\, \ell _i- \ell _i^{1 - \beta }\right \rceil, & d_{i+1}&\;:\!=\; \left \lfloor \textsf{keep}_i\, \textsf{uncolor}_i\, d_i + d_i^{1 - \beta }\right \rfloor . \end{align*}

Suppose that at the start of iteration $i$ , the following numerical conditions hold:

  1. (1) $d_i \geqslant \tilde{d}$ ,

  2. (2) $\eta \,d_i \lt \ell _i \lt 8d_i$ ,

  3. (3) $s \leqslant d_i^{1/4}$ ,

  4. (4) $t \leqslant \dfrac{\tilde{\alpha }\log d_i}{\log \log d_i}$ ,

  5. (5) $\dfrac{1}{\log ^5d_i} \lt \eta \lt \dfrac{1}{\log d_i}$ .

Furthermore, suppose that we have a graph $G_i$ and a DP-cover $\mathcal{H}_i = (L_i, H_i)$ of $G_i$ such that:

  1. (6) $H_i$ is $K_{s,t}$ -free,

  2. (7) $\Delta (H_i) \leqslant d_i$ ,

  3. (8) $|L_i(v)| \geqslant \ell _i$ for all $v \in V(G_i)$ .

Then we may apply Lemma 3.1 to obtain a partial $\mathcal{H}_i$ -colouring $\varphi _i$ of $G_i$ and an assignment of subsets $L_{i+1}(v) \subseteq (L_i)_{\varphi _i}(v)$ to each vertex $v \in V(G_i) \setminus \mathrm{dom}(\varphi _i)$ such that, setting

\begin{equation*} G_{i+1} \;:\!=\; G_i[V(G_i) \setminus \mathrm {dom}(\varphi _i)] \quad \text {and} \quad H_{i+1} \;:\!=\; H_i \left [\bigcup _{v \in V(G_{i+1})} L_{i+1}(v)\right ], \end{equation*}

we get that conditions (6)–(8) hold with $i+1$ in place of $i$ . Note that, assuming $d$ is large enough and $\tilde{\alpha }$ is small enough, conditions (1)–(8) are satisfied initially (i.e., for $i = 1$ ). Our goal is to show that there is some value $i^\star \in{\mathbb{N}}$ such that:

  • for all $1 \leqslant i \lt i^\star$ , conditions (1)–(5) hold, and

  • we have $\ell _{i^\star } \geqslant 8d_{i^\star }$ .

Since conditions (6)–(8) hold by construction, we will then be able to iteratively apply Lemma 3.1 $i^\star - 1$ times and then complete the colouring using Proposition 3.2.
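As a toy numerical illustration of this iteration scheme (with our own parameters $\varepsilon = 0.01$, $d = 10^6$, and with the error terms $\ell_i^{1-\beta}$, $d_i^{1-\beta}$ dropped, i.e., iterating the "hatted" recursion of Lemma 6.2), one can watch the ratio $d_i/\ell_i$ contract by the factor $\textsf{uncolor}_i \lt 1$ at each step until $\ell_i \geqslant 8d_i$, at which point Proposition 3.2 would finish the colouring. At this toy scale the absolute values of $\hat\ell_i$, $\hat d_i$ quickly lose their combinatorial meaning; only the ratio dynamics are the point:

```python
import math

# Iterate lhat_{i+1} = keep_i * lhat_i and dhat_{i+1} = keep_i * uncolor_i * dhat_i
# (the error-free recursion of Lemma 6.2) with illustrative parameters.  The
# ratio dhat/lhat is multiplied by uncolor_i < 1 at every step, so the loop
# terminates with lhat >= 8 * dhat; the contraction per step is tiny, so this
# takes several hundred thousand iterations.
eps, d = 0.01, 10**6
log_d = math.log(d)
kappa = (1 + eps / 2) * math.log(1 + eps / 100)
lhat, dhat = (1 + eps) * d / log_d, float(d)
steps = 0
while lhat < 8 * dhat:
    keep = (1 - kappa / (lhat * log_d)) ** dhat
    uncolor = 1 - (kappa / log_d) * keep
    lhat, dhat = keep * lhat, keep * uncolor * dhat
    steps += 1
assert lhat >= 8 * dhat
assert steps > 100_000
```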

We first show that the ratio $d_i/\ell _i$ is decreasing for $d_i$ , $\ell _i$ large enough.

Lemma 6.1. Suppose that for all $j \leqslant i$ , we have $\ell _j^\beta$ , $d_j^\beta \geqslant 30\log ^2 d$ and $\ell _j \leqslant 8d_j$ . Then

\begin{equation*} \frac {d_{i+1}}{\ell _{i+1}} \,\leqslant \, \frac {d_i}{\ell _i}. \end{equation*}

Proof. The proof is by induction on $i$ . Assume the statement holds for all values less than $i$ . In particular, $d_i/\ell _i \leqslant d_1/\ell _1 \lt \log d$ . Using this we find the following bound:

\begin{align*} \textsf{keep}_i\, \textsf{uncolor}_i &= \textsf{keep}_i\,\left (1 - \frac{\kappa }{\log d}\, \textsf{keep}_i\right ) \\ &= \textsf{keep}_i - \frac{\kappa }{\log d}\, \left (1 - \frac{\kappa }{\ell _i \log d}\right )^{2d_i} \\ &\leqslant \textsf{keep}_i - \frac{\kappa }{\log d}\,\left (1 - \frac{2\kappa d_i}{\ell _i\log d}\right ) \\ &\leqslant \textsf{keep}_i - \frac{\kappa }{2\log d} \\ &\leqslant \textsf{keep}_i - 3\ell _i^{-\beta }. \end{align*}

With this computation in mind, we have:

\begin{align*} \frac{d_{i+1}}{\ell _{i+1}} &\leqslant \frac{\textsf{keep}_i\, \textsf{uncolor}_i\, d_i + d_i^{1-\beta }}{\textsf{keep}_i\, \ell _i - \ell _i^{1-\beta }} \\ &\leqslant \frac{d_i\,(\textsf{keep}_i - 3\ell _i^{-\beta } + d_i^{-\beta })}{\ell _i\,(\textsf{keep}_i - \ell _i^{-\beta })} \\ &\leqslant \frac{d_i}{\ell _i}. \end{align*}

The last inequality follows since $\ell _i \leqslant 8d_i$ and $8^\beta \leqslant 2$, whence $d_i^{-\beta } \leqslant 2\ell _i^{-\beta }$ and so $\textsf{keep}_i - 3\ell _i^{-\beta } + d_i^{-\beta } \leqslant \textsf{keep}_i - \ell _i^{-\beta }$.

For computational purposes, it is convenient to remove the error terms $\ell _i^{1-\beta }$ and $d_i^{1-\beta }$ from the definitions of $d_{i+1}$ and $\ell _{i+1}$ . This is done in the following lemma.

Lemma 6.2. Let $\hat{\ell }_1 \;:\!=\; \ell _1$ , $\hat{d}_1 \;:\!=\; d_1$ , and recursively define:

\begin{align*} \hat{\ell }_{i+1} &\;:\!=\; {\textsf{keep}}_i\, \hat{\ell }_i, \\ \hat{d}_{i+1} &\;:\!=\; {\textsf{keep}}_i\, {\textsf{uncolor}}_i\, \hat{d}_i. \end{align*}

If for all $1\leqslant j \lt i$ , we have $d_j^\beta,\, \ell _j^\beta \geqslant 30\log ^4 d$ and $\ell _j \leqslant 8d_j$ , then

  • $|\ell _i - \hat{\ell }_i| \leqslant \hat{\ell }_i^{1-\beta/2}$ ,

  • $|d_i - \hat{d}_i| \leqslant \hat{d}_i^{1-\beta/2}$ .

Proof. Before we proceed with the proof, let us record a few inequalities. By Lemma 6.1,

(6.3) \begin{equation} \textsf{keep}_i \geqslant 1 - \frac{d_i\, \kappa }{\ell _i\log d} \geqslant 1-\kappa . \end{equation}

Also, assuming $d$ is large enough, we have

(6.4) \begin{equation} \textsf{keep}_i \leqslant \exp \left (-\dfrac{\kappa \, d_i}{\ell _i\log d}\right ) \leqslant \exp \left (-\dfrac{\kappa }{8\log d}\right ) \leqslant 1 - \frac{\kappa }{10\log d}. \end{equation}

It follows from (6.3) that

\begin{equation*}\textsf {keep}_i\, \textsf {uncolor}_i = \textsf {keep}_i - \frac {\kappa }{\log d}\, \textsf {keep}_i^2 \geqslant 1 - \kappa \left ( 1 + \frac {\textsf {keep}_i^2}{\log d}\right ) \geqslant 1 - 2\kappa .\end{equation*}

Since $\kappa \lt 1/4$ , the function $f(x) = x^{1-\beta/2} - x$ is decreasing on $[1-2\kappa,1]$ . It follows from (6.4) that

(6.5) \begin{equation} \textsf{keep}_i^{1-\beta/2} - \textsf{keep}_i \geqslant \left (1 - (1-\beta/2)\,\frac{\kappa }{10\log d}\right ) - \left (1 - \frac{\kappa }{10\log d}\right ) = \frac{\beta \kappa }{20\log d}. \end{equation}

Also, we can write

(6.6) \begin{equation} (\textsf{keep}_i\, \textsf{uncolor}_i)^{1-\beta/2} - \textsf{keep}_i\, \textsf{uncolor}_i \geqslant \textsf{keep}_i^{1-\beta/2} - \textsf{keep}_i \geqslant \frac{\beta \kappa }{20\log d}. \end{equation}

Now we are ready to prove Lemma 6.2 by induction on $i$ . Note that $\hat{\ell }_i \geqslant \ell _i$ , $\hat{d}_i \leqslant d_i$ . For the base case $i = 1$ , the claim is trivial. Assume now that it holds for some $i$ and consider $i + 1$ . We have

\begin{align*} \hat{\ell }_{i+1} &= \textsf{keep}_i\, \hat{\ell }_i \\ &\leqslant \textsf{keep}_i\,(\ell _i + \hat{\ell }_i^{1 - \beta/2}) &\text{(by the inductive hypothesis)} \\ &\leqslant \ell _{i+1} + \ell _i^{1 - \beta } + \left (\textsf{keep}_i^{1-\beta/2} - \frac{\beta \kappa }{20\log d}\right )\, \hat{\ell }_{i}^{1-\beta/2} &\text{(by (6.5))} \\ &= \ell _{i+1} + \hat{\ell }_{i+1}^{1-\beta/2} + \ell _i^{1 - \beta } - \frac{\beta \kappa }{20\log d}\,\hat{\ell }_i^{1-\beta/2}. \end{align*}

It remains to argue that

\begin{equation*}\frac {\beta \kappa }{20\log d}\,\hat {\ell }_i^{1-\beta/2} \geqslant \ell _i^{1-\beta },\end{equation*}

which is equivalent to

\begin{equation*}\frac {\ell _i^{1-\beta }}{\hat {\ell }_i^{1-\beta/2}} \leqslant \frac {\beta \kappa }{20\log d}.\end{equation*}

To this end, we write

\begin{equation*}\frac {\ell _i^{1-\beta }}{\hat {\ell }_i^{1-\beta/2}} \leqslant \frac {\ell _i^{1-\beta }}{\ell _i^{1-\beta/2}} = \ell _i^{-\beta/2} \leqslant \frac {1}{5\log ^2d} \lt \frac {\beta \kappa }{20\log d},\end{equation*}

since $\beta = \Omega (\log \log d/\log d)$ . Thus, the claim holds for $d$ large enough.

The argument for $\hat{d}_{i+1}$ is almost identical. We have

\begin{align*} \hat{d}_{i+1} &= \textsf{keep}_i\, \textsf{uncolor}_i\, \hat{d}_i \\ &\geqslant \textsf{keep}_i\, \textsf{uncolor}_i\,(d_i - \hat{d}_i^{1 - \beta/2}) &\text{(by the inductive hypothesis)}\\ &\geqslant d_{i+1} - d_i^{1-\beta } - \left ((\textsf{keep}_i\,\textsf{uncolor}_i)^{1-\beta/2} - \frac{\beta \kappa }{20\log d}\right )\,\hat{d}_i^{1-\beta/2} &\text{(by (6.6))} \\ &= d_{i+1} - \hat{d}_{i+1}^{1-\beta/2} - d_i^{1-\beta } + \frac{\beta \kappa }{20\log d}\,\hat{d}_i^{1-\beta/2}. \end{align*}

It remains to argue that

\begin{equation*}\frac {\beta \kappa }{20\log d}\,\hat {d}_i^{1-\beta/2} \geqslant d_i^{1-\beta },\end{equation*}

which is equivalent to

\begin{equation*}\frac {d_i^{1-\beta }}{\hat {d}_i^{1-\beta/2}} \leqslant \frac {\beta \kappa }{20\log d}.\end{equation*}

To this end, we write

\begin{equation*}\frac {d_i^{1-\beta }}{\hat {d}_i^{1-\beta/2}} \leqslant \frac {d_i^{-\beta }}{\hat {d}_i^{1-\beta/2}}\,\left (\hat {d}_i + \hat {d}_i^{1-\beta/2}\right ) = d_i^{-\beta }(\hat {d}_i^{\beta/2}+1) \leqslant 2d_i^{-\beta/2} \leqslant \frac {1}{2\log ^2d} \lt \frac {\beta \kappa }{20\log d},\end{equation*}

and so, the claim holds for $d$ large enough.

Next we show that $\ell _i$ never gets too small:

Lemma 6.7. Suppose that for all $j \lt i$ , we have $\ell _j \leqslant 8d_j$ . Then $\ell _i \geqslant d^{\varepsilon/15}$ .

Proof. For brevity, set $r_i \;:\!=\; d_i/\ell _i$ and $\hat{r}_i \;:\!=\; \hat{d}_i/\hat{\ell }_i$ . The proof is by induction on $i$ . The base case $i = 1$ is clear. Now we assume that the desired bound holds for $\ell _1$ , …, $\ell _i$ and consider $\ell _{i+1}$ . Assuming $d$ is large enough, we have

(6.8) \begin{equation} 1 - \frac{\kappa }{\ell _i\log d} \geqslant \exp{\left (-\frac{\kappa }{(1-\varepsilon/4)\ell _i\log d}\right )}. \end{equation}

Note that $r_1 = \hat{r}_1 = \log d/(1+\varepsilon )$ and, assuming $\varepsilon \lt 1/100$ , $(1-\varepsilon/4)(1+\varepsilon ) \geqslant 1 +\varepsilon/2$ . Hence,

\begin{align*} \textsf{keep}_i &= \left (1 - \frac{\kappa }{\ell _i \log d}\right )^{d_i} \\ &\geqslant \exp{\left (-\frac{\kappa }{(1-\varepsilon/4)\log d}\, r_i\right )} &\text{(by (6.8))}\\ &\geqslant \exp{\left (-\frac{\kappa }{(1-\varepsilon/4)\log d}\, r_1\right )} &\text{(by Lemma 6.1)}\\ &\geqslant \exp{\left (-\frac{\kappa }{(1+\varepsilon/2)}\right )}. \end{align*}

With this bound on $\textsf{keep}_i$ , we can bound $\hat{r}_i$ as follows:

\begin{align*} \hat{r}_i &= \hat{r}_1\prod \limits _{j \lt i}\textsf{uncolor}_j \\ &= \hat{r}_1\prod \limits _{j \lt i}\left (1 - \frac{\kappa }{\log d}\, \textsf{keep}_j \right ) \\ &\leqslant \frac{\log d}{1+\varepsilon }\left (1 - \frac{\kappa }{\log d}\, \exp{\left (-\frac{\kappa }{(1+\varepsilon/2)}\right )}\right )^{i-1}. \end{align*}

Applying Lemma 6.2, for $d$ large enough in terms of $\varepsilon$ , we obtain the following bound on $r_i$ :

\begin{align*} r_i &\leqslant \hat{r}_i\left (\frac{1 + \hat{d}_i^{-\beta/2}}{1 - \hat{\ell }_i^{-\beta/2}}\right ) \\ &\leqslant \hat{r}_i\left (1 + 2\hat{\ell }_i^{-\beta/2} + 2\hat{d}_i^{-\beta/2}\right ) \\ &\leqslant \hat{r}_i\left (1 + O(d^{-\varepsilon \beta/30})\right )\\ &\lt \frac{\log d}{1+\varepsilon/2}\left (1 - \frac{\kappa }{\log d}\, \exp{\left (-\frac{\kappa }{(1+\varepsilon/2)}\right )}\right )^{i-1}. \end{align*}

Note that for $\varepsilon$ small enough, $(1-\varepsilon/4)(1+\varepsilon/2) \geqslant 1+\varepsilon/8$ . Applying this and the above bound on $r_i$ , we can get a better bound on $\textsf{keep}_i$ :

\begin{align*} \textsf{keep}_i &\geqslant \exp{\left (-\frac{\kappa }{(1-\varepsilon/4)\log d}\, r_i\right )} \\ &\geqslant \exp{\left (-\frac{\kappa }{(1-\varepsilon/4)\log d}\, \frac{\log d}{1+\varepsilon/2}\left (1 - \frac{\kappa }{\log d}\, \exp{\left (-\frac{\kappa }{(1+\varepsilon/2)}\right )}\right )^{i-1}\right )} \\ &\geqslant \exp{\left (-\frac{\kappa }{(1+\varepsilon/8)}\left (1 - \frac{\kappa }{\log d}\, \exp{\left (-\frac{\kappa }{(1+\varepsilon/2)}\right )}\right )^{i-1}\right )}. \end{align*}

With this bound on $\textsf{keep}_i$ , we can get a lower bound on $\hat{\ell }_{i+1}$ as follows:

\begin{align*} \hat{\ell }_{i+1} &= \hat{\ell }_1\,\prod \limits _{j \leqslant i}\textsf{keep}_j \\ &\geqslant \hat{\ell }_1\,\prod \limits _{j \leqslant i}\exp{\left (-\frac{\kappa }{(1+\varepsilon/8)}\left (1 - \frac{\kappa }{\log d}\, \exp{\left (-\frac{\kappa }{(1+\varepsilon/2)}\right )}\right )^{j-1}\right )} \\ &= (1+\varepsilon )\,\frac{d}{\log d}\, \exp{\left (-\frac{\kappa }{(1+\varepsilon/8)}\sum \limits _{j \leqslant i}\left (1 - \frac{\kappa }{\log d}\, \exp{\left (-\frac{\kappa }{(1+\varepsilon/2)}\right )}\right )^{j-1}\right )} \\ &\geqslant (1+\varepsilon )\,\frac{d}{\log d}\, \exp{\left (-\frac{\kappa }{(1+\varepsilon/8)}\sum \limits _{j =1}^\infty \left (1 - \frac{\kappa }{\log d}\, \exp{\left (-\frac{\kappa }{(1+\varepsilon/2)}\right )}\right )^{j-1}\right )} \\ &= (1+\varepsilon )\,\frac{d}{\log d}\, \exp{\left (-\frac{\log d}{(1+\varepsilon/8)}\, \exp{\left (\frac{\kappa }{(1+\varepsilon/2)}\right )}\right )} \\ &= (1+\varepsilon )\,\frac{d}{\log d}\, d^{\left (-\dfrac{\exp{\left (\kappa/(1+\varepsilon/2)\right )}}{(1+\varepsilon/8)}\right )}. \end{align*}

Recalling that $\kappa = (1+\varepsilon/2)\log (1+\varepsilon/100)$ , we get

\begin{equation*}\frac {\exp {\left (\kappa/(1+\varepsilon/2)\right )}}{(1+\varepsilon/8)} = \frac {1+\varepsilon/100}{1+\varepsilon/8} \lt 1-\varepsilon/9.\end{equation*}

Therefore, for $d$ large enough, we get

\begin{equation*}\hat {\ell }_{i+1} \gt (1+\varepsilon )\,\frac {d}{\log d}\, d^{\varepsilon/9-1} \gt d^{\varepsilon/10}.\end{equation*}

Applying Lemma 6.2, we finally get the bound we desire:

\begin{equation*} \ell _{i+1} \geqslant \hat {\ell }_{i+1} - \hat {\ell }_{i+1}^{1-\beta/2} \geqslant d^{\varepsilon/10}(1 - \hat {\ell }_{i+1}^{-\beta/2}) \geqslant d^{\varepsilon/15}. \end{equation*}
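The constant computation at the end of the proof can be verified numerically. The snippet below confirms, for a grid of $\varepsilon \in (0, 1/100]$ , that $\kappa = (1+\varepsilon/2)\log (1+\varepsilon/100)$ gives $\exp (\kappa/(1+\varepsilon/2)) = 1+\varepsilon/100$ and that $(1+\varepsilon/100)/(1+\varepsilon/8) \lt 1-\varepsilon/9$ :

```python
import math

# Check the two facts used after the display for hat ell_{i+1}:
# (i)  exp(kappa/(1 + eps/2)) = 1 + eps/100 by the choice of kappa;
# (ii) (1 + eps/100)/(1 + eps/8) < 1 - eps/9 for all eps in (0, 1/100].
for k in range(1, 101):
    eps = k / 10000.0            # samples eps in (0, 0.01]
    kappa = (1.0 + eps / 2.0) * math.log(1.0 + eps / 100.0)
    assert abs(math.exp(kappa / (1.0 + eps / 2.0)) - (1.0 + eps / 100.0)) < 1e-12
    assert (1.0 + eps / 100.0) / (1.0 + eps / 8.0) < 1.0 - eps / 9.0
print("constant bound verified for eps in (0, 1/100]")
```

For small $\varepsilon$ the left-hand side of (ii) is roughly $1 - 0.115\varepsilon$, comfortably below $1 - \varepsilon/9 \approx 1 - 0.111\varepsilon$.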

We can now finally establish the existence of the desired index $i^\star$ :

Lemma 6.9. There exists an integer $i^\star \geqslant 1$ such that $\ell _{i^\star } \geqslant 8d_{i^\star }$ .

Proof. As in the proof of Lemma 6.7, set $r_i \;:\!=\; d_i/\ell _i$ and $\hat{r}_i \;:\!=\; \hat{d}_i/\hat{\ell }_i$ . Suppose, towards a contradiction, that $\ell _i \lt 8d_i$ (i.e., $r_i \gt 1/8$ ) for all $i \geqslant 1$ . By Lemma 6.7, this implies that $\ell _i \geqslant d^{\varepsilon/15}$ for all $i$ . Note that $\hat{r}_i = \textsf{uncolor}_{i-1}\, \hat{r}_{i-1}$ , so $(\hat{r}_i)$ is a decreasing sequence. Furthermore, $\textsf{keep}_j \geqslant \textsf{keep}_1 \geqslant 1-\frac{\kappa }{1+\varepsilon } \geqslant 1/2$ . Thus,

\begin{align*} r_{i} &\leqslant 2\hat{r}_{i} \\ &\leqslant 2\hat{r}_1\prod \limits _{j \lt i}\left (1 - \frac{\kappa }{\log d}\, \textsf{keep}_j\right ) \\ &\leqslant 2\hat{r}_1\left (1 - \frac{\kappa }{2\log d}\right )^{i-1} \\ & \leqslant 2\log d\, \exp{\left (-\frac{\kappa }{2\log d}\, (i-1)\right )}. \end{align*}

For $i \geqslant \frac{10}{\kappa }\,\log d\log \log d$ , the last expression is less than $1/8$ ; a contradiction.
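The final step can be checked numerically: substituting $i = (10/\kappa )\log d\log \log d$ into $2\log d\, \exp (-\kappa i/(2\log d))$ gives exactly $2/\log ^4 d$ , which is well below $1/8$ . The snippet below verifies this for several values of $\log d$ (the value of $\kappa$ is an arbitrary positive constant for this check):

```python
import math

# Check that 2*log d * exp(-(kappa/(2 log d)) * i) < 1/8
# once i = (10/kappa) * log d * log log d, for a sample of d.
kappa = 0.2   # illustrative; the threshold for i scales it away
for log_d in [10.0, 20.0, 50.0, 100.0]:
    i = (10.0 / kappa) * log_d * math.log(log_d)
    value = 2.0 * log_d * math.exp(-(kappa / (2.0 * log_d)) * i)
    # The exponent simplifies to -5 log log d, so value = 2 / log^4 d.
    assert abs(value - 2.0 / log_d ** 4) < 1e-9
    assert value < 1.0 / 8.0, (log_d, value)
print("expression drops below 1/8 at the stated i")
```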

Let $i^\star \geqslant 1$ be the smallest integer such that $\ell _{i^\star } \geqslant 8d_{i^\star }$ (which exists by Lemma 6.9). Take any $i \lt i^\star$ . We need to verify conditions (1)–(5). Note that Lemma 6.7 yields

(6.10) \begin{equation} \ell _{i} \geqslant d^{\varepsilon/15} \quad \text{and} \quad d_{i} \geqslant \frac{\ell _{i}}{8} \geqslant \frac{d^{\varepsilon/15}}{8} \geqslant d^{\varepsilon/20}. \end{equation}

Therefore, condition (1) holds assuming that $d_0 \gt \tilde{d}^{20/\varepsilon }$ . For (2), we use Lemma 6.1 to write

\begin{equation*} \frac {\ell _i}{d_i} \geqslant \frac {\ell _1}{d_1} \geqslant \frac {1}{\log d} \geqslant \eta . \end{equation*}

Next, due to (6.10), we can take $\alpha$ so small that

\begin{equation*} s \leqslant d^{\alpha \varepsilon } \leqslant d_i^\frac {1}{4} \quad \text {and} \quad t \leqslant \frac {\alpha \varepsilon \log d}{\log \log d} \leqslant \frac {\tilde {\alpha } \log d_i}{\log \log d_i}, \end{equation*}

which yields conditions (3) and (4). Finally, for $d$ large enough, we have

\begin{equation*} \frac {1}{\log ^5d_i} \leqslant \frac {1}{(\varepsilon/20)^5\log ^5 d} \leqslant \eta \leqslant \frac {1}{\log d} \leqslant \frac {1}{\log d_i}, \end{equation*}

so (5) holds as well. As discussed earlier, we can now iteratively apply Lemma 3.1 $i^\star - 1$ times and then complete the colouring using Proposition 3.2. This completes the proof of Theorem 1.6.

Acknowledgements

We are grateful to the anonymous referee for helpful suggestions.

Footnotes

Research of the second named author was partially supported by NSF grant DMS-2045412.

References

Achlioptas, D. and Coja-Oghlan, A. (2008) Algorithmic barriers from phase transitions. IEEE Symposium on Foundations of Computer Science (FOCS), 793–802. Full version: https://arxiv.org/abs/0803.2122
Alon, N. and Assadi, S. (2020) Palette sparsification beyond $(\Delta +1)$ vertex coloring. Available at: https://arxiv.org/abs/2006.10456 (preprint).
Alon, N., Krivelevich, M. and Sudakov, B. (1999) Coloring graphs with sparse neighborhoods. J. Combin. Theory Ser. B 77 73–82.
Amini, O. and Reed, B. (2008) List colouring constants of triangle free graphs. Electron. Notes Discret. Math. 30 135–140.
Anderson, J., Bernshteyn, A. and Dhawan, A. (2022) Coloring graphs with forbidden almost bipartite subgraphs. Available at: https://arxiv.org/abs/2203.07222 (preprint).
Bernshteyn, A. (2019) The Johansson-Molloy theorem for DP-coloring. Rand. Struct. Algor. 54 653–664.
Bollobás, B. (1981) The independence ratio of regular graphs. Proc. Am. Math. Soc. 83(2) 433–436.
Bonamy, M., Perrett, T. and Postle, L. (2018) Colouring graphs with sparse neighbourhoods: Bounds and applications. Available at: https://arxiv.org/abs/1810.06704 (preprint).
Bruhn, H. and Joos, F. (2018) A stronger bound for the strong chromatic index. Combin. Probab. Comput. 27 21–43.
Cambie, S. and Kang, R. (2020) Independent transversals in bipartite correspondence-covers. Available at: https://arxiv.org/abs/2009.05428 (preprint).
Davies, E., Kang, R., Pirot, F. and Sereni, J.-S. (2020) Graph structure via local occupancy. Available at: https://arxiv.org/abs/2003.14361 (preprint).
Dvořák, Z. and Postle, L. (2018) Correspondence coloring and its application to list-coloring planar graphs without cycles of lengths 4 to 8. J. Combin. Theory Ser. B 129 38–54.
Hyltén-Cavallius, C. (1958) On a combinatorial problem. Colloq. Math. 6 61–65.
Imrich, W. (1984) Explicit construction of regular graphs without small cycles. Combinatorica 4 53–59.
Johansson, A. (1996) Asymptotic Choice Number for Triangle Free Graphs. DIMACS Technical Report 91-95.
Johansson, A. (1996) The choice number of sparse graphs. Available at: https://www.cs.cmu.edu/~anupamg/down/johansson-choice-number-of-sparse-graphs-coloring-kr-free.pdf (preprint).
Kang, D., Kelly, T., Kühn, D., Methuku, A. and Osthus, D. (2021) Graph and hypergraph colouring via nibble methods: A survey. Available at: https://arxiv.org/pdf/2106.13733 (preprint).
Kim, J. H. (1995) On Brooks' theorem for sparse graphs. Combin. Probab. Comput. 4 97–132.
Kővári, T., Sós, V. and Turán, P. (1954) On a problem of K. Zarankiewicz. Colloq. Math. 3 50–57.
Margulis, G. A. (1982) Explicit constructions of graphs without short cycles and low density codes. Combinatorica 2 71–78.
Molloy, M. (2019) The list chromatic number of graphs with small clique number. J. Combin. Theory Ser. B 134 264–284.
Molloy, M. and Reed, B. (2002) Graph Colourings and the Probabilistic Method. Springer.
Pettie, S. and Su, H.-H. (2015) Distributed coloring algorithms for triangle-free graphs. Inform. Comput. 243 263–280.
Rahman, M. and Virág, B. (2017) Local algorithms for independent sets are half-optimal. Ann. Probab. 45(3) 1543–1577.
Zdeborová, L. and Krząkała, F. (2007) Phase transitions in the coloring of random graphs. Phys. Rev. E 76 031131.