1 Introduction

In Fourier series theory, a fundamental question is how to reconstruct a function from the partial sums of its Fourier series. Carleson [2] showed that if \(f \in L^2\), then the partial sums converge to the function almost everywhere. The condition \(f \in L^2\) in Carleson's theorem was weakened by Hunt [3] (\(f\in L^p\, (p>1)\)) and by Antonov [4], who proved that if f belongs to the class \(L\log ^+L \log ^+\log ^+\log ^+ L\), then the partial sums of the Fourier series again converge to the function almost everywhere.

On the other hand, A. N. Kolmogoroff [5] constructed his famous example of a function \(f\in L^1\) for which the partial sums \(S_mf\) diverge unboundedly almost everywhere. In another paper [6], he constructed an everywhere divergent Fourier series.

It is also of principal interest to discuss this reconstruction issue when only a subsequence of the partial sums is available. Perhaps for some special subsequences of the partial sums of the Fourier series one can obtain “positive” conclusions.

Even more striking results concerning partial sums and the Lebesgue space \(L^1\) are due to Gosselin [7] and Totik [8]. In 1958 Gosselin showed that for each subsequence \((n_j)\) of the sequence of natural numbers there exists an integrable function f such that \(\sup _{j}|S_{n_j}f|=+\infty \) almost everywhere. His result was improved by Totik, who showed the existence of an integrable function f such that \(\sup _{j}|S_{n_j}f|=+\infty \) everywhere. Moreover, Konyagin [9] proved that for any increasing sequence \((n_j)\) of positive integers and any nondecreasing function \(\phi : [0, +\infty ) \rightarrow [0, +\infty )\) satisfying the condition \(\phi (u)=o(u\log \log u)\), there is a function \(f \in \phi (L)\) such that \(\sup _{j}|S_{n_j}f|=+\infty \) everywhere.

In view of the fact that convergence properties of sequences can be improved by considering arithmetic means (see the classical result of Cesàro for instance in [10]), it is natural to try to do the same in the theory of Fourier series, i.e. to consider the convergence properties of

$$\begin{aligned} \frac{1}{N}\sum _{n=1}^N S_nf \end{aligned}$$
(1.1)

or more generally, of

$$\begin{aligned} \frac{1}{N}\sum _{j=1}^N S_{n_j}f. \end{aligned}$$
(1.2)

The expression in (1.1) essentially represents the Cesàro mean of order N of a function f. Its importance was first recognized by Fejér in the early 1900s. We use the word “essentially” because the original definition, usually denoted by \(\sigma _{N-1}f\), is \(\sum _{n=0}^{N-1}S_nf/N\). However, the difference between the two expressions is at most \(|S_Nf-S_0f|/N \le \sum _{0<|k|\le N}|{\hat{f}}(k)|/N\rightarrow 0\) in view of the Riemann-Lebesgue lemma. In 1904 Fejér proved [11]: If f is an integrable function that becomes infinite only at a finite number of points of the interval \(\mathbb {T}\), then

$$\begin{aligned} \sigma _Nf(x) = \frac{1}{N+1}\sum _{n=0}^{N}S_nf(x) \rightarrow \frac{f(x+0)+f(x-0)}{2}, \end{aligned}$$

at any x which is a point of continuity or a point of discontinuity of the first kind of f.

We also mention Tandori’s article [12]. There he proved that for any monotone increasing \((n_j)\) with \(j/n_j\rightarrow \infty \), there exists an integrable function f such that the de la Vallée-Poussin-like means \(\sum _{n=j}^{j+n_j-1}S_nf/n_j\) diverge almost everywhere. In [12] Tandori used a sequence of polynomials built from shifted de la Vallée-Poussin kernels. These and similar polynomials will play a central role in this article.

In contrast to the above-mentioned results of Konyagin and Totik, a classical result of Lebesgue [13] (inspired by Fejér’s results) states that the sequence \(\sigma _Nf\) does converge almost everywhere to f for any integrable f. However, the study of the analogous question for the expression in (1.2) is significantly more subtle. The first to address the latter issue was Zalcwasser. In 1936, Zalcwasser [1] proved that in the case of \(n_j=j^2\) the means (1.2) converge to f a.e. for every function \(f\in L^1\).

In his paper [14, page 394], Salem writes that this theorem of Zalcwasser extends to \(j^3\) and \(j^4\); however, this is not proved in [14]. Belinsky proved [15] the existence of a sequence \(n_j\sim \exp (\root 3 \of {j})\) such that the relation \(\frac{1}{N}\sum _{j=1}^{N}S_{n_j}f\rightarrow f\) holds a.e. for every integrable function. In this paper, Belinsky also conjectured that if the sequence \((n_j)\) is convex, then the condition \(\sup _j j^{-1/2}\log n_j <+\infty \) is necessary and sufficient. In a recent paper of the author [16], it is proved (among other results) that this is not the case.

The analogous problem for continuous functions and uniform convergence is significantly easier. Carleson proved [17] that if the sequence \((n_j)\) is convex, then the condition \(\sup _j j^{-1/2}\log n_j <+\infty \) is necessary and sufficient for the uniform convergence of (1.2) for each continuous function. For more on this issue (continuous functions and uniform convergence of Cesàro means of \((S_{n_j}f)\)) see [14, 17, 18, 19].
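For orientation, a worked example (ours, not taken from [17]): the condition holds for \(n_j=j^2\) and fails for the lacunary sequence \(n_j=2^j\), since

$$\begin{aligned} j^{-1/2}\log (j^2) = \frac{2\log j}{\sqrt{j}}\rightarrow 0, \qquad j^{-1/2}\log (2^j) = \sqrt{j}\,\log 2\rightarrow \infty . \end{aligned}$$

Thus, in the setting of continuous functions and uniform convergence, lacunary sequences behave badly, whereas, as we will see below, they behave well for integrable functions and a.e. convergence.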

Returning to the problem first examined by Zalcwasser: it is also a natural question ([1]) whether there is any sequence of indices \((n_j)\) for which there exists an integrable function f such that \(\frac{1}{N}\sum _{j=1}^N S_{n_j}f\) fails to converge to f a.e.

In this paper, we answer this long-standing unresolved problem. In addition, we provide necessary and sufficient conditions for the subsequences \(\mathcal {N}\) of \(\mathbb {N}\) that have the following property: for any subsequence \(\mathcal {N^{\prime }} = (k_j: j\in \mathbb {N})\) of \(\mathcal {N}\) and any \(f\in L^1(\mathbb {T})\) one has \(\frac{1}{N}\sum _{j=1}^N S_{k_j}f(x) \rightarrow f(x)\) for a.e. \(x\in \mathbb {T}\). In what follows we will use the term “super summability sequence” for this property. It is a stronger, more restrictive notion than that of a summability sequence: from the fact that the almost everywhere convergence \(\frac{1}{N}\sum _{j\le N} S_{n_j}f\rightarrow f\) holds for every integrable f, it does not follow that the same is true for subsequences of the sequence \((n_j)\). (Of course, if we were just looking at the Fourier sums \(S_{n_j}f\) instead of their (C, 1) means, the situation would be different and it would not make sense to talk about super-summability.)

In section 4 (“the construction”), where we introduce the polynomials needed for the counterexample, we will explain the basic ideas about how the proof of the main theorem (Theorem 3.3) works. Also, before each lemma, we will explain the meaning of the lemmas and the main ideas of their proofs. We note in advance that the most basic idea of this article is that for the de la Vallée-Poussin kernel \(\mathcal {V}_n\) we have \(S_m\mathcal {V}_{n}= \mathcal {V}_{n}\) for \(m\ge 2n-1\) and \(S_m\mathcal {V}_{n}= D_m\) (the m-th Dirichlet kernel) for \(m\le n\). This is important because the \(L^1\)-norm of \(\mathcal {V}_{n}\) is (uniformly in n) bounded and \(\Vert D_m\Vert _1 \sim \log m\).
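This basic identity can be checked directly on the level of Fourier coefficients. The following minimal sketch (ours; it only uses the normalization \(D_n= \frac{1}{2}\sum _{|k|\le n}e^{\imath kx}\) adopted in the next section, and the helper names are illustrative) verifies both cases for a small n.

```python
from fractions import Fraction

# Minimal check (not part of the proofs) that S_m V_n = V_n for m >= 2n-1 and
# S_m V_n = D_m for m <= n, on the level of Fourier coefficients, with the
# normalization D_n = (1/2) * sum_{|k|<=n} e^{ikx} used in this paper.

def dirichlet_coeffs(n, kmax):
    # Fourier coefficients of D_n: 1/2 for |k| <= n, 0 otherwise
    return {k: (Fraction(1, 2) if abs(k) <= n else Fraction(0))
            for k in range(-kmax, kmax + 1)}

def vp_coeffs(n, kmax):
    # V_n = (1/n) * sum_{j=n}^{2n-1} D_j
    c = {k: Fraction(0) for k in range(-kmax, kmax + 1)}
    for j in range(n, 2 * n):
        for k, v in dirichlet_coeffs(j, kmax).items():
            c[k] += v / n
    return c

def partial_sum(coeffs, m):
    # S_m keeps only the frequencies with |k| <= m
    return {k: (v if abs(k) <= m else Fraction(0)) for k, v in coeffs.items()}

n, kmax = 6, 30
V = vp_coeffs(n, kmax)
assert partial_sum(V, 2 * n - 1) == V                   # S_m V_n = V_n for m >= 2n-1
assert partial_sum(V, n) == dirichlet_coeffs(n, kmax)   # S_m V_n = D_m for m <= n (here m = n)
print("both identities verified for n =", n)
```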

2 Preliminaries

Throughout this article, an increasing sequence of natural numbers and the set of its members will be identified.

The system of functions \( e^{\imath nx} \quad (n=0, \pm 1, \pm 2, \dots ) \) (\(x\in \mathbb {R}, \imath = \sqrt{-1}\)) is called the trigonometric system. It is orthogonal over any interval of length \(2\pi \), specifically over \(\mathbb {T}:= [-\pi , \pi )\). Let \(f\in L^1(\mathbb {T})\), that is f is an integrable function on \(\mathbb {T}\). The kth Fourier coefficient of f is

$$\begin{aligned} {\hat{f}}(k):= \frac{1}{2\pi }\int _{\mathbb {T}}f(t) e^{-\imath kt} dt, \end{aligned}$$

where k is any integer number. The nth (\(n\in \mathbb {N}\)) partial sum of the Fourier series of f is

$$\begin{aligned} S_nf(y):= \sum _{k=-n}^{n}{\hat{f}}(k)e^{\imath ky}. \end{aligned}$$

We define the nth Dirichlet function as follows:

$$\begin{aligned} D_n(x):= \frac{1}{2}\sum _{k=-n}^{n}e^{\imath kx}. \end{aligned}$$

Then we also have (see e.g. [20])

$$\begin{aligned} \begin{aligned}&D_n(x) = \frac{1}{2} + \sum _{k=1}^{n}\cos (kx) \, (x\in \mathbb {T}), \\&D_n(x) = \frac{\sin ((n+1/2)x)}{2\sin (x/2)} \, (0\not =x\in \mathbb {T}), \quad D_n(0) = n + \frac{1}{2} \end{aligned} \end{aligned}$$
(2.1)

and

$$\begin{aligned} S_nf(y) = \frac{1}{\pi }\int _{\mathbb {T}}f(x)D_n(y-x) dx. \end{aligned}$$

We will apply several times the following trivial inequality that follows directly from the definition of \(D_n\) above.

$$\begin{aligned} \quad |D_n| \le n + \frac{1}{2}. \end{aligned}$$
(2.2)

The nth (\(n\in \mathbb {N}\)) Fejér or (C, 1) mean of the function f is defined in the following way:

$$\begin{aligned} \sigma _nf(y):= \frac{1}{n+1}\sum _{k=0}^n S_kf(y). \end{aligned}$$

It is known that

$$\begin{aligned} \sigma _nf(y) = \frac{1}{\pi }\int _{\mathbb {T}}f(x)K_n(y-x) dx, \end{aligned}$$

where the function \(K_n:= \frac{1}{n+1}\sum _{k=0}^{n}D_k\) is known as the nth Fejér kernel. We will now find an appropriate expression for it (see e.g. the book of Bary [20]), namely

$$\begin{aligned} K_n(u) = \frac{1}{2(n+1)}\left( \frac{\sin \left( \frac{u}{2}(n+1)\right) }{\sin \left( \frac{u}{2}\right) } \right) ^2. \end{aligned}$$

From this expression, one can immediately derive the following properties of the kernel. They will have an essential role later.

$$\begin{aligned}{} & {} K_n(u) \ge 0.\\{} & {} K_n(u) \le \frac{\pi ^2}{2(n+1)u^2} \quad (0< |u| \le \pi ). \end{aligned}$$

It is also known that \(\Vert K_n\Vert _1 = \pi \) (see e.g. [20]). For \(n\in \mathbb {N}\) let

$$\begin{aligned} \mathcal {V}_{n}:= \frac{1}{n}\sum _{j=n}^{2n-1}D_j \end{aligned}$$

be the nth de la Vallée-Poussin kernel function and

$$\begin{aligned} v_{n}f(y):=\frac{1}{n}\sum _{j=n}^{2n-1}S_jf(y) = \frac{1}{\pi } \int _{\mathbb {T}} f(x)\mathcal {V}_{n}(y-x) dx \end{aligned}$$

be the nth de la Vallée-Poussin mean of the integrable function f. Besides, it is a well-known fact (and an easy consequence of the same inequality for the Fejér kernel) that for any \(0\not = x\in \mathbb {T}\):

$$\begin{aligned} |\mathcal {V}_{n}(x)| \le C \frac{1}{nx^2}. \end{aligned}$$
(2.3)

We note that the same notation for a constant may represent different values at different occurrences. The following inequality is also known:

$$\begin{aligned} \Vert \mathcal {V}_{n}\Vert _1 \le 2\Vert K_{2n}\Vert _1 + \Vert K_{n}\Vert _1 = 3\pi . \end{aligned}$$
(2.4)

Besides, by (2.2)

$$\begin{aligned} |\mathcal {V}_{n}| \le \frac{1}{n}\sum _{j=n}^{2n-1}\left( j + \frac{1}{2}\right) = \frac{3n}{2}. \end{aligned}$$
(2.5)
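The following small numerical sketch (ours; it is only an informal illustration and is not used in the proofs) checks the quoted kernel facts \(\Vert K_n\Vert _1 = \pi \), \(\Vert \mathcal {V}_{n}\Vert _1 \le 3\pi \) and \(|\mathcal {V}_{n}| \le 3n/2\) for a small n, with the normalizations of this section.

```python
import numpy as np

# Informal numerical check of ||K_n||_1 = pi, ||V_n||_1 <= 3*pi and |V_n| <= 3n/2
# with the normalizations of Section 2.

def dirichlet(n, x):
    # D_n(x) = 1/2 + sum_{k=1}^n cos(kx); this form is also valid at x = 0
    return 0.5 + sum(np.cos(k * x) for k in range(1, n + 1))

def fejer(n, x):
    # K_n = (1/(n+1)) sum_{k=0}^{n} D_k
    return sum(dirichlet(k, x) for k in range(n + 1)) / (n + 1)

def vallee_poussin(n, x):
    # V_n = (1/n) sum_{j=n}^{2n-1} D_j
    return sum(dirichlet(j, x) for j in range(n, 2 * n)) / n

def l1_norm(values):
    # Riemann-sum approximation of the L^1(T) norm on the uniform grid x
    return np.abs(values).mean() * 2 * np.pi

n = 12
x = np.linspace(-np.pi, np.pi, 20000, endpoint=False)
print("||K_n||_1 ~", l1_norm(fejer(n, x)), "  (pi =", np.pi, ")")
print("||V_n||_1 ~", l1_norm(vallee_poussin(n, x)), "  (3*pi =", 3 * np.pi, ")")
print("max |V_n| =", np.abs(vallee_poussin(n, x)).max(), "  (bound 3n/2 =", 1.5 * n, ")")
```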

3 The main theorem

Definition 3.1

Let \(\mathcal {N} = (n_1, n_2, \dots )\) be a subsequence of the sequence of natural numbers. We say that \(\mathcal {N}\) is a super (C, 1)-summability sequence if for every infinite \( \mathcal {N^{\prime }} \subset \mathcal {N}\) and each \(f\in L^1\) one has

$$\begin{aligned} \lim _n \frac{1}{n}\sum _{j=1}^{n}S_{k_j}f = f \end{aligned}$$

a.e., where \(\mathcal {N^{\prime }} = (k_1, k_2, \dots )\).

We denote by \(\lambda _{k,\gamma }(\mathcal {N})\) the cardinality of the set \(\mathcal {N}\cap [\gamma ^k, \gamma ^{k+1})\). If there is no confusion, we will simply denote it by \(\lambda _{k,\gamma }\). Among other results, we verify that if \(\sup _k\lambda _{k,\gamma }<\infty \) for some \(\gamma >1\), then \(\mathcal {N}\) is a super summability sequence. The proof of this statement is based on the following observations. A sequence \((n_k)\) of positive numbers is called lacunary if there exists a \(\gamma >1\) such that \(n_{k+1}/n_k \ge \gamma \) for every k. We say that a sequence \(\mathcal {N}\) is almost lacunary when there exists a \(\gamma >1\) such that \(\sup _k\lambda _{k,\gamma }<\infty \). We remark that it is trivial to see that \(\mathcal {N}\) is almost lacunary if and only if \(\sup _k\lambda _{k, 2}<\infty \). Besides, \(\sup _k\lambda _{k,2}<\infty \) if and only if \(\mathcal {N}\) is a finite union of lacunary sequences.
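To make the counting function concrete, the short sketch below (ours; it inspects finite truncations only and therefore cannot decide a statement about \(\sup _k\)) tabulates \(\lambda _{k,2}\) for a lacunary sequence, for \(n_j=j^2\) and for \(\mathbb {N}\) itself.

```python
# Illustration of lambda_{k,gamma} = |N ∩ [gamma^k, gamma^{k+1})| on finite truncations
# of three sequences; the helper below is ours and purely illustrative.

def lambdas(seq, gamma, kmax):
    return [sum(1 for n in seq if gamma ** k <= n < gamma ** (k + 1)) for k in range(kmax)]

kmax = 14
powers_of_two = [2 ** j for j in range(1, 20)]   # lacunary: lambda_{k,2} <= 1
squares = [j * j for j in range(1, 200)]         # n_j = j^2: lambda_{k,2} grows like 2^{k/2}
naturals = list(range(1, 2 ** kmax))             # N itself: lambda_{k,2} = 2^k

for name, seq in [("2^j", powers_of_two), ("j^2", squares), ("N", naturals)]:
    print(name, lambdas(seq, 2, kmax))
```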

Any subsequence of a lacunary subsequence of the sequence of natural numbers is again lacunary. For lacunary subsequences the a.e. convergence of the (C, 1) means of \(S_{k_j}f\) is proved in [16, Corollary 1.2].

As a consequence, a lacunary \(\mathcal {N}\) is a super summability sequence. Moreover, an almost lacunary sequence is also a super summability sequence, because if the (C, 1) means of the sequences \((a_n)\) and \((b_n)\) converge to a number a, then so do the (C, 1) means of the merged sequence \((a_1, b_1, a_2, b_2, \dots )\). Hence, using the author’s recent paper ([16]) it follows that if \(\mathcal {N}\) is almost lacunary, that is (for a set A, |A| denotes its cardinality)

$$\begin{aligned} \sup _n\left| \mathcal {N}\cap \left[ 2^n, 2^{n+1} \right) \right| < \infty , \end{aligned}$$

then it is a super summability sequence.

A real sequence \((n_j)\) is said to be a convex sequence if \(2n_j \le n_{j-1} + n_{j+1}\) for \(j=2,\dots \). We mention that \((n_j)\) is convex if and only if \(n_{j+1}-n_j\) is nondecreasing in j. In this paper we will prove the following necessary and sufficient condition for convex subsequences of the sequence of natural numbers:

Theorem 3.2

Let \(\mathcal {N}\subset \mathbb {N}\) be a convex sequence. Then it is a super (C, 1)-summability sequence if and only if it is almost lacunary.
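For orientation, a worked example (ours, combining Theorem 3.2 with the results quoted in the introduction): the sequence \(n_j=j^2\) is convex but not almost lacunary, since

$$\begin{aligned} n_{j+1}-n_j = 2j+1 \nearrow \infty , \qquad \lambda _{k,2} = \left| \left\{ j\in \mathbb {N}: 2^{k/2}\le j < 2^{(k+1)/2}\right\} \right| \rightarrow \infty . \end{aligned}$$

Hence, by Theorem 3.2, \((j^2)\) is not a super (C, 1)-summability sequence, although, by Zalcwasser’s theorem quoted in the introduction, the (C, 1) means taken along the full sequence \((j^2)\) do converge a.e. for every \(f\in L^1\). On the other hand, a lacunary sequence such as \(n_j=2^j\) is both convex and almost lacunary.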

It is natural, and indeed common, to suppose that \(\mathcal {N}\) is convex. This assumption was also made in the papers of Carleson [17] and Kahane and Katznelson [21] in the investigation of the (C, 1) means of sequences \((S_{k_j}f)\) in the uniform norm for continuous functions.

The next result shows that “almost lacunarity” is “close” to being necessary even when \(\mathcal {N}\) is not necessarily convex.

Theorem 3.3

Let \(\mathcal {N}\) be an increasing sequence of natural numbers. Suppose that for every \(\gamma >1 \) and every \(L\in \mathbb {N}\) there exists an \(n\in \mathbb {N}\) such that

$$\begin{aligned} \lambda _{n,\gamma }, \lambda _{n+1,\gamma }, \dots , \lambda _{n+L-1,\gamma } \ge L \end{aligned}$$
(3.1)

then \(\mathcal {N}\) is not a super (C, 1)-summability sequence.

We actually prove more, namely that there exist an \(f\in L^1\) and an \(\mathcal {N^{\prime }} = (k_j: j\in \mathbb {N}) \subset \mathcal {N}\) such that \(\frac{1}{N}\sum _{j=1}^N S_{k_j}f\) diverges almost everywhere.

For condition (3.1) we give an equivalent form: for every \(\gamma >1 \) and every \(L\in \mathbb {N}\) there exists an \(n\in \mathbb {N}\) such that in all of the intervals

$$\begin{aligned} {[}n, n\gamma ), \dots , [n\gamma ^{L-1}, n\gamma ^{L}) \end{aligned}$$
(3.2)

there are at least L elements from \(\mathcal {N}\).
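The equivalent form can be explored experimentally. The sketch below (ours) searches, for given \(\gamma \) and L, for a witness n such that each interval \([n\gamma ^i, n\gamma ^{i+1})\), \(i=0,\dots ,L-1\), contains at least L elements of a finite truncation of \(\mathcal {N}\); of course, a finite search can exhibit witnesses but can never verify condition (3.2) itself, which quantifies over every \(\gamma >1\) and every L.

```python
# Hedged sketch (ours) of the equivalent form (3.2).  A positive answer exhibits a
# witness n for the given gamma and L; a negative answer on a finite truncation of N
# proves nothing.

def count_in(seq, a, b):
    return sum(1 for x in seq if a <= x < b)

def witness_for_3_2(seq, gamma, L, n_candidates):
    for n in n_candidates:
        if all(count_in(seq, n * gamma ** i, n * gamma ** (i + 1)) >= L for i in range(L)):
            return n
    return None

squares = [j * j for j in range(1, 2000)]                # truncation of N = (j^2)
print(witness_for_3_2(squares, 1.5, 3, range(1, 500)))   # prints a witness n
```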

Proof of Theorem 3.2

The sufficiency part has already been proved above. The necessity part of Theorem 3.2 is an easy consequence of Theorem 3.3. Indeed, we verify that a convex sequence \(\mathcal {N}\) which is not almost lacunary satisfies the condition (3.1), or equivalently (3.2). Then, we can apply Theorem 3.3. If \(\mathcal {N}\) is not almost lacunary, then for every M there is an m such that there are at least \(M+1\) elements in [m, 2m), so there is an \(n_j \in [m, 2 m)\) with \(n_{j+1}-n_j \le m / M\). But then, by convexity, for all \(1 \le k \le j\) we have \(n_{k+1}-n_k \le m / M\), hence in every interval I that lies in \(\left[ n_1, m\right] \) there are at least \(|I| M / m-1\) elements from \(\mathcal {N}\), which immediately implies property (3.2) if M is large and we select n as the largest integer for which \(n \gamma ^L \le m\). We just need to make sure that the following inequality is satisfied for the shortest of the intervals \([n\gamma ^i, n\gamma ^{i+1})\) (\(i=0,\dots , L-1\)), which is \([n, n\gamma )\):

$$\begin{aligned} n(\gamma -1)\frac{M}{m} - 1 \ge L. \end{aligned}$$

That is, if \((L+1)m/(M(\gamma -1)) +1 \le \frac{m}{\gamma ^L}\), then there must be a required n. Finally, the condition (3.2) is fulfilled. This and Theorem 3.3 complete the proof of Theorem 3.2. However, it will be far more complicated to verify Theorem 3.3. \(\square \)

A direct consequence of our results is:

Corollary 3.4

Let \(\mathcal {N}\) be an increasing sequence of natural numbers.

  1. (i)

    If \(\sup _k\lambda _{k,\gamma }<\infty \) for some \(\gamma >1\) (that is \(\mathcal {N}\) is almost lacunary), then \(\mathcal {N}\) is a super (C, 1)-summability sequence.

  2. (ii)

    If \(\lim _k\lambda _{k,\gamma } = \infty \) for each \(\gamma >1\), then \(\mathcal {N}\) is not a super (C, 1)-summability sequence.

  3. (iii)

    In particular, since \(\lambda _{k,\gamma }\rightarrow \infty \) for \(\mathcal {N}=\mathbb {N}\) and every \(\gamma >1\), the sequence of natural numbers \((n: n\in \mathbb {N})\) is not a super (C, 1)-summability sequence. That is, there exists an increasing sequence of natural numbers \((k_j)\) such that \( \frac{1}{n}\sum _{j=1}^{n}S_{k_j}f\) diverges almost everywhere for some \(f\in L^1\).

4 The construction

We first say some preliminary words about the main ideas concerning the proof of this article’s main theorem (Theorem 3.3). For the counterexample construction, we use the polynomials from Tandori’s article [12]. This article on de la Vallée-Poussin means was an inspiring one for the author. Suppose that condition (3.2) holds. There are several steps to prove Theorem 3.3. First, we define a polynomial, which will be the basis of the construction of the counterexample. Let \(K, M\in \mathbb {N}\) (K and M will vary later). By (3.2) we can choose an \(n\ge 8K\) with \(4K|n\) and

$$\begin{aligned} \begin{aligned}&\alpha _0 = n, \quad \alpha _{i+1} = 4\alpha _i, \quad |\mathcal {N}\cap [2\alpha _i, 4\alpha _i)| > M \quad (i=0, \dots , K-1). \end{aligned} \end{aligned}$$
(4.1)

That is, we have \(\alpha _i = 4^in\) (\(i=0,\dots , K-1\)). Besides, we set (the idea of setting this polynomial comes from [12]).

$$\begin{aligned} P_n(x)= & {} P_{n,K}(x) =\frac{1}{n}\sum _{j=-n/(2K)}^{n/(2K)-1}\sum _{i=0}^{K-1} \mathcal {V}_{\alpha _i}\left( x-(jK+i)\frac{2\pi }{n}\right) \nonumber \\= & {} \frac{1}{n}\sum _{j=-n/(2K)}^{n/(2K)-1}\sum _{i=0}^{K-1}\mathcal {V}_{\alpha _i}(t_{j,i}), \end{aligned}$$
(4.2)

where

$$\begin{aligned} t_{j,i} = x-(jK+i)\frac{2\pi }{n} \end{aligned}$$

taken modulo \(2\pi \). Then (2.5) implies that

$$\begin{aligned} |P_n| \le \max _{i=0,\dots , K-1} \frac{3n4^i}{2} = \frac{3n 4^{K-1}}{2} < n4^K. \end{aligned}$$
(4.3)

We give some ideas why we choose these polynomials \(P_n\), that is, what the main motivation for the construction of \(P_n\) is. We are looking for functions whose \(L^1\)-norms are (uniformly) bounded, and some Fourier partial sums of which will be “sufficiently large” on “sufficiently large” sets. A good starting point is to consider the de la Vallée-Poussin kernel functions \(\mathcal {V}_{n}\), because their norm is bounded (uniformly in n), while at the same time, for a well-chosen (i.e., \(m\le n\)) index m, \(S_m\mathcal {V}_{n}\) becomes the m-th Dirichlet kernel function (which can take “large values”, since its norm blows up like \(\log m\)), and sometimes (for \(m\ge 2n-1\)) \(S_m \mathcal {V}_{n}\) is \(\mathcal {V}_{n}\) itself. However, the Dirichlet kernel function \(D_m\) is large only around zero, so we take several de la Vallée-Poussin kernel functions, with different parameters, whose arguments are shifted by different values. This way, we can obtain a polynomial \(P_n\) some Fourier partial sums of which will be “sufficiently large” not only around zero but also on a set of “sufficiently large” measure.
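The following toy sketch (ours; the parameters \(K=2\), \(n=8\) are purely illustrative and ignore the counting condition in (4.1), which involves \(\mathcal {N}\) and M) builds \(P_{n,K}\) from the shifted kernels exactly as in (4.2) and checks numerically that \(\Vert P_n\Vert _1\le 3\pi \) and that the bound (4.3) holds.

```python
import numpy as np

# Toy construction of P_n = P_{n,K} from (4.2) with illustrative parameters.
# By the triangle inequality and (2.4), ||P_n||_1 <= 3*pi; by (4.3), |P_n| < n*4^K.

def vp_kernel(n, x):
    # V_n(x) = 1/2 + sum_{k=1}^{n} cos(kx) + sum_{k=n+1}^{2n-1} ((2n-k)/n) cos(kx),
    # obtained by averaging the Dirichlet kernels D_n, ..., D_{2n-1}
    out = np.full_like(x, 0.5)
    for k in range(1, n + 1):
        out += np.cos(k * x)
    for k in range(n + 1, 2 * n):
        out += (2 * n - k) / n * np.cos(k * x)
    return out

def P(n, K, x):
    alpha = [n * 4 ** i for i in range(K)]
    total = np.zeros_like(x)
    for j in range(-n // (2 * K), n // (2 * K)):
        for i in range(K):
            total += vp_kernel(alpha[i], x - (j * K + i) * 2 * np.pi / n)
    return total / n

n, K = 8, 2
x = np.linspace(-np.pi, np.pi, 40000, endpoint=False)
Pn = P(n, K, x)
print("||P_n||_1 ~", np.abs(Pn).mean() * 2 * np.pi, "  (<= 3*pi =", 3 * np.pi, ")")
print("max |P_n| =", np.abs(Pn).max(), "  (< n*4^K =", n * 4 ** K, ")")
```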

Then, the main idea of the counterexample construction is to consider the function \(f=\sum _jP_{n_j}/2^j\). We prove that for almost every x, there are infinitely many j such that for “sufficiently many” indices m belonging to the set \(\mathcal {N}\), \(S_m P_{n_j}\) will be large at x. Furthermore, for the other indices \(l\not =j\) the value of \(S_m P_{n_l}(x)\) will be “small”.

We set sequences of natural numbers \((a_K), (b_K)\) in a way that

$$\begin{aligned} a_K = \lfloor K/\log (K)\rfloor , \quad b_K \nearrow \infty , \quad b^2_K = o(\log (a_K)). \end{aligned}$$
(4.4)
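For concreteness, one admissible choice (this particular example is ours; any sequences satisfying (4.4) will do) is

$$\begin{aligned} a_K = \lfloor K/\log (K)\rfloor , \qquad b_K = \max \left( 3, \big \lfloor (\log a_K)^{1/3}\big \rfloor \right) , \end{aligned}$$

for which \(b_K\nearrow \infty \) and \(b_K^2 \le (\log a_K)^{2/3} = o(\log a_K)\).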

Throughout the paper, we assume that the sequences \((a_K)\) and \((b_K)\) are as defined in (4.4). We also assume that \(b_K\ge 3\) and \(K\ge 8\). We define the disjoint union of intervals

$$\begin{aligned} \begin{aligned} I_{n,K}&:=\bigcup _{j=-n/(2K)}^{n/(2K)-1}\bigcup _{i=a_K}^{K-a_K-1} \left[ (jK+i)\frac{2\pi }{n} + \frac{2\pi }{n b_K}, (jK+i+1)\frac{2\pi }{n} - \frac{2\pi }{n b_K}\right) \\&=: \bigcup _{j=-n/(2K)}^{n/(2K)-1}\bigcup _{i=a_K}^{K-a_K-1} I_{n, K, j,i}. \end{aligned} \end{aligned}$$
(4.5)

We discuss two reasons for choosing these sets:

If the terms \(\frac{2\pi }{nb_K}\) were removed from the definition of \(I_{n,K}\) and i ran from 0 to \(K-1\) (instead of from \(a_K\) to \(K-a_K-1\)), then \(I_{n,K}\) would be exactly equal to \(\mathbb {T} = [-\pi , \pi )\). Besides, its measure \(|I_{n,K}| \ge 2\pi - 4\pi /b_K - (n/K)\cdot 2a_K\cdot 2\pi /n = 2\pi (1- 2/b_K-2a_K/K)\) tends to \(2\pi \) as \(K\rightarrow \infty \). That is, the set \(I_{n,K}\) is “close to” \(\mathbb {T}\) itself, which means that assuming a real number x is an element of \(I_{n,K}\) is almost the same as making no assumption at all.

Furthermore, the construction of the set \(I_{n,K}\) exhibits (and is meant to exhibit) exactly the same shifts as the definition of the function \(P_n\). Thus, for any given \(x\in I_{n,K}\), among the summands of \(P_n\) and of \(S_mP_n\), the essential main part will be the one whose shift corresponds to the interval containing x.

We will investigate the value of \(S_mP_n(x)\) for some m, but only for \(x\in I_{n,K}\). That is, let \(x_0\in \left\{ a_K, \dots , K-a_K-1\right\} \) and \(x_1\in \left\{ -n/(2K), \dots , n/(2K)-1\right\} \) be the unique numbers for which \(x = (x_1K + x_0)\frac{2\pi }{n} + \Delta \) with some \(\frac{2\pi }{nb_K}\le \Delta < \frac{2\pi }{n}-\frac{2\pi }{nb_K}\).

Now, pick any natural number m such that

$$\begin{aligned} 2\alpha _{x_0} \le m \le 4\alpha _{x_0} = \alpha _{x_0+1}. \quad \text{ Then } S_m \mathcal {V}_{\alpha _i} = {\left\{ \begin{array}{ll} \mathcal {V}_{\alpha _i}, &{} \text{ if } i\le x_0 \\ D_m, &{} \text{ otherwise }. \end{array}\right. } \end{aligned}$$

Therefore,

$$\begin{aligned} nS_mP_n(x)= & {} \sum _{j=-n/(2K)}^{n/(2K)-1}\sum _{i=0}^{K-1}S_m\left( \mathcal {V}_{\alpha _i}\right) \left( x-(jK+i)\frac{2\pi }{n}\right) \nonumber \\= & {} \sum _{\begin{array}{c} j\not =x_1 \end{array}}\sum _{i=0}^{x_0}\mathcal {V}_{\alpha _i}\left( t_{j,i}\right) + \sum _{\begin{array}{c} j\not =x_1 \end{array}}\sum _{i=x_0+1}^{K-1}D_m\left( t_{j,i}\right) \nonumber \\{} & {} + \sum _{i=0}^{x_0-1}\mathcal {V}_{\alpha _i}\left( x-(x_1K+i)\frac{2\pi }{n}\right) + \mathcal {V}_{\alpha _{x_0}}\left( x-(x_1K+x_0)\frac{2\pi }{n}\right) \nonumber \\{} & {} + D_m\left( x-(x_1K+x_0+1)\frac{2\pi }{n}\right) + \sum _{i=x_0+2}^{K-1}D_m\left( x-(x_1K+i)\frac{2\pi }{n}\right) \nonumber \\=: & {} n\sum _{a=1}^{6}R_a(x). \end{aligned}$$
(4.6)

The point of this decomposition is to determine what the principal part of the function \(S_mP_n(x)\) for a given x exactly is and to show that it behaves roughly like a (shifted) Dirichlet kernel function. The following simple lemma shows that the members of this decomposition are not “essential” except for the second and sixth ones, that is, they are bounded or grow “very slowly”.

Lemma 4.1

Suppose that \(K\ge 8\). For \(x\in I_{n,K}\) we have

$$\begin{aligned} |R_1(x)| + |R_3(x)| + |R_4(x)| + |R_5(x)| \le C b^2_K. \end{aligned}$$
(4.7)

Proof

For \(a=1\) we use (2.3) and (4.1):

$$\begin{aligned} \begin{aligned} |R_1(x)|&\le \frac{1}{n}\sum _{\begin{array}{c} -n/(2K)\le j< n/(2K),\\ j\not =x_1 \end{array}}\sum _{i=0}^{x_0} C\frac{1}{\alpha _i \left| (x_1K+x_0)\frac{2\pi }{n}+\Delta -(jK+i)\frac{2\pi }{n}\right| ^2}\\&\le \frac{1}{n}\sum _{\begin{array}{c} -n/(2K)\le j< n/(2K),\\ j\not =x_1, x_1+1 \end{array}}\sum _{i=0}^{x_0} C \frac{n^2}{n|x_1-j|^2K^2} \\&\quad + \frac{1}{n}\sum _{i=0}^{x_0} C \frac{1}{4^i n\left| (x_0-i-K)\frac{2\pi }{n} + \Delta \right| ^2}\\&\le C/K + C/a_K^2 \le C/K. \end{aligned} \end{aligned}$$

Similarly, for \(R_3(x)\) (taking into account again that \(n = \alpha _0< \dots < \alpha _{K-1}\)):

$$\begin{aligned} \begin{aligned}&|R_3(x)| \le \frac{1}{n}\sum _{i=0}^{x_0-1} C\frac{1}{\alpha _i \left| (x_0-i)\frac{2\pi }{n}+\Delta \right| ^2} \le C. \end{aligned} \end{aligned}$$

For \(R_4(x)\) and \(R_5(x)\), by (2.3) and (4.5), for \(x\in I_{n,K}\) we have

$$\begin{aligned} \begin{aligned}&|R_4(x)| \le \frac{1}{n} C\frac{1}{\alpha _{x_0} \left| \Delta \right| ^2} \le C b^2_K, \end{aligned} \end{aligned}$$

because \(\frac{2\pi }{n b_K} \le \Delta \). Besides, with the well-known inequality \(|D_m(x)|\le C/|x| \, (0\not =x\in \mathbb {T})\):

$$\begin{aligned} \begin{aligned}&|R_5(x)| \le \frac{1}{n} C \frac{1}{|\Delta -\frac{2\pi }{n}|} \le Cb_K, \end{aligned} \end{aligned}$$

by \(\Delta < \frac{2\pi }{n}(1- 1/b_K)\). These prove

$$\begin{aligned} |R_1(x)| + |R_3(x)| + |R_4(x)| + |R_5(x)| \le Cb^2_K. \end{aligned}$$

\(\square \)

In the sequel, we give a lower bound on the function \(R_6\) and then turn our attention to the most complicated case of giving an upper bound for \(R_2(x)\). These will show that the function \(S_mP_n(x)\) (see (4.6)) behaves essentially like the function \(R_6\). We say a few words about the conditions of the following lemma and their background. This lemma states that \(R_6(x)=\frac{1}{n}\sum _{i=x_0+2}^{K-1}D_m(x-(x_1K+i)2\pi /n)\ge \log a_K/60\). This depends on the fact that the arguments of the Dirichlet kernel functions in question are “close to zero”, since \(x = (x_1K+x_0)2\pi /n+\Delta \). Furthermore, this will be true for quite a few indices \(m=m_{i,s}\). The significance of this is that their mean will also be “large”. This will require careful adjustment of some of the parameters of the lemma.

The equality \(\alpha _{i+1} = 4\alpha _i\) in (4.1) will be important when we choose the indices \(m_{i,s}\) (playing the role of m) around \(2\alpha _i\), so that \(S_m\mathcal {V}_{\alpha _h}\) can be nothing other than either \(\mathcal {V}_{\alpha _h}\) or \(D_m\).

Clearly, the length of the interval \(I_{n, K, j,i}\) (forming the set \(I_{n, K}\)) is “approximately” \(2\pi /n\) (exactly \(2\pi /n-4\pi /(nb_K)\) ). We will choose the sets \(I^{\prime }_{n, K, j,i}\) (unions of subintervals) in a way to comply with the following criterion: the function \(R_6(x)\) behaves essentially as \(\sum _h\sin (m(x-(x_1K+h)2\pi /n))/(x_0-h)\), where h is an element of the set \(\{x_0+2,\dots , K-1\}\) for each \(x=(x_1K + x_0)2\pi /n + \Delta \in I^{\prime }_{n, K, j,i}\).

It will also play an important role that \(k_i\), to be defined later, is divisible by n. More precisely, there will be enough \(m_{i,s}\in \mathcal {N}\) (around \(k_i\)) whose residue modulo n is sufficiently small. It will follow that for any \(m=m_{i,s}\) the function \(R_6(x)\) behaves essentially like \(-\sin (k_i\Delta )\log (K)\). Then it follows that the inequality \(\sin (k_i\Delta ) <-1/2\), which is satisfied on at least one fourth (in measure) of the interval \(I_{n,K,j,i}\), provides a proper definition of the set \(I^{\prime }_{n,K,j,i}\).
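Behind the “one fourth” claim lies the following elementary observation (this remark is ours and is only meant as orientation): within one full period of the sine function,

$$\begin{aligned} \left| \left\{ t\in [0, 2\pi ): \sin t < -\tfrac{1}{2}\right\} \right| = \frac{11\pi }{6} - \frac{7\pi }{6} = \frac{2\pi }{3}, \end{aligned}$$

and since \(k_i\) is a multiple of n with \(k_i/n = 2\cdot 4^i\) large for \(i\ge a_K\), the function \(x\mapsto \sin (k_ix)\) runs through many full periods along \(I_{n,K,j,i}\), whose length is about \(2\pi /n\); hence the proportion of \(I_{n,K,j,i}\) on which \(\sin (k_ix)<-1/2\) is close to \(1/3>1/4\) for large K.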

Lemma 4.2

Suppose that (3.2) holds. Then for every \(0< \epsilon < 1/(35\pi )\) and sufficiently large \(K \in \mathbb {N}\) there is an \(n\ge 8K\) divisible by 4K such that with the previous notations (see (4.1), (4.6) and (4.4))

  1. (i)

    for every \(i\in \{0,\dots , K-1\}\) there are at least (2K)! elements \(\{m_{i,s}\}_{s=1}^{(2K)!}\) of \(\mathcal {N}\) in the interval \([2\alpha _i, 2\alpha _i + \epsilon n/K]\),

  2. (ii)

    for every \(j\in \{-n/(2K), \dots , n/(2K)-1\}\), \(i\in \{a_K, \dots , K-a_K-1\}\) and \(x\in I^{\prime }_{n, K, j,i}\) we have

    $$\begin{aligned} R_6(x) \ge \frac{\log a_K}{60}, \end{aligned}$$

    where \(m \in [2\alpha _i, 2\alpha _i + \epsilon n/K]\) and \(I^{\prime }_{n, K, j,i}\) is a finite union of subintervals of \( I_{n, K, j,i}\) (see (4.5)) for which \(4|I^{\prime }_{n, K, j,i}| \ge |I_{n, K, j,i}|\). That is, we give a lower bound on the function \(R_6\) on the set

    $$\begin{aligned} I_{n,K}^{\prime } := \bigcup _{j=-n/(2K)}^{n/(2K)-1}\bigcup _{i=a_K}^{K-a_K-1} I^{\prime }_{n, K, j,i}. \end{aligned}$$
    (4.8)

Proof

We repeat the opening lines of the section “the construction” (see (4.1)):

$$\begin{aligned} 8K \le n = \alpha _0< \alpha _1< \dots < \alpha _{K-1} = n4^{K-1}, \quad \alpha _{i+1} = 4\alpha _i. \end{aligned}$$

Then for any i, the left endpoint \(k_i = 2\alpha _i = 2^{2i+1}n\) of the interval \([2\alpha _i, 4\alpha _{i})\) is divisible by n. Let

$$\begin{aligned} \gamma = 1 + \frac{2\epsilon }{3K4^{K}}. \end{aligned}$$

By condition (3.2) we can suppose that n is “so large” (and divisible by 4K) that

$$\begin{aligned} \left| \mathcal {N}\cap [n\gamma ^{u+1}, n\gamma ^{u+2})\right| \ge (2K)! \end{aligned}$$

for \(u \in \{0, \dots , \lceil 2K\log _{\gamma }(2)\rceil \}\). For \(k_i=2\alpha _i\) set u in a way that

$$\begin{aligned} n\gamma ^u \le k_i < n\gamma ^{u+1}. \end{aligned}$$

Since \(i=0,\dots , K-1\), we have \(k_i =2\alpha _i < 4\alpha _{K-1} = 4 n4^{K-1} = n4^K\) and \(n4^{K} \le n\gamma ^{\lceil 2K\log _{\gamma }(2)\rceil }\), so “we have enough” values of u. Then for a fixed i (and u) choose \(m_{i,s}\in \mathcal {N}\cap [n\gamma ^{u+1}, n\gamma ^{u+2})\), \(s=1,\dots , (2K)!\). Let i run from 0 to \(K-1\). Finally, we have

$$\begin{aligned} \begin{aligned} 0&\le m_{i,s} - k_i \le n\gamma ^{u+2} - n\gamma ^{u} = n\gamma ^{u}(\gamma ^2-1) \le k_i (\gamma ^2-1) \le 2\alpha _{K-1}(\gamma ^2-1) \\&< 2n 4^{K-1}(\gamma -1)3 = \frac{3}{2} n4^{K}\frac{2\epsilon }{3K4^{K}} = \epsilon n/K \end{aligned} \end{aligned}$$

for every i and s. We define the sets

$$\begin{aligned} \begin{aligned} I^{\prime }_{n, K, j,i}&:= \left\{ x\in I_{n, K, j,i} : \sin (k_ix)< -1/2\right\} \\&= \Biggl \{x\in \Biggl [(jK+i)\frac{2\pi }{n} + \frac{2\pi }{nb_K}, (jK+i+1)\frac{2\pi }{n} - \frac{2\pi }{nb_K}\Biggr ) \\&\quad : \sin (k_ix) < -1/2\Biggr \}. \end{aligned} \end{aligned}$$
(4.9)

Then \(4|I^{\prime }_{n, K, j,i}| \ge |I_{n, K, j,i}| = \frac{2\pi }{n}(1-2/b_K)\) for every i, j if K is large enough. For an \(x\in I^{\prime }_{n, K, j,i}\) we check the values of \(R_6(x)\), where m is any element of \([2\alpha _i, 2\alpha _i + \epsilon n/K]\) (\(i=a_K,\dots , K-a_K-1\)).

With the help of (2.1) it is straightforward to obtain

$$\begin{aligned} D_m(t) = \frac{\sin (mt)}{2\tan (t/2)} + \frac{\cos (mt)}{2}. \end{aligned}$$
(4.10)

Let m be any integer in \([2\alpha _i, 2\alpha _i + \epsilon n/K]\) and express m as \(m=m_1+m_0\), where \(n|m_1, 0\le m_0<n\). That is, \(m\equiv m_0\) modulo n. Then of course \(m_1 = 2\alpha _i = k_i\). Besides, we also have that \(0\le m_0 \le \epsilon n/K\), where \(1/(35\pi )> \epsilon >0\) (as we said \(\epsilon \) is some fixed “small” positive number). With the help of

$$\begin{aligned} \left| \sin (\alpha +\beta ) - \sin (\alpha )\right| = \left| 2\sin (\frac{\beta }{2})\cos (\alpha + \frac{\beta }{2})\right| \le |\beta | \end{aligned}$$

we have for \(0\le h < K\)

$$\begin{aligned} \begin{aligned}&\left| \sin m\left( x-(x_1K+h)\frac{2\pi }{n}\right) - \sin (m_1x)\right| \\&\quad = \left| \sin m\left( x-(x_1K+h)\frac{2\pi }{n}\right) - \sin (m_1\Delta )\right| \\&\quad = \left| \sin \left( m_1\Delta + m_0\Delta + m_0(x_0-h)\frac{2\pi }{n}\right) - \sin (m_1\Delta )\right| \\&\quad \le m_0\left( \frac{2\pi }{n} + K \frac{2\pi }{n}\right) \le \epsilon 6\pi . \end{aligned} \end{aligned}$$
(4.11)

Returning to the notation of this lemma, we emphasize that later we will set \(m = m_{i,s}\) and \(m_1 = k_i = 2\alpha _i\) (for some i, s). That is, \(m_{i,s} = k_i+m_{0,i,s}\), where \(n| k_i\) and \(0\le m_{0,i,s} \le \epsilon n/K\). But now, in this lemma, m is an arbitrary integer in the interval \([2\alpha _i, 2\alpha _i + \epsilon n/K]\).

For any \(x\in I_{n,K}^{\prime }\) there exist unique \(i=0,\dots , K-1\) (actually even \(a_K\le i\le K-a_K-1\)) and \(j=-n/(2K),\dots , n/(2K)-1\) such that we have

$$\begin{aligned} x\in I_{n,K, j,i} = \left[ (jK+i)\frac{2\pi }{n} + \frac{2\pi }{nb_K}, (jK+i+1)\frac{2\pi }{n} - \frac{2\pi }{nb_K}\right) . \end{aligned}$$

That is, \(x = (x_1K + x_0)2\pi /n + \Delta \) means \(x_1=j, x_0=i\) here. Then, since \(n|k_i\), we have \(\sin (m_1\Delta ) = \sin (k_i\Delta ) = \sin (k_ix) < -1/2\). Moreover, by (4.6), (4.10), (4.11), \(0<\epsilon < 1/(35\pi )\) and \(x_0 < K - a_K\)

$$\begin{aligned}{} & {} R_6(x)- \frac{1}{n}\sum _{h=x_0+2}^{K-1} \frac{\cos (m\left( x-(x_1K+h)\frac{2\pi }{n}\right) )}{2}\\{} & {} \quad =\frac{1}{n}\sum _{h=x_0+2}^{K-1}\frac{\sin \left( m\left( x-(x_1K+h)\frac{2\pi }{n}\right) \right) }{2\tan \left( \frac{\Delta }{2} + (x_0-h)\frac{\pi }{n} \right) }\\{} & {} \quad \ge \left( \frac{1}{2}- \epsilon 6\pi \right) \frac{1}{n}\sum _{h=x_0+2}^{K-1}\frac{1}{2\tan \left( -\frac{\Delta }{2} + (h-x_0)\frac{\pi }{n} \right) }\\{} & {} \quad \ge \frac{1}{50}\frac{1}{n}\sum _{h=x_0+2}^{K-1}\frac{n}{h-x_0} \ge \frac{\log (K-x_0)}{55} - \frac{1}{50}. \end{aligned}$$

Then \(a_K=o(K)\) (see (4.4)) implies for large K:

$$\begin{aligned} R_6(x)\ge \frac{\log a_K}{60}. \end{aligned}$$

\(\square \)

In the following lemma, we use the same notation as in the one we just verified. For instance, \(0<\epsilon < \frac{1}{35\pi }\) is some fixed real number. As we wrote earlier, the decomposition of the function \(nS_mP_n\) in (4.6) is important in order to determine what the essential part of the function \(S_mP_n\) is. In the previous lemma (Lemma 4.2), we saw that \(R_6(x)\) becomes the “essential”, i.e., the “dominant” part, provided we also prove that \(R_2(x)= o(\log K)\). The proof of this (somewhat simplified) consists of the following steps:

First, we show that the Dirichlet kernel function \(D_m\) at the point \(t_{j,i}\) is essentially \(\sin (mt_{j,i})\) divided by \(2\tan (t_{j,i}/2)\), where \(t_{j,i}=x-(jK+i)2\pi /n = (pK-k)2\pi /n +\Delta \) and \(-n/(2K) \le j< n/(2K), 0\le i\le K-1\) and \(p=x_1-j, k=i-x_0\). After some consideration, this allows us to estimate the function \(R_2(x)\) by the following expression (plus a term coming from \(|p|=1\)).

$$\begin{aligned}{} & {} \left| \frac{K}{n}\sum _{\begin{array}{c} 1< |p|\le n/(2K) \end{array}} \frac{\cos \left( m_0 pK\frac{2\pi }{n}\right) }{2\tan ((pK-k)\frac{\pi }{n}+\Delta /2)}\right| +\left| \frac{K}{n}\sum _{\begin{array}{c} 1< |p|\le n/(2K) \end{array}} \frac{\sin \left( m_0 pK\frac{2\pi }{n}\right) }{2\tan ((pK-k)\frac{\pi }{n}+\Delta /2)}\right| . \end{aligned}$$

Finally, we will see that both sums in the line above are bounded. The only thing that will be necessary for this step is that the function \(\tan (x)\) is odd. Let us then formulate this lemma (Lemma 4.3) and see its detailed proof below.

Lemma 4.3

Let K, n, m be chosen as in Lemma 4.2. Then for every \(x\in I_{n, K}\) the estimate \(|R_2(x)| \le C\log (K/a_K)\) is valid.

Proof

We recall (4.6) that

$$\begin{aligned} R_2(x) = \frac{1}{n}\sum _{\begin{array}{c} -n/(2K)\le j< n/(2K),\\ j\not =x_1 \end{array}}\sum _{i=x_0+1}^{K-1}D_m\left( t_{j,i}\right) \end{aligned}$$

and then by (4.10) it is enough to prove for

$$\begin{aligned} R_{2,1}(x):= \frac{1}{n}\sum _{\begin{array}{c} -n/(2K)\le j< n/(2K),\\ j\not =x_1 \end{array}}\sum _{i=x_0+1}^{K-1}\frac{\sin (mt_{j,i})}{2\tan (t_{j,i}/2)} \end{aligned}$$

that \(|R_{2,1}|\le C\log (K/a_K)\), where

$$\begin{aligned} t_{j,i} = x-(jK+i)\frac{2\pi }{n} = ((x_1-j)K+x_0-i)\frac{2\pi }{n}+\Delta =: (pK-k)\frac{2\pi }{n}+\Delta , \end{aligned}$$

\(-n/(2K) \le j< n/(2K), 0\le i\le K-1\) and \(p=x_1-j\), \(k=i-x_0\). Subtraction \(p=x_1-j\) is taken modulo n/K (so that |p| should be at most n/(2K) in the following summation) which is possible in view of \(2\pi \)-periodicity. That is, we have to check the absolute value of

$$\begin{aligned} R_{2,1}(x) = \frac{1}{n}\sum _{\begin{array}{c} 1\le |p|\le n/(2K) \end{array}}\sum _{k=1}^{K-x_0-1}\frac{\sin m((pK-k)\frac{2\pi }{n}+\Delta )}{2\tan ((pK-k)\frac{\pi }{n}+\Delta /2)}. \end{aligned}$$

If \(p=1\), then the corresponding sum in \(R_{2,1}(x)\) can be estimated by \(\frac{1}{n}\sum _{k=1}^{K-x_0-1}\frac{Cn}{K-k}\le C\log (K/x_0) \le C\log (K/a_K)\). In a similar vein, for \(p=-1\) the corresponding sum is bounded, so from now on we can assume that \(|p|>1\).

Again, let \(m = m_1 + m_0\), where \(n|m_1\) and \(0\le m_0\le \epsilon n/K\). By the addition formulas for sine and cosine functions we have

$$\begin{aligned} \begin{aligned} \sin m\left( (pK-k)\frac{2\pi }{n}+\Delta \right)&= \sin (m_1\Delta )\cos m_0\left( (pK-k)\frac{2\pi }{n}+\Delta \right) \\&\quad + \cos (m_1\Delta ) \sin m_0\left( (pK-k)\frac{2\pi }{n}+\Delta \right) \\&=\sin (m_1\Delta )\left[ \cos \left( m_0 pK\frac{2\pi }{n}\right) \cos m_0\left( k\frac{2\pi }{n}-\Delta \right) \right. \\&\quad \left. + \sin \left( m_0 pK\frac{2\pi }{n}\right) \sin m_0\left( k\frac{2\pi }{n}-\Delta \right) \right] \\&\quad + \cos (m_1\Delta ) \left[ \sin \left( m_0 pK\frac{2\pi }{n}\right) \cos m_0\left( k\frac{2\pi }{n}-\Delta \right) \right. \\&\quad \left. - \cos \left( m_0 pK\frac{2\pi }{n}\right) \sin m_0\left( k\frac{2\pi }{n}-\Delta \right) \right] . \end{aligned} \end{aligned}$$
(4.12)

Now, by (4.12) we bound \(|R_{2,1}(x)|\) in the following way: we give upper bounds in the case of every \(k=1,\dots , K-1\) for

$$\begin{aligned} |R_{2,1,1}(x)|:= \left| \frac{K}{n}\sum _{\begin{array}{c} 1< |p|\le n/(2K) \end{array}} \frac{\cos \left( m_0 pK\frac{2\pi }{n}\right) }{2\tan ((pK-k)\frac{\pi }{n}+\Delta /2)}\right| \end{aligned}$$

and

$$\begin{aligned} |R_{2,1,2}(x)|:= \left| \frac{K}{n}\sum _{\begin{array}{c} 1< |p|\le n/(2K) \end{array}} \frac{\sin \left( m_0 pK\frac{2\pi }{n}\right) }{2\tan ((pK-k)\frac{\pi }{n}+\Delta /2)}\right| . \end{aligned}$$

We shall prove that \(|R_{2,1,1}(x)|, |R_{2,1,2}(x)|\le C\) for every \(k=1,\dots , K-1\), which, together with (4.12) will imply that the sum of terms in \(R_{2,1}\) with \(|p|>1\) is bounded. This fact, and the already discussed cases \(p=1\) and \(p=-1\) yield that \(|R_{2,1}(x)|\le C\log (K/a_K)\) completing the proof of Lemma 4.3. In both sums (\(R_{2,1,1}, R_{2,1,2}\)) we can assume that \(|p|\le n/(4K)\) (that is, \(|pK\pi /n| \le \pi /4\)) because both sums have terms bounded by some C in other cases, so there is nothing to prove where \(|pK\pi /n| > \pi /4\). More precisely, we have an absolute upper bound for the sum of these members (in absolute values) of \(R_{2,1,1}(x)\) and \(R_{2,1,2}(x)\).

Let us start this investigation with \(|R_{2,1,1}(x)|\) and see what happens if p changes its sign in the sum. We use the well-known trigonometric formula

$$\begin{aligned} \tan \alpha + \tan \beta = \tan (\alpha +\beta ) (1-\tan \alpha \tan \beta ). \end{aligned}$$

Hence,

$$\begin{aligned} \begin{aligned}&\left| \frac{1}{2\tan \left( (pK-k)\frac{\pi }{n}+\Delta /2\right) }+\frac{1}{2\tan ((-pK-k)\frac{\pi }{n}+\Delta /2)}\right| \\&\quad \le C\frac{|\tan \alpha + \tan \beta |}{p^2K^2/n^2} \\&\quad \le C\frac{n^2}{p^2K^2} |\tan (\alpha +\beta )| (1 + |\tan \alpha \tan \beta |)\\&\quad \le C\frac{n^2}{p^2K^2}\frac{K}{n}\left( 1 + \left( \frac{pK}{n}\right) ^2\right) \\&\quad \le C\frac{n}{p^2K}, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \alpha = \frac{pK\pi }{n} - \frac{k\pi }{n} + \frac{\Delta }{2}, \quad \beta = \frac{-pK\pi }{n} - \frac{k\pi }{n} + \frac{\Delta }{2}. \end{aligned}$$

This immediately gives

$$\begin{aligned} \begin{aligned} |R_{2,1,1}(x)|&\le C + \left| \frac{K}{n}\sum _{\begin{array}{c} 1< |p|\le n/(4K) \end{array}} \frac{\cos \left( m_0 pK\frac{2\pi }{n}\right) }{2\tan ((pK-k)\frac{\pi }{n}+\Delta /2)}\right| \\&\le C + C\frac{K}{n}\sum _{\begin{array}{c} 1< p\le n/(4K) \end{array}} \frac{n}{p^2K}\le C + C. \end{aligned} \end{aligned}$$

Therefore, in order to complete the proof of Lemma 4.3 we have to investigate \(R_{2,1,2}(x)\). Similarly as above

$$\begin{aligned} \begin{aligned}&\left| \frac{1}{2\tan ((pK-k)\frac{\pi }{n}+\Delta /2)}-\frac{1}{2\tan (pK\frac{\pi }{n})}\right| \\&\quad \le C\frac{|\tan \alpha - \tan \beta |}{p^2K^2/n^2} \\&\quad \le C\frac{n^2}{p^2K^2} |\tan (\alpha -\beta )| (1 + |\tan \alpha \tan \beta |)\\&\quad \le C\frac{n^2}{p^2K^2}\frac{K}{n}\left( 1 + \left( \frac{pK}{n}\right) ^2\right) \\&\quad \le C\frac{n}{p^2K}, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \alpha = \frac{pK\pi }{n} - \frac{k\pi }{n} + \frac{\Delta }{2}, \quad \beta = \frac{pK\pi }{n}. \end{aligned}$$

Finally, since

$$\begin{aligned} \frac{K}{n}\sum _{\begin{array}{c} 1< |p|\le n/(4K) \end{array}}\frac{n}{p^2K}\le C, \end{aligned}$$

the only inequality we need to have \(|R_{2,1,2}(x)|\le C\) is:

$$\begin{aligned} \frac{K}{n}\left| \sum _{\begin{array}{c} 1< |p|\le n/(4K) \end{array}} \frac{\sin \left( m_0 pK\frac{2\pi }{n}\right) }{2\tan (pK\frac{\pi }{n})}\right| \le C. \end{aligned}$$

This inequality is a direct consequence of (4.10) and the following Lemma 4.4 in the case of \(L=n/K, m=m_0, a=4\). We note that each term in the sum above in Lemma 4.4 is at most \(m_0+1< L\). Also, recall that we earlier assumed \(m_0 \le \epsilon n/K\). \(\square \)

In the proof of the main theorem, the construction of the counterexample function will be given as \(f=\sum \frac{1}{2^j} P_{n_j}\). We will need to investigate the partial sums of the Fourier series of f, that is, \(S_mf\). In the cases where \(n_j\) is “relatively large compared to” m, \(S_mP_{n_j}\) will be the sum of shifted versions of the Dirichlet kernel function \(D_m\) instead of the de la Vallée-Poussin kernel functions shown in the definition of \(P_{n_j}\) (4.2).

We will discuss this case with the help of the following lemma (Lemma 4.4).

Lemma 4.4

Let L, a, m be positive integers, \(m\le L/2\). Then

$$\begin{aligned} \frac{1}{L}\left| \sum _{s=0}^{L/a-1}D_m\left( \frac{s2\pi }{L}\right) \right| \le C_a \end{aligned}$$

with some constant \(C_a>0\) depending only on a.

Proof

Recall that (2.1)

$$\begin{aligned} D_m(t) = \frac{1}{2} + \cos t + \dots + \cos mt = \frac{1}{2} + \Re \sum _{k=1}^{m} e^{\imath kt}. \end{aligned}$$

Set (just for the proof of this lemma) \(R:= \lfloor L/a\rfloor \). To prove the lemma, we have to investigate the real part of

$$\begin{aligned} \sum _{s=0}^{R-1}\sum _{k=1}^{m} e^{\imath ks \frac{2\pi }{L}}. \end{aligned}$$

Change the order of summation and see what happens in the inner sum:

$$\begin{aligned} \sum _{s=0}^{R-1} e^{\imath ks \frac{2\pi }{L}} = \frac{e^{\imath \frac{2kR\pi }{L}}-1}{e^{\imath \frac{2k\pi }{L}}-1} = \left( \cos \frac{2kR\pi }{L} -1 + \imath \sin \frac{2kR\pi }{L}\right) \left( \frac{-1}{2}-\frac{\imath }{2}\cot \frac{2\pi k}{2L}\right) . \end{aligned}$$

If \(a=1\), then by \(k\le m \le L/2=R/2\) the sum above is zero. That is, we can suppose \(a\ge 2\). We first discuss the case \(a=2\). Then,

$$\begin{aligned} \Re \sum _{s=0}^{R-1}e^{\imath \frac{2\pi ks}{L}} = \frac{1}{2}\left( 1-\cos \frac{2\pi Rk}{L} + \sin \frac{Rk2\pi }{L}\cot \frac{k\pi }{L}\right) . \end{aligned}$$

Therefore, we only need to give an upper bound for the absolute value of the following sum:

$$\begin{aligned} \frac{1}{L}\sum _{k=1}^{m}\sin \frac{Rk2\pi }{L}\cot \frac{k\pi }{L}. \end{aligned}$$
(4.13)

Since \(R = \lfloor L/2\rfloor \), we have \(L = 2R + L_0\), where \(L_0\in \{0, 1\}\).

$$\begin{aligned} - \cot \frac{k\pi }{L} + \cot \frac{k\pi }{2R} = \left( 1 + \cot \frac{k\pi }{L}\cot \frac{k\pi }{2R}\right) \tan k\pi \left( \frac{1}{L} - \frac{1}{2R}\right) \end{aligned}$$

gives

$$\begin{aligned} \left| \cot \frac{k\pi }{L} - \cot \frac{k\pi }{2R}\right| \le C \frac{L^2}{k^2} k \frac{|2R - L|}{L^2} \le C\frac{1}{k}. \end{aligned}$$

Then

$$\begin{aligned} \frac{1}{L}\left| \sum _{k=1}^{m}\sin \frac{Rk2\pi }{L}\left( \cot \frac{k\pi }{L}- \cot \frac{k\pi }{2R}\right) \right| \le C\frac{\log m}{L}. \end{aligned}$$
(4.14)

By (4.13) and (4.14), it is enough to have an upper bound for the absolute value of

$$\begin{aligned} \frac{1}{L}\sum _{k=1}^{m}\sin \frac{Rk2\pi }{L} \cot \frac{k\pi }{2R} \end{aligned}$$
(4.15)

in order to complete the proof of Lemma 4.4. Let \(k = 2k_1 + k_0, k_0\in \{0, 1\}\), that is, \(k\equiv k_0\) modulo 2. The term with \(k=1\) in (4.15) is easily seen to be at most \(CR\le CL\) which divided by L gives at most a constant. Therefore, in what follows we may assume \(k_1\not = 0\). Then by

$$\begin{aligned} \begin{aligned}&\frac{1}{L}\sum _{k=1, k_1\not =0}^{m}\left| \cot \frac{(2k_1+k_0)\pi }{2R}- \cot \frac{k_1\pi }{R}\right| \\&\quad \le \frac{1}{L}\sum _{k=1, k_1\not =0}^{m} \left| 1 + \cot \frac{k\pi }{2R}\cot \frac{2k_1\pi }{2R}\right| \left| \tan \frac{k_0\pi }{2R}\right| \\&\quad \le C\frac{1}{L}\sum _{k=1}^{\infty }\frac{R^2}{k^2}\frac{1}{R}\le C. \end{aligned} \end{aligned}$$

Thus, instead of (4.15) it is enough to investigate

$$\begin{aligned} \frac{1}{L}\sum _{k_1=1}^{m/2}\sum _{k_0=0}^{1}\sin \frac{2\pi (2k_1+k_0)R }{L} \cot \frac{k_1\pi }{R}. \end{aligned}$$
(4.16)

Let us examine the inner sum in (4.16). It equals

$$\begin{aligned} \Im \left( \sum _{k_0=0}^{1}e^{2\pi \imath \left( \frac{2k_1R}{L} + \frac{k_0R}{L}\right) } \right) = \Im \left( e^{2\pi \imath \frac{2k_1R}{L}}\cdot \frac{e^{2\pi \imath \frac{2R}{L}}-1}{e^{2\pi \imath \frac{R}{L}}-1}\right) . \end{aligned}$$

Note that \(|e^{2\pi \imath \frac{R}{L}}-1| \ge C\) (recall that \(R=\lfloor L/2\rfloor \)) and, besides,

$$\begin{aligned} \begin{aligned} \left| e^{2\pi \imath \frac{2R}{L}}-1\right| = \left| e^{2\pi \imath \frac{2R}{2R+L_0}}-e^{2\pi \imath \frac{2R}{2R}}\right| = \left| e^{2\pi \imath \frac{-2RL_0}{2R(2R+L_0)}}-1\right| \le \frac{C}{R}. \end{aligned} \end{aligned}$$

Thus, we get the following estimation for (4.16):

$$\begin{aligned} \begin{aligned} \left| \frac{1}{L}\sum _{k_1=1}^{m/2}\sum _{k_0=0}^{1}\sin \frac{2\pi (2k_1+k_0)R }{L} \cot \frac{k_1\pi }{R}\right|&\le C\frac{1}{L}\sum _{k_1=1}^{m/2}\frac{1}{R}\left| \cot \frac{k_1\pi }{R}\right| \\&\le C\frac{1}{L}\sum _{k_1=1}^{m/2}\frac{1}{k_1} \\&\le C \frac{\log m}{L} \le C. \end{aligned} \end{aligned}$$

This completes the proof of Lemma 4.4 in the case \(a=2\). The case \(a\ge 3\) is handled by the following consideration.

$$\begin{aligned} \begin{aligned} \frac{1}{L}\left| \sum _{s=0}^{L/a-1}D_m\left( \frac{s2\pi }{L}\right) \right|&\le \frac{1}{L}\left| \sum _{s=0}^{L/2-1}D_m\left( \frac{s2\pi }{L}\right) \right| + \frac{1}{L}\sum _{s=L/a}^{L/2-1}\left| D_m\left( \frac{s2\pi }{L}\right) \right| \\&\le C + C\frac{1}{L}\sum _{s=L/a}^{L/2-1}\frac{L}{s} \le C\log a. \end{aligned} \end{aligned}$$

The proof of Lemma 4.4 is complete. \(\square \)
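As a sanity check (ours, purely numerical and not part of the argument), the averages appearing in Lemma 4.4 can be tabulated for a few values of L with \(m=\lfloor L/2\rfloor \) and \(a=2, 4\); they indeed stay bounded.

```python
import numpy as np

# Numerical illustration of Lemma 4.4: (1/L)|sum_{s<L/a} D_m(2*pi*s/L)| stays bounded
# for m <= L/2.  D_m is evaluated through its cosine-sum form, valid also at t = 0.

def dirichlet(m, t):
    return 0.5 + sum(np.cos(k * t) for k in range(1, m + 1))

def averaged_sum(L, m, a):
    s = np.arange(L // a)
    return abs(dirichlet(m, 2 * np.pi * s / L).sum()) / L

for L in [64, 256, 1024, 4096]:
    m = L // 2
    print(L, round(averaged_sum(L, m, 2), 4), round(averaged_sum(L, m, 4), 4))
```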

Thereafter, we turn our attention to giving a lower bound on the values of the partial sums of the Fourier series of the polynomials \(P_n\). Lemma 4.5 below summarizes what has been achieved so far in this paper. We prove a lower bound for the function \(S_mP_n\), similar to the one in Lemma 4.2 for the function \(R_6\) and under exactly the same conditions (with the only difference that the bound is \(\log a_K/64\) instead of \(\log a_K/60\)). Recall that the sequence \((a_K)\) is defined in (4.4) as \(a_K = \lfloor K/\log (K)\rfloor \) and the definition of \(P_n\) can be found in (4.2).

Lemma 4.5

Suppose that condition (3.2) holds for \(\mathcal {N}\subset \mathbb {N}\) and let \(0<\epsilon <1/(35\pi )\). With the notations of Lemma 4.2 for every \(j\in \{-n/(2K),\dots , n/(2K)-1\}\), \(i\in \{a_K,\dots , K-a_K-1\}\), \(m\in [2\alpha _i, 2\alpha _i+\epsilon n/K]\) and \(x\in I^{\prime }_{n, K, j, i}\) we have

$$\begin{aligned} S_mP_n(x) \ge \frac{\log a_K}{64}. \end{aligned}$$

Proof

We apply Lemmas 4.1, 4.2 and 4.3. In view of (4.6) and (4.4)

$$\begin{aligned} \begin{aligned} S_mP_n(x)&= \sum _{a=1}^{6}R_a(x) \ge R_6(x) - (|R_1(x)| + |R_3(x)| + |R_4(x)| + |R_5(x)|) - |R_2(x)|\\&\ge \frac{\log a_K}{60} - Cb_K^2 - C\log (K/a_K)\ge \frac{\log a_K}{64} \end{aligned} \end{aligned}$$

for “large enough” K. This completes the proof of Lemma 4.5. \(\square \)

It is necessary to take a short detour before continuing our journey any further. The following two lemmas will also have a prominent role in the proof of this paper’s main theorem as they are necessary for the divergence construction. The dyadic subintervals of \(\mathbb {T}\) are defined in the following way:

$$\begin{aligned} \mathcal {I}_0:= & {} \left\{ \mathbb {T}\right\} , \quad \mathcal {I}_1:= \left\{ [-\pi ,0), [0,\pi )\right\} ,\\ \mathcal {I}_2:= & {} \left\{ [-\pi ,-\pi /2), [-\pi /2,0), [0,\pi /2), [\pi /2,\pi )\right\} , \dots \\ \mathcal {I}:= & {} \bigcup _{n=0}^{\infty }\mathcal {I}_n. \end{aligned}$$

The elements of \(\mathcal {I}\) are said to be dyadic intervals. If \(F\in \mathcal {I}\), then there exists a unique \(n\in \mathbb {N}\) such that \(F\in \mathcal {I}_n\). Consequently, \(|F| = \frac{2\pi }{2^n}\). Each \(\mathcal {I}_n\) has \(2^n\) disjoint elements (\(n\in \mathbb {N}\)).

The following Calderon-Zygmund type decomposition lemma can be found for instance in [22, page 17] or [23, page 90] (more precisely, in a slightly different way) or in [24] (with an elementary proof).

Lemma 4.6

Let \(f\in L^1(\mathbb {T})\), and \(\lambda >\Vert f\Vert _1/(2\pi )\). Then there exists a sequence of integrable functions \((f_i)\) such that

$$\begin{aligned} f= & {} \sum _{i=0}^{\infty }f_i \quad \text{ a.e. },\\{} & {} \Vert f_0\Vert _{\infty } \le 2\lambda , \quad \Vert f_0\Vert _{1} \le 2\Vert f\Vert _{1}, \quad \text{ and }\\{} & {} {{\,\textrm{supp}\,}}f_i \subset I^{i}, \quad \text{ where } \end{aligned}$$

\(I^{i}\in \mathcal {I}\) are disjoint dyadic intervals depending only on |f| (and \(\lambda \)),

$$\begin{aligned} \left| I^{i}\right| = \frac{2\pi }{2^{k_i}} \quad \text{ for } \text{ some } \end{aligned}$$

\(k_i \ge 1\, (i\ge 1)\). Moreover, \(\int _{\mathbb {T}} f_i(x) dx=\int _{I^i} f_i(x) dx=0\, (i\ge 1)\),

$$\begin{aligned} \lambda < \frac{1}{\left| I^{i}\right| }\int _{I^{i}}|f|\le 2\lambda , \quad \frac{1}{\left| I^{i}\right| }\int _{I^{i}}|f_i| \le 4\lambda \end{aligned}$$

and for the union

$$\begin{aligned} F:= \bigcup _{i=1}^{\infty }I^{i} \end{aligned}$$

of the disjoint dyadic intervals \(I^{i}\) (\(i\ge 1\)) we have \(|F| \le \Vert f\Vert _{1}/\lambda \).

Using the notation of Lemma 4.6 we define \(\mathcal {F}:= \left\{ I^{i}: i=1,2,\dots \right\} \). That is, \(\mathcal {F}\) is the set of dyadic intervals whose union is the set F. Moreover, we remark that, by the proof of Lemma 4.6, for any dyadic interval I, \(I\in \mathcal {F}\) if and only if \(|I|^{-1}\int _{I}|f| >\lambda \) and \(|J|^{-1}\int _{J}|f| \le \lambda \) for every dyadic interval \(J\supsetneq I\).
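This characterization of \(\mathcal {F}\) is a stopping-time selection, and it can be made concrete on a discretized function. The sketch below (ours; the sample function, the depth and the value of \(\lambda \) are illustrative choices, and f is represented by its values on a fine dyadic grid of \(\mathbb {T}\)) selects exactly the maximal dyadic intervals on which the average of |f| exceeds \(\lambda \).

```python
import numpy as np

# Minimal sketch of the stopping-time selection behind Lemma 4.6 and the remark above:
# a dyadic interval is selected iff its average of |f| exceeds lambda while the average
# over every strictly larger dyadic interval is at most lambda.

def cz_intervals(samples, lam, max_level):
    # samples: values of |f| on the 2**max_level equal cells of [-pi, pi)
    selected = []

    def visit(level, idx):
        width = 2 ** (max_level - level)              # cells covered by this dyadic interval
        block = samples[idx * width:(idx + 1) * width]
        if block.mean() > lam:
            a = -np.pi + idx * 2 * np.pi / 2 ** level
            selected.append((a, a + 2 * np.pi / 2 ** level))
            return                                    # maximal: do not descend further
        if level < max_level:
            visit(level + 1, 2 * idx)
            visit(level + 1, 2 * idx + 1)

    if samples.mean() <= lam:                         # guaranteed when lambda > ||f||_1/(2*pi)
        visit(0, 0)
    return selected

grid = np.linspace(-np.pi, np.pi, 2 ** 12, endpoint=False)
f = 1.0 / np.sqrt(np.abs(grid) + 1e-3)                # an integrable-looking spike at 0
F = cz_intervals(f, lam=4.0, max_level=12)
print(len(F), "selected dyadic intervals, total length", sum(b - a for a, b in F))
```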

For any dyadic interval \(I\in \mathcal {I}\) let 7I be the interval with the same center as I and with length 7 times the length of I, and set

$$\begin{aligned} 7F:= \bigcup _{I\in \mathcal {F}} 7I. \end{aligned}$$

Lemma 4.7

[16, Lemma 5.2] Let \(l\in \mathbb {N}\) and \(f\in L^1(\mathbb {T}), \lambda >\Vert f\Vert _1/(2\pi )\). Then the inequality

$$\begin{aligned} \int _{\mathbb {T}\setminus 7F}|S_l f(y)|^2 dy \le C\Vert f\Vert _1\lambda \end{aligned}$$

holds. The constant C is uniform in f, l and \(\lambda \).

The following lemma is, so to speak, a summary of what has been achieved so far, and it is the last tool needed to start proving the main theorem (Theorem 3.3). In Lemma 4.8 we use the notation introduced above, in particular that of Lemmas 4.2 and 4.5.

To show the meaning of this lemma, go back to Lemma 4.5, which proved that \(S_mP_n(x)\) is “large” (of order \(\log a_K\)) for any \(x\in I^{\prime }_{n, K, j,i}\), where \(m=m_{i,s}\) for all \(s=1,\dots , (K+i)!\) (moreover even for \(s\le (2K)!\)). That is, their average is also “large”. However, in order to talk about Cesàro means of the partial sums, we need to make this average “large” even if we include in the sum all the partial sums \(S_mP_n\) for \(m=m_{h,s}\) with \(h<i\). More precisely, it will be enough to have this property of the means of all the partial sums \(S_{m}P_n(x)\) for \(x\in I^{\prime }_{n, K, j,i}{\setminus } T_K\), where the measure of the set \(T_K\) is “very small”. This will later imply (with finitely many exceptions regarding the indices m) that the expected inequality is satisfied at almost every point x of the set \(I^{\prime }_{n, K, j,i}\). The measure of this set \(T_K\) will be estimated by Lemma 4.7.

Lemma 4.8

Suppose that condition (3.2) holds for \(\mathcal {N}\subset \mathbb {N}\). Then for any (“large enough”) \(K\in \mathbb {N}\) there exist an \(n\in \mathbb {N}\) with \(4K|n\) and \(8K\le n\), a subsequence

$$\begin{aligned} \begin{aligned} \mathcal {N}_K&:= (m_{0,1}, m_{0,2}, \dots , m_{0,(K)!}, m_{1,1}, m_{1,2},\dots , m_{1,(K+1)!},\dots , \\&\quad m_{K-1, 1}, \dots , m_{K-1, (2K-1)!}) \\&= (m_{i,s}, s=1,\dots , (K+i)!, i=0,\dots , K-1) \subset \mathcal {N} \end{aligned} \end{aligned}$$

and a set \(T_K\subset \mathbb {T}\) with the properties discussed below. The parameters K, n and \(m_{i,s}\) (rarefied in a desired way) are chosen according to Lemma 4.5. For \(i=0,\dots , K-1\) set

$$\begin{aligned} \mathcal {N}_{K,i}: = \mathcal {N}_K \cap [0, m_{i,(K+i)!}] = (m_{j,s}, s=1,\dots , (K+j)!, j=0,\dots , i), \quad \mathcal {N}_{K,-1}:= \emptyset . \end{aligned}$$

Then for every \(j\in \{-n/(2K),\dots , n/(2K)-1\}\), \(i\in \{a_K,\dots , K-a_K-1\}\) and \(x\in I^{\prime }_{n,K,j,i}{\setminus } T_K\) (see the definition of the sets \(I^{\prime }_{n,K,j,i}, I^{\prime }_{n,K}\) in Lemma 4.2) we have

$$\begin{aligned} \sigma _{\mathcal {N}_{K,i}}P_n(x):= \frac{1}{|\mathcal {N}_{K,i}|}\sum _{m\in \mathcal {N}_{K,i}}S_mP_n(x) \ge \frac{\log a_K}{256} \end{aligned}$$

and besides,

$$\begin{aligned} |T_K| \le \frac{C}{\log a_K} \end{aligned}$$

(where |A| denotes either the cardinality or the measure of the set A).

Proof

We recall that \( I^{\prime }_{n, K, j, i}\) is a finite union of some subintervals of the interval \(I_{n, K, j, i}\). Besides,

$$\begin{aligned} I_{n,K}^{\prime }:= \bigcup _{j=-n/(2K)}^{n/(2K)-1}\bigcup _{i=a_K}^{K-a_K-1} I^{\prime }_{n, K, j, i}, \end{aligned}$$

where \(I^{\prime }_{n, K, j, i} \subset I_{n, K, j, i}\), \(4|I^{\prime }_{n, K, j, i}| \ge |I_{n, K, j, i}|\) for every ij.

We apply the Calderon-Zygmund decomposition lemma, that is, Lemma 4.6, for \(P_n\) and \(\lambda = \log a_K\). Since \(P_n\) is the arithmetical mean of some “shifted” de la Vallée-Poussin kernels, we have \(\Vert P_{n}\Vert _1\le C\); since moreover \(a_K\rightarrow \infty \), the lemma can indeed be applied for large enough K. Let \(T^1_K = F\) be the set coming from Lemma 4.6. Consequently, we have

$$\begin{aligned} |T^1_K| \le \frac{C}{\log a_K}. \end{aligned}$$

Then, let

$$\begin{aligned} T_K^2:= \left\{ x\in \mathbb {T}\setminus 7T_K^1: \sup _{i=0,\dots , K-1} \frac{1}{\left| \mathcal {N}_{K,i}\right| }\left| \sum _{m\in \mathcal {N}_{K,i-1}}S_mP_n(x)\right| > \frac{\log a_K}{256}\right\} . \end{aligned}$$

In the sequel, we investigate the measure of the set \(T_K^2\) and we give an upper estimate for it. By the well-known inequality between the arithmetical and quadratic means we have

$$\begin{aligned} \begin{aligned} |T_K^2|&\le \sum _{i=0,\dots , K-1}\left| \left\{ x\in \mathbb {T}\setminus 7T_K^1: \frac{1}{|\mathcal {N}_{K,i}|}\left| \sum _{m\in \mathcal {N}_{K,i-1}}S_mP_n(x)\right| > \frac{\log a_K}{256}\right\} \right| \\&\le \sum _{i=0,\dots , K-1}\frac{256^2}{\log ^2 a_K} \int _{\mathbb {T}\setminus 7T_K^1} \left| \frac{1}{|\mathcal {N}_{K,i}|}\sum _{m\in \mathcal {N}_{K,i-1}}S_mP_n(x)\right| ^2\\&\le \sum _{i=0,\dots , K-1}\frac{256^2}{\log ^2 a_K} \int _{\mathbb {T}\setminus 7T_K^1} \frac{|\mathcal {N}_{K,i-1}|}{|\mathcal {N}_{K,i}|^2}\sum _{m\in \mathcal {N}_{K,i-1}}\left| S_mP_n(x)\right| ^2. \end{aligned} \end{aligned}$$

Then by Lemma 4.7, we get

$$\begin{aligned} \begin{aligned}&|T_K^2| \le \frac{C}{\log a_K}\Vert P_n\Vert _1 \sum _{i=0,\dots , K-1}\frac{|\mathcal {N}_{K,i-1}|^2}{|\mathcal {N}_{K,i}|^2}. \end{aligned} \end{aligned}$$

Since \(\Vert P_n\Vert _1\le C\) and \(|\mathcal {N}_{K,i}|= (K)! + (K+1)! + \dots +(K+i)!\), we also have \(|T_K^2| \le \frac{C}{\log a_K}\). Now, let

$$\begin{aligned} T_K:= 7T_K^1 \cup T_K^2. \end{aligned}$$

We have proved that: \(|T_K|\le \frac{C}{\log a_K}\).

We suppose that \(x\in I^{\prime }_{n,K}\setminus T_K\) for the rest of this lemma’s proof, i.e., that

$$\begin{aligned} x\in \bigcup _{j=-n/(2K)}^{n/(2K)-1}\bigcup _{i=a_K}^{K-a_K-1} I^{\prime }_{n, K, j, i} \setminus T_K. \end{aligned}$$

Consequently there exists a unique \(j\in \{-n/(2K),\dots , n/(2K)-1\}\) and \(i\in \{a_K, \dots , K-a_K-1\}\) such that \(x\in I^{\prime }_{n, K, j, i} {\setminus } T_K\). Therefore, by Lemma 4.5

$$\begin{aligned} S_mP_n(x) \ge \frac{\log a_K}{64}, \end{aligned}$$

where \(m=m_{i,s}\), \(s=1,\dots , (K+i)!\). Consequently, since \(x\notin T_K\),

$$\begin{aligned} \begin{aligned} \sigma _{\mathcal {N}_{K,i}}P_n(x)&= \frac{1}{|\mathcal {N}_{K,i}|}\sum _{m\in \mathcal {N}_{K,i}}S_mP_n(x) \\&\ge \frac{1}{|\mathcal {N}_{K,i}|}\sum _{s=1,\dots , (K+i)!}S_{m_{i,s}}P_n(x) \\&- \left| \frac{1}{|\mathcal {N}_{K,i}|}\sum _{m\in \mathcal {N}_{K,i-1}}S_mP_n(x)\right| \\&\ge \frac{\log a_K}{64}\cdot \frac{(K+i)!}{|\mathcal {N}_{K,i}|}- \frac{\log a_K}{256} \ge \frac{\log a_K}{256}. \end{aligned} \end{aligned}$$
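For completeness, the final numerical step can be verified as follows: since \(|\mathcal {N}_{K,i}| = K! + \dots + (K+i)! \le 2(K+i)!\) (by the same geometric-series estimate as above), we have

$$\begin{aligned} \frac{\log a_K}{64}\cdot \frac{(K+i)!}{|\mathcal {N}_{K,i}|} - \frac{\log a_K}{256} \ge \frac{\log a_K}{128} - \frac{\log a_K}{256} = \frac{\log a_K}{256}. \end{aligned}$$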

This completes the proof of Lemma 4.8. \(\square \)

5 The proof of the main theorem (Theorem 3.3)

We now turn to the proof of the main theorem (Theorem 3.3). In the first part of the proof (immediately after the definition of the counterexample function), we describe some intuitive ideas about how the proof works.

The proof of Theorem 3.3

We suppose that condition (3.2) holds, which is equivalent to condition (3.1). If we have any sequence \(K = (K_j)\) of natural numbers, then let the sequence \(n = (n_j)\) be as given by Lemmas 4.2, 4.5 and 4.8. In other words, we have a sequence of pairs \((K, n) = (K_j, n_j)\).

We choose a sequence \(K = (K_j) \nearrow \infty \) that grows “fast enough”. We discuss the meaning of the phrase “fast enough” later. (Basically, \(K_{j+1}\) should be “much larger” than \(n_j\).)

Suppose that the sequences \((K_j), (n_j), (a_{K_j}), (b_{K_j})\) satisfy (4.4) and

$$\begin{aligned} \begin{aligned}&\sum _{j=1}^{\infty }\frac{1}{\log a_{K_j}}<\infty , \quad \sum _{j=1}^{\infty }\frac{a_{K_j}}{K_j}<\infty , \quad \sum _{j=1}^{\infty }\frac{1}{b_{K_j}} <\infty , \quad 3n_j4^{K_j} \le K_{j+1},\\&\sum _{s=0}^{j-1}|\mathcal {N}_{K_s}| \max \mathcal {N}_{K_s} \le (K_j)!, \\&6\cdot 256 \cdot 2^j\sum _{u=1}^{j-1}\frac{1}{2^u} n_u 4^{K_u} \le \log a_{K_j}, \quad \frac{\log (a_{K_j})}{2^j}\rightarrow \infty ,\quad \left( \max \mathcal {N}_{K_j}\right) ^2 \le K_{j+1} \end{aligned} \end{aligned}$$
(5.1)

for every \(j\in \mathbb {N}\). We recall that \(\mathcal {N}_{K}\) is defined in Lemma 4.8. The four inequalities in the first line of (5.1) will be used to prove that the measure of the divergence set in \(\mathbb {T}\) is \(2\pi \), that is, that “we have divergence almost everywhere.” All the other inequalities will be used to prove that there is indeed divergence.

The counterexample function is given as

$$\begin{aligned} f:= \sum _{j=1}^{\infty }\frac{1}{2^j} P_{n_j}. \end{aligned}$$

Since \(P_n\) is the arithmetical mean of some “shifted” de la Vallée-Poussin kernels, by \(\Vert P_{n_j}\Vert _1\le C\) we have

$$\begin{aligned} \Vert f\Vert _1 \le C. \end{aligned}$$

We give some intuitive ideas about the main steps of the proof. First, we prove that for almost all x we have \(x\in I^{\prime }_{n_j,K_j}\) for infinitely many j. (This is the step where we obtain that divergence holds almost everywhere.) The arithmetical mean of a subsequence of the partial sums of the Fourier series of f will be bounded from below by a difference \(A_1-|A_2|-|A_3|\), where \(A_1, A_2\) and \(A_3\) are the arithmetical means of the corresponding partial Fourier sums of the functions \(\frac{1}{2^j} P_{n_j}\), \(\sum _{u<j}\frac{1}{2^u} P_{n_u}\) and \(\sum _{u>j} \frac{1}{2^u} P_{n_u}\), respectively. \(A_1\) will be the “main/large” part; this is exactly the situation treated in Lemma 4.8. \(A_2\) corresponds to the case \(u<j\). Then the degree of \(P_{n_u}\) is relatively small compared to m, and therefore \(S_mP_{n_u} = P_{n_u}\). We split the expression \(A_2\) into two parts. To estimate \(A_{2,1}\), we will use \(\Vert P_{n_u}\Vert _1\le C\). On the other hand, \(A_{2,2}\) will be “small” compared to \(\log a_{K_j}/2^j\) because \(a_{K_j}\) grows “quite fast”. Consequently, \(A_2\) will be small compared to \(A_1\). In the case of \(A_3\), m will be “small” compared to the indices \(n_u\), and therefore \(S_m P_{n_u}\) will be an arithmetical mean of shifted Dirichlet kernels \(D_m\). This is the point where we will apply Lemma 4.4 to see the boundedness of \(A_3\); in essence, this means that the sequence of the integrals of the Dirichlet kernels is bounded.

Now, let us start constructing the divergence set by applying the notation of Lemma 4.8 (and consequently also the notation of Lemmas 4.2 and 4.5). Let

$$\begin{aligned} T^{\prime }_j:= I^{\prime }_{n_j,K_j}\setminus T_{K_j}, \quad j\in \mathbb {N}. \end{aligned}$$

Let \((X_j)\) be a sequence of subsets of \(\mathbb {R}\). We define the limit superior of this sequence as \(\limsup _j X_j:= \cap _{n=1}^{\infty }\cup _{j=n}^{\infty }X_j\). In the sequel, we prove that for

$$\begin{aligned} T^{\prime }:= \limsup _j T_j^{\prime } \quad \text{ we } \text{ have }\quad |\mathbb {T}\setminus T^{\prime }| =0. \end{aligned}$$

It is well-known that \( T^{\prime }\) is the set of \(x\in \mathbb {T}\) belonging to infinitely many sets \(T_j^{\prime }\). It would be enough to prove that

$$\begin{aligned} \left| \limsup _j I^{\prime }_{n_j,K_j}\right| = 2\pi \end{aligned}$$

since the set of \(x\in \mathbb {T}\) belonging to infinitely many sets \(T_{K_j}\) has measure zero. This fact follows from Lemma 4.8 and (5.1), as

$$\begin{aligned} \left| \bigcup _{j=i}^{\infty }T_{K_j}\right| \le \sum _{j=i}^{\infty }\left| T_{K_j}\right| \le C\sum _{j=i}^{\infty }\frac{1}{\log a_{K_j}} \rightarrow 0 \quad (i\rightarrow \infty ). \end{aligned}$$

We recall from Lemma 4.2 and (4.8)

$$\begin{aligned} I_{n_j,K_j}^{\prime }&= \bigcup _{l=-n_j/(2K_j)}^{n_j/(2K_j)-1}\bigcup _{s=a_{K_j}}^{K_j-a_{K_j}-1} I^{\prime }_{n_j, K_j, l,s}\\&= \bigcup _{l=-n_j/(2K_j)}^{n_j/(2K_j)-1}\bigcup _{s=0}^{K_j-1} I^{\prime }_{n_j, K_j, l,s} \\&\quad \setminus \left( \bigcup _{l=-n_j/(2K_j)}^{n_j/(2K_j)-1}\bigcup _{s=K_j-a_{K_j}}^{K_j-1} I^{\prime }_{n_j, K_j, l,s} \cup \bigcup _{l=-n_j/(2K_j)}^{n_j/(2K_j)-1}\bigcup _{s=0}^{a_{K_j}-1} I^{\prime }_{n_j, K_j, l,s}\right) \\&=: J_j^{\prime } \setminus \left( J_j^1 \cup J_j^2\right) , \end{aligned}$$

where \(I^{\prime }_{n_j, K_j, l, s} \subset I_{n_j, K_j, l, s}\), \(4|I^{\prime }_{n_j, K_j, l, s}| \ge |I_{n_j, K_j, l, s}|\) for every \(l, s, j\). Then the equality

$$\begin{aligned} |\limsup _j J_j^v| =0, \end{aligned}$$

(\(v=1,2\)) holds, which is proved in the same way as the statement about the measure of \(\limsup _j T_{K_j}\) above; that is, it comes from

$$\begin{aligned} \left| \bigcup _{j=1}^{\infty }J_j^v\right| \le \sum _{j=1}^{\infty }|J_j^v| \le C\sum _{j=1}^{\infty }\frac{a_{K_j}}{K_j} <\infty \end{aligned}$$

(\(v=1,2\)) by condition (5.1). In other words, it is enough to investigate the measure of the limit superior of the sets \(J_j^{\prime }= \bigcup _{l=-n_j/(2K_j)}^{n_j/(2K_j)-1}\bigcup _{s=0}^{K_j-1} I^{\prime }_{n_j, K_j, l,s}\), that is, to prove that the measure of \(\mathbb {T}\setminus \limsup _j J_j^{\prime }\) is zero. We recall ((4.5), (4.9)) that

$$\begin{aligned} I_{n_j, K_j, l, s} = \left[ \left( lK_j+s\right) \frac{2\pi }{n_j}+\frac{2\pi }{n_jb_{K_j}}, \left( lK_j+s+1\right) \frac{2\pi }{n_j}-\frac{2\pi }{n_jb_{K_j}}\right) \end{aligned}$$

and

$$\begin{aligned} I^{\prime }_{n_j, K_j, l, s} =\left\{ x\in I_{n_j, K_j, l, s}: \sin (k_{j, s}x) < -1/2\right\} . \end{aligned}$$

Moreover, we set

$$\begin{aligned} \begin{aligned} I^{\circ }_{n_j, K_j, l, s}&= \left[ \left( lK_j+s\right) \frac{2\pi }{n_j}, \left( lK_j+s+1\right) \frac{2\pi }{n_j}\right) , \\ I^{\prime , \circ }_{n_j, K_j, l, s}&= \left\{ x\in I^{\circ }_{n_j, K_j, l, s} : \sin (k_{j, s}x)< -1/2\right\} , \\ I^{\prime \prime , \circ }_{n_j, K_j, l, s}&= \left\{ x\in I^{\circ }_{n_j, K_j, l, s} : \sin (k_{j, s}x) \ge -1/2\right\} ,\\&-n_j/(2K_j)\le l< n_j/(2K_j), 0\le s\le K_j-1. \end{aligned} \end{aligned}$$
(5.2)

Then by

$$\begin{aligned} \left| I^{\prime , \circ }_{n_j, K_j, l, s}\setminus I^{\prime }_{n_j, K_j, l, s}\right| \le \frac{4\pi }{n_jb_{K_j}} \end{aligned}$$

and (see the first line of (5.1))

$$\begin{aligned} \left| \bigcup _{j=1}^{\infty } \left( J_j^{\prime , \circ }\setminus J_j^{\prime }\right) \right| \le \sum _{j=1}^{\infty }\frac{C}{b_{K_j}} < \infty , \end{aligned}$$

where

$$\begin{aligned} J_j^{\prime }&= \bigcup _{l=-n_j/(2K_j)}^{n_j/(2K_j)-1}\bigcup _{s=0}^{K_j-1} I^{\prime }_{n_j, K_j, l,s}\quad \text {and}\\ J_j^{\prime , \circ }&:= \bigcup _{l=-n_j/(2K_j)}^{n_j/(2K_j)-1}\bigcup _{s=0}^{K_j-1} I^{\prime , \circ }_{n_j, K_j, l,s} \end{aligned}$$

we conclude that, in order to prove that the measure of \(\mathbb {T}{\setminus } \limsup _j J_j^{\prime }\) is zero, it is enough to prove that the measure of \(\mathbb {T}\setminus \limsup _j J^{\prime ,\circ }_j\) is zero.
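To spell out this step: the number of pairs \((l,s)\) is \(\frac{n_j}{K_j}\cdot K_j = n_j\), so

$$\begin{aligned} \left| J_j^{\prime ,\circ }\setminus J_j^{\prime }\right| \le \sum _{l,s}\left| I^{\prime ,\circ }_{n_j, K_j, l, s}\setminus I^{\prime }_{n_j, K_j, l, s}\right| \le n_j\cdot \frac{4\pi }{n_jb_{K_j}} = \frac{4\pi }{b_{K_j}}, \end{aligned}$$

and since \(\sum _{j}1/b_{K_j}<\infty \), almost every point that belongs to infinitely many sets \(J_j^{\prime ,\circ }\) also belongs to infinitely many sets \(J_j^{\prime }\).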

By (5.2) we have

$$\begin{aligned} \begin{aligned}&\left| I^{\circ }_{n_j, K_j, l, s}\right| = \frac{2\pi }{n_j}, \quad \left| I^{\prime \prime , \circ }_{n_j, K_j, l, s}\right| \le \frac{3}{4} \left| I^{\circ }_{n_j, K_j, l, s}\right| = \frac{2\pi }{n_j}\frac{3}{4}, \\&\left| I^{\prime \prime , \circ }_{n_j, K_j, l, s}\cap \bigcup _{\tilde{l}=-n_{j+1}/(2K_{j+1})}^{n_{j+1}/(2K_{j+1})-1} \bigcup _{\tilde{s}=0}^{K_{j+1}-1} I^{\prime \prime , \circ }_{n_{j+1}, K_{j+1}, \tilde{l}, \tilde{s}} \right| \le \left| I^{\prime \prime , \circ }_{n_j, K_j, l, s}\right| \frac{3}{4} \le \frac{2\pi }{n_j}\left( \frac{3}{4}\right) ^2. \end{aligned} \end{aligned}$$
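Here one may also note that the intervals \(I^{\circ }_{n_j, K_j, l, s}\) partition \(\mathbb {T}\) for every j, so \(\mathbb {T}\setminus J_j^{\prime ,\circ } = \bigcup _{l,s}I^{\prime \prime ,\circ }_{n_j, K_j, l, s}\), and summing the last estimate over the \(n_j\) pairs \((l,s)\) gives

$$\begin{aligned} \left| (\mathbb {T}\setminus J_j^{\prime ,\circ })\cap (\mathbb {T}\setminus J_{j+1}^{\prime ,\circ })\right| \le \sum _{l,s}\left| I^{\prime \prime ,\circ }_{n_j, K_j, l, s}\cap (\mathbb {T}\setminus J_{j+1}^{\prime ,\circ })\right| \le n_j\cdot \frac{2\pi }{n_j}\left( \frac{3}{4}\right) ^2. \end{aligned}$$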

Consequently, \(|(\mathbb {T}{\setminus } J_j^{\prime , \circ })\cap (\mathbb {T}{\setminus } J_{j+1}^{\prime , \circ })|\le 2\pi (3/4)^2 \). This argument can be iterated in the form \(|(\mathbb {T}{\setminus } J_j^{\prime , \circ })\cap \cdots \cap (\mathbb {T}{\setminus } J_{j+r}^{\prime , \circ })|\le 2\pi (3/4)^{1+r}\) if \(K_j\) grows sufficiently fast. Then, we have

$$\begin{aligned} \left| \bigcap _{j=u}^{\infty }(\mathbb {T}\setminus J_j^{\prime , \circ })\right| = 0 \quad (u\in \mathbb {N}). \end{aligned}$$

This gives

$$\begin{aligned} \left| \mathbb {T} \setminus \limsup _j J^{\prime , \circ }_{j}\right| = \left| \bigcup _{u=1}^{\infty }\bigcap _{j=u}^{\infty }(\mathbb {T}\setminus J_j^{\prime , \circ })\right| = 0 \end{aligned}$$

and, by what was shown above (recall that \(T^{\prime } = \limsup _j T_j^{\prime }\)),

$$\begin{aligned} |\mathbb {T}\setminus T^{\prime }| = 0. \end{aligned}$$

We now turn to proving the divergence of the arithmetical means of certain partial sums of the Fourier series of f on the set \(T^{\prime }\). We apply Lemmas 4.5 and 4.8 and let

$$\begin{aligned} \mathcal {N^{\prime }} = \bigcup _{j=1}^{\infty }\mathcal {N}_{K_j} \subset \mathcal {N}. \end{aligned}$$

We recall that \(\mathcal {N}_{K_j}\) (for \(K_j\in \mathbb {N}\)) is defined in Lemma 4.8, and we also recall that the largest element of \(\mathcal {N}_{K_j}\) is less than the smallest element of \(\mathcal {N}_{K_{j+1}}\), since \(m_{j, K_j-1, (2K_j-1)!} \le k_{j, K_j-1} + \epsilon n_j/K_j = 2 n_j 4^{K_j-1} + \epsilon n_j/K_j < 3n_j4^{K_j} \le K_{j+1}\) and \(8K_{j+1}\le n_{j+1}\) (for \(0<\epsilon <1/(35\pi )\); see (5.1)).

Let \(x\in T^{\prime }\). There are infinitely many j’s such that \(x\in T^{\prime }_j = I^{\prime }_{n_j,K_j}{\setminus } T_{K_j}\). Let j be such an index. Then \(x\in I^{\prime }_{n_j,K_j, l, i}{\setminus } T_{K_j}\) for some \(l\in \left\{ -n_j/(2K_j),\dots , n_j/(2K_j)-1\right\} \) and \(i\in \left\{ a_{K_j}, \dots , K_j-a_{K_j}-1\right\} \). We set \(N_x = \max \mathcal {N}_{K_j, i}\). By Lemma 4.8 and condition (5.1), we have

$$\begin{aligned} \begin{aligned}&\frac{1}{|\mathcal {N^{\prime }}\cap [0, N_x]|}\sum _{m\in \mathcal {N^{\prime }}\cap [0, N_x]} S_mf(x) \\&\quad \ge \frac{1}{2^j}\frac{1}{|\mathcal {N^{\prime }}\cap [0, N_x]|}\sum _{m\in \mathcal {N^{\prime }}\cap [0, N_x]} S_mP_{n_j}(x)\\&\quad - \left| \sum _{u=1}^{j-1}\frac{1}{2^u}\frac{1}{|\mathcal {N^{\prime }}\cap [0, N_x]|}\sum _{m\in \mathcal {N^{\prime }}\cap [0, N_x]} S_mP_{n_u}(x)\right| \\&\quad - \left| \sum _{u=j+1}^{\infty }\frac{1}{2^u}\frac{1}{|\mathcal {N^{\prime }}\cap [0, N_x]|}\sum _{m\in \mathcal {N^{\prime }}\cap [0, N_x]} S_mP_{n_u}(x)\right| \\&\quad =: A_1 - A_2 - A_3. \end{aligned} \end{aligned}$$

We split the set of m’s, \(\mathcal {N^{\prime }}\cap [0, N_x]\), into two disjoint parts: \(\cup _{s=0}^{j-1}\mathcal {N}_{K_s}\) and \(\mathcal {N}_{K_j, i}\). Accordingly,

$$\begin{aligned} \begin{aligned} A_1&= \frac{1}{2^j}\frac{1}{|\mathcal {N^{\prime }}\cap [0, N_x]|}\sum _{m\in \mathcal {N}_{K_j, i}} S_mP_{n_j}(x) +\frac{1}{2^j}\frac{1}{|\mathcal {N^{\prime }}\cap [0, N_x]|}\sum _{m\in \cup _{s=0}^{j-1}\mathcal {N}_{K_s}} S_mP_{n_j}(x)\\&=: A_{1,1} + A_{1,2}. \end{aligned} \end{aligned}$$

First, for \(A_{1,1}\), by Lemma 4.8 and by the second line of (5.1), we have

$$\begin{aligned} A_{1,1}&\ge \frac{\left| \mathcal {N}_{K_j, i}\right| }{|\mathcal {N^{\prime }}\cap [0, N_x]|}\frac{\log a_{K_j}}{256}\frac{1}{2^j} \\&\ge \frac{\log a_{K_j}}{256}\frac{1}{2^j}\frac{|\mathcal {N}_{K_j, i}|}{|\mathcal {N}_{K_j, i}| + \sum _{s=0}^{j-1}|\mathcal {N}_{K_s}|}\\&\ge \frac{\log a_{K_j}}{256}\frac{1}{2^j} \frac{(K_j+i)!}{ (K_j+i)!+ \sum _{s=0}^{j-1}|\mathcal {N}_{K_s}|} \\&\ge \frac{\log a_{K_j}}{3\cdot 256}\frac{1}{2^j}. \end{aligned}$$
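The last step of this chain is justified by the second line of (5.1): since \(\max \mathcal {N}_{K_s}\ge 1\),

$$\begin{aligned} \sum _{s=0}^{j-1}|\mathcal {N}_{K_s}| \le \sum _{s=0}^{j-1}|\mathcal {N}_{K_s}| \max \mathcal {N}_{K_s} \le (K_j)! \le (K_j+i)!, \quad \text {hence}\quad \frac{(K_j+i)!}{ (K_j+i)!+ \sum _{s=0}^{j-1}|\mathcal {N}_{K_s}|} \ge \frac{1}{2} \ge \frac{1}{3}. \end{aligned}$$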

On the other hand, by the second line of (5.1) and by the fact that \(|S_mP_{n_j}| \le (m + 1/2)\Vert P_{n_j}\Vert _1\), we have

$$\begin{aligned} \begin{aligned} |A_{1,2}|&\le C\frac{1}{2^j}\frac{1}{(K_j)!}\sum _{s=0}^{j-1}\sum _{m\in \mathcal {N}_{K_s}}\max \mathcal {N}_{K_s}\Vert P_{n_j}\Vert _1 \\&\le C \frac{1}{(K_j)!} \frac{1}{2^j}\sum _{s=0}^{j-1} |\mathcal {N}_{K_s}| \max \mathcal {N}_{K_s}\\&\le C\frac{1}{2^j}. \end{aligned} \end{aligned}$$
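In the first inequality above (and similarly in the estimate of \(A_{2,1}\) below), the factor \(\frac{1}{|\mathcal {N^{\prime }}\cap [0, N_x]|}\) was replaced by \(\frac{1}{(K_j)!}\); this is legitimate because

$$\begin{aligned} \mathcal {N}_{K_j, i}\subset \mathcal {N^{\prime }}\cap [0, N_x] \quad \text {and}\quad \left| \mathcal {N}_{K_j, i}\right| = (K_j)! + (K_j+1)! + \dots + (K_j+i)! \ge (K_j)!. \end{aligned}$$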

Then

$$\begin{aligned} A_1 \ge A_{1,1} - |A_{1,2}| \ge \frac{\log a_{K_j}}{3\cdot 256}\frac{1}{2^j} - C\frac{1}{2^j}. \end{aligned}$$

Next, we turn our attention to \(A_2\). We estimate \(A_2\) from above in the very same way as before, splitting the set of m’s: \(\mathcal {N^{\prime }}\cap [0, N_x]\) is the disjoint union of \(\cup _{s=0}^{j-1}\mathcal {N}_{K_s}\) and \(\mathcal {N}_{K_j, i}\). In the first case, for \(m\in \cup _{s=0}^{j-1}\mathcal {N}_{K_s}\), the number of m’s is “small” compared to \(|\mathcal {N^{\prime }}\cap [0, N_x]|\). In the second case, \(S_mP_{n_u}(x)\) is simply \(P_{n_u}(x)\).

That is,

$$\begin{aligned} \begin{aligned} A_{2,1}&:=\left| \sum _{u=1}^{j-1}\frac{1}{2^u}\frac{1}{|\mathcal {N^{\prime }}\cap [0, N_x]|}\sum _{s=0}^{j-1}\sum _{m\in \mathcal {N}_{K_s}} S_mP_{n_u}(x)\right| \\&\le C\sum _{u=1}^{j-1}\frac{1}{2^u}\frac{1}{(K_j)!}\sum _{s=0}^{j-1}\sum _{m\in \mathcal {N}_{K_s}}\max \mathcal {N}_{K_s}\Vert P_{n_u}\Vert _1 \\&\le C \frac{1}{(K_j)!} \sum _{u=1}^{j-1}\frac{1}{2^u}\sum _{s=0}^{j-1} |\mathcal {N}_{K_s}| \max \mathcal {N}_{K_s}\\&\le C, \end{aligned} \end{aligned}$$

by condition (5.1). On the other hand, for \(u<j\) and \(m\in \mathcal {N}_{K_j}\cap [0, N_x]\) we have \(S_mP_{n_u}(x) = P_{n_u}(x)\) and this implies

$$\begin{aligned} A_{2,2}&:= \left| \sum _{u=1}^{j-1}\frac{1}{2^u}\frac{1}{|\mathcal {N^{\prime }}\cap [0, N_x]|} \sum _{m\in \mathcal {N}_{K_j, i}} S_mP_{n_u}(x)\right| \\&= \left| \sum _{u=1}^{j-1}\frac{1}{2^u}\frac{1}{|\mathcal {N^{\prime }}\cap [0, N_x]|} \sum _{m\in \mathcal {N}_{K_j, i}} P_{n_u}(x)\right| \\&\le \sum _{u=1}^{j-1}\frac{1}{2^u} \left| P_{n_u}(x) \right| . \end{aligned}$$

Then, (4.3) and (5.1) give

$$\begin{aligned} A_{2,2} \le \sum _{u=1}^{j-1}\frac{1}{2^u} n_u4^{K_u} \le \frac{\log a_{K_j}}{6\cdot 256}\frac{1}{2^j}. \end{aligned}$$

That is,

$$\begin{aligned} A_1 - A_2 \ge A_1 - A_{2,1} - A_{2,2} \ge \frac{\log a_{K_j}}{6\cdot 256}\frac{1}{2^j} - C. \end{aligned}$$
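For completeness, the last display simply combines the bounds obtained for \(A_1\), \(A_{2,1}\) and \(A_{2,2}\) (with C denoting, as throughout, a possibly different absolute constant at each occurrence):

$$\begin{aligned} A_1 - A_{2,1} - A_{2,2} \ge \frac{\log a_{K_j}}{3\cdot 256}\frac{1}{2^j} - \frac{C}{2^j} - C - \frac{\log a_{K_j}}{6\cdot 256}\frac{1}{2^j} = \frac{\log a_{K_j}}{6\cdot 256}\frac{1}{2^j} - \frac{C}{2^j} - C \ge \frac{\log a_{K_j}}{6\cdot 256}\frac{1}{2^j} - C. \end{aligned}$$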

Finally, we give an upper estimate for \(A_3\). Here we have to investigate \(S_mP_{n_u}(x)\) for \(u>j\) and \(m\in \mathcal {N^{\prime }}\cap [0, N_x]\subset \cup _{s=0}^j\mathcal {N}_{K_s}\), which means that \(m\le \max \mathcal {N}_{K_j}\). We recall the construction of the polynomials \(P_n\) from (4.2):

$$\begin{aligned} P_{n_u}(x)&= \frac{1}{n_u}\sum _{h=-n_u/(2K_u)}^{n_u/(2K_u)-1}\sum _{i=0}^{K_u-1}\mathcal {V}_{\alpha _i} \left( x-(hK_u+i)\frac{2\pi }{n_u}\right) \\&= \frac{1}{n_u}\sum _{h=-n_u/(2K_u)}^{n_u/(2K_u)-1}\sum _{i=0}^{K_u-1} \mathcal {V}_{\alpha _i}(t_{h, i}). \end{aligned}$$

The de la Vallée-Poussin kernels \(\mathcal {V}_{\alpha _i}\) (with \(\alpha _i = n_u4^{i}\), \(i=0,\dots , K_u-1\); see (4.1)) are arithmetical means of Dirichlet kernels whose degrees are at least as large as \(n_u \ge n_{j+1}\), which is “by far” greater than m. Consequently, \(S_m(\mathcal {V}_{\alpha _i}) = D_m\) and

$$\begin{aligned} \left| S_mP_{n_u}(x)\right| = \left| \frac{1}{n_u}\sum _{h=-n_u/(2K_u)}^{n_u/(2K_u)-1}\sum _{i=0}^{K_u-1}D_m \left( x-(hK_u+i)\frac{2\pi }{n_u}\right) \right| . \end{aligned}$$

We recall the notation introduced in (4.2) at the beginning of the fourth section: \(x= (x_1K_u + x_0)2\pi /n_u + \Delta \) (\(0\le \Delta < 2\pi /n_u\)),

$$\begin{aligned} t_{h, i}&= x- \left( hK_u + i\right) \frac{2\pi }{n_u} = \left( (x_1-h)K_u + (x_0-i)\right) \frac{2\pi }{n_u} + \Delta =: {\tilde{t}}_{h, i} + \Delta , \end{aligned}$$

where \(x_1, h\in \{-n_u/(2K_u),\dots , n_u/(2K_u)-1\}\) and \(x_0, i\in \{0,\dots , K_u-1\}\). The inequalities \(\left( \max \mathcal {N}_{K_j}\right) ^2 \le K_{j+1}\) (the last inequality in (5.1)) and \(8K_{j+1}\le n_{j+1} \le n_u\) give \(m^2 \le n_u\) for any \(m \le N_x \le \max \mathcal {N}_{K_j}\).

Since the Lagrange mean value theorem implies \(|\cos (kt_{h,i})-\cos (k{\tilde{t}}_{h,i})| \le k|t_{h,i}-{\tilde{t}}_{h,i}|\) (\(k\in \mathbb {N}\)), we have

$$\begin{aligned} |D_m(t_{h, i}) - D_m({\tilde{t}}_{h, i})| \le Cm^2\Delta \le C\frac{m^2}{n_u} \le C. \end{aligned}$$

Consequently, with the help of Lemma 4.4 (\(a=1\), \(L=n_u\)), we get

$$\begin{aligned} \begin{aligned} \left| S_mP_{n_u}(x)\right|&= \left| \frac{1}{n_u}\sum _{h=-n_u/(2K_u)}^{n_u/(2K_u)-1}\sum _{i=0}^{K_u-1}D_m\left( x-(hK_u+i)\frac{2\pi }{n_u}\right) \right| \\&= \left| \frac{1}{n_u}\sum _{h=-n_u/(2K_u)}^{n_u/(2K_u)-1}\sum _{i=0}^{K_u-1}D_m(t_{h, i})\right| \\&\le C + C\left| \frac{1}{n_u}\sum _{h=-n_u/(2K_u)}^{n_u/(2K_u)-1}\sum _{i=0}^{K_u-1}D_m({\tilde{t}}_{h, i})\right| \le C. \end{aligned} \end{aligned}$$

This inequality immediately implies

$$\begin{aligned} A_3 =\left| \sum _{u=j+1}^{\infty }\frac{1}{2^u}\frac{1}{|\mathcal {N^{\prime }}\cap [0, N_x]|}\sum _{m\in \mathcal {N^{\prime }}\cap [0, N_x]} S_mP_{n_u}(x)\right| \le C\sum _{u=j+1}^{\infty }\frac{1}{2^u} \le C. \end{aligned}$$

Finally, collecting the estimates proved above, we obtain

$$\begin{aligned} \frac{1}{|\mathcal {N^{\prime }}\cap [0, N_x]|}\sum _{m\in \mathcal {N^{\prime }}\cap [0, N_x]} S_mf(x) \ge A_1 - A_2 - A_3 \ge \frac{\log a_{K_j}}{6\cdot 256}\frac{1}{2^j} - C. \end{aligned}$$

More precisely, since every \(x\in T^{\prime }\) belongs to infinitely many sets \(T^{\prime }_j\) and \(\log (a_{K_j})/2^j\rightarrow \infty \) by (5.1), we obtain

$$\begin{aligned} \limsup _N\frac{1}{N}\sum _{l=1}^{N}S_{m_l}f(x) = +\infty \quad (x\in T^{\prime }, |\mathbb {T}\setminus T^{\prime }|=0, \mathcal {N^{\prime }} = (m_1, m_2, \dots )). \end{aligned}$$

\(\square \)