A Discrete Elementary Algebra for Evaluating Polynomial Divergent Series

April 30, 2026

TLDR

We construct an "elementary" (see definition in the formal problem statement) finite discrete algebra with additional rules for divergent series handling from which one can compute the values of $\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}$ at all negative integer values.

The discrete algebra agrees with Abel summation and other zeta-regularization methods, though the computation needs no concept of limits (the proof admittedly relies on them).

Bernoulli numbers emerge as a consequence of the algebra (via von Staudt–Clausen), and provably as the GCD of the tail of the second-order finite difference series of any $\zeta(-k)$, for $k \geq 1$.

Zeta is well-studied and its integer values are well known; see Wikipedia. The interest is not the values but the method: a discrete system that recovers them without directly invoking analytic machinery in its computation.

Basic Definitions

The Riemann zeta function. For $\text{Re}(s) > 1$:

$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}$$

This series converges only for $\text{Re}(s) > 1$. Values at other points (like negative integers) are defined by analytic continuation.

The Dirichlet eta function. The alternating counterpart of $\zeta$:

$$\eta(s) = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^s} = 1 - \frac{1}{2^s} + \frac{1}{3^s} - \cdots$$

This converges for $\text{Re}(s) > 0$ and is related to $\zeta$ by $\eta(s) = (1 - 2^{1-s})\,\zeta(s)$.
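As a quick numerical sketch (using partial sums and the known value $\zeta(2) = \pi^2/6$), the $\eta$–$\zeta$ relation can be checked at $s = 2$, where both series behave classically:

```python
import math

# eta(2) via its convergent alternating series; zeta(2) = pi^2/6 is known.
N = 100_000
eta2 = sum((-1) ** (n - 1) / n**2 for n in range(1, N + 1))
zeta2 = math.pi**2 / 6

# eta(2) should match (1 - 2^(1-2)) * zeta(2) = zeta(2) / 2
print(eta2, (1 - 2 ** (1 - 2)) * zeta2)
```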

Convergence and divergence. A series $\sum a_n$ converges if its partial sums $S_N = \sum_{n=1}^{N} a_n$ approach a finite limit as $N \to \infty$. It diverges if they don't. The series $1 + 2 + 3 + \cdots$ diverges. The series $1 - 1/2 + 1/3 - \cdots$ converges (to $\ln 2$).

Finite differences. Given a sequence $a(1), a(2), a(3), \ldots$, the first forward difference is $\Delta a(n) = a(n+1) - a(n)$. Applying $\Delta$ again gives the second difference $\Delta^2 a(n) = a(n+2) - 2a(n+1) + a(n)$. If $a(n)$ is a polynomial of degree $d$, then $\Delta^d a(n)$ is constant and $\Delta^{d+1} a(n) = 0$. This is the discrete analog of differentiation reducing polynomial degree.
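A minimal sketch of this degree-reduction fact, on the cubes:

```python
def diff(seq):
    """First forward difference: a(n+1) - a(n)."""
    return [b - a for a, b in zip(seq, seq[1:])]

cubes = [n**3 for n in range(1, 10)]   # values of a degree-3 polynomial
d3 = diff(diff(diff(cubes)))           # third difference: constant 3! = 6
d4 = diff(d3)                          # fourth difference: identically zero

print(d3, d4)
```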

Tail of a series. Given a series $\sum_{n=1}^{\infty} a_n$, the tail starting at position $m$ is $\sum_{n=m}^{\infty} a_n$ — everything after the first $m-1$ terms. When we say "the tail has $\gcd = d$," we mean $\gcd(a_m, a_{m+1}, a_{m+2}, \ldots) = d$.

Intro: The Invention of the Devil: Some History

"Divergent series are the invention of the devil, and it is shameful to base on them any demonstration whatsoever."

— Niels Henrik Abel, 1828

A century before Abel said this, Euler had already made his own brilliantly shameful demonstrations of exactly this kind, without strict proof but with sharp intuition. His results were later shown to agree with regularization formalisms developed long after Abel's remark.

After this quote:

  1. Abel himself would later go on to develop his own regularization scheme for divergent series, which we now understand agrees with $\zeta$ regularization.
  2. Bernhard Riemann would publish his manuscript deriving the analytic continuation of the $\zeta$ function, thirty years later.
  3. In the next century, an untrained Ramanujan would assign a finite value to one of the most (in)famous divergent series:
$$1 + 2 + 3 + 4 + \cdots = -\frac{1}{12}$$

From his notebooks, we know Ramanujan's logic ran very close to the following:

Start with the Grandi series:

$$1 - 1 + 1 - 1 + \cdots = \frac{1}{2}.$$

Ramanujan assumed this value. The series oscillates and does not classically converge, though several modern regularization techniques assign it the same value.

Now define:

$$\begin{aligned} S &= 1 + 2 + 3 + 4 + \cdots \\ S_A &= 1 - 2 + 3 - 4 + \cdots \end{aligned}$$

He then applied two formal operations:

Shift-and-self-add:

$$\begin{aligned} S_A &= 1 - 2 + 3 - 4 + \cdots \\ (S_A)_{\text{shift}} &= 0 + 1 - 2 + 3 - \cdots \\ \hline 2S_A &= 1 - 1 + 1 - 1 + \cdots = \frac{1}{2} \implies S_A = \frac{1}{4} \end{aligned}$$

Subtract. Term by term, $S - S_A = 0 + 4 + 0 + 8 + \cdots = 4(1 + 2 + 3 + \cdots) = 4S$, so:

$$\begin{aligned} S - S_A &= 4S \\ S - \frac{1}{4} &= 4S \implies 3S = -\frac{1}{4} \implies S = -\frac{1}{12} \end{aligned}$$

None of these manipulations are valid under classical convergence. And yet, the result agrees exactly with the analytic continuation of the Riemann zeta function:

$$\zeta(-1) = \sum_{n=1}^\infty n = -\frac{1}{12}$$
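Ramanujan's shift-and-self-add step can be replayed mechanically on truncated coefficient lists (a sketch; only the interior of a truncation is meaningful, since the last entries are edge-affected):

```python
# terms of S_A = 1 - 2 + 3 - 4 + ... as a truncated list
S_A = [(-1) ** (n - 1) * n for n in range(1, 11)]
shifted = [0] + S_A[:-1]                       # shift right by one position
twice = [a + b for a, b in zip(S_A, shifted)]  # term-by-term sum

print(twice)  # the Grandi series: 1, -1, 1, -1, ...
```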

This raises a natural question:

Informal Problem Statement and Motivation

Is there a natural extension of this "discrete algebra" method beyond $\zeta(-1)$?

Ramanujan's later work, via the Euler–Maclaurin formula, systematically recovers all values $\zeta(-k)$.

But the method is analytic. It relies on integrals, derivatives, and continuity, not the informal discrete manipulations above. It also requires Bernoulli numbers as input.

The derivation above does something very different. It seems to use only discrete operations in its computation.

It uses index shifts, term-by-term addition and subtraction, and leading-term rearrangement, acting directly on divergent series.

In this post, I show that this discrete algebra admits a consistent extension that, starting from the Grandi series anchor, reproduces all values $\zeta(-k), k \geq 0$.

Formalizing the Problem Statement

Does there exist a finite set of elementary discrete operations on polynomial divergent series such that, starting from a single anchor value (the Grandi series), they uniquely determine $\eta(-k)$ for all $k \geq 0$ — and the resulting values agree with the analytic continuation $\zeta(-k)$?

By "elementary" we mean operations of the same character as Ramanujan's original derivation:

  1. Term-by-term algebraic operations on formal series (addition, subtraction, scalar multiplication, index shifts).
  2. No limits, integrals, or analytic continuation in its computation.
  3. No pre-existing number-theoretic inputs (e.g., Bernoulli numbers may not appear as coefficients. They must arise as consequences of the algebra, not as inputs).

If such a system exists, then its permissible operations cannot have the same rules as ordinary finite arithmetic.

This is of course easy to see. Treating the constant series $1 + 1 + 1 + \cdots$ as if it behaved like a finite sum immediately leads to inconsistency:

$S = 1 + S \Rightarrow 0 = 1$

So, of course, we must be exceedingly careful to not run into such contradictions.

The goal is then to "find" this set of permissible operations for polynomial divergent series (if they exist), and show their equivalence to existing zeta regularization values.

To be clear about scope: this is not a new theory of divergent series. It is a discrete computation that reproduces known values, with a proof (via Abel summation) of why it works. The interest is that such a computation exists and terminates, not that it replaces existing analytic methods.

The Claim

Such a system exists and determines the values of the zeta function at negative integers, starting from a single chosen anchor series.

This construction proceeds by first determining the corresponding values of the alternating zeta function

$$\eta(-k) = \sum_{n=1}^\infty (-1)^{n-1} n^k,$$

from which the values of $\zeta(-k)$ follow via the standard relation between $\eta$ and $\zeta$.

Article Sketch

In this article, I will chronologically:

  1. Explain the rules for computing with divergent series. Again, we will make no use of limits or continuity here.
  2. Show the computation of a few zeta values, including the third, fifth, and seventh powers.
  3. Show a "compressed" version of the algorithm presented in (2), and present a script verifying powers up to 101 (don't worry about overflow).
  4. Give some intuition as to why this method works (it is all in the finite differences).
  5. Prove, for all negative integers, why this discrete computation method works.
  6. Explain how the Bernoulli numbers emerge naturally from the algebra.

The (Ramanujan-Extended) Algebra of Divergent Sums

The Rules

We introduce a set of rules governing evaluation of formal series. For a sequence $\{a_n\}_{n=1}^{\infty}$, with $a_n > 0$, define: $S = \sum_{n=1}^{\infty} a_n$ and $S_A = \sum_{n=1}^{\infty} (-1)^{n-1} a_n$.

  1. Anchor. $\sum_{n=1}^{\infty} (-1)^{n-1} = \frac{1}{2}$. This is not derivable from inside the algebra on its own. It is an axiom.
  2. Valid Operations on Polynomial Series. We distinguish two classes of formal series, for which different operations are permitted. Let $\{a_n\}$ be a positive sequence.

2a. Positive Series. For non-alternating series, the only permitted operation is:

  i. Reduction to alternating form. Given a positive series $S$ and its alternating counterpart $S_A$, we may form the relation $S - S_A$. Term by term, $S - S_A = 2(a_2 + a_4 + \cdots)$; for polynomial sequences $a_n = n^k$ this equals $2^{k+1}S$, so $(1 - 2^{k+1})S = S_A$, which gives the familiar relation: $$S=\frac{S_A}{1 - 2^{k+1}}$$ or, in conventional notation (assuming $S$ follows a zeta sequence): $$\zeta(-k) = \frac{\eta(-k)}{1 - 2^{k+1}}.$$

2b. Alternating Series. For alternating series $S_A$, the following operations are permitted:

  i. Shift invariance (shift-and-add). Shifting by one index and adding yields $2S_A = S_A + (S_A)_{\text{shift}}$:
$$\begin{aligned} S_A &= a_1 - a_2 + a_3 - a_4 + \cdots \\ (S_A)_{\text{shift}} &= 0 + a_1 - a_2 + a_3 - \cdots \\ \hline 2S_A &= a_1 + (a_1 - a_2) + (a_3 - a_2) + (a_3 - a_4) + \cdots \end{aligned}$$
  ii. Leading-term separation. If the tail admits a common factor $d > 1$, separate the leading term and treat the remaining tail as an independent series, regardless of whether the series diverges:
$$\gcd(a_2, a_3, \ldots) = d \implies \sum_{n=1}^{\infty} (-1)^{n-1} a_n = a_1 + d \sum_{n=2}^{\infty} (-1)^{n-1} \frac{a_n}{d}.$$

In other words, rearrangement of the leading term is only valid if the tail shares a common factor.

  iii. Linearity on polynomial expressions. Linearity holds termwise for polynomial decompositions: $\sum (-1)^{n-1}(p(n) + q(n)) = \sum(-1)^{n-1} p(n) + \sum (-1)^{n-1}q(n).$

Of the above, it is the conditional nature of the leading-term separation rule that allows the algebra to terminate.

A Quick Note on GCD Factoring: In the computations below, we do not prove along the way that the tail of the infinite series has a common factor greater than 1 (though the polynomial structure guarantees it). The full proof of why we may perform this operation is in Why Bernoulli Numbers always appear as the GCD of $\Delta^2[n^k]$.

Testing The Algebra

$\zeta(-1)$ has already been derived in the Ramanujan-style algebra shown in the introduction.

The First Even Power: $\sum_{n=1}^\infty n^2 = \zeta(-2)$

Fair warning: the values of zeta at the negative even integers are the trivial zeros of the function. The more interesting cases begin with the negative odd integers.

Step 1. Write the alternating series of squares:

$$\eta(-2) = 1 - 4 + 9 - 16 + 25 - 36 + \cdots$$

Step 2. Apply shift-and-add twice (Rule 2b.i):

$$\begin{aligned} \eta(-2) &: & 1,& \quad -4,& \quad 9,& \quad -16,& \quad 25,& \quad -36,& \quad \ldots \\ 2\eta(-2) &: & 1,& \quad -3,& \quad 5,& \quad -7,& \quad 9,& \quad -11,& \quad \ldots \\ 4\eta(-2) &: & 1,& \quad -2,& \quad 2,& \quad -2,& \quad 2,& \quad -2,& \quad \ldots \end{aligned}$$

Step 3. The tail $\{-2, 2, -2, 2, -2, \ldots\}$ has $\gcd = 2$. Factor it out (Rule 2b.ii):

$$4\eta(-2) = 1 + 2\left(-1 + 1 - 1 + 1 - 1 + \cdots\right)$$

Step 4. The remaining series is $-(1 - 1 + 1 - 1 + \cdots) = -\eta(0) = -1/2$. This is where the even chain's anchor enters. Solving:

$$4\eta(-2) = 1 + 2\cdot\left(-\frac{1}{2}\right) = 1 - 1 = 0 \implies \eta(-2) = 0$$

Step 5. Recover $\zeta(-2)$ via Rule 2a:

$$\zeta(-2) = \frac{\eta(-2)}{1 - 2^3} = \frac{0}{-7} = 0$$

This is the first trivial zero of the Riemann zeta function. All even negative integers give $\zeta(-2k) = 0$, and the cascade reproduces this: the even-power chain always terminates at $\eta(0) = 1/2$, and the arithmetic cancels to zero every time. In the standard analytic continuation, the trivial zeros follow from the $\sin(\pi s/2)$ factor in the functional equation, which vanishes at every negative even integer, combined with the fact that $\zeta$ and $\Gamma$ have no zeros or poles (respectively) at those points to compensate. Here we get the same result from pure series arithmetic.
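The five steps above can be replayed mechanically with exact rationals (a minimal sketch, not the author's script):

```python
from fractions import Fraction
from functools import reduce
from math import gcd

seq = [(-1) ** (n - 1) * n**2 for n in range(1, 8)]  # 1, -4, 9, -16, ...
for _ in range(2):                                   # Rule 2b.i, twice
    seq = [seq[0]] + [seq[i] + seq[i - 1] for i in range(1, len(seq))]
# seq is now 1, -2, 2, -2, ... and is worth 4*eta(-2)
d = reduce(gcd, (abs(t) for t in seq[1:]))           # d == 2 (Rule 2b.ii)
four_eta = 1 + d * Fraction(-1, 2)                   # remaining tail is -eta(0) = -1/2
zeta_m2 = (four_eta / 4) / (1 - 2**3)                # Rule 2a

print(four_eta / 4, zeta_m2)  # both 0
```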

Cubic Powers Test: $\sum_{n=1}^\infty n^3 = \zeta(-3)$

Step 1. Write the alternating series of cubes:

$$\eta(-3) = 1 - 8 + 27 - 64 + 125 - 216 + \cdots$$

Step 2. Apply shift-and-add twice (Rule 2b.i). Each application doubles the regularized value, so two rounds give us $4\eta(-3)$:

$$\begin{aligned} \eta(-3) &: & 1,& \quad -8,& \quad 27,& \quad -64,& \quad 125,& \quad -216,& \quad \ldots \\ 2\eta(-3) &: & 1,& \quad -7,& \quad 19,& \quad -37,& \quad 61,& \quad -91,& \quad \ldots \\ 4\eta(-3) &: & 1,& \quad -6,& \quad 12,& \quad -18,& \quad 24,& \quad -30,& \quad \ldots \end{aligned}$$

Step 3. The tail $\{-6, 12, -18, 24, -30, \ldots\}$ has $\gcd = 6 > 1$. By Rule 2b.ii (leading-term separation), we may extract the leading term and factor the tail:

$$4\eta(-3) = 1 + 6\left(-1 + 2 - 3 + 4 - 5 + \cdots\right)$$

Step 4. By Rule 2b.iii (linearity), the remaining series decomposes into known $\eta$ values. Here the polynomial is just $n^1$, so:

$$-\left(1 - 2 + 3 - 4 + \cdots\right) = -\eta(-1) = -\frac{1}{4}$$

Solving:

$$4\eta(-3) = 1 + 6 \cdot \left(-\frac{1}{4}\right) = 1 - \frac{3}{2} = -\frac{1}{2} \implies \eta(-3) = -\frac{1}{8}$$

Step 5. Recover $\zeta(-3)$ via Rule 2a (reduction to alternating form):

$$\zeta(-3) = \frac{\eta(-3)}{1 - 2^{4}} = \frac{-1/8}{-15} = \frac{1}{120}$$

The cascade feeds each result into the next computation. $\eta(-1)$ was used to compute $\eta(-3)$. The pattern continues.

Powers of Five Test: $\sum_{n=1}^\infty n^5 = \zeta(-5)$

Step 1. Write the alternating series of fifth powers:

$$\eta(-5) = 1 - 32 + 243 - 1024 + 3125 - 7776 + \cdots$$

Step 2. Apply shift-and-add twice (Rule 2b.i):

$$\begin{aligned} \eta(-5) &: & 1,& \quad -32,& \quad 243,& \quad -1024,& \quad 3125,& \quad -7776,& \quad \ldots \\ 2\eta(-5) &: & 1,& \quad -31,& \quad 211,& \quad -781,& \quad 2101,& \quad -4651,& \quad \ldots \\ 4\eta(-5) &: & 1,& \quad -30,& \quad 180,& \quad -570,& \quad 1320,& \quad -2550,& \quad \ldots \end{aligned}$$

Step 3. The tail $\{-30, 180, -570, 1320, -2550, \ldots\}$ has $\gcd = 30 > 1$. By Rule 2b.ii (leading-term separation):

$$4\eta(-5) = 1 + 30\left(-1 + 6 - 19 + 44 - 85 + \cdots\right)$$

Step 4. The remaining series is itself divergent. Apply shift-and-add twice more (Rule 2b.i) to reduce it:

$$\begin{aligned} S_{\text{rem}} &: & -1,& \quad 6,& \quad -19,& \quad 44,& \quad -85,& \quad 146,& \quad \ldots \\ 2S_{\text{rem}} &: & -1,& \quad 5,& \quad -13,& \quad 25,& \quad -41,& \quad 61,& \quad \ldots \\ 4S_{\text{rem}} &: & -1,& \quad 4,& \quad -8,& \quad 12,& \quad -16,& \quad 20,& \quad \ldots \end{aligned}$$

Step 5. The tail $\{4, -8, 12, -16, 20, \ldots\}$ has $\gcd = 4 > 1$. By Rule 2b.ii again:

$$4S_{\text{rem}} = -1 + 4\left(1 - 2 + 3 - 4 + 5 - \cdots\right) = -1 + 4\eta(-1) = -1 + 4 \cdot \frac{1}{4} = 0$$

So $S_{\text{rem}} = 0$, and therefore:

$$4\eta(-5) = 1 + 30 \cdot 0 = 1 \implies \eta(-5) = \frac{1}{4}$$

Step 6. Recover $\zeta(-5)$ via Rule 2a:

$$\zeta(-5) = \frac{\eta(-5)}{1 - 2^{6}} = \frac{1/4}{-63} = -\frac{1}{252}$$
The Tedious Powers of Seven: $\sum_{n=1}^\infty n^7 = \zeta(-7)$

Computationally, this is quite boring. I give the algebra for the seventh powers below to show that it works. However, there is a compressed version of what we have been doing so far. If you are not interested in the computation, feel free to skip ahead to The Shortcut: Polynomial Decomposition.

If you don't want to take my word for it, more power to you, here is the computation.

Steps 1–3 proceed as before. Write $\eta(-7)$, apply shift-and-add twice, factor the GCD:

$$\begin{aligned} \eta(-7) &: & 1,& \quad -128,& \quad 2187,& \quad -16384,& \quad 78125,& \quad -279936,& \quad 823543,& \quad \ldots \\ 2\eta(-7) &: & 1,& \quad -127,& \quad 2059,& \quad -14197,& \quad 61741,& \quad -201811,& \quad 543607,& \quad \ldots \\ 4\eta(-7) &: & 1,& \quad -126,& \quad 1932,& \quad -12138,& \quad 47544,& \quad -140070,& \quad 341796,& \quad \ldots \end{aligned}$$

The tail has $\gcd = 42$. But notice: the second term is $126 = 3 \times 42$. The GCD is no longer the second term. This is the first time that's happened.

$$4\eta(-7) = 1 + 42\left(-3 + 46 - 289 + 1132 - 3335 + 8138 - 17381 + \cdots\right)$$

Step 4. The remainder starts with $-3$, not $-1$. Try the same trick anyway: apply shift-and-add twice more (Rule 2b.i):

$$\begin{aligned} S_{\text{rem}} &: & -3,& \quad 46,& \quad -289,& \quad 1132,& \quad -3335,& \quad 8138,& \quad -17381,& \quad \ldots \\ 2S_{\text{rem}} &: & -3,& \quad 43,& \quad -243,& \quad 843,& \quad -2203,& \quad 4803,& \quad -9243,& \quad \ldots \\ 4S_{\text{rem}} &: & -3,& \quad 40,& \quad -200,& \quad 600,& \quad -1360,& \quad 2600,& \quad -4440,& \quad \ldots \end{aligned}$$

(One round gives $\gcd = 1$ — nothing to factor. But two rounds give $\gcd = 40$.)

Step 5. Factor out 40 from the tail of $4S_{\text{rem}}$ (Rule 2b.ii):

$$4S_{\text{rem}} = -3 + 40\left(1 - 5 + 15 - 34 + 65 - \cdots\right)$$

Step 6. Apply shift-and-add twice more to this inner remainder:

$$\begin{aligned} S_{\text{rem2}} &: & 1,& \quad -5,& \quad 15,& \quad -34,& \quad 65,& \quad \ldots \\ 2S_{\text{rem2}} &: & 1,& \quad -4,& \quad 10,& \quad -19,& \quad 31,& \quad \ldots \\ 4S_{\text{rem2}} &: & 1,& \quad -3,& \quad 6,& \quad -9,& \quad 12,& \quad \ldots \end{aligned}$$

The tail has $\gcd = 3$. Factor it out (Rule 2b.ii):

$$4S_{\text{rem2}} = 1 + 3\left(-1 + 2 - 3 + 4 - \cdots\right) = 1 + 3\cdot(-\eta(-1)) = 1 - \frac{3}{4} = \frac{1}{4}$$

Solve backwards:

$$S_{\text{rem2}} = \frac{1}{16}$$ $$4S_{\text{rem}} = -3 + 40 \cdot \frac{1}{16} = -3 + \frac{5}{2} = -\frac{1}{2} \implies S_{\text{rem}} = -\frac{1}{8}$$ $$4\eta(-7) = 1 + 42 \cdot \left(-\frac{1}{8}\right) = 1 - \frac{21}{4} = -\frac{17}{4} \implies \eta(-7) = -\frac{17}{16}$$

Step 7. Recover $\zeta(-7)$ via Rule 2a:

$$\zeta(-7) = \frac{-17/16}{1 - 2^8} = \frac{-17/16}{-255} = \frac{1}{240}$$

It works, but it took three nested layers of shift-and-add-then-factor to get there, compared to one layer for $\zeta(-3)$ and two for $\zeta(-5)$. Each higher power adds another nesting level. This gets tedious fast.

The Shortcut: Polynomial Decomposition

There is a more compact route. Go back to the remainder after the first GCD factoring:

$$S_{\text{rem}} = -3 + 46 - 289 + 1132 - 3335 + 8138 - 17381 + \cdots$$

It turns out that the terms $3, 46, 289, 1132, 3335, 8138, 17381, \ldots$ are values of a polynomial.

Before going into this, the prerequisite for recovering this polynomial is a method derived long before Ramanujan or Riemann: Newton's forward-difference reconstruction of polynomials.

Newton's Forward Differences

Given a sequence of values $a(1), a(2), a(3), \ldots$ that you suspect are outputs of a degree-$d$ polynomial, Newton's method recovers that polynomial using only subtraction. Build a triangle by taking successive differences:

$$\begin{aligned} \Delta^0 &: & a(1),& \quad a(2),& \quad a(3),& \quad a(4),& \quad \ldots \\ \Delta^1 &: & a(2)-a(1),& \quad a(3)-a(2),& \quad a(4)-a(3),& \quad \ldots & \\ \Delta^2 &: & \Delta^1_2-\Delta^1_1,& \quad \Delta^1_3-\Delta^1_2,& \quad \ldots & & \\ &\vdots & & & & & \end{aligned}$$

Each row is the differences of the row above. If $a(n)$ is a polynomial of degree $d$, then $\Delta^d$ is constant and $\Delta^{d+1} = 0$. The polynomial is reconstructed from the leading entries of each row:

$$a(n) = \sum_{j=0}^{d} \binom{n-1}{j}\,\Delta^j[1]$$

where $\Delta^j[1]$ is the first entry of the $j$-th difference row. This is exact: no curve fitting, no linear algebra. Just subtraction and binomial coefficients.

A simple example: the sequence $1, 4, 9, 16, 25$ (i.e. $n^2$). The difference triangle is:

$$\begin{aligned} \Delta^0 &: & 1,& \quad 4,& \quad 9,& \quad 16,& \quad 25 \\ \Delta^1 &: & 3,& \quad 5,& \quad 7,& \quad 9 & \\ \Delta^2 &: & 2,& \quad 2,& \quad 2 & & \end{aligned}$$

Leading entries: $1, 3, 2$. Reconstruct: $a(n) = 1\cdot\binom{n-1}{0} + 3\cdot\binom{n-1}{1} + 2\cdot\binom{n-1}{2} = 1 + 3(n-1) + (n-1)(n-2) = n^2$. Done.

For supplementary reading, see: https://mathworld.wolfram.com/NewtonsForwardDifferenceFormula.html.
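A sketch of the reconstruction, with the worked $n^2$ example as a check (function names are mine):

```python
from math import comb

def newton_leading(values):
    """Leading entries of each row of the difference triangle."""
    row, lead = list(values), []
    while row:
        lead.append(row[0])
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    return lead

def reconstruct(lead, n):
    """Evaluate a(n) = sum_j C(n-1, j) * Delta^j[1]."""
    return sum(c * comb(n - 1, j) for j, c in enumerate(lead))

lead = newton_leading([1, 4, 9, 16, 25])            # -> [1, 3, 2, 0, 0]
print([reconstruct(lead, n) for n in range(1, 8)])  # n^2 for n = 1..7
```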

Recall that each finite difference reduces polynomial degree by 1. We started with $n^7$ (degree 7) and applied shift-and-add twice (two finite differences), so we expect the tail to be a degree-5 polynomial. And it is: after dividing out the GCD of 42, the reduced sequence $a(1) = 3,\ a(2) = 46,\ a(3) = 289,\ a(4) = 1132,\ a(5) = 3335,\ a(6) = 8138,\ a(7) = 17381$ is the evaluation of a degree-5 polynomial. To find it, take successive finite differences:

$$\begin{aligned} \Delta^0 &: & 3,& \quad 46,& \quad 289,& \quad 1132,& \quad 3335,& \quad 8138,& \quad 17381 \\ \Delta^1 &: & 43,& \quad 243,& \quad 843,& \quad 2203,& \quad 4803,& \quad 9243 & \\ \Delta^2 &: & 200,& \quad 600,& \quad 1360,& \quad 2600,& \quad 4440 & & \\ \Delta^3 &: & 400,& \quad 760,& \quad 1240,& \quad 1840 & & & \\ \Delta^4 &: & 360,& \quad 480,& \quad 600 & & & & \\ \Delta^5 &: & 120,& \quad 120 & & & & & \\ \Delta^6 &: & 0 & & & & & & \end{aligned}$$

$\Delta^5$ is constant and $\Delta^6 = 0$, confirming degree 5.

The leading entries $3, 43, 200, 400, 360, 120$ are the Newton forward-difference coefficients. Reconstructing the polynomial via $a(n) = \sum_{j} \binom{n-1}{j} \Delta^j[1]$ and collecting powers of $n$ yields:

$$a(n) = n^5 + \frac{5}{3}n^3 + \frac{1}{3}n$$

Rewriting the sum (note that $S_{\text{rem}}$ begins with $-3$, so its signs are flipped relative to the $(-1)^{n-1}$ convention):

$$S_{\text{rem}} = -\sum_{n=1}^\infty (-1)^{n-1}\left[n^5 + \frac{5}{3} n^3 + \frac{1}{3}n\right]$$

This is a polynomial in odd powers only. By Rule 2b.iii (linearity), the alternating sum decomposes into previously computed $\eta$ values:

$$S_{\text{rem}} = -\sum_{n=1}^\infty (-1)^{n-1}n^5 - \frac{5}{3} \sum_{n=1}^\infty(-1)^{n-1} n^3 - \frac{1}{3}\sum_{n=1}^\infty (-1)^{n-1}n$$ $$= - \eta(-5) - \frac{5}{3}\eta(-3) -\frac{1}{3}\eta(-1) = -\frac{1}{4} - \frac{5}{3}\cdot\left(-\frac{1}{8}\right) - \frac{1}{3}\cdot\frac{1}{4} = -\frac{1}{4} + \frac{5}{24} - \frac{1}{12} = -\frac{1}{8}$$

This gives $S_{\text{rem}} = -1/8$ in one step, the same answer as the three nested layers above. The polynomial decomposition collapses the entire recursive chain into a single computation.
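As an exact-arithmetic sanity check (a sketch using Python's fractions), the recovered polynomial reproduces the reduced tail, and the linearity step reproduces $S_{\text{rem}} = -1/8$:

```python
from fractions import Fraction

# the degree-5 polynomial recovered above
a = lambda n: Fraction(n**5) + Fraction(5, 3) * n**3 + Fraction(1, 3) * n
tail = [3, 46, 289, 1132, 3335, 8138, 17381]
print(all(a(n) == t for n, t in enumerate(tail, start=1)))  # True

# Rule 2b.iii: combine previously computed eta values
eta = {1: Fraction(1, 4), 3: Fraction(-1, 8), 5: Fraction(1, 4)}
S_rem = -(eta[5] + Fraction(5, 3) * eta[3] + Fraction(1, 3) * eta[1])
print(S_rem)  # -1/8
```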

For $\zeta(-3)$ and $\zeta(-5)$, the polynomial decomposition was available but unnecessary (the remainder reduced to $\eta(-1)$ directly). Here, it becomes essential, or at least strongly preferred over three levels of nesting.

This method applies in an analogous manner for any $k$: ninth powers, twenty-fifth powers, hundred-and-first powers (though I would not recommend the last one by hand).

Scripting the Method

Indeed, I wrote a script implementing this method using Python's fractions module (exact rational arithmetic) and verified agreement with the Bernoulli-number formula for $\zeta(-k)$ through $k = 111$. No floating-point arithmetic is used at any point: every intermediate value is an exact fraction, so there is no accumulated rounding error, and since Python integers are arbitrary-precision, there is no overflow either.

For the interested/diligent/suspicious reader, the script is quite simple and public on my Github: cascade.py.
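For readers who prefer to see the shape of such a script without leaving the page, here is a minimal, independent sketch of the compressed cascade (not the author's cascade.py; the function names are mine):

```python
from fractions import Fraction
from functools import reduce
from math import factorial, gcd

def polymul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def newton_poly(values):
    """Monomial coefficients (in n) of the polynomial through a(1), a(2), ..."""
    coeffs = [Fraction(0)] * len(values)
    row = [Fraction(v) for v in values]
    basis = [Fraction(1)]  # (n-1)(n-2)...(n-j); divide by j! when adding
    for j in range(len(values)):
        for deg, c in enumerate(basis):
            coeffs[deg] += row[0] * c / factorial(j)
        if len(row) == 1:
            break
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        basis = polymul(basis, [Fraction(-(j + 1)), Fraction(1)])
    return coeffs

def eta_negative(kmax):
    """{k: eta(-k)} computed by the cascade, from the two anchors."""
    V = {0: Fraction(1, 2), 1: Fraction(1, 4)}
    for k in range(2, kmax + 1):
        seq = [(-1) ** (n - 1) * n**k for n in range(1, k + 4)]
        for _ in range(2):  # Rule 2b.i twice: seq is now worth 4*eta(-k)
            seq = [seq[0]] + [seq[i] + seq[i - 1] for i in range(1, len(seq))]
        d = reduce(gcd, (abs(t) for t in seq[1:]))            # Rule 2b.ii
        coeffs = newton_poly([abs(t) // d for t in seq[1:]])  # reduced tail
        S = sum(c * V[j] for j, c in enumerate(coeffs) if c)  # Rule 2b.iii
        V[k] = (1 - d * S) / 4   # remainder series has value -S
    return V

def zeta_negative(k, V):
    return V[k] / (1 - 2 ** (k + 1))  # Rule 2a

V = eta_negative(7)
print({k: zeta_negative(k, V) for k in (1, 3, 5, 7)})
```

Running this reproduces the hand computations above: $\zeta(-1) = -1/12$, $\zeta(-3) = 1/120$, $\zeta(-5) = -1/252$, $\zeta(-7) = 1/240$.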

A Proof: Why This Matches Zeta

The algebra's output is not just consistent with $\zeta(-k)$ up to $k = 111$; it must produce the correct values for all $k$. In this proof, I show that the discrete algebra developed above agrees with the analytic continuation of $\zeta(s)$ to negative integers. The argument does use continuous machinery: I show that every operation in the algebra preserves values under a known regularization method, Abel summation. As Abel summation agrees with the analytic continuation of $\eta(s)$ at negative integers, this establishes that the algebra computes the correct $\zeta(-k)$ for all $k$.

Prerequisite: Abel Summation

Given a series $\sum_{n=1}^{\infty} a_n$ (possibly divergent), the Abel sum introduces a regulator $x$:

$$A(x) = \sum_{n=1}^{\infty} a_n\, x^n, \qquad |x| < 1$$

If the power series converges for $|x| < 1$ and $\lim_{x \to 1^-} A(x)$ exists, that limit is the Abel sum of the series.

As previously discussed, Abel summation also allows regularization of divergent series and agrees with zeta regularization. For $k = 0$, the Abel generating function is the alternating geometric series:

$$f_0(x) = \sum_{n=1}^{\infty} (-1)^{n-1} x^n = x - x^2 + x^3 - \cdots = \frac{x}{1+x}$$

So the Abel sum of the Grandi series is $\lim_{x \to 1^-} f_0(x) = 1/(1+1) = 1/2$, recovering our anchor.

For higher $k$, each power of $n$ in the sum comes from differentiating and multiplying by $x$. For instance:

$$f_1(x) = \sum_{n=1}^{\infty} (-1)^{n-1}\, n\, x^n = x\,\frac{d}{dx}\!\left[\frac{x}{1+x}\right] = \frac{x}{(1+x)^2}$$

Each application raises the power of $(1+x)$ in the denominator by one. In general:

$$f_k(x) = \sum_{n=1}^{\infty} (-1)^{n-1}\, n^k\, x^n = \frac{P_k(x)}{(1+x)^{k+1}}$$

where $P_k(x)$ is a polynomial of degree $\leq k$. The only pole is at $x = -1$, so $f_k$ is continuous at $x = 1$. This means the Abel sum is just direct evaluation: no limit is needed, we simply plug in $x = 1$:

$$V_k \;=\; f_k(1) \;=\; \frac{P_k(1)}{2^{k+1}}$$

a well-defined rational number.
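A quick numerical sketch of this picture for $k = 1$ (values approximate; the regulator $x$ is pushed near 1):

```python
x = 0.999
# partial sum of the regulated series vs. the closed form x/(1+x)^2
partial = sum((-1) ** (n - 1) * n * x**n for n in range(1, 20_000))
closed = x / (1 + x) ** 2

print(partial, closed)  # both approach 1/4 = eta(-1) as x -> 1
```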

The proof below shows that every operation in the cascade (shift-and-add, leading-term separation, linearity) preserves $V_k$ under this framework.

Full Proof

Setup. As established in the Abel summation prerequisite above, for each integer $k \geq 0$, the generating function $f_k(x) = \sum (-1)^{n-1} n^k x^n = P_k(x)/(1+x)^{k+1}$ is rational with its only pole at $x = -1$, so $V_k = f_k(1) = P_k(1)/2^{k+1}$ is a well-defined rational number.

Claim 1: The anchor holds. For $k = 0$:

$$f_0(x) = \sum_{n=1}^{\infty} (-1)^{n-1} x^n = \frac{x}{1+x}$$

So $V_0 = f_0(1) = 1/2$, which matches the Grandi anchor. $\square$

Claim 2: Shift-and-add preserves values.

Write out the original and shifted series with the regulator $x^n$:

$$\begin{aligned} f_k(x) &= a_1\,x - a_2\,x^2 + a_3\,x^3 - \cdots \\ (f_k)_{\text{shift}}(x) &= 0\cdot x + a_1\,x^2 - a_2\,x^3 + \cdots \end{aligned}$$

where $a_n = n^k$. The shifted series is obtained by multiplying each $x^n$ term by an additional factor of $x$ (shifting all positions right by one), so $(f_k)_{\text{shift}}(x) = x \cdot f_k(x)$.

Adding:

$$f_k(x) + x\,f_k(x) = (1+x)\,f_k(x)$$

At $x = 1$: $(1+1)\,f_k(1) = 2\,V_k$.

Therefore, the shift-and-add operation, which the algebra claims produces $2\eta(-k)$, produces $2V_k$ when evaluated via generating functions. Applying shift-and-add twice gives $4V_k$. $\square$

Claim 3: Leading-term separation preserves values.

After two rounds of shift-and-add, we have a series whose generating function $(1+x)^2 f_k(x)$ evaluates to $4V_k$ at $x = 1$. The algebra extracts the leading term (a finite operation) and factors a constant $d$ from the tail. In terms of the generating function, if $g(x) = \sum_{n=1}^{\infty} c_n x^n$ (with the alternating signs absorbed into the $c_n$) and $d \mid c_n$ for $n \geq 2$, then:

$$g(x) = c_1 x + d\sum_{n=2}^{\infty} \frac{c_n}{d}\, x^n$$

Both sides are equal for all $|x| < 1$, hence also at $x = 1$ by continuity of rational functions. $\square$

Claim 4: Linearity preserves values.

If $p(n) = q(n) + r(n)$ are polynomials, then for all $|x| < 1$:

$$\sum (-1)^{n-1} p(n)\,x^n = \sum (-1)^{n-1} q(n)\,x^n + \sum (-1)^{n-1} r(n)\,x^n$$

Each side is a rational function continuous at $x = 1$, so evaluating at $x = 1$:

$$V_p = V_q + V_r \qquad \square$$

Claim 5: The algebra terminates in finitely many steps.

Each application of shift-and-add computes one forward difference, reducing the polynomial degree of the sequence by 1. Two rounds reduce degree $k$ to $k - 2$. The resulting polynomial decomposes into matching-parity powers (odd $k \to$ odd powers, even $k \to$ even powers), each referencing a strictly lower $V_j$ with $j < k$. The recursion bottoms out at $V_0 = 1/2$ (even chain) or $V_1 = 1/4$ (odd chain, derived from $V_0$ via the Ramanujan derivation in the introduction). $\square$

Claim 6: Uniqueness.

At each step $k$, the algebra produces a single linear equation:

$$4V_k = 1 + d \cdot \left(\sum_{j} c_j V_j\right)$$

where $d$ is the GCD, $c_j$ are the polynomial decomposition coefficients, and all $V_j$ for $j < k$ have been previously determined. This is one equation in one unknown ($V_k$), with a unique solution. There are no free parameters. $\square$

Conclusion. The algebra starts from $V_0 = 1/2$, applies operations that preserve the generating function values $V_k = f_k(1)$, and terminates in finitely many steps with a unique result at each level. Therefore, for all $k \geq 0$, the algebra computes exactly $V_k = f_k(1)$.

To connect to $\zeta$: since $f_k(1) = \eta(-k)$ (both are defined as the value assigned to the alternating series $\sum (-1)^{n-1} n^k$ by the Abel/generating function evaluation), we recover $\zeta(-k) = V_k / (1 - 2^{k+1})$, which equals $-B_{k+1}/(k+1)$. $\square$

Why Bernoulli Numbers always appear as the GCD of $\Delta^2[n^k]$ and How the Mechanism Inherently Uses Bernoulli Structure

The GCDs that appear at each step are not arbitrary:

| $k$ | GCD | 2nd term | Normalizes? |
| --- | --- | --- | --- |
| 3 | 6 | 6 | ✓ |
| 5 | 30 | 30 | ✓ |
| 7 | 42 | 126 | ✗ (ratio 3) |
| 9 | 30 | 510 | ✗ (ratio 17) |
| 11 | 66 | 2046 | ✗ (ratio 31) |
| 13 | 2730 | 8190 | ✗ (ratio 3) |

The GCD column is exactly the sequence of Bernoulli number denominators: $\text{denom}(B_2) = 6$, $\text{denom}(B_4) = 30$, $\text{denom}(B_6) = 42$, $\text{denom}(B_8) = 30$, $\text{denom}(B_{10}) = 66$, $\text{denom}(B_{12}) = 2730$.
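Both columns can be recomputed from scratch (a sketch; helper names are mine, and the denominator uses the von Staudt–Clausen characterization $\prod_{(p-1) \mid m} p$ stated in the proof prerequisites):

```python
from functools import reduce
from math import gcd

def is_prime(p):
    return p > 1 and all(p % q for q in range(2, int(p**0.5) + 1))

def vsc_denominator(m):
    """denom(B_m) for even m, via the von Staudt-Clausen product."""
    out = 1
    for p in range(2, m + 2):
        if is_prime(p) and m % (p - 1) == 0:
            out *= p
    return out

def second_diff_gcd(k, terms=60):
    """gcd of the second forward differences (m+2)^k - 2(m+1)^k + m^k."""
    vals = [(m + 2) ** k - 2 * (m + 1) ** k + m**k for m in range(terms)]
    return reduce(gcd, vals)

for k in (3, 5, 7, 9, 11, 13):
    print(k, second_diff_gcd(k), vsc_denominator(k - 1))
```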

A Definitive Payoff of the Algebra

The algebra gives the prime structure of Bernoulli numbers through pure arithmetic. The denominators emerge from GCD computations on sequences of power differences.

More precisely, the GCD at step $k$ (for odd $k$) equals $\text{denom}(B_{k-1})$: the denominator of the $(k-1)$-th Bernoulli number. The mapping is $k = 3 \to B_2$, $k = 5 \to B_4$, $k = 7 \to B_6$, and so on.

At $\zeta(-11)$, the numerator 691 appears — the famous irregular prime associated with $B_{12} = -691/2730$. It falls out of the cascade naturally, as the accumulated arithmetic of eleven levels of bootstrapping.

Proof: The GCD of the Second-Order Finite Difference of $n^k$, for odd $k$, is a Bernoulli Denominator

Prerequisites

Von Staudt-Clausen Theorem

The denominator of every even-indexed Bernoulli number $B_{2n}$ is determined entirely by a divisibility condition on primes:

$$\text{denom}(B_{2n}) = \prod_{\substack{p \text{ prime} \\ (p-1) \mid 2n}} p$$

That is, a prime $p$ appears in the denominator of $B_{2n}$ if and only if $(p-1)$ divides $2n$. For example, $\text{denom}(B_2) = 2 \cdot 3 = 6$ because $p - 1 \in \{1, 2\}$ divides $2$, giving primes $2$ and $3$. See Wikipedia for the full statement and proof.

Fermat's Little Theorem

If $p$ is prime and $p \nmid a$, then:

$$a^{p-1} \equiv 1 \pmod{p}$$

Equivalently, $a^p \equiv a \pmod{p}$ for all integers $a$ (including multiples of $p$). See Wikipedia.

Primitive roots and order

The order of an element $g$ modulo $p$ is the smallest positive integer $d$ such that $g^d \equiv 1 \pmod{p}$. A key property: if $g^m \equiv 1 \pmod{p}$, then $\text{ord}(g)$ divides $m$. A primitive root modulo a prime $p$ is an element $g$ whose order is $p-1$ (the maximum possible). Every prime has at least one primitive root. See Wikipedia.

Claim

For odd $k \geq 3$:

$$\gcd\left(\Delta^2[m^k] : m \geq 0\right) = \text{denom}(B_{k-1})$$

where $\Delta^2[m^k] = (m+2)^k - 2(m+1)^k + m^k$ is the second forward difference of the power function.

Proof

A prime $p$ divides this GCD if and only if $p \mid \Delta^2[m^k]$ for every integer $m$. We show this happens exactly when $(p-1) \mid (k-1)$. By the von Staudt–Clausen theorem, these are precisely the primes in $\text{denom}(B_{k-1})$.

Forward direction: $\left((p-1) \mid (k-1) \implies p \mid \Delta^2[m^k] \text{ for all } m\right)$:

By Fermat's little theorem:

$n^{p-1} \equiv 1 \pmod{p}$ for $n \not\equiv 0$.

Since $(p-1) \mid (k-1)$, this gives $n^{k-1} \equiv 1$, so $n^k \equiv n \pmod{p}$.

This also holds when $n \equiv 0$ (both sides vanish). Therefore, for all $n$:

$$\Delta^2[n^k] \equiv (n+2) - 2(n+1) + n = 0 \pmod{p} \qquad \square$$

Reverse direction $\left((p-1) \nmid (k-1) \implies p \nmid \gcd\right)$:

We prove the contrapositive: if $p \mid \Delta^2[n^k]$ for all $n$, then $(p-1) \mid (k-1)$.

Suppose $\Delta^2[n^k] \equiv 0 \pmod{p}$ for all $n$. Define $f(n) = n^k \bmod p$. The assumption says:

$$f(n+2) - 2f(n+1) + f(n) \equiv 0 \pmod{p}$$

Rewrite this as:

$$f(n+2) - f(n+1) \equiv f(n+1) - f(n) \pmod{p}$$

The difference between consecutive terms is constant modulo $p$. Let $d = f(1) - f(0)$.

Then $f(n) \equiv f(0) + nd \pmod{p}$ for all $n$, i.e. $f$ is affine modulo $p$: $f(n) \equiv b + dn$ with $b = f(0)$.

Now pin down $b$ and $d$. Since $f(n) = n^k \bmod p$, evaluating at $n = 0$ and $n = 1$ gives $b = f(0) = 0^k = 0$ and $d = f(1) - f(0) = 1^k - 0 = 1$.

Therefore $n^k \equiv n \pmod{p}$ for all $n$.

Now let $g$ be a primitive root modulo $p$ (see prerequisites).

Since $g \not\equiv 0$, we have $g^k \equiv g$, so $g^{k-1} \equiv 1 \pmod{p}$. The order of $g$ is $p-1$, and since $g^{k-1} \equiv 1$, the order must divide $k-1$. Therefore $(p-1) \mid (k-1)$. $\square$

Meaning

The primes dividing the GCD are exactly those with $(p-1) \mid (k-1)$. By the von Staudt–Clausen theorem, $\text{denom}(B_{k-1}) = \prod_{(p-1) \mid (k-1)} p$. Therefore $\gcd = \text{denom}(B_{k-1})$. $\square$

This algebra re-discovers von Staudt–Clausen's theorem through pure arithmetic on divergent series, without invoking Bernoulli numbers or their definition.

Bonus: How Euler Found These Values (A Hundred Years Before Riemann)

The Bernoulli numbers just fell out of our GCD computations. Euler found the same numbers, and the same $\zeta(-k)$ values, over a century before Riemann's analytic continuation. His route was completely different.

Start with the geometric series $\sum_{n=1}^{\infty} x^n = \frac{x}{1-x}$, with first term $x$ and ratio $x$, converging for $|x| < 1$. Applying the operator $x\,\frac{d}{dx}$ repeatedly builds $\sum n^k x^n$ as a rational function for any $k$. These all have poles at $x = 1$, which is exactly the point corresponding to $\sum n^k$.

Euler's trick was to substitute $x = e^{-t}$ and study what happens as $t \to 0^+$ (which sends $x \to 1^-$). The base case becomes:

$$\sum_{n=1}^{\infty} e^{-nt} = \frac{e^{-t}}{1 - e^{-t}} = \frac{1}{e^t - 1}$$

Near $t = 0$, this has a pole. To expand it, write $\frac{1}{e^t - 1} = \frac{1}{t} \cdot \frac{t}{e^t - 1}$. The second factor is analytic at $t = 0$ (the pole cancels), so it has a Taylor series. That Taylor series is where Bernoulli numbers come from:

$$\frac{t}{e^t - 1} = \sum_{m=0}^{\infty} \frac{B_m}{m!}\, t^m = 1 - \frac{t}{2} + \frac{t^2}{12} - \frac{t^4}{720} + \cdots$$

So $\frac{1}{e^t - 1} = \frac{1}{t} - \frac{1}{2} + \frac{t}{12} - \frac{t^3}{720} + \cdots$. To get $\sum n^k e^{-nt}$, apply $(-1)^k \frac{d^k}{dt^k}$ to this expansion (each derivative in $t$ pulls down a factor of $-n$). Differentiation shifts the Laurent series, and the constant term works out to $-B_{k+1}/(k+1)$.

The series still diverges as $t \to 0$ (the leading $1/t^{k+1}$ blows up). Euler, a hundred years before anyone had a framework to justify it, extracted the finite part and discarded the rest:

$$\zeta(-k) = -\frac{B_{k+1}}{k+1}$$

The same formula we now derive from analytic continuation.
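Euler's formula is easy to evaluate with exact rational arithmetic. A Python sketch, using the standard Bernoulli recurrence (with the $B_1 = -1/2$ convention; helper names are mine):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """B_0..B_n, exact, via sum_{j=0}^{m} C(m+1, j) B_j = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

def zeta_neg(k):
    """Euler's zeta(-k) = -B_{k+1} / (k + 1), as an exact Fraction."""
    return -bernoulli(k + 1)[k + 1] / (k + 1)

assert zeta_neg(1) == Fraction(-1, 12)       # the infamous 1 + 2 + 3 + ...
assert zeta_neg(3) == Fraction(1, 120)
assert zeta_neg(11) == Fraction(691, 32760)  # 691 surfaces, as in the GCD cascade
```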

Euler's method requires the substitution $x = e^{-t}$, the generating function $t/(e^t - 1)$, and Laurent expansion near a singularity. The Bernoulli numbers enter as Taylor coefficients of $t/(e^t - 1)$. They are built into the machinery. In the algebra above, they are not. They appear as GCDs of integer sequences, forced by Fermat's little theorem and the structure of polynomial finite differences. Same numbers, arrived at from opposite directions.

Conclusion

Under the definition of "elementary" given in this piece, there does exist an elementary discrete algebra by which polynomial divergent series may be regularized so that the regularized values agree with conventional zeta regularization methods.

Intuitively, the algebra works largely because it is not universal over all divergent series. The operations of this algebra are only valid on alternating polynomial series (other than the rule which gives rise to the known zeta/eta relation). This, along with its conditional stability, allows the algebra to avoid contradictions which normally arise from other attempts at such an elementary algebra.

The algebra does not invoke Abel summation or generating functions at any point in its computation. However, as shown in the proof, the algebra cleanly maps onto known regularization techniques; I find it interesting nonetheless, as the topic is relatively taboo due to the contradictions one can generate if not careful. In my own circle, I do not know a mathematically inclined person who would claim an elementary algebra exists for such evaluation.

I also find it independently interesting that Bernoulli denominators appear as the GCD of second-order finite differences of power sequences.

Either way, it was a fun journey to discover this elementary discrete algebra.

I hope you enjoyed.

Stay curious.

Cynic Callouts

"The anchor is a regularization choice, so this isn't self-contained." Yes. The anchor $\eta(0) = 1/2$ is an axiom, stated explicitly as such. The claim is not that the system derives this value from nothing — it's that from this single axiom, all $\zeta(-k)$ are uniquely determined by finite discrete operations.

"Shift-and-add doesn't justify evaluation of divergent series." Correct. The operations do not self-justify. The proof section shows they preserve Abel sums, which is the justification. The computation is discrete; the proof of correctness is analytic. This distinction is the point, not a gap.

"The proof reintroduces Abel summation. Isn't that circular?" The claim is not "this system avoids analysis entirely." It is "the computation uses no limits; the proof that the computation is correct does." The proof shows the operations preserve a known invariant. That is verification, not circularity.

"GCD isn't a well-defined operation for an infinite tail unless you first prove divisibility for all entries! In the computation you didn't do this!"

This is a very fair callout. The blog is partially written from my own experience finding this algebra, during which I was experimentally trying different rules I thought could extend Ramanujan-style algebra. Formally, though, the objection stands. The proof does justify why the tail of the second-order difference series always has a Bernoulli denominator as its GCD, but it remains true that the computation section does not rigorously justify this (until much later). I found the blog read better this way.

"This isn't 'algebra' in the formal sense! An algebra is over a vector space with a bilinear product operation!"

Yup. I use "algebra" in an informal sense here. See my "a note on terminology" at the top.

A General Note

This algebra was discovered from the problem statement and motivation. Yes, it shadows existing methods. It was nonetheless an intriguing process to discover it and to learn more about the literature. Besides this anticipation of rebuttals, it was a rewarding journey. The post mostly documents that journey and attempts to formalize the intuition.

Disclaimer

Generative AI (Claude) was used as an editorial aid for LaTeX formatting, prose refinement, and proof formalization.