fix(honeypot/naive): apply robot9001 style delays (#1632)

Currently the honeypot feature has no limits or delays anywhere; hits on
it feed an internal greylist of IP networks. This can cause issues such
as in #1613, where Claude's crawler seemed to pick up on it and egressed
data at over one megabit per second until the administrator noticed and
blocked the address range.

This takes a different approach, inspired by how the classic #xkcd IRC
bot Robot9000 works. The first time a given IPv4 /24 or IPv6 /48 visits
a honeypot page, Anubis sleeps for 1 millisecond. The second time it
sleeps for 2 milliseconds, the third for 4 milliseconds, and so on. The
goal is to make the scraping inherently self-limiting, so that scrapers
go off into their own corner where they won't really hurt anyone.

Let's see if this works out according to keikaku. (Translator's note:
keikaku means plan.)

Ref: https://github.com/TecharoHQ/anubis/issues/1613

Signed-off-by: Xe Iaso <me@xeiaso.net>
Author: Xe Iaso
Date: 2026-05-15 17:56:37 -04:00
Committed by: GitHub
Parent: 276b537776
Commit: b57508afcd

2 changed files with 5 additions and 0 deletions
@@ -5,6 +5,7 @@ import (
_ "embed"
"fmt"
"log/slog"
"math"
"math/rand/v2"
"net/http"
"net/netip"
@@ -168,6 +169,9 @@ func (i *Impl) ServeHTTP(w http.ResponseWriter, r *http.Request) {
}
}
millisecondAmount := math.Pow(2, float64(networkCount)) // double the delay on each repeat visit
time.Sleep(time.Duration(millisecondAmount) * time.Millisecond)
spins := i.makeSpins()
affirmations := i.makeAffirmations()
title := i.makeTitle()