mirror of https://github.com/TecharoHQ/anubis.git, synced 2026-05-17 12:03:09 +00:00
fix(honeypot/naive): apply robot9001 style delays
Currently the honeypotting feature has no limits or delays anywhere and uses that to feed an internal greylist of IP networks. This can cause issues such as in #1613, where Claude's crawler seemed to pick up on it and egress data at over one megabit per second until the administrator noticed and blocked the address range.

This takes a different approach, inspired by how the classic #xkcd IRC bot Robot9000 works. The first time a given IPv4 /24 or IPv6 /48 visits a honeypot page, Anubis sleeps for 1 millisecond. The second time it sleeps for two milliseconds, the third for four milliseconds, and so on. The goal is to make the scraping inherently self-limiting, so that the scrapers go off into their own corner where they won't really hurt anyone. Let's see if this works out according to keikaku.

Ref: https://github.com/TecharoHQ/anubis/issues/1613

Signed-off-by: Xe Iaso <me@xeiaso.net>
@@ -5,6 +5,7 @@ import (
 	_ "embed"
 	"fmt"
 	"log/slog"
+	"math"
 	"math/rand/v2"
 	"net/http"
 	"net/netip"
@@ -168,6 +169,9 @@ func (i *Impl) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 		}
 	}
 
+	millisecondAmount := math.Pow(float64(networkCount), 2)
+	time.Sleep(time.Duration(millisecondAmount) * time.Millisecond)
+
 	spins := i.makeSpins()
 	affirmations := i.makeAffirmations()
 	title := i.makeTitle()
||||