mirror of
https://github.com/TecharoHQ/anubis.git
synced 2026-05-17 12:03:09 +00:00
fix(honeypot/naive): apply robot9001 style delays
Currently the honeypotting feature has no limits or delays anywhere and uses honeypot hits to feed an internal greylist of IP networks. This can cause issues such as in #1613, where Claude's crawler seemed to pick up on it and egressed data at over one megabit per second until the administrator noticed and blocked the address range.

This takes a different approach, inspired by how the classic #xkcd IRC bot Robot9000 works. The first time a given IPv4 /24 or IPv6 /48 visits a honeypot page, Anubis sleeps for 1 millisecond. The second time it sleeps for 2 milliseconds, the third for 4 milliseconds, and so on. The goal is to make the scraping inherently self-limiting so that the scrapers go off in their own corner where they won't really hurt anyone.

Let's see if this works out according to keikaku.

Ref: https://github.com/TecharoHQ/anubis/issues/1613

Signed-off-by: Xe Iaso <me@xeiaso.net>
@@ -14,6 +14,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
<!-- This changes the project to: -->
- Patch [GHSA-6wcg-mqvh-fcvg](https://github.com/TecharoHQ/anubis/security/advisories/GHSA-6wcg-mqvh-fcvg) by containing subrequest logic to Anubis instances in subrequest mode.
- Implement robot9001 style delays on the honeypot feature so that the first hit takes 1 millisecond, the second takes 2, etc.
- Move metrics server configuration to [the policy file](./admin/policies.mdx#metrics-server).
- Expose [pprof endpoints](https://pkg.go.dev/net/http/pprof) on the metrics listener to enable profiling Anubis in production.
- fix: prevent nil pointer panic in challenge validation when threshold rules match during PassChallenge (#1463)