Currently the honeypotting feature has no limits or delays anywhere: every hit feeds an internal greylist of IP networks. This can cause issues such as in #1613, where Claude's crawler seemed to pick up on it and egress data at over one megabit per second until the administrator noticed and blocked the address range.

This takes a different approach, inspired by how the classic #xkcd IRC bot Robot9000 works. The first time a given IPv4 /24 or IPv6 /48 visits a honeypot page, Anubis sleeps for 1 millisecond. The second time it sleeps for 2 milliseconds, the third time for 4 milliseconds, and so on. The goal is to make the scraping inherently self-limiting, so that scrapers go off into their own corner where they won't really hurt anyone. Let's see if this works out according to keikaku.

Ref: https://github.com/TecharoHQ/anubis/issues/1613
Signed-off-by: Xe Iaso <me@xeiaso.net>
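For illustration, here is a minimal Go sketch of the delay scheme described above, assuming an in-memory counter keyed by network prefix. The package, type, and method names are hypothetical and not Anubis's actual API, and the cap on the exponent is an assumption added so the sleep cannot grow without bound:

package honeypot

import (
	"net/netip"
	"sync"
	"time"
)

// delayer counts honeypot visits per network prefix and sleeps for an
// exponentially growing duration: 1ms on the first visit, 2ms on the
// second, 4ms on the third, and so on. (Hypothetical sketch, not the
// actual Anubis implementation.)
type delayer struct {
	mu   sync.Mutex
	hits map[netip.Prefix]uint
}

func newDelayer() *delayer {
	return &delayer{hits: make(map[netip.Prefix]uint)}
}

// Delay keys visitors by IPv4 /24 or IPv6 /48, as described in the
// commit message, then sleeps for 1ms doubled per prior visit.
func (d *delayer) Delay(addr netip.Addr) {
	addr = addr.Unmap() // treat IPv4-mapped IPv6 addresses as IPv4
	bits := 48
	if addr.Is4() {
		bits = 24
	}
	prefix, err := addr.Prefix(bits)
	if err != nil {
		return // invalid address; skip the delay
	}

	d.mu.Lock()
	n := d.hits[prefix]
	if n < 20 { // assumed cap (~17 min max sleep) to bound the growth
		d.hits[prefix] = n + 1
	}
	d.mu.Unlock()

	time.Sleep(time.Millisecond << n) // 1ms, 2ms, 4ms, ...
}

Because the sleep doubles with every hit, a crawler that hammers honeypot links throttles itself: after twenty hits from the same /24 it is waiting on the order of minutes per request, while visitors who never touch a honeypot page are unaffected.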
Website
This website is built using Docusaurus, a modern static website generator.
Installation
$ yarn
Local Development
$ yarn start
This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
Build
$ yarn build
This command generates static content into the build directory, which can be served by any static content hosting service.
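If the project uses the standard scripts from the classic Docusaurus template (check package.json to confirm), the production build can also be previewed locally:

$ yarn serve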
Deployment
Using SSH:
$ USE_SSH=true yarn deploy
Not using SSH:
$ GIT_USER=<Your GitHub username> yarn deploy
If you are using GitHub Pages for hosting, this command is a convenient way to build the website and push it to the gh-pages branch.